Software Development Blogs: Programming, Software Testing, Agile, Project Management

Feed aggregator

Announcing the GTAC 2015 Agenda

Google Testing Blog - Sun, 08/21/2016 - 17:31
by Anthony Vallone on behalf of the GTAC Committee 

We have completed the selection and confirmation of all speakers and attendees for GTAC 2015. You can find the detailed agenda at: developers.google.com/gtac/2015/schedule.

Thank you to all who submitted proposals!

There is a lot of interest in GTAC once again this year with about 1400 applicants and about 200 of those for speaking. Unfortunately, our venue only seats 250. We will livestream the event as usual, so fret not if you were not selected to attend. Information about the livestream and other details will be posted on the GTAC site soon and announced here.

Categories: Testing & QA

GTAC 2015 is Next Week!

Google Testing Blog - Sun, 08/21/2016 - 17:31
by Anthony Vallone on behalf of the GTAC Committee

The ninth GTAC (Google Test Automation Conference) commences on Tuesday, November 10th, at the Google Cambridge office. You can find the latest details on the conference site, including schedule, speaker profiles, and travel tips.

If you have not been invited to attend in person, you can watch the event live. And if you miss the livestream, we will post slides and videos later.

We have an outstanding speaker lineup this year, and we look forward to seeing you all there or online!

Categories: Testing & QA

The Inquiry Method for Test Planning

Google Testing Blog - Sun, 08/21/2016 - 17:30
by Anthony Vallone
updated: July 2016



Creating a test plan is often a complex undertaking. An ideal test plan is accomplished by applying basic principles of cost-benefit analysis and risk analysis, optimally balancing these software development factors:
  • Implementation cost: The time and complexity of implementing testable features and automated tests for specific scenarios will vary, and this affects short-term development cost.
  • Maintenance cost: Some tests or test plans may vary from easy to difficult to maintain, and this affects long-term development cost. When manual testing is chosen, this also adds to long-term cost.
  • Monetary cost: Some test approaches may require billed resources.
  • Benefit: Tests are capable of preventing issues and aiding productivity by varying degrees. Also, the earlier they can catch problems in the development life-cycle, the greater the benefit.
  • Risk: The probability of failure scenarios may vary from rare to likely, and their consequences may vary from minor nuisance to catastrophic.
Effectively balancing these factors in a plan depends heavily on project criticality, implementation details, resources available, and team opinions. Many projects can achieve outstanding coverage with high-benefit, low-cost unit tests, but they may need to weigh options for larger tests and complex corner cases. Mission critical projects must minimize risk as much as possible, so they will accept higher costs and invest heavily in rigorous testing at all levels.
This guide puts the onus on the reader to find the right balance for their project. Also, it does not provide a test plan template, because templates are often too generic or too specific and quickly become outdated. Instead, it focuses on selecting the best content when writing a test plan.

Test plan vs. strategy
Before proceeding, two common methods for defining test plans need to be clarified:
  • Single test plan: Some projects have a single "test plan" that describes all implemented and planned testing for the project.
  • Single test strategy and many plans: Some projects have a "test strategy" document as well as many smaller "test plan" documents. Strategies typically cover the overall test approach and goals, while plans cover specific features or project updates.
Either of these may be embedded in and integrated with project design documents. Both of these methods work well, so choose whichever makes sense for your project. Generally speaking, stable projects benefit from a single plan, whereas rapidly changing projects are best served by infrequently changed strategies and frequently added plans.
For the purpose of this guide, I will refer to both test document types simply as "test plans". If you have multiple documents, just apply the advice below to your document aggregation.

Content selection
A good approach to creating content for your test plan is to start by listing all questions that need answers. The lists below provide a comprehensive collection of important questions that may or may not apply to your project. Go through the lists and select all that apply. By answering these questions, you will form the contents for your test plan, and you should structure your plan around the chosen content in any format your team prefers. Be sure to balance the factors as mentioned above when making decisions.

Prerequisites
  • Do you need a test plan? If there is no project design document or a clear vision for the product, it may be too early to write a test plan.
  • Has testability been considered in the project design? Before a project gets too far into implementation, all scenarios must be designed as testable, preferably via automation. Both project design documents and test plans should comment on testability as needed.
  • Will you keep the plan up-to-date? If so, be careful about adding too much detail, otherwise it may be difficult to maintain the plan.
  • Does this quality effort overlap with other teams? If so, how have you deduplicated the work?

Risk
  • Are there any significant project risks, and how will you mitigate them? Consider:
    • Injury to people or animals
    • Security and integrity of user data
    • User privacy
    • Security of company systems
    • Hardware or property damage
    • Legal and compliance issues
    • Exposure of confidential or sensitive data
    • Data loss or corruption
    • Revenue loss
    • Unrecoverable scenarios
    • SLAs
    • Performance requirements
    • Misinforming users
    • Impact to other projects
    • Impact from other projects
    • Impact to company's public image
    • Loss of productivity
  • What are the project's technical vulnerabilities? Consider:
    • Features or components known to be hacky, fragile, or in great need of refactoring
    • Dependencies or platforms that frequently cause issues
    • Possibility for users to cause harm to the system
    • Trends seen in past issues

Coverage
  • What does the test surface look like? Is it a simple library with one method, or a multi-platform client-server stateful system with a combinatorial explosion of use cases? Describe the design and architecture of the system in a way that highlights possible points of failure.
  • What platforms are supported? Consider listing supported operating systems, hardware, devices, etc. Also describe how testing will be performed and reported for each platform.
  • What are the features? Consider making a summary list of all features and describe how certain categories of features will be tested.
  • What will not be tested? No test suite covers every possibility. It's best to be up-front about this and provide rationale for not testing certain cases. Examples: low-risk areas that are a low priority, complex cases that are a low priority, areas covered by other teams, features not ready for testing, etc.
  • What is covered by unit (small), integration (medium), and system (large) tests? Always test as much as possible in smaller tests, leaving fewer cases for larger tests. Describe how certain categories of test cases are best tested by each test size and provide rationale.
  • What will be tested manually vs. automated? When feasible and cost-effective, automation is usually best. Many projects can automate all testing. However, there may be good reasons to choose manual testing. Describe the types of cases that will be tested manually and provide rationale.
  • How are you covering each test category? Consider:
  • Will you use static and/or dynamic analysis tools? Both static analysis tools and dynamic analysis tools can find problems that are hard to catch in reviews and testing, so consider using them.
  • How will system components and dependencies be stubbed, mocked, faked, staged, or used normally during testing? There are good reasons to do each of these, and they each have a unique impact on coverage.
  • What builds are your tests running against? Are tests running against a build from HEAD (aka tip), a staged build, and/or a release candidate? If only from HEAD, how will you test release build cherry picks (selection of individual changelists for a release) and system configuration changes not normally seen by builds from HEAD?
  • What kind of testing will be done outside of your team? Examples:
    • Dogfooding
    • External crowdsource testing
    • Public alpha/beta versions (how will they be tested before releasing?)
    • External trusted testers
  • How are data migrations tested? You may need special testing to compare before and after migration results.
  • Do you need to be concerned with backward compatibility? You may own previously distributed clients or there may be other systems that depend on your system's protocol, configuration, features, and behavior.
  • Do you need to test upgrade scenarios for server/client/device software or dependencies/platforms/APIs that the software utilizes?
  • Do you have line coverage goals?
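The stubbing and mocking question above can be made concrete with a short sketch. This is only an illustration, assuming Python and the standard-library `unittest.mock`; the `WeatherClient` class and its dependency are hypothetical:

```python
from unittest import mock

# Hypothetical component under test: a client with an injected,
# network-facing dependency that tests should not actually hit.
class WeatherClient:
    def __init__(self, http_get):
        self._http_get = http_get  # callable: url -> temperature

    def freezing(self, city):
        return self._http_get("/temp/" + city) < 0.0

# Stub: returns a canned value; no verification of the interaction.
stub = lambda url: -5.0
assert WeatherClient(stub).freezing("oslo") is True

# Mock: returns a canned value AND lets us verify the interaction,
# covering the client's calling behavior as well as its logic.
m = mock.Mock(return_value=12.5)
client = WeatherClient(m)
assert client.freezing("lima") is False
m.assert_called_once_with("/temp/lima")
print("stub and mock checks passed")
```

A fake (an in-memory working implementation) or a staged server would exercise more of the real behavior at a higher setup cost; that trade-off is worth stating explicitly in the plan.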

Tooling and Infrastructure
  • Do you need new test frameworks? If so, describe these or add design links in the plan.
  • Do you need a new test lab setup? If so, describe these or add design links in the plan.
  • If your project offers a service to other projects, are you providing test tools to those users? Consider providing mocks, fakes, and/or reliable staged servers for users trying to test their integration with your system.
  • For end-to-end testing, how will test infrastructure, systems under test, and other dependencies be managed? How will they be deployed? How will persistence be set-up/torn-down? How will you handle required migrations from one datacenter to another?
  • Do you need tools to help debug system or test failures? You may be able to use existing tools, or you may need to develop new ones.
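As one example of a debugging aid, a test that fails intermittently can be rerun to separate deterministic failures from flakes. A minimal sketch, assuming Python; `rerun_to_classify` and the sample test are hypothetical:

```python
import random

def rerun_to_classify(test_fn, runs=20):
    """Rerun a zero-argument test callable and classify the outcome:
    all passes, all failures, or flaky (a mix of both)."""
    failures = sum(1 for _ in range(runs) if not test_fn())
    if failures == 0:
        return "pass"
    if failures == runs:
        return "deterministic failure"
    return "flaky (%d/%d failures)" % (failures, runs)

# A deliberately nondeterministic test to demonstrate the classifier.
random.seed(7)
print(rerun_to_classify(lambda: random.random() > 0.3))
```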

Process
  • Are there test schedule requirements? What time commitments have been made, which tests will be in place (or test feedback provided) by what dates? Are some tests important to deliver before others?
  • How are builds and tests run continuously? Most small tests will be run by continuous integration tools, but large tests may need a different approach. Alternatively, you may opt for running large tests as-needed. 
  • How will build and test results be reported and monitored?
    • Do you have a team rotation to monitor continuous integration?
    • Large tests might require monitoring by someone with expertise.
    • Do you need a dashboard for test results and other project health indicators?
    • Who will get email alerts and how?
    • Will the person monitoring tests simply use verbal communication to the team?
  • How are tests used when releasing?
    • Are they run explicitly against the release candidate, or does the release process depend only on continuous test results? 
    • If system components and dependencies are released independently, are tests run for each type of release? 
    • Will a "release blocker" bug stop the release manager(s) from actually releasing? Is there an agreement on what are the release blocking criteria?
    • When performing canary releases (aka % rollouts), how will progress be monitored and tested?
  • How will external users report bugs? Consider feedback links or other similar tools to collect and cluster reports.
  • How does bug triage work? Consider labels or categories for bugs so that they land in a triage bucket. Also make sure the teams responsible for filing bugs and/or creating the bug report template are aware of this. Are you using one bug tracker, or do you need to set up an automatic or manual import routine?
  • Do you have a policy for submitting new tests before closing bugs that could have been caught?
  • How are tests used for unsubmitted changes? If anyone can run all tests against any experimental build (a good thing), consider providing a howto.
  • How can team members create and/or debug tests? Consider providing a howto.
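The release-blocker question above is easiest to answer if the agreed criteria are mechanical enough to check automatically. A minimal sketch, assuming Python; the priority scheme and the `release_blockers` helper are hypothetical:

```python
# Agreed release-blocking criteria: any open bug at these priorities
# stops the release manager from releasing.
BLOCKING_PRIORITIES = {"P0", "P1"}

def release_blockers(bugs):
    """bugs: iterable of (bug_id, priority, status) tuples.
    Returns the bugs that block the release."""
    return [bug for bug in bugs
            if bug[1] in BLOCKING_PRIORITIES and bug[2] == "open"]

bugs = [(101, "P0", "open"), (102, "P2", "open"), (103, "P1", "fixed")]
print(release_blockers(bugs))  # -> [(101, 'P0', 'open')]
```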

Utility
  • Who are the test plan readers? Some test plans are only read by a few people, while others are read by many. At a minimum, you should consider getting a review from all stakeholders (project managers, tech leads, feature owners). When writing the plan, be sure to understand the expected readers, provide them with enough background to understand the plan, and answer all questions you think they will have - even if your answer is that you don't have an answer yet. Also consider adding contacts for the test plan, so any reader can get more information.
  • How can readers review the actual test cases? Manual cases might be in a test case management tool, in a separate document, or included in the test plan. Consider providing links to directories containing automated test cases.
  • Do you need traceability between requirements, features, and tests?
  • Do you have any general product health or quality goals and how will you measure success? Consider:
    • Release cadence
    • Number of bugs caught by users in production
    • Number of bugs caught in release testing
    • Number of open bugs over time
    • Code coverage
    • Cost of manual testing
    • Difficulty of creating new tests
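One of the health indicators above, the number of open bugs over time, can be computed directly from bug open/close dates. A minimal sketch, assuming Python; the data shape is hypothetical:

```python
from datetime import date

def open_bugs_on(day, bugs):
    """Count bugs open on `day`; bugs is a list of
    (opened, closed) date pairs, with closed=None if still open."""
    return sum(1 for opened, closed in bugs
               if opened <= day and (closed is None or closed > day))

bugs = [(date(2016, 1, 1), date(2016, 2, 1)),   # closed Feb 1
        (date(2016, 1, 15), None)]              # still open
print(open_bugs_on(date(2016, 1, 20), bugs))  # -> 2
print(open_bugs_on(date(2016, 3, 1), bugs))   # -> 1
```

Sampling this count at a regular cadence (say, weekly) gives the trend line for a dashboard.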


Categories: Testing & QA

Extreme Programming Explained, Second Edition: Re-Read Week 9 (Chapters 18 – 19)

XP Explained Cover

This week we continue the re-read of Kent Beck and Cynthia Andres's Extreme Programming Explained, Second Edition (2005) with two more chapters in Section Two. Chapters 18 and 19 provide a view into two very different management philosophies that shaped software development in general and have had a major impact on XP. Chapter 18 discusses Taylorism and scientific management, a 'management knows best' view of the world. Chapter 19 talks about the Toyota Production System, which puts significant power back in the hands of the practitioner to deliver a quality product.

Chapter 18: Taylorism and Software

Somewhere in the dark ages when I was a senior in high school, I worked for Firestone Tire and Rubber. During my stint as a tire sorter and mold cleaner, the time and motion people terrorized me and almost everyone else at the plant. They lurked behind the machines with clipboards and stopwatches to ensure all workers were following 'the right way' to do the work. Cut to approximately four years later: as a senior at Louisiana State University, I took several courses in industrial engineering. Between both sets of experiences, I learned a lot about industrial engineering, scientific management, and Taylorism. Some of what I learned made sense for highly structured manufacturing plants, but very little makes sense for organizations writing and delivering software (although many people still try to apply these concepts).

Frederick Taylor led the efficiency movement of the 1890s, culminating in his book The Principles of Scientific Management (1911). Scientific management suggests that there is 'one best way' to do any piece of work, which can be identified and measured (hence the stopwatches). Scientific management continues to be used by many organizations, from manufacturing to hospitals (at least according to my sister-in-law, who is a nurse). It is hard to resist something named scientific management, even though it tends to clash with the less regimented concepts found in the Agile and lean frameworks used in knowledge work.

Side Note: Beck points out the brilliance in naming "scientific management": who would be in favor of the opposite, "unscientific management"? (The book notes that when picking descriptive names, it helps to pick a name whose opposite is unappealing, for example, the Patriot Act.)

Why the clash? Scientific management was born in a manufacturing environment, not a software development environment. Taylor was focused on getting the most out of workers, whom he felt had to be led and controlled closely. In Taylor's world, making steel or assembling cars were repeatable and predictable processes, and the workers were cogs in the machine. Time and motion studies, a common tool in scientific management, run into problems based on several simplifying assumptions when applied to many types of work, including software development. Beck points out three critical assumptions made by Taylor.

  1.   Things usually go according to plan as work moves through a repeatable process.
  2.   Improving individual steps leads to optimization of the overall process.
  3.   People are mostly interchangeable and need to be told what to do.

Take a few minutes while you consider the simplifying assumptions as they are applied to writing, testing and delivering functionality, and then stop laughing.

While very few enlightened CIOs would admit to being adherents of Taylor, many would describe their "shop" as a software factory and actively leverage at least some of the tenets of scientific management. The practice of social engineering developed by Taylor and his followers is built into the role specialization model of IT that is nearly ubiquitous. One form of social engineering practiced on the factory floor even today is the separation of workers from planners, which translates in software development into separating estimators and project managers from developers and testers in today's IT organization. The planners and estimators decide how, and how long, the workers will take to deliver a piece of work. Developers are considered the 21st-century cogs in the machine, working at the pace specified by the planners and estimators. Every software development framework decries this practice, and yet it still exists. Similarly, Beck points out that creating a separate quality department is another form of social engineering. The separation of the quality function ensures the workers are working to the correct quality by checking at specific points in the process flow. In every case, separating activities generates bottlenecks and constraints, and potentially makes each group the enemy of the other. Once upon a time I heard a group of developers mention that they had completed development, but the testers caused the project to be late. This is a reflection of Taylorism and social engineering.

Chapter 19: Toyota

The Toyota Production System (TPS) is an alternative to Taylorism.  Much has been written about TPS, including several books by Tom and Mary Poppendieck, who pioneered applying TPS to software development and maintenance. In TPS, each worker is responsible for the whole production line rather than a single function. One of the goals of each step in the process is to make the quality of the production line high enough that downstream quality assurance is not needed.

In TPS there is less social stratification and less need to perform independent checking. Less independent checking is needed because workers feel accountable for their work, since it will immediately be used by the next step in the process. In software development, a developer writes and tests code that forms the basis for future stories. A developer in an organization using TPS can't hide poor quality and will be subject to peer pressure to clean up their act and deliver good quality.

Beck caps the chapter with a reminder of the time value of money.  Making anything and then not using it immediately to generate feedback causes its informational value to evaporate. This is one of the reasons why short iterations and quick feedback generate more value than classic waterfall.

Previous installments of Extreme Programming Explained, Second Edition (2005) on Re-read Saturday:

Extreme Programming Explained: Embrace Change Second Edition Week 1, Preface and Chapter 1

Week 2, Chapters 2 – 3

Week 3, Chapters 4 – 5

Week 4, Chapters 6 – 7

Week 5, Chapters 8 – 9

Week 6, Chapters 10 – 11

Week 7, Chapters 12 – 13

Week 8, Chapters 14 – 15

Week 9, Chapters 16 – 17

A few quick notes. We are going to read The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass).  This will be a new book for me, so it is an initial read rather than a re-read!  Steven Adams suggested the book, and it has been on my list for a few years! Click the link (The Five Dysfunctions of a Team), buy a copy, and in a few weeks we will begin to read the book together.

 

 


Categories: Process Management

Stuff The Internet Says On Scalability For August 19th, 2016

Hey, it's HighScalability time:

 


Modern art? Nope. Pancreatic cancer revealed by fluorescent labeling.

 

If you like this sort of Stuff then please support me on Patreon.
  • 4: SpaceX rocket landings at sea; 32TB: 3D Vertical NAND Flash; 10x: compute power for deep learning as the best of today’s GPUs; 87%: of vehicles could go electric without any range problems; 06%: visitors that post comments on NPR; 235k: terrorism-related Twitter accounts closed; 40%: AMD improvement in instructions per clock for Zen; 15%: apps are slower in summer because of humidity.

  • Quotable Quotes:
    • @netik: There is no Internet of Things. There are only many unpatched, vulnerable small computers on the Internet.
    • @Pinboard: The Programmers’ Credo: we do these things not because they are easy, but because we thought they were going to be easy
    • Aphyr: This advantage is not shared by sequential consistency, or its multi-object cousin, serializability. This much, I knew–but Herlihy & Wing go on to mention, almost offhand, that strict serializability is also nonlocal!
    • @PHP_CEO: I’VE HAD AN IDEA / WE’LL TAKE ALL THE BAD CODE / BUNDLE IT TOGETHER / AND SELL IT TO VCS AS A COLLATERALIZED TECHNICAL DEBT OBLIGATION
    • felixgallo: I agree, the actor model is a significantly more usable metaphor for containers than functions. When you start thinking about supervisor trees, you start heading towards Kubernetes, which is interesting.
    • David Rosenthal: So in practice blockchains are decentralized (not), anonymous (not and not), immutable (not), secure (not), fast (not) and cheap (not). What's (not) to like?
    • @grimmelm: You know, you can’t spell “idiotic” without “IoT”
    • @jroper: 10 years ago, backends were monolithic services and frontends many pages. Now frontends are monolithic pages and backends many services.
    • @jakevoytko: Ordinary human: Hey, this is a fork. You can eat with it! People who comment on programming blogs: You can't eat soup with that.
    • iLoch: Wow $5000/mo for 2000rps, just for the application servers? That's absurd. I think we're paying around $2000/mo for our app servers, a database which is over 2TB in size, and we ingest about 10 megabytes of text data per second, on top of a couple thousand requests per second to the user facing application.
    • @josh_wills: I'm thinking about writing a book on data engineering for kids: "An Immutable, Append-Only Log of Unfortunate Events"
    • Kill Process: What the world needs is not a new social network that concentrates power in a single place, but a design to intrinsically prevent the concentration of power that results in barriers to switching.
    • ljmasternoob: the bump was just Schrödinger's cat stepping on Occam's razor.
    • carsongross: The JVM is a treasure just sitting there waiting to be rediscovered.
    • @mjpt777: When @nitsanw points out some of what he finds in the JVM I often end up crying :(
    • @karpathy: I hoped TensorFlow would standardize our code but it's low level so we've diverged on layers over it: Slim, PrettyTensor, Keras, TFLearn ...
    • @rbranson:  coordination is a scaling bottleneck in teams as much as it is in distributed systems.
    • @mathiasverraes: There are only two hard problems in distributed systems:  2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery
    • @PhilDarnowsky: I've been using dynamically typed languages for a living for a decade. As a result, I prefer statically typed languages.
    • Allyn Malventano: 64-Layer is Samsung's 4th generation of V-NAND. We've seen 48-Layer and 32-Layer, but few know that 24-Layer was a thing (but was mainly in limited enterprise parts).
    • @cmeik: "It's a bit odd to me that programming languages today only give you the ability to write something that runs on one machine..." [1/2]
    • @trengriffin: @amcafee Use of higher radio frequencies will require a lot more antennas creating ever smaller coverage areas. More heterogeneous bandwidth
    • @jamesurquhart: Disagree IaaS multicloud tools will play major role moving forward. Game is in PaaS and app deployment (containers).

  • Linking it all together on a great episode of This Week In Tech. Google’s new OS, Fuchsia, for places where Android fears to tread: smaller, lower power IoT type devices. Intel Optane is an almost-shipping non-volatile memory that is 1000X faster than SSD (maybe not), has up to 10X the capacity of DRAM, while only being a few X slower than typical DRAM, and is perfect for converged IoT devices. Say goodbye to blocks and memory tiers. IoT devices don't have to be fast, so DRAM can be replaced with this new memory, hopefully making simpler, cheaper devices that can last a decade on a small battery, especially when combined with low power ARM CPUs. NVMe is replacing SATA and AHCI for higher bandwidth, lower latency access to non-volatile memory. 5G, when it comes out, will specifically support billions of low power IoT devices. Machine learning ties everything together. That future that is full of sensors may actually happen. As Greg Ferro said (roughly): we are starting to see the convergence of multiple advances, and you can start to plot a pathway forward to see where the disruption occurs. The irony, still, is that nothing will work together. We have ubiquitous wifi more from a fluke of history than any conscious design. We see how, when left up to industry, the silo mindset captures all reason, and we are all the poorer for it.

  • We have water rights. Mineral rights. Surface rights. Is there such a thing as virtual property rights? Do you own the virtual property rights of your own property when someone else decides to use it in an application? Pokemon GO Hit With Class Action Lawsuit. Why do people keep coming to this couple’s home looking for lost phones?

  • As data becomes more valuable, the assumption that we are the product becomes the norm. Provider of Personal Finance Tools Tracks Bank Cards, Sells Data to Investors: Yodlee has another way of making money: the company sells some of the data it gathers from credit- and debit-card transactions to investors and research firms...Yodlee can tell you “down to the day how much the water bill was across 25,000 citizens of San Francisco” or the daily spending at McDonald’s throughout the country...The details are so valuable that some investment firms have paid more than $2 million apiece for an annual subscription to Yodlee’s service.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Two Teams or Not: Courage and Flexibility

You got to have courage!


I listen to Malcolm Gladwell's wonderful Revisionist History podcast. In the last podcast of season one, he discussed "the satire paradox." The punchline of the most recent installment is that change is not possible without courage. Flexibility requires courage. Change, when embracing something like Agile, requires the flexibility to give something up. Perhaps we might be asked to move outside of our comfort zone and work differently, or to work with people we haven't worked with before. Asking testers and developers to work on the same team or to work as pairs, or asking backend and UI subteams to work together, requires flexibility. We can define the flexibility to embrace Agile, or any other significant modification to how work is done, by four basic attributes:

  1. Ability to accept changing priorities – Agile projects are based on a backlog that encompasses the prioritized list of work for the team(s).  This backlog will evolve based on knowledge and feedback. The evolution of the backlog includes changes to the items on the list and to their priority. All team members, whether developer, business analyst, or tester, need to accept that what we planned on doing in the next sprint might not be what we originally thought.
  2. Ability to accept changing roles and workload – Self-directed and self-managed teams make decisions that affect who does what and when.  Each team member needs to accept that they might be asked (or need to volunteer) to do whatever is needed for the team to be successful.  Adopting concepts such as specializing generalists or T-shaped people is a direct reflection of the need for flexibility.
  3. Ability to adapt to changing environments – Business and technical architectures change over time. Architectures are a reflection of how someone (a team or an architect) perceives the environment at a specific moment in time. Implementing the adage that developers should "run towards feedback" requires courage and flexibility.
  4. Ability to persist – Any process change requires doing something different, which is often scary or uncomfortable, even if only briefly. If we gave up immediately at the first sign of unease, nothing would ever change, even if the data says that staying the course will be good.  For example, the first day at all six universities that I attended was full of stress (I remember once even having a dream that I could not find my classes). I was able to find the courage to persist and push through that unease in order to make the change and find a seat in the back of the room in each class.

When I have been asked whether two teams were really one team, or whether they should find a way to work together, my answers have been premised on the assumption that they had the courage and the flexibility to change. The discussion of courage and flexibility is really less about Agile techniques than about change management. A test of whether courage and flexibility are the basic issues can be as simple as listening to team members' comments. If you hear comments such as "we have always done it that way" or "why can't we do it the way we used to?", then leaders and influencers need to assess whether the team or individual has the courage and flexibility to change. If they do not, leaders and coaches need to help develop an environment where courage and flexibility can grow before any specific process framework or technique can be successful.

Changing how people work is difficult because most people only choose to change if they see greater benefit (or pain avoided) in the change than the pain of making it. Flexibility is the set of abilities that helps individuals and teams make a choice and then commit to that choice so that change happens.


Categories: Process Management

Invoking "Laws" Without a Domain or Context

Herding Cats - Glen Alleman - Thu, 08/18/2016 - 22:31

It seems to be common to invoke Laws in place of actual facts when trying to support a point. Two of my favorites, recently encountered from some Agile and #NoEstimates advocates, are:

  • Goodhart's Law
  • Hofstadter's Law

These are not Laws in the same way as the Laws of Physics, Laws of Chemistry, or Laws of Queuing Theory - which is why it's so easy to misapply them, misuse them, and use them to obfuscate the situation and hide behind fancy terms that have no meaning for the problem at hand. Here are some real laws.

  • Newton's Law(s), there are three of them:
    • First law: When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by a net force.
    • Second law: In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration vector a of the object: F = ma.
    • Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
  • Boyle's Law - For a fixed mass of gas at constant temperature, the volume is inversely proportional to the pressure. PV = constant.
  • Charles's Law - For a fixed mass of gas at constant pressure, the volume is directly proportional to the Kelvin temperature. V = constant × T.
  • The 2nd Law of Thermodynamics - The total entropy of an isolated system always increases over time, or remains constant in ideal cases where the system is in a steady state or undergoing a reversible process. The increase in entropy accounts for the irreversibility of natural processes, and the asymmetry between future and past.
  • Little's Law - L = λW, which asserts that the time-average number of customers in a queueing system, L, is equal to the rate at which customers arrive and enter the system, λ, times the average sojourn time of a customer, W. And just to be clear, the statistics of the processes in Little's Law are assumed to be IID - Independent, Identically Distributed - and stationary. That is rarely the case in software development, where Little's Law is often misused.
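To make the terms concrete, here is a small sketch (my own illustration, not from the post) that measures λ and W from a simulated single-server M/M/1 queue and checks that λW lands near the known analytic average number in system:

```python
# A sketch (illustrative, not from the post) of Little's Law, L = lambda * W,
# measured from a simulated single-server (M/M/1) queue.
import random

random.seed(7)

def simulate(arrival_rate, service_rate, n_customers=50_000):
    """Run an M/M/1 queue; return lambda * W measured from the run."""
    t = 0.0               # arrival clock
    free_at = 0.0         # when the server next becomes idle
    total_sojourn = 0.0
    first_arrival = None
    for _ in range(n_customers):
        t += random.expovariate(arrival_rate)       # next Poisson arrival
        if first_arrival is None:
            first_arrival = t
        start = max(t, free_at)                     # queue if server is busy
        free_at = start + random.expovariate(service_rate)
        total_sojourn += free_at - t                # this customer's sojourn
    lam = n_customers / (free_at - first_arrival)   # observed arrival rate
    W = total_sojourn / n_customers                 # observed mean sojourn time
    return lam * W

# With lambda = 2 and mu = 3, the analytic average number in system is
# rho / (1 - rho) = 2, and the measured lambda * W should land close to it.
L = simulate(arrival_rate=2.0, service_rate=3.0)
print(round(L, 2))
```

The agreement holds here because the simulated processes are stationary, which is exactly the precondition the post says is rarely met in software development.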

Misuse of Goodhart's Law

This post, like many other posts, was stimulated by a conversation on social media. Sometimes the conversations trigger ideas that have lain dormant for a while. Sometimes I get a new idea from a word or a phrase. But most of the time, they come from a post that was either wrong, misinformed, or, worse, misrepresenting the principles involved.

The OP claimed Goodhart's Law was the source of most of the problems with software development. See the law below. 

But the real issue with invoking Goodhart's Law has several dimensions. The law is named after the economist who originated it, Charles Goodhart. Its most popular formulation is: "When a measure becomes a target, it ceases to be a good measure." The law is part of a broader discussion of making policy decisions with macroeconomic models.

Given that the structure of an econometric model consists of optimal decision rules of economic agents, and that optimal decision rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models.

What this says, again, is that when the measure becomes the target, the act of targeting impacts the measure, changing what is measured.

So first a big question

Is this macroeconomic model a correct operational model for software development processes - does measuring change the target?

Setting targets and measuring performance against those targets is the basis of all closed loop control systems used to manage projects. In our domain this control system is the Risk Adjusted Earned Value Management System (EVMS). EVM is a project management technique for measuring project performance and progress in an objective manner. A baseline of the planned value is established, work is performed, physical percent complete is measured, and the earned value is calculated. This process provides actionable information about the performance of the project using Quantifiable Backup Data (QBD) for the expected outcomes of the work, for the expected cost, at the expected time, all adjusted for the reducible and irreducible uncertainties of the work.
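As a sketch of that closed loop (with hypothetical numbers of my own, not from the post), the basic earned value quantities can be computed as:

```python
# Hypothetical numbers for illustration (not from the post): a $100k budget,
# 40% physically complete at a status date where 50% of the value was planned,
# and $45k actually spent.
BAC = 100_000        # Budget at Completion
planned_pct = 0.50   # planned percent complete at the status date
actual_pct = 0.40    # measured physical percent complete
AC = 45_000          # Actual Cost to date

PV = planned_pct * BAC   # Planned Value
EV = actual_pct * BAC    # Earned Value (physical % complete x budget)
CPI = EV / AC            # Cost Performance Index (< 1.0: over cost)
SPI = EV / PV            # Schedule Performance Index (< 1.0: behind schedule)
EAC = BAC / CPI          # one simple Estimate at Completion

print(PV, EV, round(CPI, 3), SPI, round(EAC))  # → 50000.0 40000.0 0.889 0.8 112500
```

Each index compares measured progress against the target baseline - exactly the target-and-measure loop the paragraph describes.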

Without setting a target to measure against, we have:

  • No baseline control.
  • No measures of effectiveness.
  • No measures of performance.
  • No technical performance measures.
  • No Key Performance Parameters.

With no target and no measures of progress toward the target ... 

We have no project management, no program controls, we have no closed loop control system.

With these missing pieces, the project is doomed on day one. And then we're surprised it runs over cost and schedule, and doesn't deliver the needed capabilities in exchange for the cost and time invested.

When you hear that Goodhart's Law is the cause of project failure, you're likely talking to someone with little understanding of managing projects with a budget and a due date for the needed capabilities - you know, an actual project. So what this means in economics, and not in project management, is ...

... when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it. - Mario Biagioli, Nature (volume 535, page 201, 2016)

Note the term Economy - not the cost, schedule, and technical performance measures of projects. When measuring the goals and activities of monetary policy, Goodhart's Law might be applicable. For managing the development of products with other people's money, probably not.

Gaming of the system is certainly possible on projects. But unlike the open economy, those gaming the project measures can be made to stop with a simple command: stop gaming, or I'll find someone else to take your place.

Misuse of Hofstadter's Law

My next favorite misused law is this one, which is popular among the #NoEstimates advocates who claim estimating can't be done.

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law. - Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid

Hofstadter's Law is about the development and use of self-referencing systems. The statement about how long something takes is itself a self-referencing statement. Hofstadter is speaking about the development of a chess-playing program - and doing so from the perspective of 1978-style software development. Game-playing programs use a look-ahead tree with branches for the moves and countermoves. The art of the program is to avoid exploring every branch of the look-ahead tree down to the terminal nodes. In chess - actual chess - people, not the computer, have the skill to know which branches to look down and which to ignore.

In the early days (before 1978), people used to estimate that it would be ten years until a computer was world champion. But after ten years (1988), it was estimated that that day was still ten years away.

This notion is part of the recursive Hofstadter's Law which is what the whole book is about. The principle of Recursion and Unpredictability is described at the bottom of page 152. 

For a set to be recursively enumerable (the condition for traversing the look-ahead tree of all possible moves) means it can be generated from a set of starting points (axioms) by the repeated application of rules of inference. Thus, the set grows and grows, each new element being compounded somehow out of previous elements, in a sort of mathematical snowball. But this is the essence of recursion - something being defined in terms of simpler versions of itself, instead of explicitly.

Recursive enumeration is a process in which new things emerge from old things by fixed rules. There seem to be many surprises in such processes ...
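As a concrete sketch of such a process (my own illustration, using Hofstadter's MIU puzzle from the same book), new strings emerge from the axiom "MI" by the repeated application of fixed rules:

```python
# A sketch of recursive enumeration using the MIU system from Gödel, Escher,
# Bach: new strings emerge from old ones by fixed rules, starting from "MI".
from collections import deque

def successors(s):
    out = set()
    if s.endswith("I"):                     # rule 1: xI -> xIU
        out.add(s + "U")
    out.add("M" + s[1:] * 2)                # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):             # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):             # rule 4: UU -> (nothing)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def enumerate_miu(limit=10):
    """Breadth-first enumeration of MIU theorems, shortest derivations first."""
    seen, queue, produced = {"MI"}, deque(["MI"]), []
    while queue and len(produced) < limit:
        s = queue.popleft()
        produced.append(s)
        for t in sorted(successors(s)):
            if t not in seen and len(t) <= 12:   # cap growth for the demo
                seen.add(t)
                queue.append(t)
    return produced

print(enumerate_miu(5))  # → ['MI', 'MII', 'MIU', 'MIIII', 'MIIU']
```

The snowball is visible even in five steps, and the surprises Hofstadter mentions are real: "MU" famously never appears, and predicting what the enumeration will produce, or when, is hard.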

So if you work on the development of recursive-enumeration-based software systems, then yes - estimating when you'll have your program working is likely going to be hard. Or if you work on the development of software that has no stated Capabilities, no Product Roadmap, no Release Plan, and no Product Owner or Customer with even the slightest notion of what Done looks like in units of measure meaningful to the decision makers, then you can probably apply Hofstadter's Law. Yourdon calls this type of project a Death March project - good luck with that.

If not, then DO NOT fall prey to the misuse of Hofstadter's Law by those who likely have neither actually read Hofstadter's book nor have the skills and experience to understand the processes needed to produce credible estimates.

So once again, it's time to call BS when quotes are misused.

Related articles
  • Agile Software Development in the DOD
  • Empirical Data Used to Estimate Future Performance
  • Thinking, Talking, Doing on the Road to Improvement
  • Herding Cats: The Misuse of Hofstadter's Law
  • Just Because You Say Words, It Doesn't Make Them True
  • There is No Such Thing as Free
  • Doing the Math
  • Building a Credible Performance Measurement Baseline
  • Your Project Needs a Budget and Other Things
Categories: Project Management

Hackable Projects

Google Testing Blog - Thu, 08/18/2016 - 19:18
By: Patrik Höglund
Introduction

Software development is difficult. Projects often evolve over several years, under changing requirements and shifting market conditions, impacting developer tools and infrastructure. Technical debt, slow build systems, poor debuggability, and increasing numbers of dependencies can weigh down a project. The developers get weary, and cobwebs accumulate in dusty corners of the code base.

Fighting these issues can be taxing and feel like a quixotic undertaking, but don't worry - the Google Testing Blog is riding to the rescue! This is the first article in a series on "hackability" that identifies some of the issues that hinder software projects and outlines what Google SETIs usually do about them.

According to Wiktionary, hackable is defined as:
Adjective
hackable (comparative more hackable, superlative most hackable)
  1. (computing) That can be hacked or broken into; insecure, vulnerable. 
  2. That lends itself to hacking (technical tinkering and modification); moddable.

Obviously, we're not going to talk about making your product more vulnerable (by, say, rolling your own crypto or something equally unwise); instead, we will focus on the second definition, which essentially means "something that is easy to work on." This has become the main focus for SETIs at Google as the role has evolved over the years.
In Practice

In a hackable project, it's easy to try things and hard to break things. Hackability means fast feedback cycles that offer useful information to the developer.

This is hackability:
  • Developing is easy
  • Fast build
  • Good, fast tests
  • Clean code
  • Easy running + debugging
  • One-click rollbacks
In contrast, what is not hackability?
  • Broken HEAD (tip-of-tree)
  • Slow presubmit (i.e. checks running before submit)
  • Builds take hours
  • Incremental build/link > 30s
  • Flaky tests
  • Can't attach a debugger
  • Logs full of uninteresting information
The Three Pillars of Hackability

There are a number of tools and practices that foster hackability. When everything is in place, it feels great to work on the product. Basically no time is spent on figuring out why things are broken, and all time is spent on what matters, which is understanding and working with the code. I believe there are three main pillars that support hackability. If one of them is absent, hackability will suffer. They are:


Pillar 1: Code Health

"I found Rome a city of bricks, and left it a city of marble."
   -- Augustus
Keeping the code in good shape is critical for hackability. It’s a lot harder to tinker and modify something if you don’t understand what it does (or if it’s full of hidden traps, for that matter).
Tests

Unit and small integration tests are probably the best things you can do for hackability. They're a support you can lean on while making your changes, and they contain lots of good information on what the code does. It isn't hackability to boot a slow UI and click buttons on every iteration to verify your change worked - it is hackability to run a sub-second set of unit tests! In contrast, end-to-end (E2E) tests generally help hackability much less (and can even be a hindrance if they, or the product, are in sufficiently bad shape).

Figure 1: the Testing Pyramid.
I’ve always been interested in how you actually make unit tests happen in a team. It’s about education. Writing a product such that it has good unit tests is actually a hard problem. It requires knowledge of dependency injection, testing/mocking frameworks, language idioms and refactoring. The difficulty varies by language as well. Writing unit tests in Go or Java is quite easy and natural, whereas in C++ it can be very difficult (and it isn’t exactly ingrained in C++ culture to write unit tests).

It’s important to educate your developers about unit tests. Sometimes, it is appropriate to lead by example and help review unit tests as well. You can have a large impact on a project by establishing a pattern of unit testing early. If tons of code gets written without unit tests, it will be much harder to add unit tests later.

What if you already have tons of poorly tested legacy code? The answer is refactoring and adding tests as you go. It’s hard work, but each line you add a test for is one more line that is easier to hack on.
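A minimal sketch of what that education aims at (all names here are hypothetical, not from the post): a class that takes its dependency as a constructor parameter, so a sub-second unit test can substitute a fake instead of calling a real service.

```python
# A sketch (hypothetical names) of dependency injection enabling a unit test:
# the formatter receives its exchange-rate source as a parameter, so the test
# can pass a fake instead of a slow, networked currency service.

class PriceFormatter:
    def __init__(self, rate_source):
        self.rate_source = rate_source   # injected dependency

    def in_euros(self, dollars):
        rate = self.rate_source.usd_to_eur()
        return f"{dollars * rate:.2f} EUR"

class FakeRates:
    """Test double standing in for the real rate service."""
    def usd_to_eur(self):
        return 0.9

def test_formats_converted_price():
    formatter = PriceFormatter(FakeRates())
    assert formatter.in_euros(10) == "9.00 EUR"

test_formats_converted_price()   # sub-second, no UI, no servers
```

Without the injected parameter, the formatter would have to construct the real rate service itself, and the test would be stuck booting it - the coupling problem the text describes.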
Readable Code and Code Review

At Google, "readability" is a special committer status that is granted per language (C++, Go, Java and so on). It means that a person not only knows the language and its culture and idioms well, but also can write clean, well-tested and well-structured code. Readability literally means that you're a guardian of Google's code base and should push back on hacky and ugly code. The use of a style guide enforces consistency, and code review (where at least one person with readability must approve) ensures the code upholds high quality. Engineers must take care not to depend too much on "review buddies" here, but really make sure to pull in the person who can give the best feedback.

Requiring code reviews naturally results in small changes, as reviewers often get grumpy if you dump huge changelists in their lap (at least if reviewers are somewhat fast to respond, which they should be). This is a good thing, since small changes are less risky and are easy to roll back. Furthermore, code review is good for knowledge sharing. You can also do pair programming if your team prefers that (a pair-programmed change is considered reviewed and can be submitted when both engineers are happy). There are multiple open-source review tools out there, such as Gerrit.

Nice, clean code is great for hackability, since you don’t need to spend time to unwind that nasty pointer hack in your head before making your changes. How do you make all this happen in practice? Put together workshops on, say, the SOLID principles, unit testing, or concurrency to encourage developers to learn. Spread knowledge through code review, pair programming and mentoring (such as with the Readability concept). You can’t just mandate higher code quality; it takes a lot of work, effort and consistency.
Presubmit Testing and Lint

Consistently formatted source code aids hackability. You can scan code faster if its formatting is consistent. Automated tooling also aids hackability. It really doesn't make sense to waste any time on formatting source code by hand. You should be using tools like gofmt, clang-format, etc. If the patch isn't formatted properly, you should see something like this (example from Chrome):

$ git cl upload
Error: the media/audio directory requires formatting. Please run
git cl format media/audio.

Source formatting isn't the only thing to check. In fact, you should check pretty much anything you have as a rule in your project. Should other modules not depend on the internals of your modules? Enforce it with a check. Are there already inappropriate dependencies in your project? Whitelist the existing ones for now, but at least block new bad dependencies from forming. Should your app work on Android API level 16 phones and newer? Add linting, so you don't use level 17+ APIs without gating at runtime. Should your project's VHDL code always place-and-route cleanly on a particular brand of FPGA? Invoke the layout tool in your presubmit and stop the submit if the layout process fails.
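As a sketch of such a dependency check (the file layout, rule, and whitelist are hypothetical, not Chrome's actual tooling), a presubmit can scan the changed files and block new imports of another module's internals:

```python
# A sketch (hypothetical layout and rules) of a presubmit dependency check:
# block new imports of a module's internals from outside that module, with
# pre-existing offenders whitelisted so only *new* bad dependencies fail.
import re

INTERNAL_IMPORT = re.compile(r"from\s+(\w+)\.internal\b")
WHITELIST = {("billing.py", "payments")}   # legacy offenders, grandfathered in

def check_dependencies(changed_files):
    """Return error strings for cross-module internal imports."""
    errors = []
    for path, contents in changed_files.items():
        for match in INTERNAL_IMPORT.finditer(contents):
            module = match.group(1)
            if not path.startswith(module + "/") and (path, module) not in WHITELIST:
                errors.append(f"{path}: may not import {module}.internal")
    return errors

# A change touching two files: one legal import, one new violation.
change = {
    "payments/api.py": "from payments.internal import ledger\n",
    "reports/summary.py": "from payments.internal import ledger\n",
}
print(check_dependencies(change))  # → ['reports/summary.py: may not import payments.internal']
```

The whitelist is the key design choice: it freezes the existing debt in place while refusing to let it grow.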

Presubmit is the most valuable real estate for aiding hackability. You have limited space in your presubmit, but you can get tremendous value out of it if you put the right things there. You should stop all obvious errors here.

It aids hackability to have all this tooling, so you don't have to waste time going back and fixing things you broke for other developers. Remember you need to maintain the presubmit well; it's not hackability to have a slow, overbearing or buggy presubmit. Having a good presubmit can make it tremendously more pleasant to work on a project. We're going to talk more in later articles on how to build infrastructure for submit queues and presubmit.
Single Branch And Reducing Risk

Having a single branch for everything, and putting risky new changes behind feature flags, aids hackability since branches and forks often amass tremendous risk by the time it comes to merge them. Single branches smooth out the risk. Furthermore, running all your tests on many branches is expensive. However, a single branch can have negative effects on hackability if Team A depends on a library from Team B and gets broken by Team B a lot. Having some kind of stabilization of Team B's software might be a good idea there. This article covers such situations, and how to integrate often with your dependencies to reduce the risk that one of them will break you.
Loose Coupling and Testability

Tightly coupled code is terrible for hackability. To take the most ridiculous example I know: I once heard of a computer game where a developer changed a ballistics algorithm and broke the game's chat. That's hilarious, but hardly intuitive for the poor developer who made the change. A hallmark of loosely coupled code is that it's upfront about its dependencies and behavior and is easy to modify and move around.

Loose coupling, coherence and so on is really about design and architecture and is notoriously hard to measure. It really takes experience. One of the best ways to convey such experience is through code review, which we’ve already mentioned. Education on the SOLID principles, rules of thumb such as tell-don’t-ask, discussions about anti-patterns and code smells are all good here. Again, it’s hard to build tooling for this. You could write a presubmit check that forbids methods longer than 20 lines or cyclomatic complexity over 30, but that’s probably shooting yourself in the foot. Developers would consider that overbearing rather than a helpful assist.

SETIs at Google are expected to give input on a product’s testability. A few well-placed test hooks in your product can enable tremendously powerful testing, such as serving mock content for apps (this enables you to meaningfully test app UI without contacting your real servers, for instance). Testability can also have an influence on architecture. For instance, it’s a testability problem if your servers are built like a huge monolith that is slow to build and start, or if it can’t boot on localhost without calling external services. We’ll cover this in the next article.
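A small sketch of such a test hook (all names hypothetical): the feed screen pulls its articles through a content-source object chosen at startup, so a test build can serve canned content instead of contacting real servers.

```python
# A sketch (hypothetical names) of a product-level test hook: the feed pulls
# articles through a content source selected at startup, so a UI test can run
# against deterministic canned content instead of real servers.

class NetworkContent:
    """Production path: would call the real backend."""
    def articles(self):
        raise NotImplementedError("real backend call goes here")

class CannedContent:
    """Selected under a (hypothetical) test flag; deterministic for UI tests."""
    def articles(self):
        return ["Welcome post", "Release notes"]

def build_feed(source):
    # The feed code only knows the articles() interface, not where data comes from.
    return [f"* {title}" for title in source.articles()]

print(build_feed(CannedContent()))  # → ['* Welcome post', '* Release notes']
```

The hook costs a few lines in the product but makes the UI meaningfully testable without the monolith of real servers behind it.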
Aggressively Reduce Technical Debt

It's quite easy to add a lot of code and dependencies and call it a day when the software works. New projects can do this without many problems, but as the project becomes older it becomes a "legacy" project, weighed down by dependencies and excess code. Don't end up there. It's bad for hackability to have a slew of bug fixes stacked on top of unwise and obsolete decisions, and understanding and untangling the software becomes more difficult.

What constitutes technical debt varies by project and is something you need to learn from experience. It simply means the software isn't in optimal form. Some types of technical debt are easy to classify, such as dead code and barely-used dependencies. Some types are harder to identify, such as when the architecture of the project has grown unfit for the task because of changing requirements. We can't use tooling to help with the latter, but we can with the former.

I already mentioned that dependency enforcement can go a long way toward keeping people honest. It helps make sure people are making the appropriate trade-offs instead of just slapping on a new dependency, and it requires them to explain to a fellow engineer when they want to override a dependency rule. This can prevent unhealthy dependencies like circular dependencies, abstract modules depending on concrete modules, or modules depending on the internals of other modules.

There are various tools available for visualizing dependency graphs as well. You can use these to get a grip on your current situation and start cleaning up dependencies. If you have a huge dependency you only use a small part of, maybe you can replace it with something simpler. If an old part of your app has inappropriate dependencies and other problems, maybe it’s time to rewrite that part.

The next article will be on Pillar 2: Debuggability.
Categories: Testing & QA

Google Developers to open a startup space in San Francisco

Google Code Blog - Thu, 08/18/2016 - 19:10

Posted by Roy Glasberg Global Lead, Launchpad Accelerator

We’re heading to the city of San Francisco this September to open a new space for developers and startups. With over 14,000 sq. ft. at 301 Howard Street, we’ll have more than enough elbow room to train, educate and collaborate with local and international developers and startups.

The space will hold a range of events: Google Developer Group community meetups, Codelabs, Design Sprints, and Tech Talks. It will also host the third class of Launchpad Accelerator, our equity-free accelerator for startups in emerging markets. During each class, over 20 Google teams provide comprehensive mentoring to late-stage app startups who seek to scale and become leaders in their local markets. The 3-month program starts with an all-expenses-paid two week bootcamp at Google HQ.

Developers are in an ever-changing landscape and seek technical training. We've also seen a huge surge in the number of developers starting their own companies. Lastly, this is a unique opportunity to bridge the gap between Silicon Valley and emerging markets. To date, Launchpad Accelerator has nearly 50 alumni in India, Indonesia, Brazil and Mexico. Startups in these markets are tackling critical local problems, but they often lack access to the resources and network we have here. This dedicated space will enable us to regularly engage with developers and serve their evolving needs, whether that is to build a product, grow a company or make revenue.

We can’t wait to get started and work with developers to build successful businesses that have a positive impact locally and globally.

Categories: Programming

A Growth Job

Herding Cats - Glen Alleman - Thu, 08/18/2016 - 16:23
  • Is never permanent.
  • Makes you like yourself.
  • Is fun.
  • Is sometimes tedious, painful, frustrating, monotonous, and at the same time gives a sense of accomplishment.
  • Bases compensation on productivity.
  • Is complete: One thinks, plans, manages and is the final judge of one's work.
  • Addresses real needs in the world at large - people want what you do because they need it.
  • Involves risk-taking.
  • Has a few sensible entrance requirements.
  • Ends automatically when a task is completed.
  • Encourages self-competitive excellence.
  • Causes anxiety because you don't necessarily know what you're doing.
  • Is one where you manage your time, money and people, and where you are accountable for specific results, which are evaluated by people you serve.
  • Never involves saying Thank God It's Friday.
  • Is where the overall objectives of the organizations are supported by your work.
  • Is where good judgment is one, maybe the only, job qualification.
  • Gives every jobholder the chance to influence, sustain or change organizational objectives.
  • Is when you can quit or be fired at any time.
  • Encourages reciprocity and parity between the boss and the bossed.
  • Is when we work from a sense of mission and desire, not obligation and duty.

From If Things Don't Improve Soon I May Ask You to Fire Me - Richard K. Irish

Categories: Project Management

The Problems with Schedules #Redux #Redux

Herding Cats - Glen Alleman - Wed, 08/17/2016 - 17:35

Here's an article, recently referenced in a #NoEstimates Twitter post. The headline is deceiving: the article DOES NOT suggest we don't need deadlines, but that deadlines set without a credible assessment of their feasibility are the source of many problems on large programs...


The Core Problem with Project Success

There are many core Root Causes of program problems. Here are four from research at PARCA:


  • Unrealistic performance expectations missing Measures of Effectiveness and Measures of Performance.
  • Unrealistic Cost and Schedule estimates based on inadequate risk adjusted growth models.
  • Inadequate assessment of risk and unmitigated exposure to these risks without proper handling plans.
  • Unanticipated Technical issues with no alternative plans and solutions to maintain effectiveness.

Before diving into the details of these, let me address another issue that has come up around project success and estimates. There is a common chart used to show poor performance of projects that compares Ideal project performance with the Actual project performance. Here's the notional replica of that chart.

[Notional chart: Ideal vs. Actual project performance]

This chart shows several things

  • The notion of Ideal is just that - notional. All that line says is that this was the baseline Estimate at Completion for the project work. It says nothing about the credibility of that estimate, or the possibility that one or all of the Root Causes above are in play.
  • Then the chart shows that many projects cost more or take longer (costing more) in the sample population of projects.
  • The term Ideal is a misnomer. There is no ideal in the estimating business. Just the estimate.
    • The estimate has two primary attributes - accuracy and precision.
  • The charts (even the notional ones) usually don't say what the accuracy or precision is of the values that make up the line.

So let's look at the estimating process and the actual project performance 

  • There is no such thing as the ideal cost estimate. Estimates are probabilistic. They have a probability distribution function (PDF) around the Mode of the possible values from the estimate. This Mode is the Most Likely value of the estimate. If the PDF is symmetric (as shown above) the upper and lower limits are usually set at some 20/80 bounds. This is typical in our domain. Other domains may vary.
  • This says: here's our estimate with an 80% confidence.
  • So now if the actual cost or schedule, or some technical parameter, falls inside the acceptable range (the confidence interval) it's considered GREEN. This range of variances addresses the uncertainty in both the estimate and the project performance.
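A small simulation makes the 20/80 idea concrete. This sketch (with invented task numbers) samples each task's cost from a triangular distribution, sums the runs, and reads off the 20% and 80% bounds; an actual that falls inside that interval would be GREEN:

```python
# Monte Carlo sketch of a probabilistic cost estimate. Each task has
# (low, most-likely, high) costs; sampling and summing yields a
# distribution for the whole project, not a single point value.

import random

random.seed(1)
TASKS = [(8, 10, 15), (4, 5, 9), (12, 14, 20)]  # (low, mode, high), invented

def one_run():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in TASKS)

runs = sorted(one_run() for _ in range(10000))
p20 = runs[int(0.20 * len(runs))]
p80 = runs[int(0.80 * len(runs))]
print(f"estimate range (20/80 bounds): {p20:.1f} .. {p80:.1f}")

def status(actual):
    """GREEN when the actual falls inside the confidence interval."""
    return "GREEN" if p20 <= actual <= p80 else "INVESTIGATE"
```

An actual outside the interval isn't automatically a process failure; it's the trigger to look for which root cause produced the variance.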

But here are three problems. First, there is no cause stated for that variance. Second, the ideal line can never be ideal. The straight line is the estimate of the cost (and schedule), and that estimate is probabilistic, so the line HAS to have a probability distribution around it - the confidence interval on the range of the estimate. The resulting actual cost or schedule may well be within the acceptable range of the estimate. Third, are the estimates being updated when work is performed or new work is discovered, and are those updates the result of changing scope? You can't claim you made your estimate if the scope is changing. This is core Performance Measurement Baseline stuff we use every week where we work.

As well, since the ideal line has no probabilistic attributes in the original paper(s), here's how we think about cost, schedule, and technical performance modeling in the presence of the probabilistic and statistical processes of all project work. †

So let's be clear. NO point estimate can be credible. The Ideal line is a point estimate. It's bogus on day one and continues to mislead as more data is captured from projects claimed not to match the original estimate. Without the underlying uncertainties (aleatory and epistemic) in the estimating model, the ideal estimates are worthless. So when the actual numbers come in and don't match the ideal estimate, there is NO way to know why.

Was the estimate wrong (and all point estimates are wrong), or was one or all of Mr. Bliss's root causes the cause of the actual variance?

So another issue with the Ideal Line is that there are no confidence intervals around the line. What if the actual cost came inside the acceptable range of the ideal cost? Then would the project be considered on cost and on schedule? Add to that the coupling between cost, schedule, and the technical performance as shown above.

The use of the Ideal is Notional. That's fine if your project is Notional.

What's the reason a project, or a collection of projects, doesn't match the baselined estimate? That estimate MUST have an accuracy and precision number before being useful to anyone.

  • Essentially that straight line is likely an unquantified point estimate. And ALL point estimates are WRONG, BOGUS, WORTHLESS. (Yes, I am shouting on the internet.)
  • Don't ever make decisions in the presence of uncertainty with point estimates.
  • Don't ever do analysis of cost and schedule variances without first understanding the accuracy and precision of the original estimate.
  • Don't ever make suggestions to change the processes without first finding the root cause of why the actual performance has a variance from the planned performance.

 So what's the summary so far:

  • All project work is probabilistic, driven by the underlying uncertainty of many processes. These processes are coupled - they have to be for any non-trivial project. What are the coupling factors? The non-linear couplings? If you don't know these, there is no way to suggest much of anything about the time phased cost and schedule.
  • Knowing the reducible and irreducible uncertainties of the project is the minimal critical success factor for project success.
  • Don't know these? You've doomed the project on day one.

So in the end, any estimate we make at the beginning of the project MUST be updated as the project proceeds. With this past performance data we can make improved estimates of future performance as shown below. By the way, when the #NoEstimates advocates suggest using past data (empirical data) but don't apply the statistical assessment of that data to produce a confidence interval for the future estimate (a forecast is an estimate of a future outcome), they have only done half the work needed to inform those paying of the likelihood of the future cost, schedule, or technical performance.

[Chart: estimates updated with past performance data]
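To illustrate the "other half of the work" - turning empirical data into a confidence interval rather than a single number - here is a sketch with invented throughput figures that bootstraps past weekly throughput into a forecast range:

```python
# Bootstrap forecast from empirical throughput data: resample past weeks
# many times to simulate how long the remaining backlog might take, then
# read a confidence interval off the simulated outcomes.

import random

random.seed(2)
weekly_throughput = [7, 9, 6, 8, 10, 7, 5, 9]  # stories finished per week (invented)
backlog = 60                                    # stories remaining (invented)

def weeks_to_finish(history):
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(history)  # resample a past week (bootstrap)
        weeks += 1
    return weeks

sims = sorted(weeks_to_finish(weekly_throughput) for _ in range(5000))
p20 = sims[int(0.20 * len(sims))]
p80 = sims[int(0.80 * len(sims))]
print(f"forecast: {p20} to {p80} weeks (20/80 bounds)")
```

Reporting the interval, not a single week number, is what lets those paying judge the likelihood of the outcome.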

So Now To The Corrective Actions of The Causes of Project Variance

If we take the 4 root causes in the first chart - courtesy of Mr. Gary Bliss, Director of Performance Assessment and Root Cause Analysis (PARCA) - let's look at a first approach to fixing each of them.

Unrealistic performance expectations missing Measures of Effectiveness and Measures of Performance

  • Defining the Measures of Performance, the resulting Measures of Effectiveness, and the Technical Performance Measures of the resulting project outcomes is a critical success factor.
  • Along with the Key Performance Parameters, these measures define what DONE looks like in units of measure meaningful to the decision makers.
  • Without these measures, those decision makers and those building the products that implement the solution have no way to know what DONE looks like.

Unrealistic Cost and Schedule estimates based on inadequate risk adjusted growth models

  • Here's where estimating comes in. All project work is subject to uncertainty: Reducible (Epistemic) uncertainty and Irreducible (Aleatory) uncertainty.
  • Here's how to Manage in the Presence of Uncertainty.
  • Both these cause risk to cost, schedule, and technical outcomes.
  • Determining the range of possible values for aleatory and epistemic uncertainties means making estimates from past performance data or parametric models.

Inadequate assessment of risk and unmitigated exposure to these risks without proper handling plans

  • This type of risk is held in the Risk Register.
  • This means making estimates of the probability of occurrence, the probability of impact, the probability of the cost to mitigate, the probability of any residual risk, and the probability of the impact of this residual risk.
  • Risk management means making estimates.
  • Risk management is how adults manage projects. No risk management, no adult management. No estimating, no adult management.
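The estimates behind a single risk register entry can be sketched in a few lines (all figures invented): exposure is probability times impact, and a mitigation is worth buying when it reduces exposure by more than it costs:

```python
# Toy risk register arithmetic: pre-mitigation exposure, residual
# exposure after mitigation, and whether the mitigation pays for itself.

def exposure(probability, impact):
    return probability * impact

risk = {
    "name": "vendor API slips",       # hypothetical risk
    "p_occur": 0.4, "impact": 100_000,        # pre-mitigation estimates
    "mitigation_cost": 15_000,
    "p_residual": 0.1, "residual_impact": 40_000,  # post-mitigation estimates
}

before = exposure(risk["p_occur"], risk["impact"])
after = exposure(risk["p_residual"], risk["residual_impact"])
worth_it = (before - after) > risk["mitigation_cost"]
print(before, after, worth_it)
```

Every number in that dict is an estimate, which is the point: you cannot run a risk register without estimating.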

Unanticipated Technical issues with no alternative plans and solutions to maintain effectiveness

  • Things go wrong; it's called development.
  • When things go wrong, where's Plan B? Maybe even Plan C.

When we hear that we can't estimate, that planning is hard or maybe not even needed, or that we can't forecast the future, let's ask some serious questions.

  • Do you know what DONE looks like in meaningful units of measure?
  • Do you have a plan to get to Done when the customer needs you to, for the cost the customer can afford?
  • Do you have the needed resources to reach Done for the planned cost and schedule?
  • Do you know something about the risk to reaching Done and do you have plans to mitigate those risks in some way?
  • Do you have some way to measure physical percent complete toward Done, again in units meaningful to the decision makers, so you can get feedback (variance) from your work to take corrective actions to keep the project going in the right direction?

The answers should be YES to these Five Immutable Principles of Project Success

If not, you're late, over budget, and have a low probability of success on Day One.

†NRO Cost Group Risk Process, Aerospace Corporation, 2003

Categories: Project Management

Software Development Linkopedia August 2016

From the Editor of Methods & Tools - Wed, 08/17/2016 - 15:25
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about team management, the (new) software crisis, saying no, software testing, user experience, data modeling, Scrum retrospectives, java microservices, Selenium tests and product backlog refinement. Blog: The Principles of Quantum Team […]

Two Teams or Not: First Do No Harm (Part 2)

A pile of empty pizza boxes!

WIP limits are needed to stop waiting in queues.

Recently a long-time reader and listener came to me with a question about a team with two sub-teams that were not participating well together. In a previous entry we began describing how kanban or Scrumban could be leveraged to help teams identify issues with how they work and then to fix them.  We conclude with the last two steps in a simple approach to leveraging kanban or Scrumban:

  1. Establish beginning WIP limits for each task. Work in Process (WIP) limits indicate how many items any specific task should control at a time (being worked on or waiting in queue). An easy approach to determining an initial WIP limit for a task is to count the number of people whose primary responsibility is to perform that task (Joe is primarily a coder - count 1 coder) under the assumption that a person can only do one thing at a time (a good assumption), and then use the count of people as the WIP limit. Roles that are spread across multiple people are a tad messier, but start by summing the fraction of time each person who does the role typically spends in that function (round to the nearest whole person for the WIP limit). The initial WIP limit is merely a starting point and should be tuned as constraints and bottlenecks are observed (see the next step).

As the team is determining the WIP limits, think about whether there are tasks that only one person can perform that are necessary for a story to get to production. These steps are potential bottlenecks or constraints.  When developing the WIP limits identify alternates that can perform tasks (remember T-shaped people!).  If members of a silo can participate only in their own silo it will be difficult for them to help fellow team members outside their silo, which can be harmful to team morale.  This type of issue suggests a need for cross training (or pair-programming or mob programming) to begin knowledge transfer.  
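The counting rule above can be sketched directly (staffing figures invented): sum each person's fraction of time on a task and round to the nearest whole person, never going below one.

```python
# Initial WIP limits from role fractions: sum the time fractions per
# task, round to the nearest whole person, floor at 1.

staffing = {
    "code":   [1.0, 1.0, 0.5],   # e.g. Joe and Ana full-time, Lee half-time
    "test":   [1.0, 0.5],
    "review": [0.25, 0.25],
}

def initial_wip_limit(fractions):
    return max(1, int(sum(fractions) + 0.5))  # round half up, never below 1

limits = {task: initial_wip_limit(f) for task, f in staffing.items()}
print(limits)  # {'code': 3, 'test': 2, 'review': 1}
```

These are only starting values; the whole point of the next step is to tune them when the board shows queues forming.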

  2. Pull stories from the backlog and get to work! Pull the highest priority stories into the first task or tasks (if you have multiple independent workflows you will have multiple entry points into the flow). When a story is complete it should be pulled into the next task, if that task has not reached its WIP limit. If a story can't be pulled into the next step, it will have to wait. When stories have to wait, there is a bottleneck and a learning opportunity for the team.

As soon as stories begin to queue up waiting to get to the next step in the flow, hold an ad-hoc retrospective.  Ask the team to determine why there is a bottleneck. One problem might be that the WIP limit of the previous task is too high.  Ask them how to solve the problem.  If they need help getting started ask if the queue of stories is due to a temporary problem (for example, Joe is out due to the flu) and then ask if there is more capacity to tide things over.  If the reason is not temporary (for example, only a single person can do a specific task, or stories are too large and tend to get stuck) ask the team to identify a solution that can be implemented and tested.  The goal is to have the team identify the solution rather than have the solution imposed on them from someone else (think buy-in).
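The "stories are queuing up" signal can be read straight off the board. As a sketch (board state invented), a task with a non-empty queue while it sits at its WIP limit is the bottleneck to raise in the ad-hoc retrospective:

```python
# Spotting bottlenecks on a kanban board: a task is flagged when stories
# are waiting in its queue and it is already at (or over) its WIP limit.

board = {  # task -> (stories in progress, stories waiting in queue)
    "code":   (3, 0),
    "test":   (2, 4),   # four stories queued in front of test
    "deploy": (1, 0),
}
WIP_LIMITS = {"code": 3, "test": 2, "deploy": 1}

def bottlenecks(board, limits):
    return [task for task, (wip, queued) in board.items()
            if queued > 0 and wip >= limits[task]]

print(bottlenecks(board, WIP_LIMITS))  # ['test']
```

What the code can't decide is the why - temporary (Joe has the flu) or structural (only one person can test) - which is exactly the question to hand to the team.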

Using kanban or Scrumban to identify and generate solutions to how teams work facilitates the development of good teams. Good Agile teams exhibit three attributes:

  • Bounded - Team members belong to the team and the relationships that they develop will be "sticky."
  • Cross-functional - Cross-functional teams spend less time negotiating hand-offs and tracking down who can or should do any piece of work, thereby reducing the potential for bottlenecks.
  • Self-organized and self-managed - Self-organized and self-managed teams don't need to wait for permission to make the decisions needed to remove bottlenecks or process constraints.

Overlaying kanban or Scrumban on top of the team's current process does not change anything... to start with. But it does position the team to take action when they SEE a problem. Visualization of how work is flowing will show the team where bottlenecks occur. The scrum master or coach then needs to challenge the team to eliminate those bottlenecks, promoting the health of the team in the process.

 


Categories: Process Management

A Google Santa Tracker update from Santa's Elves

Google Code Blog - Wed, 08/17/2016 - 00:10

Sam Thorogood, Developer Programs Engineer

Today, we're announcing that the open source version of Google's Santa Tracker has been updated with the Android and web experiences that ran in December 2015. We extended, enhanced and upgraded our code, and you can see how we used our developer products - including Firebase and Polymer - to build a fun, educational and engaging experience.

To get started, you can check out the code on GitHub at google/santa-tracker-web and google/santa-tracker-android. Both repositories include instructions so you can build your own version.

Santa Tracker isn’t just about watching Santa’s progress as he delivers presents on December 24. Visitors can also have fun with the winter-inspired experiences, games and educational content by exploring Santa's Village while Santa prepares for his big journey throughout the holidays.

Below is a summary of what we’ve released as open source.

Android app
  • The Santa Tracker Android app is a single APK, supporting all devices, such as phones, tablets and TVs, running Ice Cream Sandwich (4.0) and up. The source code for the app can be found here.
  • Santa Tracker leverages Firebase features, including Remote Config API, App Invites to invite your friends to play along, and Firebase Analytics to help our elves better understand users of the app.
  • Santa‚Äôs Village is a launcher for videos, games and the tracker that responds well to multiple devices such as phones and tablets. There's even an alternative launcher based on the Leanback user interface for Android TVs.

  • Games on Santa Tracker Android are built using many technologies such as JBox2D (gumball game), the Android view hierarchy (memory match game) and OpenGL with a special rendering engine (jetpack game). We've also included a holiday-themed variation of Pie Noon, a fun game that works on Android TV, your phone, and inside Google Cardboard's VR.
Android Wear

  • The custom watch faces on Android Wear provide a personalized touch. Having Santa or one of his friendly elves tell the time brings a smile to all. Building custom watch faces is a lot of fun but providing a performant, battery friendly watch face requires certain considerations. The watch face source code can be found here.
  • Santa Tracker uses notifications to let users know when Santa has started his journey. The notifications are further enhanced to provide a great experience on wearables using custom backgrounds and actions that deep link into the app.
On the web

  • Santa Tracker is mobile-first: this year's experience was built for the mobile web, including an amazing brand new, interactive - yet fully responsive, village: with three breakpoints, touch gesture support and support for the Web App Manifest.
  • To help us develop Santa at scale, we've upgraded to Polymer 1.0+. Santa Tracker's use of Polymer demonstrates how easy it is to package code into reusable components. Every house in Santa's Village is a custom element, only loaded when needed, minimizing the startup cost of Santa Tracker.

  • Many of the amazing new games (like Present Bounce) were built with the latest JavaScript standards (ES6) and are compiled to support older browsers via the Google Closure Compiler.
  • Santa Tracker's interactive and fun experience is enhanced using the Web Animations API, a standardized JavaScript API for unifying animated content.
  • We simplified the Chromecast support this year, focusing on a great screensaver that would countdown to the big event on December 24th - and occasionally autoplay some of the great video content from around Santa's Village.

We hope that this update inspires you to make your own magical experiences based on all the interesting and exciting components that came together to make Santa Tracker!

Categories: Programming

SE-Radio Episode 266: Charles Nutter on the JVM as a Language Platform

Charles Nutter talks to Charles Anderson about the JRuby language and the JVM as a platform for implementing programming languages. They discuss JRuby and its implementation on the JVM as an example of a language other than Java on the JVM. Venue: Skype Related Links Charles Nutter on Twitter: https://twitter.com/headius Charles Nutter on GitHub: https://github.com/headius JRuby […]
Categories: Programming

Range of Domains in Software Development

Herding Cats - Glen Alleman - Tue, 08/16/2016 - 17:45

Once again I've encountered a conversation about estimating where there was a broad disconnect between the world I work in - Software Intensive System of Systems - and our approach to Agile software development, and someone claiming things that would be unheard of here.

Here's a briefing I built to sort out where on the spectrum you are before proceeding further, since what works in your domain may actually be forbidden in mine.

So when someone starts stating what can or can't be done, what can or can't be known, what can or can't be a process - ask what domain do you work in?

Paradigm of agile project management from Glen Alleman
Categories: Project Management

Sponsored Post: Zohocorp, Exoscale, Host Color, Cassandra Summit, Scalyr, Gusto, LaunchDarkly, Aerospike, VividCortex, MemSQL, AiScaler, InMemory.Net

Who's Hiring?
  • IT Security Engineering. At Gusto we are on a mission to create a world where work empowers a better life. As Gusto's IT Security Engineer you'll shape the future of IT security and compliance. We're looking for a strong IT technical lead to manage security audits and write and implement controls. You'll also focus on our employee, network, and endpoint posture. As Gusto's first IT Security Engineer, you will be able to build the security organization with direct impact to protecting PII and ePHI. Read more and apply here.

Fun and Informative Events
  • Join database experts from companies like Apple, ING, Instagram, Netflix, and many more to hear about how Apache Cassandra changes how they build, deploy, and scale at Cassandra Summit 2016. This September in San Jose, California is your chance to network, get certified, and trained on the leading NoSQL, distributed database with an exclusive 20% off with  promo code - Academy20. Learn more at CassandraSummit.org

  • NoSQL Databases & Docker Containers: From Development to Deployment. What is Docker and why is it important to Developers, Admins and DevOps when they are using a NoSQL database? Find out in this on-demand webinar by Alvin Richards, VP of Product at Aerospike, the enterprise-grade NoSQL database. The video includes a demo showcasing the core Docker components (Machine, Engine, Swarm and Compose) and integration with Aerospike. See how much simpler Docker can make building and deploying multi-node, Aerospike-based applications!  
Cool Products and Services
  • Do you want a simpler public cloud provider but you still want to put real workloads into production? Exoscale gives you VMs with proper firewalling, DNS, S3-compatible storage, plus a simple UI and straightforward API. With datacenters in Switzerland, you also benefit from strict Swiss privacy laws. From just €5/$6 per month, try us free now.

  • High Availability Cloud Servers in Europe: High Availability (HA) is very important on the Cloud. It ensures business continuity and reduces application downtime. High Availability is a standard service on the European Cloud infrastructure of Host Color, active by default for all cloud servers, at no additional cost. It provides uniform, cost-effective failover protection against any outage caused by a hardware or an Operating System (OS) failure. The company uses VMware Cloud computing technology to create Public, Private & Hybrid Cloud servers. See Cloud service at Host Color Europe.

  • Dev teams are using LaunchDarkly’s Feature Flags as a Service to get unprecedented control over feature launches. LaunchDarkly allows you to cleanly separate code deployment from rollout. We make it super easy to enable functionality for whoever you want, whenever you want. See how it works.

  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services - all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • VividCortex measures your database servers’ work (queries), not just global counters. If you’re not monitoring query performance at a deep level, you’re missing opportunities to boost availability, turbocharge performance, ship better code faster, and ultimately delight more customers. VividCortex is a next-generation SaaS platform that helps you find and eliminate database performance problems at scale.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

 

If any of these items interest you there's a full description of each sponsor below...

Categories: Architecture

The Legend of the 5 Monkeys, the Doctor and the Rose

Xebia Blog - Mon, 08/15/2016 - 17:16
As Product Managers people look up to us to carry the vision, to make sure all the noses are aligned, the troops are rallied and that sort of stuff. But what is it that influences behavior? And what makes your team do what they do? The answer has more to do with you than with

How PayPal Scaled to Billions of Transactions Daily Using Just 8VMs

How did PayPal take a billion-hits-a-day system that might traditionally run on 100s of VMs and shrink it down to run on 8 VMs, stay responsive even at 90% CPU, at transaction densities PayPal has never seen before, with jobs that take 1/10th the time, while reducing costs and allowing for much better organizational growth without growing the compute infrastructure accordingly?

PayPal moved to an Actor model based on Akka. PayPal told their story here: squbs: A New, Reactive Way for PayPal to Build Applications. They open-sourced squbs and you can find it here: squbs on GitHub.

The stateful service model still doesn't get enough consideration when projects are choosing a way of doing things. To learn more about stateful services there's an article, Making The Case For Building Scalable Stateful Services In The Modern Era, based on a great talk given by Caitie McCaffrey. And if that doesn't convince you, here's WhatsApp, who used Erlang, an Akka competitor, to achieve incredible throughput: The WhatsApp Architecture Facebook Bought For $19 Billion.
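squbs itself is Scala/Akka; purely to illustrate the actor model these systems lean on, here is a toy actor sketch in Python: one mailbox, one thread, and state that only the actor's own thread touches, so no locks are needed.

```python
# Minimal actor: messages go into a mailbox (queue) and a single thread
# processes them one at a time, so the actor's state never needs a lock.

import queue
import threading

class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0  # private state: only the actor thread mutates it
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg != "stop":
                self.count += msg  # handled strictly one at a time, in order
            self.mailbox.task_done()
            if msg == "stop":
                break

    def tell(self, msg):
        self.mailbox.put(msg)  # asynchronous, non-blocking send

actor = CounterActor()
for _ in range(1000):
    actor.tell(1)
actor.tell("stop")
actor.mailbox.join()  # wait until the mailbox is drained
print(actor.count)    # 1000
```

Akka provides this same mailbox-per-actor discipline industrially, adding supervision, routing, and location transparency on top.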

I refer to the above articles because the PayPal article is short on architectural details. It's more about the factors that led to the selection of Akka and the benefits they've achieved by moving to it. But it's a very valuable motivating example for doing something different from the status quo.

What's wrong with services on lots of VMs approach?

Categories: Architecture