Software Development Blogs: Programming, Software Testing, Agile Project Management


Software Development Conferences Forecast August 2015

From the Editor of Methods & Tools - 15 hours 58 min ago
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & […]

Estimates and Commitment

Herding Cats - Glen Alleman - Thu, 08/27/2015 - 23:11

In our domain, Jon Katzenbach's definition of a team informs how we interact with our project members. A team is defined as ...

A group of qualified individuals who hold each other mutually accountable for a shared outcome - Katzenbach, The Wisdom of Teams

It has been suggested that ...

The Estimate-Commitment relationship stands in opposition to collaboration. It works against collaboration. It supports conflict, not teamwork.

This position is counter to our Katzenbach-based teaming processes. The conjecture that estimates work against collaboration, rather than for collaboration, removes the mutual accountability condition for team success.

This is like speaking with our builder about the bedroom remodel project and having him say...

Oh here's my estimate to complete your bedroom remodel, but I have no intention of meeting that estimate. 

Where we work, estimates provide clarity and understanding of the mutual accountability for the shared outcome among the group of qualified individuals.

Where we work, and apply Agile software development processes, we've adopted Seven Pillars of Program Success. We work hard, every day, to: †

  1. Have a well understood set of capabilities needed to define "Done" in units of measure meaningful to the decision makers. These are usually stated in terms of effectiveness and performance. 
  2. Have a genuine integrated plan associated with measuring physical percent complete in terms of Quantifiable Backup Data (QBD) to inform our Estimate To Complete (ETC) and Estimate at Completion (EAC). This QBD is a perfect fit with Agile's working software, predefined with the needed capabilities. This is why, in our domain, Earned Value Management + Agile is a match made in heaven.
  3. Produce an independent estimate of the cost, schedule, and probability that the needed capabilities will perform as planned. These estimates are truly independent and not part of a missionary movement where people are trying to sell the program or force it to fit within the available funds.
  4. Provide sufficient and stable funding.
  5. Establish a culture of asking for and listening to outside competencies.
  6. Assure a willingness to ask hard questions and the courage and energy to not quit until there are credible answers to the questions.
  7. Recognize that it takes requirements (the implementation of the capabilities), resources, business processes, and everyone working together to increase the probability of success.

Your domain of course will be different. You or your team may not work on projects that must succeed on or before the needed date, at or below the needed budget, with the needed capabilities. That is, you can show up late, over budget, and with missing capabilities, and the customer will consider that OK. And just to be clear, the notion of the value of incremental delivery is defined by the receiver of those capabilities, not the producer. Ask the customer if the partial outcomes can actually be put to productive use in the business environment. Capabilities Based Planning defines which capabilities are needed in what order to provide business value.

We show up late, over budget, and with missing capabilities many times of course - so no need to point that out. Any number of reports, including bogus ones, show this, without corrective actions attached. But a critical understanding is that we know we're going to be late, we know we're going to be over budget, and most of the time we know the delivered capabilities will not meet the intended specifications, every reporting period, and we have a plan (maybe not the right plan) to fix it.

Risk Management is How Adults Manage Projects - Tim Lister

In our domain, being late, over budget, and delivering less than the required capabilities is never acceptable to the customer. Are we late, over budget, and do we have performance issues? Of course. It's called development. But we know it, have visibility into the root causes, and have corrective action plans. This visibility is part of the process. Without a steering target and actuals, no error signal can be generated to be used for course correction. One of our PMs was a Navy navigator on an aircraft carrier. The commanded heading was required for him to carry out his navigation processes. Without estimates of the impediments to be encountered along the course to the desired destination, and of the productivity of the effort to make progress along that course, there is no way to know which path to take to the destination. By the way, measuring past performance and projecting it as future performance only works if the future conditions are like the past conditions. This is rarely the case on any sufficiently complex project.

Yogi Berra reminds us — If you don't know where you are going, you'll end up someplace else. 

This poor performance is actually reported in a database for review every reporting period (minimally monthly) and used to adjust award fees and the assessment for the next job, which significantly impacts the selection process. This is called Closed Loop Control.

When there are no Estimates to Complete (ETC) or Estimates at Completion (EAC), there is an Open Loop Control condition, and the corrective actions needed (but not always effective) have no steering target, and no variance to steer against, to move the project back to GREEN.
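
For readers new to these acronyms, here is a minimal sketch of the standard Earned Value arithmetic behind the ETC and EAC steering targets. The dollar values are hypothetical; a real program takes BAC, EV, and AC from its performance measurement baseline.

```python
# Standard Earned Value formulas behind the ETC/EAC steering targets.
# All values are hypothetical illustrations.

BAC = 1_000_000.0   # Budget At Completion
EV = 400_000.0      # Earned Value: budgeted cost of work performed
AC = 500_000.0      # Actual Cost of work performed

CPI = EV / AC              # Cost Performance Index; < 1.0 means over cost
ETC = (BAC - EV) / CPI     # Estimate To Complete the remaining work
EAC = AC + ETC             # Estimate At Completion
VAC = BAC - EAC            # Variance At Completion: the error signal

print(f"CPI={CPI:.2f}  ETC=${ETC:,.0f}  EAC=${EAC:,.0f}  VAC=${VAC:,.0f}")
```

The VAC is the variance that closes the loop: without an EV-based estimate there is nothing to steer against.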

So estimates don't stand in the way of cooperation; they are the foundation of mutual accountability for the shared outcome based on cooperation.

† These seven pillars are derived from VADM Joseph Wendell Dyer, USN (Retired) - the Navy's chief test pilot, F/A-18E/F Program Manager, and Commander, Naval Air Systems Command, plus ten years as an executive at iRobot Corporation. Many of our projects are not VADM Dyer's, but they are still mission critical, manifestly important to our customers' business success. If they were to fail - cost too much, show up beyond the business need date, or not provide the needed capabilities - the success of the business is in jeopardy. Again, your domain may be significantly different. Use as appropriate.


Software Development Linkopedia August 2015

From the Editor of Methods & Tools - Thu, 08/27/2015 - 14:44
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about commitment & estimation, better software testing, the dark side of metrics, scrum retrospectives, databases and Agile adoption. Blog: Against Estimate-Commitment Blog: Write Better Tests in 5 Steps Blog: Story Point […]

The Product Roadmap is Not the Project Portfolio

I keep seeing talks and arguments about how the portfolio team should manage the epics for a program. That conflates the issue of project portfolio management and product management.

[Image: Teams and value]

Several potential teams affect each project (or program).

Starting at the right side of this image, the project portfolio team decides which projects to do and when for the organization.

The product owner value team decides which features/feature sets to do when for a given product. That team may well split feature sets into releases, which provides the project portfolio team opportunities to change the project the cross-functional team works on.

The product development team (the agile/lean cross-functional team) decides how to design, implement, and test the current backlog of work.

When the portfolio team gets in the middle of the product roadmap planning, the product manager does not have the flexibility to manage the product backlog or the capabilities of the product over time.

When the product owner value team gets in the middle (or doesn’t plan enough releases), they prevent the project portfolio team from being able to change their minds over time.

When the product development team doesn’t release working product often, they prevent the product owner team from managing the product value. In addition, the product development team prevents the project portfolio team from implementing the organizational strategy when they don’t release often.

All of these teams have dependencies on each other.

The project portfolio team optimizes the organization’s output.

The product owner value team optimizes the product’s output.

The product development team determines how to optimize for features moving across the board. When the features are complete, the product owner team can replan for this product and the project portfolio team can replan for the organization. Everyone wins.

That’s why the product owner team is not the project portfolio team. (In small organizations, it’s possible people have multiple roles. If so, which hat are they wearing to make this decision?)

The product roadmap is not the project portfolio. Yes, you may well use the same ranking approaches. The product roadmap optimizes for this product. The project portfolio team optimizes for the overall organization. They fulfill different needs. Please do not confuse the two decisions.


An Iterative Waterfall Isn’t Agile

Mike Cohn's Blog - Tue, 08/25/2015 - 16:43

I’ve noticed something disturbing over the past two years. And it’s occurred uniformly with teams I’ve worked with all across the world. It’s the tendency to create an iterative waterfall process and then to call it agile.

An iterative waterfall looks something like this: In one sprint, someone (perhaps a business analyst working with a product owner) figures out what is to be built. 

Because they’re trying to be agile, they do this with user stories. But rather than treating the user stories as short placeholders for future conversations, each user story becomes a mini-specification document, perhaps three to five pages long. And I’ve seen them longer than that. 

These mini-specs/user stories document nearly everything conceivable about a given user story.

Because this takes a full sprint to figure out and document, a second sprint is devoted to designing the user interface for the user story. Sometimes, the team tries to be a little more agile (in their minds) by starting the design work just a little before the mini-spec for a user story is fully written. 

Many on the team will consider this dangerous because the spec isn’t fully figured out yet. But, what the heck, they’ll reason, this is where the agility comes in.

Programmers are then handed a pair of documents. One shows exactly what the user story should look like when implemented, and the other provides all details about the story’s behavior. 

No programming can start until these two artifacts are ready. In some companies, it’s the programmers who force this way of working. They take an attitude of saying they will build whatever is asked for, but you better tell them exactly what is needed at the start of the sprint.

Some organizations then stretch things out even further by having the testers work an iteration behind the programmers. This seems to happen because a team’s user stories get larger when each user story needs to include a mini-spec and a full UI design before it can be coded.

Fortunately, most teams realize that programmers and testers need to work together in the same iteration, but they don't extend that to the whole team working together. This leads to the process shown in this figure.

This figure shows a first iteration devoted to analysis. A second iteration (possibly slightly overlapping with the first) is devoted to user experience design. And then a third iteration is devoted to coding and testing.
This is not agile. It might be your organization’s first step toward becoming agile. But it’s not agile.

What we see in this figure is an iterative waterfall.

In traditional, full waterfall development, a team does all of the analysis for the entire project first. Then they do all the design for the entire project. Then they do all the coding for the entire project. Then they do all the testing for the entire project.

In the iterative waterfall of the figure above, the team is doing the same thing but they are treating each story as a miniature project. They do all the analysis for one story, then all the design for one story, then all the coding and testing for one story. This is an iterative waterfall process, not an agile process.

Ideally, in an agile process, all types of work would finish at exactly the same time. The team would finish analyzing the problem at exactly the same time they finished designing the solution to the problem, which would also be the same time they finished coding and testing that solution. All four of those disciplines (and any others I’m not using in this example) would all finish at exactly the same time.

It’s a little naïve to assume a team can always perfectly achieve that. (It can be achieved sometimes.) But it can remain the goal a team works toward.

A team should always work to overlap work as much as possible. And upfront thinking (analysis, design and other types of work) should be done as late as possible and in as little detail as possible while still allowing the work to be completed within the iteration.

If you are treating your user stories as miniature specification documents, stop. Start instead thinking about each as a promise to have a conversation. 

Feel free to add notes to some stories about things you want to make sure you bring up during that conversation. But adding these notes should be an optional step, not a mandatory step in a sequential process. 

Leaving them optional avoids turning the process into an iterative waterfall process and keeps your process agile.

Decision Making Means Making Inferences

Herding Cats - Glen Alleman - Mon, 08/24/2015 - 17:03

In software development, we almost always encounter situations where a decision must be made when we are uncertain what the outcome might be, or even uncertain about the data used to make that decision.

Decision making in the presence of uncertainty is standard management practice in all business and technical domains, from business investment decisions to technical choices for project work.

Making decisions in the presence of uncertainty means making probabilistic inferences from the information available to the decision maker.

There are many techniques for decision making. Decision trees are common: the probability of an outcome of a decision is attached to a branch of a tree. If I go left at the branch - the decision - what happens? If I go right, what happens? Each branch point is a decision; each of the two or more branches is an outcome. Probabilities are applied to the branches, and the outcomes - which may be probabilistic as well - are assessed for their benefit to those making the decision.
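
As a minimal sketch of that manual process - with made-up probabilities and payoffs - rolling a two-branch decision up to expected values looks like this:

```python
# Minimal decision-tree sketch: each decision branch carries probabilistic
# outcomes, and the tree rolls up to an expected benefit per choice.
# Probabilities and payoffs are hypothetical.

branches = {
    "go_left":  [(0.6, 120_000), (0.4, -30_000)],   # (probability, payoff $)
    "go_right": [(0.3, 300_000), (0.7, -50_000)],
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

for decision, outcomes in branches.items():
    print(f"{decision}: expected value ${expected_value(outcomes):,.0f}")
# go_left: $60,000; go_right: $55,000 - left wins despite the smaller upside
```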

[Image: decision tree]
Another approach is Monte Carlo simulation of decision trees. Here are tools we use for many decisions in our domain: Palisade, Crystal Ball. There are others. They work like the manual process in the first picture, but let you tune the probabilistic branching and probabilistic outcomes to model complex decision-making processes.

In the project management paradigm of the projects we work on, there are networks of activities. Each activity has some dependency on prior work, and each activity produces dependencies for follow-on work. These can be modeled with Monte Carlo simulation as well.
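
Here is a hypothetical sketch of that idea: a three-activity network with triangular (min, most-likely, max) durations, simulated to read off a confidence level for the finish date. Commercial tools do far more; the mechanics are the same.

```python
import random

# Hypothetical three-activity network: A precedes B and C, which run in
# parallel; the project finishes when both are done. Durations in days,
# drawn from triangular distributions - invented for illustration.

def one_run():
    a = random.triangular(3, 8, 5)     # args are (low, high, mode)
    b = random.triangular(5, 15, 8)
    c = random.triangular(4, 12, 6)
    return a + max(b, c)               # the merge point drives the finish

runs = sorted(one_run() for _ in range(10_000))
print(f"P50 finish: {runs[len(runs) // 2]:.1f} days")
print(f"P80 finish: {runs[int(0.8 * len(runs))]:.1f} days")
```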


The Schedule Risk Analysis (SRA) of the network of work activities is mandated on a monthly basis in many of the programs we work on.

In Kanban and Scrum systems, Monte Carlo simulation is a powerful tool to reveal the expected performance of the development activity. Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects Using Monte Carlo Simulation, by Troy Magennis, is a good place to start for this approach.
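
In the spirit of that approach (a sketch, not Magennis's actual tooling), bootstrapping observed throughput is enough to show the mechanics. The history and backlog below are invented:

```python
import random

# Bootstrap forecast over observed weekly throughput for a Kanban/Scrum
# backlog. Throughput history and backlog size are hypothetical.

throughput_history = [4, 7, 5, 6, 3, 8, 5, 6]   # stories finished per week
backlog = 60                                    # stories remaining

def weeks_to_finish():
    remaining, weeks = backlog, 0
    while remaining > 0:
        remaining -= random.choice(throughput_history)  # resample the past
        weeks += 1
    return weeks

trials = sorted(weeks_to_finish() for _ in range(10_000))
print("P50:", trials[len(trials) // 2], "weeks")
print("P85:", trials[int(0.85 * len(trials))], "weeks")
```

Note the built-in assumption: resampling the past presumes future throughput behaves like the recorded history.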

Each of these approaches, and others, is designed to provide actionable information to the decision makers. This information requires a minimum understanding of what is happening to the system being managed:

  • What are the naturally occurring variances of the work activities that we have no control over - aleatory uncertainty?
  • What are the event based probabilities of some occurrence - epistemic uncertainty?
  • What are the consequences of each outcome - decision, probabilistic event, or naturally occurring variance - on the desired behavior of the system?
  • What choices can be made that will influence these outcomes?

In many cases, the information available to make these choices is in the future. Some is in the past. But that information in the past needs careful assessment.

Past data is only useful if you can be assured the future is like the past. If not, making decisions using past data, without adjusting that data for possible changes in the future, takes you straight into the ditch - see The Flaw of Averages.

In order to have any credible assessment of the impact of a decision using data in the future - where will the system be going in the future? - it is mandatory to ESTIMATE.

It is simply not possible to make decisions about future outcomes in the presence of uncertainty in that future without making estimates.

Anyone who says you can is incorrect. And if they insist it can be done, ask for testable evidence of their conjecture, based on the mathematics of probabilistic systems. No credible, testable data? Then it's pure speculation. Move on.

The False Conjecture of Deciding in the Presence of Uncertainty without Estimates

  • Slicing the work into similar-sized chunks, performing work on those chunks, and using that information to produce information about the future makes the huge assumption that the future is like the past.
  • Recording past performance, making nice plots, and running static analysis for mean, mode, standard deviation, and variance is naive at best. The time-series variances are rolled up, hiding the latent variances that will emerge in the future. Time series analysis (ARIMA) is required to reveal the possible future values latent in past data, assuming the system under observation remains the same - see the sketch after this list.
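
As a hedged illustration of that second point, here is what a basic time-series forecast looks like with statsmodels' ARIMA. The throughput series and the (1, 0, 0) order are assumptions for the example; a real analysis would test for stationarity and model fit first.

```python
# Hedged illustration: an ARIMA(1, 0, 0) forecast of a throughput series.
# The data and the model order are assumptions for this example.
from statsmodels.tsa.arima.model import ARIMA

throughput = [22, 25, 19, 28, 24, 21, 27, 23, 26, 20, 25, 24]

fit = ARIMA(throughput, order=(1, 0, 0)).fit()
forecast = fit.get_forecast(steps=4)

print(forecast.predicted_mean)        # point forecasts, next 4 periods
print(forecast.conf_int(alpha=0.2))   # 80% intervals - the variance is the story
```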

Time series analysis is a fundamental tool for forecasting future outcomes from past data. Weather forecasting - plus complex compressible fluid flow models - is based on time series analysis. Stock market forecasting uses time series analysis. Cost and schedule modeling uses time series analysis. Adaptive process control algorithms, like the speed control and fuel management in your modern car, use time series analysis.

One of the originators of time series analysis, George E. P. Box, author of the seminal book Time Series Analysis: Forecasting and Control, is often seriously misquoted on his line "All models are wrong, some are useful." Anyone misusing that quote to try to convince you that you can't model the future didn't (or can't) do the math in Box's book, and likely got a D in high school probability and statistics.

So do the math, read the proper books, gather past data, model the future with dependency networks and Kanban and Scrum backlogs, measure current production, forecast future production based on Monte Carlo models - and don't believe for a moment that you can make decisions about future outcomes in the presence of uncertainty without estimating that future.


My Agile 2015 Roundup

Agile 2015 was the week of Aug 3-7 this year. It was a great week. Here are the links to my interviews and talks.

Interview with Dave Prior. We spoke about agile programs, continuous planning, and how you might use certifications. I made a little joke about measurement.

Interview with Paul DuPuy of SolutionsIQ. We also spoke about agile programs. Paul had some interesting questions, one of which I was not prepared for. That’s okay. I answered it anyway.

The slides from Scaling Agile Projects to Programs: Networks of Autonomy, Collaboration and Exploration. At some point, the Agile Alliance will post the video of this on their site.

The slides from my workshop Agile Hiring: It’s a Team Sport. Because it was a workshop, there are built-in activities. You can try these where you work.

My pecha kucha (it was part of the lightning talks) of Living an Agile Life.

I hope you enjoy these. I had a great time at the conference.

 


Managing in the Presence of Uncertainty

Herding Cats - Glen Alleman - Mon, 08/24/2015 - 05:43

A Tweet caught my eye this weekend.

Before moving to risk, let's look at what Agile is.

Agile development is a phrase used in software development to describe methodologies for incremental software development. Agile development is an alternative to traditional project management where emphasis is placed on empowering people to collaborate and make team decisions in addition to continuous planning, continuous testing and continuous integration.

Next, the notion that Agile is actually risk management is widely misunderstood. Agile provides raw information for risk management, but risk management has little to do with which software development method is being used. The continuous nature of Agile provides more frequent feedback on the state of the project. That is advantageous to risk management. Since Agile mandates this feedback on fine-grained boundaries - weeks, not months - the actions in the risk management paradigm below are also fine-grained.

Where Does Risk Come From?

All risk comes from uncertainty. Uncertainty comes in two types: (1) aleatory - naturally occurring in the underlying process and therefore irreducible - and (2) epistemic - a probability that something unfavorable will happen.

Risk results from uncertainty. To deal with the risk from aleatory uncertainty we can only have margin, since the resulting risk is irreducible. This is schedule margin, cost margin, and product performance margin. This type of risk is just part of the world we live in. Natural variances in the work performed developing products need margin. Natural variances in performance - say, a server's throughput - need margin.

We can deal directly with the risk from epistemic uncertainty by buying down the uncertainty. This is done with experiments, trials, incremental development, and other risk reduction activities that lower the uncertainty in the processes.
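
A minimal sketch of sizing margin against aleatory variance: simulate the natural spread of a task and hold the gap between the median and an 80th-percentile finish as schedule margin. The triangular parameters are hypothetical.

```python
import random

# Sizing schedule margin from irreducible (aleatory) variance: plan at
# the median, hold the gap to the 80th percentile as margin.
# Duration parameters (days) are hypothetical.

samples = sorted(random.triangular(20, 45, 28) for _ in range(10_000))
p50 = samples[len(samples) // 2]
p80 = samples[int(0.8 * len(samples))]

print(f"Plan at P50 = {p50:.1f} days; schedule margin = {p80 - p50:.1f} days")
```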

By the way, many use the notion that risk is both positive and negative. This is not true. It's a naive understanding of the mathematics of risk processes. PMI does this. It is not allowed in our domain(s).

Agile and Incremental Delivery

There is a popular myth in the agile community that it has a lock on the notion of incremental delivery. This is again not true. Many product development lifecycles use incremental and iterative processes to produce products: spiral development, Integrated Master Plan/Integrated Master Schedule, Incremental Commitment. All are applicable to Software Intensive Systems and System of Systems domains, like enterprise ERP.

Managing in the Presence of Uncertainty and the Resulting Risk

Here's how we manage SIS and SoS in the presence of uncertainty.

The methods used to collect requirements, turn those requirements into products, and ship those products are of little concern to the Risk Management Process. They are of great concern to those engineering those products, but the Risk Management Process sits above that activity. You can start with the SEI Continuous Risk Management Guidebook for a framework for managing software development in the presence of risk. The management of risk in agile is very close to the management of risk in any other product or service development process. For any risk management process to work, it needs these process areas as a minimum. So when you hear that agile manages risk, confirm that is actually taking place by having the person making that claim show clearly, in a process description, how each of the process areas is implemented. Without that connection it just ain't true.

[Figures omitted: the risk management process areas, how they fit together to increase the probability of project success, and how to put them to work on a project.]

For those interested in managing projects in the presence of uncertainty and the risk that uncertainty creates, independent of any development methodology or development framework, here's a collection from the office library, in no particular order. [Book list figure omitted.]

And a short white paper on Risk Management in Enterprise IT

Information Technology Risk Management, from Glen Alleman

Anecdotes versus Numbers (Statistics)

Herding Cats - Glen Alleman - Sat, 08/22/2015 - 04:08

In the world of project management, and the process improvement efforts needed to increase the Probability of Project Success, anecdotes appear to prevail when it comes to suggesting alternatives to observed dysfunction.

If we were to pile the statistics for all the data on the effectiveness or ineffectiveness of all the process improvement methods on top of each other, they would lack the persuasive power of a single anecdote in most software development domains outside of Software Intensive Systems.

Why? Because most people working on small-group agile development projects - as opposed to Enterprise, Mission Critical, can't-fail projects that must show up on time, on budget, with not just a minimum viable product but the mandatorily needed viable capability - rely on anecdotes to communicate their messages.

I say this not just from personal experience, but from research for government agencies and commercial enterprise firms tasked with Root Cause Analysis, from conference proceedings, refereed journal papers, and guidance from those tasked with the corrective actions of major program failures.

Anecdotes appeal to emotion. Statistics, numbers, and verifiable facts appeal to reason. It's not a fair fight. Emotion always wins, without acknowledging that emotion is seriously flawed when making decisions.

Anecdotal evidence is evidence where small numbers of anecdotes are presented. There is a large chance - statistically - that this evidence is unreliable due to cherry-picking or self-selection (this is the core issue with the Standish Reports, or anyone claiming anything without proper statistical sampling processes).

Anecdotal evidence is considered dubious support of any generalized claim. Anecdotal evidence is no more than a type description (i.e., short narrative), and is often confused in discussions with its weight, or other considerations, as to the purpose(s) for which it is used.

We've all heard the stories: half of all IT projects fail; waterfall is evil; hell, even estimates are evil - stop doing them cold turkey. They prove the point the speaker is making, right? Actually, they don't. I just used an anecdote to prove a point.

If I said "The Garfunkel Institute just released a study showing 68% of all software development projects did not succeed because the requirements gathering process failed to define what capabilities were needed when done," I'd have made a fact-based point. And you'd become bored reading the 86 pages of statistical analysis and correlation charts between all the causal factors contributing to the success or failure of the sample space of projects. See, you are bored already.

Instead, if I said every project I've worked on went over budget and was behind schedule because we were very poor at making estimates, that'd be more appealing to your emotions, since it is a message you can relate to personally - having likely experienced many of the same failures.

The purveyors of anecdotal evidence to support a position make use of a common approach: willfully ignoring a fact-based methodology through a simple tactic...

We all know what Mark Twain said about lies, damned lies, and statistics.

People can certainly lie with statistics; it's done all the time. Start with How to Lie With Statistics. But those types of lies are nothing compared to the ability to script personal anecdotes to support a message. From "I've never seen that work" to "What, now you're telling me - the person who actually invented this earth-shattering way of writing software - that it doesn't work outside my personal sphere of experience?"

An anecdote is a statistic with a sample size of one. OK, maybe a sample size of a small group of your closest friends and fellow travelers.

We fall for this all the time. It's easier to accept an anecdote describing a problem and possible solution from someone we have shared experiences with than to investigate the literature, do the math, and do the homework needed to determine the principles, practices, and processes needed for corrective action.

Don’t fall for manipulative opinion-shapers who use story-telling as a substitute for facts. When we're trying to persuade, use facts, and use actual examples based on those facts. Use data that can be tested outside the personal anecdotes deployed to support an unsubstantiated claim that offers neither the root cause nor testable corrective actions.


How To Lie With Statistics

Herding Cats - Glen Alleman - Fri, 08/21/2015 - 22:51

How To Lie With Statistics is a critically important book to have on your desk if you're involved in any decision making. My edition is a First Edition, but I don't have the dust jacket, so it's not worth much beyond the current versions.

The reason for this post is to lay the groundwork for assessing reports, presentations, webinars, and other selling documents that contain statistical information.

The classic statistical misuse is the Standish Report, describing the success and failure of IT projects.

Here's my summation of the elements of How To Lie in our project domain:

  • Sample with the Built-In Bias - the population of the sample space is not defined. The samples are self-selected, in that those who respond are the basis of the statistics. No adjustment is made for all those who did not respond to a survey, for example.
  • The Well-Chosen Average - the arithmetic mean, median, and mode are estimators of the population statistics. Any of these without a variance is of little value for decision making (see the sketch after this list).
  • Little Figures That Are Not There - the classic is "use this approach (in this case #NoEstimates) and your productivity will improve 10X." That's 1000%, by the way. A 1000% improvement. That's unbelievable, literally unbelievable. The actual improvements are not stated, only the percentage. The baseline performance is not stated. It's unbelievable.
  • Much Ado About Practically Nothing - the probability of being in the range of normal. This is the basis of advertising. What's the variance?
  • Gee-Whiz Graphs - using graphics and adjustable scales provides the opportunity to manipulate the message. The classic example is the estimating-error graph popular with the No Estimates advocates. It's a graph showing the number of projects that complete over their estimated cost and schedule. What's not shown is the credibility of the original estimate.
  • One-Dimensional Picture - using a picture to show numbers, where the picture is not to the same scale as the numbers, provides a messaging path for visual readers.
  • Semi-attached Figure - if you can't prove what you want to prove, demonstrate something else and pretend they are the same thing. In one example, the logic is inverted: estimating is conjectured to be the root cause of problems. With no evidence of that, the statement becomes "we don't see how estimating can produce success, so not estimating will increase the probability of success."
  • Post Hoc Rides Again - post hoc causality is common in the absence of a cause-and-effect understanding. The difference between correlation and causality is many times not understood.
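
A quick sketch of the Well-Chosen Average at work, using an invented, right-skewed set of task durations: three defensible "averages," three different messages, and none of them honest without the spread.

```python
import statistics

# The Well-Chosen Average: a right-skewed set of task durations (days,
# invented) yields three different "averages" - none means much without
# the variance alongside it.

durations = [2, 2, 3, 3, 3, 4, 5, 8, 13, 21]

print("mean  :", statistics.mean(durations))    # 6.4 - pulled up by the tail
print("median:", statistics.median(durations))  # 3.5
print("mode  :", statistics.mode(durations))    # 3
print("stdev :", statistics.stdev(durations))   # ~6.1 - the missing spread
```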

Here's a nice example of How To Lie

There's a chart from an IEEE Computer article showing the number of projects that exceeded their estimated cost. But let's start with some research on the problem: Coping with the Cone of Uncertainty.

There is a graph, popularly used to show that estimates do not represent the actuals.

This diagram is actually MISUSED by the #NoEstimates advocates.

The presentation below shows the follow-on information for how estimates can be improved to increase confidence in the process and improvements in the business. It also shows the root causes of poor estimates and their corrective actions. Please ignore any use of Todd's chart without the full presentation.

 

My mistake was doing just that.

So before anyone accepts any conjecture from a #NoEstimates advocate using the graph above, please read the briefing at the link below to see the corrective actions for poor estimates.


Here's the link to Todd's entire briefing, not just the many-times-misused graph of estimates not representing the actuals: Uncertainty Surrounding the Cone of Uncertainty.


What is a Software Intensive System?

Herding Cats - Glen Alleman - Thu, 08/20/2015 - 14:52

When we hear about software development in the absence of a domain, it's difficult to have a discussion about the appropriate principles, processes, and practices of that work. Here's one paradigm that has served us well.

In the Software Intensive System world, Number 6 and beyond, here's some background.

One More #NoEstimates Post

Herding Cats - Glen Alleman - Thu, 08/20/2015 - 02:37

Steve McConnell's recent post on estimating prompted me to make one more post on this topic. First some background on my domain and point of view.

I work in what is referred to as Software Intensive Systems (SIS) - work spanning foundations, development lifecycles, requirements, analysis and design, implementation, and verification and validation - and those SISs are usually embedded in a System of Systems.

This may not be the domain where the No Estimates advocates work. Their system may not be software intensive, and more than likely not a system of systems. And as one of the more vocal supporters of No Estimates likes to say, the color of your sky is different from mine. And yes it is; it's blue, and we know why: Rayleigh scattering. The reason we know is that engineers and scientists occupy the hallways of our office, along with all the IT and business software developers running the enterprise IT systems that enable the production of all the SISs embedded in the SoS products.

Here's a familiar framework for the spectrum of software systems

I'll add to Steve's comments in italics, while editing out material not germane to my responses but still in support of Steve's. Before we start, here's one important concept: In project management we do not seek perfect prediction. We seek early warning signals to enable predictive corrective actions.

1. Estimation is often done badly and ineffectively and in an overly time-consuming way. 

My company and I have taught upwards of 10,000 software professionals better estimation practices, and believe me, we have seen every imaginable horror story of estimation done poorly. There is no question that “estimation is often done badly” is a true observation of the state of the practice. 

The role of estimating is found in many domains. Independent Cost Estimates (ICE) are mandated in many domains I work in. Professional estimating organizations provide guidance, materials, and communities: www.iceaa.org, www.aace.org, NASA, DOD, DOE, DHS, DOJ. Most every "heavy industry," from dirt moving to writing software for money, has some formalized estimating process.

2. The root cause of poor estimation is usually lack of estimation skills. 

Estimation done poorly is most often due to lack of estimation skills. Smart people using common sense is not sufficient to estimate software projects. Reading two page blog articles on the internet is not going to teach anyone how to estimate very well. Good estimation is not that hard, once you’ve developed the skill, but it isn’t intuitive or obvious, and it requires focused self-education or training. 

One of the most common estimation problems is people engaging with so-called estimates that are not really Estimates, but that are really Business Targets or requests for Commitments. You can read more about that in my estimation book or watch my short video on Estimates, Targets, and Commitments. 

Root Cause Analysis is one of our formal processes. We apply Reality Charting® to all technologies of our work. RCA is part of governance and continuous process improvement. Conjecturing that estimates are somehow the "smell" of something else without stating that problem - and, most importantly, without confirming the root cause of the problem, providing corrective actions, and, most critically, confirming the corrective action removes the root cause - is bad management at best and naive management at worst.

3. Many comments in support of #NoEstimates demonstrate a lack of basic software estimation knowledge. 

I don’t expect most #NoEstimates advocates to agree with this thesis, but as someone who does know a lot about estimation I think it’s clear on its face. Here are some examples:

(a) Are estimation and forecasting the same thing? As far as software estimation is concerned, yes they are. (Just do a Google or Bing search of “definition of forecast”.) Estimation, forecasting, prediction--it's all the same basic activity, as far as software estimation is concerned. 

The notion of redefining terms to suit the needs of the speaker is troubling. Estimating is about the past, present, and future. As a former physicist, I made estimates of the scattering cross section of particle collisions, so we knew where to look for the signature of the collision. In a second career - since I really didn't have the original ideas needed for the profession of particle physics - I estimated the signature parameters in mono-pulse Doppler radar signals to identify targets in missile defense systems. Same for signatures from sonar systems, to separate whales and Biscayne Bay speed boats from Oscar Class Russian submarines.

Forecasting is estimating some outcome in the future. Weather forecasters make predictions of the probability of rain in the coming days. 

(b) Is showing someone several pictures of kitchen remodels that have been completed for $30,000 and implying that the next kitchen remodel can be completed for $30,000 estimation? Yes, it is. That’s an implementation of a technique called Reference Class Forecasting. 

Reference Class Forecasting is fundamental to good estimating. But other techniques are useful as well. Parametric modeling and design-based models in systems engineering (SysML) have estimating databases. Model Based Design is a well-developed discipline in our domain and others. Even Subject Matter Experts (although actually undesirable) can be a start, with wide-band Delphi.

(c) Is doing a few iterations, calculating team velocity, and then using that empirical velocity data to project a completion date count as estimation? Yes it does. Not only is it estimation, it is a really effective form of estimation. I’ve heard people argue that because velocity is empirically based, it isn’t estimation. Good estimation is empirically based, so that argument exposes a lack of basic understanding of the nature of estimation. 

All good estimates are based on some "reference class." Gathering data to build a reference class may be needed. But care is needed in using the "first few sprints" without first answering some questions.

  • It is a forecast of the future.
  • Is the future like the past?
  • Are there changes in the underlying statistical process in the future that are not accounted for in the past?
  • Are the underlying statistical processes for irreducible (aleatory) uncertainty stationary? That is, are the natural variances in the project work the same across the life span of the project, or do they change as time passes?

Empirical estimation requires knowing something about the underlying statistical and probabilistic processes. Without this knowledge, those empirical measurements are "point" measures and not likely to be representative of the future.

(d) Is counting the number of stories completed in each sprint rather than story points, calculating the average number of stories completed each sprint, and using that for sprint planning, estimation? Yes, for the same reasons listed in point (c). 

This is estimating. But the numbers alone are not good "estimators." The variance, and the stability of that variance, is needed. The past is a predictor of the future ONLY if the future is like the past. This is the role of time series analysis, where simple and free tools can be used to produce a credible estimate of the future from the past.
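
A minimal sketch of that point: the same story counts, reported with their variance, give a range instead of a single number. The counts are hypothetical, and the projection still assumes the process is stationary.

```python
import statistics

# Story counts per sprint are only useful estimators with their variance
# attached. Counts are hypothetical; the projection assumes stationarity.

stories_per_sprint = [6, 9, 7, 5, 8, 7, 6, 8]
remaining = 42   # stories left in the backlog

mean = statistics.mean(stories_per_sprint)   # 7.0 stories/sprint
sd = statistics.stdev(stories_per_sprint)    # ~1.3 stories/sprint

print(f"Expected sprints:     {remaining / mean:.1f}")
print(f"Conservative sprints: {remaining / (mean - sd):.1f}")  # mean - 1 sd
```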

(e) Most of the #NoEstimates approaches that have been proposed, including (c) and (d) above, are approaches that were defined in my book Software Estimation: Demystifying the Black Art, published in 2006. The fact that people are claiming these long-ago-published techniques as "new" under the umbrella of #NoEstimates is another reason I say many of the #NoEstimates comments demonstrate a lack of basic software estimation knowledge.

The use of slicing - one proposed #NoEstimates technique - is estimating. Using the "No" in front of "Estimates" and then referencing slicing seems a bit disingenuous. But slicing is subject to the same issue as all reference classes that are not adjusted for future changes. The past may not be like the future. Confirmation and adjustment are part of good estimating.

(f) Is estimation time consuming and a waste of time? One of the most common symptoms of lack of estimation skill is spending too much time on ineffective activities. This work is often well-intentioned, but it’s common to see well-intentioned people doing more work than they need to get worse estimates than they could be getting.

This notion that those spending the money get to say what is waste and what is not would be considered hubris in any other context. In an attempt not to be rude (one of the #NoEstimates advocates' favorite comebacks when presented with a tough question - à la Jar Jar Binks): estimates are primarily not for those spending the money but for those providing the money. How much, when, and what are business questions that need answers in any non-trivial business transaction. If the need to know is not there, it is likely the "value at risk" for the work is low enough that no one cares what it costs, when it will be done, or what we'll get when we're done.

Just to be crystal clear, I use the term non-trivial to mean a project whose cost, schedule, and possibly produced content - when missed - impact the business in a manner detrimental to its operation.

(g) Is it possible to get good estimates? Absolutely. We have worked with multiple companies that have gotten to the point where they are delivering 90%+ of their projects on time, on budget, with intended functionality. 

Of course it is, and good estimates happen all the time. Bad estimates happen all the time as well. One of my engagements is with the Performance Assessment and Root Cause Analyses division of the US DOD. Root Cause Analysis of ACAT1 Nunn-McCurdy programs shows the following. Similar root causes can be found for commercial projects.

[Chart omitted: root causes of ACAT1 Nunn-McCurdy program breaches.]

One reason many people find estimation discussions (aka negotiations) challenging is that they don't really believe the estimates they came up with themselves. Once you develop the skill needed to estimate well -- as well as getting clear about whether the business is really talking about an estimate, a target, or a commitment -- estimation discussions become more collaborative and easier. 

The Basis of Estimate problem is universal. I was on a proposal team that lost to an arch rival because our "basis of estimate" included an unrealistic staffing plan. Building a credible estimate is actual work. The size of the project, the "value at risk," the tolerance for risk, and a myriad of other factors all go into deciding how to make the estimate. All good estimates and estimating practices are fully collaborative.

When management abuse around estimating is called out, it has not been explained how NOT estimating corrects the management abuse.

4. Being able to estimate effectively is a skill that any true software professional needs to develop, even if they don’t need it on every project. 

“Estimation often doesn't work very well, therefore software professionals should not develop estimation skill” – this is a common line of reasoning in #NoEstimates. This argument doesn't make any more sense than the argument, "Scrum often doesn't work very well, therefore software professionals should not try to use Scrum." The right response in both cases is, "Get better at the practice," not "Throw out the practice altogether." 

The notion of "I can't learn to estimate well" is not the same as "it's possible to learn to estimate well." There are professional estimating organizations, books, journals, and courses. What is really being said is "I don't want to learn to estimate."

#NoEstimates advocates say they're just exploring the contexts in which a person or team might be able to do a project without estimating. That exploration is fine, but until someone can show that the vast majority of projects do not need estimates at all, deciding to not estimate and not develop estimations skills is premature. And my experience tells me that when all the dust settles, the cases in which no estimates are needed will be the exception rather than the rule. Thus software professionals will benefit -- and their organizations will benefit -- from developing skill at estimation. 

Those #NoEstimates advocates appear not to have asked those paying their salary what they need in terms of estimates. Ignore for the moment the Dilbert managers. This is a day-one issue. #NoEstimates willfully ignores the needs of the business, and when called on it, says "if management needs estimates, we should estimate." Any manager accountable for a non-trivial expenditure who doesn't have some type of Estimate to Complete and Estimate at Completion isn't going to be a manager for very long when the project shows up late, over budget, and doesn't deliver the needed capabilities.

I would go further and say that a true software professional should develop estimation skill so that you can estimate competently on the numerous projects that require estimation. I don't make these claims about software professionalism lightly. I spent four years as chair of the IEEE committee that oversees software professionalism issues for the IEEE, including overseeing the Software Engineering Body of Knowledge, university accreditation standards, professional certification programs, and coordination with state licensing bodies. I spent another four years as vice-chair of that committee. I also wrote a book on the topic, so if you're interested in going into detail on software professionalism, you can check out my book, Professional Software Development. Or you can check out a much briefer, more specific explanation in my company's white paper about our Professional Development Ladder. 

5. Estimates serve numerous legitimate, important business purposes.

Estimates are used by businesses in numerous ways, including: 

  • Allocating budgets to projects (i.e., estimating the effort and budget of each project)
  • Making cost/benefit decisions at the project/product level, which is based on cost (software estimate) and benefit (defined feature set)
  • Deciding which projects get funded and which do not, which is often based on cost/benefit
  • Deciding which projects get funded this year vs. next year, which is often based on estimates of which projects will finish this year
  • Deciding which projects will be funded from CapEx budget and which will be funded from OpEx budget, which is based on estimates of total project effort, i.e., budget
  • Allocating staff to specific projects, i.e., estimates of how many total staff will be needed on each project
  • Allocating staff within a project to different component teams or feature teams, which is based on estimates of scope of each component or feature area
  • Allocating staff to non-project work streams (e.g., budget for a product support group, which is based on estimates for the amount of support work needed)
  • Making commitments to internal business partners (based on projects’ estimated availability dates)
  • Making commitments to the marketplace (based on estimated release dates)
  • Forecasting financials (based on when software capabilities will be completed and revenue or savings can be booked against them)
  • Tracking project progress (comparing actual progress to planned (estimated) progress)
  • Planning when staff will be available to start the next project (by estimating when staff will finish working on the current project)
  • Prioritizing specific features on a cost/benefit basis (where cost is an estimate of development effort)

These are just a subset of the many legitimate reasons that businesses request estimates from their software teams. I would be very interested to hear how #NoEstimates advocates suggest that a business would operate if you remove estimates for each of these purposes.

The #NoEstimates response to these business needs is typically of the form, “Estimates are inaccurate and therefore not useful for these purposes” rather than, “The business doesn’t need estimates for these purposes.” 

That argument really just says that businesses are currently operating on the basis of much worse predictions than they should be, and probably making poorer decisions as a result, because the software staff are not providing very good estimates. If software staff provided more accurate estimates, the business would make better decisions in each of these areas, which would make the business stronger. 

The other #NoEstimates response is that "Estimates are always waste." I don't agree with that. By that line of reasoning, daily stand ups are waste. Sprint planning is waste. Retrospectives are waste. Testing is waste. Everything but code-writing itself is waste. I realize there are Lean purists who hold those views, but I don't buy any of that. 

Estimates, done well, support business decision making, including the decision not to do a project at all. Taking the #NoEstimates philosophy to its logical conclusion, if #NoEstimates eliminates waste, then #NoProjectAtAll eliminates even more waste. In most cases, the business will need an estimate to decide not to do the project at all.  

In my experience businesses usually value predictability, and in many cases, they value predictability more than they value agility. Do businesses always need predictability? No, there are few absolutes in software. Do businesses usually need predictability? In my experience, yes, and they need it often enough that doing it well makes a positive contribution to the business. Responding to change is also usually needed, and doing it well also makes a positive contribution to the business. This whole topic is a case where both predictability and agility work better than either/or. Competency in estimation should be part of the definition of a true software professional, as should skill in Scrum and other agile practices. 

Estimates are the basis of managerial finance and decision making in the presence of uncertainty (the microeconomics of software development). The accuracy and precision of the estimates are usually determined by the value at risk: from low risk, which may mean no estimates, to high risk, which means frequently updated, independently validated estimates. But in nearly all business decisions - unless the value at risk can be written off - there is a need to know something about the potential loss as well as the potential gain.

6. Part of being an effective estimator is understanding that different estimation techniques should be used for different kinds of estimates. 

One thread that runs throughout the #NoEstimates discussions is lack of clarity about whether we’re estimating before the project starts, very early in the project, or after the project is underway. The conversation is also unclear about whether the estimates are project-level estimates, task-level estimates, sprint-level estimates, or some combination. Some of the comments imply ineffective attempts to combine kinds of estimates—the most common confusion I’ve read is trying to use task-level estimates to estimate a whole project, which is another example of lack of software estimation skill. 

You can see a summary of estimation techniques and their areas of applicability here. This quick reference sheet assumes familiarity with concepts and techniques from my estimation book and is not intended to be intuitive on its own. But just looking at the categories you can see that different techniques apply for estimating size, effort, schedule, and features. Different techniques apply for small, medium, and large projects. Different techniques apply at different points in the software lifecycle, and different techniques apply to Agile (iterative) vs. Sequential projects. Effective estimation requires that the right kind of technique be applied to each different kind of estimate. 

Learning these techniques is not hard, but it isn't intuitive. Learning when to use each technique, as well as learning each technique, requires some professional skills development. 

When we separate the kinds of estimates we can see parts of projects where estimates are not needed. One of the advantages of Scrum is that it eliminates the need to do any sort of miniature milestone/micro-stone/task-based estimates to track work inside a sprint. If I'm doing sequential development without Scrum, I need those detailed estimates to plan and track the team's work. If I'm using Scrum, once I've started the sprint I don't need estimation to track the day-to-day work, because I know where I'm going to be in two weeks and there's no real value added by predicting where I'll be day-by-day within that two week sprint. 

That doesn't eliminate the need for estimates in Scrum entirely, however. I still need an estimate during sprint planning to determine how much functionality to commit to for that sprint. Backing up earlier in the project, before the project has even started, businesses need estimates for all the business purposes described above, including deciding whether to do the project at all. They also need to decide how many people to put on the project, how much to budget for the project, and so on. Treating all the requirements as emergent on a project is fine for some projects, but you still need to decide whether you're going to have a one-person team treating requirements as emergent, or a five-person team, or a 50-person team. Defining team size in the first place requires estimation. 

7. Estimation and planning are not the same thing, and you can estimate things that you can’t plan. 

Many of the examples given in support of #NoEstimates are actually indictments of overly detailed waterfall planning, not estimation. The simple way to understand the distinction is to remember that planning is about “how” and estimation is about “how much.” 

Can I “estimate” a chess game, if by “estimate” I mean how each piece will move throughout the game? No, because that isn’t estimation; it’s planning; it’s “how.”

Can I estimate a chess game in the sense of “how much”? Sure. I can collect historical data on the length of chess games and know both the average length and the variation around that average and predict the length of a game. 

More to the point, estimating an individual software project is not analogous to estimating one chess game. It’s analogous to estimating a series of chess games. People who are not skilled in estimation often assume it’s more difficult to estimate a series of games than to estimate an individual game, but estimating the series is actually easier. Indeed, the more chess games in the set, the more accurately we can estimate the set, once you understand the math involved. 
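A small simulation illustrates the math. Any single game is hard to call, but the relative variation of the total shrinks roughly as one over the square root of the number of games. The distribution parameters below are invented for illustration:

    # Sketch: estimating a series is easier (in relative terms) than one game.
    import random
    import statistics

    random.seed(42)
    MEAN_MOVES, SD_MOVES = 40, 15  # assumed historical data for a single game

    def series_total(n_games: int) -> float:
        return sum(max(1.0, random.gauss(MEAN_MOVES, SD_MOVES)) for _ in range(n_games))

    for n in (1, 10, 100, 1000):
        totals = [series_total(n) for _ in range(2000)]
        rel_sd = statistics.stdev(totals) / statistics.mean(totals)
        print(f"{n:>5} games: relative spread of the total = {rel_sd:.1%}")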

This all goes back to the idea that we need estimates for different purposes at different points in a project. An agile project may be about "steering" rather than estimating once the project gets underway. But it may not be allowed to get underway in the first place if there aren't early estimates that show there's a business case for doing the project. 

Plans are strategies for the success of the project. What accomplishments must occur, and how those accomplishments are assessed in units of measure meaningful to the decision makers, are the start of planning. Choices made during the planning process, and most certainly during the execution process, are informed by estimates of future outcomes from the decisions made today and the possible decisions made in the future. This is the basis of the microeconomics of decision making.

Strategy making is often invoked by #NoEstimates advocates when what they are actually describing is operational effectiveness. Strategic decision making is a critical success factor for non-trivial projects.

Strategic portfolio management from Glen Alleman

8. You can estimate what you don’t know, up to a point. 

In addition to estimating “how much,” you can also estimate “how uncertain.” In the #NoEstimates discussions, people throw out lots of examples along the lines of, “My project was doing unprecedented work in Area X, and therefore it was impossible to estimate the whole project.” This is essentially a description of the common estimation mistake of allowing high variability in one area to insert high variability into the whole project's estimate rather than just that one area's estimate. 

Most projects contain a mix of precedented and unprecedented work (also known as certain/uncertain, high risk/low risk, predictable/unpredictable, high/low variability--all of which are loose synonyms as far as estimation is concerned). Decomposing the work, estimating uncertainty in each area, and building up an overall estimate that includes that uncertainty proportionately is one technique for dealing with uncertainty in estimates. 

Why would that ever be needed? Because a business that perceives a whole project as highly risky might decide not to approve the whole project. A business that perceives a project as low to moderate risk overall, with selected areas of high risk, might decide to approve that same project. 
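A hedged sketch of the decompose-and-aggregate technique described above - the areas, ranges, and triangular model are all assumptions:

    # Sketch: decompose the work, model each area's uncertainty, aggregate by
    # Monte Carlo so high variability in one area widens only its own contribution.
    import random

    random.seed(7)

    # (low, likely, high) effort in staff-weeks; illustrative numbers only.
    areas = {
        "ui (precedented)":        (8, 10, 13),
        "reporting (precedented)": (5, 6, 8),
        "area x (unprecedented)":  (10, 20, 60),
    }

    totals = sorted(
        sum(random.triangular(low, high, likely) for low, likely, high in areas.values())
        for _ in range(10_000)
    )
    p50, p80 = totals[len(totals) // 2], totals[int(len(totals) * 0.8)]
    print(f"50th percentile: {p50:.0f} staff-weeks; 80th percentile: {p80:.0f}")

The high-risk area dominates the spread of the total, but the precedented areas still carry narrow, defensible estimates - which is what lets the business approve the project with eyes open.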

You can estimate anything that is knowable. You personally may not know it - so go find someone who does, do research, "explore," experiment, build models, build prototypes. Do whatever is necessary to improve your knowledge of the uncertainties (epistemic uncertainty) and your understanding of the natural variance (aleatory uncertainty). But if it's knowable, then don't say it's unknown. It's just unknown to you. 

The classic error, and unbounded hubris, about estimates comes from Donald Rumsfeld's use of "unknown unknowns" in the run-up to the second Iraq war. He never read The Histories, Herodotus, 5th century B.C., where the author warned the reader not to go to what is now Iraq - the tribal powers will never comply with your will. The same holds for what is now Afghanistan, where Alexander the Great was ejected by the local tribesmen.

9. Both estimation and control are needed to achieve predictability. 

Much of the writing on Agile development emphasizes project control over project estimation. I actually agree that project control is more powerful than project estimation; however, effective estimation usually plays an essential role in achieving effective control. 

Closed loop control, and especially feedforward adaptive control, requires making estimates of future states - before they unfavorably impact the outcome. This means estimating. Software development is a closed loop adaptive control system.
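As a toy illustration of that closed loop - all numbers hypothetical - each period we estimate where current throughput will land us and steer before the variance hurts the outcome:

    # Sketch: feedforward steering toward a release target using estimates.
    TARGET_SCOPE = 200.0        # story points (assumed)
    TOTAL_SPRINTS = 10          # planning horizon (assumed)

    done, throughput = 0.0, 16.0
    for sprint in range(1, TOTAL_SPRINTS + 1):
        done += throughput
        remaining_sprints = TOTAL_SPRINTS - sprint
        eac = done + throughput * remaining_sprints   # estimate at completion
        shortfall = TARGET_SCOPE - eac
        if shortfall > 0 and remaining_sprints > 0:   # steer: add capacity or cut scope
            throughput += 0.5 * shortfall / remaining_sprints
        print(f"sprint {sprint:2}: done {done:5.1f}, EAC {eac:5.1f}")

Without the estimate at completion there is no steering signal, and the loop is open, not closed.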

To put this in Agile Manifesto-like terms:

We have come to value project control over project estimation, 
as a means of achieving predictability.

My first disagreement with Steve: control is based on estimating, and both are needed in any closed loop control system. By the way, the slicing conjecture is not closed loop control. There is no steering target. The slicing data does not say what performance (how many slices, or whatever unit you want) is needed to meet the goals of the project. Slicing is open loop control. The #NoEstimates advocates need only pick up any control systems book to see how this works.

As in the Agile Manifesto, we value both terms, which means we still value the term on the right. 

#NoEstimates seems to pay lip service to both terms, but the emphasis from the hashtag onward is really about discarding the term on the right. This is another case where I believe the right answer is both/and, not either/or. 

I wrote an essay when I was Editor in Chief of IEEE Software called "Sitting on the Suitcase" that discussed the interplay between estimation and control and discussed why we estimate even though we know the activity has inherent limitations. This is still one of my favorite essays. 

10. People use the word "estimate" sloppily. 

No doubt. Lack of understanding of estimation is not limited to people tweeting about #NoEstimates. Business partners often use the word “estimate” to refer to what would more properly be called a “planning target” or “commitment.”  

The word "estimate" does have a clear definition, for those who want to look it up.  

The gist of these definitions is that an "estimate" is something that is approximate, rough, or tentative, and is based upon impressions or opinion. People don't always use the word that way, and you can see my video on that topic here. 

Better yet, how about definitions from the actual estimating community:

  • Software Cost Estimation with COCOMO II
  • Software Sizing and Estimating
  • Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban and Scrum Projects using Monte-Carlo Simulation
  • Estimating Software-Intensive Systems: Projects, Products, and Processes
  • Making Hard Decisions
  • Forecasting Methods and Applications  
  • Probability Methods for Cost Uncertainty Analysis
  • Cost Estimate Classification System, AACEI
  • Cost Estimating Body of Knowledge, ICEAA
  • Parametric Estimating Handbook, ICEAA
  • Basic Software Cost Estimating, CEB 09, ICEAA Online

The last opens with "Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke. This may be one of the root causes of the #NoEstimates position: its advocates have encountered a sufficiently advanced technology, see it as magic, and conclude it is not within their grasp.

There is no need to redefine anything. The estimating community has done that already.

Because people use the word sloppily, one common mistake software professionals make is trying to create a predictive, approximate estimate when the business is really asking for a commitment, or asking for a plan to meet a target, but using the word “estimate” to ask for that. It's common for businesses to think they have a problem with estimation when the bigger problem is with their commitment process. 

We have worked with many companies to achieve organizational clarity about estimates, targets, and commitments. Clarifying these terms makes a huge difference in the dynamics around creating, presenting, and using software estimates effectively. 

11. Good project-level estimation depends on good requirements, and average requirements skills are about as bad as average estimation skills. 

A common refrain in Agile development is “It’s impossible to get good requirements,” and that statement has never been true. I agree that it’s impossible to get perfect requirements, but that isn’t the same thing as getting good requirements. I would agree that “It is impossible to get good requirements if you don’t have very good requirement skills,” and in my experience that is a common case.  I would also agree that “Projects usually don’t have very good requirements,” as an empirical observation—but not as a normative statement that we should accept as inevitable. 

If you don't know where you are going, you'll end up someplace else. - Yogi Berra

Figure it out; don't put up with being lazy. Use Capabilities Based Planning to elicit the requirements. What do you want this thing to do when it's done? If you don't know, why are you spending the customer's money to build something? 

Agile is essentially spending the customer's money to find out what the customer doesn't know. Ask first: is this the best use of the money?

Like estimation skill, requirements skill is something that any true software professional should develop, and the state of the art in requirements at this time is far too advanced for even really smart people to invent everything they need to know on their own. Like estimation skill, a person is not going to learn adequate requirements skills by reading blog entries or watching short YouTube videos. Acquiring skill in requirements requires focused, book-length self-study or explicit training or both. 

If your business truly doesn’t care about predictability (and some truly don’t), then letting your requirements emerge over the course of the project can be a good fit for business needs. But if your business does care about predictability, you should develop the skill to get good requirements, and then you should actually do the work to get them. You can still do the rest of the project using by-the-book Scrum, and then you’ll get the benefits of both good requirements and Scrum.

From my point of view, I often see agile-related claims that look kind of like this: What practices should you use if you have: 

  • Mediocre skill in Estimation
  • Mediocre skill in Requirements
  • Good to excellent skill in Scrum and Related Practices

Not too surprisingly, the answer to this question is, Scrum and Related Practices. I think a more interesting question is, What practices should you use if you have: 

  • Good to excellent skill in Estimation
  • Good to excellent skill in Requirements
  • Good to excellent skill in Scrum and related practices

Having competence in multiple areas opens up some doors that will be closed with a lesser skill set. In particular, it opens up the ability to favor predictability if your business needs that, or to favor flexibility if your business needs that. Agile is supposed to be about options, and I think that includes the option to develop in the way that best supports the business. 

12. The typical estimation context involves moderate volatility and a moderate level of unknowns. 

Ron Jeffries writes, “It is conventional to behave as if all decent projects have mostly known requirements, low volatility, understood technology, …, and are therefore capable of being more or less readily estimated by following your favorite book.” I don’t know who said that, but it wasn’t me, and I agree with Ron that that statement doesn’t describe most of the projects that I have seen. 

The color of Ron's sky must not be blue - the normal color. Every project we work on has volatile requirements.

Don't undertake a project unless it is manifestly important and nearly impossible. - Edwin Land 

For enterprise IT there are databases showing the performance of past projects:

  • www.nesma.org
  • www.isbsg.org
  • www.cosmicon.com

I think it would be more true to say, “The typical software project has requirements that are knowable in principle, but that are mostly unknown in practice due to insufficient requirements skills; low volatility in most areas with high volatility in selected areas; and technology that tends to be either mostly leading edge or mostly mature." In other words, software projects are challenging, but the challenge level is manageable. If you have developed the full set of skills a software professional should have, you will be able to overcome most of the challenges or all of them. 

Of course there is a small percentage of projects that do have truly unknowable requirements and across-the-board volatility. I consider those to be corner cases. It’s good to explore corner cases, but also good not to lose sight of which cases are most common. 

13. Responding to change over following a plan does not imply not having a plan. 

It’s amazing that in 2015 we’re still debating this point. Many of the #NoEstimates comments literally emphasize not having a plan, i.e., treating 100% of the project as emergent. They advocate a process—typically Scrum—but no plan beyond instantiating Scrum. 

Plans are strategies for the success of projects. Strategies are hypotheses. Hypotheses need tests (experiments) to continually validate them. Ron can lecture us all he wants, but agile is a software development paradigm embedded in a larger strategic development paradigm, and plans come from there. That's how enterprises function. Both are needed.

According to the Agile Manifesto, while agile is supposed to value responding to change, it also is supposed to value following a plan. The Agile Manifesto says, "there is value in the items on the right" which includes the phrase "following a plan." 

While I agree that minimizing planning overhead is good project management, doing no planning at all is inconsistent with the Agile Manifesto, not acceptable to most businesses, and wastes some of Scrum's capabilities. One of the amazingly powerful aspects of Scrum is that it gives you the ability to respond to change; that doesn’t imply that you need to avoid committing to plans in the first place. 

My company and I have seen Agile adoptions shut down in some companies because an Agile team is unwilling to commit to requirements up front or refuses to estimate up front. As a strategy, that’s just dumb. If you fight your business about providing estimates, even if you win the argument that day, you will still get knocked down a peg in the business’s eyes. 

I've commented in other contexts that I have come to the conclusion that most businesses would rather be wrong than vague. Businesses prefer to plant a stake in the ground and move it later rather than avoiding planting a stake in the ground in the first place. The assertion that businesses value flexibility over predictability is Agile's great unvalidated assumption. Some businesses do value flexibility over predictability, but most do not. If in doubt, ask your business. 

If your business does value predictability, use your velocity to estimate how much work you can do over the course of a project, and commit to a product backlog based on your demonstrated capacity for work. Your business will like that. Then, later, when your business changes its mind—which it probably will—you’ll still be able to respond to change. Your business will like that even more.  
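A minimal sketch of that advice, with invented velocities: use demonstrated velocity and its variation to size a commitment you can defend:

    # Sketch: commit to a backlog based on demonstrated capacity for work.
    import statistics

    sprint_velocities = [21, 18, 24, 19, 22, 20]   # hypothetical history (points)
    sprints_available = 8

    mean_v = statistics.mean(sprint_velocities)
    sd_v = statistics.stdev(sprint_velocities)

    # A deliberately conservative commitment - mean minus one standard deviation
    # per sprint. The one-sigma cushion is an assumption, not a rule.
    print(f"expected capacity : {mean_v * sprints_available:.0f} points")
    print(f"commit to backlog : {(mean_v - sd_v) * sprints_available:.0f} points")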

14. Scrum provides better support for estimation than waterfall ever did, and there does not have to be a trade off between agility and predictability. 

Not quite true. Waterfall projects have excellent estimating processes. The trouble is that during the execution of the project things change. When the plan and the estimate aren't updated to match this change - one of the root causes of project failure - the estimate becomes of little use. Applying agile processes to estimating is the same as applying agile processes to coding: frequent assessment of progress to plan, and corrective action when variances appear.

Some of the #NoEstimates discussion seems to interpret challenges to #NoEstimates as challenges to the entire ecosystem of Agile practices, especially Scrum. Many of the comments imply that estimation will somehow impair agility. The examples cited to support that are mostly examples of unskilled misapplications of estimation practices, so I see them as additional examples of people not understanding estimation very well. 

The idea that we have to trade off agility to achieve predictability is a false trade off. If we define "agility" to mean, "no notion of our destination" or "treat all the requirements on the project as emergent," then of course there is a trade off, by definition. If, on the other hand, we define "agility" as "ability to respond to change," then there doesn't have to be any trade off. Indeed, if no one had ever uttered the word “agile” or applied it to Scrum, I would still want to use Scrum because of its support for estimation and predictability, as well as for its support for responding to change. 

The combination of story pointing, velocity calculation, product backlog, short iterations, just-in-time sprint planning, and timely retrospectives after each sprint creates a nearly perfect context for effective estimation. To put it in estimation terminology, story pointing is a proxy based estimation technique. Velocity is calibrating the estimate with project data. The product backlog (when constructed with estimation in mind) gives us a very good proxy for size. Sprint planning and retrospectives give us the ability to "inspect and adapt" our estimates. All this means that Scrum provides better support for estimation than waterfall ever did. 

If a company truly is operating in a high uncertainty environment, Scrum can be an effective approach. In the more typical case in which a company is operating in a moderate uncertainty environment, Scrum is well-equipped to deal with the moderate level of uncertainty and provide high predictability (e.g., estimation) at the same time. 

15. There are contexts where estimates provide little value. 

I don’t estimate how long it will take me to eat dinner, because I know I’m going to eat dinner regardless of what the estimate says. If I have a defect that keeps taking down my production system, the business doesn’t need an estimate for that because the issue needs to get fixed whether it takes an hour, a day, or a week. 

The most common context I see where estimates are not done on an ongoing basis and truly provide little business value is online contexts, especially mobile, where the cycle times are measured in days or shorter, the business context is highly volatile, and the mission truly is, “Always do the next most useful thing with the resources available.” 

In both these examples, however, there is a point on the scale at which estimates become valuable. If the work on the production system stretches into weeks or months, the business is going to want and need an estimate. As the mobile app matures from one person working for a few days to a team of people working for a few weeks, with more customers depending on specific functionality, the business is going to want more estimates. As the group doing the work expands, they'll need budget and headcount, and those numbers are determined by estimates. Enjoy the #NoEstimates context while it lasts; don’t assume that it will last forever. 

Start with Value at Risk: what are you willing to lose if your estimate is wrong? Then decide whether the cost of estimating is justified by that risk.

16. This is not religion. We need to get more technical and more economic about software discussions. 

I’ve seen #NoEstimates advocates treat these questions of requirements quality, estimation effectiveness, agility, and predictability as value-laden moral discussions. "Agile" is a compliment and "Waterfall" is an invective. The tone of the argument is more moral than economic. The arguments are of the form, "Because this practice is good," rather than of the form, "Because this practice supports business goals X, Y, and Z." 

That religion isn’t unique to Agile advocates, and I’ve seen just as much religion on the non-Agile sides of various discussions. It would be better for the industry at large if people could stay more technical and economic more often. 

Agile is About Creating Options, Right?

I subscribe to the idea that engineering is about doing for a dime what any fool can do for a dollar, i.e., it's about economics. If we assume professional-level skills in agile practices, requirements, and estimation, the decision about how much work to do up front on a project should be an economic decision about which practices will achieve the business goals in the most cost-effective way. We consider issues including the cost of changing requirements and the value of predictability. If the environment is volatile and a high percentage of requirements are likely to spoil before they can be implemented, then it’s a bad economic decision to do lots of up front requirements work. If predictability provides little or no business value, emphasizing up front estimation work would be a bad economic decision.

On the other hand, if predictability does provide business value, then we should support that in a cost-effective way. If we do a lot of the requirements work up front, and some requirements spoil, but most do not, and that supports improved predictability, that would be a good economic choice. 
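A back-of-the-envelope version of that economic comparison - the costs and spoilage rates below are pure assumptions:

    # Sketch: expected cost of up-front requirements work vs. emergent discovery.
    def costs(upfront_fee, n_reqs, spoil_rate, rework_per_req, discovery_per_req):
        upfront = upfront_fee + spoil_rate * n_reqs * rework_per_req
        emergent = n_reqs * discovery_per_req
        return upfront, emergent

    # Stable environment (15% spoilage) vs. volatile environment (70% spoilage).
    for spoil in (0.15, 0.70):
        up, em = costs(upfront_fee=40, n_reqs=100, spoil_rate=spoil,
                       rework_per_req=1.5, discovery_per_req=1.0)
        better = "up-front" if up < em else "emergent"
        print(f"spoilage {spoil:.0%}: up-front {up:.0f}, emergent {em:.0f} -> {better}")

Under these assumed numbers, up-front work wins in the stable environment and loses in the volatile one - exactly the economic framing of the paragraphs above.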

The economics of these decisions are affected by the skills of the people involved. If my team is great at Scrum but poor at estimation and requirements, the economics of up front vs. emergent will tilt toward Scrum. If my team is great at estimation and requirements but poor at Scrum, the economics will tilt toward estimation and requirements. 

Of course, skill sets are not divinely dictated or cast in stone; they can be improved through focused self-study and training. So we can treat the decision to invest in skills development as an economic issue too. 

Decision to Develop Skills is an Economic Decision Too

What is the cost of training staff to reach competency in estimation and requirements? Does the cost of achieving competency exceed the likely benefits that would derive from competency? That goes back to the question of how much the business values predictability. If the business truly places no value on predictability, there won’t be any ROI from training staff in practices that support predictability. But I do not see that as the typical case. 

My company and I can train software professionals to approach competency in both requirements and estimation in about a week. In my experience most businesses place enough value on predictability that investing a week to make that option available provides a good ROI to the business. Note: this is about making the option available, not necessarily exercising the option on every project. 

My company and I can also train software professionals to approach competency in a full complement of Scrum and other Agile technical practices in about a week. That produces a good ROI too. In any given case, I would recommend both sets of training. If I had to recommend only one or the other, sometimes I would recommend starting with the Agile practices. But my real recommendation is to "embrace the and" and develop both sets of skills.  

For context about training software professionals to "approach competency" in requirements, estimation, Scrum, and other Agile practices, I am using that term based on work we've done with our  Professional Development Ladder. In that ladder we define capability levels of "Introductory," "Competence," "Leadership," and "Mastery." A few days of classroom training will advance most people beyond Introductory and much of the way toward Competence in a particular skill. Additional hands-on experience, mentoring, and feedback will be needed to cement Competence in an area. Classroom study is just one way to acquire these skills. Self-study or working with an expert mentor can work about as well. The skills aren't hard to learn, but they aren't self-evident either. As I've said above, the state of the art in estimation, requirements, and agile practices has moved well beyond what even a smart person can discover on their own. Focused professional development of some kind or other is needed to acquire these skills. 

Is a week enough to accomplish real competency? My company has been training software professionals for almost 20 years, and our consultants have trained upwards of 50,000 software professionals during that time. All of our consultants are highly experienced software professionals first, trainers second. We don't have any methodological ax to grind, so we focus on what is best for each individual client. We all work hands-on with clients so we know what is actually working on the ground and what isn't, and that experience feeds back into our training. We have also invested heavily in training our consultants to be excellent trainers. As a result, our service quality is second to none, and we can make a tremendous amount of progress with a few days of training. Of course additional coaching, mentoring and support are always helpful. 

17. Agility plus predictability is better than agility alone. 

Agility in the absence of steering targets, created by estimating in the presence of uncertainty, is of little value. Any closed loop control system requires rapid response to changing conditions and a steering signal, which may require an estimate of where we want to be when we arrive. 

Skills development in practices that support estimation and predictability vs. practices that support agility is not an either/or choice. A truly agile business would be able to be flexible when needed, or predictable when needed. A true software professional will be most effective when skilled in both skill sets. 

If you think your business values agility only, ask your business what it values. Businesses vary, and you might work in a business that truly does value agility over predictability or that values agility exclusively. Many businesses value predictability over agility, however, so don't just assume it's one or the other.  

I think it’s self-evident that a business that has both agility and predictability will outperform a business that has agility only. With today's powerful agile practices, especially Scrum, there's no reason we can't have both.  

Overall, #NoEstimates seems like the proverbial solution in search of a problem. I don't see businesses clamoring to get rid of estimates. I see them clamoring to get better estimates. The good news for them is that agile practices, Scrum in particular, can provide excellent support for agility and estimation at the same time. 

My closing thought, in this hashtag-happy discussion, is that #AgileWithEstimationWorksBest -- and #EstimationWithAgileWorksBest too. 

Woody has successfully created what he wanted - a discussion of sorts - about estimating. The trouble is that without a principled discussion it turns into personal anecdotes rather than fact-based dialog. Those of us asking for fact-based examples are then seen as improperly challenging the anecdotes, and since there is not yet any fact-based response, the need to improve the probability of success for software development goes unanswered, replaced by accusations and name calling.

Related articles: Making Conjectures Without Testable Outcomes · Root Cause of Project Failure · IT Risk Management · Deadlines Always Matter · Thinking, Talking, Doing on the Road to Improvement · Information Technology Estimating Quality
Categories: Project Management

Why Bother with Probability and Statistics?

Herding Cats - Glen Alleman - Wed, 08/19/2015 - 18:06

It is conjectured that uncertainty can be dealt with by ordinary means - open conversation, identification of the uncertainties, and strategies for handling them - and that quantitative methods are too elaborate and unnecessary for all but the most technical and complicated problems.

When asked what is meant by uncertainty, the answer many times is "probably" or "very likely" - but without any quantitative measure meaningful to the decision makers. Since the future is always uncertain in our project domain, making decisions in the presence of uncertainty is a critical success factor [1] for all project work. 

Decision making is one of the hard things in life. True decision-making occurs not when we already know the outcome, but when we do not know what to do. When we have to balance conflicting values, costs, schedule, needed capabilities, sort through complex situations, and deal with real uncertainty. To make decisions in the presence of this uncertainty we need to know the possible outcomes of our decision, the possible alternatives and their costs - in the short term and in the long term. Making these types of decisions requires we make estimates of all the variables involved in the decision-making process.

What Are Probabilities? 

There is a trend in the software development domain to redefine well established terms in mathematics, engineering, and science - it seems to suit the needs of those proffering that in the presence of uncertainty decisions can't be made.

Probabilities represent our state of knowledge. They are a statement of how likely we think an event might occur, or how likely a value is to fall within a range of values.

These probabilities are based in uncertainty, and uncertainty comes in two forms. Aleatory and Epistemic. 

  • Aleatory uncertainty is the natural randomness in a process. For discrete variables, the randomness is parameterized by the probability of each possible value. For continuous variables, the randomness is parameterized by the probability density function (pdf).
  • Epistemic uncertainty is the uncertainty in the model of the process. It is due to limited data and knowledge. The epistemic uncertainty is characterized by alternative models. For discrete random variables, the epistemic uncertainty is modeled by alternative probability distributions. For continuous random variables, the epistemic uncertainty is modeled by alternative probability density functions. In addition, there is epistemic uncertainty in parameters that are not random but have only a single correct (but unknown) value.

Both these uncertainties exist on projects. When making good decisions on projects we know something about these uncertainties and have handling plans for the resulting risk produced by the uncertainties.

  • For Aleatory uncertainty (irreducible risk) we need margin. The margin protects the project deliverables from unfavorable cost, schedule, and technical performance variances that are part of the naturally occurring randomness. A minimal margin sketch follows this list.
  • Epistemic uncertainty (reducible risk) can be addressed by buying down the uncertainty - paying money to learn more.
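Here is the promised sketch of sizing margin for the irreducible part - the lognormal variance model and the 80% confidence level are assumptions:

    # Sketch: set schedule margin to protect against aleatory (irreducible) variance.
    import random

    random.seed(11)
    PLANNED_DURATION = 100.0   # days, the deterministic plan (assumed)

    def actual_duration() -> float:
        # Natural randomness modeled as a right-skewed lognormal draw (assumed).
        return PLANNED_DURATION * random.lognormvariate(0.0, 0.15)

    samples = sorted(actual_duration() for _ in range(10_000))
    p80 = samples[int(len(samples) * 0.80)]
    print(f"80th percentile duration : {p80:.1f} days")
    print(f"margin to carry          : {p80 - PLANNED_DURATION:.1f} days")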

Buying down uncertainty, by the way, is a primary benefit of Agile Software Development, where forced short-term deliverables provide information that reduces risk. Agile is not a risk management process - many other steps are needed for that. But Agile is a means to reveal risk and take corrective action on much shorter time boundaries, reducing the accumulation of risk.

Some Background on Decision Making in the Presence of Uncertainty 

One way to distinguish good decisions from bad decisions is to assess the outcomes of those decisions. The criteria for calling a decision good or bad need some definition themselves. There are issues, of course. The results of the decision may not appear until some time in the future, but we need to know something about the possible results before we make the decision. We would also like to know the likely results of the alternatives - the choices that were not made or were rejected.

A fundamental purpose of quantitative decision making is to distinguish between good and bad decisions. And to provide criteria for assessing the goodness of the decision. To do this we need first to establish what the decision is about.

  • When do you think we'll be ready to go live with the needed capabilities we're paying you to develop?
  • If we switch from our legacy systems to an ERP system, how much will we save over the next 5 years, net of the total cost of the project?
  • On the list of desirable features, which ones can we still get by the current need date if we reduce the budget by 15%?

Making decisions like these in the presence of uncertainty by estimating future outcomes is a normal, everyday, business process. Any suggestion these decisions can be made without estimates is utter nonsense.
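For instance, a minimal expected-value comparison for the ERP question above - the probabilities and payoffs are invented for illustration:

    # Sketch: a two-alternative decision under uncertainty, compared by expected value.
    alternatives = {
        "switch to ERP": [(0.6, 1_200_000),   # (probability, 5-year net savings)
                          (0.4,  -300_000)],  # migration struggles
        "keep legacy":   [(1.0,   150_000)],  # modest, near-certain savings
    }

    for name, outcomes in alternatives.items():
        ev = sum(p * value for p, value in outcomes)
        print(f"{name:14s}: expected value ${ev:,.0f}")
    # Without estimates of these probabilities and payoffs there is nothing to compare.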

Decision analysis starts with defining what a decision is - a commitment of resources that is irrevocable only at some cost. If there is no cost associated with making the decision, or with changing your mind after the decision has been made, then - in the business domain - the decision was of little if any value. This is the value at risk discussion: how much are we willing to risk if we don't know, to some level of confidence, the outcome of our decision?

The elements of good decision analysis are shown below [2]. For any good decision and its decision making process, we need answers to the framing questions, some form of logic for making the decision, defined actionable steps from that decision, and then an assessment of the outcomes to inform future decisions - learning from our decisions.

(Figure: the elements of decision analysis, from [2])

Decision support systems that implement the process above are based in part on the underlying uncertainties of the systems under management. Research into the cost and schedule behaviors of these systems is well developed. Here's one example.

(Figure: example research into project cost and schedule behavior)

In the end, the decision making process will not meet the needs of the decision makers unless we have alternatives defined, information at hand - most often probabilistic information about future conditions in the presence of uncertainty - and values assigned to the outcomes. Without these, our decisions are going to turn out badly.

We're driving in the dark with the lights off while spending other people's money - and our project will end up upside down in a ditch.

Reference Material for Further Understanding

  1. Strategic Planning with Critical Success Factors and Future Scenarios: An Integrated Strategic Planning Framework, Technical Report CMU/SEI-2010-TR-037, ESC-TR-2010-102.
  2. Decision Analysis for the Professional, 4th Edition, Peter McNamee and John Celona.
  3. Real Options: Managing Strategic Investment in an Uncertain World, Martha Amram and Nalin Kulatilaka, Harvard Business School Press, 1999.
  4. Making Hard Decisions: An Introduction to Decision Analysis, Robert Clemen, Duxbury Press, 1996.
  5. Software Design as an Investment Activity: A Real Options Perspective, Kevin Sullivan and Prasad Chalasani.
  6. Probabilistic Modeling as an Exploratory Decision-Making Tool, Martin Pergler and Andrew Freeman, McKinsey & Company, Number 6, September 2008.
  7. Value at Risk for IS/IT Project and Portfolio Appraisal and Risk Management, Stefan Koch, Department of Information Business, Vienna University of Economics and BA, Austria.
Related articles: Making Conjectures Without Testable Outcomes · Root Cause of Project Failure · IT Risk Management
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Wed, 08/19/2015 - 14:55

The door of a bigoted mind opens outwards. The pressure of facts merely closes it more snugly.
- Ogden Nash

When there are new ideas being conjectured, it is best for the conversation to establish the principles on which those ideas can be tested. Without this, the person making the conjecture has to defend the idea on personality, personal anecdotes, and personal experience alone.

Categories: Project Management

Quote of the Month August 2015

From the Editor of Methods & Tools - Wed, 08/19/2015 - 09:39
Acknowledging that something isn’t working takes courage. Many organizations encourage people to spin things in the most positive light rather than being honest. This is counterproductive. Telling people what they want to hear just defers the inevitable realization that they won’t get what they expected. It also takes from them the opportunity to react to […]

17 Theses on Software Estimation (Expanded)

10x Software Development - Steve McConnell - Tue, 08/18/2015 - 17:51

This post is part of an ongoing discussion with Ron Jeffries, which originated from some comments I made about #NoEstimates. You can read my original "17 Theses on Software Estimation" post here. That post has been completely subsumed by this post if you want to just read this one. You can read Ron's response to my original 17 Theses article here. This post doesn't respond to Ron's post per se. It has been expanded to address points he raised, but responses to him are more implicit than explicit. 

Arriving late to the #NoEstimates discussion, I’m amazed at some of the assumptions that have gone unchallenged, and I’m also amazed at the absence of some fundamental points that no one seems to have made so far. The point of this article is to state unambiguously what I see as the arguments in favor of estimation in software and put #NoEstimates in context.  

1. Estimation is often done badly and ineffectively and in an overly time-consuming way. 

My company and I have taught upwards of 10,000 software professionals better estimation practices, and believe me, we have seen every imaginable horror story of estimation done poorly. There is no question that “estimation is often done badly” is a true observation of the state of the practice. 

2. The root cause of poor estimation is usually lack of estimation skills. 

Estimation done poorly is most often due to lack of estimation skills. Smart people using common sense is not sufficient to estimate software projects. Reading two page blog articles on the internet is not going to teach anyone how to estimate very well. Good estimation is not that hard, once you’ve developed the skill, but it isn’t intuitive or obvious, and it requires focused self-education or training. 

One of the most common estimation problems is people engaging with so-called estimates that are not really Estimates, but that are really Business Targets or requests for Commitments. You can read more about that in my estimation book or watch my short video on Estimates, Targets, and Commitments. 

3. Many comments in support of #NoEstimates demonstrate a lack of basic software estimation knowledge. 

I don’t expect most #NoEstimates advocates to agree with this thesis, but as someone who does know a lot about estimation I think it’s clear on its face. Here are some examples:

(a) Are estimation and forecasting the same thing? As far as software estimation is concerned, yes they are. (Just do a Google or Bing search of “definition of forecast”.) Estimation, forecasting, prediction--it's all the same basic activity, as far as software estimation is concerned. 

(b) Is showing someone several pictures of kitchen remodels that have been completed for $30,000 and implying that the next kitchen remodel can be completed for $30,000 estimation? Yes, it is. That’s an implementation of a technique called Reference Class Forecasting. 
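A minimal sketch of Reference Class Forecasting over a set of hypothetical completed remodels:

    # Sketch: Reference Class Forecasting - predict the next project from the
    # observed distribution of similar completed projects. Data is hypothetical.
    import statistics

    completed_costs = [28_000, 30_000, 31_500, 27_000, 34_000, 29_500, 30_500]

    median = statistics.median(completed_costs)
    print(f"reference class forecast: ~${median:,.0f} "
          f"(observed range ${min(completed_costs):,.0f}-${max(completed_costs):,.0f})")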

(c) Does doing a few iterations, calculating team velocity, and then using that empirical velocity data to project a completion date count as estimation? Yes, it does. Not only is it estimation, it is a really effective form of estimation. I’ve heard people argue that because velocity is empirically based, it isn’t estimation. Good estimation is empirically based, so that argument exposes a lack of basic understanding of the nature of estimation. 

(d) Is counting the number of stories completed in each sprint rather than story points, calculating the average number of stories completed each sprint, and using that for sprint planning, estimation? Yes, for the same reasons listed in point (c). 
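A combined sketch of (c) and (d) - the sprint history and backlog size below are hypothetical:

    # Sketch: project completion from empirical throughput (story counts).
    import statistics

    stories_per_sprint = [7, 9, 8, 6, 8]   # counted, not pointed - point (d)
    backlog_stories = 40
    sprint_length_weeks = 2

    avg = statistics.mean(stories_per_sprint)
    sprints_needed = backlog_stories / avg
    print(f"average throughput: {avg:.1f} stories/sprint")
    print(f"forecast          : {sprints_needed:.1f} sprints "
          f"(~{sprints_needed * sprint_length_weeks:.0f} weeks)")
    # Projecting from empirical velocity/throughput is estimation - calibrated,
    # proxy-based estimation, per the surrounding text.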

(e) Most of the #NoEstimates approaches that have been proposed, including (c) and (d) above, are approaches that were defined in my book Software Estimation: Demystifying the Black Art, published in 2006. The fact that people are claiming these long-ago-published techniques as "new" under the umbrella of #NoEstimates is another reason I say many of the #NoEstimates comments demonstrate a lack of basic software estimation knowledge. 

(f) Is estimation time consuming and a waste of time? One of the most common symptoms of lack of estimation skill is spending too much time on ineffective activities. This work is often well-intentioned, but it’s common to see well-intentioned people doing more work than they need to get worse estimates than they could be getting.

(g) Is it possible to get good estimates? Absolutely. We have worked with multiple companies that have gotten to the point where they are delivering 90%+ of their projects on time, on budget, with intended functionality. 

One reason many people find estimation discussions (aka negotiations) challenging is that they don't really believe the estimates they came up with themselves. Once you develop the skill needed to estimate well -- as well as getting clear about whether the business is really talking about an estimate, a target, or a commitment -- estimation discussions become more collaborative and easier. 

4. Being able to estimate effectively is a skill that any true software professional needs to develop, even if they don’t need it on every project. 

“Estimation often doesn't work very well, therefore software professionals should not develop estimation skill” – this is a common line of reasoning in #NoEstimates. This argument doesn't make any more sense than the argument, "Scrum often doesn't work very well, therefore software professionals should not try to use Scrum." The right response in both cases is, "Get better at the practice," not "Throw out the practice altogether." 

#NoEstimates advocates say they're just exploring the contexts in which a person or team might be able to do a project without estimating. That exploration is fine, but until someone can show that the vast majority of projects do not need estimates at all, deciding to not estimate and not develop estimation skills is premature. And my experience tells me that when all the dust settles, the cases in which no estimates are needed will be the exception rather than the rule. Thus software professionals will benefit -- and their organizations will benefit -- from developing skill at estimation. 

I would go further and say that a true software professional should develop estimation skill so that they can estimate competently on the numerous projects that require estimation. I don't make these claims about software professionalism lightly. I spent four years as chair of the IEEE committee that oversees software professionalism issues for the IEEE, including overseeing the Software Engineering Body of Knowledge, university accreditation standards, professional certification programs, and coordination with state licensing bodies. I spent another four years as vice-chair of that committee. I also wrote a book on the topic, so if you're interested in going into detail on software professionalism, you can check out my book, Professional Software Development. Or you can check out a much briefer, more specific explanation in my company's white paper about our Professional Development Ladder.

5. Estimates serve numerous legitimate, important business purposes.

Estimates are used by businesses in numerous ways, including: 

  • Allocating budgets to projects (i.e., estimating the effort and budget of each project)
  • Making cost/benefit decisions at the project/product level, which is based on cost (software estimate) and benefit (defined feature set)
  • Deciding which projects get funded and which do not, which is often based on cost/benefit
  • Deciding which projects get funded this year vs. next year, which is often based on estimates of which projects will finish this year
  • Deciding which projects will be funded from CapEx budget and which will be funded from OpEx budget, which is based on estimates of total project effort, i.e., budget
  • Allocating staff to specific projects, i.e., estimates of how many total staff will be needed on each project
  • Allocating staff within a project to different component teams or feature teams, which is based on estimates of scope of each component or feature area
  • Allocating staff to non-project work streams (e.g., budget for a product support group, which is based on estimates for the amount of support work needed)
  • Making commitments to internal business partners (based on projects’ estimated availability dates)
  • Making commitments to the marketplace (based on estimated release dates)
  • Forecasting financials (based on when software capabilities will be completed and revenue or savings can be booked against them)
  • Tracking project progress (comparing actual progress to planned (estimated) progress)
  • Planning when staff will be available to start the next project (by estimating when staff will finish working on the current project)
  • Prioritizing specific features on a cost/benefit basis (where cost is an estimate of development effort)

These are just a subset of the many legitimate reasons that businesses request estimates from their software teams. I would be very interested to hear how #NoEstimates advocates suggest that a business would operate if you remove estimates for each of these purposes.

The #NoEstimates response to these business needs is typically of the form, “Estimates are inaccurate and therefore not useful for these purposes” rather than, “The business doesn’t need estimates for these purposes.” 

That argument really just says that businesses are currently operating on the basis of much worse predictions than they should be, and probably making poorer decisions as a result, because the software staff are not providing very good estimates. If software staff provided more accurate estimates, the business would make better decisions in each of these areas, which would make the business stronger. 

The other #NoEstimates response is that "Estimates are always waste." I don't agree with that. By that line of reasoning, daily stand ups are waste. Sprint planning is waste. Retrospectives are waste. Testing is waste. Everything but code-writing itself is waste. I realize there are Lean purists who hold those views, but I don't buy any of that. 

Estimates, done well, support business decision making, including the decision not to do a project at all. Taking the #NoEstimates philosophy to its logical conclusion, if #NoEstimates eliminates waste, then #NoProjectAtAll eliminates even more waste. In most cases, the business will need an estimate to decide not to do the project at all.  

In my experience businesses usually value predictability, and in many cases, they value predictability more than they value agility. Do businesses always need predictability? No, there are few absolutes in software. Do businesses usually need predictability? In my experience, yes, and they need it often enough that doing it well makes a positive contribution to the business. Responding to change is also usually needed, and doing it well also makes a positive contribution to the business. This whole topic is a case where both predictability and agility work better than either/or. Competency in estimation should be part of the definition of a true software professional, as should skill in Scrum and other agile practices. 

6. Part of being an effective estimator is understanding that different estimation techniques should be used for different kinds of estimates. 

One thread that runs throughout the #NoEstimates discussions is lack of clarity about whether we’re estimating before the project starts, very early in the project, or after the project is underway. The conversation is also unclear about whether the estimates are project-level estimates, task-level estimates, sprint-level estimates, or some combination. Some of the comments imply ineffective attempts to combine kinds of estimates—the most common confusion I’ve read is trying to use task-level estimates to estimate a whole project, which is another example of lack of software estimation skill. 

You can see a summary of estimation techniques and their areas of applicability here. This quick reference sheet assumes familiarity with concepts and techniques from my estimation book and is not intended to be intuitive on its own. But just looking at the categories you can see that different techniques apply for estimating size, effort, schedule, and features. Different techniques apply for small, medium, and large projects. Different techniques apply at different points in the software lifecycle, and different techniques apply to Agile (iterative) vs. Sequential projects. Effective estimation requires that the right kind of technique be applied to each different kind of estimate. 

Learning these techniques is not hard, but it isn't intuitive. Learning when to use each technique, as well as learning each technique, requires some professional skills development. 

When we separate the kinds of estimates we can see parts of projects where estimates are not needed. One of the advantages of Scrum is that it eliminates the need to do any sort of miniature milestone/micro-stone/task-based estimates to track work inside a sprint. If I'm doing sequential development without Scrum, I need those detailed estimates to plan and track the team's work. If I'm using Scrum, once I've started the sprint I don't need estimation to track the day-to-day work, because I know where I'm going to be in two weeks and there's no real value added by predicting where I'll be day-by-day within that two week sprint. 

That doesn't eliminate the need for estimates in Scrum entirely, however. I still need an estimate during sprint planning to determine how much functionality to commit to for that sprint. Backing up earlier in the project, before the project has even started, businesses need estimates for all the business purposes described above, including deciding whether to do the project at all. They also need to decide how many people to put on the project, how much to budget for the project, and so on. Treating all the requirements as emergent on a project is fine for some projects, but you still need to decide whether you're going to have a one-person team treating requirements as emergent, or a five-person team, or a 50-person team. Defining team size in the first place requires estimation. 

7. Estimation and planning are not the same thing, and you can estimate things that you can’t plan. 

Many of the examples given in support of #NoEstimates are actually indictments of overly detailed waterfall planning, not estimation. The simple way to understand the distinction is to remember that planning is about “how” and estimation is about “how much.” 

Can I “estimate” a chess game, if by “estimate” I mean how each piece will move throughout the game? No, because that isn’t estimation; it’s planning; it’s “how.”

Can I estimate a chess game in the sense of “how much”? Sure. I can collect historical data on the length of chess games and know both the average length and the variation around that average and predict the length of a game. 

More to the point, estimating an individual software project is not analogous to estimating one chess game. It’s analogous to estimating a series of chess games. People who are not skilled in estimation often assume it’s more difficult to estimate a series of games than to estimate an individual game, but estimating the series is actually easier. Indeed, the more chess games in the set, the more accurately we can estimate the set, once you understand the math involved. 

This all goes back to the idea that we need estimates for different purposes at different points in a project. An agile project may be about "steering" rather than estimating once the project gets underway. But it may not be allowed to get underway in the first place if there aren't early estimates that show there's a business case for doing the project. 

8. You can estimate what you don’t know, up to a point. 

In addition to estimating “how much,” you can also estimate “how uncertain.” In the #NoEstimates discussions, people throw out lots of examples along the lines of, “My project was doing unprecedented work in Area X, and therefore it was impossible to estimate the whole project.” This is essentially a description of the common estimation mistake of allowing high variability in one area to insert high variability into the whole project's estimate rather than just that one area's estimate. 

Most projects contain a mix of precedented and unprecedented work (also known as certain/uncertain, high risk/low risk, predictable/unpredictable, high/low variability--all of which are loose synonyms as far as estimation is concerned). Decomposing the work, estimating uncertainty in each area, and building up an overall estimate that includes that uncertainty proportionately is one technique for dealing with uncertainty in estimates. 

Why would that ever be needed? Because a business that perceives a whole project as highly risky might decide not to approve the whole project. A business that perceives a project as low to moderate risk overall, with selected areas of high risk, might decide to approve that same project. 

9. Both estimation and control are needed to achieve predictability. 

Much of the writing on Agile development emphasizes project control over project estimation. I actually agree that project control is more powerful than project estimation; however, effective estimation usually plays an essential role in achieving effective control. 

To put this in Agile Manifesto-like terms:

We have come to value project control over project estimation, 
as a means of achieving predictability.

As in the Agile Manifesto, we value both terms, which means we still value the term on the right. 

#NoEstimates seems to pay lip service to both terms, but the emphasis from the hashtag onward is really about discarding the term on the right. This is another case where I believe the right answer is both/and, not either/or.

I wrote an essay when I was Editor in Chief of IEEE Software called "Sitting on the Suitcase" that discussed the interplay between estimation and control and discussed why we estimate even though we know the activity has inherent limitations. This is still one of my favorite essays. 

10. People use the word "estimate" sloppily. 

No doubt. Lack of understanding of estimation is not limited to people tweeting about #NoEstimates. Business partners often use the word “estimate” to refer to what would more properly be called a “planning target” or “commitment.”  

The word "estimate" does have a clear definition, for those who want to look it up.  

The gist of these definitions is that an "estimate" is something that is approximate, rough, or tentative, and is based upon impressions or opinion. People don't always use the word that way, and you can see my video on that topic here.

Because people use the word sloppily, one common mistake software professionals make is trying to create a predictive, approximate estimate when the business is really asking for a commitment, or asking for a plan to meet a target, but using the word “estimate” to ask for that. It's common for businesses to think they have a problem with estimation when the bigger problem is with their commitment process. 

We have worked with many companies to achieve organizational clarity about estimates, targets, and commitments. Clarifying these terms makes a huge difference in the dynamics around creating, presenting, and using software estimates effectively. 

11. Good project-level estimation depends on good requirements, and average requirements skills are about as bad as average estimation skills. 

A common refrain in Agile development is “It’s impossible to get good requirements,” and that statement has never been true. I agree that it’s impossible to get perfect requirements, but that isn’t the same thing as getting good requirements. I would agree that “It is impossible to get good requirements if you don’t have very good requirement skills,” and in my experience that is a common case.  I would also agree that “Projects usually don’t have very good requirements,” as an empirical observation—but not as a normative statement that we should accept as inevitable. 

Like estimation skill, requirements skill is something that any true software professional should develop, and the state of the art in requirements at this time is far too advanced for even really smart people to invent everything they need to know on their own. Like estimation skill, a person is not going to learn adequate requirements skills by reading blog entries or watching short YouTube videos. Acquiring skill in requirements requires focused, book-length self-study or explicit training or both. 

If your business truly doesn’t care about predictability (and some truly don’t), then letting your requirements emerge over the course of the project can be a good fit for business needs. But if your business does care about predictability, you should develop the skill to get good requirements, and then you should actually do the work to get them. You can still do the rest of the project using by-the-book Scrum, and then you’ll get the benefits of both good requirements and Scrum.

From my point of view, many agile-related claims boil down to a question that looks like this: What practices should you use if you have: 

  • Mediocre skill in Estimation
  • Mediocre skill in Requirements
  • Good to excellent skill in Scrum and Related Practices

Not too surprisingly, the answer to this question is Scrum and Related Practices. I think a more interesting question is: What practices should you use if you have: 

  • Good to excellent skill in Estimation
  • Good to excellent skill in Requirements
  • Good to excellent skill in Scrum and related practices

Having competence in multiple areas opens up some doors that will be closed with a lesser skill set. In particular, it opens up the ability to favor predictability if your business needs that, or to favor flexibility if your business needs that. Agile is supposed to be about options, and I think that includes the option to develop in the way that best supports the business. 

12. The typical estimation context involves moderate volatility and a moderate level of unknowns. 

Ron Jeffries writes, “It is conventional to behave as if all decent projects have mostly known requirements, low volatility, understood technology, …, and are therefore capable of being more or less readily estimated by following your favorite book.” I don’t know who said that, but it wasn’t me, and I agree with Ron that that statement doesn’t describe most of the projects that I have seen. 

I think it would be more accurate to say, “The typical software project has requirements that are knowable in principle, but that are mostly unknown in practice due to insufficient requirements skills; low volatility in most areas with high volatility in selected areas; and technology that tends to be either mostly leading edge or mostly mature." In other words, software projects are challenging, but the challenge level is manageable. If you have developed the full set of skills a software professional should have, you will be able to overcome most or all of the challenges. 

Of course there is a small percentage of projects that do have truly unknowable requirements and across-the-board volatility. I consider those to be corner cases. It’s good to explore corner cases, but also good not to lose sight of which cases are most common. 

13. Responding to change over following a plan does not imply not having a plan. 

It’s amazing that in 2015 we’re still debating this point. Many of the #NoEstimates comments literally emphasize not having a plan, i.e., treating 100% of the project as emergent. They advocate a process—typically Scrum—but no plan beyond instantiating Scrum. 

According to the Agile Manifesto, while agile is supposed to value responding to change, it also is supposed to value following a plan. The Agile Manifesto says, "there is value in the items on the right" which includes the phrase "following a plan." 

While I agree that minimizing planning overhead is good project management, doing no planning at all is inconsistent with the Agile Manifesto, not acceptable to most businesses, and wastes some of Scrum's capabilities. One of the amazingly powerful aspects of Scrum is that it gives you the ability to respond to change; that doesn’t imply that you need to avoid committing to plans in the first place. 

My company and I have seen Agile adoptions shut down in some companies because an Agile team is unwilling to commit to requirements up front or refuses to estimate up front. As a strategy, that’s just dumb. If you fight your business about providing estimates, even if you win the argument that day, you will still get knocked down a peg in the business’s eyes. 

I've commented in other contexts that I have come to the conclusion that most businesses would rather be wrong than vague. Businesses prefer to plant a stake in the ground and move it later rather than avoiding planting a stake in the ground in the first place. The assertion that businesses value flexibility over predictability is Agile's great unvalidated assumption. Some businesses do value flexibility over predictability, but most do not. If in doubt, ask your business. 

If your business does value predictability, use your velocity to estimate how much work you can do over the course of a project, and commit to a product backlog based on your demonstrated capacity for work. Your business will like that. Then, later, when your business changes its mind—which it probably will—you’ll still be able to respond to change. Your business will like that even more.  
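
As a rough sketch of what that looks like in practice (the sprint history and backlog size below are hypothetical), demonstrated velocity can be turned into a commitment range rather than a single date:

```python
# A minimal sketch: commit to a product backlog based on demonstrated velocity.
import math
from statistics import mean, stdev

observed_velocities = [21, 24, 19, 23, 22]  # story points per sprint (hypothetical)
backlog_points = 260                        # estimated backlog size (hypothetical)

v_mean, v_sd = mean(observed_velocities), stdev(observed_velocities)

# Quote a range: one standard deviation faster or slower than demonstrated capacity.
optimistic  = math.ceil(backlog_points / (v_mean + v_sd))
expected    = math.ceil(backlog_points / v_mean)
pessimistic = math.ceil(backlog_points / (v_mean - v_sd))

print(f"Sprints to complete backlog: {optimistic}-{pessimistic} (expected ~{expected})")
```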

14. Scrum provides better support for estimation than waterfall ever did, and there does not have to be a trade off between agility and predictability. 

Some of the #NoEstimates discussion seems to interpret challenges to #NoEstimates as challenges to the entire ecosystem of Agile practices, especially Scrum. Many of the comments imply that estimation will somehow impair agility. The examples cited to support that are mostly examples of unskilled misapplications of estimation practices, so I see them as additional examples of people not understanding estimation very well. 

The idea that we have to trade off agility to achieve predictability is a false trade off. If we define "agility" to mean, "no notion of our destination" or "treat all the requirements on the project as emergent," then of course there is a trade off, by definition. If, on the other hand, we define "agility" as "ability to respond to change," then there doesn't have to be any trade off. Indeed, if no one had ever uttered the word “agile” or applied it to Scrum, I would still want to use Scrum because of its support for estimation and predictability, as well as for its support for responding to change. 

The combination of story pointing, velocity calculation, product backlog, short iterations, just-in-time sprint planning, and timely retrospectives after each sprint creates a nearly perfect context for effective estimation. To put it in estimation terminology, story pointing is a proxy-based estimation technique. Velocity calibrates the estimate with project data. The product backlog (when constructed with estimation in mind) gives us a very good proxy for size. Sprint planning and retrospectives give us the ability to "inspect and adapt" our estimates. All this means that Scrum provides better support for estimation than waterfall ever did. 

If a company truly is operating in a high uncertainty environment, Scrum can be an effective approach. In the more typical case in which a company is operating in a moderate uncertainty environment, Scrum is well-equipped to deal with the moderate level of uncertainty and provide high predictability (e.g., estimation) at the same time. 

15. There are contexts where estimates provide little value. 

I don’t estimate how long it will take me to eat dinner, because I know I’m going to eat dinner regardless of what the estimate says. If I have a defect that keeps taking down my production system, the business doesn’t need an estimate for that because the issue needs to get fixed whether it takes an hour, a day, or a week. 

The most common context I see where estimates are not done on an ongoing basis and truly provide little business value is the online context, especially mobile, where cycle times are measured in days or shorter, the business context is highly volatile, and the mission truly is, “Always do the next most useful thing with the resources available.” 

In both these examples, however, there is a point on the scale at which estimates become valuable. If the work on the production system stretches into weeks or months, the business is going to want and need an estimate. As the mobile app matures from one person working for a few days to a team of people working for a few weeks, with more customers depending on specific functionality, the business is going to want more estimates. As the group doing the work expands, they'll need budget and headcount, and those numbers are determined by estimates. Enjoy the #NoEstimates context while it lasts; don’t assume that it will last forever. 

16. This is not religion. We need to get more technical and more economic about software discussions. 

I’ve seen #NoEstimates advocates treat these questions of requirements quality, estimation effectiveness, agility, and predictability as value-laden moral discussions. "Agile" is a compliment and "Waterfall" is an invective. The tone of the argument is more moral than economic. The arguments are of the form, "Because this practice is good," rather than of the form, "Because this practice supports business goals X, Y, and Z." 

That religion isn’t unique to Agile advocates, and I’ve seen just as much religion on the non-Agile sides of various discussions. It would be better for the industry at large if people could stay more technical and economic more often. 

Agile is About Creating Options, Right?

I subscribe to the idea that engineering is about doing for a dime what any fool can do for a dollar, i.e., it's about economics. If we assume professional-level skills in agile practices, requirements, and estimation, the decision about how much work to do up front on a project should be an economic decision about which practices will achieve the business goals in the most cost-effective way. We consider issues including the cost of changing requirements and the value of predictability. If the environment is volatile and a high percentage of requirements are likely to spoil before they can be implemented, then it’s a bad economic decision to do lots of up front requirements work. If predictability provides little or no business value, emphasizing up front estimation work would be a bad economic decision.

On the other hand, if predictability does provide business value, then we should support that in a cost-effective way. If we do a lot of the requirements work up front, and some requirements spoil, but most do not, and that supports improved predictability, that would be a good economic choice. 
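
A back-of-the-envelope way to see that trade-off, with invented numbers for the spoilage rate, the cost of specifying a requirement up front, and the rework avoided per surviving requirement:

```python
# A toy model of the economics of up-front requirements work.
def upfront_requirements_payoff(n_reqs, spoilage_rate, cost_per_req, rework_saved_per_req):
    """Expected net value of specifying requirements up front (illustrative only)."""
    cost = n_reqs * cost_per_req              # paid on every requirement
    surviving = n_reqs * (1 - spoilage_rate)  # spoiled requirements pay back nothing
    return surviving * rework_saved_per_req - cost

# Stable environment: 10% spoilage -> up-front work pays off.
print(upfront_requirements_payoff(100, 0.10, 1.0, 3.0))   # 170.0
# Volatile environment: 80% spoilage -> up-front work loses money.
print(upfront_requirements_payoff(100, 0.80, 1.0, 3.0))   # -40.0
```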

The economics of these decisions are affected by the skills of the people involved. If my team is great at Scrum but poor at estimation and requirements, the economics of up front vs. emergent will tilt toward Scrum. If my team is great at estimation and requirements but poor at Scrum, the economics will tilt toward estimation and requirements. 

Of course, skill sets are not divinely dictated or cast in stone; they can be improved through focused self-study and training. So we can treat the decision to invest in skills development as an economic issue too. 

Decision to Develop Skills is an Economic Decision Too

What is the cost of training staff to reach competency in estimation and requirements? Does the cost of achieving competency exceed the likely benefits that would derive from competency? That goes back to the question of how much the business values predictability. If the business truly places no value on predictability, there won’t be any ROI from training staff in practices that support predictability. But I do not see that as the typical case. 
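
One hedged way to frame that ROI question, with placeholder numbers standing in for your own training costs and your own valuation of predictability:

```python
# A back-of-the-envelope ROI sketch for the skills-investment decision.
team_size            = 8
training_cost_per_hd = 2_500    # one week of training per head (assumed)
lost_week_per_hd     = 4_000    # one week of lost productivity per head (assumed)
annual_benefit       = 150_000  # assumed yearly value of predictability to the business

investment = team_size * (training_cost_per_hd + lost_week_per_hd)
roi = (annual_benefit - investment) / investment
print(f"Investment: ${investment:,}; first-year ROI: {roi:.0%}")
# Investment: $52,000; first-year ROI: 188%
```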

My company and I can train software professionals to approach competency in both requirements and estimation in about a week. In my experience most businesses place enough value on predictability that investing a week to make that option available provides a good ROI to the business. Note: this is about making the option available, not necessarily exercising the option on every project. 

My company and I can also train software professionals to approach competency in a full complement of Scrum and other Agile technical practices in about a week. That produces a good ROI too. In any given case, I would recommend both sets of training. If I had to recommend only one or the other, sometimes I would recommend starting with the Agile practices. But my real recommendation is to "embrace the and" and develop both sets of skills.  

For context about training software professionals to "approach competency" in requirements, estimation, Scrum, and other Agile practices, I am using that term based on work we've done with our Professional Development Ladder. In that ladder we define capability levels of "Introductory," "Competence," "Leadership," and "Mastery." A few days of classroom training will advance most people beyond Introductory and much of the way toward Competence in a particular skill. Additional hands-on experience, mentoring, and feedback will be needed to cement Competence in an area. Classroom study is just one way to acquire these skills. Self-study or working with an expert mentor can work about as well. The skills aren't hard to learn, but they aren't self-evident either. As I've said above, the state of the art in estimation, requirements, and agile practices has moved well beyond what even a smart person can discover on their own. Focused professional development of some kind or other is needed to acquire these skills. 

Is a week enough to accomplish real competency? My company has been training software professionals for almost 20 years, and our consultants have trained upwards of 50,000 software professionals during that time. All of our consultants are highly experienced software professionals first, trainers second. We don't have any methodological ax to grind, so we focus on what is best for each individual client. We all work hands-on with clients so we know what is actually working on the ground and what isn't, and that experience feeds back into our training. We have also invested heavily in training our consultants to be excellent trainers. As a result, our service quality is second to none, and we can make a tremendous amount of progress with a few days of training. Of course additional coaching, mentoring, and support are always helpful. 

17. Agility plus predictability is better than agility alone. 

Skills development in practices that support estimation and predictability vs. practices that support agility is not an either/or choice. A truly agile business would be able to be flexible when needed, or predictable when needed. A true software professional will be most effective when skilled in both skill sets. 

If you think your business values agility only, ask your business what it values. Businesses vary, and you might work in a business that truly does value agility over predictability or that values agility exclusively. Many businesses value predictability over agility, however, so don't just assume it's one or the other.  

I think it’s self-evident that a business that has both agility and predictability will outperform a business that has agility only. With today's powerful agile practices, especially Scrum, there's no reason we can't have both.  

Overall, #NoEstimates seems like the proverbial solution in search of a problem. I don't see businesses clamoring to get rid of estimates. I see them clamoring to get better estimates. The good news for them is that agile practices, Scrum in particular, can provide excellent support for agility and estimation at the same time. 

My closing thought, in this hashtag-happy discussion, is that #AgileWithEstimationWorksBest -- and #EstimationWithAgileWorksBest too. 


Quickly Help Users Identify Their Needs

Mike Cohn's Blog - Tue, 08/18/2015 - 15:00

Many projects bog down during requirements gathering. This is dangerous because it wastes valuable time, often setting up a project to be late.

To avoid this problem, I like to engage users and stakeholders in very lightweight story-writing workshops. Before anyone arrives in the meeting room, I will prepare the room by making sure there are pens and index cards within reach of every chair in the room. Even if you will be using a tool to maintain your product backlog, pen and paper are great for this type of early brainstorming.

I then write on a whiteboard or large sheet of paper:

As a ________

I want _________

so that ________

You’ll probably recognize that as the usual format of a user story, but your users may not. So I tell everyone that we’re gathered to fill in the blanks. We may start with a brief effort to identify who the product’s users are (the first of the three blanks). But most of the time will be spent filling in the second and third blanks (what and why).

I find that 90 minutes of this is usually plenty of brainstorming to load up a few months worth of high-priority items in a product backlog. That’s a guideline and will, of course, need to be adjusted based on the number of participants, previous agreement on the product vision, and other factors. Save time by avoiding prioritization discussions. Leave that to the product owner to be done outside the story-writing workshop.

Observation Issues and Root Cause Analysis

Herding Cats - Glen Alleman - Mon, 08/17/2015 - 17:29

Todd Little posted a comment on "How To Lie With Statistics" about his observations on the chart contained in that original post. 

As Todd mentions in his response [his comment was quoted as a screenshot in the original post].

The Cone of Uncertainty chart comes from the work of Barry Boehm and colleagues, "Reducing Estimation Uncertainty with Continuous Assessment: Tracking the 'Cone of Uncertainty.'" In this paper Dr. Boehm speaks to the lack of continuous updating of the estimates made early in the program as a source of unfavorable cost and schedule outcomes. 

As long as the projects are not re-assessed or the estimations not re-visited, the cones of uncertainty are not effectively reduced [1]. 

The Cone of Uncertainty is a notional example of how to increase the accuracy and precision of software development estimates with continuous reassessments. For programs in the federal space subject to FAR 34.2 and DFARS 34.201, reporting the Estimate to Complete (ETC) and Estimate at Completion (EAC) is mandatory on a monthly basis. This is rarely done in the commercial world, with the expected results shown in Todd's chart for his data and DeMarco's data.
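
For readers outside the earned-value world, the standard formulas behind a monthly ETC/EAC update look roughly like this (the dollar figures are illustrative):

```python
# Standard earned-value bookkeeping for a monthly ETC/EAC update.
BAC = 1_000_000  # Budget At Completion
EV  =   300_000  # Earned Value: budgeted cost of the work actually performed
AC  =   375_000  # Actual Cost of that work

CPI = EV / AC             # Cost Performance Index (< 1.0 means over-running)
ETC = (BAC - EV) / CPI    # Estimate To Complete the remaining work at current efficiency
EAC = AC + ETC            # Estimate At Completion

print(f"CPI={CPI:.2f}  ETC=${ETC:,.0f}  EAC=${EAC:,.0f}")
# CPI=0.80  ETC=$875,000  EAC=$1,250,000
```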

The core issue, from current research at PARCA (http://www.acq.osd.mil/parca) on Root Cause Analysis (where I have worked as a support contractor), is that many of the problems are poor estimates when the program was baselined and failure to update the ETC and EAC with credible information about risks and physical percent complete.

The data reported in Todd's original chart are the results of projects based on estimates that may or may not have been credible. So the analysis of the outcomes of the completed projects is Open Loop: the target estimates measured against the actual outcomes may or may not have been credible estimates in the first place. Showing project overages alone doesn't provide the information needed to correct the problem. The estimate may have been credible, but the execution failed to perform as planned.

With this Open Loop assessment it is difficult to determine any corrective actions. Todd's complete presentation, "Uncertainty Surrounding Cone of Uncertainty," speaks to some of the possible root causes of the mismatch between Estimates and Actuals. As Todd mentions in his response, diagnosing causes was not the purpose of his chart; rather, I suspect, it was simply to show the existence of the gap.

The difficulty, however, is that pointing out observations of problems, while useful for confirming that a problem exists, does little to correct its underlying cause.

At a recent ICEAA conference in San Diego, Dr. Boehm and several others spoke about this estimating problem. Several books and papers were presented addressing this issue.

These resources, and many more, speak to the Root Causes of both the estimating problem and the programmatic issues of staying on plan.

This is the Core Problem That Has To Be Addressed 

We need both good estimates and good execution to arrive as planned. There is plenty of evidence that we have an estimating problem; conferences (ICEAA and AACE) speak to this, as do government and FFRDC organizations (search for Root Cause Analysis at PARCA, IDA, MITRE, RAND, and SEI).

But the execution side is also a Root Cause. Much research has been done on procedures and processes for Keeping the Program Green - for example, the work presented at ICEAA in "The Cure for Cost and Schedule Growth," where more possible Root Causes are addressed from our research.

While Todd's chart shows the problem, the community - cost and schedule community - is still struggling with the corrective action. The chart is ½ the story. The other ½ is the poor performance on the execution side IF we had a credible baseline to execute against. 

To date both sides of the problem are unsolved, and therefore we have Open Loop Control, with neither the proper steering target nor the proper control of the system to steer toward that target. Without corrections to estimating, planning, scheduling, and execution, there is little hope of improving the probability of success in the software development domain.

Using Todd's chart from the full presentation, the core question that remains unanswered in many domains is

How can we increase the credibility of the estimate to complete earlier in the program?


Meaning: 

  • In the feasibility stage what is a credible estimate, and how can that estimate be improved as the program moves left to right?
  • What are the measures of credibility?
  • How can these measures be informed as the project progresses?
  • What are the physical processes to assure those estimates are increasing in accuracy and precision?

By the way, the term possible error comes from historical data. And like all How to Lie With Statistics charts, that historical data is self-selected: to a specific domain, a classification of projects, and, most importantly, the maturity of the organization making the estimates and executing the program.

Much research has shown that the maturity of the acquirer influences the accuracy and precision of the estimates. Our poster child is Stardust, with on-time, on-budget, working outcomes due to both the government and contractor Program Managers' maturity in managing in the presence of uncertainty, which is one of the sources of the material below.

Managing in the presence of uncertainty from Glen Alleman

[1] Boehm, B. “Software Engineering Economics”. Prentice-Hall, 1981.  

Categories: Project Management

How to Use Continuous Planning

If you’ve read Reasons for Continuous Planning, you might be wondering, “How can we do this?” Here are some ideas.

You have a couple of preconditions:

  • The teams get to done on features often. I like small stories that the team can finish in a day or so.
  • The teams continuously integrate their features.

Frequent features with continuous integration create an environment in which you know that you have the least amount of work in progress (WIP). Your program also has a steady stream of features flowing into the code base. That means you can make decisions more often about what the teams can work on next.

Now, let’s assume you have small stories. If you can’t imagine how to make a small story, here is an example I used last week that helped someone envision what a small story was:

Imagine you want a feature set called “secure login” for your product. You might have stories in this order:

  1. A person who is already registered can log in with their user id and password. For this, you only need to have a flat file and a not-too-bright parser—maybe even just a lookup in the flat file. You don’t need too many cases in the flat file; you might only have two or three. Yes, this is a minimal story that allows you to write automated tests to verify that it works even when you refactor (see the sketch after this list).
  2. A person who is not yet registered can create a new id and password.
  3. After the person creates a new id and password, that person can log in. You might think of the database schema now. You might not want the entire schema yet. You might want to wait until you see all the negative stories/features. (I’m still thinking flat file here.)
  4. Now, you might add the “parse-all-possible-names” for login. You would refactor Story #2 to use a parser, not copy names and emails into a flat file. You know enough now about what the inputs to your database are, so you can implement the parser.
  5. You want to check for people that you don’t want to log in. These are three different small stories. You might need a spike to consider which stories you want to do when, or do some investigation.
    1. Are they from particular IP addresses (web) or physical locations?
    2. Do you need all users to follow a specific name format?
    3. Do you want to use a captcha (web) or some other robot-prevention device for login (three tries, etc.)?
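
Here is the sketch promised in story #1: a minimal, hypothetical implementation of the flat-file lookup, plus the automated test that lets you refactor safely later. The file name and format are invented, and the test assumes pytest's tmp_path fixture:

```python
# Story #1 sketch: registered users can log in, backed by a flat file.
# users.txt contains lines like:  alice,s3cret

def load_users(path="users.txt"):
    users = {}
    with open(path) as f:
        for line in f:
            user_id, password = line.strip().split(",", 1)
            users[user_id] = password
    return users

def can_log_in(user_id, password, users):
    return users.get(user_id) == password

def test_registered_user_can_log_in(tmp_path):  # run with pytest
    p = tmp_path / "users.txt"
    p.write_text("alice,s3cret\nbob,hunter2\n")
    users = load_users(p)
    assert can_log_in("alice", "s3cret", users)
    assert not can_log_in("alice", "wrong", users)
    assert not can_log_in("carol", "s3cret", users)
```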

Maybe you have more stories here. I am at the limit of what I know for secure login. Those of you who implement secure login might think I am past my limit.

These five plus stories are a feature set for secure login. You might not need more than stories 1, 2, and 3 the first time you touch this feature set. That’s fine. You have the other stories waiting in the product backlog.

If you are a product owner, you look at the relative value of each feature against each other feature. Maybe you need this team to do these three first stories and then start some revenue stories. Maybe the Accounting team needs help on their backlog, and this feature team can help. Maybe the core-of-the-product team needs help. If you have some kind of login, that’s good enough for now. Maybe it’s not good enough for an external release. It’s good enough for an internal release.

Your ability to change what feature teams do every so often is part of the value of agile and lean product ownership—which helps a program get to done faster.

You might have an initial one-quarter backlog that might look like this:

[Image: an example agile roadmap for one quarter]

Start at the top and left.

You see the internal releases across the top. You see the feature sets just under the internal releases. This part is still a wish list.

Under the feature sets are the actual stories in the details. Note how the POs can change what each team does, to create a working skeleton.

The details are in the stories at the bottom.

This is my picture. You might want something different from this.

The idea is to create a Minimum Viable Product for each demo and to continue to improve the walking skeleton as the project teams continue to create the product.

Because you have release criteria for the product as a whole, you can ask as the teams demo, “What do we have to do to accomplish our release criteria?” That question helps you replan for the next iteration (or the next set of stories in the kanban). Teams can see interdependencies because their stories are small. They can ask each other, “Hey, can you do the file transfer first, before you start to work on the Engine?”

The teams work with their product owners. The product owners (product owner team) work together to develop and replan the next iteration’s plan, which leads to replanning the quarter’s plan. You have continuous planning.

You don’t need a big meeting. The feature team small-world networks help the teams see what they need, in the small. The product owner team small-world network helps the product owners see what they need for the product over the short-term and the slightly longer term. The product manager can meet with the product owner team at least once a quarter to revise the big picture product roadmap.

You can do this if the teams have small stories, pay attention to technical excellence, and use continuous integration.

In a program, you want smallness to go big. Small stories lead to more frequent internal releases (every day is great, at least once a month). More frequent internal releases lead to everyone seeing progress, which helps people accomplish their work.

You don’t need a big planning meeting. You do need product owners who understand the product and work with the teams all the time.

The next post will be about whether you want resilience or prediction in your project/program. Later :-)

Categories: Project Management

Is #NoEstimates a House Built on Sand?

Herding Cats - Glen Alleman - Mon, 08/17/2015 - 02:45

This is my last post on the topic of #NoEstimates. Let's start with my professional observation. All are welcome to provide counter examples. 

Estimates have little value to those spending the money.
Estimates are of critical value to those providing the money.

Since those spending the money usually appear not to recognize the need for estimates by those providing the money, the discussion has no basis on which to exchange ideas. Without acknowledgement that in business there is a collection of immutable principles, those spending the money have little understanding of where the money to do their work comes from†.

Here are the business principles that inform how the business works when funding the development of value:

  • The future is uncertain, but this uncertainty can be modeled. It is not unknowable.
  • Managerial Accounting provides managers with accounting information in order to better inform themselves before they decide matters within their organizations, which aids their management and performance of control functions.
  • Economic Risk Management identifies, analyzes, and accepts or mitigates the uncertainties encountered in the managerial decision-making processes.

On the project management side, there are also immutable principles required for project success:

  • There is some notion of what Done looks like, in units of measure meaningful to the decision makers. Effectiveness and Performance are two standard measures in domains where systems thinking prevails.
  • There is a work Plan to reach Done. This Plan can be simple or it can be complex, but the order of the work and the dependencies between the work elements are the basis of all planning processes.
  • There is a plan for the needed resources to reach Done. This includes staff, facilities, and funding. This means knowing something about how much and when for these resources.
  • There is recognition of the risk involved in reaching Done, and a response to those risks.
  • There is some means of measuring physical progress against the Plan to reach Done, so corrective actions can be taken to increase the probability of success. Tangible outcomes from the planned work are the preferred way to measure progress.

The discussion - of sorts - around No Estimates has reached a low point in shared understanding. But first let me set the stage.

If the Business and Project Success principles are not accepted as the basis of discussion for any improvements, then there is no basis for discussion - stop reading, there's nothing here for you. If these principles are acknowledged, then please continue.

In a recent post from one of the Original authors of the #NoEstimates hashtag it was said...

Quit estimates cold turkey. Get some kind of first-stab working software into the customer’s hands as quickly as possible, and proceed from there. What does this actually look like? When a manager asks for an estimate up front, developers can ask right back, “Which feature is most important?”— then deliver a working prototype of that feature in two weeks. Deliver enough working code fast enough, with enough room for feedback and refinement, and the demand for estimates might well evaporate. Let’s stop trying to predict the future. Let’s get something done and build on that — we can steer towards better.

This is one of those over generalizations that when questioned get strong pushback to the questioner. Let's deconstruct this paragraph a bit in the context of software development.

  • Quit estimates cold turkey - perhaps those paying for the work need to be consulted to determine whether they have any vested interest in knowing the cost and schedule of the value they're paying for.
  • Some kind of first-stab working software - sounds nice. But how much does that cost? And when can it be delivered? Can any feature in the requested list - the needed capabilities and their supporting technical and operational requirements - be done in 2 weeks? Can you show that can happen, or is it just a platitude repeated often enough that it has become a meme - without any actual evidence of being true? 
  • Deliver enough working code fast enough, with enough room for feedback and refinement, and the demand for estimates might well evaporate - there are two cascaded IF conditions here. IF we deliver working code fast enough, and IF we leave room for feedback, THEN estimates MIGHT evaporate.

This last one is one of those IF PIGS COULD FLY type statements.

So Here's the Issue

If it is conjectured that we can make decisions in the presence of uncertainty - and all project work operates in the presence of uncertainty by its very definition, otherwise it'd be production - then how can we make a choice between alternatives if we can't estimate the outcomes of those choices?
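
A toy example of that microeconomic point: choosing between two alternatives is only possible because each alternative's outcomes have been estimated, however roughly. The probabilities and payoffs below are invented:

```python
# Expected-value comparison of two alternatives (illustrative numbers only).
alternatives = {
    "Build feature A": [(0.6, 500_000), (0.4, 100_000)],  # (probability, payoff)
    "Build feature B": [(0.9, 250_000), (0.1,  50_000)],
}

for name, outcomes in alternatives.items():
    ev = sum(p * v for p, v in outcomes)
    print(f"{name}: expected value ${ev:,.0f}")
# Build feature A: expected value $340,000
# Build feature B: expected value $230,000
# Remove the estimates (the probabilities and payoffs) and nothing remains
# on which to base the choice.
```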

This is the basis of Microeconomics and Managerial Finance. When the OP'ers of #NoEstimates make these types of statements they're doing so of their own volition. It's likely their strongly held belief that decisions can be made without estimating the outcomes of those decisions. 

So when questioning what principles these conjectures are based on returns scorn for asking - accusations of trolling, of being rude, of having no respect for the person making these unfounded, unsubstantiated, untested, domain-free statements - it seems almost laughable. At times it appears to be willful ignorance of the basic tenets of business decision making. I don't pretend to know what's in the minds of many #NE supporters. Having talked to some advocates who are skeptical, it turns out, on further questioning, that they are still unwilling to disavow the notion that there is merit in exploring further. 

This is a familiar course for climate change deniers: all the evidence is not in, so let's challenge everything and see what we can discover. This notion of challenging and exploring in the absence of established principles is not actually that useful. In a domain like managerial finance, the Microeconomics of software development decision making, and the realm of decision making in general, the principles and practices are well established.

What now seems clear is that those principles and practices are not known to those making the conjecture that we should challenge everything. Much like the political climate deniers - "Well, I'm not a scientist, but I heard on the internet there is some dissent in the measurements..." - the argument amounts to: "I'm not familiar with probability and statistics and haven't taken a microeconomics class or read any Managerial Finance books, but I'm almost sure that those self-proclaimed thought leaders for #NoEstimates have something worth looking into."

Harsh? You bet it's harsh. Any idea presented in an open forum will be challenged when that idea willfully violates the principles on which business operates. Better be prepared to be challenged, and better be prepared to bring evidence your conjecture has merit. This happens all the time in science, mathematics, and engineering. Carl Sagan's BS Detector is one place to start; John Baez's Crackpot Index is useful in the science and math world. 

No Estimates has now reached that level, with some outrageous claims. 

  • Not doing estimates improves project performance by 10X
  • Estimates are actually evil
  • Estimating destroys innovation
  • Steve McConnell proves in his book estimates can't be done.
  • Todd Little's Figure 2 shows how bad we are at estimating - without, of course, reading the rest of the presentation showing how to correct these errors.

Making Credible Decisions in the Presence of Uncertainty

Decision making is the basis of business management. Here's an accessible text for learning to make decisions in the presence of uncertainty: Decision Analysis for the Professional. When there is any suggestion that decisions can be made without estimates, ask if the person making that conjecture has any evidence this is possible. Ask if they've read this book. Ask if their decision-making process has:

  • A decision making framework
  • A decision making process
  • A methodology for making decisions
  • How this decision making process works in the presence of complex organizations
  • A probability and statistics model for making decisions

Here's some more background on making decisions in the presence of uncertainty.

This is a sample of the many resources available for making decisions in the presence of uncertainty. There is also a large collection of resources on estimating software development projects, including the one we use in our work.

These and other resources are the basis of understanding how to make decisions.

When it is conjectured that we can decide without estimating, ask: have you any evidence whatsoever that this is possible, beyond your personal opinion and anecdotal experience? No? Then please stop trying to convince me your unsubstantiated idea has any merit in actual business practice. 

And this is why I've decided to stop writing about the nonsense of #NoEstimates. There is no basis for a discussion anchored in the principles, practices, or processes of business based in managerial finance and the Microeconomics of decision making.

It's a House Built On Sand

† I learned this in the first week of my first job after graduate school. 

 

Categories: Project Management