Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Software Development Blogs: Programming, Software Testing, Agile Project Management
This is my last post on the topic of #NoEstimates. Let's start with my professional observation. All are welcome to provide counterexamples.
Estimates have little value to those spending the money.
Estimates are of critical value to those providing the money.
Since those spending the money usually appear to not recognize the need for estimating for those providing the money, the discussion has no basis on which to exchange ideas. Without the acknowledgement that in business there is a collection of principles that are immutable, those spending the money have little understanding of where the money to do their work comes from†.
Here are the business principles that inform how the business works when funding the development of value:
On the project management side, there are also immutable principles required for project success:
The discussion - of sorts - around No Estimates has reached a low point in shared understanding. But first let me set the stage.
If the Business and Project Success principles are not accepted as the basis of discussion for any improvements, then there is no basis of discussion. Stop reading; there's nothing here for you. If these principles are acknowledged, then please continue.
In a recent post from one of the original authors of the #NoEstimates hashtag, it was said...
Quit estimates cold turkey. Get some kind of first-stab working software into the customer's hands as quickly as possible, and proceed from there. What does this actually look like? When a manager asks for an estimate up front, developers can ask right back, “Which feature is most important?” — then deliver a working prototype of that feature in two weeks. Deliver enough working code fast enough, with enough room for feedback and refinement, and the demand for estimates might well evaporate. Let's stop trying to predict the future. Let's get something done and build on that — we can steer towards better.
This is one of those overgeneralizations that, when questioned, gets strong pushback against the questioner. Let's deconstruct this paragraph a bit in the context of software development.
This last one is one of those IF PIGS COULD FLY type statements.
So Here's the Issue
If it is conjectured that we can make decisions in the presence of uncertainty - and all project work operates in the presence of uncertainty by its very definition, otherwise it would be production - then how can we make a choice between alternatives if we can't estimate the outcomes of those choices?
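As a minimal sketch of why choosing between alternatives requires estimates, here is a tiny Python example with invented probabilities and payoffs (none of these numbers come from any real project): to pick between two options you need an estimate, however coarse, of each option's probable outcome.

```python
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Two hypothetical alternatives, each described by an estimated
# distribution of outcomes. The numbers are purely illustrative.
option_a = [(0.6, 100_000), (0.4, -20_000)]   # likely win, modest downside
option_b = [(0.2, 400_000), (0.8, -50_000)]   # long shot, bigger downside

ev_a = expected_value(option_a)   # ~52,000
ev_b = expected_value(option_b)   # ~40,000

# Without the estimated probabilities and payoffs there is
# literally nothing to compare - no basis for the decision.
best = "A" if ev_a > ev_b else "B"
```

The point is not the arithmetic; it is that every input to the comparison is an estimate. Remove the estimates and the choice reduces to a coin flip.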
This is the basis of Microeconomics and Managerial Finance. When the original posters of #NoEstimates make these types of statements, they're doing so of their own volition. It's likely their strongly held belief that decisions can be made without estimating the outcomes of those decisions.
So when questioning what principles these conjectures are based on returns scorn for asking - accusations of trolling, of being rude, of having no respect for the person making these unfounded, unsubstantiated, untested, domain-free statements - it seems almost laughable. At times it appears to be willful ignorance of the basic tenets of business decision making. I don't pretend to know what's in the minds of many #NE supporters. Having talked to some advocates who are skeptical, it turns out that when questioned further they are unwilling to disavow the notion that there is merit in exploring further.
This is a familiar course for climate change deniers. All the evidence is not in, so let's challenge everything and see what we can discover. This notion of challenging and exploring in the absence of established principles is not actually that useful. In domains like managerial finance, the microeconomics of software development decision making, and decision making in general, the principles and practices are well established.
What we now know is that those principles and practices are not known to those making the conjecture that we should challenge everything. Much like the political climate deniers: Well, I'm not a scientist, but I heard on the internet there is some dissent in the measurements... So: I'm not familiar with probability and statistics, and haven't taken a microeconomics class or read any Managerial Finance books, but I'm almost sure those self-proclaimed thought leaders for #NoEstimates have something worth looking into.
Harsh? You bet it's harsh. Any idea presented in an open forum will be challenged when that idea willfully violates the principles on which business operates. Better be prepared to be challenged, and better be prepared to bring evidence that your conjecture has merit. This happens all the time in science, mathematics, and engineering. Carl Sagan's BS Detector is one place to start; John Baez's Crackpot Index is also useful in the science and math world.
No Estimates has now reached that level, with some outrageous claims.
Making Credible Decisions in the Presence of Uncertainty
Decision making is the basis of business management. Here's an accessible text for learning to make decisions in the presence of uncertainty, Decision Analysis for the Professional. When there is any suggestion that decisions can be made without estimates, ask if the person making that conjecture has any evidence this is possible. Ask if they've read this book. Ask if their decision making process has:
Here's some more background on making decisions in the presence of uncertainty.
Modern Decision Making: A Guide to Modeling with Decision Support Systems, Samuel Bodily.
Decision Making Under Uncertainty: Models and Choices, Charles Holloway.
This is a sample of the many resources available for making decisions in the presence of uncertainty. There is also a large collection of resources on estimating software development projects. The one we use in our work is
These and other resources are the basis of understanding how to make decisions.
When it is conjectured that we can decide without estimating, ask: have you any evidence whatsoever this is possible, beyond your personal opinion and anecdotal experience? No? Then please stop trying to convince me your unsubstantiated idea has any merit in actual business practice.
And this is why I've decided to stop writing about the nonsense of #NoEstimates. The discussion has no anchor in the principles, practices, or processes of business, based in managerial finance and the microeconomics of decision making.
It's a House Built On Sand
† I learned this in the first week of my first job after graduate school.
Related articles:
Making Conjectures Without Testable Outcomes
Estimating Processes in Support of Economic Analysis
Root Cause of Project Failure
Herding Cats: How To Make Decisions
Estimating and Making Decisions in Presence of Uncertainty
Why Guessing is not Estimating and Estimating is not Guessing
Decisions are about making Trade Offs for the project that are themselves about:
The purpose of this process is to:
Making decisions about capabilities and resulting requirements is the start of discovering what DONE looks like, by:
Decisions about the functional behaviors and their options is next. These decisions:
Then comes the assessment of the cost effectiveness of each decision:
Each of these steps is reflected in the next diagram.
Value of This Approach
When we hear that estimates are not needed to make decisions, we need to ask how the following questions can be answered:
The decision making process is guided by the identification of alternatives.
Decision-making is about deciding between alternatives. These alternatives need to be identified, assessed, and analyzed for their impact on the probability of success of the project.
These impacts include, but are not limited to:
The effectiveness of our decision making follows the diagram below:
In the End - Have all the Alternatives Been Considered?
Until there is a replacement for the principles of Microeconomics, for each decision made on the project, we will need to know the impact on cost, schedule, technical parameters, and other attributes of that decision. To not know those impacts literally violates the principles of microeconomics and the governance framework of all business processes, where the value at risk is non-trivial.
When you hear that planning ahead by assessing our alternatives is overrated, or to quit estimating cold turkey, think again. Ask for evidence of how to make decisions in the presence of uncertainty without making estimates: making trade-offs, evaluating alternatives (probabilistic alternatives), and all those other decision making processes found in the managerial finance book you read in engineering, computer science, or business school all depend on them.
Related articles:
Making Conjectures Without Testable Outcomes
Estimating and Making Decisions in Presence of Uncertainty
Estimating Processes in Support of Economic Analysis
At a recent conference the discussion of the integration of Agile with Earned Value Management on programs subject to FAR 34.201 and DFARS 252.234-7001 was the topic. Here's my presentation.
Turns out it is a match made in heaven. Since that conference, I've worked a DOJ proposal where agile and EVM are mandated, along with the SDLC (Scrum) and the agile development tool (TFS). The floodgates are now opening on Software Intensive Systems procurements requiring an EVMS. The NDIA (National Defense Industrial Association) is releasing an integration document in October defining how to make the match. Agile luminaries have spoken at DOD conferences and given advice to Undersecretaries on the topic. One office I work in is writing an implementation guide, as is the GAO.
In a recent post on forecasting capacity planning, a time series of data was used as the basis of the discussion.
Some static statistics were then presented.
With a discussion of the upper and lower ranges of the past data. The REAL question, though, is: what are the likely outcomes for data in the future, given the past performance data? That is, if we recorded what happened in the past, what is the likely data in the future?
The average and the upper and lower ranges from the past data are static statistics. That is, all the dynamic behavior of the past is wiped out in the averaging and deviation processes, so that information can no longer be used to forecast the possible outcomes of the future.
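A tiny Python illustration of this point, with made-up throughput numbers: two histories with identical averages are indistinguishable to the static statistic, even though their dynamics are completely different.

```python
import statistics

# Two hypothetical throughput histories (invented numbers) with the
# same average: the static statistic cannot tell them apart.
steady   = [10, 10, 10, 10, 10, 10]
volatile = [2, 18, 1, 19, 3, 17]

mean_steady   = statistics.mean(steady)     # 10
mean_volatile = statistics.mean(volatile)   # 10

# The dynamics show up only in the dispersion, which averaging erases.
sd_steady   = statistics.pstdev(steady)     # 0.0
sd_volatile = statistics.pstdev(volatile)   # ~8.04
```

Forecasting from the average alone treats both series the same, which is exactly the Flaw of Averages.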
This is one of the attributes of The Flaw of Averages and How to Lie With Statistics, two books that should be on every manager's desk. That is, managers tasked with making decisions in the presence of uncertainty when spending other people's money.
We now have a Time Series and can ask the question: what is the range of possible outcomes in the future, given the values in the past? This can easily be done with a free tool, R. R is a statistical programming language, available free from the Comprehensive R Archive Network (CRAN). In R, there are several functions that can be used to make these forecasts. That is, what are the estimated values in the future from the past, and their confidence intervals?
Let's start with some simple steps:
Here's what all this looks like in RStudio:
NETS = ts(NE.Numbers) - convert the raw numbers to a time series
NETSARIMA = arima(NETS, order = c(0,1,1)) - fit an ARIMA(0,1,1) model to the series
NEFORECAST = forecast(NETSARIMA) - make a forecast from that model (requires library(forecast))
plot(NEFORECAST) - plot it
Here's the plot, with the time series from the raw data and the 80% and 90% confidence bands on the possible outcomes in the future.
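For readers without R at hand, the same idea can be sketched in stdlib Python. This is not the ARIMA fit above, but a simpler random-walk bootstrap: resample the past period-to-period changes to simulate possible futures, then read off a confidence band. The `past` numbers are invented for illustration, not the data from the original post.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical past throughput numbers (stand-ins for NE.Numbers).
past = [12, 15, 9, 14, 11, 18, 7, 13, 16, 10]

# First differences carry the dynamics that a plain average throws away.
diffs = [b - a for a, b in zip(past, past[1:])]

def bootstrap_forecast(past, diffs, horizon, trials=5000):
    """Random-walk bootstrap: extend the series by resampling past changes."""
    endpoints = []
    for _ in range(trials):
        value = past[-1]
        for _ in range(horizon):
            value += random.choice(diffs)
        endpoints.append(value)
    endpoints.sort()
    # 80% band: the 10th and 90th percentiles of the simulated endpoints
    lo = endpoints[int(0.10 * trials)]
    hi = endpoints[int(0.90 * trials)]
    return lo, hi

lo, hi = bootstrap_forecast(past, diffs, horizon=4)
```

The width of `(lo, hi)` is the honest answer to "what might the future look like?", and it is usually far wider than the average-plus-deviation summary suggests.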
The Punch Line
You want to make decisions with other people's money when the 80% confidence interval on a possible outcome is itself a -56% to +68% variance? Really? Flipping coins gives a better probability of an outcome inside all the possible outcomes that happened in the past. The time series is essentially a random series with very low confidence of being anywhere near the mean. This is the basis of The Flaw of Averages.
Where I work this would be a non-starter if we came to the Program Manager with this forecast of the Estimate to Complete based on an average with that wide a variance.
Possibly, where there is low value at risk, a customer that has little concern for cost and schedule overrun, and maybe where the work is actually an experiment with no deadline, no not-to-exceed budget, or any other real constraint. But if your project has a need date for the produced capabilities, a date when those capabilities need to start earning their keep and producing value that can be booked on the balance sheet, then a much higher confidence in what the future NEEDS to be is likely going to be the key to success.
The Primary Reason for Estimates
First, estimates are for the business. Yes, developers can use them too. But the business has a business goal: make money at some point in the future on the sunk costs of today, the breakeven date. These sunk costs are recoverable (hopefully), so we need to know when we'll be even with our investment. This is how business works; it makes decisions in the presence of uncertainty, not on the opinion of development saying we recorded our past performance as an average and projected that to the future. No, the business needs a risk-adjusted, statistically sound level of confidence that it won't run out of money before breakeven. What this means in practice is a management reserve and cost and schedule margin to protect the project from those naturally occurring variances and those probabilistic events that derail all the best laid plans.
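The breakeven reasoning can be sketched with a small Monte Carlo. The sunk cost, monthly value, and the ±30% variance below are invented for illustration; the point is that the gap between the 50% and 80% confidence months is exactly the schedule margin the business needs to hold.

```python
import random

random.seed(1)  # fixed seed for reproducibility

# Illustrative numbers, not from any real project.
sunk_cost = 500_000      # investment to date
monthly_value = 40_000   # planned value recovered per month

def months_to_breakeven(trials=10_000):
    """Monte Carlo: monthly value varies; count months until cumulative
    value recovers the sunk cost, then take confidence levels on the count."""
    results = []
    for _ in range(trials):
        earned, months = 0.0, 0
        while earned < sunk_cost:
            # +/-30% naturally occurring variance on monthly value
            earned += monthly_value * random.uniform(0.7, 1.3)
            months += 1
        results.append(months)
    results.sort()
    p50 = results[len(results) // 2]      # median breakeven month
    p80 = results[int(0.8 * trials)]      # 80% confidence breakeven month
    return p50, p80

p50, p80 = months_to_breakeven()
# The difference p80 - p50 is the schedule margin protecting breakeven.
```

A deterministic plan would say 500,000 / 40,000 = 12.5 months; the simulation shows what confidence level that single number actually carries.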
Now, developers may not think like this. But someone somewhere in a non-trivial business does, usually in the office of the CFO. This is called Managerial Finance, and it's how firms with serious money at risk manage.
So when you see time series like those in the original post, do your homework and show the confidence in the probability of the needed performance actually showing up. And by needed performance I mean the steering target used in the closed loop control system, used to increase the probability that the planned value (which the Agilists so dearly treasure) actually appears somewhere near the planned need date and somewhere around the planned cost, so that the Return on Investment of those paying for your work is not negative and their spend does not get labeled as underwater.
So What Does This Mean in the End?
Even when you're using past performance, one of the better ways of forecasting the future, you need to give careful consideration to those past numbers. Averages and simple variances, which wipe out the actual underlying time series dynamics, are not only naive, they are bad statistics used to make bad management decisions.
Add to that the poorly formed notion that decisions can be made about future outcomes in the presence of uncertainty, in the absence of estimates about that future, and you've got the makings of management disappointment. The discipline of estimating future outcomes from past behaviors is well developed. The mathematics, and especially the terms used in that mathematics, are well established. Here are some sources we use in our everyday work. These are not populist books; they are math and engineering. They have equations, algorithms, and code examples. They are used where the value at risk is sufficiently high that management is on the hook for meeting the performance goals in exchange for the money assigned to the project.
If you work a project that doesn't care too much about deadlines, budget overages, or what gets produced other than the minimal products, then these books and related papers are probably not for you. And most likely not estimating the probability of overspending, showing up seriously late, or failing to produce the needed capabilities to meet the business plans will be just fine. But if you are expected to meet the business goals in exchange for the spend plan you've been assigned, these might be a good place to start to avoid being a statistic (a dead skunk in the middle of the road) in the next Chaos Report (no matter how poorly done its statistics are).
This, by the way, is an understanding I came to on the plane flight home this week. #NoEstimates is a credible way to run your project when these conditions are in place. Otherwise you may want to read how to make credible forecasts of what the cost and schedule are going to be for the value produced with your customer's money, assuming they actually care about not wasting it.
Related articles:
IT Risk Management
Thinking, Talking, Doing on the Road to Improvement
Estimating Processes in Support of Economic Analysis
Making Conjectures Without Testable Outcomes
The management of projects involves many things. Capabilities, Requirements, Development, Staffing, Budgeting, Procurement, Accounting, Testing, Security, Deployment, Maintenance, Training, Support, Sales and Marketing, and other development and operational processes. Each of these has interdependencies with other elements. Each operates in its own specific ways on the project. Almost all have behaviors described by probabilistic models driven by underlying statistical processes.
Management in this sense is control in the presence of these probabilistic processes. And yes, we can control these items; it's a well developed process, starting with Statistical Process Control, Monte Carlo Simulation, Bayesian Networks, Probabilistic Real Options, and other methods based on probabilistic processes.
The notion that these are not controllable is at its heart flawed and essentially misinformed. But this control requires information. It's been mentioned before, in Closed Loop Control, Closed Loop versus Open Loop, Staying on Plan Means Closed Loop Control, Use and Misuse of Control Systems, and Why Project Management is a Control System.
All these lead to the Five Immutable Principles of Project Success. Along with these Principles come Practices and Processes. But it's the Principles we're after as a start.
Each of the Principles makes use of information used in managing the project. This information is the signal used by management to make decisions. These signals are used to compare the current state of the project (the system under management) to the desired state of the system. This is the basis of the Control System used to manage the System Under Management.
With the control system (whatever that may be, and there are many systems), the next step is to gather the information needed to make decisions using this control system. This means information about the past, present, and future of the system under management. Past data, when recorded, is available from the database of what happened in the past. Present data should be readily available directly from the system. The future data presents a unique problem. Rarely is information about the future recorded some place. It's in the future and hasn't happened yet.
But we can find sources for this future information. We have models of what the system may do in the future. We may have similar systems from the past that can be used to create future data. But there is a critical issue here. The future may not be like the past. Or the future may be impacted in ways not found on similar projects. But we need to come up with this information if we are to make decisions about the future.
So if we're missing the model, or missing the similar project, what can we do? We can make estimates from the data or models in some probabilistically informed manner. This is the role of estimating: to inform our decision making processes in the presence of uncertainty about possible future outcomes, knowing something about the past and present state of the system under management. With this probabilistic information, decisions can be made to take corrective actions to keep our project under control. That is, moving in the direction we planned to move to reach the goal of the project. This goal is usually...
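The closed-loop idea above can be shown with a toy proportional controller, all numbers invented: each period, the control signal is the variance between planned and actual progress to date, and a corrective action proportional to that signal feeds into the next period. A real project control system is far richer, but the loop structure is the same.

```python
# Toy closed-loop project steering (a P-controller sketch, invented numbers).
plan_rate = 10.0                 # planned units of value per period
periods = 12
target = plan_rate * periods     # the goal: 120 units of planned value

def steer(actual_rates, gain=0.5):
    """Cumulative progress when each period's variance (the control signal)
    feeds a proportional corrective action into the next period."""
    progress, correction = 0.0, 0.0
    for t, rate in enumerate(actual_rates, start=1):
        progress += rate + correction
        planned_to_date = plan_rate * t
        variance = planned_to_date - progress   # the signal management reads
        correction = gain * variance            # the corrective action
    return progress

# Actual delivery drifts below plan every period.
actual = [9.0] * periods
open_loop = sum(actual)      # no feedback: finishes at 108, well short of 120
closed_loop = steer(actual)  # feedback closes most of the gap to the target
```

Without the comparison of current state to desired state there is no variance signal, no corrective action, and the shortfall simply compounds; that comparison is what estimates of future state make possible.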
The PlanningPoker.com team has been quite busy. A couple of months ago, they launched a brand new design and mobile support. Since then, they have continued to improve the site based on your feedback.
Now, they’ve launched PlanningPoker.com’s first set of premium features. While these premium features do require a monthly fee, the base game will always be free.
Visit PlanningPoker.com to see these new features in action:
… and more.
Digital product development agency 352 Inc. is the team that created the new PlanningPoker.com, and they’re committed to making it the most helpful and efficient way to run your planning sessions. They are also open to your feedback on how to improve the product.
Give the new features a try and let them know what more you would like to see. To learn more about Planning Poker in general, check out our informational page on it.
I’m working on the program management book, specifically on the release planning chapter. One of the problems I see in programs is that the organization/senior management/product manager wants a “commitment” for an entire quarter. Since they think in quarter-long roadmaps, that’s not unreasonable—from their perspective.
There is a problem with commitments and the need for planning for an entire quarter. This is legacy (waterfall) thinking. Committing is not what the company actually wants. Delivery is what the company wants. The more often you deliver, the more often you can change.
That means changing how often you release and replan.
Consider these challenges for a one-quarter commitment:
If you “commit” on a shorter cadence, you can manage these problems. (I prefer the term replan.)
If you consider a no-more-than-one-monthly-duration “commit,” you can see the product evolve, provide feedback across the program, and change what you do at every month milestone. That’s better.
Here’s a novel idea: Don’t commit to anything at all. Use continuous planning.
If you look at the one-quarter roadmap, you can see I show three iterations worth of stories as MVPs. In my experience, that is at least one iteration too much look-ahead knowledge. I know very few teams who can see six weeks out. I know many teams who can see to the next iteration. I know a few teams who can see two iterations.
What does that mean for planning?
Do continuous planning with short stories. You can keep the 6-quarter roadmap. That’s fine. The roadmap is a wish list. Don’t commit to a one-quarter roadmap. If you need a commitment, commit to one iteration at a time. Or, in flow/kanban, commit to one story at a time.
That will encourage everyone to:
If you keep the planning small, you don’t need to gather everyone in one big room once a quarter for release planning. If you do continuous planning, you might never need everyone in one room for planning. You might want everyone in one room for a kickoff or to help people see who is working on the program. That’s different than a big planning session, where people plan instead of deliver value.
If you are managing a program, what would it take for you to do continuous planning? What impediments can you see? What risks would you have planning this way?
Oh, and if you are on your way to agile and you use release trains, remember that the release train commits to a date, not scope and date.
Consider planning and replanning every week or two. What would it take for your program to do that?
A very oft-cited metric is that 64 percent of features in products are “rarely or never used.” The source for this claim was Jim Johnson, chairman of the Standish Group, who presented it in a keynote at the XP 2002 conference in Sardinia. The data Johnson presented can be seen in the following chart.
Johnson’s data has been repeated again and again to the extent that those citing it either don’t understand its origins or never bothered to check into them.
The misuse or perhaps just overuse of this data has been bothering me for a while, so I decided to investigate it. I was pretty sure of the facts but didn’t want to rely solely on my memory, so I got in touch with the Standish Group, and they were very helpful in clarifying the data.
The results Jim Johnson presented at XP 2002 and that have been repeated so often were based on a study of four internal applications. Yes, four applications. And, yes, all internal-use applications. No commercial products.
So, if you’re citing this data and using it to imply that every product out there contains 64 percent “rarely or never used features,” please stop. Please be clear that the study was of four internally developed projects at four companies.