When we hear things like ...
Why promising nothing delivers more and planning always fails,
It's in doing the work that we discover the work that we must do,
If estimates were real the team wouldn't have to know the delivery date, they would just work naturally and be on date.
You have to ask: do these posters have any understanding that it's not their money? That all project work is probabilistic? That nonlinear, non-stationary, stochastic processes drive uncertainty for all work in ways that cannot be controlled by disaggregating the work (slicing), or by assuming that work elements are independent from other work elements, in all but the most trivial of project contexts?
Systems Thinking and Probability†
All systems where optimum technical performance is needed require a negative feedback loop as the basis for controlling the work in order to arrive on the planned date, with the planned capabilities, for the planned budget. If there is no need to arrive as planned or as needed, then no control system is needed; just spend until told to stop.
The negative feedback loop as a control system is the opposite of the positive feedback loop. In chemistry a positive feedback loop is best referred to as an explosion. In project management a positive feedback loop results in a project that requires an ever greater commitment of resources to produce the needed capabilities, beyond what was anticipated. That is cost and schedule overrun and a lower probability of technical success.
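As a sketch of what that negative feedback loop looks like for project control, the toy simulation below (all numbers hypothetical) measures the variance from plan each period and applies a corrective action proportional to that error. With the gain set to zero there is no control loop at all, and the shortfall simply accumulates.

```python
import random

def run_project(feedback_gain, periods=20, planned_rate=10.0, seed=7):
    """Toy closed-loop project: each period produce noisy output, measure
    the variance from the plan, and correct capacity in proportion to it."""
    rng = random.Random(seed)
    actual, capacity = 0.0, planned_rate
    for period in range(1, periods + 1):
        actual += capacity * rng.uniform(0.7, 1.1)  # stochastic throughput
        error = planned_rate * period - actual      # variance from plan
        capacity += feedback_gain * error           # corrective action
    return actual

plan = 10.0 * 20
closed_loop = run_project(feedback_gain=0.3)  # controlled
open_loop = run_project(feedback_gain=0.0)    # "spend until told to stop"
print(f"plan={plan:.0f}  closed loop={closed_loop:.1f}  open loop={open_loop:.1f}")
```

The open-loop run drifts steadily away from the plan because the noisy throughput averages below the planned rate; the closed-loop run detects that variance and corrects for it, which is the whole point of a control system.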
A project is a type of complex adaptive system that acquires information about its environment and the interactions between the project elements, identifies information of importance, and places that information within a context, model, or schema, and then acts on this information to make decisions.
The individual members of the project act as a complex adaptive system themselves and exert influence on the selection of both the schema and the adaptive forces used to make decisions. The extent to which learning produces adaptive or maladaptive behavior determines the survival or failure of the project and the organization producing the value from the project.
Managing in the Presence of Uncertainty
Uncertainty creates risk - reducible risk and irreducible risk. This risk is by its nature probabilistic. Complex systems tend to organize themselves into a normal distribution of outcomes ONLY if each individual element of the system is Independent and Identically Distributed. If this is not the case, long-tailed distributions result, and these are the source of Black Swans. These Black Swans are the unanticipated cost and schedule performance problems and technical failures we are familiar with in the literature. The project explodes.
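A quick way to see this difference: sum task durations that are IID and bounded, and compare against durations drawn from a heavy-tailed (Pareto) distribution. The parameters below are hypothetical, chosen only to illustrate the shape of the outcomes.

```python
import random
import statistics

rng = random.Random(42)

def total_duration(n_tasks, heavy_tailed):
    """Total project duration as a sum of task durations."""
    if heavy_tailed:
        # Pareto with shape 1.5: finite mean, infinite variance
        return sum(rng.paretovariate(1.5) for _ in range(n_tasks))
    # bounded, independent, identically distributed tasks
    return sum(rng.uniform(1, 3) for _ in range(n_tasks))

iid_totals = [total_duration(50, False) for _ in range(2000)]
tail_totals = [total_duration(50, True) for _ in range(2000)]

# worst case relative to the typical case: the long tail is where
# the Black Swan cost and schedule blowouts live
iid_ratio = max(iid_totals) / statistics.median(iid_totals)
tail_ratio = max(tail_totals) / statistics.median(tail_totals)
print(f"IID worst/typical: {iid_ratio:.2f}, heavy-tailed worst/typical: {tail_ratio:.2f}")
```

With bounded IID tasks the worst of 2,000 simulated projects lands close to the typical one (the Central Limit Theorem at work); with the heavy-tailed tasks the worst case is many times the typical case, which is exactly the unanticipated blowout described above.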
So We Arrive at the End
To manage in the presence of an uncertain future for cost, delivery date, and delivered capabilities to produce the value in exchange for the cost, we need some mechanism to inform our decision making process based on these random variables. The random variables that create risk. Risk that must be reduced to increase the probability of success. The reducible risk and the irreducible risk.
This mechanism is the ability to estimate the impact of any decision while making the trade-off between decision alternatives - this is the basis of Microeconomics: the trade-off of a decision based on the opportunity cost of the collection of decision alternatives.
Anyone conjecturing that decisions can be made in the presence of uncertainty without estimating the impacts of those decisions has willfully ignored the foundational principles of Microeconomics.
The only condition in which decisions can be made without estimating their impact is when the decision has a trivial value at risk. Many decisions are just that. If I decide wrong, the outcome has little or no impact on cost, schedule, or needed technical performance. In this case Not Estimating is a viable option. For all other conditions, Not Estimating results in a Black Swan explosion of the customer's budget, timeline, and expected beneficial outcomes based on the produced value.
† Technical Performance Measurement, Earned Value, and Risk Management: An Integrated Diagnostic Tool for Program Management, Commander N. D. Pisano, SC, USN, Program Executive Officer Air NSW, Assault and Special Missions Programs (PEO(A)). Nick is a colleague. This paper, from 1991, defines how to plan and assess performance for complex, emergent systems.
Thanks to all my neighbors, friends, and colleagues for their service.
It's been popular recently in some agile circles to mention we use the 5 Whys when asking about dysfunction. This common and misguided approach assumes - wrongly - that causal relationships are linear and problems come from a single source. For example:
Estimates are the smell of dysfunction. Let's ask the 5 Whys to reveal these dysfunctions
The natural tendency is to assume that in asking the 5 Whys there is a single thread connecting cause and effect from beginning to end. This single source of the problem - the symptom - is labeled the Root Cause. The question is: is that root cause the actual root cause? The core problem is that the 5 Whys is not really seeking a solution, just eliciting more symptoms masked as causes.
A simple example illustrates the problem from¬†Apollo Root Cause Analysis.
Say we're in the fire prevention business. If preventing fires is our goal, let's look for the causes of fire and determine the corrective actions needed to actually prevent fires from occurring. In this example let's say we've identified 3 potential causes of fire. There is ...
So what is the root cause of the fire? To prevent the fire - and in the follow-on example, prevent a dysfunction - we must find at least one cause of the fire that can be acted on to meet the goals and objectives of preventing the fire AND is within our control.
If we decide to control combustible materials, then the root cause is the combustibles. Same for the oxygen - this can be done by inerting a confined space, say with nitrogen. Same for the ignition sources. This traditional Root Cause Analysis pursues a preventative solution that is within our control and meets the goals and objectives - prevent fire. But this is not actually the pursuit of the Root Cause. By pursuing this approach, we stop on a single cause that may or may not result in the best solution. We're misled into a categorical thinking process that looks for solutions. This doesn't mean there is no root cause. It means a root cause cannot be labeled until we have decided on which solutions we are able to implement. The root cause is actually secondary to and contingent on the solution, not the inverse. Only after solutions have been established can we identify the actual root cause of the fire not being prevented.
The notion that Estimates are the smell of dysfunction in a software development organization, and asking the 5 Whys in search of the Root Cause, is equally flawed.
The need to estimate or not estimate has not been established. It is presumed that it is the estimating process that creates the dysfunction, and then the search - through the 5 Whys - is a false attempt to categorize the root causes of this dysfunction. The supposed dysfunction is then reverse engineered to be connected to the estimating process. This is not only a naïve approach to solving the dysfunction, it inverts the logic by ignoring the need to estimate. Without confirmation that estimates are needed or not needed, the search for the cause of the dysfunction has no purposeful outcome.
The decision that estimates are needed or not needed does not belong to those being asked to produce the estimates. That decision belongs to those consuming the estimate information in the decision-making process of the business - those whose money is being spent.
And of course those consuming the estimates need to confirm they are operating their decision-making processes in some framework that requires estimates. It could very well be that those providing the money to be spent by those providing the value don't actually need an estimate. The value at risk may be low enough - say, 100 hours of development for a DB upgrade. But when the value at risk is sufficiently large - and that determination is again made by those providing the money - then a legitimate need to know how much, when, and what is made by the business. In this case, decisions are based on the Microeconomics of opportunity cost for uncertain outcomes in the future.
This is the basis of estimating and the determination of the real root causes of the problems with estimates. Saying we're bad at estimating is NOT the root cause. And it is never the reason not to estimate. If we are bad at estimating, and if we do have confirmation and optimism biases, then fix them. Remove the impediments to producing credible estimates. Because those estimates are needed to make decisions in any non-trivial value at risk work.
The book Software for Your Head was a seminal work when we were setting up our Program Management Office in 2002 for a mega-project to remove nuclear waste from a very contaminated site in Golden, Colorado.
Here's an adaptation of those ideas to the specifics of our domain and problems
Software for your mind from Glen Alleman
This approach was a subset of a much larger approach to managing in the presence of uncertainty, very high risk, and even higher rewards, all on a deadline and a fixed budget. As was stated in the Plan of the Week:
Do this every week, guided by the 3 year master plan and make sure no one is injured or killed.
That project is documented in the book Making the Impossible Possible, summarized here.
Making the impossible possible from Glen Alleman
We've been doing this for 20 years and therefore you can as well
is a common phrase used when asked in what domain does your approach work? Of course, without a test of that idea outside the domain in which the anecdotal example is used, it's going to be hard to know if that idea is actually credible beyond those examples.
So if we hear we've been successful in our domain doing something - or better yet NOT doing something, like say NOT estimating - ask: in what domain have you been successful? Then the critical question: is there any evidence that the success in that domain is transferable to another domain? This briefing provides a framework - from my domain of aircraft development - illustrating that domains vary widely in their needs, constraints, governance processes, and applicable and effective approaches to delivering value.
Paradigm of agile project management from Glen Alleman
Google seems to have forgotten how to advance the slides on the Mac, so click on the presentation title (paradigm of agile PM) to do that. Safari works.
Education is not the learning of facts, but the training of the mind to think - Albert Einstein
So if we're going to learn how to think about managing the spending of other people's money in the presence of uncertainty, we need some basis of education.
Uncertainty is a fundamental and unavoidable feature of daily life - personal life and the life of projects. To deal with this uncertainty intelligently we must represent and reason about these uncertainties. There are formal ways of reasoning (logical systems for reasoning found in the Formal Logic and Artificial Intelligence domains) and informal ways of reasoning (based on the probability and statistics of cost, schedule, and technical performance in the Systems Engineering domain).
If Twitter, LinkedIn, and other forum conversations have taught me anything, it's that many participants base their discussion on personal experience and opinion. Experience informs opinion. That experience may be based on gut feel learned from the school of hard knocks. But there are other ways to learn as well - ways to guide your experience and inform your opinion, ways based on education and frameworks for thinking about solutions to complex problems.
Samuel Johnson has served me well with his quote...
There are two ways to knowledge: we know a subject ourselves, or we know where we can find information upon it.
Hopefully the knowledge we know ourselves has some basis in fact, theory, and practice, vetted by someone outside ourselves - someone beyond our personal anecdotal experience.
Here's my list of essential readings that form the basis of my understanding, opinions, principles, practices, and processes as they are applied in the domains I work in - enterprise IT, defense, and space - and their software-intensive systems.
So In The End
This list is the tip of the iceberg for access to the knowledge needed to manage in the presence of uncertainty while spending other people's money.
Any process that does not have provisions for its own refinement will eventually fail or be abandoned
- W. R. Corcoran, PhD, P.E., The Phoenix Handbook: The Ultimate Event Evaluation Manual for Finding Profit Improvement in Adverse Events, Nuclear Safety Review Concepts, 19 October 1997.
In the agile community it is popular to use the terms complex, complexity, and complicated interchangeably, and many times wrongly. These terms are often overloaded with an agenda used to push a process or even a method.
First some definitions
One more item we need is the types of Complexity
And Now To The Point
When we hear complex, complexity, complex systems, or complex adaptive system, pause to ask: what kind of complex are you talking about? What Type of complex system? In what system are you applying the term complex? Have you classified that system in a way that actually matches a real system?
It is common for the terms complex, complicated, and complexity to be interchanged, and for software development to be classified - or mis-classified - as one, both, or all three. It is also common to toss around these terms with no actual understanding of their meaning or application.
We need to move beyond buzzwords - words like Systems Thinking. Building software is part of a system. There are interacting parts that, when assembled, produce an outcome - hopefully a desired outcome. In the case of software the interacting parts are more than just the parts. Software has emergent properties: a Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point.
The understanding that software development is an uncertain (stochastic) process is well known, starting in the 1980s with COCOMO. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. The current predictive models based on stochastic processes include Monte Carlo Simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.
Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
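Boehm's Basic COCOMO from that book makes the parametric flavor of these early estimation models concrete: effort and schedule as power functions of estimated size. The sketch below uses the published organic-mode coefficients; the 32 KLOC input is just an illustrative assumption.

```python
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Basic COCOMO, organic mode (Boehm 1981): effort and schedule
    as power functions of estimated size in KLOC."""
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # calendar months
    return effort, schedule

effort, months = basic_cocomo(32)   # hypothetical 32 KLOC system
print(f"~{effort:.0f} person-months over ~{months:.0f} months")
```

The point is not the specific numbers but the model form: the inputs (size, mode coefficients) are themselves uncertain, which is why later approaches wrap distributions and simulation around equations like these.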
Project work is random. Most everything in the world is random: the weather, commuter traffic, the productivity of writing and testing code. Few things actually take as long as they are planned. Cost is less random, but there are variances in the cost of labor and the availability of labor. Mechanical devices have variances as well.
The exact fit of a water pump on a Toyota Camry is not the same for each pump. There is a tolerance in the mounting holes, the volume of water pumped. This is a variance in the technical performance.
Managing in the presence of these uncertainties is part of good project management. But there are two distinct paradigms of managing in the presence of these uncertainties.
In the first case we have empirical data. In the second case we don't. There are two approaches to modeling what the system will do in terms of cost and schedule outcomes.
Bootstrapping the Empirical Data
With samples of past performance and the proper statistical assessment of those samples, we can re-sample them to produce a model of future performance. This bootstrap resampling shares the principle of the second method - Monte Carlo Simulation - but with several important differences.
This bootstrapping method is quick, easy, and produces a quick and easy result. But it has issues that must be acknowledged.
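A minimal sketch of the idea, using a hypothetical sample of past sprint throughputs: resample the past (with replacement) to build a distribution of what the next six sprints might deliver. Note the built-in assumption that the future is statistically like the past.

```python
import random

rng = random.Random(1)

past_throughput = [7, 9, 5, 8, 11, 6, 9, 10, 7, 8]  # hypothetical items/sprint

def bootstrap_totals(sample, sprints, trials=5000):
    """Resample past performance with replacement to model future totals."""
    totals = [sum(rng.choice(sample) for _ in range(sprints))
              for _ in range(trials)]
    totals.sort()
    return totals

totals = bootstrap_totals(past_throughput, sprints=6)
median = totals[len(totals) // 2]          # the 50/50 outcome
conservative = totals[len(totals) // 5]    # 20th percentile: 80% of trials beat it
print(f"median forecast: {median} items; 80%-confidence floor: {conservative} items")
```

This is quick and easy, exactly as described - and exactly as limited: every resampled future is built only from durations the team has already seen.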
Monte Carlo Simulation
This approach is more general and removes many of the restrictions on the statistical confidence of bootstrapping.
Just as a reminder, in principle both the parametric and the non-parametric bootstrap are special cases of Monte Carlo simulations used for a very specific purpose: estimate some characteristics of the sampling distribution. But like all principles, in practice there are larger differences when modeling project behaviors.
In the more general approach of Monte Carlo Simulation, the algorithm repeatedly creates random data in some way, performs some modeling with that random data, and collects some result.
In practice, when we hear Monte Carlo simulation we are talking about a theoretical investigation: creating random data with no empirical content - or from reference classes - used to investigate whether an estimator can represent known characteristics of that random data. The (parametric) bootstrap, by contrast, refers to an empirical estimation and is not necessarily a model of the underlying processes - just a small sample of observations independent of the actual processes that generated the data.
The key advantage of MCS is that we don't necessarily need past empirical data. MCS can be used to advantage if we have it, but we don't need it for the Monte Carlo Simulation algorithm to work.
This approach can be used to estimate some outcome, as in the bootstrap, but also to theoretically investigate some general characteristic of a statistical estimator (cost, schedule, technical performance) which is difficult to derive from empirical data.
MCS removes the roadblock heard in many critiques of estimating - we don't have any past data on which to estimate. No problem: build a model of the work and the dependencies between that work, assign statistical parameters to the individual or collected PDFs, and run the MCS to see what comes out.
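A sketch of exactly that, assuming nothing but a model of the work: a four-task network with hypothetical three-point (optimistic, most likely, pessimistic) durations, triangular PDFs on each task, and 10,000 trials.

```python
import random

rng = random.Random(3)

# task -> ((optimistic, most likely, pessimistic) days, predecessors)
# all values are hypothetical, reference-class style estimates
tasks = {
    "design": ((3, 5, 10), []),
    "build":  ((8, 12, 25), ["design"]),
    "docs":   ((2, 3, 8), ["design"]),
    "test":   ((4, 6, 15), ["build"]),
}

def simulate_once():
    """One trial: sample each task from a triangular PDF and roll the
    durations up through the dependency network."""
    finish = {}
    for name, ((lo, mode, hi), preds) in tasks.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[name] = start + rng.triangular(lo, hi, mode)
    return max(finish.values())  # project completion day

completions = sorted(simulate_once() for _ in range(10_000))
p50, p80 = completions[5_000], completions[8_000]
print(f"50% confidence: {p50:.1f} days, 80% confidence: {p80:.1f} days")
```

No past data was needed - only a model of the work and its dependencies. The output is a distribution of completion dates, from which confidence levels can be read directly.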
This approach has several critical advantages:
So Here's the Killer Difference
Bootstrapping models make several key assumptions, which may not be true in general. So they must be tested before accepting any of the outcomes.
Monte Carlo Simulation models provide key value that bootstrapping can't.
The critical difference between Bootstrapping and Monte Carlo Simulation is that MCS can show what future performance has to be to stay on schedule (within variance), stay on cost, and have the technical performance meet the needs of the stakeholders.
Bootstrapping can only show what the future will be like if it is like the past, not what it must be like. In Bootstrapping this future MUST be like the past. In MCS we can tune the PDFs to show what performance has to be to manage to that plan. Bootstrapping is reporting yesterday's weather as tomorrow's weather - just like Steve Martin in LA Story. If tomorrow's weather turns out not to be like yesterday's weather, you're gonna get wet.
MCS can forecast tomorrow's weather by assigning PDFs to future activities that are different from past activities; then we can make any needed changes in that future model to alter the weather to meet our needs. This is in fact how weather forecasts are made - with much more sophisticated models, of course - at the National Center for Atmospheric Research in Boulder, CO.
This forecasting (estimating the future state) of possible outcomes, and the alteration of those outcomes through management actions - change dependencies, add or remove resources, provide alternatives to the plan (on ramps and off ramps of technology, for example), buy down risk, apply management reserve, assess the impacts of rescoping the project, etc. - is what project management is all about.
Bootstrapping is necessary but far from sufficient for any non-trivial project to show up on or before the need date (with schedule reserve), at or below the budgeted cost (with cost reserve), and have the product or service provide the needed capabilities (with technical performance reserve).
Here's an example of that probabilistic forecast of project performance from an MCS tool (Risky Project). This picture shows the probabilities for cost, finish date, and duration. It is built on time-evolving PDFs assigned to each activity in a network of dependent tasks, which models the work stream needed to complete as planned.
When that future work stream is changed - to meet new requirements, to take corrective actions for unfavorable past performance, or to reflect changes in any or all of the underlying random variables - the MCS can show us the expected impact on the key parameters of the project so management intervention can take place. Project Management is a verb.
The connection between the Bootstrap and Monte Carlo simulation of a statistic is simple.
Both are based on repetitive sampling and then direct examination of the results.
But there are significant differences between the methods (hence the difference in names and algorithms). Bootstrapping uses the original, initial sample as the population from which to resample. Monte Carlo Simulation uses a data generation process, with known values of the parameters of the Probability Distribution Function. A common algorithm for correlating the random variables in MCS is Lurie-Goldberg. Monte Carlo is used to test that the results of the estimators produce the desired outcomes on the project. And if not, it allows the modeler and her management to change those estimators and then manage to the changed plan.
Bootstrap can be used to estimate the variability of a statistic and the shape of its sampling distribution from past data. Assuming the future is like the past, it can make forecasts of throughput, completion, and other project variables.
In the end, the primary differences (and again the reason for the different names) are that Bootstrapping is based on unknown distributions - sampling and assessing the shape of the distribution in Bootstrapping adds no value to the outcomes - while Monte Carlo is based on known or defined distributions, usually from Reference Classes.
When we hear words about any topic - my favorite of course is all things project management - it doesn't make them true.
So when we hear some phrase, idea, or conjecture - ask for evidence. Ask for the domain. Ask for examples. If you hear we're just exploring, ask who's paying for that? Because it is likely those words are unsubstantiated conjecture from personal experience, and not likely very useful outside that personal experience.
The Planning Fallacy is well documented in many domains. Bent Flyvbjerg has documented this issue in one of his books, Megaprojects and Risk. But the Planning Fallacy is more complex than just the optimism bias. Many of the root causes of cost overruns are based in the politics of large projects.
The planning fallacy is ...
...a phenomenon in which predictions about how much time will be needed to complete a future task display an optimistic bias (underestimate the time needed). This phenomenon occurs regardless of the individual's knowledge that past tasks of a similar nature have taken longer to complete than generally planned. The bias only affects predictions about one's own tasks; when outside observers predict task completion times, they show a pessimistic bias, overestimating the time needed. The planning fallacy requires that predictions of current tasks' completion times are more optimistic than the beliefs about past completion times for similar projects and that predictions of the current tasks' completion times are more optimistic than the actual time needed to complete the tasks.
The critical notion here is about one's own estimates. This is the critical reason for ...
With all that said, there is still a large body of evidence that estimating is a major problem.†
I have a colleague who is the former Cost Analysis Director of NASA. He has three reasons projects get in cost, schedule, and technical trouble:
The Planning Fallacy
Daniel Kahneman (Princeton) and Amos Tversky (Stanford) describe it as "the tendency to underestimate the time, costs, and risks of future actions and overestimate the benefit of those actions". The results are time and cost overruns as well as benefit shortfalls. The concept is not new: they coined the term in the 1970s and much research has taken place since; see the Resources below.
So the challenge is to not fall victim to this optimism bias and become a statistic in the Planning Fallacy.
How do we do that?
Here's our experience:
So With These And Others... We can remove the fallacy of the Planning Fallacy.
This doesn't mean our project will be successful. Nothing can guarantee that. But the Probability of Success will be increased.
In the end we MUST know the Mission we are trying to accomplish and the units of measure of that Mission in terms meaningful to the decision makers. Without that we can't know what DONE looks like. And without that, only our optimism will carry us along until it is too late to turn back.
Anyone using Planning Fallacy as the excuse for project failure, not planning, not estimating, not actually doing their job as a project and business manager will likely succeed in the quest for project failure and get ¬†what they deserve. Late, Over Budget, and the gadget they're building doesn't work as needed.
† Please note that just because estimating is a problem in all domains, that's NO reason not to estimate. Planning is a problem too; that's no reason NOT to plan. Any suggestion that estimating or planning is not needed in the presence of an uncertain future - as it is on all projects - willfully ignores the principles of Microeconomics: making choices in the presence of uncertainty based on opportunity cost. To suggest otherwise confirms this ignorance.
These are some background readings on the Planning Fallacy problem, from the anchoring-and-adjustment point of view, that I've used over the years to inform our estimating processes for software intensive systems. After reading through these I hope you come to a better understanding of many of the misconceptions about estimating and the fallacies of how it is done in practice.
Interestingly, there is a poster on Twitter in the #NoEstimates thread who objects when people post links to their own work or the work of others. Please do not fall prey to the notion that everyone has an equally informed opinion, unless you yourself have done all the research needed to cover the foundations of the topics. Outside resources are the very lifeblood of informed experience and the opinions that come from that experience.
Anchoring Unbound, Nicholas Epley and Thomas Gilovich
On the Reality of Cognitive Illusions, Daniel Kahneman, Princeton University, and Amos Tversky, Stanford University
The Framing Effect and Risky Decisions: Examining Cognitive Functions with fMRI, Cleotilde Gonzalez, Jason Dana, Hideya Koshino, and Marcel Just, The Journal of Economic Psychology, 26 (2005), 1-20
Discussion Note: Review of Tversky & Kahneman (1974): Judgment under Uncertainty: Heuristics and Biases, Micheal Axelsen, UQ Business School, The University of Queensland, Brisbane, Australia
The Anchoring-and-Adjustment Heuristic: Why the Adjustments Are Insufficient, Nicholas Epley and Thomas Gilovich
Judgment under Uncertainty: Heuristics and Biases, Amos Tversky and Daniel Kahneman, Science, New Series, Vol. 185, No. 4157 (Sep. 27, 1974), pp. 1124-1131
This should be enough to get you started and set the stage for rejecting any half-baked ideas about anchoring and adjustment, planning fallacies, no need to estimate, and the collection of other cockamamie ideas floating around the web on how to make credible decisions with other people's money.
When our children were in High School, I strongly suggested they both take an economics class. Our daughter came home one day and announced at the dinner table
Dad, we learned something important today in Economics Class.
What's that, dear? I said, knowing the answer.
There's no such thing as free, she said.
In finance, There is No Such Thing as a Free Lunch (TANSTAAFL) refers to the opportunity cost paid to make a decision. The decision to consume one product usually comes with the trade-off of giving up the consumption of something else.
In a recent conversation, I was introduced to the notion of Extreme Contracts.
The first bullet sounds like a good idea, provided one week can actually produce testable outcomes. The second bullet means that the variable will be the produced outcomes, since the probability that all work is of the same size, same risk, same effort is likely low on any non-trivial project.
The last bullet fails to acknowledge the principle of lost opportunity cost. This is the there is no such thing as free. When the delivered software is not what is needed, the cost of the software in Extreme Contracts is free. But the lost capabilities, in the time frame they are needed, are not free. They have a cost. This is the Opportunity Cost that is lost.
The basis of good project management, and especially in our domain, using Earned Value, depends on providing the needed capabilities at the needed time for the needed cost. Baked into that paradigm is all the cost, planned upfront with the appropriate confidence levels, alternative assessment, and margins and reserves. The discovery cost, risk mitigation cost - both reducible and irreducible - and the cost recovery for productive use of the delivered product or service on the planned date for the planned cost.
This is the foundation of Capabilities Based Planning used in enterprise IT and software-intensive systems development.
Capabilities based planning (v2) from Glen Alleman
But to do this properly we need to have a standard set of terms that can form the basis of understanding the problem and the solution.
When those terms are redefined for whatever reason, the ability to exchange ideas is lost. For example, there is a popular notion that defining terms in support of an idea is useful.
So what does this actually mean in the project management domain?
Plans are strategies for the success of the project. Strategies are hypotheses. Hypotheses need to be tested to determine their validity. These tests - in the project domain - come from setting a plan, performing the work, assessing the compliance of the outcomes with the plan, then taking corrective actions in the next iteration of the work.
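The plan-work-assess-correct cycle above can be sketched as a loop that re-forecasts from observed performance. The velocities and backlog size below are invented for illustration.

```python
# A minimal sketch, with invented numbers, of the plan-work-assess-correct loop:
# perform the work, compare outcomes to the plan, and feed the observed
# performance back into the next forecast (the negative feedback).

planned_velocity = 10.0    # hypothesis: units of work delivered per iteration
actual_velocity = 8.0      # what the team actually delivers each iteration
backlog = 100.0            # total planned work

forecast = backlog / planned_velocity   # initial plan: 10 iterations
done = 0.0
observed = []

iteration = 0
while done < backlog:
    iteration += 1
    done += actual_velocity                  # perform the work
    observed.append(actual_velocity)         # assess the outcome against the plan
    avg = sum(observed) / len(observed)      # corrective action: re-forecast
    forecast = iteration + (backlog - done) / avg

print(f"initial forecast: 10 iterations, actual finish: iteration {iteration}")
```

The initial hypothesis (10 iterations) is falsified by the first assessment; feeding the measured velocity back into the forecast converges on the real answer (13 iterations) long before the end date arrives as a surprise.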
This seems obvious, but when we hear about the failures in the execution of the plans, we have to wonder what went wrong. Research has shown many Root Causes of project shortfalls. Here are four from our domain:
The root cause of each of these starts with the lack of the following:
Unrealistic Performance Expectations
When we set out to define what performance is needed, we must have a means of testing that this expectation can be achieved. There are several ways of doing this:
Unrealistic Cost and Schedule Estimates
Inadequate Assessment of Risk
Unanticipated Technical Issues
Each of these issues can be addressed through a Systems Engineering process using Measures of Effectiveness, Measures of Performance, and Technical Performance Measures. The planning process makes use of these measures to assess the credibility of the plan and the processes to test the hypothesis.
What's the difference between estimate and guess?
One way to distinguish between them is the degree of care taken when we arrive at a conclusion: a conclusion about how much effort the work will take, how much it will cost to perform that work, and whether that work will have any risk associated with it.
Estimate is derived from the Latin word aestimare, "to value." The term estimate is also the origin of estimable, meaning "capable of being estimated" or "worthy of esteem," and of esteem, which means "regard."
To make an estimate means to judge - using some method - the extent, nature, or value of something, with the implication that the result is based on expertise, data, a model, or familiarity. An estimate is the resulting calculation or judgment of the outcome or result. The related term is approximation, meaning "close or near." Estimates have a measure of nearness to the actual value. We may not be able to know the actual value, but the estimate is close to that value. The confidence in the estimate adds more information about the nearness of the estimate to the actual value.
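This idea of nearness plus confidence can be made concrete with a small sketch. The sample durations are hypothetical, and the normal-approximation interval is a simplification (a t-interval would be more careful for so few samples).

```python
import statistics as st

# Hypothetical past task durations, in hours, used to estimate the next task.
samples = [12, 9, 15, 11, 14, 10, 13, 12]

mean = st.mean(samples)           # the point estimate
sd = st.stdev(samples)            # sample standard deviation
se = sd / len(samples) ** 0.5     # standard error of the mean

# Rough 95% interval using the normal approximation (z = 1.96).
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"estimate: {mean:.1f} h, 95% interval: [{low:.1f}, {high:.1f}] h")
```

Saying "about 12 hours, very likely between 10.6 and 13.4" carries far more decision-making information than the bare number 12, and none at all compared to a guess.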
To guess is to believe or suppose, to form an opinion based on little or no evidence, or to be correct by chance or conjecture. A guess is a thought or idea arrived at by one of these methods. Guess is a synonym for conjecture and surmise, which, like estimate, can be used as a verb or a noun.
One step between a guess and an estimate is an educated guess, a more casual estimate. An idiomatic term for this conclusion is "ballpark figure." The origin of this American English idiom, which alludes to a baseball stadium, is not certain. One conclusion is that it is related to "in the ballpark," meaning "close" in the sense that someone at such a location may not be at a precise spot but is at least in the stadium.
We could have a hunch or an intuition about some outcome, some numerical value. Or we could engage in guesswork or speculation.
An interesting case is "dead reckoning," which now means the same thing as guesswork, though it originally referred to navigation based on reliable information. Near synonyms describing thoughts or ideas developed with more rigor include hypothesis and supposition, as well as theory and thesis.
A guess is a casual, spontaneous conclusion.
An estimate is based on some thought and/or data.
If those paying you can accept a Wild Ass Guess, then you're probably done. If they have tolerance (risk tolerance) for losing their value at risk if your guess is wrong, then go ahead and guess. Otherwise, some form of estimate is likely needed to inform your decision about some outcome in the future that is uncertain.
Managing other people's money - our internal money, money from a customer, money from an investor - means making rational decisions based on facts. In an uncertain and emerging future, making those decisions means assessing the impact of that decision on that future in terms of money, time, and delivered value.
To make those decisions - in the presence of this uncertainty - implies we need to develop information about the variables that appear in that future. We can use past data, of course. But that data needs to be adjusted for several factors:
No answers to these questions? Then the data you collected is not likely to have much value in making decisions about the future.
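One common way to use adjusted past data is to resample it in a small Monte Carlo simulation. Everything below is a hypothetical sketch: the past durations, the 20% context adjustment, and the backlog size are all invented for illustration.

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

# Hypothetical past story durations (days), collected in a different context.
past = [3, 5, 4, 6, 2, 5, 4, 7, 3, 5]
context_factor = 1.2   # assumed adjustment: the new context is ~20% slower

stories_remaining = 30
trials = 10_000

# Each trial: draw 30 durations from the adjusted past data and total them.
totals = sorted(
    sum(random.choice(past) * context_factor for _ in range(stories_remaining))
    for _ in range(trials)
)

p50 = totals[trials // 2]        # 50% confidence completion time
p80 = totals[int(trials * 0.8)]  # 80% confidence completion time

print(f"50% confidence: {p50:.0f} days, 80% confidence: {p80:.0f} days")
```

The point is not the particular numbers but the shape of the answer: a completion date stated with a confidence level, derived from past data that has been explicitly adjusted for the new context rather than used raw.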
I've been working over the week on a release of a critical set of project capabilities and need a break from that. This post will be somewhat scattered, as I'm writing it in the lobby to get some fresh air.
Here's the post asking for a conversation about estimates. Here's a long response to that request.
Let's ignore the term FACT for the moment as untestable and see how to arrive at some answers for each statement. These answers are from a paradigm of Software Intensive Systems, where the microeconomics of decision-making is the core paradigm used to make decisions, based on the risk and opportunity costs of those decisions.
So if the OP is actually interested in moving past the known problem of using estimates in a dysfunctional way, let's stop speaking about how to make decisions without estimates, and learn how to make the good estimates needed for good decisions.
This issue of Harvard Business Review is dedicated to Make Better Decisions. Start with reading how to make good decisions. There is a wealth of guidance on how to do that. Why use Dilbert-style management examples? We all know about those. How about some actionable corrective actions to the root causes of bad management, all backed up with data beyond personal anecdotes? It reminds me of early XP, where just try it was pretty much the approach. So if the OP is really interested in...
Let's use our collective influence and intelligence to take the discussion forward to how we can cure the horrible cancer in our industry of Estimate = Date = Commitment.
Then there are nearly unlimited resources for doing that. The first is to call BS on the notion that decisions can be made without estimates, without first stating where this is applicable. Acknowledge unequivocally that estimates are needed when the value at risk reaches a level deemed important by the owners of the money. Then start acting like the professionals we want to be and earn a seat at the table to improve the probability of success of our projects with credible estimates of cost, effort, risk, productivity, production of value, and all the other attributes of that success.
For those interested in exploring further how to provide value to those paying your salary, here are some posts on estimating books.
Full attribution to Gaping Void for this cartoon. http://gapingvoid.com/2010/05/03/daily-bizcard-11-fred-wilson/
Wayne makes real-time estimates on every skate stroke of where he is going, where the puck is going, and where all the defensemen are going to be when he plans to take his shot on goal.
When we hear we can make decisions about the future without estimating the impact of those decisions, or by using only small-sample, non-statistically-adjusted measures, or by ignoring the stochastic behaviors of the past and the future, we'll be on the losing end of the shot on goal.
There simply is no way out of the need to estimate the future for any non-trivial project funded by other people's money. Trivial project? Our own money? Act as you wish; no one cares what you do. But make a suggestion that it works somewhere else, and you'd better come to the table with some testable data independent of personal anecdotes.
Fit for Purpose and Fit for Use
Both these terms are used to describe the value of an IT outcome. A product or service's value is defined by fit for purpose (utility) and fit for use (warranty).
Fit for purpose, or utility, means the service fulfills the customer's needs.
Fit for use, or warranty, means the product or service is available when the user needs it.
From Philippe Kruchten's The Frog and the Octopus we get 8 factors defining a context from which to test any idea that we encounter.
So when we hear something that doesn't seem to pass the smell test, think of Carl's advice
And then when we hear you're just not willing to try out this idea, think of some more of his advice.
Then ask: how is this idea of yours Fit for Purpose and Fit for Use in those 8 context factors? Don't have an answer? Then maybe the personal anecdotes are not ready for prime time in the Enterprise domain.
When we hear of a new and exciting approach to old and never-ending problems, we need to first ask: in what domain is your new and clever solution applicable?
No domain? Then it's not likely to have been tested outside your personal anecdotal experience.
Here are Mark's scaling factors. He uses Agile as the starting point, but I'm expanding it to any suggestion that has yet to be tested outside of personal anecdotes.
What's the Point?
When a new approach is being proposed, without a domain, it's a solution looking for a problem to solve.