Herding Cats - Glen Alleman
Performance-Based Project Management® Principles, Practices, and Processes to Increase Probability of Success

Decision Making Means Making Inferences

Mon, 08/24/2015 - 17:03

In software development, we almost always encounter situations where a decision must be made when we are uncertain what the outcome might be, or even uncertain about the data used to make that decision.

Decision making in the presence of uncertainty is standard management practice in all business and technical domains, from business investment decisions to technical choices for project work.

Making decisions in the presence of uncertainty means making probabilistic inferences from the information available to the decision maker.

There are many techniques for decision making. Decision trees are common: the probability of an outcome is attached to each branch of the tree. If I go left at the branch - the decision - what happens? If I go right, what happens? Each branch point is a decision, and each of the two or more branches is an outcome. The probabilistic aspect is applied to the branches, and the outcomes - which may be probabilistic as well - are assessed for benefits to those making the decision.
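Rolling a small tree like this back to expected values takes only a few lines. Here is a minimal sketch in Python; the branch names, probabilities, and payoffs are invented for illustration:

```python
# A minimal sketch of rolling a decision tree back to expected values.
# The branch names, probabilities, and payoffs are invented for illustration.

def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Decision: go left or go right. Each branch has probabilistic outcomes.
left_branch = [(0.6, 120_000), (0.4, -30_000)]    # (probability, benefit $)
right_branch = [(0.9, 50_000), (0.1, -10_000)]

for name, outcomes in {"left": left_branch, "right": right_branch}.items():
    print(f"{name}: expected value = ${expected_value(outcomes):,.0f}")

# left: $60,000; right: $44,000. The higher expected value marks the
# preferred branch, subject to tolerance for the downside outcome.
```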

[Figure: decision tree]
Another approach is Monte Carlo simulation of decision trees. Tools we use for many decisions in our domain include Palisade and Crystal Ball; there are others. They work like the manual process in the first picture, but let you tune the probabilistic branching and probabilistic outcomes to model complex decision making processes.
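A minimal sketch of the Monte Carlo version of the same idea, assuming invented distributions: outcomes are sampled rather than fixed, and each branch yields a distribution of results instead of a single number.

```python
# A minimal sketch of the Monte Carlo version: outcomes are sampled from
# distributions instead of fixed payoffs. The distributions are invented.
import random

def sample_left_branch():
    # 60% chance of a win whose size itself varies; 40% chance of a loss.
    if random.random() < 0.6:
        return random.gauss(120_000, 25_000)
    return random.gauss(-30_000, 10_000)

results = sorted(sample_left_branch() for _ in range(10_000))
p10, p50, p90 = (results[int(len(results) * q)] for q in (0.10, 0.50, 0.90))
print(f"left branch: P10 = ${p10:,.0f}, P50 = ${p50:,.0f}, P90 = ${p90:,.0f}")

# Repeating this for each branch yields distributions to compare, not
# single numbers - which is what the commercial tools automate.
```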

In the project management paradigm of the projects we work, there are networks of activities. Each activity has some dependency on prior work, and each activity produces dependencies for follow-on work. These can be modeled with Monte Carlo simulation as well.


The Schedule Risk Analysis (SRA) of the network of work activities is mandated on a monthly basis in many of the programs we work. 
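Here is a minimal sketch of what an SRA does under the hood, on a small, invented activity network with triangular duration distributions. Commercial tools apply the same idea at scale:

```python
# A minimal sketch of a Schedule Risk Analysis: Monte Carlo simulation of
# a small, invented activity network with triangular duration distributions.
import random

# activity: (predecessors, (min, most_likely, max) duration in days),
# listed in topological order so predecessors are computed first.
network = {
    "A": ([], (4, 5, 9)),
    "B": (["A"], (3, 4, 8)),
    "C": (["A"], (5, 7, 14)),
    "D": (["B", "C"], (2, 3, 6)),
}

def simulate_once():
    finish = {}
    for activity, (preds, (lo, ml, hi)) in network.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[activity] = start + random.triangular(lo, hi, ml)
    return max(finish.values())

durations = sorted(simulate_once() for _ in range(10_000))
print(f"P50 = {durations[5_000]:.1f} days, P80 = {durations[8_000]:.1f} days")

# The gap between P50 and P80 is one way to size schedule margin.
```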

In Kanban and Scrum systems, Monte Carlo simulation is a powerful tool to reveal the expected performance of the development activity. Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects Using Monte Carlo Simulation, by Troy Magennis, is a good place to start for this approach.
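A minimal sketch of that style of forecast, assuming an invented throughput history and backlog size: resample past sprint throughput to get a distribution of completion dates rather than a single number.

```python
# A minimal sketch of the Magennis-style forecast: resample historical
# throughput to get a distribution of completion dates, not a single
# number. The throughput history and backlog size are invented.
import random

throughput_history = [6, 9, 7, 4, 8, 7, 5, 10]  # stories done per sprint
backlog = 60                                    # stories remaining

def sprints_to_finish():
    done, sprints = 0, 0
    while done < backlog:
        done += random.choice(throughput_history)  # sample past throughput
        sprints += 1
    return sprints

runs = sorted(sprints_to_finish() for _ in range(10_000))
print(f"P50 = {runs[5_000]} sprints, P85 = {runs[8_500]} sprints")

# Valid only to the extent future throughput behaves like the past.
```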

Each of these approaches, and others, is designed to provide actionable information to the decision makers. This information requires a minimum understanding of what is happening to the system being managed (a sketch of how the first two items can be modeled in code follows the list):

  • What are the naturally occurring variances of the work activities that we have no control over - aleatory uncertainty?
  • What are the event based probabilities of some occurrence - epistemic uncertainty?
  • What are the consequences of each outcome - decision, probabilistic event, or naturally occurring variance - on the desired behavior of the system?
  • What choices can be made that will influence these outcomes?
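As a sketch of how the first two kinds of uncertainty show up in one model (all numbers invented): aleatory variance as a distribution that only margin can absorb, and epistemic uncertainty as a discrete risk event whose probability can be bought down.

```python
# A minimal sketch of both kinds of uncertainty in one simulation; all
# numbers are invented. Aleatory: natural variance, modeled as a
# distribution that only margin can absorb. Epistemic: a discrete risk
# event whose probability can be bought down.
import random

def simulate_task_days():
    duration = random.gauss(10.0, 1.5)   # aleatory: irreducible variance
    if random.random() < 0.20:           # epistemic: 20% chance of a
        duration += 5.0                  # risk event adding 5 days
    return duration

samples = sorted(simulate_task_days() for _ in range(10_000))
print(f"P50 = {samples[5_000]:.1f} days, P80 = {samples[8_000]:.1f} days")

# Buying down the epistemic probability narrows the tail; the aleatory
# sigma remains, and only margin protects against it.
```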

In many cases, the information available to make these choices is in the future. Some is in the past. But information from the past needs careful assessment.

Past data is only useful if you can be assured the future is like the past. If not, making decisions using past data, without adjusting that data for possible changes in the future, takes you straight into the ditch - see The Flaw of Averages.

In order to have any credible assessment of the impact of a decision using data about the future - where will the system be going? - it is mandatory to ESTIMATE.

It is simply not possible to make decisions about future outcomes in the presence of uncertainty in that future without making estimates.

Anyone who says you can is incorrect. And if they insist it can be done, ask for testable evidence of their conjecture, based on the mathematics of probabilistic systems. No credible, testable data? Then it's pure speculation. Move on.

The False Conjecture of Deciding in Presence of Uncertainty without Estimates

  • Slicing the work into similar-sized chunks, performing work on those chunks, and using that data to make claims about the future makes the huge assumption that the future is like the past.
  • Recording past performance, making nice plots, and running static analysis for mean, mode, standard deviation, and variance is naive at best. The time series variances are rolled up, hiding the latent variances that will emerge in the future. Time series analysis (ARIMA) is required to reveal the possible values in the past dataset that will emerge in the future, assuming the system under observation remains the same (a minimal sketch follows).
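A minimal sketch of such an analysis, using the ARIMA implementation in statsmodels. The velocity data and the (1, 0, 1) order are invented placeholders; real use requires model identification, diagnostics, and a stationarity check before trusting the forecast.

```python
# A minimal sketch of time series analysis with the ARIMA implementation
# in statsmodels. The velocity data are invented and the (1, 0, 1) order
# is a placeholder; real use requires identification, diagnostics, and a
# stationarity check before trusting the forecast.
from statsmodels.tsa.arima.model import ARIMA

velocity = [21, 24, 19, 26, 23, 25, 20, 27, 24, 26, 22, 28]  # per sprint

fitted = ARIMA(velocity, order=(1, 0, 1)).fit()
forecast = fitted.get_forecast(steps=4)
print(forecast.predicted_mean)         # point forecasts for 4 sprints
print(forecast.conf_int(alpha=0.20))   # 80% intervals - the variance a
                                       # single rolled-up average hides
```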

Time series analysis is a fundamental tool for forecasting future outcomes from past data. Weather forecasting - plus complex compressible fluid flow models - is based on time series analysis. Stock market forecasting uses time series analysis. Cost and schedule modeling uses time series analysis. Adaptive process control algorithms, like the speed control and fuel management in your modern car, use time series analysis.

George E. P. Box, one of the originators of time series analysis and author of the seminal book Time Series Analysis, Forecasting and Control, is often seriously misquoted when he said All Models are Wrong, Some are Useful. Anyone misusing that quote to try to convince you that you can't model the future didn't (or can't) do the math in Box's book, and likely got a D in the high school probability and statistics class.

So do the math, read the proper books, gather past data, model the future with dependency networks and Kanban and Scrum backlogs, measure current production, forecast future production based on Monte Carlo models - and don't believe for a moment that you can make decisions about future outcomes in the presence of uncertainty without estimating that future.

Related articles: Making Conjectures Without Testable Outcomes, Why Guessing is not Estimating and Estimating is not Guessing, IT Risk Management, Architecture-Centered ERP Systems in the Manufacturing Domain
Categories: Project Management

Managing in the Presence of Uncertainty

Mon, 08/24/2015 - 05:43

A Tweet caught my eye this weekend

Before moving to risk, let's look at what Agile is.

Agile development is a phrase used in software development to describe methodologies for incremental software development. It is an alternative to traditional project management, where emphasis is placed on empowering people to collaborate and make team decisions, in addition to continuous planning, continuous testing, and continuous integration.

Next, the notion that Agile is actually risk management is very misunderstood. Agile provides raw information for risk management, but risk management has little to do with which software development method is being used. The continuous nature of Agile provides more frequent feedback on the state of the project. That is advantageous to risk management. Since Agile mandates this feedback on fine-grained boundaries - weeks not months - the actions in the risk management paradigm below are also fine-grained.

Where Does Risk Come From?

All risk comes from uncertainty. Uncertainty comes in two types: (1) Aleatory (naturally occurring in the underlying process and therefore irreducible) and (2) Epistemic (the probability that some unfavorable event will happen, which can be reduced with new knowledge).

Risk results from uncertainty. To deal with the risk from aleatory uncertainty we can only have margin, since the resulting risk is irreducible. This is schedule margin, cost margin, and product performance margin. This type of risk is just part of the world we live in. Natural variance in the work performed developing products needs margin. Natural variance in a server's throughput needs margin.

We can deal directly with the risk from epistemic uncertainty by buying down the uncertainty. This is done with experiments, trials, incremental development, and other risk reduction activities that lower the uncertainty in the processes.
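A minimal sketch of the buy-down arithmetic, with invented probabilities, impacts, and costs: compare the expected exposure before and after a risk-reduction activity.

```python
# A minimal sketch of the buy-down arithmetic; probabilities, impacts,
# and costs are invented. Exposure = probability x impact.
risks = [
    ("integration fails under load", 0.30, 200_000),
    ("vendor API change", 0.10, 50_000),
]

def exposure(risk_list):
    return sum(p * impact for _, p, impact in risk_list)

print(f"exposure before: ${exposure(risks):,.0f}")

# Suppose a $20,000 load-test prototype cuts the first probability to 0.05.
bought_down = [("integration fails under load", 0.05, 200_000),
               ("vendor API change", 0.10, 50_000)]
print(f"exposure after:  ${exposure(bought_down):,.0f}, for $20,000 spent")

# The buy-down pays when the exposure reduction exceeds its cost:
# here $65,000 - $15,000 = $50,000 of reduction for $20,000.
```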

By the way, many use the notion that risk is both positive and negative. This is not true. It's a naive understanding of the mathematics of risk processes. PMI does this. It is not allowed in our domain(s).

Agile and Incremental Delivery

There is a popular myth in the agile community that it has a lock on the notion of incremental delivery. This is again not true. Many product development lifecycles use incremental and iterative processes to produce products: Spiral Development, Integrated Master Plan/Integrated Master Schedule, Incremental Commitment. All are applicable to Software Intensive Systems and System of Systems domains, like enterprise ERP.

Managing in the Presence of Uncertainty and the Resulting Risk

Here's how we manage SIS and SoS in the presence of uncertainty.

The methods used to collect requirements, turn those requirements into products, and ship those products are of little concern to the Risk Management Process. They are of great concern to those engineering the products, but the Risk Management Process sits above that activity. You can start with the SEI Continuous Risk Management Guidebook for a framework for managing software development in the presence of risk. The management of risk in agile is very close to the management of risk in any other product or service development process. For any risk management process to work, it needs to have these process areas as a minimum. So when you hear that agile manages risk, confirm that is actually taking place by having the person making that claim show clearly, in a process description, how each of the process areas below is implemented. Without that connection it just ain't true.

[Figure: the risk management process areas]

And here's how to put these processes together to ensure risk is being managed to increase the probability of success of your project.
[Figure: the risk management process areas connected]

And how to put these process areas to work on a project:

[Figure: the process areas applied on a project]

For those interested in managing projects in the presence of uncertainty and the risk that uncertainty creates, independent of any development methodology or development framework, here's a collection from the office library, in no particular order:

And a short white paper on Risk Management in Enterprise IT

Information Technology Risk Management, from Glen Alleman.

Related articles: IT Risk Management, Making Conjectures Without Testable Outcomes, Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Anecdotes versus Numbers (Statistics)

Sat, 08/22/2015 - 04:08

In the world of project management, and the process improvement efforts needed to increase the Probability of Project Success, anecdotes appear to prevail when it comes to suggesting alternatives to observed dysfunction.

If we were to pile all the statistics, for all the data, for the effectiveness or ineffectiveness of all the process improvement methods on top of each other, they would lack the persuasive power of a single anecdote in most software development domains outside of Software Intensive Systems.

Why? Because most people working on small, agile development projects - as opposed to Enterprise, mission critical, can't fail projects that must show up on time, on budget, with not just the minimum viable product but the mandatorily needed viable capability - rely on anecdotes to communicate their messages.

I say this not just from personal experience, but from research for government agencies and commercial enterprise firms tasked with Root Cause Analysis, from conference proceedings, refereed journal papers, and guidance from those tasked with the corrective actions for major program failures.

Anecdotes appeal to emotion. Statistics, numbers, verifiable facts appeal to reason. It's not a fair fight. Emotion always wins, without acknowledging that emotion is seriously flawed when making decisions.

Anecdotal evidence is evidence where small numbers of anecdotes are presented. There is a large chance - statistically - that this evidence is unreliable due to cherry picking or self-selection (this is the core issue with the Standish Reports, or with anyone claiming anything without proper statistical sampling processes).

Anecdotal evidence is considered dubious support for any generalized claim. Anecdotal evidence is no more than a type of description (i.e., a short narrative), and is often confused in discussions with its weight, or other considerations, as to the purpose(s) for which it is used.

We've all heard the stories: half of all IT projects fail. Waterfall is evil. Hell, even estimates are evil - stop doing them cold turkey. They prove the point the speaker is making, right? Actually they don't. I just used an anecdote to prove a point.

If I said The Garfunkel Institute just released a study showing 68% of all software development projects did not succeed because the requirements gathering process failed to define what capabilities were needed when done, I'd have made a fact-based point. And you'd become bored reading the 86 pages of statistical analysis and correlation charts between all the causal factors contributing to the success or failure of the sample space of projects. See, you are bored.

Instead, if I said every project I've worked on went over budget and was behind schedule because we were very poor at making estimates, that'd be more appealing to your emotions, since it is a message you can relate to personally - having likely experienced many of the same failures.

The purveyors of anecdotal evidence to support a position make use of a common approach: willfully ignoring a fact-based methodology through a simple tactic...

We all know what Mark Twain said about lies, damned lies, and statistics.

People can certainly lie with statistics; it's done all the time. Start with How to Lie With Statistics. But those types of lies are nothing compared to the ability to script personal anecdotes to support a message. From I've never seen that work, to what, now you're telling me - the person that actually invented this earth shattering way of writing software - that it doesn't work outside my personal sphere of experience?

An anecdote is a statistic with a sample size of one. OK, maybe a sample size of a small group  of your closest friends and fellow travelers. 

We fall for this all the time.  It's easier to accept an anecdote describing a problem and possible solution from someone we have shared experiences with, than to investigate the literature, do the math, even do the homework needed to determine the principles, practices, and processes needed for corrective action.

Don’t fall for manipulative opinion-shapers who use story-telling as a substitute for facts. When we're trying to persuade, use facts, and use actual examples based on those facts. Use data that can be tested, not personal anecdotes used to support an unsubstantiated claim without suggesting both the root cause and the testable corrective actions.

Related articles: Making Conjectures Without Testable Outcomes, Deadlines Always Matter, Root Cause of Project Failure
Categories: Project Management

How To Lie With Statistics

Fri, 08/21/2015 - 22:51

How To Lie With Statistics is a critically important book to have on your desk if you're involved in any decision making. My edition is a First Edition, but I don't have the dust jacket, so it's not worth that much beyond the current versions.

The reason for this post is to lay the groundwork for assessing reports, presentations, webinars, and other selling documents that contain statistical information.

The classic statistical misuse is the Standish Report, describing the success and failure of IT projects.

Here's my summation of the elements of How To Lie in our project domain:

  • The Sample with the Built-In Bias - the population of the sample space is not defined. The samples are self-selected, in that only those who respond are the basis of the statistics. No adjustment is made for all those who did not respond to a survey, for example.
  • The Well Chosen Average - the arithmetic mean, median, and mode are estimators of the population statistics. Any of these without a variance is of little value for decision making (see the sketch after this list).
  • The Little Figures That Are Not There - the classic is use this approach (in this case #NoEstimates) and your productivity will improve 10X - that's 1000%, by the way. A 1000% improvement. That's unbelievable, literally unbelievable. The actual improvements are never stated, only the percentage. The baseline performance is not stated. It's unbelievable.
  • Much Ado About Practically Nothing - the probability of being in the range of normal. This is the basis of advertising. What's the variance?
  • The Gee-Whiz Graph - using graphics and adjustable scales provides the opportunity to manipulate the message. The classic example of this is the estimating errors in a popular graph used by the No Estimates advocates. It's a graph showing the number of projects that complete over their estimated cost and schedule. What's not shown is the credibility of the original estimate.
  • The One-Dimensional Picture - using a picture to show numbers, where the picture is not in the same scale as the numbers, provides a messaging path for visual readers.
  • The Semi-attached Figure - if you can't prove what you want to prove, demonstrate something else and pretend that they are the same thing. In one example, the logic is inverted: estimating is conjectured to be the root cause of problems. With no evidence of that, the statement becomes we don't see how estimating can produce success, so not estimating will increase the probability of success.
  • Post Hoc Rides Again - post hoc causality is common in the absence of a cause and effect understanding. The differences between correlation and causality are many times not understood.
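To see why an average without a variance misleads, here is a minimal sketch on an invented, right-skewed set of task durations; the mode, median, and mean all tell different stories:

```python
# A minimal sketch of "The Well Chosen Average" on an invented,
# right-skewed set of task durations. Each average tells a different
# story, and none of them means much without the spread.
import statistics

days = [2, 2, 2, 3, 3, 4, 5, 6, 8, 12, 21]

print("mode:  ", statistics.mode(days))             # 2   - what a seller quotes
print("median:", statistics.median(days))           # 4   - the typical case
print("mean:  ", round(statistics.mean(days), 1))   # 6.2 - dragged up by the tail
print("stdev: ", round(statistics.stdev(days), 1))  # the spread the averages hide
```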

Here's a nice example of How To Lie.

There's a chart from an IEEE Computer article showing the number of projects that exceeded their estimated cost. But let's start with some research on the problem: Coping with the Cone of Uncertainty.

There is a graph, popularly used to show that estimates rarely match the actual outcomes:

[Figure: Todd Little's chart of project actuals versus estimates]

This diagram is actually MISUSED by the #NoEstimates advocates.

The presentation below shows the follow-on information for how estimates can be improved to increase the confidence in the process and improvements in the business. It also shows the root causes of poor estimates and their corrective actions. Please ignore any use of Todd's chart without the full presentation.

My mistake was doing just that.

So before anyone accepts any conjecture from a #NoEstimates advocate using the graph above, please read the briefing at the link below to see the corrective actions for poor estimating.


Here's the link to Todd's entire briefing, not just the many times misused graph of estimates not representing the actuals: Uncertainty Surrounding the Cone of Uncertainty.

Related articles: Root Cause of Project Failure, Estimating and Making Decisions in Presence of Uncertainty, Are Estimates Really The Smell of Dysfunction?
Categories: Project Management

What is a Software Intensive System?

Thu, 08/20/2015 - 14:52

When we hear about software development in the absence of a domain, it's difficult to have a discussion about the appropriate principles, processes, and practices for that work. Here's one paradigm that has served us well.

In the Software Intensive System world - Number 6 and beyond - here's some background.

Related articles: Making Conjectures Without Testable Outcomes, Root Cause of Project Failure
Categories: Project Management

One More #NoEstimates Post

Thu, 08/20/2015 - 02:37

Steve McConnell's recent post on estimating prompted me to make one more post on this topic. First, some background on my domain and point of view.

I work in what is referred to as Software Intensive Systems (SIS) - covering the full lifecycle: Introduction, Foundations, Development Lifecycle, Requirements, Analysis and Design, Implementation, Verification and Validation, Summary and Outlook - and those SISs are usually embedded in a System of Systems.

This may not be the domain where the No Estimates advocates work. Their systems may not be software intensive and, more often than not, not systems of systems. And as one of the more vocal supporters of No Estimates likes to say, the color of your sky is different than mine. And yes it is. It's blue, and we know why: it's Rayleigh scattering. The reason we know why is that engineers and scientists occupy the hallways of our office, along with all the IT and business SW developers running the enterprise IT systems that enable the production of all the SISs embedded in the SoS products.

Here's a familiar framework for the spectrum of software systems

I'll add to Steve's comments in italics, while editing out material not germane to my responses but still in support of Steve's. Before we start, here's one important concept: In project management we do not seek perfect prediction. We seek early warning signals to enable predictive corrective actions.

1. Estimation is often done badly and ineffectively and in an overly time-consuming way. 

My company and I have taught upwards of 10,000 software professionals better estimation practices, and believe me, we have seen every imaginable horror story of estimation done poorly. There is no question that "estimation is often done badly" is a true observation of the state of the practice.

The role of estimating is found in many domains. Independent Cost Estimates (ICE) are mandated in many domains I work in. Estimating professional organizations provide guidance, materials, and communities: www.iceaa.org, www.aace.org. NASA, DOD, DOE, DHS, DOJ, and most every "heavy industry" from dirt moving to writing software for money has some formalized estimating process.

2. The root cause of poor estimation is usually lack of estimation skills. 

Estimation done poorly is most often due to lack of estimation skills. Smart people using common sense is not sufficient to estimate software projects. Reading two page blog articles on the internet is not going to teach anyone how to estimate very well. Good estimation is not that hard, once you’ve developed the skill, but it isn’t intuitive or obvious, and it requires focused self-education or training. 

One of the most common estimation problems is people engaging with so-called estimates that are not really Estimates, but that are really Business Targets or requests for Commitments. You can read more about that in my estimation book or watch my short video on Estimates, Targets, and Commitments. 

Root Cause Analysis is one of our formal processes. We apply Reality Charting® to all technologies of our work. RCA is part of governance and continuous process improvement. Conjecturing that estimates are somehow the "smell" of something else - without stating that problem and, most importantly, confirming the root cause of the problem, providing corrective actions, and most critically confirming the corrective action removes the root cause - is bad management at best and naive management at worst.

3. Many comments in support of #NoEstimates demonstrate a lack of basic software estimation knowledge. 

I don’t expect most #NoEstimates advocates to agree with this thesis, but as someone who does know a lot about estimation I think it’s clear on its face. Here are some examples:

(a) Are estimation and forecasting the same thing? As far as software estimation is concerned, yes they are. (Just do a Google or Bing search of "definition of forecast".) Estimation, forecasting, prediction - it's all the same basic activity, as far as software estimation is concerned.

The notion of redefining terms to suit the needs of the speaker is troubling. Estimating is about the past, present, and future. As a former physicist, I made estimates of the scattering cross section of particle collisions, so we knew where to look for the signature of the collision. In a second career - since I really didn't have the original ideas needed for the profession of particle physics - I estimated the signature parameters in mono-pulse Doppler radar signals to identify targets in missile defense systems. Same for signatures from sonar systems, used to separate whales and Biscayne Bay speed boats from Oscar Class Russian submarines.

Forecasting is estimating some outcome in the future. Weather forecasters make predictions of the probability of rain in the coming days. 

(b) Is showing someone several pictures of kitchen remodels that have been completed for $30,000 and implying that the next kitchen remodel can be completed for $30,000 estimation? Yes, it is. That’s an implementation of a technique called Reference Class Forecasting. 

Reference Class Forecasting is fundamental to good estimating. But other techniques are useful as well: parametric modeling, and design-based models in systems engineering, where SysML has estimating databases. Model Based Design is a well developed discipline in our domain and others. Even Subject Matter Experts (although actually less desirable) can be a start, with wide-band Delphi.

(c) Does doing a few iterations, calculating team velocity, and then using that empirical velocity data to project a completion date count as estimation? Yes, it does. Not only is it estimation, it is a really effective form of estimation. I’ve heard people argue that because velocity is empirically based, it isn’t estimation. Good estimation is empirically based, so that argument exposes a lack of basic understanding of the nature of estimation.

All good estimates are based on some "reference class." Gathering data to build a reference class may be needed. But care is needed in using the "first few sprints" without first answering some questions. Such a projection:

  • Is a forecast of the future.
  • Is the future like the past?
  • Are there changes in the underlying statistical process in the future that are not accounted for in the past?
  • Are the underlying statistical processes for irreducible (aleatory) uncertainty stationary? That is, are the natural variances in the project work the same across the life span of the project, or do they change as time passes?
Empirical estimation requires knowing something about the underlying statistical and probabilistic processes. Without this knowledge, those empirical measurements are "point" measures and not likely to be representative of the future.
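One way to probe whether the past can be projected forward is a stationarity check on the past data. A minimal sketch using the augmented Dickey-Fuller test from statsmodels, on invented throughput data:

```python
# A minimal sketch of a stationarity check with the augmented
# Dickey-Fuller test from statsmodels, on invented throughput data. A
# real check would also examine rolling mean and variance, and account
# for known upcoming changes to the team or the work.
from statsmodels.tsa.stattools import adfuller

throughput = [7, 8, 6, 9, 7, 8, 10, 6, 9, 8, 7, 9,
              8, 10, 7, 9, 8, 6, 9, 8, 10, 7, 8, 9]

stat, p_value, *_ = adfuller(throughput, maxlag=2)
print(f"ADF statistic = {stat:.2f}, p-value = {p_value:.3f}")
if p_value < 0.05:
    print("No unit root detected; the series looks stationary.")
else:
    print("Cannot reject non-stationarity; the past may not project forward.")
```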

(d) Is counting the number of stories completed in each sprint rather than story points, calculating the average number of stories completed each sprint, and using that for sprint planning, estimation? Yes, for the same reasons listed in point (c). 

This is estimating. But the numbers alone are no good as "estimators." The variance, and the stability of that variance, is needed. The past is a predictor of the future ONLY if the future is like the past. This is the role of time series analysis, where simple and free tools can be used to produce a credible estimate of the future from the past.

(e) Most of the #NoEstimates approaches that have been proposed, including (c) and (d) above, are approaches that were defined in my book Software Estimation: Demystifying the Black Art, published in 2006. The fact that people are claiming these long-ago-published techniques as "new" under the umbrella of #NoEstimates is another reason I say many of the #NoEstimates comments demonstrate a lack of basic software estimation knowledge.

The use of Slicing - one proposed #NoEstimates technique - is estimating. Using the No in front of Estimates and then referencing "slicing" seems a bit disingenuous. But slicing is subject to the same issue as all reference classes that are not adjusted for future changes. The past may not be like the future. Confirmation and adjustment are part of good estimating.

(f) Is estimation time consuming and a waste of time? One of the most common symptoms of lack of estimation skill is spending too much time on ineffective activities. This work is often well-intentioned, but it’s common to see well-intentioned people doing more work than they need to get worse estimates than they could be getting.

This notion that those spending the money get to say what is waste and what is not would be considered hubris in any other context. In an attempt not to be rude (one of the No Estimates advocates' favorite comebacks when presented with a tough question - à la Jar Jar Binks), estimates are primarily not for those spending the money but for those providing the money. How much, when, and what are business questions that need answers in any non-trivial business transaction. If the need to know is not there, it is likely the "value at risk" for the work is low enough that no one cares what it costs, when it will be done, or what we'll get when we're done.

Just to be crystal clear, I use the term non-trivial to mean a project whose cost and schedule, and possibly whose produced content - when missed - impact the business in a manner detrimental to its operation.

(g) Is it possible to get good estimates? Absolutely. We have worked with multiple companies that have gotten to the point where they are delivering 90%+ of their projects on time, on budget, with intended functionality. 

Of course it is, and good estimates happen all the time. Bad estimates happen all the time as well. One of my engagements is with the Performance Assessment and Root Cause Analyses division of the US DOD. Root Cause Analysis of ACAT1 Nunn-McCurdy programs shows the following. Similar root causes can be found for commercial projects.

[Chart: root causes of program breaches, Gary Bliss]

One reason many people find estimation discussions (aka negotiations) challenging is that they don't really believe the estimates they came up with themselves. Once you develop the skill needed to estimate well -- as well as getting clear about whether the business is really talking about an estimate, a target, or a commitment -- estimation discussions become more collaborative and easier. 

The Basis of Estimate problem is universal. I was on a proposal team that lost to an arch rival because our "basis of estimate" included an unrealistic staffing plan. Building a credible estimate is actual work. The size of the project, the "value at risk," the tolerance for risk, and a myriad of other factors all go into deciding how to make the estimate. All good estimates and estimating practices are full collaboration.

When management abuse of estimates is called out, it has not been explained how NOT estimating corrects the management abuse.

4. Being able to estimate effectively is a skill that any true software professional needs to develop, even if they don’t need it on every project. 

"Estimation often doesn't work very well, therefore software professionals should not develop estimation skill" - this is a common line of reasoning in #NoEstimates. This argument doesn't make any more sense than the argument, "Scrum often doesn't work very well, therefore software professionals should not try to use Scrum." The right response in both cases is, "Get better at the practice," not "Throw out the practice altogether."

The notion of "I can't learn to estimate well" is not the same as "it's not possible to learn to estimate well." There are professional estimating organizations, books, journals, courses. What is really being said is "I don't want to learn to estimate."

#NoEstimates advocates say they're just exploring the contexts in which a person or team might be able to do a project without estimating. That exploration is fine, but until someone can show that the vast majority of projects do not need estimates at all, deciding to not estimate and not develop estimations skills is premature. And my experience tells me that when all the dust settles, the cases in which no estimates are needed will be the exception rather than the rule. Thus software professionals will benefit -- and their organizations will benefit -- from developing skill at estimation. 

Those #NoEstimates advocates appear to not ask those paying their salary what they need in terms of estimates. Ignore for the moment the Dilbert managers. This is a day one issue. #NoEstimates willfully ignores the needs of the business. And when called on it, says "if management needs estimates, we should estimate." Any manager accountable for a non-trivial expenditure who doesn't have some type of Estimate to Complete and Estimate at Completion isn't going to be a manager for very long when the project shows up late, over budget, and doesn't deliver the needed capabilities.

I would go further and say that a true software professional should develop estimation skill so that you can estimate competently on the numerous projects that require estimation. I don't make these claims about software professionalism lightly. I spent four years as chair of the IEEE committee that oversees software professionalism issues for the IEEE, including overseeing the Software Engineering Body of Knowledge, university accreditation standards, professional certification programs, and coordination with state licensing bodies. I spent another four years as vice-chair of that committee. I also wrote a book on the topic, so if you're interested in going into detail on software professionalism, you can check out my book, Professional Software Development. Or you can check out a much briefer, more specific explanation in my company's white paper about our Professional Development Ladder. 

5. Estimates serve numerous legitimate, important business purposes.

Estimates are used by businesses in numerous ways, including: 

  • Allocating budgets to projects (i.e., estimating the effort and budget of each project)
  • Making cost/benefit decisions at the project/product level, which is based on cost (software estimate) and benefit (defined feature set)
  • Deciding which projects get funded and which do not, which is often based on cost/benefit
  • Deciding which projects get funded this year vs. next year, which is often based on estimates of which projects will finish this year
  • Deciding which projects will be funded from CapEx budget and which will be funded from OpEx budget, which is based on estimates of total project effort, i.e., budget
  • Allocating staff to specific projects, i.e., estimates of how many total staff will be needed on each project
  • Allocating staff within a project to different component teams or feature teams, which is based on estimates of scope of each component or feature area
  • Allocating staff to non-project work streams (e.g., budget for a product support group, which is based on estimates for the amount of support work needed)
  • Making commitments to internal business partners (based on projects' estimated availability dates)
  • Making commitments to the marketplace (based on estimated release dates)
  • Forecasting financials (based on when software capabilities will be completed and revenue or savings can be booked against them)
  • Tracking project progress (comparing actual progress to planned (estimated) progress)
  • Planning when staff will be available to start the next project (by estimating when staff will finish working on the current project)
  • Prioritizing specific features on a cost/benefit basis (where cost is an estimate of development effort)

These are just a subset of the many legitimate reasons that businesses request estimates from their software teams. I would be very interested to hear how #NoEstimates advocates suggest that a business would operate if you remove estimates for each of these purposes.

The #NoEstimates response to these business needs is typically of the form, "Estimates are inaccurate and therefore not useful for these purposes" rather than, "The business doesn't need estimates for these purposes."

That argument really just says that businesses are currently operating on the basis of much worse predictions than they should be, and probably making poorer decisions as a result, because the software staff are not providing very good estimates. If software staff provided more accurate estimates, the business would make better decisions in each of these areas, which would make the business stronger. 

The other #NoEstimates response is that "Estimates are always waste." I don't agree with that. By that line of reasoning, daily stand ups are waste. Sprint planning is waste. Retrospectives are waste. Testing is waste. Everything but code-writing itself is waste. I realize there are Lean purists who hold those views, but I don't buy any of that. 

Estimates, done well, support business decision making, including the decision not to do a project at all. Taking the #NoEstimates philosophy to its logical conclusion, if #NoEstimates eliminates waste, then #NoProjectAtAll eliminates even more waste. In most cases, the business will need an estimate to decide not to do the project at all.  

In my experience businesses usually value predictability, and in many cases, they value predictability more than they value agility. Do businesses always need predictability? No, there are few absolutes in software. Do businesses usually need predictability? In my experience, yes, and they need it often enough that doing it well makes a positive contribution to the business. Responding to change is also usually needed, and doing it well also makes a positive contribution to the business. This whole topic is a case where both predictability and agility work better than either/or. Competency in estimation should be part of the definition of a true software professional, as should skill in Scrum and other agile practices. 

Estimates are the basis of managerial finance and decision making in the presence of uncertainty (the microeconomics of software development). The accuracy and precision of the estimates is usually determined by the value at risk: from low risk, which may mean no estimates, to high risk, which means frequently updated independent validation of the estimates. But in nearly all business decisions - unless the value at risk can be written off - there is a need to know something about the potential loss as well as the potential gain.

6. Part of being an effective estimator is understanding that different estimation techniques should be used for different kinds of estimates. 

One thread that runs throughout the #NoEstimates discussions is lack of clarity about whether we're estimating before the project starts, very early in the project, or after the project is underway. The conversation is also unclear about whether the estimates are project-level estimates, task-level estimates, sprint-level estimates, or some combination. Some of the comments imply ineffective attempts to combine kinds of estimates - the most common confusion I've read is trying to use task-level estimates to estimate a whole project, which is another example of lack of software estimation skill.

You can see a summary of estimation techniques and their areas of applicability here. This quick reference sheet assumes familiarity with concepts and techniques from my estimation book and is not intended to be intuitive on its own. But just looking at the categories you can see that different techniques apply for estimating size, effort, schedule, and features. Different techniques apply for small, medium, and large projects. Different techniques apply at different points in the software lifecycle, and different techniques apply to Agile (iterative) vs. Sequential projects. Effective estimation requires that the right kind of technique be applied to each different kind of estimate. 

Learning these techniques is not hard, but it isn't intuitive. Learning when to use each technique, as well as learning each technique, requires some professional skills development. 

When we separate the kinds of estimates we can see parts of projects where estimates are not needed. One of the advantages of Scrum is that it eliminates the need to do any sort of miniature milestone/micro-stone/task-based estimates to track work inside a sprint. If I'm doing sequential development without Scrum, I need those detailed estimates to plan and track the team's work. If I'm using Scrum, once I've started the sprint I don't need estimation to track the day-to-day work, because I know where I'm going to be in two weeks and there's no real value added by predicting where I'll be day-by-day within that two week sprint. 

That doesn't eliminate the need for estimates in Scrum entirely, however. I still need an estimate during sprint planning to determine how much functionality to commit to for that sprint. Backing up earlier in the project, before the project has even started, businesses need estimates for all the business purposes described above, including deciding whether to do the project at all. They also need to decide how many people to put on the project, how much to budget for the project, and so on. Treating all the requirements as emergent on a project is fine for some projects, but you still need to decide whether you're going to have a one-person team treating requirements as emergent, or a five-person team, or a 50-person team. Defining team size in the first place requires estimation. 

7. Estimation and planning are not the same thing, and you can estimate things that you can’t plan. 

Many of the examples given in support of #NoEstimates are actually indictments of overly detailed waterfall planning, not estimation. The simple way to understand the distinction is to remember that planning is about "how" and estimation is about "how much."

Can I "estimate" a chess game, if by "estimate" I mean how each piece will move throughout the game? No, because that isn't estimation; it's planning; it's "how."

Can I estimate a chess game in the sense of "how much"? Sure. I can collect historical data on the length of chess games and know both the average length and the variation around that average, and predict the length of a game.

More to the point, estimating an individual software project is not analogous to estimating one chess game. It’s analogous to estimating a series of chess games. People who are not skilled in estimation often assume it’s more difficult to estimate a series of games than to estimate an individual game, but estimating the series is actually easier. Indeed, the more chess games in the set, the more accurately we can estimate the set, once you understand the math involved. 
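The arithmetic behind that claim, with invented numbers: for independent games, the spread of the total grows as the square root of n while the total itself grows as n, so the relative uncertainty shrinks.

```python
# A minimal sketch of the arithmetic, with invented numbers: for n
# independent games with mean mu and standard deviation sigma, the total
# has mean n * mu and standard deviation sqrt(n) * sigma, so the relative
# spread shrinks as 1 / sqrt(n).
import math

mu, sigma = 40.0, 15.0  # assumed moves per game: average and spread

for n in (1, 4, 16, 64):
    total, spread = n * mu, math.sqrt(n) * sigma
    print(f"n = {n:3d}: {total:6.0f} +/- {spread:5.1f} moves "
          f"({100 * spread / total:4.1f}% relative)")

# One game is +/- 37.5%; a set of 64 games is +/- 4.7%. The set is
# easier to estimate than the single game.
```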

This all goes back to the idea that we need estimates for different purposes at different points in a project. An agile project may be about "steering" rather than estimating once the project gets underway. But it may not be allowed to get underway in the first place if there aren't early estimates that show there's a business case for doing the project. 

Plans are strategies for the success of the project. What accomplishments must occur, and how those accomplishments are assessed in units of measure meaningful to the decision makers, are the start of planning. Choices made during the planning process, and most certainly during the execution process, are informed by estimates of future outcomes - from the decisions made today and the possible decisions made in the future. This is the basis of the microeconomics of decision making.

Strategy making is many times invoked by #NoEstimates advocates when they are actually describing operational effectiveness. Strategic decision making is a critical success factor for non-trivial projects.

Strategic Portfolio Management, from Glen Alleman

8. You can estimate what you don’t know, up to a point. 

In addition to estimating "how much," you can also estimate "how uncertain." In the #NoEstimates discussions, people throw out lots of examples along the lines of, "My project was doing unprecedented work in Area X, and therefore it was impossible to estimate the whole project." This is essentially a description of the common estimation mistake of allowing high variability in one area to insert high variability into the whole project's estimate rather than just that one area's estimate.

Most projects contain a mix of precedented and unprecedented work (also known as certain/uncertain, high risk/low risk, predictable/unpredictable, high/low variability--all of which are loose synonyms as far as estimation is concerned). Decomposing the work, estimating uncertainty in each area, and building up an overall estimate that includes that uncertainty proportionately is one technique for dealing with uncertainty in estimates. 
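A minimal sketch of that decomposition, with invented effort numbers: independent area uncertainties add in quadrature, so the high-variability area inflates the total far less than it inflates itself.

```python
# A minimal sketch of the decomposition, with invented effort numbers
# (staff-weeks). Independent area uncertainties add in quadrature, so
# the high-variability area inflates the total far less than itself.
import math

areas = [
    ("precedented UI work", 40.0, 4.0),        # (name, estimate, sigma)
    ("precedented services", 60.0, 6.0),
    ("unprecedented algorithm", 20.0, 12.0),   # high uncertainty here only
]

total = sum(estimate for _, estimate, _ in areas)
total_sigma = math.sqrt(sum(s ** 2 for _, _, s in areas))  # independence assumed
print(f"total = {total:.0f} +/- {total_sigma:.1f} staff-weeks "
      f"({100 * total_sigma / total:.1f}%)")

# The unprecedented area is +/- 60% by itself, yet the project total
# is about +/- 12% - uncertainty included proportionately, not globally.
```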

Why would that ever be needed? Because a business that perceives a whole project as highly risky might decide not to approve the whole project. A business that perceives a project as low to moderate risk overall, with selected areas of high risk, might decide to approve that same project. 

You can estimate anything that is knowable. You personally may not know it - so go find someone who does, do research, "explore," experiment, build models, build prototypes. Do whatever is necessary to improve your knowledge (epistemology) of the uncertainties and improve your understanding of the natural variance (aleatory uncertainty). But if it's knowable, then don't say it's unknown. It's just unknown to you.

The classic error and unbounded hubris about estimates comes from Donald Rumsfeld's use of Unknown Unknowns in the run-up to the Iraq war. He never read The Histories, Herodotus, 5th century B.C., where the author told the reader don't go to what is now Iraq - the tribal powers will never comply with your will. Same for what is now Afghanistan, where Alexander the Great was ejected by the local tribesmen.

9. Both estimation and control are needed to achieve predictability. 

Much of the writing on Agile development emphasizes project control over project estimation. I actually agree that project control is more powerful than project estimation; however, effective estimation usually plays an essential role in achieving effective control.

Closed loop control, and especially feedforward adaptive control, requires making estimates of future states - before they unfavorably impact the outcome. This means estimating. Software development is a closed loop adaptive control system.
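A minimal sketch of that closed loop, with invented numbers: each period, the completion forecast is re-estimated from observed progress, and the variance against plan is the early warning signal.

```python
# A minimal sketch of the closed loop, with invented numbers: each
# period the completion forecast is re-estimated from observed progress,
# and the variance against plan is the early warning signal.
scope = 120.0                     # total units of work to deliver
planned_rate = 10.0               # planned units per period (plan: 12 periods)
observed = [8.0, 9.0, 7.5, 8.5]   # actual units completed each period

done = 0.0
for period, actual in enumerate(observed, start=1):
    done += actual
    rate = done / period                       # updated empirical rate
    forecast = period + (scope - done) / rate  # estimate at completion
    print(f"period {period}: forecast {forecast:.1f} periods "
          f"vs plan {scope / planned_rate:.0f}")

# The persistent gap is the signal to steer - descope, add staff, or
# re-plan - long before the planned end date arrives.
```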

To put this in Agile Manifesto-like terms:

We have come to value project control over project estimation, 
as a means of achieving predictability.

My first disagreement with Steve. Control is based on estimating. Both are needed in any closed loop control system. By the way, the conjectured use of slicing is not closed loop control. There is no steering target. The slicing data does not say what performance (how many slices, or whatever units you want) is needed to meet the goals of the project. Slicing is open loop control. The #NoEstimates advocates need to pick up any control systems book to see how this works.

As in the Agile Manifesto, we value both terms, which means we still value the term on the right. 

#NoEstimates seems to pay lip service to both terms, but the emphasis from the hashtag onward is really about discarding the term on the right. This is another case where I believe the right answer is both/and, not either/or. 

I wrote an essay when I was Editor in Chief of IEEE Software called "Sitting on the Suitcase" that discussed the interplay between estimation and control and discussed why we estimate even though we know the activity has inherent limitations. This is still one of my favorite essays. 

10. People use the word "estimate" sloppily. 

No doubt. Lack of understanding of estimation is not limited to people tweeting about #NoEstimates. Business partners often use the word "estimate" to refer to what would more properly be called a "planning target" or "commitment."

The word "estimate" does have a clear definition, for those who want to look it up.  

The gist of these definitions is that an "estimate" is something that is approximate, rough, or tentative, and is based upon impressions or opinion. People don't always use the word that way, and you can see my video on that topic here. 

Better yet, how about definitions from the actual estimating community:

  • Software Cost Estimation with COCOMO II
  • Software Sizing and Estimating
  • Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban and Scrum Projects using Monte-Carlo Simulation
  • Estimating Software-Intensive Systems: Projects, Products, and Processes
  • Making Hard Decisions
  • Forecasting Methods and Applications
  • Probability Methods for Cost Uncertainty Analysis
  • Cost Estimate Classification System, AACEI
  • Cost Estimating Body of Knowledge, ICEAA
  • Parametric Estimating Handbook, ICEAA
  • Basic Software Cost Estimating, CEB 09, ICEAA Online

The last opens with "Any sufficiently advanced technology is indistinguishable from magic." - Arthur C. Clarke. This may be one of the root causes for the #NoEstimates advocates. They've encountered a sufficiently advanced technology and see it as magic, and therefore not within their grasp.

There is no need to redefine anything. The estimating community has done that already.

Because people use the word sloppily, one common mistake software professionals make is trying to create a predictive, approximate estimate when the business is really asking for a commitment, or asking for a plan to meet a target, but using the word "estimate" to ask for that. It's common for businesses to think they have a problem with estimation when the bigger problem is with their commitment process.

We have worked with many companies to achieve organizational clarity about estimates, targets, and commitments. Clarifying these terms makes a huge difference in the dynamics around creating, presenting, and using software estimates effectively. 

11. Good project-level estimation depends on good requirements, and average requirements skills are about as bad as average estimation skills. 

A common refrain in Agile development is "It's impossible to get good requirements," and that statement has never been true. I agree that it's impossible to get perfect requirements, but that isn't the same thing as getting good requirements. I would agree that "It is impossible to get good requirements if you don't have very good requirement skills," and in my experience that is a common case. I would also agree that "Projects usually don't have very good requirements," as an empirical observation - but not as a normative statement that we should accept as inevitable.

If you don't know where you are going, you'll end up someplace else. - Yogi Berra

Figure it out; don't put up with being lazy. Use Capabilities Based Planning to elicit the requirements. What do you want this thing to do when it's done? Don't know? Then why are you spending the customer's money to build something?

Agile is essentially spending the customer's money to find out what the customer doesn't know. Ask first: is this the best use of the money?

Like estimation skill, requirements skill is something that any true software professional should develop, and the state of the art in requirements at this time is far too advanced for even really smart people to invent everything they need to know on their own. Like estimation skill, a person is not going to learn adequate requirements skills by reading blog entries or watching short YouTube videos. Acquiring skill in requirements requires focused, book-length self-study or explicit training or both. 

If your business truly doesn’t care about predictability (and some truly don’t), then letting your requirements emerge over the course of the project can be a good fit for business needs. But if your business does care about predictability, you should develop the skill to get good requirements, and then you should actually do the work to get them. You can still do the rest of the project using by-the-book Scrum, and then you’ll get the benefits of both good requirements and Scrum.

From my point of view, I often see agile-related claims that look kind of like this: What practices should you use if you have:

  • Mediocre skill in Estimation
  • Mediocre skill in Requirements
  • Good to excellent skill in Scrum and Related Practices

Not too surprisingly, the answer to this question is Scrum and Related Practices. I think a more interesting question is: What practices should you use if you have:

  • Good to excellent skill in Estimation
  • Good to excellent skill in Requirements
  • Good to excellent skill in Scrum and related practices

Having competence in multiple areas opens up some doors that will be closed with a lesser skill set. In particular, it opens up the ability to favor predictability if your business needs that, or to favor flexibility if your business needs that. Agile is supposed to be about options, and I think that includes the option to develop in the way that best supports the business. 

12. The typical estimation context involves moderate volatility and a moderate level of unknowns.

Ron Jeffries writes, "It is conventional to behave as if all decent projects have mostly known requirements, low volatility, understood technology, ..., and are therefore capable of being more or less readily estimated by following your favorite book." I don't know who said that, but it wasn't me, and I agree with Ron that that statement doesn't describe most of the projects that I have seen.

The color of Ron's sky must not be blue - the normal color. Every project we work on has volatile requirements.

Don't undertake a project unless it is manifestly important and nearly impossible. - Edwin Land 

For enterprise IT, there are databases showing the performance of past projects:

  • www.nesma.org
  • www.isbsg.org
  • www.cosmicon.com

I think it would be more true to say, “The typical software project has requirements that are knowable in principle, but that are mostly unknown in practice due to insufficient requirements skills; low volatility in most areas with high volatility in selected areas; and technology that tends to be either mostly leading edge or mostly mature." In other words, software projects are challenging, but the challenge level is manageable. If you have developed the full set of skills a software professional should have, you will be able to overcome most of the challenges or all of them. 

Of course there is a small percentage of projects that do have truly unknowable requirements and across-the-board volatility. I consider those to be corner cases. It’s good to explore corner cases, but also good not to lose sight of which cases are most common. 

13. Responding to change over following a plan does not imply not having a plan. 

It's amazing that in 2015 we're still debating this point. Many of the #NoEstimates comments literally emphasize not having a plan, i.e., treating 100% of the project as emergent. They advocate a process - typically Scrum - but no plan beyond instantiating Scrum.

Plans are strategies for the success of projects. Strategies are hypotheses. Hypotheses need tests (experiments) to continually validate them. Ron can lecture us all he wants. But agile is a SW development paradigm embedded in a larger strategic development paradigm, and plans come from there. That's how enterprises function. Both are needed.

According to the Agile Manifesto, while agile is supposed to value responding to change, it also is supposed to value following a plan. The Agile Manifesto says, "there is value in the items on the right" which includes the phrase "following a plan." 

While I agree that minimizing planning overhead is good project management, doing no planning at all is inconsistent with the Agile Manifesto, not acceptable to most businesses, and wastes some of Scrum's capabilities. One of the amazingly powerful aspects of Scrum is that it gives you the ability to respond to change; that doesn’t imply that you need to avoid committing to plans in the first place. 

My company and I have seen Agile adoptions shut down in some companies because an Agile team is unwilling to commit to requirements up front or refuses to estimate up front. As a strategy, that’s just dumb. If you fight your business about providing estimates, even if you win the argument that day, you will still get knocked down a peg in the business’s eyes. 

I've commented in other contexts that I have come to the conclusion that most businesses would rather be wrong than vague. Businesses prefer to plant a stake in the ground and move it later rather than avoiding planting a stake in the ground in the first place. The assertion that businesses value flexibility over predictability is Agile's great unvalidated assumption. Some businesses do value flexibility over predictability, but most do not. If in doubt, ask your business. 

If your business does value predictability, use your velocity to estimate how much work you can do over the course of a project, and commit to a product backlog based on your demonstrated capacity for work. Your business will like that. Then, later, when your business changes its mind - which it probably will - you'll still be able to respond to change. Your business will like that even more. 
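To make that concrete, here is a minimal sketch in R of using demonstrated velocity to forecast how many sprints a committed backlog will take. The velocity and backlog numbers are assumptions for illustration only; the technique is simple resampling of past sprints.

# Hypothetical sprint velocities (story points) and remaining backlog size
velocity <- c(21, 18, 25, 19, 23, 20)
backlog  <- 240

# Resample past sprints until the backlog is burned down; repeat many times
sprints_needed <- replicate(10000, {
  done <- 0
  n    <- 0
  while (done < backlog) {
    done <- done + sample(velocity, 1)  # draw one sprint's velocity at random
    n    <- n + 1
  }
  n
})

# Commit at a confidence level, not at the average
quantile(sprints_needed, c(0.50, 0.80, 0.95))

If the business wants a commitment, the 80th percentile - not the average - is the number to commit to.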

14. Scrum provides better support for estimation than waterfall ever did, and there does not have to be a trade off between agility and predictability. 

Not quite true. Waterfall projects have excellent estimating processes. The trouble is that during the execution of the project things change. When the Plan and the Estimate aren't updated to match this change - which is one of the root causes of project failure - the estimate becomes of little use. Applying agile processes to estimating is the same as applying agile processes to coding: frequent assessments of progress to plan, and corrective actions when variances appear.

Some of the #NoEstimates discussion seems to interpret challenges to #NoEstimates as challenges to the entire ecosystem of Agile practices, especially Scrum. Many of the comments imply that estimation will somehow impair agility. The examples cited to support that are mostly examples of unskilled misapplications of estimation practices, so I see them as additional examples of people not understanding estimation very well. 

The idea that we have to trade off agility to achieve predictability is a false trade off. If we define "agility" to mean, "no notion of our destination" or "treat all the requirements on the project as emergent," then of course there is a trade off, by definition. If, on the other hand, we define "agility" as "ability to respond to change," then there doesn't have to be any trade off. Indeed, if no one had ever uttered the word "agile" or applied it to Scrum, I would still want to use Scrum because of its support for estimation and predictability, as well as for its support for responding to change. 

The combination of story pointing, velocity calculation, product backlog, short iterations, just-in-time sprint planning, and timely retrospectives after each sprint creates a nearly perfect context for effective estimation. To put it in estimation terminology, story pointing is a proxy based estimation technique. Velocity is calibrating the estimate with project data. The product backlog (when constructed with estimation in mind) gives us a very good proxy for size. Sprint planning and retrospectives give us the ability to "inspect and adapt" our estimates. All this means that Scrum provides better support for estimation than waterfall ever did. 

If a company truly is operating in a high uncertainty environment, Scrum can be an effective approach. In the more typical case in which a company is operating in a moderate uncertainty environment, Scrum is well-equipped to deal with the moderate level of uncertainty and provide high predictability (e.g., estimation) at the same time. 

15. There are contexts where estimates provide little value. 

I don’t estimate how long it will take me to eat dinner, because I know I’m going to eat dinner regardless of what the estimate says. If I have a defect that keeps taking down my production system, the business doesn’t need an estimate for that because the issue needs to get fixed whether it takes an hour, a day, or a week. 

The most common context I see where estimates are not done on an ongoing basis and truly provide little business value is online contexts, especially mobile, where the cycle times are measured in days or shorter, the business context is highly volatile, and the mission truly is, "Always do the next most useful thing with the resources available." 

In both these examples, however, there is a point on the scale at which estimates become valuable. If the work on the production system stretches into weeks or months, the business is going to want and need an estimate. As the mobile app matures from one person working for a few days to a team of people working for a few weeks, with more customers depending on specific functionality, the business is going to want more estimates. As the group doing the work expands, they'll need budget and headcount, and those numbers are determined by estimates. Enjoy the #NoEstimates context while it lasts; don’t assume that it will last forever. 

Start with Value at Risk. What are you willing to lose if your estimate is wrong? Then decide if the cost of estimating is covered by that risk.
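A hedged back-of-the-envelope version of that calculation, with invented numbers, looks like this in R:

value_at_risk     <- 500000  # assumed loss if the decision goes wrong
p_bad_no_estimate <- 0.25    # assumed chance of a bad call deciding blind
p_bad_estimate    <- 0.10    # assumed chance of a bad call with an estimate
cost_of_estimate  <- 15000   # assumed cost of producing the estimate

# Expected reduction in loss bought by the estimate
benefit <- (p_bad_no_estimate - p_bad_estimate) * value_at_risk
benefit                      # 75,000
benefit > cost_of_estimate   # TRUE - here the estimate pays for itself

The numbers are placeholders; the decision rule - compare the expected benefit of estimating against its cost - is the point.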

16. This is not religion. We need to get more technical and more economic about software discussions. 

I’ve seen #NoEstimates advocates treat these questions of requirements quality, estimation effectiveness, agility, and predictability as value-laden moral discussions. "Agile" is a compliment and "Waterfall" is an invective. The tone of the argument is more moral than economic. The arguments are of the form, "Because this practice is good," rather than of the form, "Because this practice supports business goals X, Y, and Z." 

That religion isn’t unique to Agile advocates, and I’ve seen just as much religion on the non-Agile sides of various discussions. It would be better for the industry at large if people could stay more technical and economic more often. 

Agile is About Creating Options, Right?

I subscribe to the idea that engineering is about doing for a dime what any fool can do for a dollar, i.e., it's about economics. If we assume professional-level skills in agile practices, requirements, and estimation, the decision about how much work to do up front on a project should be an economic decision about which practices will achieve the business goals in the most cost-effective way. We consider issues including the cost of changing requirements and the value of predictability. If the environment is volatile and a high percentage of requirements are likely to spoil before they can be implemented, then it’s a bad economic decision to do lots of up front requirements work. If predictability provides little or no business value, emphasizing up front estimation work would be a bad economic decision.

On the other hand, if predictability does provide business value, then we should support that in a cost-effective way. If we do a lot of the requirements work up front, and some requirements spoil, but most do not, and that supports improved predictability, that would be a good economic choice. 

The economics of these decisions are affected by the skills of the people involved. If my team is great at Scrum but poor at estimation and requirements, the economics of up front vs. emergent will tilt toward Scrum. If my team is great at estimation and requirements but poor at Scrum, the economics will tilt toward estimation and requirements. 

Of course, skill sets are not divinely dictated or cast in stone; they can be improved through focused self-study and training. So we can treat the decision to invest in skills development as an economic issue too. 

Decision to Develop Skills is an Economic Decision Too

What is the cost of training staff to reach competency in estimation and requirements? Does the cost of achieving competency exceed the likely benefits that would derive from competency? That goes back to the question of how much the business values predictability. If the business truly places no value on predictability, there won’t be any ROI from training staff in practices that support predictability. But I do not see that as the typical case. 

My company and I can train software professionals to approach competency in both requirements and estimation in about a week. In my experience most businesses place enough value on predictability that investing a week to make that option available provides a good ROI to the business. Note: this is about making the option available, not necessarily exercising the option on every project. 

My company and I can also train software professionals to approach competency in a full complement of Scrum and other Agile technical practices in about a week. That produces a good ROI too. In any given case, I would recommend both sets of training. If I had to recommend only one or the other, sometimes I would recommend starting with the Agile practices. But my real recommendation is to "embrace the and" and develop both sets of skills.  

For context about training software professionals to "approach competency" in requirements, estimation, Scrum, and other Agile practices, I am using that term based on work we've done with our  Professional Development Ladder. In that ladder we define capability levels of "Introductory," "Competence," "Leadership," and "Mastery." A few days of classroom training will advance most people beyond Introductory and much of the way toward Competence in a particular skill. Additional hands-on experience, mentoring, and feedback will be needed to cement Competence in an area. Classroom study is just one way to acquire these skills. Self-study or working with an expert mentor can work about as well. The skills aren't hard to learn, but they aren't self-evident either. As I've said above, the state of the art in estimation, requirements, and agile practices has moved well beyond what even a smart person can discover on their own. Focused professional development of some kind or other is needed to acquire these skills. 

Is a week enough to accomplish real competency? My company has been training software professionals for almost 20 years, and our consultants have trained upwards of 50,000 software professionals during that time. All of our consultants are highly experienced software professionals first, trainers second. We don't have any methodological ax to grind, so we focus on what is best for each individual client. We all work hands-on with clients so we know what is actually working on the ground and what isn't, and that experience feeds back into our training. We have also invested heavily in training our consultants to be excellent trainers. As a result, our service quality is second to none, and we can make a tremendous amount of progress with a few days of training. Of course additional coaching, mentoring and support are always helpful. 

17. Agility plus predictability is better than agility alone. 

Agility in the absence of steering targets, created by estimating in the presence of uncertainty, is of little value. Any closed loop control system requires rapid response to changing conditions and a steering signal, which may require an estimate of where we want to be when we arrive.

Skills development in practices that support estimation and predictability vs. practices that support agility is not an either/or choice. A truly agile business would be able to be flexible when needed, or predictable when needed. A true software professional will be most effective when skilled in both skill sets. 

If you think your business values agility only, ask your business what it values. Businesses vary, and you might work in a business that truly does value agility over predictability or that values agility exclusively. Many businesses value predictability over agility, however, so don't just assume it's one or the other.  

I think it’s self-evident that a business that has both agility and predictability will outperform a business that has agility only. With today's powerful agile practices, especially Scrum, there's no reason we can't have both.  

Overall, #NoEstimates seems like the proverbial solution in search of a problem. I don't see businesses clamoring to get rid of estimates. I see them clamoring to get better estimates. The good news for them is that agile practices, Scrum in particular, can provide excellent support for agility and estimation at the same time. 

My closing thought, in this hashtag-happy discussion, is that #AgileWithEstimationWorksBest - and #EstimationWithAgileWorksBest too. 

Woody has successfully created what he wanted - a discussion of sorts - about estimating. The trouble is that without a principled discussion it turns into personal anecdotes rather than fact-based dialog. Those of us asking for fact-based examples are then seen as improperly challenging the anecdotes, and since there is not yet any fact-based response, the need to improve the probability of success for software development goes unanswered, replaced by accusations and name calling.

Related articles Making Conjectures Without Testable Outcomes Root Cause of Project Failure IT Risk Management Deadlines Always Matter Thinking, Talking, Doing on the Road to Improvement Information Technology Estimating Quality
Categories: Project Management

Why Bother with Probability and Statistics?

Wed, 08/19/2015 - 18:06

It is conjectured that uncertainty can be dealt with by ordinary means - open conversation, identification of the uncertainties, and their handling strategies - and that quantitative methods are too elaborate and unnecessary for all but the most technical and complicated problems.

When asked what is meant by uncertainty, the answer many times is probably or very likely - but not any quantitative measure meaningful to the decision makers. Since the future is always uncertain in our project domain, making decisions in the presence of uncertainty is a critical success factor [1] for all project work. 

Decision making is one of the hard things in life. True decision-making occurs not when we already know the outcome, but when we do not know what to do - when we have to balance conflicting values, costs, schedule, and needed capabilities, sort through complex situations, and deal with real uncertainty. To make decisions in the presence of this uncertainty, we need to know the possible outcomes of our decision, the possible alternatives, and their costs - in the short term and in the long term. Making these types of decisions requires that we make estimates of all the variables involved in the decision-making process.

What Are Probabilities? 

There is a trend in the software development domain to redefine well-established terms in mathematics, engineering, and science - it seems to suit the needs of those proffering that, in the presence of uncertainty, decisions can't be made.

Probabilities represent our state of knowledge. They are a statement of how likely we think an event might occur, or the possibility of a value being within a range of values.

These probabilities are based in uncertainty, and uncertainty comes in two forms: Aleatory and Epistemic. 

  • Aleatory uncertainty is the natural randomness in a process. For discrete variables, the randomness is parameterized by the probability of each possible value. For continuous variables, the randomness is parameterized by the probability density function (pdf).
  • Epistemic uncertainty is the uncertainty in the model of the process. It is due to limited data and knowledge. The epistemic uncertainty is characterized by alternative models. For discrete random variables, the epistemic uncertainty is modeled by alternative probability distributions. For continuous random variables, the epistemic uncertainty is modeled by alternative probability density functions. In addition, there is epistemic uncertainty in parameters that are not random but have only a single correct (but unknown) value. (A sketch contrasting these two kinds of uncertainty follows this list.)
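Here is a minimal R sketch of the distinction - all numbers are assumptions for illustration. Choosing between alternative models is the epistemic part; the draw within a chosen model is the aleatory part.

set.seed(1)
draw_duration <- function() {
  # Epistemic: we don't know which of two productivity models is correct
  model <- sample(c("optimistic", "pessimistic"), 1, prob = c(0.6, 0.4))
  # Aleatory: each model still has natural, irreducible variation
  if (model == "optimistic") {
    rnorm(1, mean = 100, sd = 10)
  } else {
    rnorm(1, mean = 130, sd = 20)
  }
}
durations <- replicate(10000, draw_duration())
quantile(durations, c(0.50, 0.80))  # plan at a confidence level, not at the mean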

Both these uncertainties exist on projects. When making good decisions on projects, we need to know something about these uncertainties and have handling plans for the resulting risks produced by the uncertainties.

  • For Aleatory uncertainty (irreducible risk) we need margin. The margin protects the project deliverables from the unfavorable cost, schedule, and technical performance that is part of the naturally occurring variances (a sketch of sizing this margin follows this list).
  • Epistemic uncertainty (reducible risk) can be addressed by buying down the uncertainty - paying money to learn more.
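For the aleatory side, a minimal R sketch of sizing that margin, assuming three sequential tasks with invented duration statistics:

set.seed(42)
# Three sequential tasks with naturally varying durations in days (assumed values)
totals <- replicate(10000, sum(rnorm(3, mean = c(10, 15, 20), sd = c(2, 3, 4))))
deterministic <- 10 + 15 + 20   # the single-point "most likely" total
p80 <- quantile(totals, 0.80)   # duration we can commit to with 80% confidence
p80 - deterministic             # the schedule margin protecting that commitment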

This, by the way, is a primary benefit of Agile Software Development, where forced short-term deliverables provide information to reduce risk. Agile is not a risk management process - many other steps are needed for that. But Agile is a means to reveal risk and take corrective action on much shorter time boundaries, reducing the accumulation of risk.

Some Background on Decision Making in the Presence of Uncertainty 

One way to distinguish good decisions from bad decisions is to assess the outcomes of those decisions. The measurement criteria for a good or bad decision need some definition themselves. There are issues, of course. The results of the decision may not appear for some time in the future, but we need to know something about the possible results before we make the decision. As well, we'd like to see the results of the alternatives to our decision - the choices that weren't made or were rejected.

A fundamental purpose of quantitative decision making is to distinguish between good and bad decisions, and to provide criteria for assessing the goodness of the decision. To do this we first need to establish what the decision is about.

  • When do you think we'll be ready to go live with the needed capabilities we're paying you to develop?
  • If we switch from our legacy systems to an ERP system, how much will we save over the next 5 years, given the sunk cost of the entire project?
  • On the list of desirable features, which ones can we get by the current need date if we reduce the budget by 15%?

Making decisions like these in the presence of uncertainty, by estimating future outcomes, is a normal, everyday business process. Any suggestion that these decisions can be made without estimates is utter nonsense.

Decision analysis starts with defining what a decision is - the commitment to resources that is irrevocable only at some cost. If there is no cost associated with making the decision, or with changing your mind after the decision has been made, then - in the business domain - the decision was of little if any value. This is the value at risk discussion: how much are we willing to risk if we don't know, to some level of confidence, what the outcome of our decision is?

The elements of good decision analysis are shown below [2]. For any good decision and its decision-making process, we'll need answers to the questions on the left, some form of logic to make a decision, the defined actionable steps from that decision, and then an assessment of the outcomes to inform future decisions - learning from our decisions.

[Figure: the elements of good decision analysis]

Decision support systems that implement the process above are based in part on the underlying uncertainties of the systems under management. Research into the cost and schedule behaviors of these systems is well developed. Here's one example.

[Figure: example research into the cost and schedule behaviors of these systems]

In the end, the decision-making process will not meet the needs of the decision makers if we don't have the alternatives defined, the information at hand - and most times this is probabilistic information about conditions in the future, in the presence of uncertainty - and the value we assign to the outcomes. Without these, making decisions is going to turn out BAD.

We're driving in the dark with the lights off, while spending other people's money, and our project will end up like this...

[Image: upside down]

Reference Material for Further Understanding

  1. Strategic Planning with Critical Success Factors and Future Scenarios: An Integrated Strategic Planning Framework, Technical Report CMU/SEI-2010-TR-037, ESC-TR-2010-102.
  2. Decision Analysis for the Professional, 4th Edition, Peter McNamee and John Celona.
  3. Real Options: Managing Strategic Investment in an Uncertain World, Martha Amram and Nalin Kulatilaka, Harvard Business School Press, 1999.
  4. Making Hard Decisions: An Introduction to Decision Analysis, Robert Clemen, Duxbury Press, 1996.
  5. Software Design as an Investment Activity: A Real Options Perspective, Kevin Sullivan and Prasad Chalasani.
  6. Probabilistic Modeling as an Exploratory Decision-Making Tool, Martin Pergler and Andrew Freeman, McKinsey & Company, Number 6, September 2008.
  7. Value at Risk for IS/IT Project and Portfolio Appraisal and Risk Management, Stefan Koch, Department of Information Business, Vienna University of Economics and BA, Austria.
Related articles Making Conjectures Without Testable Outcomes Root Cause of Project Failure IT Risk Management
Categories: Project Management

Quote of the Day

Wed, 08/19/2015 - 14:55

The door of a bigoted mind opens outwards. The pressure of facts merely closes it more snugly.
- Ogden Nash

When there are new ideas being conjectured, it is best for the conversation to establish the principles on which those ideas can be tested. Without this, the person making the conjecture has to defend the idea on personality, personal anecdotes, and personal experience alone.

Categories: Project Management

Observation Issues and Root Cause Analysis

Mon, 08/17/2015 - 17:29

Todd Little posted a comment on "How To Lie With Statistics," about his observations on the chart contained in that original post. 

As Todd mentions in his response:

[Screenshot: Todd Little's comment]
The Cone of Uncertainty chart comes from the original work of Barry Boehm, "Reducing Estimation Uncertainty with Continuous Estimation: Assessment Tracking with 'Cone of Uncertainty.'" In this paper Dr. Boehm speaks to the lack of continuous updating of the estimates made early in the program as the source of unfavorable cost and schedule outcomes. 

As long as the projects are not re-assessed or the estimations not re-visited, the cones of uncertainty are not effectively reduced [1]. 

The Cone of Uncertainty is a notional example of how to increase the accuracy and precision of software development estimates with continuous reassessment. For programs in the federal space subject to FAR 34.2 and DFARS 34.201, reporting the Estimate to Complete (ETC) and Estimate at Completion (EAC) is mandatory on a monthly basis. This is rarely done in the commercial world, with the expected results shown in Todd's chart for his data and DeMarco's data.

The core issue - from current research into Root Cause Analysis at PARCA (http://www.acq.osd.mil/parca), where I have worked as a support contractor - is that many of the problems are poor estimates made when the program was baselined, and failure to update the ETC and EAC with credible information about risks and physical percent complete.

The data reported in Todd's original chart are the results of projects based on estimates that may or may not have been credible. So the analysis of the outcomes of the completed projects is Open Loop ...

... that is, the target estimates measured against the actual outcomes may or may not have been credible. So showing project overages doesn't actually provide the information needed to correct this problem. The estimate may have been credible, but the execution failed to perform as planned.

With this Open Loop assessment it is difficult to determine any corrective actions. Todd's complete presentation, "Uncertainty Surrounding Cone of Uncertainty," speaks to some of the possible root causes of the mismatch between Estimates and Actuals. As Todd mentions in his response, this was not the purpose of his chart - rather, I'd suspect, just to show the existence of this gap.

The difficulty, however, is that pointing out observations of problems, while useful to confirm there is a problem, does little to correct the underlying cause of the problem.

At a recent ICEAA conference in San Diego, Dr. Boehm and several others spoke about this estimating problem. Several books and papers were presented addressing this issue.

Both these resources, and many more, speak to the Root Causes of both the estimating problem and the programmatic issues of staying on plan.

This is the Core Problem That Has To Be Addressed 

We need both good estimates and good execution to arrive as planned. There is plenty of evidence that we have an estimating problem. Conferences (ICEAA and AACE) speak to these issues, as do government and FFRDC organizations (search for Root Cause Analysis at PARCA, IDA, MITRE, RAND, and SEI).

But the execution side is also a Root Cause. Much research has been done on procedures and processes for Keeping the Program Green - for example, the work presented at ICEAA, "The Cure for Cost and Schedule Growth," where more possible Root Causes are addressed from our research.

While Todd's chart shows the problem, the community - the cost and schedule community - is still struggling with the corrective action. The chart is ½ the story. The other ½ is the poor performance on the execution side, even IF we had a credible baseline to execute against. 

To date both sides of the problem are unsolved, and therefore we have Open Loop Control, with neither the proper steering target nor the proper control of the system to steer toward that target. Without corrections to estimating, planning, scheduling, and execution, there is little hope of improving the probability of success in the software development domain.

Using Todd's chart from the full presentation, the core question that remains unanswered in many domains is:

How can we increase the credibility of the estimate to complete earlier in the program?

[Figure: Todd Little's chart from the full presentation]

Meaning 

  • In the feasibility stage, what is a credible estimate, and how can that estimate be improved as the program moves left to right?
  • What are the measures of credibility?
  • How can these measures be informed as the project progresses?
  • What are the physical processes to assure those estimates are increasing in accuracy and precision?

By the way, the term possible error comes from historical data. And like all How to Lie With Statistics charts, that historical data is self-selected: a specific domain, a classification of projects, and most importantly, the maturity of the organization making the estimates and executing the program.

Much research has shown the maturity of the acquirer influences the accuracy and precision of the estimates. Our poster child is Stardust, with on-time, on-budget, working outcomes due to both the government and contractor Program Managers' maturity in managing in the presence of uncertainty - which is one of the sources of this material.

Managing in the presence of uncertainty from Glen Alleman

[1] Boehm, B. "Software Engineering Economics". Prentice-Hall, 1981.

Related articles Why Guessing is not Estimating and Estimating is not Guessing Estimating and Making Decisions in Presence of Uncertainty Making Conjectures Without Testable Outcomes
Categories: Project Management

Is #NoEstimates a House Built on Sand?

Mon, 08/17/2015 - 02:45

This is my last post on the topic of #NoEstimates. Let's start with my professional observation. All are welcome to provide counter examples. 

Estimates have little value to those spending the money.
Estimates are of critical value to those providing the money.

Since those spending the money usually appear to not recognize the need for estimating for those providing the money, the discussion has no basis on which to exchange ideas. Without the acknowledgement that in business there is a collection of principles that are immutable, those spending the money have little understanding of where the money to do their work comes from†.

Here are the business principles that inform how the business works when funding the development of value:

  • The future is uncertain, but this uncertainty can be modeled. It is not unknowable.
  • Managerial Accounting provides managers with accounting information in order to better inform themselves before they decide matters within their organizations, which aids their management and performance of control functions.
  • Economic Risk Management identifies, analyzes, and accepts or mitigates the uncertainties encountered in the managerial decision-making processes.

On the project management side, there are also immutable principles required for project success:

  • There is some notion of what Done looks like, in units of measure meaningful to the decision makers. Effectiveness and Performance are two standard measures in domains where systems thinking prevails.
  • There is a work Plan to reach Done. This Plan can be simple or it can be complex. But the order of the work and the dependencies between the work elements are the basis of all planning processes.
  • There is a plan for the needed resources to reach Done. This includes staff, facilities, and funding. This means knowing something about how much and when for these resources.
  • There is recognition of the risk involved in reaching Done, and a response to those risks.
  • There is some means of measuring physical progress against the Plan to reach Done, so corrective actions can be taken to increase the probability of success. Tangible outcomes from the Planned work are the preferred way to measure progress.

The discussion - of sorts - around No Estimates has reached a low point in shared understanding. But first, let me set the stage.

If the Business and Project Success principles are not accepted as the basis of discussion for any improvements, then there is no basis of discussion. Stop reading; there's nothing here for you. If these principles are acknowledged, then please continue.

In a recent post from one of the original authors of the #NoEstimates hashtag it was said...

Quit estimates cold turkey. Get some kind of first-stab working software into the customer's hands as quickly as possible, and proceed from there. What does this actually look like? When a manager asks for an estimate up front, developers can ask right back, "Which feature is most important?" - then deliver a working prototype of that feature in two weeks. Deliver enough working code fast enough, with enough room for feedback and refinement, and the demand for estimates might well evaporate. Let's stop trying to predict the future. Let's get something done and build on that - we can steer towards better.

This is one of those over-generalizations that, when questioned, draws strong pushback against the questioner. Let's deconstruct this paragraph a bit in the context of software development.

  • Quit estimates cold turkey - perhaps those paying for the work need to be consulted to determine if they have any vested interest in knowing about the cost and schedule of the value they're paying for.
  • Some kind of first-stab working software - sounds nice. But how much does that cost? And when can that be delivered? Can any feature in the requested list - the needed capabilities and their supporting technical and operational requirements - be done in 2 weeks? Can you show that can happen, or is that just a platitude repeated often enough that it has become a meme - without any actual evidence of being true? 
  • Deliver enough working code fast enough, with enough room for feedback and refinement, and the demand for estimates might well evaporate - there are two cascaded IF conditions here. IF we deliver working code fast enough, and IF we leave room for feedback, THEN estimates MIGHT evaporate.

This last one is one of those IF PIGS COULD FLY type statements.

So Here's the Issue

If it is conjectured that we can make decisions in the presence of uncertainty - and all project work operates in the presence of uncertainty by its very definition, otherwise it'd be production - then how can we make a choice between alternatives if we can't estimate the outcomes of those choices?

This is the basis of Microeconomics and Managerial Finance. When the OP'ers of #NoEstimates make these types of statements, they're doing so of their own volition. It's likely their strongly held belief that decisions can be made without estimating the outcomes of those decisions. 

So when questioning what principles these conjectures are based on returns scorn for asking - accusations of trolling, of being rude, of having no respect for the person making these unfounded, unsubstantiated, untested, domain-free statements - it seems almost laughable. At times it appears to be willful ignorance of the basic tenets of business decision making. I don't pretend to know what's in the minds of many #NE supporters. Having talked to some advocates who are skeptical, it turns out on further questioning they are unwilling to disavow the notion that there is merit in exploring further. 

This is a familiar course for climate change deniers. All the evidence is not in, so let's challenge everything and see what we can discover. This notion of challenging and exploring in the absence of established principles is not actually that useful. In domains like managerial finance, the microeconomics of software development decision making, and decision making in general, the principles and practices are well established.

What's now known is that those principles and practices are not known to those making the conjecture that we should challenge everything. Much like the political climate deniers - Well, I'm not a scientist, but I heard on the internet there is some dissent in the measurements... - it becomes: I'm not familiar with probability and statistics, and haven't taken a microeconomics class or read any Managerial Finance books, but I'm almost sure that those self-proclaimed thought leaders for #NoEstimates have something worth looking into.

Harsh? You bet it's harsh. Any idea presented in an open forum will be challenged when that idea willfully violates the principles on which business operates. Better be prepared to be challenged, and better be prepared to bring evidence your conjecture has merit. This happens all the time in science, mathematics, and engineering. Carl Sagan's BS Detector is one place to start, and John Baez's Crackpot Index is useful in the science and math world. 

No Estimates has now reached that level, with some outrageous claims. 

  • Not doing estimates improves project performance by 10X
  • Estimates are actually evil
  • Estimating destroys innovation
  • Steve McConnell proves in his book that estimates can't be done
  • Todd Little's Figure 2 shows how bad we are at estimating - without, of course, reading the rest of the presentation showing how to correct these errors

Making Credible Decisions in the Presence of Uncertainty

Decision making is the basis of business management. Here's an accessible text for learning to make decisions in the presence of uncertainty: Decision Analysis for the Professional. When there is any suggestion that decisions can be made without estimates, ask if the person making that conjecture has any evidence this is possible. Ask if they've read this book. Ask if their decision making process has:

  • A decision making framework
  • A decision making process
  • A methodology for making decisions
  • A description of how this decision making process works in complex organizations
  • A probability and statistics model for making decisions

Here's some more background on making decisions in the presence of uncertainty.

This is a sample of the many resources available for making decisions in the presence of uncertainty. There is also a large collection of resources on estimating software development projects. The one we use in our work is

This and other resources are the basis of understanding how to make decisions.

When it is conjectured we can decide without estimating, ask: have you any evidence whatsoever this is possible, beyond your personal opinion and anecdotal experience? No? Then please stop trying to convince me your unsubstantiated idea has any merit in actual business practice. 

And this is why I've decided to stop writing about the nonsense of #NoEstimates. There is no basis for a discussion anchored in the principles, practices, or processes of business based in managerial finance and the microeconomics of decision making.

It's a House Built On Sand

† I learned this in the first week of my first job after graduate school. 

 

Related articles Making Conjectures Without Testable Outcomes Estimating Processes in Support of Economic Analysis Root Cause of Project Failure Herding Cats: How To Make Decisions Estimating and Making Decisions in Presence of Uncertainty Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

How To Make Decisions

Sat, 08/15/2015 - 23:00

Decisions are about making Trade Offs for the project, which are themselves about:

  • Evaluating alternatives.
  • Integrating and balancing all the considerations (cost, performance, producibility, testability, supportability, etc.).
  • Developing and refining the requirements, concepts, and capabilities of the product or services produced by the project or product development process.
  • Making trade studies and the resulting trade offs that enable the selection of the best or most balanced solution to fulfill the business need or accomplish the mission.

The purpose of this process is to:

  • Identify the trade-offs - the decisions to be made - among requirements, design, schedule, and cost.
  • Establish the level of assessment commensurate with cost, schedule, performance, and risk impact, based on the value at risk for the decision (a toy illustration follows this list).
    • Low value at risk, low impact: simple decision making - possibly even gut feel.
    • High value at risk, high impact: the decision-making process must take into account these impacts.
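As a toy illustration only - the thresholds below are invented, not prescriptive - that tiering might be expressed as:

assessment_level <- function(value_at_risk) {
  # Scale the decision-analysis effort to what is at stake (assumed thresholds)
  if (value_at_risk < 1e4) {
    "informal - gut feel is acceptable"
  } else if (value_at_risk < 1e6) {
    "lightweight analysis of alternatives"
  } else {
    "full trade study with probabilistic models"
  }
}
assessment_level(250000)  # "lightweight analysis of alternatives"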

Making decisions about capabilities and resulting requirements is the start of discovering what DONE looks like, by:

  • Establishing alternatives for the needed performance and functional requirements.
  • Resolving conflicts between these requirements in terms of the product's delivered capabilities.

Decisions about the functional behaviors and their options come next. These decisions:

  • Determine the preferred set of requirements for the needed capabilities. This of course is an evolutionary process as requirements emerge, working products are put to use, and feedback is obtained.
  • Determine how the customer assesses requirements for lower-level functions as each of the higher-level capabilities is assessed.
  • Evaluate alternatives to each requirement, each capability, and the assessed value of each capability - in units of measure meaningful to the decision makers.

Then comes the assessment of the cost effectiveness of each decision:

  • Develop the Measures of Effectiveness and Measures of Performance for each decision.
  • Identify the critical Measures of Effectiveness of each decision in fulfilling the project's business goal or mission. These Technical Performance Measures are used to assess the impact of each decision on the produced value of the project.

Each of these steps is reflected in the next diagram.

[Diagram: the decision steps described above]

Value of This Approach

When we hear that estimates are not needed to make decisions, we need to ask how the following questions can be answered:

  • How can we have a systematized thought process, where the decisions are based on measurable impacts?
  • How can we clarify our options, problem structure, and available trade-offs using units of measure meaningful to the decision makers?
  • How can we improve communication of ideas and professional judgment within our organization through a shared exchange of the impacts of our decisions?
  • How can we improve communication of rationale for each decision to others outside the organization?
  • How can we be assured of our confidence that all available information has been accounted for in a decision?

The decision making process is guided by the identification of alternatives.

Decision-making is about deciding between alternatives. These alternatives need to be identified, assessed, and analyzed for their impact on the probability of success of the project.

These impacts include, but are not limited to:

  • Performance
  • Schedule
  • Cost
  • Risk
  • Affordability
  • Producibility
  • And all the other ...ilities associated with the outcomes of the project

The effectiveness of our decision making follows the diagram below:

[Diagram: the effectiveness of our decision making]

In the End - Have all the Alternatives Been Considered?

Until there is a replacement for the principles of Microeconomics, for each decision made on the project, we will need to know the impact on cost, schedule, technical parameters, and other attributes of that decision. To not know those impacts literally violates the principles of microeconomics and the governance framework of all business processes, where the value at risk is non-trivial.


When you hear that planning ahead by assessing our alternatives is overrated, or quit estimating cold turkey, think again. And ask for evidence of how to make decisions in the presence of uncertainty without making estimates, making trade-offs, evaluating alternatives - probabilistic alternatives - and all those other decision making processes found in the managerial finance book you read in engineering, computer science, or business school.

† Derived from Module J: Trade Study Process, Systems Engineering, Boeing.

Related articles Making Conjectures Without Testable Outcomes Estimating and Making Decisions in Presence of Uncertainty Estimating Processes in Support of Economic Analysis
Categories: Project Management

Earned Value + Agile: a Match Made in Heaven?

Sat, 08/15/2015 - 01:13

At a recent conference, the topic was the integration of Agile with Earned Value Management on programs subject to FAR 34.201 and DFARS 252.234-7001. Here's my presentation.

Turns out it is a match made in heaven. Since that conference, I've worked a DOJ proposal where agile and EVM are mandated, along with the SDLC (Scrum) and the Agile development tool (TFS). The floodgates are now opening on Software Intensive Systems procurements requiring an EVMS. The NDIA (National Defense Industrial Association) is releasing an integration document in October defining how to make the match. Agile luminaries have spoken at DOD conferences and given advice to Undersecretaries on the topic. One office I work in is writing an implementation guide, as is the GAO.
Categories: Project Management

The Flaw of Averages

Fri, 08/14/2015 - 04:30

In a recent post on forecasting capacity planning, a time series of data was used as the basis of the discussion.

[Figure: time series of past performance data]

Some static statistics were then presented.

[Figure: static statistics of the past data]

With a discussion of the upper and lower ranges of the past data. The REAL question, though, is: what are the likely outcomes for data in the future, given the past performance data? That is, if we recorded what happened in the past, what is the likely data in the future?

The average and the upper and lower ranges from the past data are static statistics. That is, all the dynamic behavior of the past is wiped out in the averaging and deviation processes, so that information can no longer be used to forecast the possible outcomes of the future. 
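A small base-R demonstration of the point, with synthetic data: two series can share nearly the same mean and standard deviation while having completely different dynamics, so the static statistics alone can't forecast either one.

set.seed(3)
white_noise <- rnorm(200)                                 # no structure at all
trend <- as.numeric(scale(cumsum(rnorm(200, sd = 0.3))))  # random walk, rescaled to mean 0, sd 1
c(mean(white_noise), sd(white_noise))                     # ~0 and ~1
c(mean(trend), sd(trend))                                 # also ~0 and ~1
acf(trend, plot = FALSE)$acf[2]                           # strong autocorrelation the averages hide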

This is one of the attributes of The Flaw of Averages and How to Lie With Statistics, two books that should be on every manager's desk - that is, every manager tasked with making decisions in the presence of uncertainty when spending other people's money.

We now have a Time Series and can ask the question: what is the range of possible outcomes in the future, given the values in the past? This can easily be done with a free tool - R. R is a statistical programming language, available free from the Comprehensive R Archive Network (CRAN). In R, there are several functions that can be used to make these forecasts - that is, the estimated values in the future from the past, and their confidence intervals.

Let's start with some simple steps:

  1. Record all the data in the past. For example, make a text file of the values in the first chart. Name that file NE.Numbers.
  2. Start the R tool. Better yet, download an IDE for R. RStudio is one. That way there is a development environment for your statistical work. As well, there are many free R books on statistical forecasting - estimating outcomes in the future.
  3. OK, read the Time Series of raw data from the file of values and assign it to a variable:
    • NETS=ts(NE.Numbers)
    • The ts function converts the raw values into an object - a Time Series - that can be used by the next function.
  4. With the Time Series now in the right format, apply the ARIMA function. ARIMA is Autoregressive Integrated Moving Average, also known as the Box-Jenkins algorithm. This is George Box of the famously misused and blatantly abused quote all models are wrong some models are useful. If you don't have the full paper where that quote came from and the book Time Series Analysis: Forecasting and Control, Box and Jenkins, please resist re-quoting out of context. That quote has become the meme for those not having the background to do the math for time series analysis, and a mantra for willfully ignoring the math needed to actually make estimates of the future - forecasting - using time series of the past in ANY domain. ARIMA is the beginning basis of all statistical forecasting in science, engineering, and finance.
    • The ARIMA algorithm has three parameters - p, d, q
    • p is the order of the autoregressive model
    • d is the degree of the differencing
    • q is the order of the moving average
    • Here's the manual in R for ARIMA
  5. With the original data turned into a Time Series and presented to the ARIMA function, we can now apply the Forecast function. This function provides methods and tools for displaying and analyzing univariate time series forecasts, including exponential smoothing via state space models and automatic ARIMA modelling.
  6. When applied to the ARIMA output, we get a Forecast series that can be plotted.

Here's what all this looks like in RStudio:


library(forecast)                        # provides forecast() for fitted models
NE.Numbers <- scan("NE.Numbers")         # read the raw values saved in step 1 (one per line)
NETS <- ts(NE.Numbers)                   # convert the raw numbers to a time series
NETSARIMA <- arima(NETS, order = c(0, 1, 1))          # fit an ARIMA(0,1,1) model
NEFORECAST <- forecast(NETSARIMA, level = c(80, 90))  # forecast with 80% and 90% bands
plot(NEFORECAST)                         # plot the history and the forecast

Here's the plot, with the time series from the raw data and the 80% and 90% confidence bands on the possible outcomes in the future. 

[Plot: the ARIMA forecast with 80% and 90% confidence bands]
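As a side note - a sketch assuming the same NETS series as above - the forecast package can also select the (p, d, q) order automatically:

library(forecast)
fit <- auto.arima(NETS)                          # searches for a reasonable (p, d, q)
plot(forecast(fit, h = 12, level = c(80, 90)))   # 12 periods ahead, 80%/90% bands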

The Punch Line

You want to make decisions with other people's money when the 80% confidence band on a possible outcome is itself a -56% to +68% variance? Really? Flipping coins gives a better probability of an outcome inside all the possible outcomes that happened in the past. The time series is essentially a random series with very low confidence of being anywhere near the mean. This is the basis of The Flaw of Averages. 

Where I work this would be a non-starter if we came to the Program Manager with this forecast of the Estimate to Complete based on an Average with that wide a variance. 

Possibly, where there is low value at risk, a customer that has little concern for cost and schedule overruns, and maybe where the work is actually an experiment with no deadline, not-to-exceed budget, or any other real constraint. But if your project has a need date for the produced capabilities - a date when those capabilities need to start earning their keep and producing value that can be booked on the balance sheet - a much higher confidence in what the future NEEDS to be is likely going to be the key to success.

The Primary Reason for Estimates

First, estimates are for the business. Yes, developers can use them too. But the business has a business goal: make money at some point in the future on the sunk costs of today - the breakeven date. These sunk costs are recoverable - hopefully - so we need to know when we'll be even with our investment. This is how business works. They make decisions in the presence of uncertainty - not on the opinion of development saying we recorded our past performance, took an average, and projected that to the future. No, they need a risk-adjusted, statistically sound level of confidence that they won't run out of money before breakeven. What this means in practice is a management reserve and cost and schedule margin to protect the project from those naturally occurring variances and those probabilistic events that derail all the best laid plans.
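A hedged sketch of that breakeven logic in R - every number below is invented for illustration; the point is the confidence level, not the values:

set.seed(7)
monthly_burn <- 100000                          # assumed development cost per month
value_rate   <- 50000                           # assumed value earned per month after go-live
dev_months   <- rnorm(10000, mean = 9, sd = 2)  # assumed uncertain development schedule
breakeven    <- dev_months + (dev_months * monthly_burn) / value_rate  # months to recover sunk cost
mean(breakeven <= 30)                           # confidence of breaking even within 30 months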

Now developers may not think like this. But someone somewhere in a non-trivial business does - usually in the Office of the CFO. This is called Managerial Finance, and it's how serious money-at-risk firms manage. 

So when you see time series like those in the original post, do your homework and show the confidence in the probability of the needed performance actually showing up. And by needed performance I mean the steering target used in the Closed Loop Control system - used to increase the probability that the planned value the Agilests so dearly treasure actually appears somewhere near the planned need date and somewhere around the planned cost, so the Return on Investment for those paying for your work is not a disappointment, with a negative return and their spend labeled as underwater. 

So What Does This Mean in the End?

Even when you're using past performance - one of the better ways of forecasting the future - you need to give careful consideration to those past numbers. Averages and simple variances - which wipe out the actual underlying time series dynamics - are not only naive, they are bad statistics used to make bad management decisions. 

Add to that the poorly formed notion that decisions can be made about future outcomes in the presence of uncertainty without estimates about that future, and you've got the makings of management disappointment. The discipline of estimating future outcomes from past behaviors is well developed. The mathematics, and especially the terms used in that mathematics, are well established. Here are some sources we use in our everyday work. These are not populist books; they are math and engineering. They have equations, algorithms, and code examples. They are used where the value at risk is sufficiently high that management is on the hook for meeting the performance goals in exchange for the money assigned to the project.

If you work a project that doesn't care too much about deadlines, budget overages, or what gets produced other than the minimal products, then these books and related papers are probably not for you. And most likely Not Estimating the probability that you'll not overspend, show up seriously late, and fail to produce the needed capabilities to meet the Business Plans will be just fine. But if you are expected to meet the business goals in exchange for the spend plan you've been assigned, these might be a good place to start to avoid being a statistic (dead skunk in the middle of the road) in the next Chaos Report (no matter how poor its statistics are).

This, by the way, is an understanding I came to on the plane flight home this week. #NoEstimates is a credible way to run your project when these conditions are in place. Otherwise you may want to read how to make credible forecasts of what the cost and schedule are going to be for the value produced with your customer's money, assuming they actually care about not wasting it.

 

Related articles IT Risk Management Thinking, Talking, Doing on the Road to Improvement Estimating Processes in Support of Economic Analysis Making Conjectures Without Testable Outcomes
Categories: Project Management

No Signal Means No Steering Target, Which Means No Corrective Actions

Wed, 08/12/2015 - 06:13

The management of projects involves many things: Capabilities, Requirements, Development, Staffing, Budgeting, Procurement, Accounting, Testing, Security, Deployment, Maintenance, Training, Support, Sales and Marketing, and other development and operational processes. Each of these has interdependencies with other elements. Each operates in its own specific way on the project. Almost all have behaviors described by probabilistic models driven by underlying statistical processes.

Management in this sense is control in the presence of these probabilistic processes. And yes, we can control these items - it's a well developed process, starting with Statistical Process Control, Monte Carlo Simulation, Bayesian Networks, Probabilistic Real Options, and other methods based on probabilistic processes.

The notion that these are not controllable is at its heart flawed and essentially misinformed. But this control requires information. It's been mentioned here before: Closed Loop Control, Closed Loop versus Open Loop, Staying on Plan Means Closed Loop Control, Use and Misuse of Control Systems, and Why Project Management is a Control System.

All these lead to the Five Immutable Principles of Project Success. Along with these Principles come Practices and Processes. But it's the Principles we're after as a start.

Each of the Principles makes use of information used in managing the project. This information is the signal used by management to make decisions. These signals are used to compare the current state of the project (the system under management) to the desired state of the system. This is the basis of the Control System used to manage the System Under Management.

With the control system - whatever that may be, and there are many systems - the next step is to gather the information needed to make decisions using this control system. This means information about the past, present, and future of the system under management. Past data - when recorded - is available from the database of what happened in the past. Present data should be readily available directly from the system. The future data presents a unique problem. Rarely is information about the future recorded somewhere. It's in the future and hasn't happened yet. But we can find sources for this future information. We have models of what the system may do in the future. We may have similar systems from the past that can be used to create future data. But there is a critical issue here. The future may not be like the past. Or the future may be impacted in ways not found on similar projects. But we need to come up with this information if we are to make decisions about the future.

So if we're missing the model, or missing the similar project, what can we do? We can make estimates from the data or models in some probabilistically informed manner. This is the role of estimating: to inform our decision-making processes in the presence of uncertainty about possible future outcomes, knowing something about the past and present state of the system under management. With this probabilistic information, decisions can be made to take corrective actions to keep our project under control - that is, moving in the direction we planned in order to reach the goal of the project. This goal is usually to provide needed capabilities to those paying for the project to meet some business goal or fulfill a mission strategy - to accomplish some beneficial outcome in exchange for the cost and time invested in development of the capabilities.

So what happens when we don't have information about the future? That is, we choose to Not Estimate when faced with the uncertainty of the possible future states of the system, where we need to take corrective actions to keep the project moving in the desired direction. Without these estimates, we have no signal needed to take corrective action. We have an Open Loop Control system - a system that takes any path it wants; it has no control mechanism to keep it on track. The open loop control system is a non-feedback system, in which the control input to the system is determined using only the current state of the system and a model of the system. There is no feedback to determine if the system is achieving the desired output based on the reference input or set point. The system does not observe itself to correct itself and, as such, is more prone to errors and cannot compensate for disturbances to the system.

This means we're going to get what we're going to get, with no chance to steer the system toward our desired outcome. For any non-trivial system, not estimating the future state, creating the error signal between the desired future state and the current state, and using that error signal to take corrective actions is considered Bad Management. This would be considered Doing Stupid Things On Purpose in most domains that spend other people's money to produce needed capabilities and the resulting value, for the planned cost, on the planned date.
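A toy closed-loop sketch in R - the planned and actual numbers are invented - showing the steering signal that an open loop system never produces:

set.seed(11)
planned <- cumsum(rep(10, 12))                  # planned cumulative progress per period
actual  <- cumsum(rnorm(12, mean = 9, sd = 2))  # actual progress with natural variance
error   <- planned - actual                     # the steering signal
needs_action <- abs(error) > 5                  # take corrective action past a threshold
data.frame(period = 1:12, error = round(error, 1), needs_action)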
Categories: Project Management

Reasoning About the "Estimating" Problem

Mon, 08/10/2015 - 00:59

Let's start with a background piece on estimating: the Fermi Problem. A Fermi estimate is a rough-order estimate of something - not an order-of-magnitude estimate (that's a 10X estimate, easy for anyone to make). These types of problems are encountered in physics and engineering education - from personal experience, in oral exams where we were asked to estimate something quickly on the blackboard (yes, the Blackboard). Something like: what is the orbital velocity of a star with a specific mass, composed of a specific set of fusion elements? You have 5 minutes, young student, work quickly.

These back-of-the-envelope calculations are well-known exercises that show how to make estimates in the presence of uncertainty and with very little data in hand. The technique was named after Enrico Fermi for his ability to make good approximations with little or no actual data. These types of problems involve making justified guesses (not the type of uninformed guesses we see in many domains), with upper and lower variances.

A nice example: how many piano tuners were there in Chicago in 2009?

  1. There are approximately 9,000,000 people living in Chicago.
  2. On average, there are two persons in each household in Chicago.
  3. Roughly one household in twenty has a piano that is tuned regularly.
  4. Pianos that are tuned regularly are tuned on average about once per year.
  5. It takes a piano tuner about two hours to tune a piano, including travel time.
  6. Each piano tuner works eight hours in a day, five days in a week, and 50 weeks in a year.

With these assumptions, the number of piano tunings a year - and from it, the number of tuners - is approximately (see the sketch after this list):

  • (9,000,000 people in Chicago / 2 people per house) x 1 piano per 20 houses x 1 tuning per piano per year = 225,000 tunings per year
  • (50 weeks per year x 5 days per week x 8 hours a day) / 2 hours to tune = 1,000 tunings per year per piano tuner
  • (225,000 tunings per year) / 1,000 tunings per year per tuner = 225 tuners in Chicago in 2009
  • The actual number in 2009 was 290
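Here is that arithmetic as a small Python sketch, with hypothetical low/high bounds on each assumption to show the upper and lower variances a Fermi estimate carries. The point values reproduce the numbers above; the bounds are illustrative, not data:

```python
# The piano tuner arithmetic as code, with crude low/point/high values for
# each assumption to carry the upper and lower variances.

assumptions = {                      # (low, point, high)
    "population":        (8_000_000, 9_000_000, 10_000_000),
    "people_per_house":  (3.0, 2.0, 1.8),  # more people per house -> fewer pianos
    "houses_per_piano":  (30, 20, 15),
    "tunings_per_year":  (0.5, 1.0, 1.0),  # per piano
}
hours_per_year = 50 * 5 * 8          # 2,000 working hours per tuner
hours_per_tuning = 2                 # including travel

def tuners(population, people_per_house, houses_per_piano, tunings_per_year):
    pianos = population / people_per_house / houses_per_piano
    tunings = pianos * tunings_per_year
    return tunings / (hours_per_year / hours_per_tuning)

for i, label in enumerate(("low", "point", "high")):
    values = [bounds[i] for bounds in assumptions.values()]
    print(f"{label}: {tuners(*values):.0f} tuners")
# point prints 225; the actual 2009 number was about 290 - inside the range
```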

This is similar to the Drake equation, which estimates the number of intelligent civilizations in our galaxy. This approach, by the way - or rather never having been taught it - may be one of the reasons estimating is seen as hard, or even not possible, by some. They missed those opportunities where estimating is taught.

What Does This Have to do with Project Management?

Estimation theory is a critical aspect of project management. When spending other people's money, we need to make decisions in the presence of uncertainty. Estimation theory is a branch of statistics dealing with estimating the values of parameters (numbers) based on measured/empirical data that have a random component. The parameters describe the underlying physical process in such a way that their values affect the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.
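As a hedged illustration of what an estimator does - a sketch, not anyone's published method - here is the sample mean as an estimator of an unknown parameter, using only the Python standard library and made-up numbers:

```python
# A sample-mean estimator approximating an unknown parameter from noisy
# measurements. The "parameter" is the true mean task duration in days;
# in practice it is unknown, here it is fixed so the sketch is checkable.

import random

random.seed(7)
true_mean = 12.0                                        # unknown in practice
samples = [random.gauss(true_mean, 3.0) for _ in range(50)]

estimate = sum(samples) / len(samples)                  # the estimator
variance = sum((x - estimate) ** 2 for x in samples) / (len(samples) - 1)
std_error = (variance / len(samples)) ** 0.5

print(f"estimate = {estimate:.2f} days, standard error = {std_error:.2f} days")
# More measurements shrink the standard error: the estimate converges on
# the parameter that describes the underlying process.
```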

In the project world, we have three core variables - cost, schedule, and technical performance. These are interdependent, likely non-linear, and many times non-stationary (evolving in time). There is a nice MIT OCW course on the Art of Approximation in Science and Engineering. These back-of-the-envelope estimates are critical to success in engineering and science. They are also critical to estimating in software development.

So when we hear it's hard, or it's not even possible, to estimate software development, don't believe it for a moment. Here's a butt-simple way: How to Estimate Almost Any Software Deliverable.

The next thing we hear is that estimates are the smell of dysfunction. And of course no dysfunctions are named, no root cause of the dysfunction is identified, and no corrective actions are proposed - only stop estimating, since estimates are evil, used as commitments, and misused to punish developers.

The Real Bottom Line

In business, the framing assumption is that of managerial finance. This framing assumption informs those of us on the business side - spending our money - when making decisions. This is the basis of the microeconomics of decision making.

When it is conjectured that decisions can be made in the presence of uncertainty without estimating the cost and impacts of those decisions, we have to ask: are those making these conjectures informed by any framework based in the processes of business management? It appears not.

Related articles: Estimating Processes in Support of Economic Analysis · Making Conjectures Without Testable Outcomes · Applying the Right Ideas to the Wrong Problem · Root Cause of Project Failure · IT Risk Management
Categories: Project Management

Flaws and Fallacies of #NoEstimates

Sun, 08/09/2015 - 23:39

All the work we do in the projects domain is driven by uncertainty. Uncertainty of some probabilistic future event impacting our project. Uncertainty in the work activities performed while developing a product or service.

Decision making in the presence of these uncertainties is a natural process in all of business.

The decision maker is asked to express her beliefs by assigning probabilities to certain possible states of the system in the future and the resulting outcomes of those states.

What's the chance we'll have this puppy ready for VMWorld in August? What's the probability that when we go live and 300,000 users logon we'll be able to handle the load? What's our test coverage for the upcoming release given we've added 14 new enhancements to the code base this quarter? Questions like that are normal everyday business questions, along with what's the expected delivery date, what's the expected total sunk cost, and what's the expected bookable value measured in Dead Presidents for the system when it goes live?

To answer these and the unlimited number of other business, technical, operational, performance, security, and financial questions, we need to know something about probability and statistics. This knowledge  is an essential tool for decision making no matter the domain.

Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write - H.G. Wells

If we accept the notion that all project work is probabilistic - driven by the underlying statistical processes of time, cost, and technical outcomes, including Effectiveness, Performance, Capabilities, and all the ...ilities that manifest and determine value after a system is put into initial use - then these conditions are the source of uncertainty, which comes in two types (a simulation sketch of both follows the list):

  • Reducible - event-based, with a probability of occurrence within a specified time period.
  • Irreducible - naturally occurring, described by a Probability Distribution Function of the variances produced by the underlying process.
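Here is a minimal Monte Carlo sketch of the two types, with illustrative parameters: the triangular spread is the irreducible (aleatory) variance, and the probability-weighted risk event is the reducible (epistemic) uncertainty:

```python
# Monte Carlo of one work activity carrying both types of uncertainty.
# All parameters are illustrative assumptions.

import random

random.seed(1)

def one_trial():
    duration = random.triangular(8, 15, 10)   # natural variance: min, max, most likely
    if random.random() < 0.25:                # 25% chance the risk event occurs
        duration += random.uniform(3, 6)      # its impact if it does
    return duration

trials = sorted(one_trial() for _ in range(10_000))
mean = sum(trials) / len(trials)
p80 = trials[int(0.80 * len(trials))]
print(f"mean = {mean:.1f} days, 80th percentile = {p80:.1f} days")
# Buying down the risk event shifts the tail; the natural variance remains
# and can only be protected against with schedule margin.
```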

If you don't accept this - that all project work is probabilistic in nature - stop reading, this Blog is not for you.

If you do accept that all project work is uncertain, then there are some more assumptions we need in order to make sense of the decision making processes. The term statistic has two definitions - one from long ago and a current one. The long-ago one means a fact, referring to numerical facts. A numerical fact is a measurement, a count, or a rank. This number can represent a total, an average, or a percentage of several such measures. The term also applied to the broad discipline of statistical manipulation, in the same way accounting applies to entering and balancing accounts.

Statistics in the second sense is a set of methods for obtaining, organizing, and summarizing numerical facts. These facts usually represent partial rather than complete knowledge about a situation - for example, a sample of the population rather than a count of the entire population, as in the census.

These numbers - statistics - are usually subjected to formal statistical analysis to help in our decision making in the presence of uncertainty.
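One hedged example of such an analysis, with made-up sample data: a bootstrap interval for mean cycle time, turning a partial sample into a statement a decision maker can actually use:

```python
# A bootstrap interval for mean cycle time from a small, partial sample -
# one simple form of "formal statistical analysis" applied to project data.
# The sample values are made up for illustration.

import random

random.seed(3)
cycle_times = [4, 6, 5, 9, 7, 12, 5, 8, 6, 10]   # days, observed so far

boot_means = []
for _ in range(5_000):
    resample = [random.choice(cycle_times) for _ in cycle_times]
    boot_means.append(sum(resample) / len(resample))
boot_means.sort()

low = boot_means[int(0.05 * len(boot_means))]
high = boot_means[int(0.95 * len(boot_means))]
print(f"mean cycle time is likely between {low:.1f} and {high:.1f} days (90% interval)")
```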

In our software project world, uncertainty is an inherent fact. Software uncertainty is likely much higher than in construction, since the requirements in software development are soft, unlike the requirements in interstate highway development. But while domains may differ in their level of uncertainty, estimates are still needed to make decisions in the presence of these uncertainties. Highway development has many uncertainties of its own - not the least of which are weather and weather delays.

When you measure what you are speaking about and express it in numbers you know something about it; but when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind - Lord Kelvin

Decisions are made on data. Otherwise those decisions are just gut feel, intuition, and, at their core, guesses. When you are guessing with other people's money, you have a low probability of keeping your job or of the business staying in business.

... a tale told by an idiot, full of sound and fury, signifying nothing - Shakespeare

When we hear personal anecdotes about how to correct a problem, and the conjecture that those anecdotes are applicable beyond the individual telling them - beware. Without a test, any conjecture is just a conjecture.

He uses statistics as a drunken man uses lampposts - for support rather than illumination - Andrew Lang

We many times confuse a symptom with the cause. When reading about all the failures in IT projects - the probability of failure, the number of failures versus successes - there is rarely, in those naive posts on the topic, any assessment of the cause of the failure. The Root Cause analysis is not present. The Chaos Report is the most egregious of these.

There is no merit where there is no trial; and till experience stamps the mark of strength, cowards may pass for heroes, and faith for falsehood - A. Hill

Tossing out anecdotes, platitudes, and misquoted quotes does not make for a credible argument for anything. "I knew a person who did X successfully, therefore you should have the same experience" is common. Or "just try it, you may find it works for you just like it worked for me."

It seems there are no Principles or tested Practices in this approach to improving project success. Just platitudes and anecdotes - chatter masquerading as process improvement advice.

I started to write a detailed exposition using this material for the #NoEstimates conjecture that decisions can be made without an estimate. But Steve McConnell's post is much better than anything I could have done. So here's the wrap up...

When it is conjectured that decisions - any decisions, some decisions, self-selected decisions - can be made in the presence of uncertainty without also making an estimate of the outcome of that decision, the cost of that decision, and the impact of that decision - then let's hear how, so we can test it outside personal opinion and anecdote.

References 

It's time for #NoEstimates advocates to provide some principle-based examples of how to make decisions in the presence of uncertainty without estimating. The books below are populist books (books without the heavy math), but still capable of conveying the principles of the topic, and are a good source for learning.

  1. Flaws and Fallacies in Statistical Thinking, Stephen K. Campbell, Prentice Hall, 1974
  2. The Economics of Iterative Software Development: Steering Toward Better Business Results, Walker Royce, Kurt Bittner, and Mike Perrow, Addison Wesley, 2009.
  3. How Not to be Wrong: The Power of Mathematical Thinking, Jordan Ellenberg, Penguin Press, 2014
  4. Hard Facts, Dangerous Half-Truths & Total Nonsense: Profiting from Evidence Based Management, Jeffery Pfeffer and Robert I. Sutton, Harvard Business School Press, 2006.
  5. How to Measure Anything, Finding the Value of Intangibles in Business, 3rd Edition, Douglas W. Hubbard, John Wiley & Sons, 2014.
  6. Standard Deviations: Flawed Assumptions, Tortured Data, and Other Ways to Lie With Statistics, Gary Smith
  7. Center for Informed Decision Making
  8. Decision Making for the Professional, Peter McNamee and John Celona

Some actual math books on the estimating problem

  1. Probability Methods for Cost Uncertainty Analysis, Paul R. Garvey
  2. Making Hard Decisions: An Introduction to Decision Analysis, 2nd Edition, Robert T. Clemen, Duxbury Press, 1996.
  3. Estimating Software Intensive Systems, Richard D. Stutzke, Addison Wesley, 2005.
  4. Probabilities as Similarly Weighted Frequencies, Antoine Billot, Itzhak Gilboa, Dov Samet, and David Schmeidler
Related articles: Making Conjectures Without Testable Outcomes · Estimating Processes in Support of Economic Analysis · Applying the Right Ideas to the Wrong Problem · Estimating and Making Decisions in Presence of Uncertainty
Categories: Project Management

More Misconceptions of Waterfall

Sat, 08/08/2015 - 16:47

It is popular in some agile circles to use Waterfall as the stalking horse for every bad management practice in software development. A recent example is:

Go/No Go decisions are a residue of waterfall thinking. All software can be built incrementally and most released incrementally.

Nothing in Waterfall prohibits incremental release. In fact, the notion of block release is the basis of most Software Intensive Systems development. From the point of view of the business, capabilities are what they bought - the capability to do something of value in exchange for the cost of that value. Here's an example from the health insurance business. Incremental release of features is of little value if those features don't work together to provide some needed capability to conduct business. A naive approach is the release early and release often platitude of some in the agile domain. Let's say we're building a personnel management system. This includes recruiting, on-boarding, provisioning, benefits signup, time keeping, and payroll. It would not be very useful to release the time keeping feature if the payroll feature were not ready.

[Figure: the capabilities of the personnel management system and the order in which they must be delivered]

So before buying into the platitude of release early and often, ask: what does the business need to do business? Then draw a picture like the one above, and develop a Plan for producing those capabilities in the order they are needed to deliver the needed value. Without this approach, you'll be spending money without producing value, and calling that agile.

That way you can stop managing other people's money with Platitudes and replace them with actual business management processes. So every time you hear a platitude masquerading as good management, ask: does the person using that platitude work anywhere with high value at risk? If not, then they probably have yet to encounter the actual management of other people's money.

Related articles: Capabilities Based Planning - Part 2 · Are Estimates Really The Smell of Dysfunction?
Categories: Project Management

Inversion of Control In Commercial Off The Shelf Products (COTS)

Thu, 07/30/2015 - 19:25

The architecture of COTS products comes fixed from the vendor. As standalone systems this is not a problem. When integration starts, it is a problem. 

Here's a white paper from the past that addresses this critical enterprise IT issue:

Inversion of Control from Glen Alleman
Categories: Project Management

Architecture-Centered ERP Systems in the Manufacturing Domain

Thu, 07/30/2015 - 13:51

I found another paper, from a newspaper systems journal, on architecture in manufacturing and ERP.

One of the 12 Principles of agile says The best architectures, requirements, and designs emerge from self-organizing teams. This is a developer's point of view of architecture. The architect's point of view looks like this:

Architectured Centered Design from Glen Alleman
Categories: Project Management

IT Risk Management

Wed, 07/29/2015 - 02:46

I was sorting through a desk drawer and came across a collection of papers from book chapters and journals done in the early 2000's, when I was the architect of an early newspaper editorial system.

Here's one on Risk Management

Information Technology Risk Management from Glen Alleman. This work was done early in the development of risk management processes. Tim Lister's quote came later: Risk management is how adults manage projects.
Categories: Project Management