Software Development Blogs: Programming, Software Testing, Agile Project Management



Quote of the Day

Herding Cats - Glen Alleman - Sun, 06/25/2017 - 18:48

Great spirits have always found violent opposition from mediocrities. The latter cannot understand it when a man does not thoughtlessly submit to hereditary prejudices but honestly and courageously uses his intelligence. - Albert Einstein, quoted in The Tao of Systems Engineering: An Engineer's Survival Guide by Ronald Paul Sherwin (Kindle Locations 324-326)

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Sat, 06/24/2017 - 15:39

As to methods, there may be a million and then some, but principles are few. The man who grasps principles can successfully select his own methods. The man who tries methods, ignoring principles, is sure to have trouble. - Harrington Emerson

from an advertisement for The Book of Business, in Colliers: National Weekly, 1917 

When we hear a claim or conjecture that some method fixes some named dysfunction, and that claim clearly violates established principles, several outcomes are possible. Ask: what do you mean, or what principle informs your conjecture here? Many times this will be the start of a dialogue on the topic, and learning on both sides can begin. Other times, those being asked that question take offense at being asked to explain their position. In the latter case, it is best to move on, since willfully ignoring the principles that form the basis of the discussion is not going to lead to a shared exchange of ideas.


Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Fri, 06/23/2017 - 18:32

"We trained hard, but it seemed that every time we were beginning to form up into teams, we would be reorganized. I was to learn later in life that we tend to meet any new situation by reorganizing; and a wonderful method it can be for creating the illusion of progress while producing confusion, inefficiency, and demoralization." - Petronius Arbiter (210 B.C.)

Categories: Project Management

Just a Reminder About Estimating Software Projects

Herding Cats - Glen Alleman - Fri, 06/23/2017 - 17:52

Here's a clear and concise discussion of the estimating topic.

And just a reminder for making decisions in the presence of uncertainty.

There is NO Means

Categories: Project Management

Possibilities for Managing Individual and Team Compensation

There’s a Twitter conversation about how to manage team compensation. How can we pay people what they are worth and compensate a team for their value?

I know there are teams where I was quite valuable—not because I was “the” leader, but because I helped the team achieve our goals. And, there are teams where I was not as valuable. I did great work, but my contribution wasn’t as valuable as other people on the team.

That is the crux of the matter when we think about team compensation.

Here’s how I managed this problem in the past:

  • I created (with HR’s input) technical career levels. I often had 5 levels: associate engineer, engineer, senior engineer, principal engineer, consulting engineer. The principal and consulting levels had first-level manager and group manager as parallels. In addition, I had a couple more management levels (director and VP).
  • I wrote down expertise criteria that differentiated each level. The criteria focused on breadth of responsibility, collaboration capability, and strategic vs. tactical thinking. HR made me add “typical education,” which I amended to say “or years of experience.”
  • I asked my groups to provide me feedback on these criteria.
  • When I was sure the criteria were correct, I met one-on-one to make sure we each agreed where each person fit into the criteria. Some people were on the verge of a promotion. Some were not. We worked together to make sure we were both comfortable with their title and compensation.

Now, I had the ability to provide people individual compensation and promotions. And, I could provide team-based compensation. Here’s how it worked.

One guy, John, wanted a promotion to senior engineer. He was a high-initiative person. He coached and mentored people in the team. He got along with everyone quite well, and his solutions were often good. It was the often that was the difficult part. When he got an idea in his head, it would take a disaster to convince him he was wrong. His judgment was not as good as a senior engineer needed to have.

I’d provided feedback in our weekly one-on-ones explaining both my happiness and concerns with his judgment, depending on the circumstance. (This was not agile. We used staged-delivery, a feature-driven approach. I was the manager of several groups.) I asked him if he wanted coaching, and yes, he did, but not from me. That was fine. I arranged a coaching arrangement with someone who was a principal engineer (2 levels up).

The two of them met twice a week for several weeks. I think each meeting was about 20-30 minutes. The coach asked questions, provided guidance and options. The engineer learned a ton in that month and started to explore other options with his team. He started to realize his very first idea was not the be-all and end-all for the work.

It took him several months of practice, and I was able to promote him to be a senior engineer.

People need to know what the criteria are—why the org values them. If the salary ranges are too tight, there is no flexibility in hiring. If the salary ranges are too loose, it’s too easy to have discrimination in payment, especially if someone started their first job too low. (Yes, I have experienced salary discrimination.)

Let me provide a little context for team compensation. John’s team was involved in a new product. We didn’t know much about the customers and product management wasn’t much help. (I said this is before agile.) John asked the tech writer, Susan, for help in understanding what customers wanted.

Susan guided the entire project. She helped the team understand the requirements. Because Susan was a principal engineer, she had customer contacts and she used them. She created what we would now recognize as a ranked backlog. John had the idea of a “pre-beta,” which were builds we provided to a select group of customers. You might think of this as a series of MVP (Minimum Viable Products) to these customers. The customers provided feedback to Susan, who used that feedback to guide the team.

We released the product and it was a great success. My VP came to me and told me I would get a $10k bonus (a ton of money back then). I said I had not had enough to do with the project, and that the team should get the money. My boss cocked an eyebrow and said, “I don’t want to lose any of them.” I told him I would make it right, somehow.

I went to the team and told them I had been chosen to receive a $10k bonus, which I thought was wrong. They all agreed!

I asked them to explain how they wanted to divide the money. (I was thinking evenly.) Before I even had a chance to pass out stickies, John said, “Susan should get the most. She was the heart and soul of this project.” Everyone nodded their heads.

I said that was great, but let’s do a private vote in case not everyone agreed. I passed out stickies and asked people to write down how they wanted to divide it. Every person said: 40% to Susan, the rest evenly. Well, one person added me in the evenly part. I thanked the person and demurred.

That’s what we did. Susan asked for part of her increased percentage to be a team dinner with spouses/significant others and they invited Mark and me.

The team knows who did what. The team can manage bonuses.

I don’t know that this is the “best” approach. I have always wanted to know what my organization wanted from me. I have found a career ladder in the form of expertise criteria a great way to accomplish this. In addition, I want to know that if there is extra compensation, the team will receive that extra as a team. Every project I’ve ever been on was a team effort. Agile approaches make that even more obvious.

Categories: Project Management

Decisions Without Estimates?

Herding Cats - Glen Alleman - Thu, 06/22/2017 - 20:52

There is a question posted at an agile conference: can you make a decision without an estimate? Like many discussions in the domain of agile, the statement is made without any evidence that it is true, nor that it can even be true in principle. This type of fallacy is common.

First a principle ...

There is NO means of making credible decisions in the presence of uncertainty without first estimating the outcome of that decision. This is a foundational principle of the economics of probabilistic decision making. To suggest you can make such a decision without estimates willfully ignores that principle.

Decisions without estimates

Let's look at each of these from the point of view of Managerial Finance and Economics of Decision Making in the presence of Uncertainty. These two points of view are the basis of any credible business management process.

It's not your money, spend it wisely

First, let's establish a singular framing assumption:

  • All project work is uncertain - this uncertainty comes in two types: Reducible (Epistemic) and Irreducible (Aleatory).
  • These uncertainties are probabilistic and statistical respectively.
  • Knowing the behaviors of these uncertainties - the probabilistic and statistical models - is part of deciding what to do.
  • The impact on the project from these probabilities is also probabilistic. Knowing this impact requires assessing the probabilities.

If you want to know the probability of occurrence of some epistemic uncertainty, or the statistical process behind some aleatory activity, you need to estimate. Don't guess. Don't assume. Estimate. This process is the basis of all risk management. And Risk Management is How Adults Manage Projects - Tim Lister.
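The distinction between the two uncertainty types can be sketched with a small Monte Carlo simulation. This is an illustrative toy, not a calibrated model: the triangular distribution, the 30% rework probability, and the durations are all assumptions for illustration.

```python
import random

# Illustrative sketch of the two uncertainty types in a duration estimate.
# All distributions and probabilities below are assumptions.

def simulate_durations(n_trials=10_000, seed=42):
    rng = random.Random(seed)
    durations = []
    for _ in range(n_trials):
        # Aleatory (irreducible): natural variability of the work itself,
        # modeled as a triangular distribution in days.
        base = rng.triangular(8.0, 15.0, 10.0)
        # Epistemic (reducible): something we could know but don't - here,
        # a 30% chance that a rework cycle adds 4 days.
        rework = 4.0 if rng.random() < 0.30 else 0.0
        durations.append(base + rework)
    durations.sort()
    return durations

durations = simulate_durations()
p80 = durations[int(0.80 * len(durations))]
print(f"80% confidence duration: {p80:.1f} days")
```

Reducing the epistemic term (buying down the rework probability with knowledge) narrows the estimate; the aleatory spread remains and can only be margined against.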

So with this in mind let's look at the conjectured process that can be performed on projects without estimating in the presence of uncertainty.

  • Longest Entry Barrier First - how would you know which is the longest without making an estimate, since that longest is likely a probabilistic number in the future? If you know the longest upfront, it's already happened.
  • Prototyping First - OK, which feature do we prototype? How much effort is the prototype? What are we expecting to learn from the prototype before we start spending the customer's money?
  • Strategic Items First - strategies are hypotheses. Hypotheses have uncertainties. Those uncertainties impact the validation of the Strategy. How can you assess the strategy without making estimates of the impact of the outcome of the hypothesis?
  • Customer Oriented First - does the customer have absolute confidence in what comes first? Does the customer operate in the presence of uncertainty?
  • High Risk First - all risk comes from uncertainty. Epistemic uncertainty. Aleatory uncertainty. No decision can be made in the presence of risk - derived from its uncertainties - without estimating the impact of that risk, the cost to mitigate that risk, and the residual risk after mitigation. Risk Management is How Adults Manage Projects - Tim Lister. Be an adult, manage the project as a Risk Manager. Be an adult, estimate the impact of the risk on the probability of success.
  • Management Decision - OK, management decides. Any uncertainties resulting from that decision? No? Proceed. Yes? Any estimate of the impact of that decision on the probability of success? Just because management decided does not remove the uncertainty, unless that uncertainty has been analyzed.
  • Reducing Cost First - how much cost, from what sources, and what's the probability that the cost reduction will be effective?
  • Minimal Scope - how do you know what minimal is in the presence of uncertainty without estimating?
  • POC to Decide - same as Management Decision

There is NO principle by which you can make a credible decision in the presence of uncertainty (reducible and irreducible) without estimating the impact of that decision on the probability of success of your project.


Related articles:

  • What's the Smell of Dysfunction?
  • Root Cause of Project Failure
  • IT Risk Management
  • Making Decisions In The Presence of Uncertainty
  • Herding Cats: Five Immutable Principles of Project Success
  • Estimating Processes in Support of Economic Analysis
  • Herding Cats: Decisions Without Estimates?
  • Herding Cats: Risk Management for Agile Software Development Projects
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 06/22/2017 - 16:29

Management of Software Development projects is not like a Rocky movie, where strong desire overcomes lack of skill, experience, and talent.

Skill, experience, tools, principles, processes, practices, and talent are all needed to increase the probability of success. Estimates are rarely for those spending the money; they are for those paying for the Value produced by those spending the money. Not understanding this, not seeking the highest possible level of each of these, or tolerating less than effective application of these, will result in disappointing results for those paying for and those developing the software system.

Categories: Project Management

Book of the Month

Herding Cats - Glen Alleman - Thu, 06/22/2017 - 15:22

The Death of Expertise describes how established knowledge is challenged and why this is critical in our modern society.

With nearly unlimited access to information - the information age - we've become narcissistic and misguided intellectual egalitarians.

With everything from WebMD to Wikipedia, normal people are now experts.

Any idea now demands to be taken seriously. Unsubstantiated claims have the same visibility as tested theories and evidence-based principles, processes, and practices.

In our software development domain, anyone with a blog and a Twitter account can make statements. Present company included.

The trouble is fact checking these claims takes effort.

With the right tag lines, those making bogus claims can collect followers with ease.

This book is a rearguard action on behalf of those who actually know what they are talking about.

Categories: Project Management

Programming Across Paradigms

From the Editor of Methods & Tools - Wed, 06/21/2017 - 18:46
What’s in a programming paradigm? How did the major paradigms come to be, and why? Once we’ve sworn our love to one paradigm, does a program written under any other still smell as sweet? Can functional programmers learn anything from the object-oriented paradigm, or vice versa? In this talk, we’ll try to understand what we […]

Quote of the Day

Herding Cats - Glen Alleman - Wed, 06/21/2017 - 15:33

While management and leadership are related and often treated as the same, their central functions are different. Managers clearly provide some leadership, and leaders obviously perform some management.
However, there are unique functions performed by leaders that are not performed by managers. My observation over the past forty years ... is that we develop a lot of good managers, but very few leaders.
Let me explain the difference in functions that they perform:

  • A manager takes care of where you are; a leader takes you to a new place.
  • A manager is concerned with doing things right; a leader is concerned with doing the right things.
  • A manager deals with complexity; a leader deals with uncertainty.
  • A manager creates policies; a leader establishes principles.
  • A manager sees and hears what is going on; a leader hears when there is no sound and sees when there is no light.
  • A manager finds answers and solutions; a leader formulates the questions and identifies the problems.

- James E. Colvard

from Systems Engineering Newsletter, June 2017, Project Performance International, PO Box 2385, Ringwood North Victoria, 3134, Australia

Categories: Project Management

Misinterpretations of the Cone of Uncertainty

Herding Cats - Glen Alleman - Tue, 06/20/2017 - 20:31

The Cone of Uncertainty is a framing assumption used to model the needed reduction in some parameter of interest in domains ranging from software development to hurricane forecasting.

This extended post covers

  1. The framing assumptions of the Cone of Uncertainty.
  2. The Cone of Uncertainty as a Technical Performance Measure.
  3. Closed Loop Stochastic Adaptive control in the presence of Evolving Uncertainty.

These topics are connected in a simple manner.

All project work is uncertain (probabilistic and statistical). Uncertainty comes in two forms - Epistemic and Aleatory. Uncertainty creates Risk. Risk management requires active reduction of risk. Active reduction requires that we have a desired reduction goal, perform the work needed to move the parameter toward the goal - inside the control limits - and measure progress toward the reduction goal. Management of this reduction work and measurement of the progress toward the goals is a Closed Loop Control System paradigm. Closed Loop Control has a goal, an action, a measurement, and a corrective action. These activities, their relationships, and their values are defined in a PLAN for increasing the probability of success for the project. The creation and management of the Plan is usually performed by the Program Planning and Controls group where I work.
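The goal / action / measurement / corrective-action loop can be sketched in a few lines. This is an illustrative toy, not a project-control implementation; the proportional gain and the numbers are assumptions.

```python
# Toy version of the closed loop: a planned goal, a measured actual,
# a variance, and a proportional corrective action.

def corrective_action(goal, actual, gain=0.5):
    """Corrective action proportional to the variance from plan."""
    return gain * (goal - actual)

goal = 60.0      # planned value of the parameter of interest (e.g. risk exposure)
actual = 100.0   # measured value at the start of the control cycle
for period in range(10):
    actual += corrective_action(goal, actual)  # act, then re-measure next period

print(round(actual, 1))  # converges toward the planned value
```

Each pass through the loop is one measurement period: measure, compare to plan, act on the variance. Without the measurement and comparison there is nothing to act on - which is the point of the paragraph above.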

Framing Assumptions for the Cone of Uncertainty 

Of late, the Cone of Uncertainty has become a mantra of No Estimates advocates, who claim that data is needed BEFORE the Cone is of any use, and that without this data the Cone has no value in the management of the work. This of course is NOT the correct use of the Cone. This fallacy comes from a collection of data that did not follow the needed and planned reduction of uncertainty for the cost estimates of a set of software development projects.

The next fallacy of this conjecture is that the root cause for why those projects did not have their uncertainty reduced in some defined and desirable manner was never provided.  The result of the observation is a symptom of something. But the Cause was not sought in any attributable way. This is a common problem in low maturity development organizations. We blame something for our problem, without finding out the cause of the something or even if that something is the actual cause.

In our domain, Root Cause Analysis is mandated before ANY suggested change for improvement, prevention, or corrective action is taken. Our RCA method is the Apollo Method. Apollo is distinctly different from other root cause approaches in that each effect requires both a Condition and an Action. Please read the book describing the Apollo Method to see why this is critical and how it is done. Then every time you hear some statement about an observed problem, you can ask: what's the cause (both condition and action)?

So when you hear I have data that shows that the Cone (or for that matter anything) does not follow the expected and desired outcomes and there is no Root Cause to explain that unexpected outcome - ignore that conjecture, until the reason for the observed outcome is found and corrective actions have been identified and those corrective actions have been confirmed to actually fix the observed problem. 

So let's recap what the Cone of Uncertainty is all about from those who have created the concept. Redefining the meaning and then arguing my data doesn't match that is a poor start to improving the probability of project success.

The Cone of Uncertainty describes the uncertainty in the behaviors or measurement of a project parameter and how that uncertainty needs to be reduced as the project proceeds to increase the Probability of Project Success.
If that parameter is NOT being reduced at the planned rate, then the Probability of Success is not increasing in the planned and needed manner.

If the parameter of interest is not being reduced as needed, go find out why and fix it, or you'll be late, over budget, and the technical outcome unacceptable. The Cone of Uncertainty does NOT need data to validate that it is the correct paradigm. The Cone of Uncertainty is the framework for improving the needed performance of the project. It's a Principle.

Here's some background on the topic of the Cone of Uncertainty. Each of these can be found with Google.

  1. Misinterpretations of the "Cone of Uncertainty" in Florida during the 2004 Hurricane Season, Kenneth Broad, Anthony Leiserowitz, Jessica Weinkle, and Marissa Steketee, American Meteorological Society, May 2007
  2. Reducing Estimation Uncertainty with Continuous Assessment: Tracking the "Cone of Uncertainty", Pongtip Aroonvatanaporn, Chatchai Sinthop, and Barry Boehm
  3. "Shrinking The Cone Of Uncertainty With Continuous Assessment For Software Team Dynamics In Design And Development," Pongtip Aroonvatanaporn, Ph.D. Thesis, University of Southern California, August 2012
  4. "Improving Software Development Tracking and Estimation Inside the Cone of Uncertainty," Pongtip Aroonvatanaporn, Thanida Hongsongkiat, and Barry Boehm

The Cone of Uncertainty as a Technical Performance Measure

Technical Performance Measures are one of four measures describing how a project is making progress to plan. These measures - combined - provide insight into the probability of program success.

  • Measure of Effectiveness
  • Measure of Performance
  • Technical Performance Measure
  • Key Performance Parameters

Here's a workshop that we give twice a year at the College of Performance Management's conference. The definition of a TPM is on page 14. 

The reduction of Uncertainty (as in the Cone shape of the Uncertainty) can be considered a Technical Performance Measure. The Planners of the program define the needed uncertainty at a specific point in time, measure that uncertainty at that point in time and assess the impact on the probability of program success by comparing planned versus actual.

This approach is the same for any other TPM - Risk, Weight, Throughput, and even Earned Value Management parameters. This is the basis of the closed loop control system used to manage the program with empirical data from past performance compared to the planned performance at specific points in time.

This Closed Loop Program Control Process provides the decision makers with actionable information. The steering target - the MOE, MOP, TPM, KPP - is defined upfront and evolves as the program progresses with new information. The same holds for the uncertainties in the program measures. This approach is also the basis of Analysis of Alternatives and other trades when it is determined that the desired measures cannot be achieved. Decisions are made to adjust the work, adjust the requirements, or adjust the measures.
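The planned-versus-actual comparison at one assessment point can be sketched as follows. The function, names, and numbers are illustrative assumptions, not a standard from any TPM handbook.

```python
# Hypothetical TPM assessment: compare the measured actual against the
# planned value and its control limits at one assessment point.

def assess_tpm(planned, actual, lcl, ucl):
    """Return the variance from plan and whether the actual is inside limits."""
    variance = actual - planned
    within_limits = lcl <= actual <= ucl
    return variance, within_limits

# E.g. a weight TPM: planned 120 kg at this milestone, limits 110-130 kg.
variance, ok = assess_tpm(planned=120.0, actual=131.0, lcl=110.0, ucl=130.0)
print(variance, ok)  # -> 11.0 False : outside the upper limit, act now
```

The same comparison applies to any steering target - Risk, Weight, Throughput, or Earned Value parameters - only the units and limits change.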

Closed Loop Control Systems in the Presence of Emerging Stochastic Behaviors

Risk management is how adults manage projects - Tim Lister. All risk comes from uncertainty. Uncertainty comes in two forms - Epistemic (reducible) and Aleatory (irreducible). Here's an overview of how to manage in the presence of uncertainty.


Let's look at the Closed Loop Control System paradigm for managing projects. Control systems exist in many domains. From simple fixed setpoint systems like your thermostat controlling the HVAC system, or a PID controller running a cracking column at a refinery, to multi-dimensional evolving target control systems guiding an AIM-9L missile (which I wrote SW for), to multi-target acquisition systems needed to steer midcourse interceptors to their targets, to cloud-based enterprise ERP systems, to autonomous flight vehicles operating in mixed environments, with mission planning while in flight and collision avoidance with manned vehicles - the current generation of swarm UAVs in the battlefield.

A concept that spans all these systems - and is shared with project management control systems (we work in Program Planning and Controls as a working title) is the idea of a PLAN.

  • What is the Plan to keep the temperature in the room at 76 degrees? It's a simple plan: run the A/C or heater, measure the current temperature, compare that to the desired temperature, and adjust appropriately - all the way up to
  • An evolving battle management system where the control loop for individual vehicles interacts with the control loops of other vehicles (manned and unmanned) as the mission, terrain, and weather evolve.
  • What are the units of measure for this Plan? What are the probabilistic and statistical behaviors of these units measure? How can we measure the progress of the project toward compliance with these measures in the presence of uncertainty?
  • How can the¬†variance analysis of these measures be used to take corrective actions to¬†keep the program GREEN?

All these control systems share the same framing assumptions - there is a goal, the needed capabilities to accomplish that goal are known, and I have a PLAN by which that goal can be accomplished. That PLAN may evolve - with manual intervention or with autonomous intervention - as the Mission and the situation evolve.
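The thermostat end of that spectrum can be written in a few lines: a fixed set point, a measurement, and a corrective action. The set point and dead band below are illustrative assumptions.

```python
# The thermostat case, as code: a fixed set point, a measurement, and a
# corrective action. Set point and dead band are illustrative assumptions.

SET_POINT = 76.0   # planned room temperature (degrees F)
DEAD_BAND = 1.0    # tolerated variance before acting

def thermostat(measured_temp):
    """Compare the measurement to the plan and pick the corrective action."""
    if measured_temp < SET_POINT - DEAD_BAND:
        return "heat"
    if measured_temp > SET_POINT + DEAD_BAND:
        return "cool"
    return "hold"

print([thermostat(t) for t in (72.0, 76.5, 80.0)])  # -> ['heat', 'hold', 'cool']
```

The project-management analogue replaces the fixed set point with a planned value that itself evolves over the life of the program, but the loop - plan, measure, compare, correct - is the same.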

Project work is a complex, adaptive, stochastic process with evolving Goals, resources, and situation, all operating in the presence of uncertainty. Here are the moving parts for any non-trivial project.
In the control system for project management, like the control system for those non-trivial examples above - the PLAN for the behavior of the parameter of interest is the starting point. The feedback (closed loop feedback) of the variance between the desired Value and the Actual Value at the time of the measurement is connected with other information on the program to define the corrective actions needed to Keep the Program Green.

Like control systems in automation with closed loop control, a project control system must be implemented that controls cost, ensures technical performance, and manages schedules. All successful control systems require measurements. All closed loop control systems have a plan to steer toward - from a temperature for a simple thermostat all the way to a target cost, schedule, and technical performance for a complex software project. In the project management world those measurements (inputs/calculated values) are called metrics, and the greater the frequency of measurement (up to a point - the Nyquist sample rate), the more accurate the control. All measurements need to be compared to an expectation - the Set Point. When deviations are noticed, action is required to modify the processes that produce the output - the product or service produced by the project.

Here are some resources on closed loop control systems for project management, based on following the plan for the performance of the Parameter of Interest:

  1. "Feedback Control in Project-Based Management," L. Scibile, ST Division - Monitoring and Communication Group (ST/MC), CERN, Geneva, Switzerland

Back to the Cone of Uncertainty

The notion of the Cone of Uncertainty has been around for a while, starting with Barry Boehm's work in "Software Engineering Economics," Prentice-Hall, 1981.

But first, let's establish a framing assumption. When you hear of projects where uncertainty is not reduced as the project progresses, ask a simple question: why is this the case? Why, as the project progresses with new information, delivered products, and reduced risk, is the overall uncertainty not being reduced? Go find the root cause of this before claiming uncertainty doesn't reduce. Uncertainty, as a principle, should be reducing on all projects through the direct action of Project Management. If uncertainty is not reducing, the cause may be bad management, an out-of-control project, or working in a pure research world where things like that happen.

As well, never measure any project parameter as a ratio. Relative numbers - ordinal - are meaningless when making decisions. Relative uncertainty is one example. Relative to what? Cardinal numbers are the measures used to make decisions.

So a quick review again. What is the Cone of Uncertainty?

  • The Cone is a project management framework describing the uncertainty aspects of estimates (cost and schedule) and other project attributes (cost, schedule, and technical performance parameters). Estimates of cost, schedule, and technical performance on the left side of the cone have a lower probability of being precise and accurate than estimates on the right side of the cone. This is due to many reasons. One is the level of uncertainty early in the project - aleatory and epistemic uncertainties that create risk to the success of the project. Other uncertainties that create risk include:
    • Unrealistic performance expectations with missing Measures of Effectiveness and Measures of Performance
    • Inadequate assessment of risks and unmitigated exposure to these risks without proper handling plans
    • Unanticipated technical issues without alternative plans and solutions to maintain effectiveness
  • Since all project work contains uncertainty, reducing this uncertainty - which reduces risk - is the role of the project team and their management: either the team itself, the Project or Program Manager, or on larger programs the Risk Management owner.

Here's another simple definition of the Cone of Uncertainty: 

The Cone of Uncertainty describes the evolution of the measure of uncertainty during a project. For project success, uncertainty not only must decrease over time, but must also diminish in its impact on the project's outcome. This is done by active risk management, through probabilistic decision-making. At the beginning of a project, there is usually little known about the product or work results. Estimates are needed but are subject to a large level of uncertainty. As more research and development is done, more information is learned about the project, and the uncertainty then decreases, reaching 0% when all risk has been mitigated or transferred. This usually happens by the end of the project.

So the question is: how much variance reduction needs to take place in the project attributes (risk, effectiveness, performance, cost, schedule - shown below), at what points in time, to increase the probability of project success? This is the basis of Closed Loop Project Control. Estimates of the needed reduction of uncertainty, estimates of the possible reduction of uncertainty, and estimates of the effectiveness of these reduction efforts are the basis of the Closed Loop Project Control System.

This is the paradigm of the Cone of Uncertainty - it's a planned development compliance engineering tool, not an after-the-fact data collection tool.

The Cone is NOT the result of the project's past performance. The Cone IS the Planned boundaries (upper and lower limits) of the needed reduction in uncertainty (or other performance metrics) as the project proceeds. When actual measures of cost, schedule, and technical performance are outside the planned cone of uncertainty, corrective actions must be taken to move those uncertainties inside the cone if the project is going to meet its cost, schedule, and technical performance goals.
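The idea of the Cone as planned boundaries, rather than collected data, can be sketched in code. The 0.25x-4x starting band (echoing Boehm's early-estimate range) and the linear narrowing are illustrative assumptions; a real program would plan its own limits per parameter.

```python
# Sketch of the Cone as planned upper/lower boundaries that narrow over time.
# The 0.25x-4x starting band and linear narrowing are illustrative assumptions.

def cone_bounds(fraction_complete, initial_ratio=4.0):
    """Planned multipliers on the final value at a given point in the project."""
    upper = initial_ratio - (initial_ratio - 1.0) * fraction_complete
    return 1.0 / upper, upper

def inside_cone(estimate, target, fraction_complete):
    """Is the current estimate within the planned boundaries?"""
    lower, upper = cone_bounds(fraction_complete)
    return lower * target <= estimate <= upper * target

# A 2x estimate is inside the planned band early in the project, but a breach
# late in the project - a signal that corrective action is needed.
print(inside_cone(200.0, 100.0, 0.1), inside_cone(200.0, 100.0, 0.9))  # -> True False
```

The check against the planned band, not the band itself, is what drives the corrective action described above.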

If your project's uncertainties are outside the Planned boundaries at the time when they should be inside them, then you are reducing the probability of project success.

The Measures that are modeled in the Cone of Uncertainty are the Quantitative basis of a control process that establishes the goal for the performance measures. Capturing the actual performance, comparing it to the planned performance, and checking compliance with the upper and lower control limits provides guidance for making adjustments to keep the variables performing inside their acceptable limits.

The Benefits of the Use of the Cone of Uncertainty 

The planned value, the upper and lower control limits, and the measures of actual values form a Closed Loop Control System - a measurement-based feedback process to improve the effectiveness and efficiency of the project management processes.

  • Analyzing trends that help focus on problem areas at the earliest point in time - when the¬†variable under control¬†starts misbehaving, intervention can be taken. No need to wait till the end to find out¬†you're not going to make it.
  • Providing early insight into error-prone products that can then be corrected earlier and thereby at lower cost - when the trends are headed to the UCL and LCL, intervention can take place.
  • Avoiding or minimizing cost overruns and schedule slips by detecting them early enough in the project to implement corrective actions - by observing trends toward breaches of the UCL and LCL.
  • Performing better technical planning, and making adjustments to resources based on discrepancies between planned and actual progress.
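The "early insight" bullets above can be sketched as a simple trend projection (the data and limit here are illustrative assumptions): fit the recent period-to-period change and project the next value; if the projection breaches the planned UCL, intervene before the breach actually occurs.

```python
def projected_next(history):
    """Project the next value from the average period-to-period change."""
    deltas = [b - a for a, b in zip(history, history[1:])]
    slope = sum(deltas) / len(deltas)
    return history[-1] + slope

history = [10, 14, 19, 25]   # measured variance, trending upward
ucl = 30                     # planned upper control limit
projection = projected_next(history)
print(projection)            # → 30.0
print(projection >= ucl)     # → True: the trend warns of a breach next period
```

No need to wait for the actual breach - the trend itself is the signal.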

A critical success factor for all project work is Risk Management. And risk management includes the management of all kinds of risks - risks from all sources of uncertainty, including technical risk, cost risk, schedule risk, and management risk. Each of these uncertainties, and the risks they produce, can take on a range of values described by probability and statistical distribution functions. Knowing what ranges are possible, and knowing what ranges are acceptable, is a critical project success factor.
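One common way to see what ranges are possible is a Monte Carlo simulation over the probability distributions of the cost elements. This is a hedged sketch with invented triangular distributions, not real project data:

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def total_cost_samples(n=10_000):
    """Sample total cost from triangular distributions (low, high, mode)."""
    samples = []
    for _ in range(n):
        dev   = random.triangular(80, 140, 100)  # development cost
        test  = random.triangular(20, 60, 30)    # test cost
        integ = random.triangular(10, 40, 15)    # integration cost
        samples.append(dev + test + integ)
    return samples

costs = sorted(total_cost_samples())
p80 = costs[int(0.8 * len(costs))]  # cost not exceeded with 80% confidence
print(round(p80, 1))                # the 80th-percentile total cost
```

Comparing percentiles like this against the acceptable range tells you whether the possible outcomes are tolerable.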

We need to know the Upper Control Limits (UCL) and Lower Control Limits (LCL) of the ranges of all the variables that will impact the success of our project, and we need to know these ranges as a function of time. With this paradigm we have logically connected project management processes with Control System processes: when the variances created by uncertainty go outside the UCL and LCL, corrective action is needed. Here's a work-in-progress paper, "Is there an underlying Theory of Project Management," that addresses some of the issues with control of project activities.

This is the critical concept in successful project management.

We must have a Plan for the critical attributes - Mission Effectiveness, Technical Performance, Key Performance Parameters - for the end items. If these are not compliant, the project has become subject to one of the Root Causes of program performance shortfall. We must have a burndown or burnup plan for producing the end item deliverables for the program that match those parameters over the course of the program. Of course, we have a wide range of possible outcomes for each item in the beginning. As the program proceeds, the measured variances on those items move toward compliance with the target number - in this case, Weight.


Here's another example of the Cone of Uncertainty - in this case, the uncertainty in the weight of a vehicle as designed by an engineering team. The UCL and LCL are defined BEFORE the project starts. These are the allowable ranges of the weight values for the object at specific points in time. When the actual weight or the projected weight goes outside that range - Houston, We Have A Problem. These limits inform the designer of the progress of the project as it proceeds. Staying inside the control limits is the planned progress path to the final goal - in this case, the target weight.

Wrap Up of this Essay

The Cone of Uncertainty is a signaling boundary of a Closed Loop Control system, used to manage the project to success with feedback from the comparison of the desired uncertainty in some parameter to the actual uncertainty in that parameter.

This boundary is defined BEFORE work starts, to serve as the PLANNED target to steer toward for a specific parameter. In simpler closed loop control systems, this is called the Set Point.†

† The set point is the desired or target value for an essential variable of a system, often used to describe a standard configuration or norm for the system. In project management or engineering development, the set point is a stochastic variable that evolves as the program progresses and may be connected in non-linear ways with the set points for other parameters.

  Related articles Are Estimates Really The Smell of Dysfunction? What's the Smell of Dysfunction? Herding Cats: The Economics of Decision Making on Software Projects Capabilities Based Planning - Part 2
Categories: Project Management

Commenting Policy and How to Get Answers to Your Questions

Mike Cohn's Blog - Tue, 06/20/2017 - 17:00

Here is a short reminder about the Commenting Policy for this site. You can read the full policy here or via the link in the footer on every page.

A lot of the value of this blog comes from the discussion in the comments, and so it is important that all comments be related to the original post. While I appreciate anyone who takes the time to leave a comment, if a comment is not related to the original post, I won't reply and may delete the comment.

Do not hijack a post to an entirely different question. Please see if your question has been addressed by other posts. If not, you can submit it at

I try to address the questions and topics suggested there through this blog and my weekly tips emails.

I'm sorry, but I cannot reply by email to questions that are highly specific to your team. If your question or topic is one that I think is of general interest to others here, I will do my best to address it in a blog or weekly tip.

Quote of the Month June 2017

From the Editor of Methods & Tools - Tue, 06/20/2017 - 08:44
Avoid using defect tracking systems during the iteration. Fix bugs discovered in the work underway for the iteration immediately. If it takes a lot of work, create a task in the Sprint backlog. However, there is usually no need to log this bug in the defect tracking system. Logging it would only create another work […]

Defining ‚ÄúScaling‚ÄĚ Agile, Part 6: Creating the Agile Organization

We might start to think about agile approaches as a project change. However, if you want to “scale” agile, the entire culture changes. Here is a list of the series and how everything changes the organization’s culture:

All of these “scaling” ideas require change.

Agile organizations become very good at small changes all the time. They are adaptive and resilient. They understand how change works and they embrace it. (Big changes are quite difficult for almost everyone. Small changes tend to be much easier.)

Here is a picture of the Satir change model. We start off in Old Status Quo with some level of performance. Some Foreign Element arrives and we have uneven performance while we are in Chaos, searching for that Transforming Idea. We discover that TI and practice and integrate that idea into our work until we reach a New Status Quo, hopefully at a higher level of performance.

The change model works for each human and for the entire organization. In fact, I don’t see how you can have an agile organization until people become comfortable with small and large changes on a regular basis. This is one of the reasons I say we should take an agile approach to agile. (See Agile is Not a Silver Bullet.)

Do you need to be a fully adaptive and resilient organization? Only you can answer that question. Maybe you start from teams and the project portfolio. Maybe you look for incentives that don’t create optimization “up.” (I need a new word for this! Do you have suggestions for me?? Please comment if so.)

You do not have to have a fully agile organization to recognize benefits at the team and project portfolio levels. Agile is Not for Everyone.

Decide how much change your organization needs to be successful. Practice change, as often as you can. That will help you more than an agile label will.

One last comment: I’m not sure “scaling” is the right way to think about this. For¬†me, the scaling idea still has roots in silo thinking, not flow thinking. That could be just me. (Grin!) I wish I had a better way to explain it.

My ideas of scaling agile are about creating a culture based on our ability to change:

  • How do we treat each other? Do we/can we see each other as an intrinsic part of a whole?
  • What do we discuss? Do we discuss “failures” or learning opportunities?
  • What do we reward? How do we move to rewarding non-silo work? That is, work to deliver products (in either projects or programs) or other work that rewards collaboration and flow efficiency.

I hope you enjoyed this series. Please do comment on any of the posts. I am curious about what you think. I will learn from your comments.

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Mon, 06/19/2017 - 03:55

Under the current imperfect administration of the Universe, most new ideas are false, so most ideas for improvement make matters worse

Categories: Project Management

The Smell of Dysfunction and Corrective Action

Herding Cats - Glen Alleman - Sat, 06/17/2017 - 17:57

When we hear about some dysfunction in our work, a product, a system - Root Cause Analysis can provide the corrective action needed to remove the dysfunction and most importantly prevent the dysfunction from returning.

There are Eight Questions needed to gain insight into the source of the problem before conjecturing any fix. In the Apollo Root Cause Analysis method, we have both Events (Actions) and Conditions, so these Eight need to be restated:

  1. What was the harm from the event or the condition that created the primary effect?
  2. What was the significance of this effect? How is this measured? 
  3. What was the set-up for the event or the condition? In Apollo, the cause and the effect are the same thing in an infinite chain. My cause is your effect, which creates a new cause for a new effect.
  4. What triggered the event? Either an Action or a Condition. Find out, write that down, record the source of the observation and the connection in the chain of cause and effect.
  5. What made the harm as severe as it was? Assess - an after action report - what damage was done. 
  6. What kept the harm from being a lot worse? Look for preventative measures that are already in place.
  7. What should be learned from the event? This is the basis of another method, the Phoenix method of Root Cause Analysis, where Lessons Learned are captured and used to prevent or correct future problems.
  8. What should be done about the event? This is the Corrective Action Plan (CAP) for the Primary Effect. In Apollo, this is defined by removing one or more cause and effect paths.

Here's a framework for applying Apollo

Root cause analysis master plan from Glen Alleman

When you hear something like "Estimates are the smell of Dysfunction," you'll now know that it is complete and utter malarkey, since no cause has been stated and no corrective action has been devised that can be tested to prevent or correct that undefined smell. 

So don't let anyone get away with making an unsubstantiated statement about some problem they have observed and some suggested correction to that problem. It's intellectual laziness to make those statements. And intellectual laziness not to challenge them. 

Categories: Project Management

Performance-Based Project Management¬ģ

Herding Cats - Glen Alleman - Sat, 06/17/2017 - 14:00

Successfully managing projects, no matter the domain, the size, the development or engineering method starts with a set of Principles, Practices, and Processes.

Here are those Principles and Practices. The Principles are Immutable. The Practices are universal. The Processes need to be tailored to your domain.

When hearing about some new, supposed innovative way for delivering value in exchange for money, ask does that idea fit into any known set of Principles and Practices of project success?

Principles and Practices of Performance-Based Project Management¬ģ from Glen Alleman Related articles Applying the Right Ideas to the Wrong Problem Capabilities Based Planning Information Technology Estimating Quality
Categories: Project Management

The Five Laws of Software Estimating are Wrong

Herding Cats - Glen Alleman - Fri, 06/16/2017 - 17:11

There's a blog post from a few years back that has resurfaced The 5 Laws of Software Estimates. It's one of those posts that's heavy on opinion and light on principles. Time to revisit this ill-informed idea on why, what, and how we need to use estimates to make decisions in the presence of uncertainty. 

Each claimed Law of Software Estimating below is paired with the Fact of Software Estimating that rebuts it.

Law: Estimates are waste

A waste for whom?

Developers think they are waste, but it's not their money.

To those paying the developers, estimates provide actionable information needed to make decisions:

  • Can we afford to develop this feature?
  • If we develop this feature, when will it start earning its keep for the business?
  • What tradeoffs¬†are possible¬†in cost and schedule for this feature compared to other features?
Estimates are non-transferrable 

It's claimed that software estimates are not fungible.

Estimates are transferrable if you keep track of what you develop and codify those Features for future reference.

I work in a domain where a Central Repository is kept of all work performed and used to estimate other projects. This is called Reference Class Forecasting and is done in all mature project management domains.

This is basic business process improvement and part of any credible estimating process.
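A minimal sketch of Reference Class Forecasting as described above: estimate a new feature from the recorded outcomes of similar past work in a central repository. The repository contents and feature classes here are invented for illustration.

```python
# Hypothetical repository: actual effort (hours) for past features, by class.
repository = {
    "report":      [40, 55, 48, 60, 52],
    "integration": [120, 150, 135, 160],
}

def reference_class_estimate(feature_class):
    """Return the mean and range of actuals for the reference class."""
    actuals = repository[feature_class]
    mean = sum(actuals) / len(actuals)
    return mean, min(actuals), max(actuals)

mean, low, high = reference_class_estimate("report")
print(mean, low, high)  # → 51.0 40 60
```

Each completed feature feeds its actuals back into the repository, so the estimates are transferrable to the next project.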

Estimates are Wrong

 Estimates are NOT Guesses

This is a fundamental category error in knowledge. Assigning a meaning to a term that is incorrect.

Either a lack of knowledge about estimating or a willful lack of knowledge about estimating. I suspect the latter, since every High School statistics class introduces the notion of an estimate and the two attributes of that estimate:

  • Precision - what precision is needed to make your decision
  • Accuracy - what accuracy¬†is needed to make your decision

Define these before you start estimating, not afterward. 

"We need to know the cost for the product is at or below $1,200,000 with an 80% confidence" would be a statement a CIO might make to a development team.

Or we need to know you can produce that product for that cost "on or before" the end of the 2nd quarter.
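The CIO's statement above can be checked against a modeled estimate distribution: find the cost not exceeded with 80% confidence and compare it to the budget. The normal distribution and its parameters below are illustrative assumptions, not real data.

```python
import random

random.seed(7)  # repeatable run

# Hypothetical cost model: mean $1.05M, standard deviation $100k.
samples = sorted(random.gauss(1_050_000, 100_000) for _ in range(10_000))
p80 = samples[int(0.8 * len(samples))]  # cost not exceeded with 80% confidence
print(p80 <= 1_200_000)                 # → True: the commitment can be made
```

Note that precision and accuracy are properties of the decision, not the arithmetic - the percentile you choose follows from the confidence the decision-maker asked for.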

Estimates are Temporary

Yes they are - that's why the Estimate to Complete and the Estimate at Completion are continually updated with new data, better models, and empirical data from delivered work.

This is obvious to any mature estimating organization.

Estimates are Necessary

Correct. Businesses cannot make decisions about whether or not to build software without having some idea of the cost and time involved.

So why introduce the first four as Laws at all? Rather, state the first four as dysfunctions that need to be removed to increase the Value of estimates.    

So before you go off and Stop Estimating, ask yourself - or better, confirm with those paying for your work - whether there is truly no need to know how much it will cost to produce the Value asked for, and no need to know when that Value will arrive, at some degree of precision and accuracy.

If there is no need, then estimates likely don't need to be made.

If there is a need for all the right business reasons, then ignore the first 4 Laws, and learn how to make estimates in the presence of uncertainty for those whose money you are spending.

Related articles Information Technology Estimating Quality Humpty Dumpty and #NoEstimates Five Immutable Principles of Project Success and Estimating Why Guessing is not Estimating and Estimating is not Guessing Capabilities Based Planning Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Making Conjectures Without Testable Outcomes What's the Smell of Dysfunction?
Categories: Project Management

Defining ‚ÄúScaling‚ÄĚ Agile, Part 5: Agile Management

One of the challenges I see in organizations is how managers can use agile approaches. One of the biggest problems is that the entire organization is organized for resource efficiency (think silos of functional experts). Agile approaches use flow efficiency. Thinking in flow efficiency changes everything.

Many people in organizations believe that dividing up the work among specialists will get the work done faster. That’s the case in manufacturing. (Think piece work.) Manufacturing processes use¬†resource efficiency to reasonable effect. However, manufacturing does not account for innovation or learning. (Design of manufacturing processes is innovative and requires learning. That’s why manufacturing processes often prototype (iterate on their work) when they develop the process.)

Organizations who need to optimize for innovation or learning realize that the people work collaboratively. Collaborative work—including management work—demands flow efficiency instead of resource efficiency.

Flow efficiency helps people optimize “up” for lack of a better term. (If you have not yet read This is Lean: Resolving the Efficiency Paradox, I recommend you ¬†do so.)

Once you start to think about flow efficiency, you might start to think about throughput (and maybe cycle time and cumulative flow).  That changes the data the managers need to make decisions.

Now, it doesn’t matter what anyone’s utilization is. What matters is the Cost of Delay (or Cost of Delay/Duration). It might even matter where the bottlenecks are and who can unwedge those bottlenecks.
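Prioritizing by Cost of Delay divided by Duration (sometimes called CD3) can be sketched in a few lines. The feature names and values below are invented for illustration:

```python
features = [
    # (name, cost of delay per week, duration in weeks)
    ("checkout-rework", 10_000, 4),
    ("search-upgrade",   6_000, 1),
    ("reporting",        8_000, 8),
]

# Highest CoD/Duration first: short, urgent work beats long, less urgent work.
ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
print([name for name, *_ in ranked])
# → ['search-upgrade', 'checkout-rework', 'reporting']
```

Note that the feature with the highest raw Cost of Delay is not first - dividing by duration is what surfaces the quick, urgent win.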

An organization might still need to track the run rate for a project, program, or even a workgroup such as Customer Support. Run rate might not mean as much when you can measure revenue, customer acquisition and retention, and how often you can get the customer to return and buy more. (The more often you deliver value, the more you can acquire customers who pay.)

One manager learned about flow efficiency. She was managing the team working on the “most important” project in the company. She decided to stop tracking utilization. She told her team, “I want to track your throughput as a team. I want to know what I can do to improve the flow of work through your team. Please bring me your impediments to flow and I’ll address them.”

The team told her about build time (too slow) and insufficient test automation (not enough and too slow). She built the case using Cost of Delay/Duration to get other teams to help this team reduce build time and increase test automation.

The teams together took eight weeks to make what they called infrastructure improvements. After the first two of those eight weeks, the team finished twice as many stories (2 instead of 1) as they had before the improvements started. The team continued to increase their throughput. By the end of the eight weeks, the team was able to finish anywhere from 8-12 stories. The team sustained their now-higher level of throughput. After three months, the organization had attracted many more customers. They decided to move to a subscription model for their product, and recognized much more revenue.

Yes, a team’s ability to deliver value fast created feedback loops for management: product management, project portfolio management.

(I’m still reading about agile-useful metrics, so I’m sure there is more here.)

If managers worry about flow efficiency, they ask the teams, “What can I do to help your throughput?” The managers manage the project portfolio. Work flows through teams, not through people.

That changes what managers do. Top-level managers (and maybe product managers) define the strategy and talk to customers to see what customers need so they can refine the strategy. Top managers also consider new options for entirely new products—changes in strategy.

Middle managers plan and replan the project portfolio to implement the strategy. First-level managers remove impediments to collaboration.

Let me summarize a little:

Agile managers have a very different role than more traditional managers. They manage differently. Agile managers might use the two pillars of lean: respect for people and continuous improvement as a basis for their management style. To me, that means building relationships with each person the manager “manages,” ¬†the willingness to question all assumptions, and to look at the whole: what does it take for us to succeed as an organization. The idea of one function succeeding instead of all of us vanishes.

Flow efficiency changes the corporate culture: managers change what they discuss. Managers change what they reward. Managers change how they treat people and how they expect people to be treated, because it’s not about the individual “resource.” It’s about the system of work.

The posts so far aside from this post:

I’m quite sure this post is not perfect, but I have other writing to do. I expect to have one more summary post.

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 06/15/2017 - 05:11

Have a roadmap instead of a road - Adrian Bolboaca

Without the map, we can't know if we're on the right road. But the map is not the terrain, so both are needed for success.

Categories: Project Management