
Software Development Blogs: Programming, Software Testing, Agile Project Management


Herding Cats - Glen Alleman
Performance-Based Project Management® Principles, Practices, and Processes to Increase Probability of Success. Trust, but verify.

Closed Loop Control

Fri, 12/19/2014 - 04:14

Writing software for money is a Closed Loop Control System.

[Image: closed loop control system block diagram]

  • The Reference is our needed performance - cost, schedule, and technical - to achieve the project's goals. These goals include providing the needed business value, at the needed cost, on the needed day. Since each of these is a random variable, we can't state exactly what they are, but we can make a statement about their range and the confidence that they will fall inside that range in order to meet the business goals.
  • The System is the development process that takes money, time, and requirements and turns them into code.
  • The Sensor is the measurement system for the project. This starts with assessing the delivered code to assure it meets some intended capability, requirement, or business value. The target units of measure are defined before the work starts for any particular piece of code, so when that code is delivered we can know that it accomplished what we wanted it to do.
  • The Error Signal comes from comparing the desired state - cost, schedule, technical, value, or any other measure - with the actual state. This error signal is then used to adjust, or take corrective action, to put the system back in balance so it can achieve the desired outcome.
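The four elements above can be sketched in a few lines of code. This is an illustrative sketch, not anything from the original post - a simple proportional controller comparing desired performance to measured performance and issuing a correction:

```python
# Minimal closed-loop control sketch: comparing the desired state to the
# measured state produces an Error Signal, which drives a corrective action.
# All names, units, and the gain value are illustrative assumptions.

def error_signal(desired: float, actual: float) -> float:
    """The Error Signal is the difference between desired and actual state."""
    return desired - actual

def corrective_action(error: float, gain: float = 0.5) -> float:
    """A proportional correction: adjust effort in proportion to the error."""
    return gain * error

# One pass through the loop: we planned 40 units of progress, measured 30.
planned, measured = 40.0, 30.0
err = error_signal(planned, measured)    # the project is 10 units behind plan
adjust = corrective_action(err)          # add 5 units of effort next period
```

Without the `error_signal` comparison there is nothing to feed `corrective_action`, which is exactly the open-loop condition described below.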

Without the Desired State, the Current State, the comparison of the two, and the resulting Error Signal, the project is running open loop. We'll arrive when we arrive, at the rate of progress we are performing at, for the cost we are consuming. There is no information available to show what the performance of cost, schedule, or value production needs to be to arrive on time, on budget, and on value (or near enough to call it close).

And when you hear about control systems that don't follow the picture at the top, they're not Closed Loop. They may be Open Loop, but they are not Closed Loop.

Control systems from Glen Alleman


Categories: Project Management

The Myth and Half-Truths of "Myths and Half-Truths"

Thu, 12/18/2014 - 05:51

I love it when a post about Myths and Half-Truths is itself full of myths and half-truths. The original text is in italics, with the myth busted or the half-truth turned into actual truth below each.

Myth: If your estimates are a range of dates, you are doing a good job managing expectations.

  • Only the earlier, lower, more magical numbers will be remembered. And those will be accepted as firm commitments.

If this is the case, the communication process is broken. It's not the fault of the estimate or the estimator that Dilbert-style management is in place. If this condition persists, you had better look for work somewhere else, because there are much bigger systemic problems there.

  • The lower bound is usually set at "the earliest date the project can possibly be completed". In other words, there is absolutely no way the work can be completed any earlier, even by a day. What are the chances of hitting that exact date? Practice shows - close to nil.

The lower bound, the most likely value, and the upper bound must each have a confidence level. The early date with a 10% confidence says to the reader: there is no hope of making that. But the early date alone gets stuck in the mind as a confirmation bias.

Never ever communicate a date, a cost, a capability in the absence of the confidence level for that number.
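A date quoted with its confidence level can be read straight off a distribution of simulated outcomes. A minimal sketch of the idea - the triangular parameters here are invented for illustration, not from any real project:

```python
import bisect
import random

random.seed(7)

# Simulate project completion time (days) as a random variable: a
# triangular distribution with optimistic 20, most likely 30, and
# pessimistic 55 days (all invented numbers for illustration).
samples = sorted(random.triangular(20, 55, 30) for _ in range(10_000))

def confidence(days: float) -> float:
    """Probability of finishing on or before `days`."""
    return bisect.bisect_right(samples, days) / len(samples)

p10 = samples[int(0.10 * len(samples))]  # early date: only ~10% confidence
p80 = samples[int(0.80 * len(samples))]  # a date to commit to with ~80% confidence
```

Quoting the later date with its confidence ("we are 80% confident of finishing by day N") is the form demanded here; quoting the early date alone is the magical number that gets remembered.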

Making the estimate and managing expectations have no connection to each other. Poor estimates make it difficult to manage expectations, because those receiving the estimates lose confidence in the estimator when the estimates have no connection with the actual outcomes of the project.

Half-truth: You can control the quality of your estimate by putting more work into producing this estimate.

It's not the work that is needed, it's the proper work. Never confuse effort with results. Use Reference Class Forecasting, parametric models, and similar estimating tools, along with knowledge of the underlying uncertainties of the processes, technology, and people connected with delivering the outcomes of their efforts.

  • By spending some time learning about the project, researching resources available, considering major and minor risks, one can produce a somewhat better estimate.
  • The above activities are only going to take the quality of the estimate so far. Past a certain point, no matter how much effort goes into estimating a project, the quality of the estimate is not going to improve. Then the best bet is to simply start working on the project.

Of course this ignores the very notion of Subject Matter Expertise. Why are you asking someone who only knows one thing - a one-trick pony - to work on your new project? This is naive hiring and management.

This would be like hiring a commercial builder with little or no understanding of modern energy-efficient building, solar power and heating, thermal windows, and high-efficiency HVAC systems, and asking him to build a LEED-compliant office building.

Why would you do that? Why would you hire software developers that had little understanding of where technology is going? Don’t they read, attend conferences, and look at the literature to see what’s coming up? 

Myth: People can learn to estimate better, as they gain experience.

People can learn to estimate better as they gain skill, knowledge, and experience. All three are needed. Experience alone is necessary but far from sufficient. Experience doing it wrong doesn't lead to improvement; it only confirms that bad estimates are the outcome. There is a nearly endless supply of books, papers, and articles on how to properly estimate. Read, take a course, talk to experts, listen, and you'll be able to determine where you are going wrong. Then your experience will be of value, beyond confirming that you know how to do it wrong.

  • It is possible to get better at estimating ‚Äď if one keeps estimating the same task, which becomes known and familiar with experience. This is hardly ever the case in software development. Every project is different, most teams are brand new, and technology is moving along fast enough.

That's not the way to improve anything in the software development world. If you wrote code for a single function over and over again, you'd be a one-trick pony.

The notion that projects are always new is better stated as projects are new to me. Fix that by finding someone who knows what the new problem is about and hiring them to help you. Like the builder above, don't embark on a project where you don't have some knowledge of what to do, how to do it, what problems will be encountered, and what their solutions are. That's a great way to waste your customer's money and join a Death March project.

    • Do not expect to produce a better estimate for your next project than you did for your last one.

Did you not keep a record of what you did last time? Were you paying attention to what happened? No wonder your outcomes are the same.

    • By the same token, do not expect a worse estimate. The quality of the estimate is going to be low, and it is going to be random.

As Inigo Montoya says: You keep using that word. I do not think it means what you think it means. All estimates are random variables. These random variables come from an underlying statistical process - a Reference Class - of uncertainty about our work. Some uncertainties are reducible, some are irreducible. Making decisions in the presence of this uncertainty is called risk management. And as Tim Lister says, Risk Management is how Adults Manage Projects.
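The claim that estimates are random variables drawn from a Reference Class can be made concrete in a few lines. A sketch of reference class forecasting - the historical overrun ratios below are invented for illustration:

```python
import statistics

# A hypothetical reference class: actual/estimated cost ratios observed
# on past projects of the same kind (invented numbers for illustration).
overrun_ratios = [1.05, 1.20, 0.95, 1.40, 1.10, 1.30, 1.15, 1.60, 1.00, 1.25]

naive_estimate = 100_000  # dollars: the single-point "magical number"

# Reference class forecasting: adjust the naive estimate by the observed
# distribution of outcomes, and report a range, not a point.
median_ratio = statistics.median(overrun_ratios)              # 1.175
p90_ratio = sorted(overrun_ratios)[int(0.9 * len(overrun_ratios))]

likely_cost = naive_estimate * median_ratio        # the 50/50 outcome
budget_with_margin = naive_estimate * p90_ratio    # budget defensible at ~90%
```

The spread between `likely_cost` and `budget_with_margin` is exactly the irreducible uncertainty that margin and Management Reserve exist to cover.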

Half-truth: it is possible to control the schedule by giving up quality.

There are trade-offs between cost, schedule, and performance - quality is a performance measure in our Software Intensive Systems domain, along with many other ...ilities. So certainly the schedule can be controlled by trading off quality.

  • Only for short-term, throw-away temporary projects.

This is not a myth, it's all too common. I'm currently the Director of Program Governance at an organization where this decision was made long ago, and we're still paying the price for that decision.

  • For most projects, aiming for lower quality has a negative effect on the schedule.


Categories: Project Management

Conveniently Unburdened by Evidence

Wed, 12/10/2014 - 17:32

Conveniently Unburdened by Evidence - Kate Beckett to Richard Castle

When we hear a conjecture about a topic that skips over principles of business, the economics of decision making, or the mathematics of probabilistic and statistical modeling, listen to what Kate said to Richard. 

Putting This Skepticism To Work

There are three concerns for every project manager and those funding the work of the project †

  1. Schedule - Will the project go over schedule? All projects are probabilistic endeavors. Uncertainty abounds. Both reducible uncertainty and irreducible uncertainty. Work can address the reducible uncertainty. Buying down the risk associated with this reducible uncertainty. Irreducible uncertainty can only be addressed with margin. Schedule margin, cost margin, technical margin. 

  2. Cost - Will the project overrun its budget? Cost margin is needed to protect the project from an over-budget condition. This is called Management Reserve. But MR can only do so much; a credible estimate of the cost, and management of the work to that estimate, are also needed. Even with a credible estimate, MR and Contingency are still needed to avoid going over budget.

  3. Performance - Will the deliverables satisfy the goal(s) of the project? The technical performance of the deliverables is founded on the Measures of Effectiveness and Measures of Performance of the capabilities provided by the project. Capabilities Based Planning is the foundation of defining what DONE looks like in units of measure meaningful to the decision makers.
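The schedule and cost margin described above can be sketched numerically: Management Reserve is roughly the gap between a defensible cost percentile and the point estimate. A hypothetical sketch - the work-package figures and triangular spreads are invented:

```python
import random

random.seed(42)

# Simulate total project cost as the sum of uncertain work packages.
# Each package has a most-likely cost plus irreducible variation,
# skewed to the high side (all figures invented, in $K).
packages = [(50, 10), (120, 30), (80, 15)]  # (most likely, spread)

def one_outcome() -> float:
    """One Monte Carlo draw of the total project cost."""
    return sum(random.triangular(ml - s, ml + 2 * s, ml)
               for ml, s in packages)

outcomes = sorted(one_outcome() for _ in range(10_000))

point_estimate = sum(ml for ml, _ in packages)     # 250 $K, the "most likely" sum
p80_cost = outcomes[int(0.80 * len(outcomes))]     # cost with ~80% confidence
management_reserve = p80_cost - point_estimate     # margin for irreducible risk
```

Because the per-package uncertainty is skewed high, the sum of most-likely values is not an 80% number; the reserve makes up the difference.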

At the start and up until the end of a project, the answer to each of these questions is knowable to some degree of confidence - less in the beginning and more as the project progresses. A yes answer to any or all of the questions is taken to be an undesirable outcome. These are business questions as well as technical questions. But it is the business that is most interested in the answers and the confidence level of the answer - a simple Yes or No is not sufficient. Yes, we have an 80% confidence of completing on or before the need date. 

In The End

To provide answers to these questions before arriving at the end of the project, we need estimates. So when we answer Yes to any of the questions - which is unavoidable - we don't want to proceed in the absence of corrective actions that increase the probability of a desirable outcome. At the beginning of the project that confidence is low, since projects evolve. To provide credible answers about the confidence of arriving on time, on budget, with the needed capabilities, we must estimate not only the cost, schedule, and outcomes, but also the impact of our corrective actions.

If we fail to do this - whether through lack of knowledge or experience, or with intentional ignorance of the probabilistic processes of all projects - we've set the foundation for failure. The approach of making decisions in the absence of estimating the cost of that decision and its resulting impact ignores - with intent - the microeconomics of decision making. It ignores the opportunity cost of the decision. This opportunity cost must be estimated, since it will occur in the future and is usually beyond our ability to measure directly.

Ignoring opportunity cost and ignoring estimating the future is called Open Loop Control. To increase the probability of project success we need to apply the principles of Closed Loop Control. And when we manage projects with Open Loop processes, those providing us the money to produce value will be disappointed.

Control systems from Glen Alleman

† "Quantitative Risk Analysis for Project Management: A Critical Review," Lionel Galway, WR-112-RC, RAND Corporation, February 2004

Categories: Project Management

How to Think Like a Rocket Scientist - Irreducible Complexity

Wed, 12/10/2014 - 15:00

Orion launched today and was recovered after two orbits. Tests of the launch system, Pad Abort system, and heat shield were the main purposes of the flight.

I worked the proposal - after coming off the winning proposal for the Hubble Robotic Service Mission. The Crew Exploration Vehicle was the original name of the flight vehicle. The Integrated Master Plan and Integrated Master Schedule described the increasing maturity of the deliverables for the spacecraft and its flight support systems. After the contract win, I moved to the flight avionics firm and defined the IMP/IMS and project performance management processes for that major subcontractor. When you get to minute 21:17, the Tracking Data Relay Satellite is mentioned. I worked that project as a new graduate student many decades ago.

Starting back on TDRSS, agile practices - meaning emerging requirements, test-driven development, and direct customer feedback on short iterations - were deployed in the development process with rolling waves, 44-day-rule Work Packages, and emergent technical requirements derived from Mission Capabilities.

Here's the long version of the launch to orbit. 


After two orbits, Orion came home. The double boom is the sonic boom. Tests of the heat shield will confirm if it functioned properly.


Recently a statement was made about agile and complexity, conjecturing that if the project is too complex for a physical board - a place to put the stickies for the stories - then we've missed opportunities to simplify. This possibly fails to recognize that complexity, as well as complex systems, are the norm in many domains, and that managing complexity with tools - rather than manual means - is also the norm.

If your Agile planning needs are too complex for a physical board, you've probably missed opportunities to simplify / improve.

When I suggested that agile and agile tools are used to deal with complex problems in these environments, without the need to reduce that complexity, there was a conversation of sorts that suggested...

I'd be surprised to hear Orion was using a COTS Agile project management tool in a significant way

Some Necessary Complexity

On the Hubble mission, there is a Service Mission Assurance process that reveals some of the complexity of the System of Systems found in space flight - the Interface Control processes for the payload on STS-125, for example.


Knowledge of what tools were used, what processes were applied, and how the flight avionics software for Orion was converted from the 777 suite to the spacecraft suite - tested, altered to user needs, simulated, emulated, verified and validated on rolling waves, on 44-day iteration cycles - could only have been obtained by actually being in the building, in the vendor's shop.

But there are other surprises in the business of space flight.

Beyond the outsider's comments of surprise, inside space and defense firms agile tools from Rally, VersionOne, and JIRA are used in a wide variety of domains, from embedded systems to intelligence systems, where the requirements don't come from the users - they come from the enemy. Here's an example of agile in the INTEL business.

Maybe those surprised by the many different applications of the principles of agile - developed long before the Agile Manifesto - missed those processes in Building O6, Sepulveda Blvd, Redondo Beach, circa 1978.


In The End

There are numerous approaches to applying agile development in a wide variety of domains. I work in a domain where Systems Engineering and Earned Value Management are the starting point, and Agile is used to develop code guided by EIA-748-C and DI-MGMT-81861.

In these environments, development of software is incremental and iterative, with emerging requirements and stable capabilities. These programs are complex, and tools are the basis of success for managing all the moving parts of the program. Rarely is everyone in the same room, since these are System of Systems programs. As well, Integration and Test are done by external organizations - V&V for flight safety. So many of the processes found in small commercial projects are not applicable to programs in our domain.

To suggest there is but one way to reduce complexity - putting all the stories on cards on the wall - is a bit naive in the absence of establishing the external needs of the project first, then deciding what processes to apply.

Some background on applying agile in the DOD can be found at:

Domain first, Context second, Only then Process

Categories: Project Management

Risk Management is Project Management for Adults

Tue, 12/09/2014 - 16:49

In a recent presentation, Tim Lister speaks further about managing in the presence of uncertainty and the application of agile in software development. Plans NEVER go right; planning in the presence of uncertainty requires - DEMANDS actually - estimating the risks, uncertainties, and unknowns on the project. When we hear about making decisions in the absence of estimating the probability of the drivers, impacts, and outcomes, remember what he says: This is a crock. Making decisions in the absence of estimating is a crock. Ignoring the probabilistic behaviour of impacts from the future - microeconomics - is, as Tim suggests, childish behaviour.


Managing in the Presence of Uncertainty

Here's an extract from a much larger briefing on managing complex, software intensive systems in our domain - Enterprise IT. The critical issue here is that uncertainty is always present. Failure to recognize this, failure to deal with it, failure to make decisions based on the underlying statistical and probabilistic aspects of this uncertainty is, as Tim suggests, childish.

If we're looking for where we need estimates, look for the word potential - potential being something that might occur in the future. Potential cost, potential schedule, potential event.



Effective risk management - and therefore effective project management and effective value delivery - requires navigating through this causal chain, assessing the current potential for loss, and implementing strategies for minimizing the potential for loss. The next section builds on the concepts in this section by examining two fundamental approaches for analyzing risk.

Managing in the presence of uncertainty from Glen Alleman

So when we hear...

Decisions can be made in the absence of estimates. Ask how. Ask to be shown the tangible evidence. Ask for the mechanics by which this can be possible.

Categories: Project Management

Three Increasingly Mature Views of Estimate Making in IT Projects - Update

Mon, 12/08/2014 - 19:02

There remain serious misunderstandings of how, why, when, and for what purpose estimates of cost, schedule, and delivered capabilities are made in the development of software systems using other people's money.

There are three distinct approaches to the problem:

The first paper shows the self-selected projects and how they have completed - for the most part - longer than the ideal initial estimates. These estimates are not calibrated, meaning they are not assessed for credibility, error bands, or confidence. The paper notes that the solid line, initial versus actual, is the ideal line where actuals meet estimated values. In any stochastic estimating process, it is unlikely any estimate will match the actual, for the very simple reason that the work processes are random; when the estimates don't contain probabilistic confidence intervals, the actual MUST be different from the estimate.
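That an actual will essentially never land exactly on an uncalibrated point estimate is easy to demonstrate by simulation: sample outcomes from a continuous random process and count the exact hits. A toy sketch with invented parameters:

```python
import random

random.seed(1)

estimate = 30.0  # a point estimate of duration in days, no confidence interval

# Actual durations are draws from a continuous random process
# (triangular parameters invented for illustration).
actuals = [random.triangular(20, 55, 30) for _ in range(100_000)]

# Exact matches with the point estimate: essentially zero.
exact_hits = sum(1 for a in actuals if a == estimate)

# But a stated range with a confidence attached captures many outcomes.
within_interval = sum(1 for a in actuals if 25 <= a <= 40)
```

This is the arithmetic behind the claim above: without confidence intervals on the estimate, the actual must differ from it, so comparing a point estimate to a point actual tells you nothing by itself.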

As well, no root cause for the unfavorable performance of the actuals compared to the initial estimates is provided. This is a core failure to understand the process of estimating, root cause analysis, and the discovery of the corrective actions needed to improve both the estimating processes and project performance management.

This fundamental failure is not limited to the self-selected set of projects in the paper. This failure mode can be found in a wide variety of project domains, in and out of the software business.

The second paper speaks to the major flaws of the Standish Report - meaningless figures, self-selected samples, perverted accuracy, unrealistic rates, and misleading definitions. The paper states the root causes and suggests corrective actions.

The third paper shows how to quantify IT forecasts (estimates of future outcomes) in a mathematically sound manner.

All of these papers are useful in the right context. Little re-introduces Boehm's cone of uncertainty, the assessment of Standish shows the traps that can easily be fallen into when good statistical practices are not followed, the third provides the mathematical foundation for restoring those sound practices, and the RAND report shows the mechanics of the corrective actions to restore credibility in software estimating.

Here is a risk-based view of the estimating problem, developed for the recently launched and recovered Orion, then called the Crew Exploration Vehicle.

Probabilistic Schedule and Cost Analysis from Glen Alleman
Categories: Project Management

Building Software Before Requirements Are Stable??

Fri, 12/05/2014 - 14:47

There is a popular notion that requirements stability is difficult to achieve, because customers don't know what they want. Ignoring for the moment that requirements instability is the root cause of Death March projects, let's pretend requirements stability is in fact hard to come by.

If you have a project without requirements stability, the architecture of the underlying system becomes paramount, since the components of the system are guaranteed to change.

What is software architecture?

Software application architecture is the process of defining a structured solution that meets all of the technical and operational requirements, while optimizing common quality attributes such as performance, security, and manageability. It involves a series of decisions based on a wide range of factors, and each of these decisions can have considerable impact on the quality, performance, maintainability, and overall success of the application.

This means that the underlying architecture must

  • Be based on a reference design, so the structure has already been shown to work
  • The transaction model must be separated from the data model and the display model
  • There must be strict, some would say ruthless, separation of concerns, for example


  • Data warehouse paradigm, where the data for the transactions and the resulting display and interaction is completely isolated
  • A federated architecture, where systems are integrated through a data bus or some other form of isolation
  • The referential integrity of the data must be ruthlessly maintained and enforced
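The separation called for above can be sketched as layers with narrow interfaces. A hypothetical Python sketch - all class and method names are illustrative assumptions, not from the post:

```python
# Ruthless separation of concerns: the data model, transaction model,
# and display model are isolated behind narrow interfaces, so each can
# change without touching the others.

class DataModel:
    """Owns storage and referential integrity; nothing else touches rows."""
    def __init__(self):
        self._orders: dict[int, float] = {}

    def save_order(self, order_id: int, amount: float) -> None:
        if amount < 0:
            raise ValueError("domain integrity enforced in one place")
        self._orders[order_id] = amount

    def get_order(self, order_id: int) -> float:
        return self._orders[order_id]

class TransactionModel:
    """Business rules only; delegates persistence to the data model."""
    def __init__(self, data: DataModel):
        self._data = data

    def place_order(self, order_id: int, amount: float) -> None:
        self._data.place = None  # no hidden state: everything goes through save
        self._data.save_order(order_id, amount)

class DisplayModel:
    """Presentation only; reads through the data model, never writes."""
    def __init__(self, data: DataModel):
        self._data = data

    def render_order(self, order_id: int) -> str:
        return f"Order {order_id}: ${self._data.get_order(order_id):.2f}"

# Swapping any one layer (say, a new display) leaves the others untouched.
data = DataModel()
TransactionModel(data).place_order(1, 19.99)
```

When requirements change, the change is confined to one layer; without this isolation, every change ripples through the whole system - the Ball of Mud described next.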

Without these, the result of low requirements fidelity is the Ball of Mud or Shanty Town architecture. The Ball of Mud paradigm is:

A BIG BALL OF MUD is a casually, even haphazardly, structured system. Its organization, if one can call it that, is dictated more by expediency than design. Yet, its enduring popularity cannot merely be indicative of a general disregard for architecture.

This means there is:

  • Throw-away code - we'll fix this now and worry about the outcomes later
  • Piecemeal code that is installed to fix the current problems, only to create future problems
  • Keep it working - we're on a deadline, we need to get this fixed and move on

And of course the biggest lie of all for any non-trivial system

Our architecture emerges as the requirements become known

This approach is the basis of Shanty Town or Ball of Mud architecture of the modern hacked together systems we've all come in contact with.

Related articles Software Estimating for Non Trival Projects
Categories: Project Management

Is Programming the Same as Software Engineering?

Thu, 12/04/2014 - 16:50

When we hear I'm a developer, is that the same as I'm a Software Engineer? What does it mean to be a software engineer versus a developer of software? Peter Denning's review of the book A Whole New Engineer in the December edition of Communications of the ACM speaks to this question.

There are several important ideas here. One example in the review was from a 1980s study at the research institute at NASA-Ames Research Center, where computer scientists were brought together with NASA scientists on big problems in space and aeronautics. Our scientists pioneered applying supercomputers instead of wind tunnels to the design of full aircraft, conducting science operations from great distances over a network, and studying neural networks that could automate tasks that depend on human memory and experience. But there was a breakdown: our NASA customers frequently complained that our engineers and scientists failed to make their deliverables.

This was a major issue, since the research funding for the institute came mainly from our individual contracts with NASA managers. Failure to make deliverables was a recipe for non-renewal and loss of jobs. NASA managers said, "your work is of no value to us without the deliverables we agreed to," and our scientists responded, "sorry, we can't schedule breakthroughs." This disconnect seemed to be rooted in the gap between what engineering and science schools teach and the realities of customer expectations in the workplace.

This extract from the review brings up a smaller issue: the notion that we can't possibly estimate when we'll be done or what it will cost. And the popular notion that...

Estimates are difficult. When requirements are vague - and it seems that they always are - then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements - and it seems that they never are - it is still almost impossible to know how long something will take, because we've never done it before.

The rest of the book review speaks to the new engineer gap and how it is being closed, with these principles (I've included the ones most interesting to me personally):

  • Become competent at engineering practices and technologies.
  • Learn to be a designer - someone who can propose combinations of existing components and technologies to take care of real concerns.
Categories: Project Management

Integrated Master Plan and Department of Energy Program Management

Wed, 12/03/2014 - 16:05

Working in Phoenix this week on a Program Management Office deployment for a government agency providing public health services. We're installing processes and training staff on the notion of Measures of Success (a term I had not heard before), which has a close resemblance to the Integrated Master Plan (IMP).

Here's a paper given at the Energy Facilities Contractors Organization (EFCOG) on a similar topic.

There are several critical points here:

  • If we don't know what capabilities the project is supposed to produce and what the Measures of Effectiveness and Measures of Performance are for those capabilities, it's going to be hard to know what done looks like.
  • If we don't know what Done looks like, someone is going to have to pay to find out. That cost has to be absorbed somewhere, many times by the project itself.
  • When we hear...

Estimates are difficult. When requirements are vague - and it seems that they always are - then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements - and it seems that they never are - it is still almost impossible to know how long something will take, because we've never done it before.

This is going to turn out to be true. The project is going to spend money on discovering the requirements that deliver the capabilities. This is the case sometimes, but that cost needs to be accounted for in the business case.

The last sentence above is actually not the case, except where the development team has no experience in the business or technical domain. And in that case, why did you hire people to spend your money when they've never done this kind of work before?

  • So in the end: define the capabilities, hire people who know something about what you want done, make sure their Past Performance assures they have a grasp on the problem, and fund the project with enough budget to discover the requirements that deliver the needed capabilities.
Categories: Project Management

Quote of the Day

Tue, 12/02/2014 - 15:02

Plans are only good intentions unless they immediately degenerate into hard work. - Peter F. Drucker

Categories: Project Management

Systems Engineering in the Enterprise IT Domain

Mon, 12/01/2014 - 06:14

Systems Engineering has two components

  • System - a set of interrelated components working together toward some common objective.
  • Engineering - the application of scientific principles to practical ends, as in the design, construction, and operation of efficient and economical structures, equipment, and systems.

When we work in the Enterprise IT domain or any Software Intensive Systems domain ... engineering is focused on the system as a whole; it emphasizes its total operation. It looks at the system from the outside, that is, at its interactions with other systems and the environment, as well as from the inside. It is concerned not only with the engineering design of the system but also with external factors, which can significantly constrain the design. These include the identification of customer needs, the system operational environment, interfacing systems, logistics support requirements, the capabilities of operating personnel, and such other factors as must be correctly reflected in system requirements documents and accommodated in the system design. [Systems Engineering Principles and Practices, Alexander Kossiakoff, John Wiley & Sons]

So what does this mean in practice?

It means that when we start without knowing what DONE looks like, no method, no technique, no clever process is going to help us discover what DONE looks like until we spend a pile of money and expend a lot of time trying out various ideas in our search for DONE.

What this means is that emergent requirements mean wandering around looking for what DONE looks like. We need to state DONE in units that connect with Capabilities to fulfill a mission or deliver success for a business case.

What this doesn't mean is that we need the requirements up front. In fact we may not actually want the requirements up front. If we don't know what DONE means, those requirements must change, and that change costs much more money than writing down what DONE looks like in units of measure meaningful to the decision makers.

So Here Are Some Simple Examples of What a Capability Sounds Like

  • We need the capability to pre-process insurance claims at $0.07 per transaction rather than the current $0.11 per transaction.
  • We need the capability to remove 1½ hours from the retail ordering process once the merger is complete.
  • We need the capability to change the Wide Field Camera and the internal nickel hydride batteries, while doing no harm to the telescope.
  • We need the capability to fly 4 astronauts to the International Space Station, dock, stay 6 months, and return safely.
  • We need the capability to control the Hellfire missile with a new touch panel while maintaining the existing navigation and guidance capabilities in the helicopter.
  • We need the capability to comply with FAR Part 15 using the current ERP system and its supporting work processes.

Here's a more detailed example

Identifying System Capabilities is the starting point for any successful program. System Capabilities are not direct requirements, but statements of what the system should provide in terms of “abilities.”

For example there are three capabilities needed for the Hubble Robotic Service Mission:

  • Do no harm to the telescope - it is very fragile
  • Change the Wide Field Camera - was built here in Boulder
  • Connect the battery umbilical cable - like our cell phones they wear out

How is this to be done and what are the technical, operational, safety and mission assurance requirements? Don’t really know yet, but the Capabilities guide their development. The critical reason for starting with capabilities is to establish a home for all the requirements.

To answer the questions:

  • Why is this requirement present?
  • Why is this requirement needed?
  • What business or mission value does fulfilling this requirement provide?

Capabilities statements can then be used to define the units of measure for program progress. Measuring progress with physical percent complete at each level is mandatory. But measuring how the Capabilities are being fulfilled is most meaningful to the customer. The “meaningful to the customer” units of measure are critical to the success of any program. Without these measures the program may be a cost, schedule, and technical success but fail to fulfill the mission.

This is the difference between Fit for Purpose and Fit for Use.

The process flow below is the starting point for identifying the Needed Capabilities and determining their priorities. Starting with the Capabilities prevents the “Bottom Up” requirements gathering process from producing a “list” of requirements - all needed - that is missing a well formed topology. This Requirements Architecture is no different than the Technical or Programmatic architecture of the system.

Capabilities Based Planning (CBP) focuses on “outputs” rather than “inputs.”

These “outputs” are the mission capabilities fulfilled by the program. Without the capabilities, it is never clear the mission will be a success, because there is no clear and concise description of what success means. Success means providing the needed capabilities, on or near schedule and cost. The concept of CBP recognizes the interdependence of systems, strategy, organization, and support in delivering the capability, and the need to examine options and trade-offs in terms of performance, cost, and risk to identify optimum development investments. CBP relies on Use Cases and scenarios to provide the context to measure the level of capability.
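The idea that a capability carries its own "meaningful to the customer" unit of measure can be made concrete. Here is a minimal sketch - the class and field names are illustrative, not from any cited method - pairing a capability statement with its measure, target, and assessed value, using the insurance claims example above:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """A needed capability with its 'meaningful to the customer' measure."""
    statement: str   # what ability the system must provide
    measure: str     # unit of measure for assessing fulfillment
    target: float    # the needed value of that measure
    current: float   # the value assessed so far

    def fulfilled(self) -> bool:
        # Here "lower is better" (e.g., cost per transaction); a fuller
        # model would carry the direction of goodness for each measure.
        return self.current <= self.target

# The insurance-claims capability from the examples above, in this toy form:
claims = Capability(
    statement="Pre-process insurance claims",
    measure="USD per transaction",
    target=0.07,
    current=0.11,
)
print(claims.fulfilled())  # False - the capability is not yet fulfilled
```

With the target defined before work starts, each assessment of `current` is a direct measure of progress toward the capability, not just toward the code.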

Here's One Approach For Capturing the Needed Capabilities

Screen Shot 2014-11-30 at 8.26.59 PM

In Order To Capture These Needed Capabilities We Need To...

Screen Shot 2014-11-30 at 8.29.12 PM

What Does All This Mean?

When we hear of all the failures of IT projects, and other projects for that matter, the first question that must be answered is 

What was the root cause of the failure?

Research has shown that unclear, vague, and many times conflicting requirements are the source of confusion about what DONE looks like. In the absence of a definitive description of DONE in units of effectiveness and performance, those requirements have no home to be assessed for their appropriateness. 

Categories: Project Management

Estimating Guidance - Updated

Sat, 11/29/2014 - 20:29

There is an abundance of estimating guidance to counter the abundance of ill-informed notions about estimating. Here's some we use on our programs:

The list goes on for hundreds of other sources. Google "software cost estimating." But here's the core issue, from the opening line of the Welcome section of Software Estimation: Demystifying the Black Art, by Steve McConnell:

The most unsuccessful three years in the education of cost estimators appears to be fifth-grade arithmetic - Norman R. Augustine

Augustine is the former Chairman and CEO of Martin Marietta. His seminal book, Augustine's Laws, describes the complexities and conundrums of today's business management and offers solutions. Anyone interested in learning how successful management of complex technology-based firms is done should read that book.

All Project Processes Are Driven By Uncertainty

The hope that uncertainty can be "programmed" out of a project is a false hope. However, we can manage in the presence of these uncertainties by understanding the risk they represent, and addressing each in an appropriate manner. In Against the Gods: The Remarkable Story of Risk, author Peter Bernstein states one of the major intellectual triumphs of the modern world is the transformation of uncertainty from a matter of fate to an area of study. And so, risk analysis is the process of assessing risks, while risk management uses risk analysis to devise management strategies to reduce or ameliorate risk. 

Estimating the outcomes of our choices - the opportunity cost paradigm of microeconomics - is an integral part of managing in the presence of uncertainty. To develop a credible estimate we need to identify and address four types of uncertainty on projects:

  1. Normal variations occur in the completion of tasks arising from normal work processes. Deming has shown that these uncertainties are just part of the process, and attempts to control them, plan around them, or otherwise remove them are a waste of time. Mitigations for these normal variations include fine-grained assessment points in the plan verifying progress. The assessment of these activities should be done in a 0% or 100% manner. Buffers and schedule margin are inserted in front of the critical activities to protect against their slippage. Statistical process control approaches forecast further slippage.
  2. Foreseen uncertainties are identified but have uncertain influences. Mitigations for these foreseen uncertainties are done by defining contingent paths forward in the plan. These on-ramp and off-ramp points can be taken if needed.
  3. Unforeseen uncertainties are events that can’t be identified in the planning process. When these unforeseen uncertainties appear, new approaches must be developed.
  4. Chaos appears when the basic structure of the project becomes unstable, with no ability to forecast its occurrence or the uncertainties it produces. In the presence of chaos, continuous verification of the project’s strategy is needed. Major iterations of deliverables can isolate these significant disruptions.
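For the first type - normal variation - the buffers and schedule margin mentioned above can be sized from the statistics rather than guessed. A minimal Monte Carlo sketch, with made-up three-point task estimates, shows how far an 80% confidence completion sits above the single-point "most likely" plan:

```python
import random

random.seed(1)

# Hypothetical task durations in days as (min, most likely, max) -
# the three-point estimates behind the normal variation of type 1 above.
tasks = [(3, 5, 9), (2, 4, 7), (5, 8, 14), (1, 2, 4)]

def one_trial() -> float:
    # Triangular draws model the irreducible day-to-day variation.
    # Note random.triangular takes (low, high, mode).
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)

trials = sorted(one_trial() for _ in range(10_000))

most_likely = sum(t[1] for t in tasks)        # naive single-point plan
p80 = trials[int(0.80 * len(trials))]          # 80% confidence duration

print(f"single-point plan: {most_likely} days")
print(f"80% confident:     {p80:.1f} days")
print(f"needed margin:     {p80 - most_likely:.1f} days")
```

The gap between the single-point plan and the 80% figure is the schedule margin to insert in front of the critical activities; the distributions here are illustrative, and a real program would use its historical performance data instead.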

Managing in the Presence of Uncertainty

Uncertainty management is essential for any significant project. Certain information about key project cost, performance, and schedule attributes is often unknown until the project is underway. Risks emerging from these uncertainties that can be identified early in the project but impact it later are often termed “known unknowns.” These risks can be mitigated with a good risk management process. For risks beyond the vision of the project team, a properly implemented risk management process can also rapidly quantify the risk's impact and provide sound plans for mitigating its effect.

Uncertainty and the resulting risk management is concerned with the outcome of future events, whose exact outcome is unknown, and with how to deal with these uncertainties. Outcomes are categorized as favorable or unfavorable, and risk management is the art and science of planning, assessing, handling, and monitoring future events to ensure favorable outcomes. A good risk management process is proactive and fundamentally different than issue management or problem solving, which is reactive.

Risk management is an important skill applied to a wide variety of projects. In an era of downsizing, consolidation, shrinking budgets, increasing technological sophistication, and shorter development times, risk management provides valuable insight to help key project personnel plan for risks, alert them of potential issues, analyze these issues, and develop, implement, and monitor plans to address risks  long before they surface as issues and adversely affect project cost, performance, and schedule.

Project management in the presence of uncertainty, and the risks this creates, requires - actually mandates - estimating the outcomes of these uncertainties. As Tim Lister advises in "Risk Management Is Project Management for Adults," IEEE Software, May 1997:

Risk Management is Project Management for Adults

In the End

So those conjecturing that software estimating can't be done have either missed that 5th grade class or are intentionally ignoring the basis of all business decision making - the assessment of opportunity costs using microeconomics.

As DeMarco and Lister state:

An almost-defining characteristic of adulthood is a willingness to confront the unpleasantness of life, from the niggling to the cataclysmic.

Categories: Project Management

Bad Estimation Is a Systematic Problem

Fri, 11/28/2014 - 16:53

Vasco Duarte posted on Twitter a quote (without a link) ...

Bad estimation is a systematic problem, not an individual failure ...

The chart below is from a much larger briefing on Essential Views needed to increase the probability of success in the software intensive systems domain. Vasco is correct. The question is what to do about it.

Screen Shot 2014-11-28 at 8.27.22 AM

So the question for those ACAT 1 Programs ($6B and above), and every other domain where cost and schedule overruns are common, is: what's the approach? In our domain we have approaches. One is below; there are others.

But it would seem that "not estimating" is an unlikely candidate for addressing poor estimating.

Screen Shot 2014-11-28 at 8.49.34 AM 


Categories: Project Management

Complex Project Management

Wed, 11/26/2014 - 19:53

Effective Complex Project Management

There is much confusion in the domain of project management, and especially software projects, between complexity, complex, and complicated. Wikipedia definitions almost always fall short.

The book on the left is the latest addition to this topic in the domain of agile software development. The book is based on an Adaptive Complex Project Framework. The notion - a naive notion - that complexity can be reduced and complex systems should be avoided is just that: notional. In practice complex systems can't be avoided in any business or technical domain where mission critical systems exist. That is, non-trivial systems are complex.

These systems include System of Systems, Enterprise Systems, Federated Systems, and systems in which interaction with other systems is needed to accomplish the mission or business goal.

The book emerged from Capitalizing on Complexity, a 2010 IBM survey of 1,541 executives in 60 countries about their preparedness for complex systems work. From the report there are ten Critical Success Factors, in priority order:

  1. Executive support - if those at the top aren't willing to support your project, it's going to be difficult to get help when things start going bad. And they will go bad, because projects break new ground, and this creates push back from those uninterested in breaking new ground.
  2. User involvement - projects are about users getting their needs met through new capabilities, delivered through technical and operational requirements.
  3. Clear business objectives - if we don't know what Done looks like in units of measure meaningful to the decision makers, we'll never recognize Done before we run out of time and money.
  4. Emotional maturity - project work is hard work. If you're easily offended by blunt questions about where you're going, how you're going to get there, what assures that when you arrive the product or service will actually work, and what evidence there is to show you spent the money wisely - you're not ready for project work. Project work is about the project. People deliver projects, but it's about the mission or business goal. Maturity in all things is required for success.
  5. Optimizing scope - full functionality can never be foreseen. But a set of needed capabilities must be foreseen if the project is not to turn into a death march - exploring and searching for what Done looks like. Only in a research project do capabilities emerge. If you're spending your customers money, have some definitive notion of what capabilities will result from this spend. What Measures of Effectiveness and Measures of Performance will be used to assure progress is being made toward the goal of delivering those capabilities.
  6. Agile process - no one has visibility into what will emerge in the future. Be Prepared - the Boy Scout motto - for new information and surprises. Ask and answer: how long are you willing to wait before discovering you are late, over budget, and the gadget you're building doesn't work? The answer is ½ the time needed to take corrective action. In other words, determine progress to plan iteratively every few weeks so you have sufficient time to fix what you broke. This is a pure closed loop control system problem. The sampling rate to remain in control comes from the Nyquist rate: sample at intervals of ½ the period of change of the control variable.
  7. Project management expertise - managing projects is all about having a plan: a sequence of work activities that deliver incremental, increasing-maturity capabilities for the planned cost, to produce the planned value. All measures of cost and value are monetary.
  8. Skilled resources - work gets done by skilled, experienced people. If they've not seen the problem before or know someone who has, then it's going to be a rough ride. 
  9. Execution - it's all about execution. Execution at a sustainable pace, with tangible outcomes that can be assessed for their increasing maturity in units meaningful to the decision makers. The notion that working software has to be delivered every few days is totally domain dependent. The notion that requirements go stale is equally domain dependent. Don't listen to anyone with an idea who doesn't define the domain where that idea is known to work. Such a person is just blowing smoke.
  10. Tools and Infrastructure - tools are critical. Any complex project is too complex for one person to manage the data, processes, people, and progress. When you hear complexity is bad, reduce complexity, ask if they have ever actually managed a complex project. If the answer is no, they should be quiet.
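The closed loop sampling idea in item 6 can be sketched in a few lines. The numbers and names below are purely illustrative: planned versus actual physical percent complete are compared at each sample point, and the error signal drives the decision to take corrective action while there is still time for it to bite:

```python
def error_signal(planned_pct: float, actual_pct: float) -> float:
    """Desired state minus current state - the signal driving correction."""
    return planned_pct - actual_pct

# If corrective action takes ~4 weeks to take effect, sample every
# 2 weeks - the "half the correction time" rule of thumb from item 6.
correction_lag_weeks = 4
sample_period_weeks = correction_lag_weeks / 2

plan =   [10, 25, 40, 55, 70, 85, 100]   # planned physical % complete
actual = [ 8, 20, 30, 42, 55, 70,  88]   # assessed physical % complete

for sample, (p, a) in enumerate(zip(plan, actual)):
    e = error_signal(p, a)
    # A 5-point tolerance band is an arbitrary threshold for this sketch.
    action = "take corrective action" if e > 5 else "on track"
    print(f"sample {sample}: error {e:+.0f}% -> {action}")
```

Running open loop means never computing `error_signal` at all: the project arrives when it arrives, at whatever cost it consumed.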
Categories: Project Management

Estimating on Agile Projects

Wed, 11/26/2014 - 15:40

The current issue of ICEEAWorld has an article on estimating on agile projects.

Screen Shot 2014-11-25 at 5.06.11 PM Screen Shot 2014-11-25 at 5.06.35 PM

Categories: Project Management

Constructing a Credible Estimate

Tue, 11/25/2014 - 17:56

To build a credible estimate for any project, in any domain, to produce a solution to any problem, we need to start with a few core ideas.

  • Gather historical data.
    • Unless you're inventing new physics, it is very unlikely what you want to do hasn't been done already, somewhere by someone.
    • We hear all the time this project is unique. Really?
    • This has NEVER been done before?
    • There is no reference design for what we want to do?
    • We are actually inventing the solution out of whole cloth?
  • Gather information about this specific project.
    • This doesn't mean full detailed requirements. That's just not going to happen on any real project.
    • Gather needed Capabilities. Follow the Capabilities Based Planning advice.
    • Sort these capabilities using whatever method you want, but sort them in some priority so an Analysis of Alternatives can be performed.
    • Capabilities are not requirements. Capabilities state what you'll be doing with the results of the project and how what you'll be doing will produce the planned value from the project.
  • Break out some statistical tools - Excel will work.
    • Does the historical data have any statistical confidence that it represents the actual past performance?
    • I see all the time 20 samples of stories that have ±50% variances over the period of performance. The Average is then used. Don't do this.
      • First, the Most Likely is the number you want. That's the Mode, the most recurring value, of all the numbers you have.
      • Next, read The Flaw of Averages on how you can be fooled by averages.
  • Finally, to produce a credible estimate, you'll need:
    • Experience
    • Skills
    • Knowledge
    • Data
    • Tools
    • People
    • Process
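The "don't use the Average" advice above is easy to demonstrate with the Python standard library (Excel's AVERAGE and MODE functions behave the same way). With a right-skewed sample of story durations - invented here for illustration - the mean is pulled well above the Most Likely value:

```python
import statistics

# Hypothetical story durations in days - a right-skewed sample like the
# ±50% variance data described above.
durations = [2, 2, 3, 3, 3, 3, 4, 4, 5, 6, 8, 13]

mean = statistics.mean(durations)      # the misleading "average"
mode = statistics.mode(durations)      # the Most Likely value
median = statistics.median(durations)

print(f"mean:   {mean:.1f} days")   # 4.7 - pulled up by the long tail
print(f"mode:   {mode} days")       # 3
print(f"median: {median} days")     # 3.5
```

Planning every story at the mean overstates the typical story by more than 50% here, while saying nothing about the tail risk; the Mode plus an explicit margin for the tail is the more honest statement.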

Screen Shot 2014-11-25 at 3.56.13 PM

If you're missing any of the items in this picture, it's going to be a disappointing effort. Some may even call it a waste to estimate - but not for the reasons you think. It is a waste to estimate if you don't know how to estimate. But estimates are not for you, unless you're the one providing the money. They're for those providing the money, expecting the outcomes from that expense to show up on some needed date, with the needed value that provides them the ability to earn back the money.

Categories: Project Management

Local Firm Has Critical Message for Project Managers

Tue, 11/25/2014 - 16:47

Rally Software is a local firm providing tools for the management of agile projects. Project Managers provide the glue for all human endeavors involving complex work processes. Rally has those tools, as do many others. Rally also has a message that needs to be addressed by the project management community. Organizing, planning, and executing social projects is one of the roles project managers can contribute to.

SIMposium 2014 - Denver, from Ryan Martens
Categories: Project Management

Mike Cohn's Agile Quotes

Mon, 11/24/2014 - 17:30

Screen Shot 2014-11-21 at 1.17.19 PM

Mike Cohn of Mountain Goat Software has a collection of 101 Agile Quotes.

There are a few I have heartburn with, but the vast majority are right on.

Some of my favorites:

  • Planning is everything, plans are nothing - Field Marshal Helmuth von Moltke. This is a much misused quote. In the military business, like all businesses that spend lots of money with high risk and high reward, we need a plan. That plan is a strategy for the success of the project, be it D-Day or an ERP deployment. That strategy is actually a hypothesis, and the hypothesis needs tests. That's what the plan describes: the tests that confirm the strategy is working. To conduct the tests, we need to perform work. When a test shows the strategy is not working, we need a new strategy. That is, we change the plan.
  • To be uncertain is to be uncomfortable, but to be certain is to be ridiculous - Chinese proverb. Another misused quote. All project work is uncertain. Managing in the presence of uncertainty is part of good management. Uncertainty creates risk, and risk management is how adults manage projects - Tim Lister.
  • Scrum without automation is like driving a sports car on a dirt track - you won't experience the full potential, you will get frustrated, and you will probably end up blaming the car - Ilan Goldstein. Tools are the basis of all process and process improvement: paper on the wall, software management tools. Those who suggest that tools are ruining agile aren't working on complex projects.
  • If you define the problem successfully, you almost have the solution - Steve Jobs. This is the role of Plans, Integrated Master Planning in our domain, where the outcomes are described in units of Effectiveness and Performance in an increasing maturity cycle.
Categories: Project Management

Software Estimating for Non Trivial Projects

Sun, 11/23/2014 - 16:26

When we read on a blog post that estimates are not meaningful unless you are doing very trivial work,† I wonder if the poster has worked in any non-trivial software domain. Places like GPS OCX, SAP consolidation, Manned Space Flight Avionics, or maybe Health Insurance Provider Networks. Without some hands-on experience in those non-trivial domains, it would be hard to actually know what you're talking about when it comes to estimating the spend of other people's money.

Maybe some background on estimates for non-trivial work will shed light on this ill-informed notion that only trivial projects can be estimated.

These are a small sample of papers from one journal on software estimating for mission critical, sometimes National Asset, projects.

Go to CrossTalk, The Journal of Defense Software Engineering, and search for "estimating" to get 10 pages of 10 articles each on this topic alone. Estimating in non-trivial domains is well developed and well documented, with many examples of tools, processes, and principles.

Do your homework, and the test is much easier.

It could be that the original poster has little experience in mission critical, national asset, enterprise class, software intensive systems. Or it could be the poster simply doesn't know what making estimates for projects that spend other people's money - many times significant amounts of money - is all about.

And of course most of the problems described as the basis for Not Estimating - the illogical notion that if we can't do something well, we should stop doing it - start with not knowing what Done Looks Like in any units of measure meaningful to the decision makers.

So start here with my favorite enterprise architect blog and his list of books when you follow the link to the bottom.

Screen Shot 2014-11-23 at 7.06.35 AM

So when you have some sense of what DONE looks like in terms of capabilities, the estimating process is on solid ground. From that solid ground you can ask: have we done anything like this before? Or better yet, can we find someone who has done something like this before? Or maybe, can we look around to see what looks like our problem and figure out how long it took them, by simply asking them?

If the answer to all of those questions is NO and you're NOT working in a research and development domain, then don't start the project, because you're not qualified to do the work, you don't know what you're doing, and you're going to waste your customer's money.


† Scroll to the bottom of and search for "A Thing I Can Estimate" to see the phrase, and remember the questions and answers above. If you're not answering those in some positive way, you're on a death march project starting day one, because you don't know what done looks like for the needed capabilities. Not the requirements, not the code, not the testing - that's all straightforward. Without some notion of what the system is supposed to do, you'll never recognize it if it were ever to come into view. And since the customer doesn't know either, all the money they're spending to find out has to be written off as IRAD or flushed down the toilet as a waste of time and effort in the end. And then you'll know why Standish (improperly) reports projects fail.

Screen Shot 2014-11-23 at 7.38.19 AM

Categories: Project Management

Show Me Your Math

Sat, 11/22/2014 - 21:02

Screen Shot 2014-11-21 at 12.32.21 PM

In a recent email exchange, the paper by Todd Little showing projects that exceeded their estimates was used as an example of how poorly we estimate, and ultimately one of the reasons to adopt the #NoEstimates paradigm of making decisions in the absence of estimates of cost, schedule, and the probability that the needed capabilities will show up on time and be what the customer wanted.

Sherlock here had it right. This picture, by the way, is borrowed from Mike Cohn's eBook of 101 quotes for agile.

I've written about Little's paper before, but it's worth repeating.

It's very sporty to use examples of bad mathematics, bad management, bad processes, and bad practices as the basis for something new. This is essentially the basis of the book Hard Facts, Dangerous Half-Truths and Total Nonsense: Profiting from Evidence-Based Management. When we start talking about something new and disruptive in the absence of data, facts, root causes, and the underlying governance and principles, we're treading on very thin ice in terms of credibility.

Here's the core principle of all software development

Customers exchange money for value. The definition of value needs to be in units of measure that allow them to make decisions about the future value of that value. That value is exchanged for a cost - a future cost as well.

Both this future cost and future value are probabilistic in nature, due to the uncertainties in the work processes, technologies, markets, productivity, and all the ...ilities associated with project work. In the presence of uncertainty, nothing is certain - a tautology. There are two types of uncertainty - reducible and irreducible. Reducible uncertainty we can do something about: we can spend money to reduce the risk associated with the uncertainty. Irreducible uncertainty we can't. We can only have margin, management reserve, or a Plan B.

To make decisions in the presence of these uncertainties - reducible and irreducible - we need to estimate the uncertainty, the cost of handling the uncertainty, and the value produced by the work driven by these uncertainties. When we fail to make these estimates, the uncertainties don't go away. When we slice the work into small chunks, we might also slice the uncertainties into small chunks - this is the basis of agile and the paradigm of Little Bets. But the uncertainties are still there, unless we've explicitly bought them down or installed margin and reserve. They didn't go away. And what you don't know - or choose to explicitly ignore - can hurt you.
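How much uncertainty slicing actually removes can be checked with a quick simulation. This sketch (all numbers invented) assumes the chunk variations are independent - the best case for slicing - and shows the total spread narrows by roughly the square root of the number of chunks but never reaches zero, which is why margin and reserve are still needed:

```python
import random
import statistics

random.seed(7)

def simulate_totals(n_chunks: int, total_mean: float,
                    chunk_spread: float, trials: int = 20_000) -> list:
    """Total project outcome when work is sliced into n equal chunks,
    each with its own irreducible variation (normally distributed)."""
    return [sum(random.gauss(total_mean / n_chunks, chunk_spread)
                for _ in range(n_chunks))
            for _ in range(trials)]

one_big   = simulate_totals(1,  100, 20)   # one task, spread 20
ten_small = simulate_totals(10, 100, 2)    # ten slices, spread 2 each

sd_one = statistics.stdev(one_big)
sd_ten = statistics.stdev(ten_small)

print(f"1 chunk:   sd = {sd_one:.1f}")   # ~20
print(f"10 chunks: sd = {sd_ten:.1f}")   # ~6.3 = 20/sqrt(10): smaller, not zero
```

And that is the best case: correlated chunks (shared staff, shared technology risk) narrow the spread far less, so the residual uncertainty still has to be estimated and covered with margin.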

So Sherlock is right: don't put forth a theory without the data.


Categories: Project Management