Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Herding Cats - Glen Alleman
Performance-Based Project Management® Principles, Practices, and Processes to Increase Probability of Success

The Unmyths of Estimating

Wed, 09/30/2015 - 17:33

Phillip Armour has a classic article titled "Ten Unmyths of Project Estimation," Communications of the ACM (CACM), November 2002, Vol 45, No 11. Several of these unmyths are applicable to the current #NoEstimates concept. Much of the misinformation claiming that estimating is the smell of dysfunction can be traced to these unmyths.

Mythology is not a lie ... it is metaphorical. It has been well said that mythology is the penultimate truth - Joseph Campbell, The Power of Myth

Using Campbell's quote, myths are not untrue. They are an essential truth, but wrapped in anecdotes that are not literally true. In our software development domain, a myth is a truth that seems to be untrue. This is the origin of Armour's unmyth.

The unmyth is something that seems to be true but is actually false.

Let's look at the three core conjectures of the #NoEstimates paradigm:

  • Estimates cannot be accurate - we cannot get an accurate estimate of cost, schedule, or probability that the result will work.
  • We can't say when we'll be done or how much it will cost.
  • All estimates are commitments - making estimates makes us committed to the number that results from the estimate.

The Accuracy Myth

Estimates are not numeric values; they are probability distributions. If the probability distribution below represents the probability of the duration of a project, there is a finite minimum - some duration below which the project cannot be completed.

There is a highest-probability, or Most Likely, duration for the project. This is the Mode of the distribution. There is a mid point in the distribution, the Median. This is the value between the highest and the lowest possible completion times. Then there is the Mean of the distribution. This is the average of all the possible completion times. And of course The Flaw of Averages is in effect for any decisions being made on this average value. †
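The Flaw of Averages is easy to demonstrate with a short simulation. Here a backlog planned against the average of past sprint throughput looks like a clean 10-sprint project, but once the variance of that history is resampled, finishing by the average-based date is roughly a coin flip. All throughput numbers below are invented for illustration.

```python
import random

random.seed(11)

# Past sprint throughputs (story points) - illustrative assumptions, not real data.
past_throughput = [12, 7, 15, 9, 20, 6, 14, 11, 8, 18]
backlog = 120

avg = sum(past_throughput) / len(past_throughput)   # 12.0 points/sprint
naive_sprints = backlog / avg                        # average says: 10 sprints

def sprints_needed():
    # Resample history until the backlog is burned down.
    done, sprints = 0, 0
    while done < backlog:
        done += random.choice(past_throughput)
        sprints += 1
    return sprints

runs = sorted(sprints_needed() for _ in range(20_000))
on_time = sum(1 for r in runs if r <= naive_sprints) / len(runs)
p80 = runs[int(len(runs) * 0.80)]
print(f"P(finish within {naive_sprints:.0f} sprints) ~ {on_time:.0%}; 80% confidence: {p80} sprints")
```

The average hides the spread: the plan built on it has roughly even odds, while the 80% confidence answer needs more sprints.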


"It is moronic to predict without first establishing an error rate for a prediction and keeping track of one's past record of accuracy" - Nassim Nicholas Taleb, Fooled By Randomness

If we want to answer the question "What is the probability of completing ON OR BEFORE a specific date?", we can look at the Cumulative Distribution Function (CDF) of the Probability Distribution Function (PDF). In the chart below, the PDF has the earliest finish in mid-September 2014 and the latest finish in early November 2014.

The 50% probability is 23 September 2014. In most of our work, we seek an 80% confidence level of completing ON OR BEFORE the need date.
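Values like the 50% and 80% confidence dates come straight off the CDF of the duration distribution. A minimal sketch of how such numbers are produced, using a made-up triangular duration model (the min/mode/max parameters are illustrative assumptions, not data from the project above):

```python
import random

random.seed(7)

# Hypothetical duration model: total project duration (days) is uncertain,
# so sample it many times and read confidence levels off the sorted samples.
samples = [random.triangular(20, 45, 28) for _ in range(100_000)]
samples.sort()

p50 = samples[len(samples) // 2]          # median: 50% chance of finishing on or before
p80 = samples[int(len(samples) * 0.80)]   # the 80% confidence duration used for commitments

print(f"50% confidence duration: {p50:.1f} days")
print(f"80% confidence duration: {p80:.1f} days")
```

The gap between the P50 and P80 values is exactly the schedule margin the next paragraph says the project MUST carry.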

The project then MUST have schedule, cost, and technical margin to protect that probabilistic date.

How much margin is another topic.

But projects without margin are late, over budget, and likely don't work on day one. You can't complain about poor project performance if you don't have margin, risk management, and a plan for managing both, as well as the technical processes.

So where do these charts come from? They come from a simulation of the work: the order and dependencies of the work, and the underlying statistical nature of the work elements.

  • No individual work element is deterministic.
  • Each work element has some type of dependency on the previous work element and the following work element.
  • Even if all the work elements are independent and sitting in a Kanban queue, unless we have unlimited servers of that queue, being late on the current piece of work will delay the following work.
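The bullets above can be sketched as a small Monte Carlo simulation: serial, dependent work elements whose probabilistic durations compound, so lateness propagates down the chain. The (low, mode, high) duration triples are illustrative assumptions, not real data.

```python
import random

random.seed(1)

# Three serial work elements, each with an uncertain duration in days.
tasks = [(3, 5, 10), (2, 4, 9), (4, 6, 12)]

def simulate_once():
    # Each element starts when its predecessor finishes, so any delay in
    # the current piece of work pushes out all the following work.
    finish = 0.0
    for low, mode, high in tasks:
        finish += random.triangular(low, high, mode)
    return finish

runs = sorted(simulate_once() for _ in range(50_000))
mean = sum(runs) / len(runs)
p80 = runs[int(len(runs) * 0.80)]
print(f"mean finish: {mean:.1f} days, 80% confidence: {p80:.1f} days")
```

Note the 80% confidence finish sits well past the mean: the right-skewed task distributions drag the tail out, which is why planning on the mean alone under-protects the date.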


So what we need is not accurate estimates; we need useful estimates. The usefulness of an estimate is the degree to which it helps make optimal business decisions. The process of estimating is buying information. The value of the estimate, like all value, is weighed against the cost to obtain that information. The value of the estimate is the opportunity cost: the difference between the business decision made with the estimate and the business decision made without the estimate. ‡

Anyone suggesting that simple serial work streams can accurately forecast an estimate of the completion time MUST read Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte Carlo Simulation, Troy Magennis.

In this book are the answers to all the questions those in the #NoEstimates camp say can't be answered.

The Accuracy Answer

  • All work is probabilistic.
  • Discover the Probability Distribution Functions for the work.
  • If you don't know the PDF, make one up - we use -5% + 15% for everything until we know better.
  • If you don't know the PDF, go look in databases of past work for your domain.
  • If you still don't know, go find someone who does, don't guess.
  • With this framework - it's called Reference Class Forecasting, making estimates about your project from reference classes of other projects - you can start making useful estimates.
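Reference Class Forecasting, minimally: instead of estimating from the inside view alone, place the new project in a class of similar past projects and read the forecast off their actual outcomes. A sketch, assuming a hypothetical reference class of cost-overrun ratios (all numbers invented for illustration):

```python
# Actual/estimated cost ratios from past projects in the same reference class
# (illustrative assumptions, not real data).
past_overruns = [1.05, 1.10, 1.20, 1.25, 1.30, 1.40, 1.55, 1.80]

inside_view_estimate = 100_000  # our raw bottom-up cost estimate ($), an assumption

past_overruns.sort()
p50_uplift = past_overruns[len(past_overruns) // 2]      # median historical overrun
p80_uplift = past_overruns[int(len(past_overruns) * 0.80)]  # 80th-percentile overrun

print(f"50% confidence cost: ${inside_view_estimate * p50_uplift:,.0f}")  # $130,000
print(f"80% confidence cost: ${inside_view_estimate * p80_uplift:,.0f}")  # $155,000
```

The uplift comes from the outside view - the record of the reference class - which is exactly the correction the inside view systematically omits.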

But remember, making estimates is how you make business decisions with opportunity costs. Those opportunity costs are the basis of Microeconomics and Managerial Finance. 

Cone of Uncertainty and Accuracy of Estimating

There is a popular myth that the Cone of Uncertainty prevents us from making accurate estimates. We now know we need useful estimates, and those are not prevented by the Cone of Uncertainty. Here's the guidance we use on our Software Intensive Systems projects.


Finally in the estimate accuracy discussion comes the cost estimate. The chart below shows how cost is driven by the probabilistic elements of the project, which brings us back to the fundamental principle that all project work is probabilistic. Modeling the cost, schedule, and probability of technical success is mandatory in any non-trivial project. By non-trivial I mean a project that is not de minimis - one where, if we're off by a lot, it matters to those paying.


The Commitment Unmyth

So now to the big bugaboo of #NoEstimates: estimates are evil, because they are taken as commitments by management. They're taken as commitments by Bad Management - uninformed management, management that was asleep in the high school probability and statistics class, management that claims to have a business degree but never took the business statistics class.

So let's clear something up,

Commitment is how Business Works

Here's an example taken directly from ‡ 

Estimation is a technical activity of assembling technical information about a specific situation to create hypothetical scenarios that (we hope) support a business decision. Making a commitment based on these scenarios is a business function.

The Technical "Estimation" decisions include:

  • When does our flight leave?
  • How do we get there? Car? Bus?
  • What route do we take?
  • What time of day and traffic conditions?
  • How busy is the airport, how long are the lines?
  • What is the weather like?
  • Are there flight delays?

This kind of information allows us to calculate the amount of time we should allow to get there.

The Business "Commitment" and Risk decisions include:

  • What are the benefits in catching the flight on time?
  • What are the consequences of missing the plane?
  • What is the cost of leaving early?

These are the business consequences that determine how much risk we can afford to take.
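The technical estimate and the business commitment can be joined in one small decision model: the travel-time distribution is the estimating side, and the costs of missing the flight versus leaving early are the business side. Every number below - the costs, the triangular travel-time parameters - is an illustrative assumption.

```python
import random

random.seed(3)

COST_OF_MISSING_FLIGHT = 400   # rebooking, lost meeting time ($) - assumed
COST_PER_HOUR_EARLY = 25       # value of time spent waiting at the gate ($) - assumed

def travel_minutes():
    # Uncertain door-to-gate time: traffic, security lines, weather.
    return random.triangular(60, 180, 90)

def expected_cost(buffer_minutes, trials=20_000):
    # Average the business cost of leaving `buffer_minutes` before departure.
    cost = 0.0
    for _ in range(trials):
        t = travel_minutes()
        if t > buffer_minutes:                       # missed the flight
            cost += COST_OF_MISSING_FLIGHT
        else:                                        # arrived early, time wasted
            cost += (buffer_minutes - t) / 60 * COST_PER_HOUR_EARLY
    return cost / trials

best = min(range(60, 181, 10), key=expected_cost)
print(f"leave about {best} minutes before departure")
```

The estimate never becomes a commitment here; the commitment (the chosen buffer) is a separate decision that weighs the estimate against the business consequences.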

Along with these, of course, is the risk associated with the uncertainty in the decisions. So estimating is also Risk Management, and Risk Management is management in the presence of uncertainty. And now the familiar presentation from this blog.

Managing in the presence of uncertainty from Glen Alleman

Wrap Up

Risk Management is how Adults manage projects - Tim Lister. Risk management is managing in the presence of uncertainty. All project work is probabilistic and creates uncertainty. Making decisions in the presence of uncertainty requires - mandates, actually - making estimates (otherwise you're guessing, pulling numbers from the rectal database). So if we're going to have an Adult conversation about managing in the presence of uncertainty, it's going to be around estimating: making estimates, improving estimates, making estimates valuable to the decision makers.

Estimates are how business works - exploring for alternatives to estimates means willfully ignoring the needs of the business. Proceed at your own risk.

† This average notion is common in the #NoEstimates community: take all the past stories or story points, find the average value, and use that for the future values. That is a serious error in statistical thinking, since without the variance being acceptable, that average can be wildly off from the actual future outcomes of the project.

‡ Unmythology and the Science of Estimation, Corvus International, Inc., Chicago Software Process Improvement Network, C-Spin, October 23, 2013

Categories: Project Management

All Conjectures Need a Hypothesis

Tue, 09/29/2015 - 16:53

As far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. - Andreas Osiander's (editor's) preface to De Revolutionibus, Copernicus, in To Explain the World: The Discovery of Modern Science, Steven Weinberg

In the realm of project, product, and business management we come across nearly endless ideas conjecturing to solve some problem or another.

Replace the word astronomy with whatever field in which those conjecturing claim their solution will fix some unnamed problem.

From removing the smell of dysfunction, to increasing productivity by 10 times, to removing the need to have any governance frameworks, to making decisions in the presence of uncertainty without the need to know the impacts of those decisions.

In the absence of any hypothesis by which to test those conjectures, departing a greater fool than when entering is the likely result. In the absence of a testable hypothesis, any conjecture is an unsubstantiated anecdotal opinion.

An anecdote is a sample of one from an unknown population

And that makes those conjectures doubly useless: not only can they not be tested, they are likely applicable only to those making the conjectures.

If we are ever to discover new and innovative ways to increase the probability of success for our project work, we need to move far away from conjecture, anecdote, and untestable ideas and toward evidence-based assessment of the problem, the proposed solutions, and the evidence that the proposed correction will in fact result in improvement.

One Final Note

As a first-year grad student in physics, I learned a critical concept that is missing from much of the conversation around process improvement. When an idea is put forward in the science and engineering world, the very first thing to do is a literature search.

  • Is this idea recognized by others as credible? Are there supporting studies that confirm the effectiveness and applicability of the idea outside the author's own experience?
  • Are those supporting the idea themselves credible, or just following the herd?
  • Are there references to the idea being tested outside the author's own experience?
  • Are there criticisms of the idea in the literature? Seeking critics is itself a critical success factor in testing any idea. There would be knock-down, drag-out shouting matches in the halls of the physics building about an idea. Nobel Laureates would be waving arms and speaking in loud voices. In the end it was a test of new and emergent ideas. And anyone who takes offense at being criticized has no basis to stand on for defending his idea.
  • Is the idea the basis of a business? E.g., is the author selling something - a book, a seminar, consulting services?
  • Has this idea been tested by someone else? We'd tear down our experiment, have someone across the country rebuild it, run the data, and see if they got the same results.

Without some way to assess the credibility of an idea - either through replication or assessment against a baseline (governance framework, accounting rules, regulations) - the idea is just an opinion. And as Daniel Patrick Moynihan said:

Everyone is entitled to his own opinion, but not his own facts. 

and of course my favorite

Again and again and again — what are the facts? Shun wishful thinking, ignore divine revelation, forget what "the stars foretell," avoid opinion, care not what the neighbors think, never mind the unguessable "verdict of history" — what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts! - Robert Heinlein (1978)

Categories: Project Management

Projects versus Products

Mon, 09/28/2015 - 16:00

There's a common notion in some agile circles that projects aren't the right vehicle for developing products. This is usually expressed by Agile Coaches. As a business manager applying Agile to develop products as well as to deliver Operational Services based on those products, projects are how we account for the expenditures of those outcomes, manage the resources, and coordinate the resources needed to produce products as planned.

In our software product business, we use both a Product Manager and a Project Manager. These roles are separate and at the same time overlapping.

  • Products are customer facing. Market needs, business models for revenue, cost, earnings, interfaces with Marketing and Sales, and other business management processes - contracts, accounting - are Product focused.

Product Managers focus on markets. What features are the market segments demanding? What features Must Ship and what features can we drop? What are the sales impacts of any slipped dates?

  • Projects are internally facing - internal resources need looking after. The notion of self organizing is fine, but self directed only works when the work efforts have direct contact with the customers. And even then, without some oversight - governance - a self directed team has limitations in the larger context of the business. If the self directed team IS the business, then the need for external governance is removed. This would be rare in any non-trivial business.

Project Managers are inward focused to the resource allocation and management of the development teams. How can we get the work done to meet the market demand? When can we ship the product to maintain the sales forecast?

In very small companies and startups these roles are usually performed by the same person.

Once we move beyond the sole proprietor and his friends, separation of concerns takes over. These roles become distinct.

  • The Product Manager is a member of the Business Development Team, tasked with the business side of the product delivery process.
  • The Project Management Team (PMs and Coordinators, along with development leads and staff) is a member of the delivery team, tasked with producing the capabilities needed to capture and maintain the market.

Products are about What and Why. Projects are about Who, How, When, and Where - from Rudyard Kipling's six honest serving-men.

Product Management focuses on the overall product vision - usually documented in a Product Roadmap, showing the release cycles of capabilities and features as a function of time. Project Management is about logistics, schedule, planning, staffing, and work management to produce products in accordance with the Road Map.

When agile says it's customer focused, this is true only when there is one customer for the product - rather than a market for the product - and that customer is on site. That would not be a very robust product company if it had only one customer.

When we hear Products are not Projects, ask: in what domain, business size, and value at risk is it possible not to separate these concerns between Products and Projects?

Categories: Project Management

Risk Management is How Adults Manage Projects

Sun, 09/27/2015 - 20:59

Risk Management is How Adults Manage Projects - Tim Lister

Let's start with some background on Risk Management

Tim's quote sets the paradigm for managing the impediments to success in all our endeavors

It says volumes about project management and project failure. It also means that managing risk is managing in the presence of uncertainty. And managing in the presence of uncertainty means making estimates about the impacts of our decision on future outcomes. So you can invert the statement when you hear we can make decisions in the absence of estimates.

Tim's update is titled Risk Management is Project Management for Grownups.

For those interested in managing projects in the presence of uncertainty and the risk that uncertainty creates, here's a collection from the office library, in no particular order

Here's a summary from a recent meeting on decision making in the presence of risk.

Earning value from risk (v4 full charts) from Glen Alleman
Categories: Project Management

Complex, Complexity, Complicated (Update)

Sun, 09/27/2015 - 16:14

The popular notion that Cynefin can be applied in the software development domain, as a way of discussing the problems involved in writing software for money, is missing the profession of Systems Engineering. From Wikipedia, Cynefin is...

The framework provides a typology of contexts that guides what sort of explanations or solutions might apply. It draws on research into complex adaptive systems theory, cognitive science, anthropology, and narrative patterns, as well as evolutionary psychology, to describe problems, situations, and systems.

While Cynefin uses the terms Complexity and Complex Adaptive System, it applies them from an observational point of view. That is, the system exists outside of our influence to control its behavior - we are observers of the system, not engineers of solutions in the form of a system that provides needed capabilities to solve a problem.

Read carefully the original paper on Cynefin, The New Dynamics of Strategy: Sense Making in a Complex and Complicated World. This post is NOT about those types of systems, but about the conjecture that the development of software is by its nature chaotic. This argument is used by many in the agile world to avoid the engineering disciplines of INCOSE style Systems Engineering.

There are certainly engineered systems that transform into complex adaptive systems with emergent behaviors that cause the system to fail. Example below. This is not likely to be the case when engineering principles are applied in the domains of Complex and Complicated.

A good starting point for the complex, complicated, and chaotic view of engineered systems is Complexity and Chaos - State of the Art: List of Works, Experts, Organizations, Projects, Journals, Conferences, and Tools. There is a reference to Cynefin as organization modeling. While organizational modeling is important - I suspect Cynefin advocates would suggest it is the only important item - the engineered aspects of applying Systems Engineering to complex, complicated, and emergent systems are mandatory for any organization to get the product out the door on time, on budget, and on specification.

For another view of the complex systems problem, Principles of Complex Systems for Systems Engineering is a good place to start, along with the resources from INCOSE and AIAA like Complexity Primer for Systems Engineers, Engineering Complex Systems, Complex System Classification, and many others.

So Let's Look At the Agile Point of View

In the agile community it is popular to use the terms complex, complexity, complicated, and complex adaptive system interchangeably - and many times wrongly - to assert we can't possibly plan ahead, know what we're going to need, or establish a cost and schedule, because the system is complex and emergent.

These terms are often overloaded with an agenda used to push a process or even a method. As well, in the agile community it is popular to claim we have no control over the system, so we must adapt to its emerging behavior. This is likely the case in one condition: the chaotic behavior of Complex Adaptive Systems. But this is only the case when we fail to establish the basis for how the CAS was formed, what sub-systems are driving that behavior, and most importantly what the dynamics of the interfaces between those subsystems - the System of Systems architecture - are that create the chaotic behaviors.

It is highly unlikely that those working in the agile community actually work on complex systems that evolve AND at the same time are engineered at the lower levels to meet the specific capabilities and resulting requirements of the system owner. They've simply let the work and the resulting outcomes emerge and become Complex, Complicated, and create Complexity. They are observers of the outcomes, not engineers of the outcomes.

Here's one example of an engineered system that actually did become a CAS because of the poor efforts of the Systems Engineers. I worked on the Class I and II sensor platforms. Eventually FCS was canceled, for all the right reasons. But for small teams of agile developers, the outcomes become complex when the Systems Engineering processes are missing. The Cynefin partitions beyond Obvious emerge, for the most part, when Systems Engineering is missing.


First some definitions

  • Complex - consisting of many different and connected parts; not easy to analyze or understand; complicated or intricate. When a system or problem is considered complex, analytical approaches, like dividing it into parts to make the problem tractable, are not sufficient, because it is the interactions of the parts that make the system complex, and without these interconnections the system no longer functions.
  • Complex System - a functional whole, consisting of interdependent and variable parts. Unlike conventional systems, the parts need not have fixed relationships, fixed behaviors, or fixed quantities, and their individual functions may be undefined in traditional terms.
  • Complicated - containing a number of hidden parts, which can be revealed separately because they do not interact. It is the mutual interaction of components that creates the nonlinear behaviors of a system. In principle all systems are complex; the number of parts or components is irrelevant in the definition of complexity. There can be complexity - nonlinear behaviour - in small systems or large systems.
  • Complexity - there is no standard definition of complexity. It is a view of systems that suggests simple causes can result in complex effects. As a term, complexity is generally used to characterize a system with many parts whose interactions with each other occur in multiple ways. Complexity can occur in a variety of forms:
    • Complex behaviour
    • Complex mechanisms
    • Complex situations
    • Complex systems
    • Complex data
  • Complexity Theory - states that critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties. This theory takes the view that systems are best regarded as wholes, and studied as such, rejecting the traditional emphasis on simplification and reduction as inadequate techniques on which to base this sort of scientific work.

One more item we need is the types of Complexity

  • Type 1 - fixed systems, where the structure doesn't change as a function of time.
  • Type 2 - systems where time causes changes. This can be repetitive cycles or change with time.
  • Type 3 - moves beyond repetitive systems into organic where change is extensive and non-cyclic in nature.
  • Type 4 - are self organizing where we can¬†combine internal constraints of closed systems, like machines, with the creative evolution of open systems, like people.

And Now To The Point

When we hear complex, complexity, complex systems, complex adaptive system, pause to ask: what kind of complex are you talking about? What Type of complex system? In what system are you applying the term complex? Have you classified that system in a way that actually matches a real system? Don't accept anyone saying well, the system is emerging and becoming too complex for us to manage, unless that is in fact the case after all the Systems Engineering activities have been exhausted. It's a cheap excuse for simply not doing the hard work of engineering the outcomes.

It is common to use the terms complex, complicated, and complexity interchangeably. And software development is classified - or mis-classified - as one, two, or all three. It is also common to toss around these terms with no actual understanding of their meaning or application.

We need to move beyond buzzwords - words like Systems Thinking. Building software is part of a system. There are interacting parts that, when assembled, produce an outcome - hopefully a desired outcome. In the case of software, the interacting parts are more than just the parts. Software has emergent properties: a Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point.

The understanding that software development is an uncertain (stochastic) process is well known, starting in the 1980s [1] with COCOMO. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. Current predictive models based on stochastic processes include Monte Carlo Simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.

[1] Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
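For reference, the Basic COCOMO model from [1] expresses effort as a nonlinear function of size, which is what makes software estimation more than simple proportional scaling. A sketch using the published mode constants (the 32 KLOC input is an arbitrary example, not data from any project discussed here):

```python
# Basic COCOMO (Boehm 1981): effort in person-months and schedule in months
# as power-law functions of size in KLOC, by development mode.
MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def cocomo(kloc, mode="organic"):
    a, b, c, d = MODES[mode]
    effort = a * kloc ** b        # person-months
    schedule = c * effort ** d    # calendar months
    return effort, schedule

effort, schedule = cocomo(32, "semi-detached")
print(f"{effort:.0f} person-months over {schedule:.0f} months")
```

Because the exponents exceed 1.0, doubling the size more than doubles the effort - a deterministic reminder that scale itself is a driver of the uncertainty the stochastic models then layer on top.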

Categories: Project Management

Why Do Projects Fail?

Sat, 09/26/2015 - 17:25

We all wish there were a simple answer to this question, but there is not. Anyone suggesting there is doesn't understand the complexities of non-trivial projects in any domain.

There are enough opinions to paper the side of a battleship. With all these opinions, nobody has a straightforward answer that is applicable to all projects. There are two fundamental understandings though: (1) everyone has a theory, and (2) there is no singular cause that is universally applicable.

In fact most of the suggestions on project failures have little in common. With that said, I'd suggest there is a better way to view the project failure problem.

What are the core principles, processes, and practices for project success?

I will suggest there are three common denominators consistently mentioned in the literature that are key to a project’s success:

  1. Requirements management. Success is not just defined by well-documented technical requirements, but by well-defined programmatic requirements and thresholds. Requirements creep is a challenge for all projects, no matter what method is used to develop the products or services from those projects. Requirements creep comes in many forms. But the basis for dealing with requirements creep starts with a Systems Engineering strategy to manage those requirements. Most IT and business software projects don't know about Systems Engineering, and that's a common cause failure mode.
  2. Early and continuous risk management, with specific steps defined for managing the risk once identified.
  3. Project planning. Without incredible luck, no project will succeed without a realistic and thorough plan for that success. It's completely obvious (at least to those managing successful projects): the better the planning, the more likely the outcome will match the plan.

Of the 155 defense project failures studied in "The core problem of project failure," T. Perkins, The Journal of Defense Software Engineering, Vol 3. 11, pp 17, June 2006:

  • 115 – Project managers did not know what to do.
  • 120 – Project manager overlooked implementing a project management principle.
  • 125 – PMs allowed constraints imposed at higher levels to prevent them from doing what they should do.
  • 130 – PMs do not believe the project management principles add value.
  • 145 – Policies / directives prevented PMs from doing what they should do.
  • 150 – Statutes prevented PMs from doing what they should do.
  • 140 – PMs primary goal was other than project success.
  • 135 – PMs believed a project management principle was flawed.

From this research these numbers can be summarized into two larger classes

  • Lack of knowledge - the project managers and the development team did not know what to do.
  • Improper application of this knowledge - this starts with ignoring or overlooking a core principle of project success. This covers most of the sins of Bad Management, from compressed schedules and limited budgets to failing to produce credible estimates for the work.

So where do we start?

Let's start with some principles. But first a recap

  • Good management doesn't simply happen. It takes qualified managers - on both the buyer and supplier side - to appropriately apply project management methods.
  • Good planning doesn't simply happen. Careful planning of work scope, WBS, realistic milestones, realistic metrics, and a realistic cost baseline is needed.
  • It is hard work to provide accurate data about schedule, work performed, and costs on a periodic basis. Constant communications and trained personnel are necessary.

Five Immutable Principles of Project Success


  1. What capabilities are needed to fulfill the Concept of Operations, the Mission and Vision, or the Business System Requirements? Without knowing the answers to these questions, requirements, features, and deliverables have no home. They have no testable reasons for being in the project.
  2. What technical and operational requirements are needed to deliver these capabilities? With the needed capabilities confirmed by those using the outcomes of the project, the technical and operational requirements can be defined. These can be stated up front, or they can emerge as the project progresses. The Capabilities are stable; all other things can change as discovery takes place. If you keep changing the capabilities, you're going to be on a Death March project.
  3. What schedule delivers the product or services on time to meet the requirements? Do you have enough money, time, and resources to show up as planned? No credible project is without a deadline and a set of mandated capabilities. Knowing there is sufficient everything on day one, and every day after that, is the key to managing in the presence of uncertainty.
  4. What impediments to success - their mitigations, retirement plans, or "buy downs" - are in place to increase the probability of success? Risk Management is how Adults Manage Projects (Tim Lister) is a good place to start. The uncertainties of all project work come in two types: reducible and irreducible. For irreducible uncertainty we need margin. For reducible uncertainty we need specific retirement activities.
  5. What periodic measures of physical percent complete assure progress to plan? This question is based on a critical principle: how long are we willing to wait before we find out we're late? This period varies, but whatever it is, it must be short enough to take corrective action to arrive as planned. Agile does this every two to four weeks. In formal DOD procurement, measures of physical percent complete are taken every four weeks. The advantage of Agile is that working products must be produced every period. That is not the case in larger, more formal processes.

With these Principles, here are five Practices that can put them to work

Screen Shot 2015-09-26 at 10.13.08 AM

  1. Identify Needed Capabilities to achieve program objectives or the particular end state. Define these capabilities through scenarios from the customer point of view in units of Measure of Effectiveness (MoE) meaningful to the customer.
    • Describe the business function that will be enabled by the outcomes of the project.
    • Assess these functions in terms of Effectiveness and Performance.
  2. Define the Technical And Operational Requirements that must be fulfilled for the system capabilities to be available to the customer at the planned time and planned cost. Define these requirements in terms that are isolated from any implementation technical products or processes. Only then bind the requirements with technology.
    • This can be a formal Work Breakdown Structure or an Agile Backlog
    • The planned work is described in terms of deliverables.
    • Describe the technical and operational Performance measures for each feature.
  3. Establish the Performance Measurement Baseline describing the work to be performed, the budgeted cost for this work, the organizational elements that produce the deliverables from this work, and the Measures of Performance (MoP) showing this work is proceeding according to cost, schedule, and technical performance.
  4. Execute the PMB’s Work Packages in the planned order, assuring all performance assessments are 0%/100% complete before proceeding. No rework, no transfer of activities to the future. Assure every requirement is traceable to work and all work is traceable to requirements.
    • If there is no planned order, the work processes will be simple.
    • This is rarely the case on any enterprise or non-trivial project, since the needed capabilities usually have some sequential dependencies – accept the Purchase Request before issuing the Purchase Order, for example.
  5. Perform Continuous Risk Management for each Performance-Based Project Management® process area to Identify, Analyze, Plan, Track, Control, and Communicate programmatic and technical risk.
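The 0%/100% completion rule in Practice 4 can be sketched as a simple earned value calculation. This is a minimal illustration – the work package names and budgets are hypothetical, not from the post:

```python
# Sketch: physical percent complete with 0%/100% work packages.
# No partial credit - a work package earns its budget only when done.
work_packages = [
    {"name": "WP-01", "budget": 40, "done": True},
    {"name": "WP-02", "budget": 25, "done": True},
    {"name": "WP-03", "budget": 60, "done": False},  # earns nothing until 100%
    {"name": "WP-04", "budget": 35, "done": False},
]

total_budget = sum(wp["budget"] for wp in work_packages)          # BCWS at completion
earned = sum(wp["budget"] for wp in work_packages if wp["done"])  # BCWP

physical_percent_complete = 100 * earned / total_budget
print(f"Physical % complete: {physical_percent_complete:.1f}%")   # 40.6%
```

Because no in-progress work earns value, the measure cannot overstate progress – which is why the 0%/100% assessment supports "no rework, no transfer of activities to the future."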

The integration of these five Practices is the foundation of Performance-Based Project Management®. Each Practice stands alone and at the same time is coupled with the other Practice areas. Each Practice contains specific steps for producing beneficial outcomes for the project, while establishing the basis for overall project success.

Each Practice can be developed to the level needed for specific projects. All five Practices are critical to the success of any project. If a Practice area is missing or poorly developed, the capability to manage the project will be jeopardized, possibly in ways not known until the project is too far along to be recovered.

Each Practice provides information needed to make decisions about the flow of the project. This actionable information is the feedback mechanism needed to keep a project under control. These control processes are not impediments to progress, but are the tools needed to increase the probability of success.

Why All This Formality? Why Not Just Start Coding and Let the Customer Tell Us To Stop?

All business works on managing the flow of cost in exchange for value. All business has a fiduciary responsibility to spend wisely. Visibility to the obligated spend is part of Managerial Finance. Opportunity Cost is the basis of Microeconomics of decision making. 

The 5 Principles and 5 Practices are the basis of good business management of the scarce resources of all businesses. 

This is how adults manage projects

Related articles Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Software Engineering Economics

Sat, 09/26/2015 - 04:51

When confronted with making decisions on software projects in the presence of uncertainty, we can turn to an established and well tested set of principles found in Software Engineering Economics.

First a definition from Guide to the Systems Engineering Body of Knowledge (SEBoK)

Software Engineering Economics is concerned with making decisions within the business context to align technical decisions with the business goals of an organization. Topics covered include fundamentals of software engineering economics (proposals, cash flow, the time-value of money, planning horizons, inflation, depreciation, replacement and retirement decisions); not for-profit decision-making (cost-benefit analysis, optimization analysis); estimation, economic risk and uncertainty (estimation techniques, decisions under risk and uncertainty); and multiple attribute decision making (value and measurement scales, compensatory and non-compensatory techniques).

Engineering Economics is one of the Knowledge Areas for educational requirements in Software Engineering defined by INCOSE, along with Computing Foundations, Mathematical Foundations, and Engineering Foundations. 

A critical success factor for all software development is to model the system under development as a holistic, value-providing entity – an approach that has been gaining recognition as a central process of systems engineering. The use of modeling and simulation during the early stages of the system design of complex systems and architectures can:

  • Document system needed capabilities, functions and requirements,
  • Assess the mission performance,
  • Estimate costs, schedule, and needed product performance capabilities,
  • Evaluate tradeoffs,
  • Provide insights to improve performance, reduce risk, and manage costs.

The process above can be performed in any lifecycle duration. From formal top down INCOSE VEE to Agile software development. The process rhythm is independent of the principles.

This is a critical communication factor - separation of Principles, Practices, and Processes, establishes the basis of comparing these Principles, Practices, and Processes across a broad spectrum of domains, governance models, methods, and experiences. Without a shared set of Principles, it's hard to have a conversation.  

Engineering Economics

Developing products or services with other people's money means we need a paradigm to guide our activities. Since we are spending other people's money, the economics of that process is guided by Engineering Economics.

Engineering economic analysis concerns techniques and methods that estimate output and evaluate the worth of products and services relative to their costs. (We can't determine the value of our efforts without knowing the cost to produce that value.) Engineering economic analysis is used to evaluate system affordability. Fundamental to this knowledge area are value and utility, classification of cost, time value of money, and depreciation. These are used to perform cash flow analysis, financial decision making, replacement analysis, break-even and minimum cost analysis, accounting, and cost accounting. Additionally, this area involves decision making involving risk and uncertainty and estimating economic elements. [SEBoK, 2015]

The microeconomic aspects of the decision-making process are guided by the principles of making decisions regarding the allocation of limited resources. In software development we always have limited resources - time, money, staff, facilities, and the performance limitations of software and hardware.

If we are going to increase the probability of success for software development projects we need to understand how to manage in the presence of the uncertainty surrounding time, money, staff, facilities, performance of products and services and all the other probabilistic attributes of our work.

To make decisions in the presence of these uncertainties, we need to make estimates about the impacts of those decisions. This is an unavoidable consequence of how the decision making process works.

The opportunity cost of any decision between two or more choices means there is a cost for NOT choosing one or more of the available choices. This is the basis of microeconomics of decision making. What's the cost of NOT selecting an alternative?
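This opportunity-cost logic can be sketched in a few lines. The alternatives, payoffs, and probabilities below are entirely illustrative assumptions, used only to show the shape of the calculation:

```python
# Sketch: opportunity cost of choosing between two uncertain alternatives.
# Payoffs and probabilities of success are hypothetical.
alternatives = {
    "build in-house": {"payoff": 500_000, "probability_of_success": 0.6},
    "buy commercial": {"payoff": 350_000, "probability_of_success": 0.9},
}

# Expected monetary value (EMV) of each choice - this requires estimates
emv = {name: a["payoff"] * a["probability_of_success"]
       for name, a in alternatives.items()}

chosen = max(emv, key=emv.get)
# Opportunity cost: the best EMV among the alternatives NOT chosen
forgone = max(v for name, v in emv.items() if name != chosen)

print(f"Choose: {chosen} (EMV ${emv[chosen]:,.0f})")
print(f"Opportunity cost of that choice: ${forgone:,.0f}")
```

Note that both the chosen alternative and its opportunity cost depend on the estimated payoffs and probabilities – there is no way to compute either without them.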

So when it is conjectured we can make a decision in the presence of uncertainty without estimating the impact of that decision, it's simply NOT true.

That notion violates the principles of Microeconomics.

Categories: Project Management

Determining Schedule Margin with Monte Carlo Simulation

Tue, 09/22/2015 - 15:54

Constructing a credible Integrated Master Schedule (IMS) requires sufficient schedule margin be placed at specific locations to protect key deliverables. One approach to determining this margin is the use of a Monte Carlo simulation tool.

This probabilistic margin analysis starts with the construction of a "best estimate" Integrated Master Schedule with the work activities arranged in a "best path" network.

While there may be "slack" in some of the activities, the Critical Path exists through this network for each Key Deliverable. This network of activities must show how each deliverable will arrive on or before the contractual need date. This "best path" network is the Deterministic Schedule – the schedule with fixed activity durations.

By assigning a duration variance to each class of work activity, the Monte Carlo model shows at what confidence level the probabilistic delivery date occurs on or before the deterministic date. The needed schedule margin for each deliverable can then be derived from the Monte Carlo simulation. This activity network is referred to as the Probabilistic Schedule – the schedule with activity durations as random variables.

With the schedule margin inserted in front of each deliverable, the Deterministic schedule becomes the basis of the Probabilistic schedule. Next is a cycle of adjusting the Deterministic schedule to assure the needed margin, producing the final baselined Deterministic schedule to be placed on baseline. As the program proceeds, this schedule margin is managed through a "margin burn down" process. Assessing the sufficiency of this margin for the remaining work is then part of the monthly program performance report.

Here's an example from an upcoming workshop on building and executing a credible Performance Measurement Baseline based on the Wright Brothers' work

Screen Shot 2015-09-16 at 8.05.32 PM

For this to work we need several things:

  • The work to be performed. This can be a network of activities in a schedule. It can be a collection of activities in a sprint. In both cases we need some approximation of how long it will take to accomplish the work. In both cases this means making an estimate of the Most Likely duration or work effort to produce the needed outcomes.
  • This Most Likely value can come from many sources. But it does need to be the Most Likely – not the average, not some made-up number, not some cockamamie guess.

Here's how to use a Monte Carlo tool to determine the likelihood of completing on or before a given date, when there is a schedule of the work with Most Likelies for the work durations and the variances in those durations:
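The approach can be sketched with nothing more than the standard library. This is a minimal illustration, not the Risk+ tool: the serial activity network, the triangular (min, most likely, max) durations, and the target confidence level are all assumptions for the example.

```python
import random

# Sketch: Monte Carlo schedule margin for a simple serial network.
# Each activity: (name, (min, most_likely, max)) durations in days - hypothetical.
random.seed(7)
activities = [
    ("design", (8, 10, 16)),
    ("build",  (15, 20, 35)),
    ("test",   (5, 8, 14)),
]
deterministic = sum(ml for _, (_, ml, _) in activities)  # Most Likely path: 38 days

# Simulate total duration; note random.triangular takes (low, high, mode)
trials = sorted(
    sum(random.triangular(lo, hi, ml) for _, (lo, ml, hi) in activities)
    for _ in range(10_000)
)
p80 = trials[int(0.80 * len(trials))]   # 80th percentile completion
margin = p80 - deterministic            # schedule margin protecting the P80 date
confidence_at_det = sum(t <= deterministic for t in trials) / len(trials)

print(f"Deterministic: {deterministic} days, P80: {p80:.1f} days")
print(f"Margin needed for 80% confidence: {margin:.1f} days")
print(f"Confidence of meeting the deterministic date: {confidence_at_det:.0%}")
```

Because the duration distributions are right-skewed, the confidence of hitting the deterministic (Most Likely) date is well below 50% – which is exactly why margin must be placed in front of the deliverable.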

Risk management using risk+ (v5) from Glen Alleman
Categories: Project Management

Deterministic, Probabilistic, and Empiricism

Tue, 09/22/2015 - 15:43

When I hear a post like:

If you use deterministic estimates you must ask the team. If you use probabilistic estimates you must not. #NoEstimates

Two things come to mind:

  • All project work is probabilistic. There is no such thing as a deterministic estimate. OK, there is. But those estimates are wrong, dead wrong, willfully ignorant wrong. All project work is probabilistic. If you're making deterministic estimates, you've chosen to ignore the basic processes of probability and statistics.
  • There is an important difference between Statistics and Probability. Both are needed when making decisions in the presence of uncertainty.

Probability and Statistics

All projects have uncertainty.

And there are two kinds of uncertainty on all projects. Reducible and Irreducible.

Reducible uncertainty (on the right) is described by the probability of some outcome. There is an 82% probability that we'll be complete on or before the second week in November, 2016. Irreducible uncertainty (on the left) is described by the Probability Distribution Function (PDF) for the underlying statistical processes.

In both cases estimating is required. There is no deterministic way to produce an assessment of an outcome in the presence of uncertainty without making estimates. This is simple math. In the presence of uncertainty, the project variables are random variables, not deterministic variables. If there is no uncertainty, there's no need to estimate – just measure.


When we hear that #NoEstimates is about empirical data used to forecast the future, let’s look deeper into the term and the processes of empiricism.

First, an empiricist rejects the logical necessity for scientific principles and bases processes on observations. [1]

While managing other people's money in the production of value in exchange for that money, there are principles by which that activity is guided. For the empiricist, principles are not immediately evident. But principles are called principles because they are indemonstrable and cannot be deduced from other premises nor be proved by any formal procedure. They are accepted because they have been observed to be true in many instances and to be false in none.

Second, with empirical data comes two critical assumptions that must be tested before that data has any value in decision making.

  • The variance in the sampled data is sufficiently narrow to allow sufficient confidence in forecasting the future. A ±45% variance is of little use. Next is the killer problem.
  • With an acceptable variance, the assumption that the future is like the past must be confirmed. If this is not the case, that acceptably sampled data with the acceptable variance is not representative of the future behavior of the project.
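Both assumptions can be checked with simple arithmetic before any forecast is made. A minimal sketch, using hypothetical sprint throughput samples:

```python
import statistics

# Sketch: testing the two assumptions behind an empirical forecast.
# Sample values are hypothetical sprint throughputs (units per sprint).
throughput = [21, 24, 19, 23, 25, 18, 22, 35, 41, 38, 44, 39]

# Assumption 1: is the variance narrow enough to be useful?
mean = statistics.mean(throughput)
cv = statistics.stdev(throughput) / mean   # coefficient of variation
print(f"Coefficient of variation: {cv:.0%}")

# Assumption 2: is the future like the past? (crude stationarity check:
# compare the means of the first and second halves of the sample)
half = len(throughput) // 2
drift = statistics.mean(throughput[half:]) - statistics.mean(throughput[:half])
print(f"Drift between halves: {drift:+.1f} units/sprint")
```

In this made-up sample the second half runs roughly 15 units/sprint higher than the first – the process is not stationary, so projecting the overall average forward would be the Flaw of Averages in action.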

Understanding this basis of empiricism is critical to understanding the notion of making predictions in the presence of uncertainty about the future.

Next let's address the issue of what is an estimate. It seems obvious to all working in the engineering, science, and financial domains that an estimate is a numeric value or range of values for some measure that may occur at some time in the future. Making up definitions for estimate, or selecting definitions from outside engineering, science, and finance, is disingenuous. There is no need to redefine anything.

Estimation consists of finding appropriate values (the estimate) for the parameters of the system of concern in such a way that some criterion is optimized. [2]

The estimate has several elements:

  • The quantity for the estimate – a numeric value we seek to learn about.
  • The range of possible values for that quantity.
  • For estimates that have a range of values, the probability distribution of the values in that range – the Probability Distribution Function for the estimated values. The range of values is described by the PDF, with a Most Likely, Median, Mode, and other cumulants – that is, what's the variance of the variance?
  • For an estimate that has a probability of occurrence, the single numeric value for that probability and the confidence in that value. There is an 80% confidence of completing the project on or before the second week in November, 2005.

Now, when those wanting to redefine what an estimate is to support their quest to have No Estimates – like redefining forecasting as Not Estimating – it becomes clear they are not using any terms found in engineering, science, mathematics, or finance. When they suggest there are many definitions of an estimate and don't provide any definition, with the appropriate references to that definition, it's the same approach as saying we're exploring for better ways to …. It's a simple and simple-minded approach to a well-established discipline of making decisions, and fundamentally disingenuous. And should not be tolerated.

The purpose of a cost estimate is determined by its intended use, and its intended use determines its scope and detail.

Cost estimates have two general purposes:

  1. To help managers evaluate affordability and performance against plans, as well as the selection of alternative systems and solutions,
  2. To support the budget process by providing estimates of the funding required to efficiently execute a program.
    • The notion of defining the budget leaves open the other two random variables of all project work – productivity and the performance of the produced product or service.
    • So suggesting that estimating is not needed when the budget is provided ignores that these two are variables.

Specific applications for estimates include providing data for trade studies, independent reviews, and baseline changes. Regardless of why the cost estimate is being developed, it is important that the project’s purpose link to the missions, goals, and strategic objectives and connect the statistical and probabilistic aspects of the project to the assessment of progress to plan and the production of value in exchange for the cost to produce that value.

The Need to Estimate

The picture below, with apologies to Scott Adams, is typical of the No Estimates advocates who contend estimates are evil and need to be stopped. Estimates can't be done. Not estimating results in a ten-fold increase in project productivity, or some equally vague unit of measure.


[1] Dictionary of Scientific Biography, ed. Charles Coulston Gillespie, Scribner, 1973, Volume 2, pp. 604-5

[2] Forecasting Methods and Applications, Third Edition, Spyros Makridakis, Steven C. Wheelwright, and Rob J. Hyndman

Some More Background

  1. Introduction to Probability Models, 4th Edition, Sheldon M. Ross
  2. Random Data: Analysis and Measurement Procedures, Julius S. Bendat and Allan G. Piersol
  3. Advanced Theory of Statistics, Volume 1: Distribution Theory, Sir Maurice Kendall and Alan Stuart
  4. Estimating Software Intensive Systems: Projects, Products, and Processes, Richard D. Stutzke
  5. Probability Methods for Cost Uncertainty Analysis: A Systems Engineering Perspective, Paul R. Garvey
  6. Software Metrics: A Rigorous and Practical Approach, Third Edition, Norman Fenton and James Bieman
Categories: Project Management

The Economics Of Software Development

Fri, 09/18/2015 - 12:45

The development of software in the presence of uncertainty is a well developed discipline, a well developed academic topic, and a well developed practice with numerous tools, databases, and models in many different SW domains.

Economics is the study of how resources (people, time, facilities) are used to produce and distribute commodities and how services are provided in society. Engineering economics is a branch of microeconomics dealing with engineering related economic decisions. Software Engineering Foundations: A Software Science Perspective, Yingxu Wang, Auerbach Publications.

Software engineering economics is a topic that addresses the elements of software project cost estimation and analysis and project benefit-cost ratio analysis. These costs, and the benefits from expending them, produce tangible and often intangible value. The time-phased aspect of developing software for money means we need to understand the scheduling aspects of producing this value.

All three variables in the paradigm of software development for money - time, cost, and value - are random variables. This randomness comes from the underlying uncertainties in the processes found in the development of the software. These uncertainties are always there, they never go away, they are immutable. 

Economic Foundations of Software Engineering

There are fundamental principles and methodologies utilized in engineering economics, and their applications in software engineering form the basis of decision making in the presence of uncertainty. These formal economic models include the cost of production and market models based on fundamental principles of microeconomics. The dynamic values of money and assets, and patterns of cash flows, can be modeled in support of management's need to make decisions in the presence of the constant uncertainties associated with software development.

Economic analysis methodologies for engineering decisions include project costs, benefit-cost ratio, payback period, and rate of return, all of which can be rigorously described. This is the basis of any formal treatment of economic theories and principles. Software engineering economics is based on elements of software costs, software engineering project cost estimation, economic analyses of software engineering projects, and the software maintenance cost model.

Economics is classified into microeconomics  and macroeconomics. Microeconomics is the study of behaviors of individual agents and markets. Macroeconomics is the study of the broad aspects of the economy, for example employment, export, and prices on a national or global scope. 

A universal quantitative measure of commodities and services in economics is money.

Engineering economics is a branch of microeconomics. There are some basic axioms of microeconomics and engineering economics.

  • Demand versus supply. Demand is the required quantities for a product or service. It is also the demand for labor and materials needed to produce those products and services. Demand is a fundamental driving force of market systems and the predominant reason for most economic phenomena. The market response to a demand is called supply.


  • Supply is the required quantities for a product or¬†service that producers are willing and able to sell at a given range of prices. This also extends to the labor and materials needed to produce the product and services to meet the demand.

Demands and supplies are the fundamental behaviors of dynamic market systems, which form the context of economics. Not enough Java programmers in the area? The cost for Java programmers goes up. Demand for rapid production of products? The cost of skilled labor, special tools, and processes goes up. COBOL programmers from 1998 to 2001 could ask nearly any price for their services. FORTRAN 77 programmers here in Denver could get exorbitant rates to help maintain the Ballistic Missile Defense System when a local defense contractor was awarded the maintenance and support contract for Cobra Dane.

Making Decisions in the Presence of Uncertainty

Making decision is about Opportunity Costs

Opportunity Costs are those costs resulting from the loss of potential gain from the alternatives other than the one chosen by the decision maker.

Every time we make a decision involving multiple choices, we are making an opportunity-cost-based decision. Since most of the time these costs are in the future and are uncertain, we need to estimate those opportunity costs, as well as the probability that our choice is the right choice to produce the desired beneficial outcomes.

Here's an example from a tool we use, Oracle's Crystal Ball. There are similar plug-ins for Excel (RiskAmp is affordable for the individual).


Another useful tool in the IT decision-making world is Real Options. Here's a simple introduction to ROs and decision making.

Berk Chapter 22: Real Options from Herb Meiberger

In the end there is an immutable principle: In the presence of uncertainty, making decisions today about actions that impact outcomes in the future requires some mechanism for determining those outcomes in the absence of perfect information. This absence of information creates risk. Decisions made in the presence of uncertainty and the resulting risk typically have one or more of the following characteristics: [1]
  • The Stakes – The stakes involved in the decision, such as costs, schedule, and delivered capabilities, and their impacts on business success or meeting the objectives.
  • Complexity – The ramifications of the alternatives are difficult to understand without detailed analysis.
  • Uncertainty – Uncertainty in key inputs creates uncertainty in the outcome of the decision alternatives and points to risks that may need to be managed.
  • Multiple Attributes – Larger numbers of attributes cause a larger need for formal analysis.
  • Diversity of Stakeholders – Attention is warranted to clarify objectives and formulate performance measures when the set of stakeholders reflects a diversity of values, preferences, and perspectives.

Reducible and Irreducible Uncertainty

All project work is probabilistic driven by underlying statistical processes that create uncertainty. [2] There are two types of uncertainty on all projects. Reducible (Epistemic) and Irreducible (Aleatory).

Aleatory uncertainty arises from the random variability related to natural processes on the project - the statistical processes. Work durations, productivity, variance in quality. Epistemic uncertainty arises from the incomplete or imprecise nature of available information - the probabilistic assessment of when an event may occur.

There is pervasive confusion between these two types of uncertainties when discussing the impacts on these uncertainties on project outcomes, including the estimates of cost, schedule, and technical performance. 
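One way to keep the two types straight is that they call for different responses: aleatory variability is protected with margin, while epistemic risk can be bought down with a specific mitigation. A minimal sketch, with all numbers illustrative:

```python
import random

# Sketch: handling aleatory vs. epistemic uncertainty differently.
random.seed(11)

# Aleatory (irreducible): a task's duration varies naturally.
# Protect the date with margin above the Most Likely value (12 days).
durations = sorted(random.triangular(10, 18, 12) for _ in range(10_000))
margin = durations[int(0.85 * len(durations))] - 12   # margin to reach P85

# Epistemic (reducible): a 30% chance a vendor interface is wrong,
# costing 15 days; a 4-day mitigation (prototype it) retires the risk.
risk_exposure = 0.30 * 15          # expected days lost if unmitigated
mitigation_cost = 4
buy_down_worthwhile = mitigation_cost < risk_exposure

print(f"Aleatory margin to P85: {margin:.1f} days")
print(f"Epistemic buy-down worthwhile: {buy_down_worthwhile}")
```

Note both branches require estimates: the duration spread for the margin, and the probability and consequence for the buy-down decision.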

All The World's a NonLinear, Non-Stationary Stochastic Process, Described by 2nd Order Non-Linear Differential Equations.

In the presence of these conditions – and software development exhibits them – we need to understand several things for success. What are the coupled dynamics? What are the probabilistic and statistical processes that drive these dynamics? And how can we make decisions in their presence?

Predictive Analytics of Project Behaviors

In the presence of uncertainty, the need to predict future outcomes is critically important. One of the professional societies I belong to has a presentation on this topic. Here's a small sample of a mature process for estimating future outcomes given past performance. If you back up the URL, you'll see all the briefings on the topic of cost, schedule, and performance management used in the domains I work in.

Screen Shot 2015-09-17 at 8.07.02 AM


[1] Risk Informed Decision Making Handbook, NASA/SP-2010-576 Version 1.0 April 2010.

[2] "Risk-informed decision-making in the presence of epistemic uncertainty," Didier Dubois, Dominique Guyonnet,  International Journal of General Systems, Taylor & Francis, 2011, 40 (2), pp.145-167.

Categories: Project Management

The Principles of Business Management

Thu, 09/17/2015 - 15:05

In business we exchange cost for value. This value is defined by the market in most cases. It can be defined by those paying, if what they are buying is a purpose-built piece of software.

In the for-profit world, revenue from the sale of the product or service, minus the cost to produce that revenue, is profit (in a general form). In the non-profit world, earnings are needed to fulfill the mission of the firm, so profit per se is not the goal of those providing the product or service in exchange for the cost to do that. I work in both profit and non-profit domains. In both domains, the cost to produce the value needs to be covered by income from some source.

In both domains, the writing of software used by the customer is our primary cost. Those customers pay us for the software; we pay the employees that produce the software. Those customers have an expectation that the software will meet their needs in terms of capabilities, performance, effectiveness, and many other ...ilities in support of their business or mission.

These expectations take several forms:

  • Arrival Date - When can we expect to receive the latest fixes to the code, for the defects we identified and turned in to you 3 weeks ago? In what release?
  • A Capability to Do Something - Those features you spoke about at the User Conference: when do you expect we'll be able to get a look at them in Beta form, so we can see how they will impact our business process?
  • The Ability to Manage Change - We just received notification that our customer (the customer of the customer) will be switching to a new security payment token system. When can you validate that your system will be compliant with that notification?
  • A Forecasted Cost and Schedule - We've just been awarded a contract to provide features that we think you can provide in your product. Do you have a product roadmap showing when the needed features in the attached RFP and contract award document will be ready for use in the system we are proposing to our new customer?

These types of questions are the norm for all businesses that convert money into products or services. Whether we're bending metal into money or typing on keyboards to produce money the core principles of converting that money into more money is the same. 

These business processes require making decisions in the presence of uncertainty.

There is a discipline for this process. It is Operations Research. This is how UPS defines the routes of its trucks every day, how the local dairy plans the production run for milk and gets it delivered as planned, how airlines plan and execute today's routing with the right crews, fuel, and working hardware, how roads are built, how high-rises go up, how Target gets the goods to the store, and – wait for it – how software and hardware are built and delivered on a planned schedule for a planned cost to meet the planned needed capabilities of those paying for those products, when all the processes to do this have probabilistic behaviors.

Those conjecturing that these decisions can be made without estimates need to provide a testable example that does not violate the principles of the microeconomics of decision making and the managerial finance governance processes of their business.

How would the opportunity cost decisions, Net Present Value decisions (a calculation that compares the amount invested today to the present value of the future cash receipts from the investment), or Economic Value Added decisions (an estimate of a firm's economic profit – the value created in excess of the required return of the company's investors) be made without an estimate of the future outcome of that decision?
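An NPV calculation makes the point directly – every input is itself an estimate of a future outcome. A minimal sketch, with illustrative cash flows and discount rate:

```python
# Sketch: Net Present Value of a project investment.
# The investment, cash receipts, and discount rate are hypothetical -
# and every one of them is an estimate of a future outcome.
investment = 100_000                                 # spent today
cash_receipts = [30_000, 40_000, 50_000, 40_000]     # estimated, years 1-4
discount_rate = 0.10

npv = -investment + sum(
    cf / (1 + discount_rate) ** year
    for year, cf in enumerate(cash_receipts, start=1)
)
print(f"NPV: ${npv:,.0f}")   # positive NPV supports making the investment
```

Remove the estimates of future cash receipts and the calculation cannot be performed at all – there is no NPV, EVA, or opportunity cost without an estimate.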

Without these, those making the conjecture, and even selling seminars on how to make decisions without estimates, have no way to be tested in an actual business environment. Tested by those actually paying for the work, not conjectured by those spending the money of those paying for the work.

Related articles: Why Guessing is not Estimating and Estimating is not Guessing; Architecture-Centered ERP Systems in the Manufacturing Domain; IT Risk Management
Categories: Project Management

Idiot Wind

Thu, 09/17/2015 - 00:41

On the way home last week from a program managers conference, I was listening to Bob Dylan's Idiot Wind.

Everything's a little upside down, as a matter of fact the wheels have stopped. What's good is bad, what's bad is good. - Idiot Wind, Bob Dylan, Blood on the Tracks, Copyright 1978

Reminds me of the current discourse of #NoEstimates:

  • Estimates are Bad, #NoEstimates are Good - you can get out from under the oppression of deadlines, bad managers, and commitments by making decisions with No Estimates.
  • #NoEstimates are good, they can increase your productivity ten fold. That's 10X. That's an order of magnitude increase.
  • Forecasts are #NoEstimates and that's good. Estimating is not the same as forecasting. Estimating is bad since we can't possibly determine what's going to happen in the future. But we can forecast what's going to happen in the future and call that #NoEstimates.
  • Commitments are bad, commitments result from estimates, and that is bad. Commitments ruin the collaborative aspects of the project and that is bad. No committing to each other for a shared outcome is good.
  • Making decisions with No Estimates is good, asking when we'll be done and how much it will cost is Bad.
  • Knowing when you're wrong is Good, determining the probabilistic confidence of all estimates, updating those estimates with new data from performance and emerging uncertainty, Bad.
  • Making predictions about the future using past performance and calling that #NoEstimates Good. Using past performance, adjusted for future possibilities and calling that Estimating, Bad.
  • It's bad to have a backlog of needed work, estimating that backlog is more bad. Revising the backlog with updated information is the worst Bad. Having no ability to know if you can meet the need date for the needed cost is Good.
  • Changes for 50% of the requirements in the Backlog is Good, because it's just the way it is. Managing the Backlog like an adult and considering changes on the need date and cost, is Bad.
  • Comparing yourself to Kent Beck is Good, and being called crazy is Good, while ignoring that there are 100's of 1,000's of people applying Kent Beck's processes. This is called the Galileo Gambit.
  • Focusing on Value is Good. Asking what's the cost to produce that Value and when that Value will be available for use so we can start recovering that cost is Bad.
  • Estimating means you're getting married to a Plan, and that is Bad. Ignoring that when Plans change, and they always do since a Plan is a Strategy for success, the Estimate to Complete needs to be updated is Good.

The more those in the #NoEstimates community try to convince others that Estimating is Bad, can't be done, and results in a smell of dysfunction, the more Bob Dylan resonates.

We’re idiots, babe
It’s a wonder we can even feed ourselves

We in the management of other people's money domain must be, since we must have missed the suspension of the Microeconomics of Software Development when making decisions. We must have missed the suspension of Managerial Finance applied when we're asked to be stewards of the money our customers have given us to provide value for the needed cost on the needed date. We must have missed the suspension of the need to know when and how much, so our Time Phased Return on Investment doesn't get a divide by Zero error.

So, like most of Dylan's lyrics, these claims make no sense unless you're willfully ignorant of the principles of business management. Perhaps #NoEstimates is in that category.

Categories: Project Management

Slicing Work Into Small Chunks Is Not Without Hazard

Wed, 09/16/2015 - 14:56

You think that because you understand one that you must therefore understand two because one and one make two. But you forget that you must also understand and - Sufi teaching story

The elements of a system, the software system being built, are the easiest parts to recognize. They're visible and tangible because they're in the backlog and scheduled for development. If we look closer - and accept the fact that these elements have interactions with each other - there is more to the solution than a pile of stories being implemented as slices of larger elements of needed capabilities.

The intangibles of the system, the interactions between these elements (slices), the realtime behaviors that produce or consume data, the emergent behaviors of the system resulting from the evolution of the system state from the systems execution are also critical to success.

If we only consider the sliced elements of the system, there is no end to the process. How small is too small? What is the appropriate size of the slice? Not from an effort point of view, but from a systems point of view? More importantly, what are the interactions between the sliced elements? This is dependent on the slices and their interfaces, and on the interconnections, the relationships that hold the sliced elements together.

Without considering these interconnections and the dependencies at the slice points (this is a Cut Set optimization problem), simply saying slicing provides benefits to estimating and execution has no actual basis in practice.

Here's how to tell the difference between an actual systems view and just a pile of sliced work:

  • Can we identify the actual parts of the system, down to the Atomic level? Not atoms of course, but actual standalone parts whose further decomposition (slicing) adds no value and in fact may create more complexity.
  • Do we know how these parts - the sliced work outcomes - affect each other? Both statically and dynamically?
  • Do we know how these parts - the sliced work outcomes - produce effects that are different than the effect produced by themselves as standalone parts?
  • Do we know how this integrated effect behaves over time to meet the actual needed capabilities the system is supposed to provide to those paying the bills?

Many of the interconnections in the system operate through the flow of information. This information holds the system together and enables the system to operate as needed.

Slicing is only useful if it answers the questions above and, most importantly, those sliced parts fit in the overall structure of the system (the system architecture, both static and dynamic) to provide the customer with the needed capabilities at the needed time, for the needed cost, and deliver the needed performance and effectiveness from those capabilities.

The least obvious part of any system - its function or purpose - is often the most crucial determinant of the system's behavior and its resulting success - Thinking in Systems: A Primer, Donella H. Meadows.

Take care so as to not fall under the siren song of simple approaches.

I have yet to see any problem, however complicated, which, when looked at in the right way, did not become more complicated - Poul Anderson, quoted in Arthur Koestler, The Ghost in the Machine.

Take care when slicing to make sure you understand the system, the interactions of the elements, and the outcomes of those interactions, so that you don't break the topological structure needed to assure the proper flow of value to those paying for your work.

Categories: Project Management

Basis of Assessment for Current Approaches with 32 Year Old References

Tue, 09/15/2015 - 11:51

It is popular to use several references to the estimating problem that are three to four decades old.

Much has happened in the last three to four decades to increase the accuracy and precision of software development estimates:

  • Databases of past performance
  • Modeling algorithms - Monte Carlo simulation, method of moments
  • Reference Class forecasting - from past performance and systems engineering models in SysML; Design Structure Matrix with Monte Carlo Simulation
  • Parametric models - calibrated parametric models adjusted to the work
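
A hedged sketch of the Monte Carlo approach listed above, assuming hypothetical three-point (min, most likely, max) task durations:

```python
import random

# Hypothetical three-point estimates (min days, most likely, max) per task.
tasks = [(3, 5, 10), (8, 12, 20), (2, 4, 9)]

def simulate(tasks, trials=10_000):
    """Sample each task from a triangular distribution and sum the
    draws, giving a distribution of total project duration."""
    totals = [sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
              for _ in range(trials)]
    return sorted(totals)

totals = simulate(tasks)
p80 = totals[int(0.80 * len(totals))]  # 80th-percentile completion time
print(f"80% confident the work completes within {p80:.1f} days")
```

The answer is a probability distribution with a confidence level, not a single point number.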

So when we hear there is a problem with estimating and the basis of that claim is 30 to 40 year old reports, we need to be skeptical at best. When those claims are used to sell a book, a workshop, or an entire idea, then some serious questions need to be asked.

Is there any understanding at all of current software estimating techniques as applied with tools and databases to modern systems, not 40 year old FORTRAN systems?

While there are huge issues with estimating any complex emergent system, identification of the root cause of the problem has not been done by those conjecturing that Not Estimating is the solution. This Root Cause Analysis has been done for modern complex systems, and the cause has been found to be one of three sources.

The principles of cost and schedule estimating, and the assessment of the related technical and programmatic gaps, are the same in all domains at every scale, from small projects to billion dollar programs. Why? Because it's the same problem no matter the scale.

  1. We didn't know
    • We didn't do our homework
    • We ignored what others have told us
    • We ignored the past performance in the same domain
    • We ignored the past performance in other domains
    • We just weren't listening to what people were telling us
    • Our models of cost and schedule growth were bogus, unsound, did not consider the risks, or we just made them up
  2. We couldn't know
    • We didn't have enough time to do the real work needed to produce a credible estimate
    • We didn't have sufficient skills and experience to produce a credible estimate
    • We didn't understand enough about the problem to have our estimate represent reality
    • We chose not to ask the right questions
    • We chose not to listen
    • We chose not to do our homework, or worse, chose not to do our job
    • Since we're spending other people's money, we've decided it's not our job to know something about how much it will cost and when we'll be done to some level of confidence. We'll let someone else do that for us and use their estimates in our work.
  3. We didn't want to know
    • "You can't handle the truth," as Jack Nicholson's character Col. Nathan Jessep so clearly states in A Few Good Men.
    • As the political risk and consequences of the project increase this process becomes more common.

But here's the way out of the trap for at least (1) and (2) 

  1. We didn't know
    • Do your homework. Look for reference classes for the work you're doing.
    • Come up with an estimate based on credible processes: Wide Band Delphi, 20 questions; there are lots of ways to narrow the gap between the upper and lower bounds on the estimate
  2. We couldn't know
    • Bound the risks with short cycle deliverables.
    • This is called agile
    • It's also called good engineering as practiced in many domains, from DOD 5000.02 to small team agile development
  3. We don't want to know
    • Well, there's no way out of that short of being King.
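
One simple way to narrow the gap between the upper and lower bounds, as suggested above, is the classic PERT three-point estimate. The durations here are hypothetical:

```python
def pert(optimistic, most_likely, pessimistic):
    """PERT (beta) three-point estimate: the expected value weights the
    most likely case 4x; sigma spans 1/6 of the optimistic-pessimistic gap."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical bounds, e.g. from a Wide Band Delphi session.
e, s = pert(10, 15, 28)
print(f"expected {e:.1f} days, std dev {s:.1f} days")
```

The spread between optimistic and pessimistic is where the credibility of the estimate lives; Delphi rounds exist to shrink it.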

So take care when you hear about problems from the past, long ago, possibly before those conjecturing the problem and the solution were born.

Categories: Project Management

The Myth of INVEST and Actual Systems

Mon, 09/14/2015 - 14:07

In agile there is a mnemonic, INVEST. This term is one of those Holy Grails that is never subject to assessment within the agile community. I had a hands-on experience with an agile tools vendor when we were selecting tools for a DOD program. When speaking with the gurus at the tool vendor, we mentioned multiple resources assigned to a single task and the interdependence of tasks and their deliverables.

You'd have thought the devil himself had walked into the room. In systems there are always interdependencies, and the work requires multiple skills working together on those interdependencies.

A reminder of INVEST:

  • I = Independent - The user story should be self-contained, in a way that there is no inherent dependency on another user story.
  • N = Negotiable - User stories, up until they are part of an iteration, can always be changed and rewritten.
  • V = Valuable - A user story must deliver value to the end user.
  • E = Estimable - You must always be able to estimate the size of a user story.
  • S = Small - User stories should not be so big as to become impossible to plan/task/prioritize with a certain level of certainty.
  • T = Testable - The user story or its related description must provide the necessary information to make test development possible.

The irony here is that those suggesting pure agile doesn't require estimating seem to have missed the E (Estimable) in INVEST.

But here's the issue...

In our domain, we work on systems. Others may work on a bunch of stuff. Here's how to tell the difference:

  1. Can you identify the parts?
  2. Do these parts affect each other?
  3. Do the parts together produce an effect that is different from the effect of each part on its own?
  4. Does the effect, the behavior over time, persist in a variety of circumstances?

If the I in INVEST is in fact true, then you're likely working on a bunch of stuff, not a system. A bunch of stuff is likely de minimis in ways systems are not.

You think that because you understand "one" that you must therefore understand "two," because one and one make two. But you forget that you must also understand "and" - Sufi teaching story

The notion of decomposing the work - slicing - into small chunks needs to be tested against the system's requirement to also develop and manage the interconnections between these sliced chunks of work. Interconnections in a tree are the physical flows and chemical reactions that govern the tree's metabolic processes. Similar interconnections occur in software systems.

Slicing work below the level of these interconnections of the system elements loses sight of the interdependencies and therefore loses sight of the system.

Literally you can't see the forest for the trees.

It is the management of these interdependencies that is the Critical Success Factor for increasing the probability of success for the project. Be very careful about falling for the holy grail of slicing without also maintaining visibility of the system, its operations, and the interdependencies between all the elements and all the work needed to produce those elements.

Categories: Project Management

The Hard Headed Realism of Business Decision Making

Sun, 09/13/2015 - 17:19

I am rarely the person directly in charge of the business itself (CEO, CIO, CTO). A department, yes (PMO, Director); the whole business, no. I work for CEOs, CIOs, Program Managers, Policy Directors. What I have learned from all these leaders is both simple and complicated.

They have a hard headed view of how business works. Revenue comes in. The cost to produce that revenue is known ahead of time. Surprises in this cost are not welcome. Everyone talks to each other in numbers: accounting speaks in single point values; business people and finance people speak in probability and statistics.

All the world's a random process: evolving, impacted by externalities outside the control of the process, with non-linear interactions among the components of the system.

Decision making in the presence of these conditions requires several attributes for success:

  1. What are the underlying behaviors of the system in terms of statistical processes? Are the processes stationary or are they time dependent or dependent on some other processes?
  2. What are the parameters of the system that are first order impacts on the decisions? 
  3. In the presence of naturally occurring and event based uncertainties, business decision makers depend on estimates of the outcomes of their decisions. To make a decision in the presence of uncertainty means assessing the probabilistic outcomes of that decision.
  4. By definition, if you are making decisions in the presence of uncertainty, you are estimating the outcomes. If you're not estimating, or have redefined what you're doing as #NoEstimates when in fact it is estimating, then the only thing left is guessing. And even guessing is a 50/50 estimating technique at worst.
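
A minimal sketch of points 3 and 4: assessing the probabilistic outcomes of a decision by expected monetary value. The alternatives, probabilities, and payoffs are invented for illustration:

```python
# Hypothetical build-vs-buy decision: each alternative is a list of
# (probability, payoff) pairs describing its estimated outcomes.
alternatives = {
    "build": [(0.6, 600), (0.4, -200)],
    "buy":   [(0.9, 250), (0.1, -50)],
}

def emv(outcomes):
    """Expected monetary value: the probability-weighted payoff."""
    return sum(p * payoff for p, payoff in outcomes)

best = max(alternatives, key=lambda name: emv(alternatives[name]))
print(best, {name: round(emv(o), 2) for name, o in alternatives.items()})
```

Notice that both the probabilities and the payoffs are estimates; without them the comparison cannot be made at all.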

Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations messes.... Managers do not solve problems, they manage messes - Russell Ackoff, "The Future of Operational Research is Past," Journal of the Operational Research Society, 30, No. 2 (February 1979), pp. 93-104


Categories: Project Management

The Water Fall Myth

Sun, 09/13/2015 - 17:06

It is popular in some parts of the agile community to use Waterfall as the bogeyman for all things wrong with the management of software projects. As one who works in the Software Intensive Systems domain on System of Systems programs, Waterfall is an approach that was removed from our guidance a decade and a half ago. But first, some definitions from the people who actually invented these life cycles, not those critical of the process who are unlikely to have accountability for showing up on time, on budget, on spec.

  • Waterfall Approach: Development activities are performed in order, with possibly minor overlap, but with little or no iteration between activities. User needs are determined, requirements are defined, and the full system is designed, built, and tested for ultimate delivery at one point in time. A document-driven approach best suited for highly precedented systems with stable requirements.
  • Incremental Approach: Determines user needs and defines the overall architecture, but then delivers the system in a series of increments ("software builds"). The first build incorporates a part of the total planned capabilities; the next build adds more capabilities, and so on, until the entire system is complete.
  • Spiral Approach: A risk-driven controlled prototyping approach that develops prototypes early in the development process to specifically address risk areas, followed by assessment of prototyping results and further determination of risk areas to prototype. Areas that are prototyped frequently include user requirements and algorithm performance. Prototyping continues until high risk areas are resolved and mitigated to an acceptable level.
  • Evolutionary Approach: An approach that delivers capability in increments, recognizing up front the need for future capability improvements. The objective is to balance needs and available capability with resources and to put capability into the hands of the user quickly.

The first criticism of Waterfall came from Dr. Winston Royce, "Managing the Development of Large Software Systems," Proceedings, IEEE WESCON, August 1970, pages 1-9, originally published by TRW. Notice the design iterations in the paper.

Royce’s view of this model has been widely misinterpreted: he recommended that the model be applied after a significant prototyping phase that was used to first better understand the core technologies to be applied as well as the actual requirements that customers needed!

The DOD replaced the Waterfall-based DOD-STD-2167 with MIL-STD-498.

TRW (where Royce worked) was an early adopter of Iterative and Incremental Development (IID); the related Spiral Model was originated by Dr. Barry Boehm in the mid 1980s. The first work on IID programs was taking place in the mid 1970s. A large and successful program using IID at IBM Federal Systems Division was the U.S. Navy helicopter-ship system LAMPS, a 4-year, 200 person-year effort involving millions of lines of code. This program was incrementally delivered in 45 time boxed iterations (1 month per iteration).

The project was successful: "Every one of those deliveries was on time and under budget," in "Principles of Software Engineering," Harlan Mills, IBM Systems Journal, Vol 19, No 4, 1980, where he says...

The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what was being learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and use of the system, where possible. Key steps in the process were to start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made along with adding new functional capabilities.

Many in the agile community use these words, perhaps without ever having read the 1980 description of how complex software intensive systems were developed at IBM FSD and TRW.

In 1976 Tom Gilb stated:

"Evolution" is a technique for producing the appearance of stability. A complex system will be most successful if it is implemented in small steps and if each step has a clear measure of successful achievement as well as a "retreat" possibility to a previous successful step upon failure. You have the opportunity of receiving some feedback from the real world before throwing in all resources intended for a system, and you can correct possible design errors.

The Incremental Commitment Spiral Model is applied to Software Intensive Systems in the DOD. And agile development entered the domain in 2014 with the connections between Earned Value Management and Agile Development.

Without an understanding of the history of development life cycles, many - most recently the #NoEstimates community - use Waterfall as the stalking horse for all things wrong with software development other than their approaches.

So when you hear the Red Herring Fallacy that Waterfall is the evil empire, ask if the person making that claim has done his homework, worked on any Software Intensive System of Systems, or has experience being accountable for the delivery of mission critical, can't-fail systems. Probably not. Just personal anecdotes yet again.

Categories: Project Management

To All Who Lost Their Lives on this Day

Fri, 09/11/2015 - 21:09


Categories: Project Management

Why We Need To Estimate Software Cost and Schedule

Tue, 09/08/2015 - 23:22

There is no good way to perform a software cost‚Äźbenefit analysis, breakeven analysis, or make‚Äźor‚Äźbuy analysis without some reasonably accurate method of estimating software costs and their sensitivity to various product, project, and environmental factors. ‚Äź Dr. Barry Boehm

The previous post on Source Lines of Code set off a firestorm from the proponents of #NoEstimates.

I'd rather not estimate than estimate with SLOC 

Or my favorite, since we work in the domains of flight avionics (command and data handling (C&DH) and guidance navigation and control (GN&C)), fire control systems, fault tolerant process control with the diagnostic coverage needed for process safety management, and ground data and business process systems for both aircraft and spacecraft:

I'm no longer going to fly with any company that counts LOC as (it) shows a lack of intelligence. †

So the question is: where and when is estimating source lines of code useful for making business decisions?

Embedded Software Intensive Systems

In the embedded systems business, memory is fixed, and processor speed is hardwired and many times limited by thermal constraints. Aircraft and spacecraft avionics bays have limited cooling, so getting a faster processor has repercussions beyond its purchase cost. In an aircraft, cooling must be added, increasing weight and possibly impacting the center of gravity. In a spacecraft, cooling is not done with fans and moving air. There is no air. Heat pipes and radiators are needed, again adding weight.

For those whose experience is rapid development of small chunks of code that get released often to the customer for incremental use in a business process, which then provides feedback for the next sliced piece of functionality, concerns like the center of gravity, the thermal load, or the realtime critical path of the executing code (maintaining the realtime closed loop control algorithm so we don't crash into the end of the runway or onto the surface of a distant planet) are probably not in their vocabulary.

Business and Processing Systems

For terrestrial systems, even business processing systems, the number of lines of code has a direct impact on cost and schedule. Let's start with a source code security analyzer. Those whose skill is rapidly chunking out pieces of useful functionality aren't likely to be interested in running all their code through a security analyzer before even starting the compile and checkout process.

A source code security analyzer examines source code to detect and report weaknesses that can lead to security vulnerabilities.

They are one of the last lines of defense to eliminate software vulnerabilities during development or after deployment. Like all things mission critical, there is a specification: Source Code Security Analysis Tool Functional Specification Version 1.1, NIST Special Publication 500-268, February 2011.

Development and Product Maintenance 

A recent hands-on experience with the need to know the SLOC comes from a refactoring effort to remove all the reflection from a code base. For those not familiar with reflection: it provides objects that describe assemblies, modules, and types. Reflection dynamically creates an instance of a type, binds the type to an existing object, or gets the type from an existing object and invokes its methods or accesses its fields and properties. If you are using attributes in your code, reflection enables you to access them.

This is a clever way to build code in a rapidly changing requirements paradigm. A bit too clever in our high performance transaction processing system.

In larger production transaction processing systems, it's a way to crater the performance of the code by searching for object types on every single call in the transaction.
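
The original system was a .NET code base; as a rough analogy only, the same per-call lookup cost can be demonstrated in Python, where `getattr` plays the role of reflective dispatch (timings vary by machine, so none are asserted):

```python
import timeit

class Txn:
    def process(self):
        return 1

t = Txn()
# Direct call: the method reference is bound once, outside the loop.
direct = timeit.timeit(t.process, number=1_000_000)
# Reflective path: the method is looked up by name on every call,
# analogous to reflection-based dispatch per transaction.
reflective = timeit.timeit(lambda: getattr(t, "process")(), number=1_000_000)
print(f"direct {direct:.3f}s vs reflective {reflective:.3f}s per million calls")
```

The per-call difference looks tiny until it is multiplied by every transaction on every server in the farm.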

Removing all the reflection code structures eliminated a huge percentage of the CPU time, memory requirements, and database performance impacts - along with separating all the DB logic into Stored Procedures - resulting in the decommissioning of large chunks of the server farm running a very large public health application.

How long is it going to take to refactor all this code? I know, let's make an estimate by counting the lines of code. Do a few conversions from the current design (reflection), count how long that took, divide the total lines of code (objects and their size) by that rate, and we have an Estimate to Complete. Add some margin and we'll know approximately when the big pile of crappy code can get rid of the smell of running fat, slow, and error prone.
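
The estimating recipe just described, sketched with hypothetical numbers (the pilot size, total size, and margin are all assumptions):

```python
# Hypothetical calibration run: 1,200 of 150,000 affected lines were
# converted away from reflection in 10 hours of work.
pilot_sloc, pilot_hours = 1_200, 10
total_sloc = 150_000
margin = 1.25                      # assumed schedule margin for surprises

rate = pilot_sloc / pilot_hours    # lines refactored per hour
etc_hours = (total_sloc - pilot_sloc) / rate * margin
print(f"Estimate to Complete: {etc_hours:.0f} hours")
```

A crude reference-class approach: calibrate on a sample of the actual work, then extrapolate with margin.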

High Performance Embedded Mission Systems

High Performance Embedded Systems are found everywhere. Current estimates show they outnumber desktop and server systems 100 to 1. Most of these systems have ZERO defect goals, as well as ZERO tolerance for performance shortfalls, processing disruptions, and other reset conditions.

How do we have any sense that the code base is capable of meeting these conditions? Testing of course is one way, but exhaustive testing is simply not possible. In a past life, verification and validation of the code was the method, and it is still the method. Along with that is the cyclomatic complexity assessment of the code base, another activity not likely to be of much interest to those producing small chunks of sliced code to rapidly satisfy the customer's emerging, and possibly unknowable, needs until they see it working.
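
For the flavor of a cyclomatic complexity assessment, here is a rough sketch of McCabe's M = D + 1 (one plus the number of decision points); production analyzers do considerably more:

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe cyclomatic complexity: 1 + branch points."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, branch_nodes)
                   for node in ast.walk(ast.parse(source)))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2:
            return "odd seen"
    return "even only"
"""
print(cyclomatic_complexity(sample))  # two ifs + one for -> complexity 4
```

Complexity per module, summed over the code base, is one of the size-driven inputs to reliability and test-coverage estimates.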

So In The End

Unless we suspend the principles of Microeconomics and Managerial Finance when making management decisions in the presence of uncertainty, we're going to need to estimate the outcomes of our decisions.

This process is the basis of opportunity cost: what is the cost of one decision over the others? If I make Decision A, what is the cost of NOT making Decision B or C? This LOST opportunity is the cost of choice.
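
That definition reduces to a one-line computation once the alternatives' outcomes have been estimated. The returns below are hypothetical:

```python
# Hypothetical estimated returns for three mutually exclusive decisions.
returns = {"A": 120, "B": 150, "C": 90}

chosen = "A"
# Opportunity cost: the value of the best alternative NOT taken.
best_forgone = max(v for k, v in returns.items() if k != chosen)
opportunity_cost = best_forgone - returns[chosen]
print(f"Choosing {chosen} costs {opportunity_cost} versus the best alternative")
```

Without estimates of the forgone alternatives' returns, opportunity cost is undefined.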

Unless we suspend the principles of probability and statistics when applied to networks of interrelated work, we're not going to be able to make decisions without making estimates.

In the four examples above, from direct hands-on experience, Source Lines of Code are a good proxy for making estimates about cost and schedule, as well as about the complexity of the code base when computing the inherent reliability and fault tolerance of the applications on which our daily lives depend: from flight controls in aircraft, to process control loops in everything under computer control, including the computers themselves, to the assurance that the code we write is secure and will behave as needed.

If you hear some unsubstantiated claim that SLOC is of no use in estimating future outcomes, ask: when you were working on a system where failure is not an option, did those paying for that system tell you they didn't need to estimate the outcomes of their decisions? Haven't worked in that environment? You may want to do some exploring of your own to see some of the many ways estimates are made, and how SLOC is one of those, in Software Intensive Systems Cost and Schedule Estimating. This document is an example of how SLOC is used in systems that are sensitive to size and performance based on the size of the code base. Take a read and possibly see something you may not have encountered before. It may not be your domain, but embedded systems outnumber desktop and server side systems 100 to 1.

One final thought about Software Intensive Systems and their impact on larger software development processes: the introduction of Agile Development in these domains. Progress is being made in the integration of Agile with large systems acquisition processes. Here's a recent briefing in a domain where systems are engineered; systems we depend on to work as specified every single time.



† It's going to be a long walk for the poster of that nonsense idea. Oh yeah, those building Positive Train Control are also realtime embedded systems developers, and they use SLOC to estimate timing, testing, complexity, and many other ...ilities. Same with auto manufacturers. Maybe the Nike shoe company doesn't. So enjoy the walk. And BTW, that OP deleted his post, but worry not, I got a screen capture.


Categories: Project Management

SLOC (Source Lines of Code)

Sun, 09/06/2015 - 05:53

In some "points of view" the notion of measuring software development parameters with Source Lines of Code is equivalent to the devil incarnate. This is of course another POV that makes little sense without understanding the domain and context. It's one of those irrationally held truths that has been passed down from on high by those NOT working in the domains where SLOC is a critical measure of project and system performance. 

In the embedded real-time systems domain (Software Intensive Systems), where the number of systems and their related code base exceed the desktop and server-side code base by 100X, the number of lines of code in a system is a direct measure of predicted cost and schedule, as well as predicted performance. Estimating in the presence of uncertainty for Software Intensive Systems is a critical success factor.

For some background on software intensive systems...

The importance of embedded systems is undisputed. Their market size is about 100 times the desktop market. Hardly any new product reaches the market without embedded systems any more. The number of embedded systems in a product ranges from one to tens in consumer products and to hundreds in large professional systems. […] This will grow at least one order of magnitude in this decade. […] The strong increasing penetration of embedded systems in products and services creates huge opportunities for all kinds of enterprises and institutions. At the same time, the fast pace of penetration poses an immense threat for most of them. It concerns enterprises and institutions in such diverse areas as agriculture, health care, environment, road construction, security, mechanics, shipbuilding, medical appliances, language products, consumer electronics, etc. (Embedded Systems Design: The ARTIST Roadmap for Research and Development, ed. Bruno Bouyssounouse and Joseph Sifakis, Lecture Notes in Computer Science Vol. 3436, Berlin/Heidelberg, 2005, p. 72)

There are some who are repelled by the notion of counting lines of code, or of estimating the number of lines of code needed to produce the required capabilities. But that's the domain problem again.

Databases exist that correlate SLOC with cost and schedule for business systems, for engineering systems, and for real-time systems.

So like it or not, consider it the devil incarnate or not, the numbers talk.
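To make the parametric idea concrete, here is a minimal sketch using the Basic COCOMO model, with Boehm's published coefficients, which maps estimated code size directly to effort and schedule via power laws. It is illustrative only; the reference class databases above support far richer, calibrated models.

```python
# Basic COCOMO (Boehm, 1981): effort and schedule as power-law functions
# of estimated size in thousands of source lines of code (KSLOC).
# For each mode: (a, b) give effort in person-months, (c, d) give
# schedule in calendar months from that effort.
COCOMO_MODES = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(ksloc, mode="embedded"):
    a, b, c, d = COCOMO_MODES[mode]
    effort = a * ksloc ** b        # person-months
    schedule = c * effort ** d     # calendar months
    return effort, schedule

effort, months = basic_cocomo(50, "embedded")
print(f"50 KSLOC embedded: ~{effort:.0f} person-months over ~{months:.0f} months")
```

Note the exponents: in embedded mode effort grows faster than linearly with size, which is exactly why an estimate of the code base size carries so much predictive information.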

Predicting the computer performance requirements of a completed system early in its design and development lifecycle is challenging. Software requirements and avionics or hardware systems often mature in parallel, and in the early stages of design, uncertainty about meeting the performance requirements makes it difficult to determine the processing architecture.

Later in the design process, as details are finalized and prototypes can be developed, estimates of performance, cost, and schedule become increasingly accurate. But if we wait until later in the lifecycle to make architectural changes, those changes are much more costly. They also introduce schedule and technical risks.

The earlier the performance needs are determined and the corresponding system architecture is established, the easier it is to incorporate an appropriate computing platform (hardware and software) into the design.

A direct example I'm familiar with is NASA's Orion Crew Exploration Vehicle flight software. That approach uses available requirements documentation as the basis of estimate, decouples input/output (I/O) processing from computation-based processing, estimates each separately, and then combines them into a final result.

This approach was unique in that it was used to estimate the execution time of unwritten or partially specified software, in addition to giving a specific contribution for I/O, as well as estimating the time needed to develop the code and therefore its cost. The method for estimating I/O processing performance was based on quantifying data volumes, and the method for estimating algorithmic processing was based on approximated code size.

The result was used to predict processor types and quantities, allocate software to processors, predict communication bandwidth utilization, and manage processor margins. ("Requirements-based execution time prediction of a partitioned real-time system using I/O and SLOC estimates," Innovations in Systems and Software Engineering, Volume 8, Issue 4, December 2012, pp. 309-320)
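A hedged sketch of the decoupled idea described above: estimate I/O time from data volume, estimate computation time from approximated code size, and combine them against a frame budget to check processor margin. All the constants here (bus bandwidth, instructions per source line, processor MIPS, and the workload numbers) are invented for illustration and are not from the Orion work.

```python
# Illustrative constants -- assumptions, not measured values.
BUS_BYTES_PER_SEC = 10e6   # assumed I/O bandwidth
INSTR_PER_SLOC = 8         # assumed machine instructions per source line
PROCESSOR_MIPS = 200       # assumed processor throughput (millions instr/sec)

def frame_utilization(io_bytes, est_sloc, frame_period_s):
    """Fraction of one scheduling frame consumed by I/O plus computation."""
    io_time = io_bytes / BUS_BYTES_PER_SEC                      # seconds of I/O
    cpu_time = est_sloc * INSTR_PER_SLOC / (PROCESSOR_MIPS * 1e6)  # seconds of compute
    return (io_time + cpu_time) / frame_period_s

# 50 KB of sensor data and ~12,000 estimated SLOC per 20 ms (50 Hz) frame.
u = frame_utilization(io_bytes=50_000, est_sloc=12_000, frame_period_s=0.02)
print(f"estimated frame utilization: {u:.0%}")
```

Even this toy version shows why the separation matters: the I/O term and the compute term scale with different inputs (data volume vs. code size), so they can be estimated and managed independently before any code is written.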

Now, is SLOC appropriate for you? Good question. Actually a theological question in some quarters, since the received truth from the agile community is that this is never an appropriate approach. Trouble is, research shows a direct correlation between the size of a software system (both measured and estimated) and its cost and schedule.

Databases exist showing this and other parametric measures that, when used, produce estimates very useful to both business and technical management. At the ICEAA 2014 conference, a colleague and I presented a research paper showing how to apply Time Series Analysis (ARIMA) and Principal Component Analysis (PCA) to estimate the future performance of projects, and there was a briefing on the databases available for making estimates of software intensive systems.

These databases are sources of reference classes for estimating cost and schedule for business and engineering systems. So whatever your thoughts, and likely biases, SLOC is a very useful predictive tool in many domains, both business and embedded systems, with reference class databases, if you're willing to do the work to estimate the complexity of the code. If you claim it can't be done, then for you that's likely true.
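As a toy illustration of the time-series idea, here is a minimal AR(1) model, the simplest member of the ARIMA family, fit by least squares to an invented series of monthly cost performance index (CPI) values and used to forecast the next period. The ICEAA paper used full ARIMA models from standard statistical packages; this sketch only shows the shape of the approach.

```python
# Fit y[t] = c + phi * y[t-1] by ordinary least squares, then forecast
# one period ahead. Input series and result are illustrative only.
def ar1_fit(y):
    x, z = y[:-1], y[1:]                 # lagged pairs (y[t-1], y[t])
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    phi = (sum((a - mx) * (b - mz) for a, b in zip(x, z))
           / sum((a - mx) ** 2 for a in x))   # slope: persistence of trend
    c = mz - phi * mx                          # intercept
    return c, phi

# Invented monthly CPI history: a project slowly losing cost efficiency.
cpi = [1.02, 0.99, 0.97, 0.96, 0.94, 0.93, 0.93, 0.92]
c, phi = ar1_fit(cpi)
forecast = c + phi * cpi[-1]
print(f"next-period CPI forecast: {forecast:.3f}")
```

The point of the exercise: past performance data, treated as a time series, yields a defensible forecast of future performance instead of a guess, which is exactly the role the reference class databases play at larger scale.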

Categories: Project Management