Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!
Phillip Armour has a classic article titled "Ten Unmyths of Project Estimation," Communications of the ACM (CACM), November 2002, Vol. 45, No. 11. Several of these unmyths apply directly to the current #NoEstimates concept. Much of the misinformation claiming that estimating is the smell of dysfunction can be traced to these unmyths.
Mythology is not a lie ... it is metaphorical. It has been well said that mythology is the penultimate truth - Joseph Campbell, The Power of Myth
Using Campbell's quote, myths are not untrue. They are an essential truth, wrapped in anecdotes that are not literally true. In our software development domain a myth is a truth that seems to be untrue. This is the origin of Armour's unmyth.
The unmyth is something that seems to be true but is actually false.
Let's look at the three core conjectures of the #NoEstimates paradigm:
The Accuracy Myth
Estimates are not numeric values; they are probability distributions. If the probability distribution below represents the probability of the duration of a project, there is a finite minimum - some time below which the project cannot be completed.
There is a highest probability, the Most Likely duration for the project. This is the Mode of the distribution. There is a midpoint in the distribution, the Median - half the possible completion times fall below it and half above. Then there is the Mean of the distribution. This is the average of all the possible completion times. And of course The Flaw of Averages is in effect for any decisions being made on this average value †
"It is moronic to predict without first establishing an error rate for a prediction and keeping track of one's past record of accuracy" - Nassim Nicholas Taleb, Fooled By Randomness
If we want to answer the question What is the probability of completing ON OR BEFORE a specific date, we can look at the Cumulative Distribution Function (CDF) of the Probability Distribution Function (PDF). In the chart below the PDF has the earliest finish in mid-September 2014 and the latest finish in early November 2014.
The 50% probability is 23 September 2014. In most of our work, we seek an 80% confidence level of completing ON OR BEFORE the need date.
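The mechanics of reading a P50 and P80 off a CDF can be sketched by sampling a candidate PDF. Everything below is an assumption for illustration - a triangular distribution with invented durations in days - not the project's actual data:

```python
# Sketch: read P50 / P80 completion estimates off a CDF built by sampling
# an assumed PDF. The triangular shape and its (earliest, most likely,
# latest) parameters, in days, are illustrative assumptions.
import random
import statistics

random.seed(42)

earliest, most_likely, latest = 10.0, 18.0, 45.0
samples = sorted(random.triangular(earliest, latest, most_likely)
                 for _ in range(100_000))

def percentile(sorted_samples, p):
    """Return the p-th percentile (0-100) of pre-sorted samples."""
    return sorted_samples[int(p / 100 * (len(sorted_samples) - 1))]

p50 = percentile(samples, 50)        # the Median: a 50/50 date
p80 = percentile(samples, 80)        # 80% confidence of on-or-before
mean = statistics.fmean(samples)     # the Flaw of Averages lives here

print(f"mean {mean:.1f}d, P50 {p50:.1f}d, P80 {p80:.1f}d")
```

Note that for a right-skewed distribution the P80 date sits well past both the Most Likely and the Mean - which is exactly why margin is needed to protect a probabilistic commitment.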
The project then MUST have schedule, cost, and technical margin to protect that probabilistic date.
How much margin is another topic.
But projects without margin are late, over budget, and likely don't work on day one. We can't complain about poor project performance if we don't have margin, risk management, and a plan for managing both, as well as the technical processes.
So what we need is not Accurate estimates, we need Useful estimates. The usefulness of an estimate is the degree to which it helps make optimal business decisions. The process of estimating is Buying Information. The value of the estimate, like all value, is determined by the cost to obtain that information and by the opportunity cost - the difference between the business decision made with the estimate and the business decision made without it. ‡
Anyone suggesting that simple serial work streams can accurately forecast - produce an estimate of - the completion time MUST read Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte Carlo Simulation, Troy Magennis.
In this book are the answers to all the questions those in the #NoEstimates camp say can't be answered.
The Accuracy Answer
But remember, making estimates is how you make business decisions with opportunity costs. Those opportunity costs are the basis of Microeconomics and Managerial Finance.
Cone of Uncertainty and Accuracy of Estimating
There is a popular myth that the Cone of Uncertainty prevents us from making accurate estimates. We now know we need useful estimates, and those are not prevented by the cone of uncertainty. Here's the guidance we use on our Software Intensive Systems projects.
Finally in the estimate accuracy discussion comes the cost estimate. The chart below shows how cost is driven by the probabilistic elements of the project. Which brings us back to the fundamental principle that all project work is probabilistic. Modeling the cost, schedule, and probability of technical success is mandatory on any non-trivial project. By non-trivial I mean anything beyond a de minimis project - one where, if we're off by a lot, it doesn't really matter to those paying.
The Commitment Unmyth
So now to the big bugaboo of #NoEstimates. Estimates are evil, because they are taken as commitments by management. They're taken as commitments by Bad Management: uninformed management, management that was asleep in the High School Probability and Statistics class, management that claims to have a Business degree but never took the Business Statistics class.
So let's clear something up,
Commitment is how Business Works
Here's an example taken directly from ‡
Estimation is a technical activity of assembling technical information about a specific situation to create hypothetical scenarios that (we hope) support a business decision. Making a commitment based on these scenarios is a business function.
The Technical "Estimation" decisions include:
This kind of information allows us to calculate the amount of time we should allow to get there.
The Business "Commitment" and Risk decisions include:
These are the business consequences that determine how much risk we can afford to take.
Along with these of course is the risk associated with the uncertainty in the decisions. So estimating is also Risk Management and Risk Management is management in the presence of uncertainty. And the now familiar presentation from this blog.
Managing in the presence of uncertainty from Glen Alleman
Risk Management is how Adults manage projects - Tim Lister. Risk management is managing in the presence of uncertainty. All project work is probabilistic and creates uncertainty. Making decisions in the presence of uncertainty requires - mandates actually - making estimates (otherwise you're guessing, pulling numbers from the rectal database). So if we're going to have an Adult conversation about managing in the presence of uncertainty, it's going to be around estimating: making estimates, improving estimates, making estimates valuable to the decision makers.
Estimates are how business works - exploring for alternatives to estimating means willfully ignoring the needs of the business. Proceed at your own risk.
† This average notion is common in the No Estimates community: take all the past stories or story points, find the average value, and use that for the future values. That is a serious error in statistical thinking, since without the variance being acceptable, that average can be wildly off from the actual future outcomes of the project.
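This averaging error can be made concrete with a small simulation. The sprint throughput history and backlog size below are invented for illustration; the point is that the "average sprints" date is roughly a coin flip, not a commitment:

```python
# Sketch of the averaging error: forecast a backlog with the average of
# past sprint throughputs vs. bootstrapping the full history.
# Throughput history and backlog size are invented for illustration.
import random

random.seed(3)

throughput = [4, 12, 7, 3, 15, 6, 9, 2, 11, 5]   # stories done per sprint
backlog = 80                                      # stories remaining

avg = sum(throughput) / len(throughput)           # 7.4 stories/sprint
avg_forecast = backlog / avg                      # ~10.8 sprints

def sprints_to_finish():
    """Resample past sprints (bootstrap) until the backlog is exhausted."""
    done, sprints = 0, 0
    while done < backlog:
        done += random.choice(throughput)
        sprints += 1
    return sprints

runs = sorted(sprints_to_finish() for _ in range(20_000))
p80 = runs[int(0.8 * (len(runs) - 1))]
p_avg = sum(r <= avg_forecast for r in runs) / len(runs)

print(f"average says {avg_forecast:.1f} sprints; "
      f"P(done by then) = {p_avg:.0%}; P80 = {p80} sprints")
```

With this (invented) history, the average-based date has well under a 50% chance of being met, while the 80% confidence date is a couple of sprints later - the variance, not the average, is what drives the commitment.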
‡ Unmythology and the Science of Estimation, Corvus International, Inc., Chicago Software Process Improvement Network (C-SPIN), October 23, 2013.
As far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. - Andreas Osiander's (editor) preface to De Revolutionibus, Copernicus, in To Explain the World: The Discovery of Modern Science, Steven Weinberg
In the realm of project, product, and business management we come across nearly endless ideas conjectured to solve some problem or another.
Replace the word astronomy with whatever solution is being conjectured to fix some unnamed problem.
From removing the smell of dysfunction, to increasing productivity by 10 times, to removing the need to have any governance frameworks, to making decisions in the presence of uncertainty without the need to know the impacts of those decisions.
In the absence of any hypothesis by which to test those conjectures, leaving a greater fool than when entering is the likely result. In the absence of a testable hypothesis, any conjecture is an unsubstantiated anecdotal opinion.
An anecdote is a sample of one from an unknown population
And that makes those conjectures doubly useless: not only can they not be tested, they are likely applicable only to those making the conjectures.
If we are ever to discover new and innovative ways to increase the probability of success for our project work, we need to move far away from conjecture, anecdote, and untestable ideas and toward evidence-based assessment of the problem, the proposed solutions, and the evidence that the proposed correction will in fact result in improvement.
One Final Note
As a first-year grad student in physics I learned a critical concept that is missing from much of the conversation around process improvement. When an idea is put forward in the science and engineering world, the very first thing to do is a literature search.
Without some way to assess the credibility of an idea - through replication, or assessment against a baseline (governance framework, accounting rules, regulations) - the idea is just an opinion. And as Daniel Patrick Moynihan said:
Everyone is entitled to his own opinion, but not his own facts.
and of course my favorite
Again and again and again - what are the facts? Shun wishful thinking, ignore divine revelation, forget what "the stars foretell," avoid opinion, care not what the neighbors think, never mind the unguessable "verdict of history" - what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts! - Robert Heinlein (1978)
There's a common notion in some agile circles that projects aren't the right vehicle for developing products. This is usually expressed by Agile Coaches. As a business manager applying Agile to develop products as well as to deliver Operational Services based on those products, projects are how we account for the expenditures of those outcomes, manage the resources, and coordinate the resources needed to produce products as planned.
In our software product business, we use both a Product Manager and a Project Manager. These roles are separate and at the same time overlapping.
Product Managers focus on Markets. What features are the market segments demanding? What features Must Ship and what features can we drop? What is the Sales impact of any slipped dates?
Project Managers are inward focused to the resource allocation and management of the development teams. How can we get the work done to meet the market demand? When can we ship the product to maintain the sales forecast?
In very small companies and startups these roles are usually performed by the same person.
Once we move beyond the sole proprietor and his friends, separation of concerns takes over. These roles become distinct.
Products are about What and Why. Projects are about Who, How, When, and Where (from Rudyard Kipling's Six Trusted Friends).
Product Management focuses on the overall product vision - usually documented in a Product Roadmap, showing the release cycles of capabilities and features as a function of time. Project Management is about logistics, schedule, planning, staffing, and work management to produce products in accordance with the Road Map.
When agile says it's customer focused, this is true only when there is One customer for the Product, rather than a Market for the Product, and that customer is on site. That would not be a very robust product company if it had only one customer.
When we hear Products are not Projects, ask: in what domain, business size, and value at risk is it possible not to separate these concerns between Products and Projects?
Risk Management is How Adults Manage Projects - Tim Lister
Let's start with some background on Risk Management
Tim's quote sets the paradigm for managing the impediments to success in all our endeavors
It says volumes about project management and project failure. It also means that managing risk is managing in the presence of uncertainty. And managing in the presence of uncertainty means making estimates about the impacts of our decisions on future outcomes. So you can invert the statement when you hear we can make decisions in the absence of estimates.
Tim's update is titled Risk Management is Project Management for Grownups.
For those interested in managing projects in the presence of uncertainty and the risk that uncertainty creates, here's a collection from the office library, in no particular order:
Here's a summary from a recent meeting around decision making in the presence of risk:
Earning value from risk (v4 full charts) from Glen Alleman
The popular notion that Cynefin can be applied in the software development domain as a way of discussing the problems involved in writing software for money misses the profession of Systems Engineering. From Wikipedia, Cynefin is...
The framework provides a typology of contexts that guides what sort of explanations or solutions might apply. It draws on research into complex adaptive systems theory, cognitive science, anthropology, and narrative patterns, as well as evolutionary psychology, to describe problems, situations, and systems.
While Cynefin uses the terms Complexity and Complex Adaptive System, it applies them from the observational point of view. That is, the system exists outside of our influence to control its behavior - we are observers of the systems, not engineers of the solutions in the form of a system that provides needed capabilities to solve a problem.
Read carefully the original paper on Cynefin, The New Dynamics of Strategy: Sense Making in a Complex and Complicated World. This post is NOT about those types of systems, but about the conjecture that the development of software is by its nature Chaotic. This argument is used by many in the agile world to avoid the engineering disciplines of INCOSE-style Systems Engineering.
There are certainly engineered systems that transform into complex adaptive systems with emergent behaviors that cause the system to fail. Example below. This is not likely to be the case when engineering principles are applied in the domains of Complex and Complicated.
A good starting point for the complex, complicated, and chaotic view of engineered systems is Complexity and Chaos - State of the Art: List of Works, Experts, Organizations, Projects, Journals, Conferences, and Tools. There is a reference to Cynefin as organization modeling. While organizational modeling is important - I suspect Cynefin advocates would suggest it is the only important item - the engineered aspects of applying Systems Engineering to complex, complicated, and emergent systems are mandatory for any organization to get the product out the door on time, on budget, and on specification.
For another view of the complex systems problem, Principles of Complex Systems for Systems Engineering is a good place to start, along with the resources from INCOSE and AIAA like Complexity Primer for Systems Engineers, Engineering Complex Systems, Complex System Classification, and many others.
So Let's Look At the Agile Point of View
In the agile community it is popular to use the terms complex, complexity, complicated, and complex adaptive system - many times interchangeably and many times wrongly - to assert we can't possibly plan ahead, know what we're going to need, and establish a cost and schedule, because the system is complex and emergent.
These terms are many times overloaded with an agenda used to push a process or even a method. As well, in the agile community it is popular to claim we have no control over the system, so we must adapt to its emerging behavior. This is likely the case in one condition - the chaotic behaviors of Complex Adaptive Systems. But this is only the case when we fail to establish the basis for how the CAS was formed, what subsystems are driving those behaviors, and most importantly what are the dynamics of the interfaces between those subsystems - the System of Systems architecture - that create the chaotic behaviors.
It is highly unlikely those working in the agile community actually work on complex systems that evolve AND at the same time are engineered at the lower levels to meet specific capabilities and the resulting requirements of the system owner. They've simply let the work and the resulting outcomes emerge and become Complex, Complicated, and create Complexity. They are observers of the outcomes, not engineers of the outcomes.
Here's one example of an engineered system that actually did become a CAS because of poor efforts by the Systems Engineers. I worked on the Class I and II sensor platforms. Eventually FCS was canceled, for all the right reasons. But for small teams of agile developers, outcomes become complex when the Systems Engineering processes are missing. The Cynefin partitions beyond Obvious emerge, for the most part, when Systems Engineering is missing.
First some definitions
One more item we need is the types of Complexity
And Now To The Point
When we hear complex, complexity, complex systems, or complex adaptive system, pause to ask: what kind of complex are you talking about? What Type of complex system? In what system are you applying the term complex? Have you classified that system in a way that actually matches a real system? Don't accept anyone saying well, the system is emerging and becoming too complex for us to manage, unless in fact that is the case after all the Systems Engineering activities have been exhausted. It's a cheap excuse for simply not doing the hard work of engineering the outcomes.
It is common to use the terms complex, complicated, and complexity interchangeably. And software development is classified - or mis-classified - as one, two, or all three. It is also common to toss around these terms with no actual understanding of their meaning or application.
We need to move beyond buzzwords. Words like Systems Thinking. Building software is part of a system. There are interacting parts that, when assembled, produce an outcome - hopefully a desired outcome. In the case of software the interacting parts are more than just the parts. Software has emergent properties: a Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point.
The understanding that software development is an uncertain (stochastic) process is well known, starting in the 1980s with COCOMO. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. The current predictive models based on stochastic processes include Monte Carlo Simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.
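A minimal sketch of the first of those models - Monte Carlo simulation of a network of activities - assuming triangular durations and a small two-path network invented for illustration:

```python
# Sketch: Monte Carlo of a small activity network. Tasks A and B run in
# parallel; C starts when both finish (merge bias). All (low, mode, high)
# durations in days are invented, assumed triangular.
import random

random.seed(1)

def dur(lo, mo, hi):
    return random.triangular(lo, hi, mo)

finishes = []
for _ in range(20_000):
    a = dur(5, 8, 20)            # task A
    b = dur(5, 8, 20)            # task B, parallel with A
    c = dur(4, 6, 12)            # task C, after max(A, B)
    finishes.append(max(a, b) + c)

finishes.sort()
deterministic = 8 + 6            # plan built from most-likely durations only
p_det = sum(f <= deterministic for f in finishes) / len(finishes)
p80 = finishes[int(0.8 * (len(finishes) - 1))]

print(f"most-likely plan {deterministic}d has P = {p_det:.0%}; P80 = {p80:.1f}d")
```

The merge at C is the point: even when each task individually is most likely to take its most-likely duration, the plan built from those numbers has a low probability of being met, because the later of the two parallel paths dominates.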
Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
There are enough opinions to paper the side of a battleship. With all these opinions, nobody has a straightforward answer that is applicable to all projects. There are two fundamental understandings though: (1) everyone has a theory, and (2) there is no singular cause that is universally applicable.
In fact most of the suggestions on project failures have little in common. With that said, I'd suggest there is a better way to view the project failure problem.
What are the core principles, processes, and practices for project success?
I will suggest there are three common denominators consistently mentioned in the literature that are key to a project's success:
Of the 155 defense project failures studied in "The core problem of project failure," T. Perkins, The Journal of Defense Software Engineering, Vol 3. 11, pp 17, June 2006.
From this research these numbers can be summarized into two larger classes
So where do we start?
Let's start with some principles. But first a recap
Five Immutable Principles of Project Success
With these Principles, here are five Practices that can put them to work:
The integration of these five Practices is the foundation of Performance-Based Project Management®. Each Practice stands alone and at the same time is coupled with the other Practice areas. Each Practice contains specific steps for producing beneficial outcomes for the project, while establishing the basis for overall project success.
Each Practice can be developed to the level needed for specific projects. All five Practices are critical to the success of any project. If a Practice area is missing or poorly developed, the capability to manage the project will be jeopardized, possibly in ways not known until the project is too far along to be recovered.
Each Practice provides information needed to make decisions about the flow of the project. This actionable information is the feedback mechanism needed to keep a project under control. These control processes are not impediments to progress, but are the tools needed to increase the probability of success.
Why All This Formality? Why Not Just Start Coding and Let the Customer Tell Us When To Stop?
All business works on managing the flow of cost in exchange for value. All business has a fiduciary responsibility to spend wisely. Visibility into the obligated spend is part of Managerial Finance. Opportunity Cost is the basis of the Microeconomics of decision making.
The 5 Principles and 5 Practices are the basis of good business management of the scarce resources of all businesses.
This is how adults manage projects.
When confronted with making decisions on software projects in the presence of uncertainty, we can turn to an established and well-tested set of principles found in Software Engineering Economics.
First, a definition from the Guide to the Systems Engineering Body of Knowledge (SEBoK):
Software Engineering Economics is concerned with making decisions within the business context to align technical decisions with the business goals of an organization. Topics covered include fundamentals of software engineering economics (proposals, cash flow, the time-value of money, planning horizons, inflation, depreciation, replacement and retirement decisions); not-for-profit decision-making (cost-benefit analysis, optimization analysis); estimation, economic risk and uncertainty (estimation techniques, decisions under risk and uncertainty); and multiple attribute decision making (value and measurement scales, compensatory and non-compensatory techniques).
Engineering Economics is one of the Knowledge Areas for educational requirements in Software Engineering defined by INCOSE, along with Computing Foundations, Mathematical Foundations, and Engineering Foundations.
A critical success factor for all software development: modeling the system under development as a holistic, value-providing entity has been gaining recognition as a central process of systems engineering. The use of modeling and simulation during the early stages of the system design of complex systems and architectures can:
The process above can be performed in any lifecycle duration, from the formal top-down INCOSE VEE to Agile software development. The process rhythm is independent of the principles.
This is a critical communication factor - the separation of Principles, Practices, and Processes establishes the basis for comparing them across a broad spectrum of domains, governance models, methods, and experiences. Without a shared set of Principles, it's hard to have a conversation.
Developing products or services with other people's money means we need a paradigm to guide our activities. Since we are spending other people's money, the economics of that process is guided by Engineering Economics.
Engineering economic analysis concerns techniques and methods that estimate output and evaluate the worth of products and services relative to their costs. (We can't determine the value of our efforts without knowing the cost to produce that value.) Engineering economic analysis is used to evaluate system affordability. Fundamental to this knowledge area are value and utility, classification of cost, time value of money and depreciation. These are used to perform cash flow analysis, financial decision making, replacement analysis, break-even and minimum cost analysis, accounting and cost accounting. Additionally, this area involves decision making involving risk and uncertainty and estimating economic elements. [SEBoK, 2015]
The Microeconomic aspects of the decision-making process are guided by the principles of making decisions regarding the allocation of limited resources. In software development we always have limited resources - time, money, staff, facilities, performance limitations of software and hardware.
If we are going to increase the probability of success for software development projects we need to understand how to manage in the presence of the uncertainty surrounding time, money, staff, facilities, performance of products and services and all the other probabilistic attributes of our work.
To make decisions in the presence of these uncertainties, we need to make estimates about the impacts of those decisions. This is an unavoidable consequence of how the decision making process works.
The opportunity cost of any decision between two or more choices means there is a cost for NOT choosing one or more of the available choices. This is the basis of the microeconomics of decision making. What's the cost of NOT selecting an alternative?
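A minimal sketch of that microeconomics, with invented cash figures: the expected value of each alternative is computed from estimates, and the opportunity cost of a choice is the value of the best alternative forgone:

```python
# Sketch: opportunity cost of a choice between two alternatives, using
# estimated probabilities and payoffs. All cash figures are invented;
# only the arithmetic is the point.
alternatives = {
    "build": {"p_success": 0.70, "payoff": 500_000, "cost": 200_000},
    "buy":   {"p_success": 0.95, "payoff": 300_000, "cost": 150_000},
}

def expected_value(alt):
    return alt["p_success"] * alt["payoff"] - alt["cost"]

evs = {name: expected_value(a) for name, a in alternatives.items()}
best = max(evs, key=evs.get)

# Opportunity cost of a choice = value of the best alternative forgone
opportunity_cost = {name: max(v for n, v in evs.items() if n != name)
                    for name in evs}

print(best, evs, opportunity_cost)
```

Notice that none of this arithmetic is possible without the estimated probabilities and payoffs - which is the point of the paragraph above: no estimates, no opportunity cost, no microeconomics of the decision.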
So when it is conjectured we can make a decision in the presence of uncertainty without estimating the impact of that decision, it's simply NOT true.
That notion violates the principles of Microeconomics.
Constructing a credible Integrated Master Schedule (IMS) requires sufficient schedule margin be placed at specific locations to protect key deliverables. One approach to determining this margin is the use of a Monte Carlo simulation tool.
This probabilistic margin analysis starts with the construction of a "best estimate" Integrated Master Schedule with the work activities arranged in a "best path" network.
While there may be "slack" in some of the activities, the Critical Path exists through this network for each Key Deliverable. This network of activities must show how each deliverable will arrive on or before the contractual need date. This "best path" network is the Deterministic Schedule - the schedule with fixed activity durations.
By assigning a duration variance to each class of work activity, the Monte Carlo model shows at what confidence level the probabilistic delivery date occurs on or before the deterministic date. The needed schedule margin for each deliverable can then be derived by the Monte Carlo simulation. This activity network is referred to as the Probabilistic Schedule - the schedule with activity durations as random variables.
With the schedule margin inserted in front of each deliverable, the Deterministic schedule becomes the basis of the Probabilistic schedule. Next is a cycle of adjusting the Deterministic schedule to assure the needed margin, producing the final Deterministic schedule to be placed on baseline. As the program proceeds, this schedule margin is managed through a "margin burn down" process. Assessing the sufficiency of this margin for the remaining work is then part of the monthly program performance report.
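The margin derivation described above can be sketched as: schedule margin = probabilistic (P80) finish minus deterministic finish. The three serial activities and their duration spreads below are assumptions for illustration:

```python
# Sketch of the margin derivation: schedule margin in front of a
# deliverable = probabilistic (P80) finish minus deterministic finish.
# The three serial activities and their (low, mode, high) day spreads
# are invented for illustration.
import random

random.seed(11)

activities = [(10, 12, 25), (8, 10, 18), (5, 6, 15)]

# Deterministic schedule: fixed, most-likely durations
deterministic_finish = sum(mode for _, mode, _ in activities)

# Probabilistic schedule: durations as random variables
sims = sorted(
    sum(random.triangular(lo, hi, mo) for lo, mo, hi in activities)
    for _ in range(50_000)
)
p80_finish = sims[int(0.8 * (len(sims) - 1))]
schedule_margin = p80_finish - deterministic_finish

print(f"deterministic {deterministic_finish}d, P80 {p80_finish:.1f}d, "
      f"margin {schedule_margin:.1f}d")
```

A production tool works on the full IMS network rather than a three-task chain, but the principle is the same: the margin placed in front of the deliverable is the gap between the deterministic date and the confidence level you intend to commit to.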
Here's an example from an upcoming workshop on building and executing a credible Performance Measurement Baseline, based on the Wright Brothers' work.
For this to work we need several things:
Here's how to use a Monte Carlo tool to determine the likelihood of completing on or before a given date, given a schedule of the work with Most Likely durations and the variances in those durations:
Risk management using risk+ (v5) from Glen Alleman
When I hear a post like:
Two things come to mind:
All projects have uncertainty.
And there are two kinds of uncertainty on all projects. Reducible and Irreducible.
Reducible uncertainty (on the right) is described by the probability of some outcome. There is an 82% probability that we'll be complete on or before the second week in November, 2016. Irreducible uncertainty (on the left) is described by the Probability Distribution Function (PDF) for the underlying statistical processes.
In both cases estimating is required. There is no deterministic way to produce an assessment of an outcome in the presence of uncertainty without making estimates. This is simple math. In the presence of uncertainty, the project variables are random variables, not deterministic variables. If there is no uncertainty, there is no need to estimate - just measure.
When we hear that #NoEstimates is about empirical data used to forecast the future, let's look deeper into the term and the processes of empiricism.
First, an empiricist rejects the logical necessity for scientific principles and bases processes on observations.
While managing other people's money in the production of value in exchange for that money, there are principles by which that activity is guided. For the empiricist, principles are not immediately evident. But principles are called principles because they are indemonstrable: they cannot be deduced from other premises nor proved by any formal procedure. They are accepted because they have been observed to be true in many instances and to be false in none.
Second, with empirical data comes two critical assumptions that must be tested before that data has any value in decision making.
Understanding this basis of empiricism is critical to understanding the notion of making predictions in the presence of uncertainty about the future.
Next let's address the issue of what is an estimate. It seems obvious to all working in the engineering, science, and financial domains that an estimate is a numeric value, or range of values, for some measure that may occur at some time in the future. Making up definitions for estimate, or selecting definitions outside of engineering, science, and finance, is disingenuous. There is no need to redefine anything.
Estimation consists of finding appropriate values (the estimate) for the parameters of the system of concern in such a way that some criterion is optimized. 
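As a hedged illustration of that definition, here is a sketch that finds the parameters of a simple effort model, Effort = a * Size^b, so that squared error against historical data is minimized (the criterion being optimized). The model form and the data are invented assumptions, not a recommendation.

```python
import math

# Hypothetical history of (size in KSLOC, effort in staff-days).
history = [(10, 28), (25, 80), (40, 140), (60, 230)]

# Linearize: log(E) = log(a) + b*log(S), then ordinary least squares.
xs = [math.log(s) for s, _ in history]
ys = [math.log(e) for _, e in history]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

print(f"Fitted model: Effort = {a:.2f} * Size^{b:.2f}")
print(f"Estimate for 50 KSLOC: {a * 50 ** b:.0f} staff-days")
```

The fitted parameters are the estimate; the criterion (least squares here) is what makes the procedure an estimation procedure rather than a guess.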
The estimate has several elements:
Now when those wanting to redefine what an estimate is to support their quest to have No Estimates, for example by redefining forecasting as Not Estimating, it becomes clear they are not using any terms found in engineering, science, mathematics, or finance. When they suggest there are many definitions of an estimate and don't provide any definition, with the appropriate references to that definition, it's the same approach as saying we're exploring for better ways to .... It's a simplistic approach to a well-established discipline of decision making, fundamentally disingenuous, and it should not be tolerated.
The purpose of a cost estimate is determined by its intended use, and its intended use determines its scope and detail.
Cost estimates have two general purposes:
Specific applications for estimates include providing data for trade studies, independent reviews, and baseline changes. Regardless of why the cost estimate is being developed, it is important that the project‚Äôs purpose link to the missions, goals, and strategic objectives and connect the statistical and probabilistic aspects of the project to the assessment of progress to plan and the production of value in exchange for the cost to produce that value.
The Need to Estimate
The picture below, with apologies to Scott Adams, is typical of the No Estimates advocates, who contend estimates are evil and need to be stopped, that estimates can't be done, and that not estimating results in a ten-fold increase in project productivity or some equally vague unit of measure.
Dictionary of Scientific Biography, ed. Charles Coulston Gillespie, Scribner, 1970, Volume 2, pp. 604-5
Forecasting: Methods and Applications, Third Edition, Spyros Makridakis, Steven C. Wheelwright, and Rob J. Hyndman
Some More Background
The development of software in the presence of uncertainty is a well-developed discipline, a well-developed academic topic, and a well-developed practice with numerous tools, databases, and models in many different software domains.
Economics is the study of how resources (people, time, facilities) are used to produce and distribute commodities and how services are provided in society. Engineering economics is a branch of microeconomics dealing with engineering-related economic decisions. Software Engineering Foundations: A Software Science Perspective, Yingxu Wang, Auerbach Publications.
Software engineering economics is a topic that addresses the elements of software project cost estimation and analysis and project benefit-cost ratio analysis. These costs, and the benefits from expending them, produce tangible and many times intangible value. The time-phased aspect of developing software for money means we need to understand the scheduling aspects of producing this value.
All three variables in the paradigm of software development for money - time, cost, and value - are random variables. This randomness comes from the underlying uncertainties in the processes found in the development of the software. These uncertainties are always there; they never go away; they are immutable.
Economic Foundations of Software Engineering
There are fundamental principles and methodologies utilized in engineering economics, and their applications in software engineering form the basis of decision making in the presence of uncertainty. These formal economic models include the cost of production and market models based on fundamental principles of microeconomics. The dynamic values of money and assets, and patterns of cash flows, can be modeled in support of management's need to make decisions in the presence of the constant uncertainties associated with software development.
Economic analysis methodologies for engineering decisions, including project costs, benefit-cost ratio, payback period, and rate of return, can be rigorously described. This is the basis of any formal treatment of economic theories and principles. Software engineering economics is based on elements of software costs, software engineering project cost estimation, economic analyses of software engineering projects, and the software maintenance cost model.
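Two of those quantities, benefit-cost ratio and payback period, can be sketched in a few lines. The cash flows and the 8% discount rate below are invented for illustration.

```python
# Year-0 investment followed by annual net benefits (invented figures).
investment = 100_000
annual_benefits = [30_000, 35_000, 40_000, 40_000, 40_000]
discount_rate = 0.08

def discounted(amount, year, rate):
    """Present value of a cash amount received `year` years from now."""
    return amount / (1 + rate) ** year

pv_benefits = sum(discounted(b, y + 1, discount_rate)
                  for y, b in enumerate(annual_benefits))
bcr = pv_benefits / investment  # benefit-cost ratio

# Simple (undiscounted) payback period: first year in which cumulative
# benefits cover the investment.
cum, payback = 0, None
for year, b in enumerate(annual_benefits, start=1):
    cum += b
    if cum >= investment:
        payback = year
        break

print(f"Benefit-cost ratio: {bcr:.2f}")
print(f"Payback period: {payback} years")
```

Both numbers rest on estimated future benefits, which is the connection back to the argument of this post.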
Economics is classified into microeconomics and macroeconomics. Microeconomics is the study of behaviors of individual agents and markets. Macroeconomics is the study of the broad aspects of the economy, for example employment, exports, and prices on a national or global scope.
A universal quantitative measure of commodities and services in¬†economics is money.
Engineering economics is a branch of microeconomics. There are some basic axioms of microeconomics and engineering economics.
Supply is the quantity of a product or service that producers are willing and able to sell at a given range of prices. This also extends to the labor and materials needed to produce the products and services to meet the demand.
Demands and supplies are the fundamental behaviors of dynamic market systems, which form the context of economics. Not enough Java programmers in the area? The cost for Java programmers goes up. Demand for rapid production of products? The cost of skilled labor, special tools, and processes goes up. COBOL programmers from 1998 to 2001 could ask nearly any price for their services. FORTRAN 77 programmers here in Denver could get exorbitant rates to help maintain the Ballistic Missile Defense System when a local defense contractor was awarded the maintenance and support contract for Cobra Dane.
Making Decisions in the Presence of Uncertainty
Making decisions is about opportunity costs.
Opportunity costs are those costs resulting from the loss of potential gain from the alternatives other than the one chosen by the decision maker.
Every time we make a decision involving multiple choices we are making an opportunity-cost based decision. Since most of the time these costs are in the future and are uncertain, we need to estimate those opportunity costs as well as the probability that our choice is the right one to produce the desired beneficial outcomes.
Here's an example from a tool we use, Oracle's Crystal Ball. There are similar plug-ins for Excel (RiskAmp is affordable for the individual).
Another useful tool in the IT decision-making world is Real Options. Here's a simple introduction to Real Options and decision making.
Berk Chapter 22: Real Options, from Herb Meiberger. In the end there is an immutable principle: in the presence of uncertainty, making decisions about actions today that impact outcomes in the future requires some mechanism for determining those outcomes in the absence of perfect information. This absence of information creates risk. Decision making in the presence of uncertainty and the resulting risk means these decisions typically have one or more of the following characteristics:
Reducible and Irreducible Uncertainty
All project work is probabilistic, driven by underlying statistical processes that create uncertainty. There are two types of uncertainty on all projects: reducible (epistemic) and irreducible (aleatory).
Aleatory uncertainty arises from the random variability related to natural processes on the project - the statistical processes: work durations, productivity, variance in quality. Epistemic uncertainty arises from the incomplete or imprecise nature of available information - the probabilistic assessment of when an event may occur.
There is pervasive confusion between these two types of uncertainty when discussing their impacts on project outcomes, including the estimates of cost, schedule, and technical performance.
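One way to see the distinction is a small simulation (all numbers invented): the triangular spread models the aleatory variability in a task's duration, while the chance that a rework event occurs at all stands in for an epistemic uncertainty that more information could reduce.

```python
import random

random.seed(42)

def one_trial():
    # Aleatory: irreducible natural variability in duration (days),
    # modeled as a distribution that never collapses to a point.
    duration = random.triangular(8, 15, 10)
    # Epistemic: incomplete knowledge -- here, a 25% assessed chance
    # that a rework event occurs, adding 4 days if it does.
    if random.random() < 0.25:
        duration += 4
    return duration

trials = [one_trial() for _ in range(10_000)]
mean = sum(trials) / len(trials)
print(f"Mean duration with both uncertainties: {mean:.1f} days")
```

Buying down the epistemic risk (e.g., a prototype that settles whether rework is needed) changes the 0.25; no amount of information removes the triangular spread.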
All The World's a NonLinear, Non-Stationary Stochastic Process, Described by 2nd Order Non-Linear Differential Equations.
In the presence of these conditions - and software development exhibits them - we need to understand several things for success. What are the coupled dynamics? What are the probabilistic and statistical processes that drive these dynamics? And how can we make decisions in their presence?
Predictive Analytics of Project Behaviors
In the presence of uncertainty, the need to predict future outcomes is critically important. One of the professional societies I belong to has a presentation on this topic. Here's a small sample of a mature process for estimating future outcomes given past performance. If you back up the URL to http://www.iceaaonline.com/ready/wp-content/uploads/2015/06/ you'll see all the briefings on the topic of cost, schedule, and performance management used in the domains I work in.
Risk Informed Decision Making Handbook, NASA/SP-2010-576 Version 1.0, April 2010.
"Risk-informed decision-making in the presence of epistemic uncertainty," Didier Dubois, Dominique Guyonnet, International Journal of General Systems, Taylor & Francis, 2011, 40 (2), pp. 145-167.
In the for-profit world, revenue from the sale of the product or service minus the cost to produce that revenue is profit (in a general form). In the non-profit world, earnings are needed to fulfill the mission of the firm, so profit per se is not the goal of those providing the product or service in exchange for the cost to do that. I work in both for-profit and non-profit domains. In both domains, the cost to produce the value needs to be covered by income from some source.
In both domains, the writing of software used by the customer is our primary cost. Those customers pay us for the software; we pay the employees who produce the software. Those customers have an expectation that the software will meet their needs in terms of capabilities, performance, effectiveness, and many other ...ilities in support of their business or mission.
These expectations come in several forms.
These types of questions are the norm for all businesses that convert money into products or services. Whether we're bending metal into money or typing on keyboards to produce money, the core principles of converting that money into more money are the same.
These business processes require making decisions in the presence of uncertainty.
There is a discipline for this process: Operations Research. This is how UPS defines the routes of its trucks every day, how the local dairy plans the production run for milk and gets it delivered as planned, how airlines plan and execute today's routing with the right crews, fuel, and working hardware, how roads are built, how high rises go up, how Target gets the goods to the store, and (wait for it) how software and hardware are built and delivered on a planned schedule for a planned cost to meet the planned needed capabilities of those paying for those products, when all the processes to do this have probabilistic behaviors.
Those conjecturing that these decisions can be made without estimates need to provide a testable example that does not violate the principles of the microeconomics of decision making and the managerial finance governance processes of their business.
How would the opportunity cost decision, the Net Present Value decision (a calculation that compares the amount invested today to the present value of the future cash receipts from the investment), or the Economic Value Added decision (an estimate of a firm's economic profit, being the value created in excess of the required return of the company's investors) be made without an estimate of the future outcome of that decision?
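The Net Present Value calculation named in that question can be sketched in a few lines. The cash flows below are invented estimates, which is exactly the point: the calculation cannot be performed without them.

```python
# NPV: compare today's (negative) investment with the present value of
# estimated future cash receipts. All figures are invented.
def npv(rate, cash_flows):
    """cash_flows[0] is the year-0 investment; later entries are receipts."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-250_000, 90_000, 90_000, 90_000, 90_000]  # estimates, not facts
value = npv(0.10, flows)
print(f"NPV at 10%: {value:,.0f}")  # positive => invest, on these estimates
```

Change any of the estimated receipts and the decision can flip; there is no way to remove the estimates and keep the decision.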
Without these, those making those conjectures, and even selling seminars on how to make decisions without estimates, have no way to be tested in an actual business environment. Tested by those actually paying for the work, not conjectured by those spending the money of those paying for the work.
On the way home last week from a program managers conference, I was listening to Bob Dylan's Idiot Wind:
Everything's a little upside down, matter of fact the wheels have stopped. What's good is bad, what's bad is good. Idiot Wind, Bob Dylan, Blood on the Tracks, Copyright 1978
Reminds me of the current discourse on #NoEstimates.
The more those in the #NoEstimates community try to convince others that estimating is bad, can't be done, and results in a smell of dysfunction, the more Bob Dylan resonates.
We're idiots, babe
It's a wonder we can even feed ourselves
We in the management-of-other-people's-money domain must be, since we must have missed the suspension of the microeconomics of software development when making decisions. We must have missed the suspension of managerial finance applied when we're asked to be stewards of the money our customers have given us to provide value for the needed cost on the needed date. We must have missed the suspension of the need to know when and how much, so our time-phased Return on Investment doesn't get a divide-by-zero error.
You think that because you understand one that you must therefore understand two because one and one make two. But you forget that you must also understand and - Sufi teaching story
The elements of a system, the software system being built, are the easiest parts to recognize. They're visible and tangible because they're in the backlog and scheduled for development. If we look closer - and accept the fact that these elements have interactions with each other - there is more to the solution than a pile of stories being implemented as slices of larger elements of needed capabilities.
The intangibles of the system - the interactions between these elements (slices), the realtime behaviors that produce or consume data, the emergent behaviors resulting from the evolution of the system state during the system's execution - are also critical to success.
If we only consider the sliced elements of the system, there is no end to the process. How small is too small? What is the appropriate size of the slice? Not from an effort point of view, but from a systems point of view? But more importantly, what are the interactions between the sliced elements? This is dependent on the slices and their interfaces. It is dependent on the interconnections, the relationships that hold the sliced elements together.
Without considering these interconnections and the dependencies at the slice points - this is a cut-set optimization problem - simply saying slicing provides benefits to estimating and execution has no actual basis in practice.
Here's how to tell the difference between an actual systems view and just a pile of sliced work:
Many of the interconnections in the system operate through the flow of information. This information holds the system together and enables the system to operate as needed.
Slicing is only useful if it answers the questions above and, most importantly, if those sliced parts fit in the overall structure of the system - the system architecture, both static and dynamic - to statically and dynamically provide the customer with the needed capabilities at the needed time, for the needed cost, and deliver the needed performance and effectiveness from those capabilities.
The least obvious part of any system - its function or purpose - is often the most crucial determinant of the system's behavior and its resulting success - Thinking in Systems: A Primer, Donella H. Meadows.
Take care so as to not fall under the siren song of simple approaches.
I have yet to see any problem, however complicated, which, when looked at in the right way, did not become more complicated - Poul Anderson, quoted in Arthur Koestler, The Ghost in the Machine.
Take care when slicing to make sure you have an understanding of the system, the interaction of its elements, and the outcomes of those interactions, so that you don't break the topological structure needed to assure the proper flow of value to those paying for your work.
It is popular to use several references to the estimating problem that are three to four decades old.
Much has happened in the last three to four decades to increase the accuracy and precision of software development estimates.
So when we hear there is a problem with estimating and the basis of that claim is 30-to-40-year-old reports, we need to be skeptical at best. When those claims are used to sell a book, a workshop, or an entire idea, then some serious questions need to be asked.
Is there any understanding at all of current software estimating techniques as applied, with tools and databases, to modern systems, not 40-year-old FORTRAN systems?
While there are huge issues with estimating any complex emergent system, identification of the root cause of the problem has not been done by those conjecturing that Not Estimating is the solution. This Root Cause Analysis has been done for modern complex systems, and the problem has been found to come from one of three sources.
The principles of cost and schedule estimating, and the assessment of the related technical and programmatic gaps, are the same in all domains at every scale, from small projects to billion-dollar programs. Why? Because it's the same problem no matter the scale.
But here's the way out of the trap for at least (1) and (2)¬†
So take care when you hear about problems in the past, the long ago, possibly longer ago than those conjecturing the problem and the solution were born.
In agile there is a mnemonic, INVEST. This term is one of those Holy Grails that is never subject to assessment within the agile community. I had a hands-on experience with an agile tools vendor when we were selecting tools for a DOD program. When speaking with the gurus at the tool vendor, we mentioned multiple resources assigned to a single task and the interdependence of tasks and their deliverables.
You'd have thought the devil himself had walked into the room. In systems there are always interdependencies, and the work requires multiple skills working together on those interdependencies.
A reminder of INVEST
The irony here is that those suggesting pure agile doesn't require estimating seem to have missed INVEST.
But here's the issue...
In our domain, we work on systems. Others may work on a bunch of stuff. Here's how to tell the difference.
If the I in INVEST is in fact true, then you're likely working on a bunch of stuff, not a system. A bunch of stuff is likely de minimis in ways systems are not.
You think that because you understand "one" that you must therefore understand "two," because one and one make two. But you forget that you must also understand "and" - Sufi teaching story
The notion of decomposing the work - slicing - into small chunks needs to be tested against the system's requirements to also develop and manage the interconnections between these sliced chunks of work. The interconnections in a tree system are the physical flows and chemical reactions that govern the tree's metabolic processes. Similar interconnections occur in software systems.
Slicing work below the level of these interconnections of the system elements loses sight of the interdependencies and therefore loses sight of the system.
Literally you can't see the forest for the trees.
It is the management of these interdependencies that is the Critical Success Factor for increasing the probability of success for the project. Be very careful of falling for the holy grail of slicing without also maintaining visibility into the system, its operations, and the interdependencies between all the elements and all the work needed to produce those elements.
I am rarely the person directly in charge of the business itself (CEO, CIO, CTO). A department, yes (PMO, Director); the whole business, no. I work for CEOs, CIOs, Program Managers, and Policy Directors. What I have learned from all these leaders is both simple and complicated.
They have a hard-headed view of how business works. Revenue comes in. The cost to produce that revenue is known ahead of time. Surprises in this cost are not welcome. Everyone talks to each other in numbers. Accounting speaks in single point values. Business people and finance people speak in probability and statistics.
All the world's a random process: evolving, impacted by externalities outside the control of the process, with non-linear interactions among the components of the system.
Decision making in the presence of these conditions requires several attributes for success:
Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations messes .... Managers do not solve problems, they manage messes - Russell Ackoff, "The Future of Operational Research is Past," Journal of the Operational Research Society, 30, No. 2 (February 1979), pp. 93-104
It is popular in some parts of the agile community to use waterfall as the bogeyman for all things wrong with the management of software projects. As one who works in the Software Intensive Systems domain on System of Systems programs, I can say waterfall is an approach that was removed from our guidance a decade and a half ago. But first, some definitions from the people who actually invented waterfall, not those critical of the process who are unlikely to have accountability for showing up on time, on budget, on spec.
The first criticism of waterfall came from Dr. Winston Royce, "Managing the Development of Large Software Systems," Proceedings, IEEE WESCON, August 1970, pages 1-9, originally published by TRW. Notice the design iterations in the paper.
Royce‚Äôs view of this model has been widely misinterpreted: he recommended that the model be applied after a significant prototyping phase that was used to first better understand the core technologies to be applied as well as the actual requirements that customers needed!
TRW (where Royce worked) was an early adopter of Iterative and Incremental Development (IID), whose spiral form was originated by Dr. Barry Boehm in the mid 1980s. The first work on IID programs was taking place in the mid 1970s. A large and successful program using IID at IBM Federal Systems Division was the US Navy helicopter-ship system LAMPS, a 4-year, 200 person-year effort involving millions of lines of code. This program was incrementally delivered in 45 time-boxed iterations (one month per iteration).
The project was successful: "Every one of those deliveries was on time and under budget" ("Principles of Software Engineering," Harlan Mills, IBM Systems Journal, Vol 19, No 4, 1980), where he says ...
The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what was being learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and use of the system, where possible. Key steps in the process were to start with a simple implementation of a subset of the software requirements and iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made along with adding new functional capabilities.
Many in the agile community use these words, perhaps without ever having read the 1980 description of how complex software intensive systems were developed at IBM FSD and TRW.
In 1976 Tom Gilb stated:
"Evolution" is a technique for producing the appearance of stability. A complex system will be most successful if it is implemented in small steps and if each step has a clear measure of successful achievement as well as a "retreat" possibility to a previous successful step upon failure. You have the opportunity of receiving some feedback from the real world before throwing in all resources intended for a system, and you can correct possible design errors.
The Incremental Commitment Spiral Model is applied to Software Intensive Systems in the DOD. Agile development entered the domain in 2014 with the connections between Earned Value Management and agile development.
Without an understanding of the history of development life cycles, many - most recently the #NoEstimates community - use waterfall as the stalking horse for all things wrong with software development other than their approaches.
So when you hear the red herring fallacy that waterfall is the evil empire, ask if the person making that claim has done his homework, worked on any Software Intensive System of Systems, or has experience being accountable for the delivery of mission critical, can't fail systems. Probably not. Just personal anecdotes yet again.
There is no good way to perform a software cost-benefit analysis, breakeven analysis, or make-or-buy analysis without some reasonably accurate method of estimating software costs and their sensitivity to various product, project, and environmental factors. - Dr. Barry Boehm
The previous post on Source Lines of Code set off a firestorm from the proponents of #NoEstimates.
I'd rather not estimate than estimate with SLOC
or my favorite, since we work in the domains of flight avionics (command and data handling (C&DH) and guidance, navigation, and control (GN&C)), fire control systems, fault-tolerant process control and the diagnostic coverage needed for process safety management, and ground data and business process systems for both aircraft and spacecraft:
I'm no longer going to fly with any company that counts LOC as (it) shows a lack of intelligence. †
So the question is: where and when is estimating the source lines of code useful for making business decisions?
Embedded Software Intensive Systems
In the embedded systems business, memory is fixed, and processor speed is hardwired and many times limited by thermal control processes. Aircraft and spacecraft avionics bays have limited cooling, so getting a faster processor has repercussions beyond its cost. In an aircraft, cooling must be added, increasing weight and possibly impacting the center of gravity. In a spacecraft, cooling is not done with fans and moving air; there is no air. Heat pipes and radiators are needed, again adding weight.
For those with experience in rapid development of small chunks of code that get released often to the customer for incremental use in the business process, which then provides feedback for the next sliced piece of functionality, being concerned about the center of gravity, the thermal load, or the realtime critical path of the executing code (so it maintains the realtime closed-loop control algorithm and we don't crash into the end of the runway or onto the surface of a distant planet) is probably not in their vocabulary.
Business and Processing Systems
For terrestrial systems, even business processing systems, the number of lines of code has a direct impact on cost and schedule. Let's start with a source code security analyzer. Those whose skill is rapidly chunking out pieces of useful functionality aren't likely to be interested in running all their code through a security analyzer before even starting the compile and checkout process.
A source code security analyzer examines source code to detect and report weaknesses that can lead to security vulnerabilities.
They are one of the last lines of defense to eliminate software vulnerabilities during development or after deployment. Like all things mission critical, there is a specification: Source Code Security Analysis Tool Functional Specification Version 1.1, NIST Special Publication 500-268, February 2011, http://samate.nist.gov/docs/source_code_security_analysis_spec_SP500-268_v1.1.pdf
Development and Product Maintenance
A recent hands-on experience with the need to know the SLOC comes from a refactoring effort to remove all the reflection from a code base. For those not familiar with reflection: it provides objects that describe assemblies, modules, and types. Reflection dynamically creates an instance of a type, binds the type to an existing object, or gets the type from an existing object and invokes its methods or accesses its fields and properties. If you are using attributes in your code, reflection enables you to access them.
This is a clever way to build code in a rapidly changing requirements paradigm. A bit too clever in our high-performance transaction processing system.
In larger production transaction processing systems, it's a way to crater the performance of the code by searching for object types on every single call for the transaction.
Removing all the reflection code structures eliminated a huge percentage of the CPU time, memory requirements, and database performance impacts - along with separating all the DB logic into stored procedures - resulting in the decommissioning of large chunks of the server farm running a very large public health application.
How long is it going to take to refactor all this code? I know, let's make an estimate by counting the lines of code. Do a few conversions from the current design (reflection), and count how long that took. Divide the total lines of code (objects and their size) by that rate and we have an Estimate to Complete. Add some margin and we'll know approximately when the big pile of crappy code can get rid of the smell of running fat, slow, and error prone.
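The estimate-to-complete arithmetic just described can be sketched directly. The SLOC counts, pilot effort, and margin below are invented for illustration.

```python
# Convert a sample of the code, measure the rate, extrapolate, add margin.
total_sloc = 120_000    # size of the code base to refactor (assumed)
sample_sloc = 4_000     # lines converted in the pilot (assumed)
sample_days = 10        # effort the pilot took (assumed)

rate = sample_sloc / sample_days      # SLOC refactored per day
remaining = total_sloc - sample_sloc
etc_days = remaining / rate
margin = 0.25                         # schedule margin for risk
estimate = etc_days * (1 + margin)

print(f"Estimate to complete: {estimate:.0f} days (incl. {margin:.0%} margin)")
```

The SLOC count is doing the work of a proxy for effort here, which is the claim this section defends.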
High Performance Embedded Mission Systems
High Performance Embedded Systems are found everywhere. Current estimates show they outnumber desktop and server systems 100 to 1. Most of these systems have ZERO defect goals, as well as ZERO tolerance for performance shortfalls, processing disruptions, and other reset conditions.
How do we have any sense that the code base is capable of meeting these conditions? Testing of course is one way, but exhaustive testing is simply not possible. In a past life, verification and validation of the code was the method - and it is still the method. Along with that is the cyclomatic complexity assessment of the code base, another activity not likely to be of much interest to those producing small chunks of sliced code to rapidly satisfy the customer's emerging, and possibly unknowable until they see it working, needs.
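For reference, cyclomatic complexity is computed from a function's control-flow graph as M = E - N + 2P (edges, nodes, connected components). A toy sketch with an invented graph:

```python
# Cyclomatic complexity: M = E - N + 2P, where E = edges, N = nodes,
# P = connected components of the control-flow graph.
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# Invented control-flow graph for a single function with one if/else
# and one loop: 7 nodes, 8 edges.
m = cyclomatic_complexity(edges=8, nodes=7)
print(f"Cyclomatic complexity: {m}")  # prints 3
```

M counts the linearly independent paths through the code, so it is also the minimum number of test cases needed to cover every branch once.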
So In The End
Unless we suspend the principles of Microeconomics and Managerial Finance when making management decisions in the presence of uncertainty, we're going to need to estimate the outcomes of our decisions.
This process is the basis of opportunity cost - that is, what is the cost of one decision over some others? If I make Decision A, what is the cost of NOT making Decision B or C? This LOST opportunity is the cost of choice.
Unless we suspend the principles of probability and statistics when applied to networks of interrelated work, we're not going to be able to make decisions without making estimates.
In the four examples above, from direct hands-on experience, Source Lines of Code are a good proxy for making estimates about cost and schedule, as well as the complexity of the code base when computing the inherent reliability and fault tolerance of the applications on which our daily lives depend: flight controls in aircraft, process control loops in everything under computer control (including the computers themselves), and the assurance that the code we write is secure and will behave as needed.
If you hear an unsubstantiated claim that SLOC is not of any use in estimating future outcomes, ask: when you were working on a system where failure is not an option, did those paying for that system tell you they didn't need to estimate the outcomes of their decisions? Haven't worked in that environment? You may want to do some exploring of your own to see the many ways estimates are made, and how SLOC is one of them, in Software Intensive Systems Cost and Schedule Estimating. That document is an example of how SLOC is used in systems whose cost and performance are sensitive to the size of the code base. Take a read and possibly see something you may not have encountered before. It may not be your domain, but embedded systems outnumber desktop and server side systems 100 to 1.
One final thought about Software Intensive Systems and their impact on larger software development processes is the introduction of Agile Development in these domains. Progress is being made in the integration of Agile with large systems acquisition processes. Here's a recent briefing from a domain where systems are engineered - systems we depend on to work as specified every single time.
† It's going to be a long walk for the poster of that nonsense idea. Oh yeah, those building Positive Train Controls are also realtime embedded systems developers, and they use SLOC to estimate timing, testing, complexity, and many other ...ilities. Same with auto manufacturers. Maybe the Nike shoe company doesn't. So enjoy the walk. And BTW, that OP deleted his post, but worry not - got a screen capture.
In some "points of view" the notion of measuring software development parameters with Source Lines of Code is equivalent to the devil incarnate. This is of course another POV that makes little sense without understanding the domain and context. It's one of those irrationally held truths that has been passed down from on high by those NOT working in the domains where SLOC is a critical measure of project and system performance.
In the embedded realtime systems domains - Software Intensive Systems - where the number of systems and the related code base dominates the desktop and server side code base by 100X, the number of lines of code in a system is a direct measure of predicted cost and schedule, as well as predicted performance. Estimating in the presence of uncertainty for Software Intensive Systems is a critical success factor.
For some background on software intensive systems...
The importance of embedded systems is undisputed. Their market size is about 100 times the desktop market. Hardly any new product reaches the market without embedded systems any more. The number of embedded systems in a product ranges from one to tens in consumer products and to hundreds in large professional systems. […] This will grow at least one order of magnitude in this decade. […] The strong increasing penetration of embedded systems in products and services creates huge opportunities for all kinds of enterprises and institutions. At the same time, the fast pace of penetration poses an immense threat for most of them. It concerns enterprises and institutions in such diverse areas as agriculture, health care, environment, road construction, security, mechanics, shipbuilding, medical appliances, language products, consumer electronics, etc. (Embedded Systems Design: The ARTIST Roadmap for Research and Development, ed. Bruno Bouyssounouse and Joseph Sifakis, Berlin/Heidelberg: IEEE Computer Society Press, 2005, p. 72. Lecture Notes in Computer Science, Vol. 3436)
There are some who are repelled by the notion of counting the lines of code, or estimating the number of lines of code that may be needed to produce the needed capabilities. But that'd be the domain problem again.
Databases exist that correlate the SLOC with cost and schedule for business systems. (www.qsm.com)
So like it or not, consider it the devil incarnate or not, the numbers talk.
Predicting computer performance requirements for a completed system early in the design and development lifecycle of that system is challenging. Software requirements and avionic or hardware systems many times mature in parallel, and, in the early stages of design, uncertainty about meeting the performance requirements makes determination of the processing architecture difficult.
Later in the design process, as details are finalized and prototypes can be developed, estimates of performance, cost, and schedule become increasingly accurate. But if we wait until later in the lifecycle to make architectural changes, those changes are much more costly. They also introduce schedule and technical risks.
The earlier performance needs are determined and the corresponding system architecture is established, the easier an appropriate computing platform (hardware and software) can be incorporated into the design.
A direct example I'm familiar with is NASA's Orion Crew Exploration Vehicle flight software. That approach uses available requirements documentation as the basis of the estimate and decouples input/output (I/O) processing from computation-based processing, estimating each separately and then combining them into a final result.
This approach was unique in that it was used to estimate the execution time of unwritten or partially specified software, in addition to giving a specific contribution for I/O - as well as estimating the time needed to develop the code and therefore its cost. The method for estimating I/O processing performance was based on quantifying data, and the method for estimating algorithmic processing was based on approximated code size.
The result was used to predict processor types and quantities, allocate software to processors, predict communication bandwidth utilization, and manage processor margins. (Requirements-based execution time prediction of a partitioned real-time system using I/O and SLOC estimates, Innovations in Systems and Software Engineering, Volume 8, Issue 4, December 2012, pp. 309-320)
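The shape of that decoupled estimate can be sketched in a few lines. Every number below is an illustrative assumption - none are from the Orion paper - but the structure (I/O time from data volume, compute time from approximated code size, combined against a frame budget) follows the description above:

```python
# Illustrative assumptions only - NOT figures from the Orion work.
IO_BYTES_PER_FRAME = 40_000   # estimated bus traffic per frame
BUS_BYTES_PER_MS   = 10_000   # assumed effective bus throughput
EST_SLOC           = 12_000   # approximated size of code executed per frame
INSTR_PER_SLOC     = 8        # assumed source-to-instruction expansion
INSTR_PER_MS       = 50_000   # assumed throughput of a rad-hard processor

io_ms      = IO_BYTES_PER_FRAME / BUS_BYTES_PER_MS   # I/O contribution
compute_ms = EST_SLOC * INSTR_PER_SLOC / INSTR_PER_MS  # compute contribution
frame_ms   = io_ms + compute_ms                      # combined estimate

frame_budget_ms = 25.0                               # e.g. a 40 Hz frame
margin = 1.0 - frame_ms / frame_budget_ms            # processor margin
print(f"I/O {io_ms:.2f} ms + compute {compute_ms:.2f} ms "
      f"= {frame_ms:.2f} ms ({margin:.0%} margin)")
```

With estimates like this per software partition, processor counts and margins can be managed long before the code exists.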
Now, is SLOC appropriate for you? Good question. Actually a theological question in some quarters, since the conveyed truth from the agile community is that this is never an appropriate approach. Trouble is, research shows a direct correlation between the size of a software system - both measured and estimated - and its cost and schedule.
Databases exist showing this and other parametric measures that, when used, produce estimates very useful to both business and technical management. At the ICEAA 2014 conference a colleague and I presented a research paper showing how to apply Time Series Analysis (ARIMA) and Principal Component Analysis (PCA) to estimate the future performance of projects. There was also a briefing on the databases available for making estimates of software intensive systems; here's a sample:
These are some sources of reference classes for estimating cost and schedule for business and engineering systems. So whatever your thoughts - and likely biases - SLOC is a very useful production tool in many domains - business and embedded systems - with reference class databases, if you're willing to do the work to estimate the complexity of the code. If you claim it can't be done, then for you that's likely true.
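The ICEAA paper used full ARIMA models; a much smaller cousin of the same idea - forecasting a project's future performance from its own past - is a one-lag autoregression fit by least squares. A sketch on hypothetical weekly throughput data (the numbers are invented for illustration):

```python
# Hypothetical weekly throughput for a project (illustrative data).
throughput = [21, 24, 22, 27, 25, 29, 28, 31, 30, 33]

# Fit an AR(1) model x_t = c + phi * x_{t-1} by ordinary least squares.
prev, curr = throughput[:-1], throughput[1:]
n = len(prev)
mean_p, mean_c = sum(prev) / n, sum(curr) / n
phi = (sum((p - mean_p) * (c - mean_c) for p, c in zip(prev, curr))
       / sum((p - mean_p) ** 2 for p in prev))
c0 = mean_c - phi * mean_p

# One-step-ahead forecast from the latest observation.
forecast = c0 + phi * throughput[-1]
print(f"phi={phi:.2f}  next-week forecast={forecast:.1f}")
```

Real project forecasting would add differencing, moving-average terms, and confidence intervals (the "I" and "MA" in ARIMA), but even this sketch shows the point: the estimate comes from data, not from a guess.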