Software Development Blogs: Programming, Software Testing, Agile, Project Management


Herding Cats - Glen Alleman
Performance-Based Project Management® Principles, Practices, and Processes to Increase Probability of Success

What Does Done Look Like?

Fri, 11/27/2015 - 17:47

One source of project failure in any domain is failing to answer the question What Does Done Look Like? The answer to this question has to be in units of measure meaningful to the decision makers. Here's a white paper on this topic.

As always, the answers to these questions are given in the presence of uncertainty about future outcomes, and because of that each answer needs to be estimated in some way. Without making those estimates, it's not possible to have any visibility into the outcomes of the decisions made along the way to Done.

Related articles: Navigating the Way Home, All Project Work is Probabilistic Work, Closed Loop Control
Categories: Project Management


Thu, 11/26/2015 - 18:50

Reflect upon your present blessings, of which every man has many - not on your past misfortunes, of which all men have some. - Charles Dickens

Categories: Project Management

Estimating Software Intensive Systems

Thu, 11/26/2015 - 16:29

Estimation and the measurements that result are critical success factors for Software Intensive Systems. Software development projects exist to design, build, and test products and to provide services from those software systems to customers. These products and their services include hardware, software, the documents describing their use and maintenance, and the operational data needed to provide business value to the customers who paid for that value.

Some may argue the differences between a project, product, or service. That is irrelevant for the discussion here. Some may argue there are many types of ways to develop products or services. That is irrelevant here as well. Some projects deliver services on a continuous basis. Some projects have a finite period of performance. Each needs to answer the questions below.

But just to clear the air, there are some differences between a system and a product. A system usually implies a unique collection of hardware and software that a single organization owns and operates. The system for processing ACA eligibility applications is an example I worked on. This system was operated by a State agency in support of the governmental policies of providing affordable health care access to its citizens. The $42,000,000 a year budget for this system includes design, development, test, operations, and support of the hardware and software for a state government health care agency.

The products used to deploy the system included database engines, web servers, security products, development products, testing and integration products, help desk products and similar purpose built or COTS products integrated into a system.

Good Estimates Are One Key To Success

Estimates provide information to the decision makers to assess the impact of those decisions on future outcomes of the project, product, or system. Estimates provide information to make decisions, define feasible performance objectives, and substantiate the credibility of plans, revise designs and improve future estimates and processes.

The big question is: if the activities listed below are needed for the success of the project, how can these questions be answered in the absence of making an estimate?

Product Size, Performance, and Quality?

  • Evaluate feasibility of requirements
  • Analyze alternative product designs
  • Determine the required capacity and speed of hardware components
  • Evaluate product performance - accuracy, speed, reliability, availability and other ...ilities
  • Quantify resources needed to develop, deploy, and support a product, service, or system
  • Provide technical baselines for tracking and controlling cost, schedule, and technical performance

Project Effort, Cost, and Schedule?

  • Determine project feasibility in terms of cost and schedule
  • Identify and assess project risks
  • Negotiate achievable commitments
  • Prepare realistic plans and budgets
  • Evaluate business value, the cost of achieving that value, and the cost-benefits of that value
  • Provide cost and schedule baselines for tracking and controlling the business and technical processes

Process Capability and Performance

  • Predict resource consumption and the efficiency of those resources
  • Establish norms for expected performance
  • Identify opportunities for improvement of technical, cost, and schedule parameters

Anyone conjecturing that the answers to these questions can be obtained without estimating needs to show exactly how that can happen.

Related articles: Estimates, Critical Success Factors of IT Forecasting, Humpty Dumpty and #NoEstimates, Complex Project Management, Estimating Software-Intensive Systems, Essential Reading List for Managing Other People's Money, Can Enterprise Agile Be Bottom Up?, Intentional Disregard for Good Engineering Practices?
Categories: Project Management

Agile at Scale - A Reading List (Update)

Wed, 11/25/2015 - 22:29

I'm working two programs where Agile at Scale is the development paradigm. When we start an engagement using other people's money, in this case money belonging to a sovereign, we make sure everyone is on the same page. When Agile at Scale is applied, it is usually applied on programs that have tripped the FAR 34.2/DFARS 234.2 levels for Earned Value Management. This means $20M programs are self-assessed and $100M and above are validated by the DCMA (Defense Contract Management Agency).

While these programs are applying Agile, usually Scrum, they are also subject to EIA-748-C compliance and a list of other DIDs (Data Item Descriptions) and other procurement, development, testing, and operational guidelines. This means there are multiple constraints on how the progress to plan is reported to the customer - the sovereign. These programs are not 5 guys at the same table as their customer exploring what will be needed for mission success when they're done. These programs are not everyone's cup of tea, but agile is a powerful tool in the right hands on Software Intensive Systems of Systems for Mission Critical programs. Programs that MUST deliver the needed Capabilities, at the Need Time, for the Planned Cost, within the planned Margins for cost, schedule, and technical performance.

One place to start to improve the probability that we're all on the same page is this reading list. This is not an exhaustive list, and it is ever growing. But it's a start. It's hoped this list is the basis of a shared understanding that while Agile is a near universal principle, there are practices that must be tailored to specific domains. And one's experience in one domain may or may not be applicable to other domains.

Like it says in the Scrum Guide. 

Scrum (n): A framework within which people can address complex adaptive problems, while productively and creatively delivering products of the highest possible value.

And since Scrum is an agile software development framework - a framework, not a methodology - Scrum of Scrums and Agile at Scale, especially Agile at Scale inside EIA-748-C programs, have much different needs than 5 people sitting at the same table with their customer with an emerging set of requirements where the needed capabilities will also emerge.

Related articles: Slicing Work Into Small Pieces, Agile Software Development in the DOD, Technical Performance Measures, What Do We Mean When We Say "Agile Community?", Can Enterprise Agile Be Bottom Up?, How To Measure Anything
Categories: Project Management

Debunking: The Five Laws of Software Estimating

Wed, 11/25/2015 - 17:52

The Five Laws of Software Estimating contains several bunk ideas. Let's look at each of the five.

1. Estimates are a Waste

Time spent on estimates is time that isn't spent delivering value. It's a zero-sum game when it comes to how much time developers have to get work done - worse if estimates are being requested urgently and interrupting developers who would otherwise be "in the zone" getting things done. If your average developer is spending 2-4 hours per 40-hour week on estimates, that's a 5-10% loss in productivity, assuming they were otherwise able to be productive the entire time. It's even worse if the developer in question is part-time, or is only able to spend part of their work week writing software.

The estimates aren't for the developers, they're for those paying the developers. 

Estimates can help developers plan work sequencing, needed staffing and other resources, but to say it takes away from their development effort reeks of hubris. 

Writing software for money requires money to be provided and sales revenue to be acquired to offset that cost. This is a business discussion, not a coder's point of view.

2. Estimates are Non-Transferable

Software estimates are not fungible, mainly as a corollary to the fact that team members are not fungible. This means one individual’s estimate can’t be used to predict how long it might take another individual to complete a task.

This is the basis of Reference Class Forecasting. Write down what the estimates were when you made them, classify the estimates by the categories of software, put those in a spreadsheet, and use them again when you encounter a similar class of software. This is standard software engineering practice in any mature organization - CMMI ML 2 and 3 process areas.
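A minimal sketch of that practice, with hypothetical numbers: past (estimate, actual) pairs are recorded for one class of work, and the historical actual-to-estimate ratios bound a new estimate for similar work.

```python
from statistics import median

# Hypothetical reference class: (estimate_hours, actual_hours) pairs
# recorded for past work items of one category of software.
reference_class = [(8, 12), (16, 20), (4, 7), (24, 30), (10, 14)]

def forecast(new_estimate, history):
    """Adjust a raw estimate using the class's historical actual/estimate ratios."""
    ratios = sorted(actual / estimate for estimate, actual in history)
    return (new_estimate * ratios[0],        # best case seen so far
            new_estimate * median(ratios),   # most likely adjustment
            new_estimate * ratios[-1])       # worst case seen so far

low, likely, high = forecast(12, reference_class)  # 12-hour raw estimate
```

This turns a single-point guess into a range grounded in recorded performance, which is the point of keeping the spreadsheet in the first place.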

3. Estimates are Wrong

Estimates aren’t promises. They’re guesses, and generally the larger the scope and the further in the future the activity being estimated is, the greater the potential error.

This is one of those naive statements that ignores the mathematics of estimating.

Estimates can be guesses, but that'd be really bad estimating. Don't do stupid things on purpose. Read Estimating Software Intensive Systems and learn how to estimate; then you won't be saying naive things like estimates are guesses.

4. Estimates are Temporary

Estimates are perishable. They have a relatively short shelf-life. A developer might initially estimate that a certain feature will take a week to develop, before the project has started. Three months into the project, a lot has been learned and decided, and that same feature might now take a few hours, or a month, or it might have been dropped from the project altogether due to changes in priorities or direction. In any case, the estimate is of little or perhaps even negative value since so much has potentially changed since it was created.

Yes, estimates are temporary. They are updated periodically with new information. This is standard estimating process.

Reference Class Forecasting updates the reference classes when new data is available. 

5. Estimates are Necessary

Despite the first four Laws of Estimates, estimates are often necessary. Businesses cannot make decisions about whether or not to build software without having some idea of the cost and time involved.

Yes they are. Making decisions in the presence of uncertainty means making estimates of the outcomes and the impacts of those decisions on the future.

So before applying any of these ideas, ask a simple question and get the answer.

What's the value at risk for NOT estimating the impact of your decision?
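That question can be made concrete with a back-of-the-envelope expected-loss calculation. Every figure below is hypothetical, chosen only to show the shape of the arithmetic.

```python
# Hypothetical exposure from committing to a date without an estimate:
# assume a 40% chance of a 3-month overrun at a $200k/month burn rate.
p_overrun = 0.40
overrun_months = 3
burn_rate = 200_000          # dollars per month
cost_of_estimating = 25_000  # effort to produce a credible estimate

value_at_risk = p_overrun * overrun_months * burn_rate  # expected overrun cost
# Estimating pays for itself when it reduces exposure by more than it costs.
worth_estimating = value_at_risk > cost_of_estimating
```

If the value at risk dwarfs the cost of producing the estimate, the "estimates are a waste" argument collapses on its own terms.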


Related articles: Sources for Reference Class Forecasting, Should I Be Estimating My Work?, Managing Your Project With Dilbert Advice - Not!
Categories: Project Management

Decision Making in the Presence of Uncertainty

Thu, 11/19/2015 - 17:02

Just back from the Integrated Program Management Workshop in Bethesda MD this week, where the topic of Agile and Earned Value Management occupied 1/3 of the conference talks and training - I was among those speaking and training. I'm currently working two Agile+EVM programs in the Federal Government and started on this path back in 2003. Today the maturity of both Agile and EVM is much greater than in the early 2000s. Tools, processes, and practices have improved. Now we have SAFe and Scrum of Scrums as the basis of our program controls work. 5 guys in the same room as their customer around the same table is extremely rare where I work. The threshold for EVM is $20M and full validation of the EVMS is now $100M, so small teams are simply not the paradigm. But Agile is still a powerful tool in the INTEL, Software Intensive Systems, Systems of Systems, and rapidly emerging enterprise IT projects in our domain. All driven by NDAA section 804.

In this paradigm, programs are awarded through a formal contracting process guided by the Federal Acquisition Regulations, starting with FAR 15 - competitive procurement. Each award has a period of performance and a dollar amount in some form - Not To Exceed, Firm Fixed Price, or Cost Plus - but still a dollar amount. On all these programs, requirements emerge over the course of the work. The notion that requirements are defined upfront is a Red Herring, raised by those not working in this domain. Capabilities Based Planning is used in most cases for larger programs. Increasing maturity of the end item deliverables is guided by the Integrated Master Plan / Integrated Master Schedule process in the DOD and other agencies, as well as applied to large commercial programs we work.

In all cases decisions must be made in the presence of uncertainties about the future. This is the basis of all risk informed decision making, guided by the Risk and Opportunity Acquisition Guide.

Much of this experience is applicable outside of major program development, based on the 5 Immutable Principles of project success:

  1. What does Done look like in units of measure meaningful to the decision maker?
  2. What's the Plan to get to Done with the needed Capabilities, at the needed Time, for the needed Budget, to start returning the needed value or fulfilling the mission?
  3. What resources are needed to deliver the needed Capabilities, at the needed Time, for the needed Budget?
  4. What impediments will be encountered along the way to Done, and what activities are needed to remove these impediments?
  5. How is progress being measured to inform the decision makers of the probability of success of the program?

These 5 principles are applicable to all project work no matter the domain, process, procedures, or tools.

I came across the briefing below while researching materials for a client. The decision making processes shown there are deterministic. In practice these are probabilistic branches. Each branch of the decision tree has a probability of occurrence. So making decisions requires estimating not only the probability of occurrence, but also the probabilistic outcome of those decisions.
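A minimal sketch of such a probabilistic decision tree, with hypothetical probabilities and payoffs: each option carries branches with probabilities of occurrence, and the choice is made on expected value rather than on a single deterministic path.

```python
# Two-option decision where each branch is probabilistic, not deterministic.
# All probabilities and payoffs below are hypothetical illustrations.
def expected_value(branches):
    """branches: list of (probability, payoff) pairs for one decision option."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9  # probabilities sum to 1
    return sum(p * payoff for p, payoff in branches)

build_in_house = [(0.6, 500_000), (0.4, -200_000)]  # succeed / overrun
buy_cots       = [(0.9, 300_000), (0.1, -50_000)]   # fits / needs rework

ev_build = expected_value(build_in_house)
ev_buy = expected_value(buy_cots)
best = "buy" if ev_buy > ev_build else "build"
```

Both the branch probabilities and the payoffs are estimates; without them there is no expected value to compare, which is the point of the paragraph above.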


So, you think you are good at making decisions? - 8th October - Aberdeen, from Association for Project Management

Making decisions in the presence of uncertainty between the multiple choices and the impacts of those choices requires making estimates of probabilistic outcomes. To suggest otherwise willfully ignores the Microeconomics of decision making and the Managerial Finance processes that fund those decisions.

Related articles: Moving EVM from Reporting and Compliance to Managing for Success, Agile Software Development in the DOD, 5 Questions That Need Answers for Project Success
Categories: Project Management

Immutable Principles of Managerial Finance When Spending Other People's Money

Mon, 11/16/2015 - 12:57

One of the primary responsibilities of management is to make decisions during the execution of projects so that gains are maximized and losses are minimized. Decision analysis is critical for projects with built-in flexibility, or options, when these choices operate in the presence of uncertainty. [1]

The notion that we can spend other people's money in the absence of the immutable principles of Managerial Finance and the Microeconomics of decision making pervades the #NoEstimates community. Here are a few of those principles to counter that fallacy.

  • We can't determine the value of a software feature or product unless we know the cost to acquire that feature or product. Say we determine a feature in our new ERP system is worth $150,000 in savings to us every year. What if acquiring that feature costs $50,000 one time and $10,000 a year in maintenance, a typical 20% maintenance charge? That sounds pretty good. But what if acquiring that feature costs $1,200,000 as a one time charge and $150,000 a year in maintenance? It'll take 8 years to break even, not counting the maintenance - and counting the maintenance, it never breaks even. This is the principle of cost benefit analysis. And since both the cost and the benefit are random variables in the future, we'll need to estimate both and construct time phased money to determine the cash flow, sunk cost recovery, and benefit flow.

We can't know the benefit until we know the cost to achieve that benefit
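The ERP-feature arithmetic above can be sketched as a generic break-even check. The cash flows are the figures from the example; the function itself is just the standard payback calculation.

```python
# Break-even sketch for the cost-benefit example above.
def breakeven_years(annual_benefit, one_time_cost, annual_maintenance):
    """Years until cumulative net benefit recovers the one-time cost,
    or None if the net annual benefit never recovers it."""
    net = annual_benefit - annual_maintenance
    if net <= 0:
        return None  # maintenance consumes the benefit: no break-even
    return one_time_cost / net

cheap = breakeven_years(150_000, 50_000, 10_000)       # well under a year
pricey = breakeven_years(150_000, 1_200_000, 150_000)  # never breaks even
floor_years = 1_200_000 / 150_000                      # 8 years, ignoring maintenance
```

Both inputs are future random variables, so in practice each argument would itself be an estimate, not a known constant.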

  • When we need to make a decision in the presence of uncertainty, and there are multiple choices, we need to choose between all of them and pick one. If we know something about the cost of each choice, we can assess the beneficial outcome of that choice. The cost of the choice - that is, to acquire a feature - also has a cost of NOT choosing the other choices. This is the opportunity cost between all the choices. This opportunity cost makes visible not only the beneficial outcome of our choice, but the cost of NOT making the other choices.

Microeconomics of decision making in the presence of uncertainty is based on Opportunity costs

  • In any complex software system development domain the usefulness of a precise numerical and analytical approach to making choices breaks down. Resorting to heuristics - empirical assessment of the past, modeling of the future - is not always the best approach, since it's sometimes hard to evaluate the validity of the heuristic because of its informal structure. One approach in this situation is an economics approach that recognizes software efforts expend valuable resources in the presence of an uncertain payoff. This is decision making under uncertainty. Like the Microeconomics of opportunity cost, Real Options is a tool applicable to this problem. We're not trading stock here, but a real options view of capital investments is based on the observation that investment opportunities are similar to a call option that confers upon its holder the right but not the obligation to purchase an asset at a set cost for a period of time. This value seeking approach is called Real Options.

Uncertainty is a central fact of life on most large IT capital investments. Along with uncertainty comes managerial flexibility. A real option refers to the right to obtain the benefits of owning a real world asset without the obligation to exercise that right. 
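A one-period binomial sketch of that real-options view, with purely hypothetical figures: deferring an investment is treated like a call option, exercised only if the project's value moves up.

```python
# Value of the right (not the obligation) to invest `cost` next period,
# when the project's value can move up or down. All figures hypothetical.
def deferral_option_value(v_up, v_down, p_up, cost, discount):
    payoff_up = max(v_up - cost, 0.0)      # invest only when worthwhile
    payoff_down = max(v_down - cost, 0.0)  # otherwise walk away at zero cost
    return discount * (p_up * payoff_up + (1 - p_up) * payoff_down)

# Project worth 1.5M or 0.7M next year with 50/50 odds; costs 1.0M to build.
option = deferral_option_value(1_500_000, 700_000, 0.5, 1_000_000, 1 / 1.05)
invest_now = 0.5 * 1_500_000 + 0.5 * 700_000 - 1_000_000  # naive expected NPV
```

The option to wait is worth more than committing now, because waiting truncates the downside branch - which is exactly the managerial flexibility the paragraph above describes, and valuing it requires estimating both the branch probabilities and the payoffs.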

So in the end, when making a decision - that is selecting from more than one choice - there are several methods listed here. 

Each of these methods of making choices in the presence of uncertainty requires making an estimate of the impact of that choice, an estimate of the cost to acquire the value for the choice, and an estimate of that value. 

[1] Real Options: The Value Added through Optimal Decision Making, Graziadio Business Review.

Related articles: A Theory of Speculation, Architecture-Centered ERP Systems in the Manufacturing Domain, Making Choices in the Absence of Information
Categories: Project Management

Immutable Laws of Software Development

Fri, 11/13/2015 - 17:45

A 2013 webinar at the Cyber Security & Information Systems Information Analysis Center presented some Immutable Laws of Software Development. These are worth repeating every time there is a suggestion that some method or another, or some new and untested idea, is put forth that will increase productivity by 10X or increase your profitability by NOT doing core business processes.

Here's the list presented in the webinar, dedicated to Watts Humphrey, who said all of these in the past. For each Immutable Law, I've made a suggestion on how to avoid the undesirable outcome.

  • The number of development hours is directly proportional to the size of the software product
    • Size Matters
    • Build products in chunks of capability, and deploy them incrementally when possible to provide measurable value
    • Increase the maturity of these capabilities with Capabilities Based Planning
  • When buyers and suppliers both guess as to how long a project should take, the buyer's guess will always win
    • Credible cost, schedule, and technical performance estimating must be in place and jointly used by buyer and supplier
    • The basis of estimate must include a risk adjusted baseline for reducible and irreducible risks
  • When management compresses the schedule arbitrarily, the project will end up taking longer
    • Any project without margin is late, over budget, and produces poor quality on day one
    • Reducible and irreducible uncertainty must be modeled and handled if this is to be avoided
  • Team morale is inversely proportional to the arbitrary and unrealistic nature of the schedule imposed on the team
    • All project variables - programmatic and technical - must have protective margin
  • Schedule problems are normal; management actions to remediate them will make them worse
    • Margin is needed for everything, since all project variables are random variables
  • When poor quality impacts schedule, schedule problems will end up as quality disasters
    • Technical performance (quality) requires cost and schedule margin to achieve the needed goal
  • Those that don't learn from poor quality's adverse impact on schedule are doomed to repeat it
    • Recording past performance in a statistically correct manner provides reference classes for modeling future performance
  • The fewer facts you know about a project during development, the more you will be forced to know later
    • Systems Engineering is an anchor discipline for all successful projects
    • This is nowhere more important than on software development projects
    • The Systems Engineering processes applied to software development are a critical success factor
  • The number of defects injected during development will be directly proportional to the development hours expended
  • The number of defects found in production use will be inversely proportional to the percentage of defects removed prior to integration, system, and acceptance testing
    • Formal defect modeling is the starting point for quality improvements
    • Test coverage metrics are a starting point
  • The number of defects found in production use will be directly proportional to the number of defects found in integration, system, and acceptance testing
    • Same as above
  • When test is the principal defect removal method during development, corrective maintenance will account for the majority of the maintenance spend
    • Designed-in quality is the basis of reducing defects
    • Designed-in quality starts with architecture, interface management, and process and data models
  • The amount of technical debt is inversely proportional to the length of the agile sprint
    • 2 week sprints produce more debt than 4 week sprints
    • Scrum, and agile in general, have no notion of margin, so without margin debt increases
  • The earliest predictor of a software project's cost, schedule, and quality outcome is the quality of the development process through code complete
    • Process is King
  • Management actions based on metrics not normalized by size will make the situation worse
    • All numbers must be properly modeled, statistically sound, and used to make management decisions
  • In a 40 hour week, the number of task hours for each engineer will stay under 20, unless steps are taken to improve it
    • Work processes must be modeled, recorded, and assessed to determine the capacity for work by the team
  • Successful cost, schedule, and quality outcomes depend on the degree of convergence between the organization's official process, the team's perceived process, and the individual's actual processes
    • Process is King
  • Insanity is doing the same thing over and over and firing the project manager and/or contractor when you don't get the results you expected
    • Strategy for the organization requires tangible assessments of the Critical Success Factors and should never be confused with Operational Effectiveness
    • If we ask someone to do something - like make a change - what are the Measures of Effectiveness and Performance for the change that can be assessed to find the root cause of the failure and the needed corrective actions?
    • Don't have those? It's gonna fail.


Related articles: Estimating Guidance, How to Deal With Complexity In Software Projects?, Overarching Paradigm of Project Success
Categories: Project Management

Misuses a Valid Concept

Wed, 11/11/2015 - 21:28

It's common these days to re-purpose a quote or a platitude from one domain into another and assume it's applicable to the second domain. My favorite recent one is

"Layers of redundancy are the central risk management property of natural systems" - Taleb

Taleb is the author of The Black Swan, about long tailed statistical processes in the financial domain. These Black Swans tend to bite you when you least expect it. Are there Black Swans in the software development domain? Only if you're not looking. Financial systems are rarely engineered to perform in specific ways. Software systems are, at least where I work, and I suspect everywhere someone is paying money for the system to be developed or acquired.

So let's look at the Taleb quote that is often re-quoted by agile people and especially those advocating no estimates.

First some full disclosure. One of my graduate degrees is in Systems Management, which is a combination of Systems Engineering and Finance. As well, I work with systems engineers and support systems engineering processes in the aerospace and defense domain. So I'm predisposed to view the work through the eyes of Systems Engineering. Everything is a System is a good starting point for what we do.

Now let's look at the Taleb quote through the eyes of Systems Engineering and the software systems that are engineered in the domain where we work. There are many kinds of redundancy found in our systems. To avoid falling victim to the platitudes that abound in the agile and No Estimates domains, let's start with a framing assumption.

Redundancy provides resiliency to the system to withstand disruption within acceptable degradation parameters and to recover within an acceptable time and composite costs and risks.

In Taleb's (financial trading systems) domain resilience is desirable, as it is in software intensive systems: software systems that fly the airliner you ride on, manage the trains, process credit card transactions, control air traffic, manage the moving parts of your car. Any system where software is the dominant component for the proper functioning of the product or service.

There are rules for assessing the resiliency that results from redundancy or other system design aspects:

  • Absorption rule - a buffering characteristic that prevents overload of the system. Redundancy can provide this protection. The Microsoft Always On product provides this as well as other resiliency and redundancy capabilities.
  • Limit Degradation Support rule - provides a lower limit to which the system can degrade before failing. This is the circuit breaker for your home. Also the circuit breaker for the stock exchange.
  • Margin Support rule - margin is added to the system to protect it from disruptions. This can be schedule margin, cost margin, technical performance margin, or operational margin. Any kind of margin that allows the system to continue to operate properly inside the range of parameters.

This notion of margin is absent from Agile development. And the result is that when things go wrong, you're late, over budget, and the product doesn't work. To have margin we must be able to estimate how much margin is needed. Too much margin is waste. Too little margin will not protect the system from disruption.
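Estimating "how much margin" can be sketched with a small Monte Carlo simulation. The task durations below are hypothetical triangular distributions; the margin is the gap between a chosen confidence level and the deterministic sum of most-likely durations.

```python
import random

# Hypothetical tasks as (low, most_likely, high) durations in days,
# modeling irreducible (aleatory) uncertainty in each duration.
random.seed(7)
tasks = [(8, 10, 16), (4, 5, 9), (10, 12, 20)]

def simulate(tasks, trials=10_000):
    """Sample total project duration `trials` times; return sorted totals."""
    return sorted(
        sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )

totals = simulate(tasks)
deterministic = sum(mode for _, mode, _ in tasks)  # naive most-likely plan
p80 = totals[int(0.8 * len(totals))]               # 80% confidence completion
margin = p80 - deterministic                       # schedule margin to carry
```

The deterministic plan sits well below the 80th-percentile completion, which is exactly why a plan without explicit margin is "late on day one."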

  • Physical Redundancy rule - buy two in case one breaks was the request when I first started writing code for a Ballistic Missile Defense radar system. We were buying the original Sun I cards to replace legacy computers. I went on from there to work at a triple redundant process control startup as the Software Manager, where we developed a physically redundant computer and software system in the petro-chem and nuclear power domain. Fault-Tolerant System Reliability in the Presence of Imperfect Diagnostic Coverage describes how that triple redundancy was protected through realtime fault detection and dynamic reconfiguration of the hardware components.
  • Functional Redundancy rule - sometimes called design diversity; avoids the vulnerabilities of Physical Redundancy.
  • Layers of Defense rule - states that for a failure to occur a disturbance has to penetrate a series of layers, similar to layers of Swiss Cheese. The system has holes like the holes in Swiss Cheese that allow the failure to penetrate to the next level, where it can be handled.
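The Physical Redundancy rule can be quantified with the textbook reliability formula for triple modular redundancy (TMR) with majority voting: with component reliability r and a perfect voter assumed, the system works when at least 2 of 3 components work, so R = 3r² - 2r³.

```python
# Triple modular redundancy with majority voting, perfect voter assumed:
# system works when at least 2 of 3 identical components work.
def tmr_reliability(r):
    return 3 * r**2 - 2 * r**3

single = 0.95
tmr = tmr_reliability(single)    # redundancy beats one good component
degraded = tmr_reliability(0.5)  # redundancy cannot rescue poor components
```

Note the crossover at r = 0.5: below it, TMR is actually worse than a single component, which is why the kind of redundancy and the fault-handling around it matter, not the platitude.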

So when we hear a platitude like Layers of redundancy are the central risk management property of natural systems, ask what kind of redundancy and what kind of fault handling and response processes are meant. In fact, ask first whether the quote, used as a platitude, is even applicable in the domain of interest. Or is it just a phrase picked up and repeated with little or no understanding of the principles, practices, or processes to which it CAN be applied?

[1] The Theory and Practice of Reliable System Design, Daniel Siewiorek and Robert Swarz

Related articles: Systems Thinking, System Engineering, and Systems Management; The Use, Misuse, and Abuse of Complexity and Complex; Want To Learn How To Estimate?; Agile as a Systems Engineering Paradigm; Software Engineering is a Verb
Categories: Project Management

Veterans Day 2015

Wed, 11/11/2015 - 16:00


For all who have served. For those I know directly in my professional and personal life. For all who have served throughout our United States. Let's celebrate our service. 

Categories: Project Management

Marine Corps Birthday

Tue, 11/10/2015 - 19:26


Having served with Marines, Happy Birthday. Having had Marines on my work teams, Happy Birthday

Semper Fi

Categories: Project Management

The Twilight Zone Approach to Business Decision Making

Tue, 10/27/2015 - 16:28

When we hear about focusing on value first, delivering early and often, there is rarely any mention of the cost of that delivery. Or anything about the customer's ability to accept those early and often deliveries and actually put them to work earning that value.

In the must-read book Product-Focused Software Process Improvement there is a paper, Value Creation by Agile Projects: Methodology or Mystery?, a conference paper from Lecture Notes in Business Information Processing, January 2009.

We can learn a lot from this paper.

Business value is a key concept in agile software development approaches. This paper presents results of a systematic review of literature on how business value is created by agile projects. We found that with very few exceptions, most published studies take the concept of business value for granted and do not state what it means in general as well as in the specific study context. We could find no study which clearly indicates how exactly individual agile practices or groups of those create value and keep accumulating it over time. The key implication for research is that we have an incentive to pursue the study of value creation in agile project by deploying empirical research methods.

The platitude of we focus on value is just that - a platitude. And like most platitudes, it gives you a warm and fuzzy feeling inside, but provides ZERO advice in terms of actionable outcomes.

Value (the business value of the software) has several attributes that need to be determined before the Value of the Value can be assessed as desirable:

  • What is the cost of acquiring that value?
  • When will I be able to put that value to work to start earning back that cost?
  • Over what period of time will I have to pay to acquire that value?
  • Are there any recurring costs associated with acquiring that value?
  • How do I account for the cost of acquiring that value, and for the Value itself, in the larger accounting systems for the project being funded?
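These questions are answerable with ordinary managerial finance. Here is a minimal sketch - all figures are hypothetical - of putting the cost of acquiring value, the recurring earn-back, and the time cost of money into one number:

```python
# Sketch: discounted cash flow for a delivered feature.
# The cash amounts and discount rate below are hypothetical.

def npv(rate, cashflows):
    """Net present value of per-period cashflows (period 0 first),
    discounted at the given per-period rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Pay 100 now to acquire the value, earn 40 per period for 4 periods,
# with a 10% per-period time cost of money.
value = npv(0.10, [-100, 40, 40, 40, 40])
```

If `value` is positive the acquisition earns back more than it costs, in today's money; the break-even period answers "when can I put that value to work earning back that cost?"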

This is the FASB 86 problem with Agile Software Development. This is a capitalization issue for business. All those agile and #NoEstimates platitudes need to come in contact with the reality of FASB 86 and other GAAP governance when spending other people's money. Not much has been said to date on that.

These are all finance and accounting questions, not coding questions. So in the end...

In the economics of writing software for money and producing the Value to the customer for that software, knowing the cost to acquire that Value, the time cost of money when producing that Value, and the planned arrival date of that Value versus the business need date of that Value all have to be known to some degree of confidence in order to make credible decisions. Determining this information is called ESTIMATING.

No matter how often it is repeated, the claim that #NoEstimates is applicable to writing software for money - using other people's money - is simply not true. It violates the microeconomics of decision making, managerial finance principles, and simple time-cost-of-money accounting, and is an all-around bogus idea.

Making decisions in the presence of uncertainty about future outcomes (value in exchange for cost) requires making estimates.

Related articles Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Risk Management Is How Adults Manage Projects: Agile is Not Risk Management (Alone)

Tue, 10/27/2015 - 15:18

Risk Management is How Adults Manage Projects - Tim Lister

Here's a formal definition of Risk Management.

Risk management is an endeavor that begins with requirements formulation and assessment, includes the planning and conducting of a technical risk reduction phase if needed, and strongly influences the structure of the development and test activities. Active risk management requires investment based on identification of where to best deploy scarce resources for the greatest impact on the program’s risk profile. PMs and staff should shape and control risk, not just observe progress and react to risks that are realized. Anticipating possible adverse events, evaluating probabilities of occurrence, understanding cost and schedule impacts, and deciding to take cost effective steps ahead of time to limit their impact if they occur is the essence of effective risk management. Risk management should occur throughout the lifecycle of the program and strategies should be adjusted as the risk profile changes. 

Managing in the presence of uncertainty and the resulting risk, means making estimates of the outcomes that will appear in the future from the decisions made today, in the presence of naturally occurring variances, and probabilistic events during the period over which the decision is applicable. Without these estimates, there is no means of assessing a decision for its effectiveness in keeping the project on track for success.

There is a connection between uncertainty and risk.

  • Uncertainty is present when probabilities cannot be quantified in a rigorous or valid manner, but can be described as intervals within a probability distribution function (PDF).
  • Risk is present when the uncertainty of the outcome can be quantified in terms of probabilities or a range of possible values.
This distinction is important for modeling the future performance of cost, schedule, and technical outcomes of a project in the presence of reducible and irreducible risk. Uncertainties that can be reduced with more knowledge are called Epistemic or Reducible.‡ Epistemic (subjective or probabilistic) uncertainties are event-based, knowledge-based, and reducible by further gathering of knowledge. Epistemic uncertainty pertains to the degree of knowledge about models and their parameters. The term comes from the Greek episteme (knowledge). Epistemic uncertainties:
  • Are event-based uncertainties, where there is a probability of something happening that will unfavorably impact the project.
  • Are described by a probability that something will happen in the future.
  • Allow us to state the probability of the event, and do something about reducing that probability of occurrence.

Uncertainties that cannot be reduced with knowledge are called - Aleatory or Irreducible†

Aleatory uncertainties pertain to stochastic (non-deterministic) events, the outcomes of which are described using a probability distribution function (PDF). The term aleatory comes from the Latin alea, a game of chance. These stochastic variabilities are the natural randomness of the process and are characterized by a probability density function (PDF) for their range and frequency. Since these variabilities are natural, they are irreducible.

  • These are Naturally occurring Variances in the underlying processes of the program.
  • These are variances in work duration, cost, and technical performance.
  • We can state the probability range of these variances within the Probability Distribution Function.
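A minimal sketch of what it means to state the probability range of these variances: sample the aleatory variability of each task (the triangular parameters below are invented for illustration) and read a confidence level off the resulting distribution:

```python
# Sketch: aleatory duration variance via Monte Carlo sampling.
# Task (min, most likely, max) durations are hypothetical.
import random

def simulate_schedule(tasks, trials=10_000, seed=1):
    """Sample the total duration of serial tasks, each drawn from a
    (min, most likely, max) triangular distribution; return sorted totals."""
    rng = random.Random(seed)
    return sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )

tasks = [(3, 5, 9), (2, 4, 8), (4, 6, 12)]  # days
totals = simulate_schedule(tasks)
p80 = totals[int(0.8 * len(totals))]  # 80% confidence completion time
```

Note the 80% confidence total is well above the sum of the "most likely" durations (15 days here) - the variance is in the process, not in anyone's estimating skill.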

When it is suggested projects can be managed in the absence of estimating the impact of any decisions, probabilistic events, or underlying statistical processes - think about Lister's quote and read his presentation - and proceed at your own risk.

A recent post, Top 5 Ways Agile Mitigates Risk, suggests Agile is the same as Risk Management. While the items listed there are good project management, the term Risk Management for projects means something beyond the agile concepts in the post.

Let's look at each of these suggestions that Agile is Risk Management. First, let's look at the processes of risk management and what they entail in practice. These processes have specific activities and relationships:
  • Risk Identification - Before risks can be managed, they must be identified. Identification surfaces risks before they become problems and adversely affect a program. The SEI has developed techniques for surfacing risks by the application of a disciplined and systematic process that encourages program personnel to raise concerns and issues for subsequent analysis.
    • Agile does not have a formal process for identifying risk
    • There is no Risk Register to capture the risks, codify them, or assign them ranges or probabilities of occurrence
  • Risk Analysis - is the conversion of risk data into risk decision-making information. Analysis provides the basis for the program manager to work on the "right" risks.
    • Agile does not provide a Risk Analysis process
  • Risk Planning - turns risk information into decisions and actions (both present and future). Planning involves developing actions to address individual risks, prioritizing risk actions, and creating an integrated risk management plan. The plan for a specific risk could take many forms. For example:
    • Mitigate the impact of the risk by developing a contingency plan (along with an identified triggering event) should the risk occur.
    • Avoid a risk by changing the product design or the development process.
    • Accept the risk and take no further action, thus accepting the consequences if the risk occurs.
    • Study the risk further to acquire more information and better determine the characteristics of the risk to enable decision making.
      • Agile does not provide a process for planning the reduction of risk that is separate from the development of the software
      • While not a critical issue, it's another example of Agile not actually being risk management, but using the name
  • Risk Tracking - consists of monitoring the status of risks and actions taken to ameliorate risks. Appropriate risk metrics are identified and monitored to enable the evaluation of the status of risks themselves and of risk mitigation plans. Tracking serves as the "watch dog" function of management.
    • Risk Tracking is done in the Risk Register
    • Agile does not provide a Risk Register process
    • One can certainly be built, but it's not a role of Agile software development
  • Control - corrects for deviations from planned risk actions. Once risk metrics and triggering events have been chosen, there is nothing unique about risk control. Rather, risk control melds into program management and relies on program management processes to control risk action plans, correct for variations from plans, respond to triggering events, and improve risk management processes.

    • Not clear how Agile provides a control function for managing risk
  • Communication & Documentation - Risk communication lies at the center of the model to emphasize both its pervasiveness and its criticality. Without effective communication, no risk management approach can be viable. While communication facilitates interaction among the elements of the model, there are higher level communications to consider as well. To be analyzed and managed correctly, risks must be communicated to and between the appropriate organizational levels and entities. This includes levels within the development project and organization, within the customer organization, and most especially, across the threshold between the developer, the customer, and, where different, the user. Because communication is pervasive, our approach is to address it as integral to every risk management activity and not as something performed outside of, and as a supplement to, other activities.
    • Communication is independent of any process or framework, so Agile can likely do this as well

With these processes, the management of Risk proceeds as described below


So Now To The Point - Finally. The conjecture that Agile is Risk Management needs to be assessed. Where in the Agile software development process can we find the Risk Management activities described above? Good question. Here are the usual suggestions:
  • Sprint Durations - providing tangible evidence of progress to plan is good project management. Measuring this progress is critical to managing risk. If the plan calls for a specified progress and that progress is not made, then there is risk to the planned schedule and, as a result, risk to the planned cost. But Sprint Durations are not a risk management process; they are a development process outcome used to TRACK the planned risk reduction
  • Retrospectives - at the end of the release, assessment of performance is always a good idea. But that information then needs to be applied to the Risk Handling processes through some process
  • Backlog Grooming - this has nothing to do with statistical or probabilistic risk management
  • Promoting Transparency - this is always good.
  • Frequent Deliveries - the period between deliveries must answer the question how long are we willing to wait before we find out we're late? Agile FORCES this period to be short.

This is the core contribution to managing in the presence of uncertainty: forced delivery of assessable outcomes on defined boundaries. This provides the necessary feedback to assess progress to plan. The sampling rate should be at a frequency sufficient to take corrective action when the project is in fact late. This is the Nyquist sampling rate from the Nyquist-Shannon theorem of information theory, which says (essentially) what sampling rate is sufficient to permit a discrete sequence of samples (the Sprints) to capture all the information from a continuous-time signal of finite bandwidth (the underlying project activities).

This answers the question - how long are you willing to wait before you find out you're late? The answer in agile is at the end of every Feature inside the Sprint. In our more traditional world, it's every month with the Integrated Program Performance Report. Agile Forces this to be every week or so.
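A toy calculation of this sampling argument, under the assumption of a steady slip rate (all numbers invented for illustration), shows how the review cadence bounds the time to discover you're late:

```python
# Sketch: the review interval bounds time-to-discovery of a slip.
# Assumes a constant slip rate; figures are hypothetical.
import math

def detection_day(slip_per_day, threshold, review_every):
    """First review day at which cumulative slip reaches the threshold."""
    days_needed = threshold / slip_per_day
    return math.ceil(days_needed / review_every) * review_every

# Slipping 0.2 days per day; we act once 2 days of slip accumulate.
weekly  = detection_day(0.2, 2.0, 7)    # sprint-style weekly review
monthly = detection_day(0.2, 2.0, 30)   # monthly performance report
```

The slip is discoverable on day 10; the weekly cadence surfaces it at the day-14 review, the monthly cadence not until day 30. Shorter sampling intervals mean earlier corrective action.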

‡ Epistemic uncertainty, also known as systematic uncertainty, is due to things we could in principle know but don't in practice. This may be because we have not measured a quantity sufficiently accurately, or because our model neglects certain effects, or because particular data are deliberately hidden. Epistemic uncertainties can be modeled with probability distributions. For example, there is a probability that the Always On function for the database farm will fail to switch to the shadow system when called upon to do so. The risk that results from epistemic uncertainty can be handled with specific activities paid for by the project - testing, redundancy, prototyping. Agile provides a means of handling epistemic uncertainties and the resulting risk: build incremental outcomes and test them to confirm they're what the customer wants. Agile contributes this information to the Identify, Analyze, and Plan phases of Risk Management.

† Aleatory uncertainty, also known as statistical uncertainty, is representative of unknowns that differ each time we run the same experiment. For example, the duration to produce a piece of software cannot be determined as a definitive value. There is always some variability in the duration. Aleatory uncertainties and the resulting risk can only be handled with margin - cost, schedule, and technical margin. Agile has no means of discussing the needed margin, since its business rhythm is time boxed on fixed intervals - no margin is available.

  Related articles Why Guessing is not Estimating and Estimating is not Guessing Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management
Categories: Project Management

Hofstadter's Law Is Misguided

Mon, 10/26/2015 - 17:49

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law - Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid

Hofstadter's Law is actually about self-referencing systems. The statement about how long it takes is a self-referencing statement.

The #NoEstimate community uses it as an example that you can't estimate, because even when you do it's going to be wrong.

This of course willfully ignores the principles, practices, and processes of mathematical estimating - both parametric and probabilistic estimates.

In both these paradigms, parametric and probabilistic, uncertainty is the core driver of variance both Irreducible and Reducible uncertainties. 

Here's the well known approach to managing in the presence of uncertainty:

Managing in the presence of uncertainty from Glen Alleman. What the #NoEstimates advocates fail to understand is that in the presence of reducible (epistemic) and irreducible (aleatory) uncertainty, actions must be taken to address these uncertainties and prevent them from unfavorably impacting the project outcomes.
  • Irreducible uncertainty can be addressed with Margin: cost margin, schedule margin, technical performance margin.
  • Reducible uncertainty can be addressed with redundancy and risk retirement activities that buy down the risk resulting from the uncertainty to an acceptable level.

In both cases managing in the presence of uncertainty means following Tim Lister's advice...

Risk Management is how Adults Manage projects

So when Hofstadter's Law is used without addressing the reducible and irreducible uncertainties and the resulting risk to project success, the result is Hofstadter's Law.

Self-referencing circular logic leading directly to project disappointment.

Related articles Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

How Many Definitions Are There? Let Me Count the Ways

Thu, 10/22/2015 - 06:00

In any discussion of project management, software development, and business management, agreeing on a shared set of definitions is critical for progress in exchanging ideas. When standard definitions are redefined to suit the needs of those conjecturing a new approach, it becomes more difficult to have a meaningful conversation.

One recent example is the No Estimates advocates redefining the meaning of estimate and forecast, so that forecasting is NOT estimating and the No Estimates moniker can be applied to what is essentially estimating:

  • Estimate: a human-judgement-derived estimate
  • Forecast: a data-driven forecast

First let's look at some definitions of Estimate as applied to project and program work. An estimate is applied to something: cost, schedule, some technical performance parameter. So we can't just use Estimate by itself; it needs to be an estimate of something. Here are some things we can estimate, and the processes for performing those estimates on projects that involve uncertainty. If there is no uncertainty in the project, the need for estimating goes away. Just observe the empirical data and calculate the needed value - cost, schedule, or technical performance.

  • Analogy Cost Estimate - An estimate of costs based on historical data of a similar (analog) item.
  • Ball Park Estimate - A very rough estimate (usually a cost estimate), but with some knowledge and confidence. ("Somewhere in the ball park.")
  • Budget Estimate - A cost estimate prepared for inclusion in the budget to support project funding.
  • Cost Estimate - A judgment or opinion regarding the cost of an object, commodity, or service. A result or product of an estimating procedure that specifies the expected dollar cost required to perform a stipulated task or to acquire an item. A cost estimate may constitute a single value or a range of values.
  • Cost Growth - A term related to the net change of an estimated or actual amount over a base figure previously established. The base must be relatable to a program, project, or contract and be clearly identified, including source, approval authority, specific items included, specific assumptions made, date, and the amount.
  • Cost Model - A compilation of cost estimating logic that aggregates cost estimating details into a total cost estimate.
  • Engineering Cost Estimate - is derived by summing detailed cost estimates of the individual work packages (tasks, sprints, iterations, defined packages of work that produce a single useable outcome) and adding appropriate burdens. Usually determined by a contractor's industrial engineers, price analysts, and cost accountants.
  • Estimate at Completion (EAC) (Cost) - Actual direct costs, plus indirect costs or costs allocable to the contract, plus the estimate of costs (direct and indirect) for authorized work remaining.
  • Long Range Investment Plans - are broad plans based on best estimates of future top-line resources that form the basis for making long-range affordability assessments of products or services produced by the project.
  • Manpower Estimate - An estimate of the most effective mix of direct labor and contract support for a project.
  • Parametric Cost Estimate - is a cost estimating methodology using statistical relationships between historical costs and other program variables such as system physical or performance characteristics, contractor output measures, or manpower loading.
  • Project Office Estimate (POE) - is a detailed estimate of development and ownership costs normally required for high-level decisions by the business. The estimate is performed early in the project and serves as the basepoint for all subsequent tracking and auditing purposes.
  • Should Cost Estimate - is an estimate of the project cost that reflects reasonably achievable economy and efficiency during the development and operational phases of the product or service.
  • Standard Error of Estimate - is a measure of divergence in the actual values of the dependent variable from their regression estimates. (Also known as standard deviation from regression line.) The deviations of observations from the regression line are squared, summed, and divided by the number of observations.
  • Technical Performance Measurement (TPM) - describes all the activities undertaken by the buyer to obtain design status beyond that treating schedule and cost. TPM is the product of design assessment that estimates the values of essential performance parameters of the current design as contained in Work Breakdown Structure (WBS) product elements through tests. It forecasts the values to be achieved through the planned technical program effort, measures differences between achieved values and those allocated to the product element by the Systems Engineering Process (SEP), and determines the impact of these differences on system effectiveness.

So now to the killer concept

If the project you are working on has no uncertainty, and therefore no resulting risk, the need for estimating is greatly reduced if not eliminated. If there is uncertainty and resulting risk, then Tim Lister has something to say...

Risk Management is How Adults Manage Projects

So: no uncertainty, no risk, no need for risk management, and little if any need for estimates of the outcomes of the decisions made by the project participants.

Can This Really Be True?

Ask whether your project has NO uncertainties. Nothing in the future that is unknown at the moment. No process that has any variance, in the past, now, or in the future. Nothing is going to change. All the work will run at the same efficiency as it does now. No defect will appear. No changing performance behaviors as a result of new development.

A bit about uncertainty. There are two kinds: reducible and irreducible. Reducible uncertainty is event based. That is, there is a probability associated with the uncertainty coming true. For example, there is a 30% probability of rain today over the forecast area. This is an event-based uncertainty. To protect yourself from this uncertainty, depending on where you live, bring a rain coat.

Then there is irreducible uncertainty: the naturally occurring variance in the underlying system of interest. For example, I drive to the airport every other Tuesday morning to work at a client site. There is reducible uncertainty around the weather; I'll look at the weather report to see what is going to happen. But given the weather, there will be an impact on the traffic. Snow or sleet will have a different impact than rain or sunshine. This impact is a variance in the traffic flow over which I have no control. The only protection against missing my flight is to have sufficient time margin in the drive. On a clear day, no traffic, no other impediments, it's 45 minutes from driveway to my favorite parking spot on the east side of DIA, Level 1. Margin is applied depending on the events and underlying processes of travel.
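Here's a sketch of sizing that time margin from the travel-time distribution. The delay distribution below is invented for illustration; the idea is to pick the margin that covers, say, 95% of the sampled outcomes:

```python
# Sketch: sizing schedule margin from a travel-time distribution.
# The delay distribution parameters are hypothetical.
import random

def margin_for_confidence(base, samples, confidence=0.95):
    """Margin above the base (clear-day) time needed to arrive on time
    with the given confidence, taken from sampled travel times."""
    ordered = sorted(samples)
    need = ordered[int(confidence * len(ordered))]
    return max(0.0, need - base)

rng = random.Random(7)
# Clear-day drive is 45 minutes; weather and traffic add a skewed delay,
# modeled here as triangular(0, 60, mode=10) minutes.
times = [45 + rng.triangular(0, 60, 10) for _ in range(10_000)]
margin = margin_for_confidence(45, times, 0.95)
```

The margin is not the average delay; it is set by the tail of the distribution, which is exactly why decisions based on the mean leave you standing at the gate.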

So when the term Estimate and its related term Forecast are redefined so estimating is not seen as estimating, it pollutes the conversation. This conversation is critical to increasing the probability of success of projects in all domains.

  • Estimates are about the past, present, and future - an estimate of the number of Brachiosaurus that walked on the beach at what is now called Dinosaur Ridge in Morrison, Colorado. Or the estimate of the number of cars on E-470 headed to Denver International Airport. Or the estimate to complete for the lane expansion on CO-36 between Boulder and Denver.
  • Forecasts are about the future - a weather forecast.
Related articles Architecture -Center ERP Systems in the Manufacturing Domain IT Risk Management Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

The Unmyths of Estimating

Wed, 09/30/2015 - 17:33

Phillip Armour has a classic article in CACM titled "Ten Unmyths of Project Estimation," Communications of the ACM (CACM), November 2002, Vol 45, No 11. Several of these Unmyths are applicable to the current #NoEstimates concept. Much of the misinformation about how estimating is the smell of dysfunction can be traced to these unmyths.

Mythology is not a lie ... it is metaphorical. It has been well said that mythology is the penultimate truth - Joseph Campbell, The Power of Myth

Using Campbell's quote, myths are not untrue. They are an essential truth, but wrapped in anecdotes that are not literally true. In our software development domain, a myth is a truth that seems to be untrue. This is the origin of Armour's unmyth.

The unmyth is something that seems to be true but is actually false.

Let's look at the three core conjectures of the #Noestimates paradigm:

  • Estimates cannot be accurate - we cannot get an accurate estimate of cost, schedule, or probability that the result will work.
  • We can't say when we'll be done or how much it will cost.
  • All estimates are commitments - making estimates makes us committed to the number that results from the estimate.

The Accuracy Myth

Estimates are not single numeric values; they are probability distributions. If the probability distribution below represents the probability of the duration of a project, there is a finite minimum - some duration below which the project cannot be completed.

There is the highest-probability, or Most Likely, duration for the project: the Mode of the distribution. There is a mid point in the distribution, the Median: the value with half of the possible completion times below it and half above. Then there is the Mean of the distribution: the average of all the possible completion times. And of course The Flaw of Averages is in effect for any decisions being made on this average value †
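A quick sketch, with an invented right-skewed distribution, shows these three statistics diverging - which is why decisions made on the Mean alone fall to the Flaw of Averages:

```python
# Sketch: Mode, Median, and Mean differ for a skewed duration PDF.
# The triangular parameters (min 10, mode 12, max 30 days) are invented.
import random
import statistics

rng = random.Random(42)
samples = sorted(rng.triangular(10, 30, 12) for _ in range(100_000))
median = statistics.median(samples)
mean = statistics.fmean(samples)
# For this right-skewed distribution: Mode (12) < Median < Mean.
```

A plan built on the Mode (the "most likely" duration) is overrun more often than not, and even the Mean understates the tail that margin must cover.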


"It is moronic to predict without first establishing an error rate for a prediction and keeping track of one's past record of accuracy" - Nassim Nicholas Taleb, Fooled By Randomness

If we want to answer the question What is the probability of completing ON OR BEFORE a specific date, we can look at the Cumulative Distribution Function (CDF) of the Probability Distribution Function (PDF). In the chart below the PDF has the earliest finish in mid-September 2014 and the latest finish early November 2014.

The 50% probability is 23 September 2014. In most of our work, we seek an 80% confidence level of completing ON OR BEFORE the need date.

The project then MUST have schedule, cost, and technical margin to protect that probabilistic date.

How much margin is another topic.

But projects without margin are late, over budget, and likely don't work on day one. You can't complain about poor project performance if you don't have margin, risk management, and a plan for managing both, as well as the technical processes.

So where do these charts come from? They come from a simulation of the work: the order and dependencies of the work, and the underlying statistical nature of the work elements.

  • No individual work element is deterministic.
  • Each work element has some type of dependency on the previous work element and the following work element.
  • Even if all the work elements are independent and sitting in a Kanban queue, unless we have unlimited servers for that queue, being late on the current piece of work will delay the following work.
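A minimal sketch of that last point: with a single server working the queue, lateness on the current item pushes every following item out by the same amount, even though the items themselves are independent (the durations below are invented):

```python
# Sketch: single-server queue - lateness propagates to every follower.
# Durations are hypothetical, in days.

def finish_times(durations):
    """Finish time of each item worked serially by one server."""
    t, finishes = 0.0, []
    for d in durations:
        t += d
        finishes.append(t)
    return finishes

on_plan = finish_times([5, 5, 5])   # plan: every item takes 5 days
slipped = finish_times([8, 5, 5])   # first item runs 3 days late
delays = [s - p for s, p in zip(slipped, on_plan)]
```

Every item in `delays` slips by the full 3 days: independence of the work does not make the queue's completion times independent.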


So what we need is not accurate estimates; we need useful estimates. The usefulness of an estimate is the degree to which it helps make optimal business decisions. The process of estimating is buying information. The value of an estimate, like all value, is determined against the cost to obtain that information. The value of the estimate is the opportunity cost: the difference between the business decision made with the estimate and the business decision made without the estimate. ‡

Anyone suggesting that simple serial work streams can accurately forecast -  an estimate of the completion time - MUST read Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte Carlo Simulation, Troy Magennis.

In this book are the answers to all the questions those in the #NoEstimates camp say can't be answered.

The Accuracy Answer

  • All work is probabilistic.
  • Discover the Probability Distribution Functions for the work.
  • If you don't know the PDF, make one up - we use -5%/+15% for everything until we know better.
  • If you don't know the PDF, go look in databases of past work for your domain. Here are some databases:
    • http://www.nesma.org/
    • http://www.isbsg.org/
    • http://www.cosmicon.com/
  • If you still don't know, go find someone who does, don't guess.
  • With this framework - it's called Reference Class Forecasting, making estimates about your project from reference classes of other projects - you can start making useful estimates.
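The "-5%/+15% until we know better" placeholder from the list above can be expressed as an asymmetric band around a point estimate. This is a sketch of that convention, not a prescription:

```python
# Sketch: the -5%/+15% placeholder band around a point estimate.
# The band percentages come from the rule of thumb in the text.

def estimate_band(point, low_pct=0.05, high_pct=0.15):
    """Return (low, likely, high) for a point estimate, using an
    asymmetric band: a little below, more above."""
    return (point * (1 - low_pct), point, point * (1 + high_pct))

low, likely, high = estimate_band(100.0)  # e.g. 100 days of effort
```

The asymmetry matters: work can come in a little early, but the tail of lateness is longer, so the band above the point estimate is wider than the band below it.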

But remember, making estimates is how you make business decisions with opportunity costs. Those opportunity costs are the basis of Microeconomics and Managerial Finance. 

Cone of Uncertainty and Accuracy of Estimating

There is a popular myth that the Cone of Uncertainty prevents us from making accurate estimates. We now know we need useful estimates, and those are not prevented by the Cone of Uncertainty. Here's the guidance we use on our Software Intensive Systems projects.


Finally, in the estimate accuracy discussion comes the cost estimate. The chart below shows how cost is driven by the probabilistic elements of the project. Which brings us back to the fundamental principle that all project work is probabilistic. Modeling the cost, schedule, and probability of technical success is mandatory in any non-trivial project - anything beyond a de minimis project, one where being off by a lot doesn't really matter to those paying.


The Commitment Unmyth

So now to the big bugaboo of #NoEstimates: estimates are evil, because they are taken as commitments by management. They're taken as commitments by Bad Management - uninformed management, management that was asleep in the High School Probability and Statistics class, management that claims to have a Business degree but never took the Business Statistics class.

So let's clear something up,

Commitment is how Business Works

Here's an example taken directly from ‡ 

Estimation is a technical activity of assembling technical information about a specific situation to create hypothetical scenarios that (we hope) support a business decision. Making a commitment based on these scenarios is a business function.

The Technical "Estimation" decisions include:

  • When does our flight leave?
  • How do we get there? Car? Bus?
  • What route do we take?
  • What time of day and traffic conditions?
  • How busy is the airport, how long are the lines?
  • What is the weather like?
  • Are there flight delays?

This kind of information allows us to calculate the amount of time we should allow to get there.
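A minimal sketch of that calculation, with invented numbers for each leg of the trip (the distributions and their parameters are assumptions, not data):

```python
import random

# Each leg of the trip to the airport is uncertain; model each with a
# triangular PDF (minutes). All parameters here are invented.
def trip_minutes():
    drive = random.triangular(35, 90, 45)     # route + traffic spread
    parking = random.triangular(5, 20, 8)     # park and walk to terminal
    security = random.triangular(10, 60, 20)  # how long are the lines?
    return drive + parking + security

samples = sorted(trip_minutes() for _ in range(20_000))
allow = samples[int(len(samples) * 0.95)]  # allowance for 95% on-time
print(f"Allow about {allow:.0f} minutes to make the flight 95% of the time")
```

Note the technical step ends here: it produces a time-versus-confidence curve. Choosing which confidence level to commit to is the business step below.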

The Business "Commitment" and Risk decisions include:

  • What are the benefits in catching the flight on time?
  • What are the consequences of missing the plane?
  • What is the cost of leaving early?

These are the business consequences that determine how much risk we can afford to take.
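One way to see how those consequences turn into a decision is to minimize expected cost. The costs and the miss-probability curve below are invented for illustration:

```python
import math

# Business side of the flight example: trade the cost of leaving early
# against the cost of missing the plane. All numbers are invented.
COST_PER_MIN_EARLY = 1.0   # value of a minute spent waiting at the gate
COST_OF_MISSING = 600.0    # rebooking, lost meeting, etc.

def p_miss(buffer_minutes):
    # Hypothetical: the chance of missing falls off with a bigger buffer.
    return math.exp(-buffer_minutes / 30.0)

def expected_cost(buffer_minutes):
    return (buffer_minutes * COST_PER_MIN_EARLY
            + p_miss(buffer_minutes) * COST_OF_MISSING)

# Search integer buffers from 0 to 4 hours for the cheapest commitment.
best = min(range(0, 241), key=expected_cost)
print(f"Leave {best} minutes early; expected cost ${expected_cost(best):.0f}")
```

The estimate (the probability curve) and the commitment (the buffer chosen) are distinct artifacts - which is the point of the quoted passage.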

Along with these of course is the risk associated with the uncertainty in the decisions. So estimating is also Risk Management and Risk Management is management in the presence of uncertainty. And the now familiar presentation from this blog.

Managing in the presence of uncertainty from Glen Alleman

Wrap Up

Risk Management is how Adults manage projects - Tim Lister. Risk management is managing in the presence of uncertainty. All project work is probabilistic and creates uncertainty. Making decisions in the presence of uncertainty requires - mandates actually - making estimates (otherwise you're guessing, pulling numbers from the rectal database). So if we're going to have an Adult conversation about managing in the presence of uncertainty, it's going to be around estimating: making estimates, improving estimates, making estimates valuable to the decision makers.

Estimates are how business works - exploring for alternatives to estimates means willfully ignoring the needs of business. Proceed at your own risk.

† This average notion is common in the No Estimates community: take all the past stories or story points, find the average value, and use that for future values. That is a serious error in statistical thinking, since without the variance being acceptable, that average can be wildly off from the actual future outcomes of the project.
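A quick numerical illustration of this footnote's point, with made-up throughput data: two teams can have identical averages while one of them is useless to forecast from.

```python
import statistics

# Two hypothetical teams' story points per sprint. Both average 20,
# but the variance tells very different forecasting stories.
steady = [20, 21, 19, 20, 22, 18, 20]
erratic = [5, 40, 2, 35, 38, 10, 10]

for name, points in (("steady", steady), ("erratic", erratic)):
    mean = statistics.mean(points)
    sd = statistics.stdev(points)
    print(f"{name}: mean {mean:.1f}, std dev {sd:.1f}")
```

Forecasting the erratic team with "the average" alone ignores a standard deviation nearly as large as the mean itself.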

‡ Unmythology and the Science of Estimation, Corvus International, Inc., Chicago Software Process Improvement Network, C-Spin, October 23, 2013

Related articles: Architecture-Centered ERP Systems in the Manufacturing Domain, IT Risk Management, Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

All Conjectures Need a Hypothesis

Tue, 09/29/2015 - 16:53

As far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. Andreas Osiander's (editor) preface to De Revolutionibus, Copernicus, in To Explain the World: The Discovery of Modern Science, Steven Weinberg

In the realm of project, product, and business management we come across nearly endless ideas conjecturing to solve some problem or another.

Replace the word astronomy with whatever word is used by those conjecturing that a solution will fix some unnamed problem.

From removing the smell of dysfunction, to increasing productivity by 10 times, to removing the need to have any governance frameworks, to making decisions in the presence of uncertainty without the need to know the impacts of those decisions.

In the absence of any hypothesis by which to test those conjectures, leaving a greater fool than when entering is the likely result. In the absence of a testable hypothesis, any conjecture is an unsubstantiated anecdotal opinion.

An anecdote is a sample of one from an unknown population

And that makes those conjectures doubly useless, because not only can they not be tested, they are likely applicable only to those making the conjectures.

If we are ever to discover new and innovative ways to increase the probability of success for our project work, we need to move far away from conjecture, anecdote, and untestable ideas and toward evidence-based assessment of the problem, the proposed solutions, and the evidence that the proposed correction will in fact result in improvement.

One Final Note

As a first year Grad student in Physics I learned a critical concept that is missing from much of the conversation around process improvement. When an idea is put forward in the science and engineering world, the very first thing is to do a literature search.

  • Is this idea recognized by others as being credible? Are there supporting studies that confirm the effectiveness and applicability of the idea outside the author's own experience?
  • Are those supporting the idea themselves credible, or just following the herd?
  • Are there references to the idea that have been tested outside the author's own experience?
  • Are there criticisms of the idea in the literature? Seeking critics is itself a critical success factor in testing any ideas. There would be knock-down, drag-out shouting matches in the halls of the physics building about an idea. Nobel Laureates would be waving arms and speaking in loud voices. In the end it was a test of new and emergent ideas. And anyone who takes offense to being criticized has no basis to stand on for defending his idea.
  • Is the idea the basis of a business, e.g., is the author selling something? A book, a seminar, consulting services?
  • Has this idea been tested by someone else? We'd tear down our experiment, have someone across the country rebuild it, run the data, and see if they got the same results.

Without some way to assess the credibility of any idea - either through replication or assessment against a baseline (governance framework, accounting rules, regulations) - the idea is just an opinion. And as Daniel Patrick Moynihan said:

Everyone is entitled to his own opinion, but not his own facts. 

and of course my favorite

Again and again and again - what are the facts? Shun wishful thinking, ignore divine revelation, forget what "the stars foretell," avoid opinion, care not what the neighbors think, never mind the unguessable "verdict of history" - what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts! - Robert Heinlein (1978)

Categories: Project Management

Projects versus Products

Mon, 09/28/2015 - 16:00

There's a common notion in some agile circles that projects aren't the right vehicle for developing products. This is usually expressed by Agile Coaches. As a business manager applying Agile to develop products, as well as to deliver Operational Services based on those products, projects are how we account for the expenditures on those outcomes and how we coordinate the resources needed to produce products as planned.

In our software product business, we use both a Product Manager and a Project Manager. These roles are separate and at the same time overlapping.

  • Products are customer facing. Market needs, business models for revenue, cost, and earnings, and interfaces with Marketing, Sales, and other business management processes - contracts, accounting - are Product focused.

Product Managers focus on Markets. What features are the market segments demanding? What features Must Ship and what features can we drop? What are the Sales impacts of any slipped dates?

  • Projects are internally facing - internal resources need looking after. The notion of self organizing is fine, but self directed only works when the work efforts have direct contact with the customers. And even then, without some oversight - governance - a self directed team has limitations in the larger context of the business. If the self directed team IS the business, then the need for external governance is removed. This would be rare in any non-trivial business.

Project Managers are inward focused to the resource allocation and management of the development teams. How can we get the work done to meet the market demand? When can we ship the product to maintain the sales forecast?

In very small companies and startups these roles are usually performed by the same person.

Once we move beyond the sole proprietor and his friends, separation of concerns takes over. These roles become distinct.

  • The Product Manager is a member of the Business Development Team, tasked with the business side of the product delivery process.
  • The Project Management Team (PMs and Coordinators, along with development leads and staff) is a member of the delivery team, tasked with producing the capabilities needed to capture and maintain the market.

Products are about What and Why. Projects are about Who, How, When, and Where (from Rudyard Kipling's Six Trusted Friends).

Product Management focuses on the overall product vision - usually documented in a Product Roadmap, showing the release cycles of capabilities and features as a function of time. Project Management is about logistics, schedule, planning, staffing, and work management to produce products in accordance with the Road Map.

When agile says it's customer focused, this is true only when there is One customer for the Product, rather than a Market for the Product, and that customer is on site. A product company would not be very robust if it had only one customer.

When we hear Products are not Projects, ask in what domain, business size, and value at risk is it possible not to separate these concerns between Products and Projects?

Categories: Project Management

Risk Management is How Adults Manage Projects

Sun, 09/27/2015 - 20:59

Risk Management is How Adults Manage Projects - Tim Lister

Let's start with some background on Risk Management

Tim's quote sets the paradigm for managing the impediments to success in all our endeavors

It says volumes about project management and project failure. It also means that managing risk is managing in the presence of uncertainty. And managing in the presence of uncertainty means making estimates about the impacts of our decisions on future outcomes. So you can invert the statement when you hear we can make decisions in the absence of estimates.

Tim's update is titled Risk Management is Project Management for Grownups.

For those interested in managing projects in the presence of uncertainty and the risk that uncertainty creates, here's a collection from the office library, in no particular order

Here's a summary from a recent meeting around decision making in the presence of risk.

Earning value from risk (v4 full charts) from Glen Alleman
Categories: Project Management

Complex, Complexity, Complicated (Update)

Sun, 09/27/2015 - 16:14

The popular notion that Cynefin can be applied in the software development domain, as a way of discussing the problems involved in writing software for money, is missing the profession of Systems Engineering. From Wikipedia, Cynefin is...

The framework provides a typology of contexts that guides what sort of explanations or solutions might apply. It draws on research into complex adaptive systems theory, cognitive science, anthropology, and narrative patterns, as well as evolutionary psychology, to describe problems, situations, and systems.

While Cynefin uses the terms Complexity and Complex Adaptive System, it applies them from the observational point of view. That is, the system exists outside of our influence to control its behavior - we are observers of the system, not engineers of solutions in the form of a system that provides needed capabilities to solve a problem.

Read carefully the original paper on Cynefin, The New Dynamics of Strategy: Sense Making in a Complex and Complicated World. This post is NOT about those types of systems, but about the conjecture that the development of software is by its nature Chaotic. This argument is used by many in the agile world to avoid the engineering disciplines of INCOSE-style Systems Engineering.

There are certainly engineered systems that transform into complex adaptive systems with emergent behaviors that cause the system to fail. Example below. This is not likely to be the case when engineering principles are applied in the domains of Complex and Complicated.

A good starting point for the complex, complicated, and chaotic view of engineered systems is Complexity and Chaos - State of the Art: List of Works, Experts, Organizations, Projects, Journals, Conferences, and Tools. There is a reference to Cynefin as organization modeling. While organizational modeling is important - I suspect Cynefin advocates would suggest it is the only important item - the engineered aspects of applying Systems Engineering to complex, complicated, and emergent systems are mandatory for any organization to get the product out the door on time, on budget, and on specification.

For another view of the complex systems problem Principles of Complex Systems for Systems Engineering is a good place to start along with the resources from INCOSE and AIAA like Complexity Primer for Systems Engineers, Engineering Complex Systems, Complex System Classification, and many others.

So Let's Look At the Agile Point of View

In the agile community it is popular to use the terms complex, complexity, complicated, and complex adaptive system interchangeably - and many times wrongly - to assert we can't possibly plan ahead, know what we're going to need, and establish a cost and schedule, because the system is complex and emergent.

These terms are many times overloaded with an agenda used to push a process or even a method. As well, in the agile community it is popular to claim we have no control over the system, so we must adapt to its emerging behavior. This is likely the case in one condition - the chaotic behaviors of Complex Adaptive Systems. But this is only the case when we fail to establish the basis for how the CAS was formed, what sub-systems are driving those behaviors, and most importantly what are the dynamics of the interfaces between those subsystems - the System of Systems architecture - that create the chaotic behaviors.

It is highly unlikely those working in the agile community actually work on complex systems that evolve AND at the same time are engineered at the lower levels to meet specific capabilities and the resulting requirements of the system owner. They've simply let the work and the resulting outcomes emerge and become Complex, Complicated, and create Complexity. They are observers of the outcomes, not engineers of the outcomes.

Here's one example of an engineered system that actually did become a CAS because of poor efforts of the Systems Engineers. I worked on Class I and II sensor platforms for FCS (Future Combat Systems). Eventually FCS was canceled, for all the right reasons. But for small teams of agile developers, the outcomes become complex when the Systems Engineering processes are missing. Cynefin partitions beyond obvious emerge, for the most part, when Systems Engineering is missing.


First some definitions

  • Complex - consisting of many different and connected parts; not easy to analyze or understand; complicated or intricate. When a system or problem is considered complex, analytical approaches, like dividing it into parts to make the problem tractable, are not sufficient, because it is the interactions of the parts that make the system complex, and without these interconnections the system no longer functions.
  • Complex System - a functional whole, consisting of interdependent and variable parts. Unlike conventional systems, the parts need not have fixed relationships, fixed behaviors, or fixed quantities, and their individual functions may be undefined in traditional terms.
  • Complicated - containing a number of hidden parts, which must be revealed separately because they do not interact. Mutual interaction of the components creates nonlinear behaviors of the system. In principle all systems are complex. The number of parts or components is irrelevant in the definition of complexity. There can be complexity - nonlinear behaviour - in small systems or large systems.
  • Complexity - there is no standard definition of complexity. It is a view of systems that suggests simple causes result in complex effects. Complexity as a term is generally used to characterize a system with many parts whose interactions with each other occur in multiple ways. Complexity can occur in a variety of forms:
    • Complex behaviour
    • Complex mechanisms
    • Complex situations
    • Complex systems
    • Complex data
  • Complexity Theory - states that critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties. This theory takes the view that systems are best regarded as wholes, and studied as such, rejecting the traditional emphasis on simplification and reduction as inadequate techniques on which to base this sort of scientific work.

One more item we need is the types of Complexity

  • Type 1 - fixed systems, where the structure doesn't change as a function of time.
  • Type 2 - systems where time causes changes. This can be repetitive cycles or change with time.
  • Type 3 - moves beyond repetitive systems into organic where change is extensive and non-cyclic in nature.
  • Type 4 - are self organizing, where we can combine internal constraints of closed systems, like machines, with the creative evolution of open systems, like people.

And Now To The Point

When we hear complex, complexity, complex systems, or complex adaptive system, pause to ask: what kind of complex are you talking about? What Type of complex system? In what system are you applying the term complex? Have you classified that system in a way that actually matches a real system? Don't accept anyone saying well, the system is emerging and becoming too complex for us to manage, unless in fact that is the case after all the Systems Engineering activities have been exhausted. It's a cheap excuse for simply not doing the hard work of engineering the outcomes.

It is common to use the terms complex, complicated, and complexity interchangeably. And software development is classified - or mis-classified - as one, two, or all three. It is also common to toss around these terms with no actual understanding of their meaning or application.

We need to move beyond buzz words. Words like Systems Thinking. Building software is part of a system. There are interacting parts that when assembled, produce an outcome. Hopefully a desired outcome. In the case of software the interacting parts are more than just the parts. Software has emergent properties. A Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point. 

The understanding that software development is an uncertain (stochastic) process is well known, starting in the 1980s [1] with COCOMO. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. The current predictive models based on stochastic processes include Monte Carlo Simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.
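As a minimal sketch of the first of those models - Monte Carlo over a network of activities - consider two parallel paths merging before a final task (all durations below are invented):

```python
import random

# Two parallel work streams must BOTH finish before integration starts.
# Durations in days, drawn from invented triangular PDFs.
def simulate_once():
    path_a = random.triangular(8, 16, 10)   # e.g. backend work
    path_b = random.triangular(8, 16, 10)   # e.g. frontend work
    integrate = random.triangular(3, 7, 4)  # final task after the merge
    return max(path_a, path_b) + integrate  # merge point, then finish

runs = sorted(simulate_once() for _ in range(20_000))
p50 = runs[len(runs) // 2]
p80 = runs[int(len(runs) * 0.8)]
print(f"P50 {p50:.1f} days, P80 {p80:.1f} days")
# The P50 exceeds the sum of the modes (10 + 4 = 14): the max() at the
# merge point drags the distribution right - the classic merge bias that
# simple serial work-stream arithmetic misses entirely.
```

This is why modeling the network, not just adding up task estimates, is required for a credible forecast.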

[1] Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.

Categories: Project Management