
Herding Cats - Glen Alleman
Performance-Based Project Management® - Principles, Practices, and Processes to Increase Probability of Success. Доверяй, но проверяй (Trust, but verify).

Decision Making Without Estimates?

Mon, 10/20/2014 - 02:56

In a recent post there are five suggestions for how decisions about software development can be made in the absence of estimates of the cost, duration, and impact of those decisions. Before looking at each in more detail, let's see what the basis of these suggestions is.

A decision-making strategy is a model, or an approach, that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve the business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Decision making about the allocation of limited resources is the domain of microeconomics. These decisions - made in the presence of limited resources - involve opportunity costs. That is, what is the cost of NOT choosing one of the alternative allocations? To know this we need to know something about the outcome of NOT choosing. We can't wait until the work is done; we need to know - to some level of confidence - what happens if we DON'T do something. How can we do this? We need to estimate what happens if we don't choose each of the possible allocations, since all the outcomes are in the future.

But first, the post starts by suggesting the five approaches are part of Strategy. I'm familiar with strategy making in the domain of software development, having been schooled by two Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications.

What is Strategy?

Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics.

Strategy is creating fit among a firm's activities. The success of a strategy depends on doing many things well – not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm.

Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business processes (IT) away from strategic support and toward the tactical improvement of operational effectiveness.

Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves continual improvement of business processes that have no trade-offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.

In contrast, the strategic agenda is the place for making clear trade-offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company’s position in the marketplace.

“What Is Strategy?”, M. E. Porter, Harvard Business Review, Vol. 74, No. 6, pp. 61–78.

Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.

I'm going to suggest that each of the five decision processes described in the post is a proper one - each with many possible approaches - but each ignores the underlying principle of microeconomics: decisions about future outcomes are informed by the opportunity cost, and that cost requires - mandates actually, since the outcomes are in the future - an estimate. This is the basis of Real Options, forecasting, and the very core of business decision making in the presence of uncertainty.
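As a minimal sketch of that microeconomic view, the toy comparison below weighs two hypothetical allocation alternatives by expected monetary value. The alternatives, probabilities, and payoffs are illustrative assumptions; the point is only that the comparison cannot be made without estimating the probabilistic outcomes of the path not taken.

# Minimal sketch: comparing two hypothetical allocation alternatives by expected
# monetary value (EMV). All probabilities and payoffs are illustrative assumptions.

def emv(outcomes):
    """Expected monetary value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Alternative A: build the new capability now
alt_a = [(0.6, 500_000), (0.4, -200_000)]
# Alternative B: defer and extend the current system
alt_b = [(0.9, 150_000), (0.1, -50_000)]

emv_a, emv_b = emv(alt_a), emv(alt_b)
print(f"EMV(A) = {emv_a:,.0f}, EMV(B) = {emv_b:,.0f}")
print(f"Opportunity cost of choosing B = {emv_a - emv_b:,.0f}")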

The post then asks

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

 
The 1st question needs another question to be answered: what are our business goals, and what are the units of measure of these goals? In order to answer the 1st question we need a steering target to know how we are proceeding toward that goal.

The second question is about risk. All risk comes from uncertainty. Two types of uncertainty exist on projects:

Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge. Epistemology is the study of the acquisition of knowledge. We can pay money to buy down this lack of knowledge. That is, epistemic uncertainty can be reduced with work - risk reduction work. But this leaves open the question: how much time, budget, and performance margin is needed?

ANSWER: We need an estimate of the probability of the risk coming true. Estimating the epistemic risk's probability of occurrence, the cost and schedule for the reduction efforts, and the probability of the residual risk is done with a probabilistic model. There are several models and many tools. But estimating all the components - occurrence, impact, effort to mitigate, and residual risk - is required.
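Here is a minimal sketch of what estimating those components might look like for a single epistemic risk. Every number below is an illustrative assumption, not a value from the post.

import random

# Minimal sketch (illustrative numbers): one epistemic risk with an estimated
# probability of occurrence, a triangular cost-impact range, a mitigation cost,
# and an estimated residual probability after the risk-reduction work.
P_OCCUR = 0.30
P_RESIDUAL = 0.10
MITIGATION_COST = 40_000
IMPACT_LOW, IMPACT_MODE, IMPACT_HIGH = 80_000, 150_000, 300_000

def expected_exposure(p, trials=100_000):
    """Average cost exposure from the risk, sampled over many trials."""
    total = 0.0
    for _ in range(trials):
        if random.random() < p:
            total += random.triangular(IMPACT_LOW, IMPACT_HIGH, IMPACT_MODE)
    return total / trials

unmitigated = expected_exposure(P_OCCUR)
mitigated = expected_exposure(P_RESIDUAL) + MITIGATION_COST
print(f"unmitigated exposure ~${unmitigated:,.0f}  mitigated ~${mitigated:,.0f}")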

Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from aleatory uncertainty is with margin: cost margin, schedule margin, performance margin. But this leaves open the question: how much margin do we need?

ANSWER: We need to estimate the needed margin from the Probability Distribution Function of the Underlying Statistical Process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or Method of Moments.
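A minimal sketch of sizing aleatory schedule margin with a Monte Carlo simulation follows. The task durations and the 80% confidence target are illustrative assumptions.

import random

# Minimal sketch (illustrative durations): Monte Carlo simulation of a small task
# chain, used to size schedule margin from the natural variability of the work.
# Each task is (low, most likely, high) duration in days.
tasks = [(8, 10, 15), (4, 5, 9), (12, 15, 25)]

def one_pass():
    return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

runs = sorted(one_pass() for _ in range(20_000))
p50 = runs[len(runs) // 2]
p80 = runs[int(len(runs) * 0.80)]
print(f"median ~{p50:.1f} days, 80th percentile ~{p80:.1f} days, "
      f"schedule margin ~{p80 - p50:.1f} days")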


So, one more look at the suggestions before examining further the five ways of making decisions in the absence of estimating their impacts and the cost to achieve those impacts.

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

All risk is probabilistic, based on underlying statistical processes: either the process of lack of knowledge (Epistemic) or the process of natural variability (Aleatory). In the consideration of risk we must incorporate these probabilistic and statistical behaviours in our decision making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances that have occurred in the past, we need to know how they might occur in the future. To answer these questions, we need a probabilistic model. This model is based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behaviour.

Let's Look At The Five Decision Making Processes

1. Do the most important work first - If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is: validating the new strategy. Note that the goal is not "implement new strategy", but rather "validate new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.

    • Important work first is good strategy. But importance needs a unit of measure. That unit of measure should be found in the strategy. This is the purpose of the strategy. But the strategy needs units of measure as well. Simply saying do important work first doesn't provide a way to make that decision.
    • The notion of validating versus implementing the strategy is artificial. A read of the Strategy Making literature will clear this up. Strategy for business and especially strategy in IT is a very mature domain, with a long history.
    • One approach to generating the units of measure from the strategy is the Balanced Scorecard, where strategic objectives are mapped to Performance Goals, then to Critical Success Factors, then to Key Performance Indicators. The way to do that is with a Strategy Map, shown below; a small sketch of the mapping follows the figure.
    • This is the use of strategy as Porter defines it. 

(Figure: Strategy Map)
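Below is a minimal sketch of that mapping as a data structure. All entries are hypothetical examples, not content from the post; it only illustrates how a backlog item's "importance" can be tied to KPIs that trace back to a strategic objective.

# Minimal sketch of the chain described above - strategic objective to performance
# goal to critical success factors to KPIs. All entries are hypothetical examples.
strategy_map = {
    "Enter the mid-market segment": {
        "performance_goal": "Win 20 mid-market accounts this fiscal year",
        "critical_success_factors": ["Self-service onboarding", "Responsive search"],
        "kpis": {"onboarding_time_days": 2, "search_p95_latency_ms": 800},  # targets
    },
}

# A backlog item is "important" to the degree it moves a KPI tied to the strategy.
backlog_item = {"name": "Streamline signup flow", "kpis_affected": ["onboarding_time_days"]}
objective = strategy_map["Enter the mid-market segment"]
score = sum(1 for k in backlog_item["kpis_affected"] if k in objective["kpis"])
print(f"{backlog_item['name']}: touches {score} strategic KPI(s)")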

2. Do the highest technical risk first - When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

    • This is likely dependent on the technical and programmatic architecture of the project or product. 
    • We may want to establish a platform on which to build riskier components. A platform that is known and trusted, stable, bug free - before embarking on any high risk development.
    • High risk may mean high cost. So doing risky things first has consequences. What are those consequences? One is risking the budget before it's clear we have a stable platform on which to build follow-on capabilities. Knowing something is high risk may mean high cost, and this requires estimating something that will occur in the future - the cost to achieve and the cost of the consequences.
    • So doing highest technical risk first, is itself a risk that needs to be assessed. Without this assessment, this suggestion has no way of being tested in practice.

3. Do the easiest work first - Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.

    • This is also dependent on the technical and programmatic architecture of the project or product.
    • It's also counter to #2, since high risk work is not likely to be the easiest to do.
    • These trades between risk and work sequence require some sort of trade space analysis, and since the outcomes and their impacts are in the future, estimating them is part of the Analysis of Alternatives approach for any non-trivial project where Systems Engineering guides the work processes. A small sketch of such a trade follows this list.
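As a minimal sketch of such a trade space analysis, the weighted-scoring example below compares the two sequencing strategies against illustrative criteria. The criteria, weights, and scores are assumptions; a real Analysis of Alternatives would derive them from estimates of cost, schedule, and risk.

# Minimal sketch of an Analysis of Alternatives style trade between work-sequencing
# strategies. Criteria, weights, and scores are illustrative assumptions.
criteria_weights = {"risk_burn_down": 0.4, "time_to_feedback": 0.3, "estimated_cost": 0.3}

alternatives = {
    "highest technical risk first": {"risk_burn_down": 9, "time_to_feedback": 4, "estimated_cost": 3},
    "easiest work first":           {"risk_burn_down": 2, "time_to_feedback": 9, "estimated_cost": 8},
}

def weighted_score(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in alternatives.items():
    print(f"{name}: {weighted_score(scores):.2f}")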

4. Do the legal requirements first - In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to improve significantly the time-to-market for your product. A medical organization that successfully adopted agile used this project decision-making strategy with a considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

    • Medical devices are regulated by 21 CFR Parts 800-1299. The suggestion doesn't reference any regulations for medical software, which ranges from patient check-in at the front desk to surgical devices controlled by software.
    • Developing 21 CFR software components first may not be possible until the foundation on which they are built is established, tested, and verified.
    • This means - Quality Planning, Requirements, Design, Construction or Coding, Testing by the Software Developer, User Site Testing, and Maintenance and Software Changes. 
    • Once the plan - a required plan for validation - is in place, the order of the development will be more visible. 
    • Deciding which components to develop, just because they are impacted by Legal Requirements usually means ALL the components. So this approach - Do The Legal Requirements First - usually means do them all.
    • The notion of "Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market" fails to describe how they knew when they would be ready to test out these ideas, and most importantly how they were able to go to market in the absence of the certification.
    • As well, what type of testing - early trials, full 21 CFR release, human applications, animal testing, etc. - is not stated.
    • Having some experience in the medical device domain, I'll confirm the processes of validating software with a colleague who is the CEO of http://acumyn.com/.

5. Liability Driven Investment - This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.

    • It's not clear why this is called liability. Liability on the balance sheet is an obligation to pay. Deciding what work to do now to generate needed revenue is certainly a strategy. Value Stream Mapping or Impact Mapping is a way to define that. But liability seems to be the wrong term.
    • Not sure how that connects with a securities exchange and what problem they are solving using the term liabilities. Shorts are obligations to pay in the future when the short is called. Puts and calls are terms used in stock trading, but developing software products is not trading. The Real Options the poster has used in the past don't exercise the option, so the liability to pay doesn't seem to connect here.

References

  1. Risk Informed Decision Handbook, NASA/SP-2010-576, Version 1.0, April 2010.
  2. General Principles of Software Validation; Final Guidance for Industry and FDA Staff, US Food and Drug Administration.
  3. Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Robert Kaplan and David Norton, Harvard Business Press.
  4. Estimating Optimal Decision Rules in Presence of Model Parameter Uncertainty, Christopher Joseph Bennett, Vanderbilt University, June 6, 2012.

Quote of the Day

Mon, 10/20/2014 - 01:08

To achieve great things, two things are needed; a plan, and not quite enough time.
− Leonard Bernstein

(Figure: Plan of the Day, CV-41)

The notion that planning is a waste is common in domains where mission critical, high risk / high reward, must-work projects do not exist.

Notice the Plan and the Planned delivery date. The notion that deadlines are somehow evil goes along with the lack of understanding that the business needs a set of capabilities to be in place on a date in order to start booking the value in the general ledger.

Plans are strategies. Strategies are hypotheses. Hypotheses are tested with experiments. Experiments show, from actual data, what the outcome of the work is. These outcomes are used as feedback to take corrective actions at the strategic and tactical levels of the project.

This is called Closed Loop Control. Set the strategy, define the units of measure for the desired outcome - Measures of Effectiveness and Measures of Performance. Perform work and assess these measures. Determine the variance between the planned outcomes and the needed outcomes. Take corrective action by adjusting the plan to keep the project moving toward the strategic goals. For Closed Loop Control we need the following (a small sketch follows the list):

  • A steering target for some future state.
  • A measure of the current state.
  • The variance between current and future.
  • Corrective actions to put the project back on track toward the desired state.
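A minimal sketch of that loop follows, with illustrative numbers; the targets, measures, and threshold are assumptions used only to show the mechanics.

# Minimal sketch of the closed loop above: a steering target for a future state,
# a measure of the current state, the variance between them, and a corrective
# action when the variance exceeds a threshold. All numbers are illustrative.
PLANNED_FEATURES_BY_WEEK = {4: 10, 8: 22, 12: 36}   # steering targets

def assess(week, completed):
    planned = PLANNED_FEATURES_BY_WEEK[week]
    variance = completed - planned                  # current state vs. target
    if variance < -0.1 * planned:                   # more than 10% behind plan
        return f"week {week}: {-variance} features behind plan -> take corrective action"
    return f"week {week}: variance {variance:+d}, stay on plan"

print(assess(8, 18))   # -> corrective action needed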

(Embedded slides: Control Systems, Glen Alleman)

What Is Systems Architecture And Why Should We Care?

Sat, 10/18/2014 - 20:10

If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process. Grouping data and processes into information systems creates the rooms of the system architecture. Arranging the data and processes for the best utility is the result of deploying an architecture. Many of the attributes of building architecture are applicable to system architecture. Form, function, best use of resources and materials, human interaction, reuse of design, longevity of the design decisions, robustness of the resulting entities are all attributes of well designed buildings and well designed computer systems. [1]

In general, an architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how those parts fit and work together. An architecture may be conceptualized from a specific perspective focusing on an aspect or view of its subject. These architectural perspectives themselves can become components in a higher–level architecture serving to integrate and unify them into a higher level structure.

The architecture must define the rules, guidelines, or constraints for creating conformant implementations of the system. While this architecture does not specify the details on any implementation, it does establish guidelines that must be observed in making implementation choices. These conditions are particularly important for component architectures that embody extensibility features to allow additional capabilities to be added to previously specified parts. [2] This is the case where Data Management is the initial deployment activity followed by more complex system components.

By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result:

  • Business processes are streamlined – a fundamental benefit of building enterprise information architecture is the discovery and elimination of redundancy in the business processes themselves. In effect, it can drive the reengineering of the business processes it is designed to support. This occurs during the construction of the information architecture. By revealing the different organizational views of the same processes and data, any redundancies can be documented and dealt with. The fundamental approach to building the information architecture is to focus on data, process, and their interaction.
  • Systems information complexity is reduced – the architectural framework reduces information system complexity by identifying and eliminating redundancy in data and software. The resulting enterprise information architecture will have significantly fewer applications and databases as well as a resulting reduction in intersystem links. This simplification also leads to significantly reduced costs. Some of those recovered costs can and should be reinvested into further information system improvements. This reinvestment activity becomes the raison d'être for the enterprise–wide system deployment.
  • Enterprise–wide integration is enabled through data sharing and consolidation – the information architecture identifies the points to deploy standards for shared data. For example, most Kimball business units hold a wealth of data about products, customers, and manufacturing processes. However, this information is locked within the confines of the business unit specific applications. The information architecture forces compatibility for shared enterprise data. This compatible information can be stripped out of operational systems, merged to provide an enterprise view, and stored in data repositories. In addition, data standards streamline the operational architecture by eliminating the need to translate or move data between systems. A well–designed architecture not only streamlines the internal information value chain, but it can provide the infrastructure necessary to link information value chains between business units or allow effortless substitution of information value chains.
  • Rapid evolution to new technologies is enabled – client / server and object–oriented technology revolves around the understanding of data and the processes that create and access this data. Since the enterprise information architecture is structured around data and process and not redundant organizational views of the same thing, the application of client / server and object–oriented technologies is much cleaner. Attempting to move to these new technologies without an enterprise information architecture will result in the eventual rework of the newly deployed system.

[1] A Timeless Way of Building, C. Alexander, Oxford University Press, 1979.

[2] “How Architecture Wins Technology Wars,” C. Morris and C. Ferguson, Harvard Business Review, March–April 1993, pp. 86–96.


Connecting the Dots Between Technical Performance and Earned Value Management

Wed, 10/15/2014 - 15:36

We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.


Project Risk Management, PMBOK, DoD PMBOK and Edmund Conrow’s Book

Tue, 10/14/2014 - 17:08

In a recent post to “Who Is Ed Conrow?” a responder asked about the differences between the PMBOK® risk approach and the DoD PMBOK risk approach, as well as for a summary of the book Effective Risk Management: Some Keys to Success, Edmund Conrow. Ed worked the risk management processes for a NASA proposal I was on. I was the IMP/IMS lead, so integrating risk with the Integrated Master Plan / Integrated Master Schedule in the manner he prescribed was a life changing experience. I was naive before, but no longer after that proposal won ~$7B for the client.

 

Let me start with a few positioning statements:

  1. Project risk management is a topic with varying levels of understanding, interest, and applicability. The PMI PMBOK® provides a “starter kit” view of project risk. It covers the areas of risk management but does not have guidance on actually “doing” risk management. This results many times in the false sense that “if we’re following PMBOK® then we’re OK.”
  2. The separation of technical risk from programmatic risk is not clearly discussed in PMBOK®. While not a “show stopper” issue for some projects, programmatic risk management is critical for enterprise class projects. By enterprise I mean ERP, large product development, large construction, and aerospace and defense class programs. In fact, DI-MGMT-81861 mandates programmatic risk management processes for procurements greater than $20M. $20M sounds like a very large sum of money for the typical internal IT development project. It hardly keeps the lights on for an aerospace and defense program.
  3. The concepts around schedule variances are misunderstood and misapplied in almost every discussion of risk management in the popular literature. The common red herring is the ineffective use of Critical Path and PERT. This of course is simply false in domains outside IT or small low-risk projects. Critical Path, PERT, and Monte Carlo Simulation are mandated by government procurement guidelines. Not that these guides make them “correct.” What makes them correct is that programs measurably benefit from their application. This discipline is called Program Planning and Controls (PP&C) and is a profession in aerospace, defense, and large construction. No amount of “claiming the processes don’t work” removes the documented fact that they do, when properly applied. Anyone wishing to learn more about these approaches to programmatic risk management need only look to the NASA, Department of Energy, and Department of Defense risk management communities.

With all my biases out of the way, let’s look at the DoD PMBOK®

  1. First, the DoD PMBOK® is free and can be found at http://www.dau.mil/pubs/gdbks/pmbok.asp. It appears to be gone so now you can find it here. This is a US Department of Defense approved document. It provides supplemental guidance to the PMI PMBOK®, but in fact replaces a large portion of PMI PMBOK®.
  2. Chapter 11 of DoD PMBOK® is Risk. It starts with the Risk Management Process areas – in Figure 11-1, page 125. (I will not put them here, because you should download the document and turn to that page.) The diagram contains six (6) process areas. The same number as the PMI PMBOK®. But what’s missing from PMI PMBOK® and present in DoD PMBOK® is how these processes interact to provide a framework for Risk Management.
  3.  There are several missing critical concepts in PMI PMBOK® that are provided in DoD PMBOK®.
    • The Risk Management structure in Figure 11-1 is the first. Without the connections between the process areas in some form other than “linear,” the management of technical and programmatic risk turns into a “check list.” This is the approach of PMI PMBOK® - to provide guidance on the process areas and leave it up to the reader to develop a process around these areas. This is also the approach of CMMI. This is an area too important to leave it up to the reader to develop the process.
    • The concept of the Probability and Impact matrix is fatally flawed in PMI PMBOK®. It turns out you cannot multiply the probability of occurrence by the impact of that occurrence. These “numbers” are not numbers in the sense of integer or real valued numbers. They are probability distributions themselves. Multiplication is not an operation that can be applied to a probability distribution. Distributions are integral equations, and the multiplication operator × is actually the convolution operator ⊗. A small sketch of handling these as distributions follows this list.
    • The second fatal flaw of the PMI PMBOK® approach to probability of occurrence and impact is that these “numbers” are uncalibrated cardinal values. That is, they have no actual meaning since their “units of measure” are not calibrated.
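As a minimal sketch of the alternative - treating probability of occurrence and impact as distributions and sampling them jointly, rather than multiplying two uncalibrated point values - consider the following. The distributions and numbers are illustrative assumptions; the mean may look similar to the point arithmetic, but the spread that actually drives decisions does not.

import random

# Minimal sketch (illustrative distributions): probability of occurrence and cost
# impact treated as distributions and sampled jointly, rather than reduced to a
# single "risk rating" by multiplying two point values.
def sampled_exposure(trials=50_000):
    outcomes = []
    for _ in range(trials):
        p = random.betavariate(2, 5)             # uncertain probability of occurrence
        impact = random.lognormvariate(11, 0.6)  # uncertain cost impact if it occurs
        outcomes.append(impact if random.random() < p else 0.0)
    outcomes.sort()
    mean = sum(outcomes) / len(outcomes)
    p90 = outcomes[int(0.9 * len(outcomes))]
    return mean, p90

mean, p90 = sampled_exposure()
point_estimate = 0.29 * 72_000                   # naive probability-times-impact arithmetic
print(f"point estimate ${point_estimate:,.0f}  sampled mean ${mean:,.0f}  90th percentile ${p90:,.0f}")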

Page 124 of DoD PMBOK® summarizes the principles of Risk Management as developed in two seminal sources.

  1. Effective Risk Management: Some Keys to Success, Edmund Conrow, American Institute of Aeronautics and Astronautics, 2000.
  2. Risk Management Guide for DoD Acquisition, Sixth Edition, (Version 1.0), August 2006, US Department of Defense, which is part of the Defense Acquisition Guide, §11.4), which is published within the Office of the Under Secretary of Defense, Acquisition, Technology and Logistics  (OUSD(AT&L)),  Systems and Software Engineering / Enterprise Development.

Now all these pedantic references are here for a purpose. This is how people who manage risk for a living, manage risk. By risk, I mean technical risk that results in loss of mission, loss of life. Programmatic risk that results in loss of Billions of Tax Payer dollars. They are serious enough about risk management to not let the individual project or program manager interpret the vague notions in PMI PMBOK®. These may appear to be harsh words, but the road to the management of enterprise class projects is littered with disasters. You can read every day of IT projects that are 100% over budget, 100% behind schedule. From private firms to the US Government, the trail of destruction is front page news.

A Slight Diversion – Why are Enterprise Projects So Risky?

There are many reasons for failure – too many to mention – but one is the inability to identify and mitigate risk. The words “identify” and “mitigate” sound simple. They are listed in the PMI PMBOK® and the DoD PMBOK®. However, here is where the problem starts:

  1. The process of separating risk from issue.
  2. Classifying the statistical nature of the risk.
  3. Managing the risk process independently from project management and technical development.
  4. Incorporating the technical risk mitigation processes into the schedule.
  5. Modeling the impact of technical risk on programmatic risk.
  6. Modeling the programmatic risk independent from the technical risk.

Using Conrow as a Guide

Here is one problem. When you use the complete phrase “Project Risk Management” with Google, you get ~642,000 hits. There are so many books, academic papers, and commercial articles on Risk Management, where do we start? Ed Conrow’s book is probably not the starting point for learning how to practice risk management on your project. However, it might be the ending point. If you are in the software development business, a good starting point is – Managing Risk: Methods for Software Systems Development, Elaine M. Hall, Addison Wesley, 1998. Another broader approach is Continuous Risk Management Guidebook, Software Engineering Institute, August 1996. While these two sources focus on software, they provide the foundation for the discussion of risk management as a discipline.

There are public sources as well:

  1. Start with the NASA Risk Management page, http://www.hq.nasa.gov/office/codeq/risk/index.htm
  2. Software Engineering Institute’s Risk Management page, http://www.sei.cmu.edu/risk/index.html
  3. A starting point for other resources from NASA, http://www.hq.nasa.gov/office/hqlibrary/ppm/ppm22.htm
  4. Department of Energy’s Risk Management Handbook, http://www.oecm.energy.gov/Portals/2/RiskManagement.pdf (which uses the DoD Risk Process Map described above)

However, care needs to be taken once you go outside the government boundaries. There are many voices plying the waters of “risk management,” as well as other voices with “axes to grind” regarding project management methods and risk management processes. The result is many times a confusing message full of anecdotes, analogies, and alternative approaches to the topic of Risk Management.

Conrow in his Full Glory

Before starting into the survey of the Conrow book, let me state a few observations:

  1. This book is tough going. I mean really tough going. Tough in the way a graduate statistical mechanics book is tough going. Or a graduate micro-economics of managerial finance book is tough going. It’s “professional grade” stuff. By “professional grade” I mean – written by professionals to be used by professionals.
  2. Not every problem has the same solution need. Conrow’s solutions may not be appropriate for a software development project with 4 developers and a customer in the same room. But for projects that have multiple teams, locations, and stakeholders, some type of risk management is needed.
  3. The book is difficult to read for another reason. It assumes you have a “reasonable understanding of the issues” around risk management. This means this is not a tutorial style or “risk management for dummies” book. It is a technical reference book. There is little in the way of introductory material or bringing the reader up to speed before presenting the material.

From the introduction:

The purpose of this book is two-fold: first, to provide key lessons learned that I have documented from performing risk management on a wide variety of programs, and second, to assist you, the reader, in developing and implementing an effective risk management process on your program.

A couple of things here. One is the practical experience in risk management. Many in the risk management “talking” community have limited experience with risk management of the kind Ed has. I first met Ed on a proposal for an $8B Manned Spaceflight program. He was responsible for the risk strategy and the conveying of that strategy in the proposal. The proposal resulted in an award, and now our firm provides Program Planning and Controls for a major subsystem of the program. In this role programmatic and technical risk management is part of the Statement of Work flowed down from the prime contractor.

Second, Ed is a technical advisor to the US Arms Control and Disarmament Agency as well as a consultant to industry and government on risk management. These “resume” items are meant to show that the practice of risk management is just that – a practice. Speaking about risk management and doing risk management on high risk programs are two different things.

One of Ed’s principal contributions to the discipline was the development of a micro-economic framework of risk management in which the design feasibility (or technical performance) is traded against cost and schedule.

In the end, this is a reference text for the process of managing the risk of projects, written by a highly respected practitioner.

What does the Conrow Book have to offer over the Standard approach?

Ed’s book contains the current “best practices” for managing technical and programmatic risk. These practices are used on high risk, high value programs. The guidelines in Ed’s book are generally applicable to many other classes of projects as well. But there are several critical elements that differentiate this approach from the pedestrian approach to risk management.

  1. The introduction of the “ordinal risk scale.” This approach is dramatically different from the PMI PMBOK description of risk, in which the probability of occurrence is multiplied by the consequences to produce a “risk rating.” Neither the probability of occurrence nor the consequences are calibrated in any way. The result is a meaningless number that may satisfy the C-Level that “risk management” is being done on the program. By ordinal risk scales is meant a classification of risk - say A, B, C, D, E, F - that is descriptive in nature, not just numbers. By the way, the use of letters is intentional. If numbers are used for ordinal risk ranks, there is a tendency to do arithmetic on them: compute the average risk rank, or multiply them by the consequences. Letters remove this notion and prevent the first failure of the common risk management approach - producing an overall risk measure.

The ordinal approach works like this. Ed describes some classes of risk scales which include: maturity, sufficiency, complexity, uncertainty, estimative probability, and probability based scales.

A maturity risk scale would be:

    • E - Basic principles observed
    • D - Concept design analyzed for performance
    • C - Breadboard or brassboard validation in relevant environment
    • B - Prototype passes performance tests
    • A - Item deployed and operational

 The critical concept is to relate the risk ordinal value to an objective measure. For a maturity risk assessment, some “calibration” of what it means to have the “basic principles observed” must be developed. This approach can be applied to the other classes – sufficiency, complexity, uncertainty, estimative probability and probability based scales.

It’s the estimative probability that is important to cost and schedule people in our PP&C practice. The estimative probability scale attempts to relate a word to a probability value - “High” to 80%, for example. An ordinal estimative probability scale using point estimates derived from a statistical analysis of survey data might look like this:

    • E - Certain (median probability value 0.95)
    • D - High (0.85)
    • C - Medium (0.45)
    • B - Low (0.15)
    • A - Almost no chance (0.05)

Calibrating these risk scales is the primary analysis task of building a risk management system. What does it mean to have a “medium” risk, in the specific problem domain?
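A minimal sketch of such a calibrated, letter-keyed scale follows, using the median values from the table above; the lookup is the only way to get a number, which is the point of using letters.

# Minimal sketch: an estimative-probability scale keyed by letters, using the
# calibrated median values from the table above. Letters are deliberate - they
# block the temptation to average or multiply the ranks; to get a number you
# must go through the calibration.
ESTIMATIVE_SCALE = {
    "E": ("Certain", 0.95),
    "D": ("High", 0.85),
    "C": ("Medium", 0.45),
    "B": ("Low", 0.15),
    "A": ("Almost no chance", 0.05),
}

def calibrated_probability(scale_level):
    word, median_p = ESTIMATIVE_SCALE[scale_level]
    return word, median_p

print(calibrated_probability("D"))   # -> ('High', 0.85)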

  2. The second item is the formal use of a risk management process. Simply listing the risk process areas – as is done in PMBOK – is not only poor project management practice, it is poor risk management practice. The processes to be used are shown on page 135 of http://www.dau.mil/pubs/gdbks/pmbok.asp. The application of these processes is described in detail. No process area is optional. All are needed. All need to be performed in the right relationship to each other.

These two concepts are the ones that changed the way I perform risk management on the programs I’m involved with and how we advise our clients. They are paradigm changing concepts. No more simple-minded arithmetic with probabilities and consequences. No more uncalibrated risk scales. No more tolerating those who claim PERT, Critical Path, and Monte Carlo are unproven, obsolete, or “wrong headed” approaches.

Get Ed’s book. It’ll cost way too much when compared to the “paperback” approach to risk. But for those tasked with “managing risk,” this is the starting point.

 


Managing Your Project With Dilbert Advice — Not!

Mon, 10/13/2014 - 16:11

Scott Adams provides cartoons of what not to do for most things technical, software and hardware. I actually saw him once, when he worked for PacBell in Pleasanton, CA. I was on a job at a major oil company deploying document management systems for OSHA 1910.119 - process safety management - and integrating CAD systems for control of safety critical documents.

The most popular use of Dilbert cartoons lately has been by the #NoEstimates community in support of the notion that estimates are somehow evil, used to make commitments that can't be met, and generally should be avoided when spending other people's money.

The cartoon below resonated with me for several reasons. What's happening here is the classic, misguided, intentional ignoring of the established process of Reference Class Forecasting - and, in typical Dilbert fashion, doing stupid things on purpose.

(Dilbert cartoon)

Reference Class Forecasting is a well developed estimating process used across a broad range of technical, business, and finance domains. The characters above seem not to know anything about RCF. As a result they are DSTOP - Doing Stupid Things On Purpose.
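For readers unfamiliar with RCF, here is a minimal sketch of the idea: forecast from the observed outcomes of a class of similar completed projects rather than from an inside view of the new one. The reference class data, the base estimate, and the confidence level below are illustrative assumptions.

# Minimal sketch of Reference Class Forecasting with illustrative data: take the
# distribution of outcomes from similar past projects and read off the uplift at
# a chosen confidence level.
past_overruns = [0.95, 1.05, 1.10, 1.20, 1.25, 1.40, 1.55, 1.80, 2.10]  # actual / estimated cost

def rcf_uplift(confidence):
    """Uplift factor at a chosen confidence level, from the empirical distribution."""
    ranked = sorted(past_overruns)
    index = min(int(confidence * len(ranked)), len(ranked) - 1)
    return ranked[index]

inside_view_estimate = 1_200_000
print(f"80% confidence budget ~${inside_view_estimate * rcf_uplift(0.80):,.0f}")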

Here's how not to DSTOP for cost and schedule estimates and their associated risks, and for the technical risk that the product you're building can't do what it's supposed to do on or before the date it needs to do it, at or below the cost you need it to do it at in order to stay in business.

The approach below may be complete overkill for your domain. So start by asking what's the Value at Risk: how much of our customer's money are we willing to write off if we don't have a sense of what DONE looks like in units of measure meaningful to the decision makers? Don't know that? Then it's likely you've already put that money at risk, you're likely late, and you don't really know what capabilities will be produced when you run out of time and money.

Don't end up a cartoon character in a Dilbert strip. Learn how to properly manage your efforts, the efforts of others, and your customer's money.

(Embedded slides: Managing in the Presence of Uncertainty, Glen Alleman)

Intentional Disregard for Good Engineering Practices?

Mon, 10/13/2014 - 03:22

It seems lately there is an intentional disregard of the core principles of the business of developing software intensive systems. The #NoEstimates community does this, but other collections of developers do as well.

  • We'd rather be writing code than estimating how much it's going to cost writing code.
  • Estimates are a waste.
  • The more precise the estimate, the more deceptive it is
  • We can't predict the future and it's a waste trying to
  • We can make decisions without estimating

These notions of course are seriously misinformed about how probability and statistics work in the estimating paradigm. I've written about this in the past. But there are a few new books we're putting to work in our Software Intensive Systems (SIS) work that may be of interest to those wanting to learn more.

These are foundation texts for the profession of estimating. The continued disregard - ignoring possibly - of these principles has become all too clear, not just in the sole-contributor software development domain, but all the way up to multi-billion dollar programs in defense, space, infrastructure, and other high risk domains.

Which brings me back to a core conjecture - there is no longer any engineering discipline in the software development domain. At least outside the embedded systems like flight controls, process control, telecommunications equipment, and the like. There was a conjecture awhile back that the Computer Science discipline at the university level should be split - software engineering and coding.

Here's a sample of the Software Intensive System paradigm, where the engineering of the systems is a critical success factor. And Yes Virginia, the Discipline of Agile is applied in the Software Intensive Systems world - emphasis on the term DISCIPLINE.


Principles Trump Practices

Sat, 10/11/2014 - 15:23

Principles, Practices, and Processes are the basis of successful project management. It is popular in some circles to think that practices come before Principles.

The principles of management, project management, software development and its management, product development management are immutable.

What does done look like, what's our plan to reach done, what resources will we need along the way to done, what impediments will we encounter and how will we overcome them, and how are we going to measure our progress toward done in units meaningful to the decision makers?

These are immutable principles. They can then be used to test practices and processes by asking: what is the evidence that the practice or process enables the principle to be applied, and how do we know that the principle is being fulfilled?

 


What Informs My Project Management View

Fri, 10/10/2014 - 15:44

In a recent discussion (of sorts) about estimating - Not Estimating actually - I realized something that should have been obvious. I travel in a world not shared by the staunch advocates of #NoEstimates. They appear to be sole contributors. I came to this after reading Peter Kretzman's 3rd installment, where he re-quoted a statement by Ron Jeffries,

Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.

This is a sole contributor or small team paradigm.

So let's pretend we work at PricewaterhouseCoopers (PWC) and we're playing our roles - Peter as CIO advisor, me as Program Performance Management adviser. We've been asked by our new customer to develop a product from scratch, estimate the cost and schedule, and provide some confidence level that the needed business capabilities will be available on or before a date and at or below a cost. Why, you ask? Because that's in the business plan for this new product, and if they're late or overrun the planned cost, that will be a balance sheet problem.

What would we do? Well, we'd start with the PWC resource management database – held by HR – and ask for people with “past performance experience” in the business domain and the problem domain. Our new customer did not “invent” a new business domain, so it's likely we'll find people who know what our new customer does for money. We’d look across the 195,433 people in the database who work for PWC worldwide for someone, somewhere, who knows what the customer does for money and what kinds of business capabilities this new system needs to provide. If there is no one, then we'd look in our database of 10,000 or so partner relationships to find someone.

If we found no one who knows the business and the needed capabilities, we’d no bid.

This notion of “I've been asked to do something that’s never been done before, so how can I possibly estimate it” really means “I'm doing something I’ve never done before.” And since I’m a sole contributor, the population of experience in doing this new thing for the new customer is ONE – me. So since I don't know how the problem has been solved in the past, I can't possibly know how to estimate the cost, schedule, and needed capabilities. And of course I'm absolutely correct to say that new development with unknown requirements can't be estimated, because those unknown requirements are actually unknown to me, but may be known to another. In the population of 195,000 other people in our firm, I'm no longer alone in my quest to come up with an estimate.

So the question, “what if we encounter new and unknown needs, how can we estimate?” is actually a core problem for the sole contributor or small team. It'd be rare that the sole contributor or small team would have encountered the broad spectrum of domains and technologies needed to establish the necessary Reference Classes to address this open ended question. This is not the fault of the sole contributor. It is simply the situation of small numbers versus large numbers.

This is the reason the PWC’s of the world exist. They get asked to do things the sole contributors never have an opportunity to see.


Software Requirements Are Vague

Thu, 10/09/2014 - 22:23

Peter Kretzman has a nice post in his series on #NoEstimates. Peter and I share a skepticism of "making decisions in the absence of estimating the cost and impact" of those decisions. In Peter's current post there is a quote that is telling.

Let’s use Ron Jeffries’ statement as an example of this stance:

“Estimates are difficult. When requirements are vague — and it seems that they always are — then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before. “

One of my 3 half-time jobs is working in the space and defense program performance management domain, both embedded systems and enterprise IT systems. DoD is the largest buyer of ERP on the planet. In this domain we have a formal process for determining what went wrong. The department looking after this is called Performance Assessment and Root Cause Analysis (PARCA). PARCA provides Root Cause Analysis for programs that have gone Nunn-McCurdy, as we would say.

When you read the reports from RAND and the Institute for Defense Analyses on Nunn-McCurdy breaches, requirements instability is among the top 5 root causes.

It seems to me - in my narrow-minded program performance management view of the world - that unstable requirements being used as the reason for vague estimates is so obvious a problem that it has been completely ignored by the #NoEstimates advocates. It's like the old saw:

Doctor, Doctor it hurts when I do this (make estimates in the presence of vague requirements). Then stop doing that!

The notion of Capabilities Based Planning is missing in many software organizations. So having vague requirements is a natural outcome of not having a definitive understanding of what Capabilities the system must provide, in units of measure meaningful to the decision makers. These units are:

  • Measures of Effectiveness - the operational measures of success that are closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions. MoEs are stated in units meaningful to the buyer, focus on capabilities independent of any technical implementation, and are connected to mission success.
  • Measures of Performance - characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions. Measures of Performance are attributes that assure the system has the capability and capacity to perform, and assess the system to assure it meets design requirements to satisfy the MoE.

Without these, requirements have no home, are vague, and therefore create the root cause of bad estimates.

So what would a logical person do when working on a project that spends other people's money, sometimes lots of other people's money? Not estimate? Does that sound like the corrective action to the root cause of the problems with software project success shortfalls?

Not to me. It's the doctor, doctor this hurts paradigm. So until the root cause is determined, and the corrective actions identified and applied, there can be no credible solution to the estimating problem. And there is a huge estimating problem in our domain; just read the Nunn-McCurdy reports at RAND and IDA (Google "Nunn-McCurdy RAND" or "Nunn-McCurdy IDA" to find them). Similar assessments of root causes can be found for enterprise IT from many sources.

The #NoEstimates advocates are attempting to solve the wrong problem with the wrong approach. They've yet to connect with the core process of writing software for money - the microeconomics of software development. Here's a starting point to address the root cause rather than the symptom. Fixing the symptoms does nothing in the end. It just spends money, with no actionable outcomes. And that would be very counter to the principles of Agile.

(Embedded slides: Capabilities Based Planning (v2), Glen Alleman)

 


How to Estimate Software Development - Update

Thu, 10/09/2014 - 02:49

There is a popular quote used by many in the #NoEstimates community, that is sadly misinformed.

Those who have knowledge, don’t predict. Those who predict, don’t have knowledge. − Lao Tzu

This of course was from a 6th century BC Chinese philosopher, who was not likely familiar with the notion of probability and statistics developed many centuries later. The quoting and re-quoting of Lao Tzu as an example of why estimates can't be made brings to light one of the more troublesome aspects of our modern age.

The lack of understanding of basic probability and statistics when applied to human endeavors.

Or possibly the intentional ignorance of probability and statistics as it is applied to the development of software systems. I can't really say if it is for lack of understanding, lack of exposure, or just a simple intent to ignore. 

But for any of those reasons and more, here's a starting point on how to actually become a member of the modern statistical estimating community, once it is decided that is better than ignoring the basic knowledge needed to be a steward of other people's money.

Here are some starting points, in no particular order other than how they came off the office bookshelf.

These are just a small sample of the information readily available at your local book store or through the mail. If you google "software cost estimating" (all in quotes) there will be hundreds more articles, papers, and web sites. As well, tools for estimating software are used every single day in a variety of domains.

The Value at Risk is a starting point as well. Low value - defined by those providing the money, not by those doing the work - and low risk - usually defined by those doing the work, not by those providing the money - at least in the domains where we work. This Value at Risk sets the tone. Low value, low risk - and this is in absolutely no way an assessment of the relative value and risk - usually doesn't need much estimating.

Got a 6-week, 2-person database update project? Just do it. Got a 38-month, 400-person National Asset software project? Estimating is probably called for. Everything and anything in between needs to ask and answer that value at risk question before deciding.
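A minimal sketch of letting the Value at Risk set the estimating rigor follows; the dollar thresholds and the tiers are illustrative assumptions, not rules from the post.

# Minimal sketch: let the value at risk set the estimating rigor. The thresholds
# and tiers are illustrative assumptions.
def estimating_rigor(value_at_risk_dollars):
    if value_at_risk_dollars < 50_000:
        return "just do it - track actuals"
    if value_at_risk_dollars < 2_000_000:
        return "reference class data plus a coarse probabilistic estimate"
    return "full probabilistic cost and schedule estimate, independently reviewed"

print(estimating_rigor(30_000))        # the 6-week, 2-person database update
print(estimating_rigor(250_000_000))   # the 38-month, 400-person program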

So poor Mr. Tzu was sadly uninformed when he made his quote, as are those repeating it. In the 21st century:

Those who have knowledge of probability, statistics, and the processes described by them can predict their future behaviour. Those without this knowledge, skills, or experience cannot.


Quote of the Day

Mon, 10/06/2014 - 17:52

The unassisted hand, and the understanding left to itself, possess but little power. Effects are produced by the means of instruments and helps, which the understanding requires no less than the hand. And as instruments either promote or regulate the motion of the hand, so those that are applied to the mind prompt or protect the understanding. − Novum Organum Scientiarum (Aphorisms Concerning the Interpretation of Nature and the Kingdom of Man), Francis Bacon (1561 - 1626).

 

 

When we hear we can make decisions in the absence of estimates of the impacts of those decisions, the cost when complete, or the lost opportunity cost of making alternative decisions, think of Bacon.

He essentially says show me the money.

(Embedded slides: Control Systems, Glen Alleman)

Quote of the Day

Sun, 10/05/2014 - 15:33

Nothing is too wonderful to be true, if it is consistent with the laws of nature, and in  such things as these, experiment is the best test of such consistency − Michael Faraday's Diary, March 19, 1849

When we hear wonderful concepts conjectured that are untested outside personal anecdote, ask where has this worked, in what domain, in what governance framework, and what was the value at risk in applying the idea.


Estimating Software-Intensive Systems

Fri, 10/03/2014 - 23:27

There are numerous conjectures about the waste of software project estimates. Most are based on personal opinion divorced from the business processes of writing software for money.

From the Introduction of the book Estimating Software-Intensive Systems:

Good estimates are key to project (and product) success. Estimates provide information to make decisions, define feasible performance objectives, and plans. Measurements provide data to gauge adherence to performance specifications and plans, make decisions, revise designs and plans, and improve future estimates and processes.

Engineers use estimates and measurements to evaluate the feasibility and affordability of proposed products, choose among alternative designs, assess risk, and support business decisions. Engineers and planners estimate the resources needed to develop, maintain, enhance, and deploy a product. Project planners use the estimated staffing level to identify needed facilities.

Planners and managers use the resource estimates to compute project cost and schedule, and prepare budgets and plans. Estimates of product, project, and process characteristics provide "baselines" to assess progress during the execution of the project. Managers compare estimates and actual values to identify deviations from the project plan and to understand the causes of the variation.

For products, engineers compare estimates of the technical baseline to observed performance to decide if the product meets its functional and operational requirements. Process capability baselines establish norms for process performance. Managers use these norms to control the process and detect compliance problems. Process engineers use capability baselines to improve the production process.

Bad estimates affect everyone associated with the project - the engineers and managers, the customer who buys the product, and sometimes even the stockholders of the company responsible for delivering the software. Incomplete or inaccurate resource estimates for a project mean that the project may not have enough time and money to complete the required work.

If you work in a domain where none of these conditions are in place, then by all means don't estimate.

If you do recognize some or all of these conditions, then here's a summary of the reasons to estimate and measure, from the book.

  • Product Size, Performance, and Quality
    • Evaluate feasibility of requirements
    • Analyze alternative product designs
    • Determine the required capacity and performance of hardware.
    • Evaluate product performance - accuracy, speed, reliability, availability, and all the ...ilities. (ACA web site missed answering this question).
    • Identify and assess technical risks
    • Provide technical baselines for tracking and controlling - this is called Closed Loop Control. Having no steering targets, and no measures of actual performance assessed against desired performance, is called Open Loop Control. (A minimal sketch of the closed loop idea follows this list.)
  • Project Effort, Cost, and Schedule - yes Virginia real business managers need to know when you'll be done, how much it will cost, and what you'll deliver on that day for that cost. And yes Virginia there is no Santa Claus
    • Determine project feasibility in terms of cost and time
    • Identify and assess project risks - Risk Management is how Adults Manage Projects  - Tim Lister
    • Negotiate achievable commitments
    • Prepare realistic plans and budgets
    • Evaluate business value - cost versus benefit is how businesses stay in business
    • Provide cost and schedule baseline for tracking and controlling
  • Process Capability and Performance
    • Predict resource consumption and efficiency
    • Establish norms of expected performance - back to the steering targets
    • Identify opportunities for improvement.
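Here is the minimal sketch of closed loop control promised above. The planned and actual numbers are hypothetical; on a real program the measures would be earned value, technical performance measures, or other steering targets.

```python
# A minimal sketch of the closed loop idea: a steering target (plan), a measure
# of actual performance, and a corrective signal from the difference.
# All numbers are invented for illustration.
planned_progress = {1: 10, 2: 22, 3: 35, 4: 50}   # percent complete by month (the baseline)
actual_progress  = {1: 9,  2: 18, 3: 26}          # measured to date

for month, planned in planned_progress.items():
    actual = actual_progress.get(month)
    if actual is None:
        break                                      # no measurement yet - nothing to steer against
    variance = actual - planned
    if variance < -5:
        action = "re-plan or apply margin - outside the steering band"
    elif variance < 0:
        action = "monitor - small corrective action"
    else:
        action = "on or ahead of plan"
    print(f"month {month}: planned {planned}%, actual {actual}%, variance {variance:+d} -> {action}")
```

Open loop control is the same plan with no actual measurements to compare against - there is nothing to steer with.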

 

Related articles The Failure of Open Loop Thinking Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices An Agile Estimating Story How To Make Decisions Incremental Commitment Spiral Model Probabilistic Cost and Schedule Processes Project Finance The Three Elements of Project Work and Their Estimates How Not To Make Decisions Using Bad Estimates
Categories: Project Management

Critical Thinking - The Missing Element in Project Management Processes

Thu, 10/02/2014 - 21:56

Complex and unstable environments encountered in project work - especially software development project work - call for critical thinking by all participants. Complexity comes in part from technical uncertainties, starting with the requirements for software capabilities. If there is uncertainty in what capabilities are needed, the project is starting off on the path to failure on day one. Certainly the functional and operational requirements have emergent properties that create uncertainty. As do staffing, productivity, and risks created by reducible and irreducible uncertainty.

To deal with these complexities, critical thinking is needed on behalf of the project leaders and project participants alike.

The first responsibility of the project staff as well as management is to think. To think what is being asked of them. To consider that they are being paid to produce value by someone other than themselves. To think about why they are there, what they are being asked to do, and how they can go about being stewards of the money provided by those paying for their work. To be true professionals applying their education, training, and experience through analysis and creative, informed thought to make daily decisions.

Failure to apply critical thinking creates a disconnect between those providing the value and those paying for the value. This is best illustrated in the notion that business decisions can be made in the absence of knowing the cost and outcomes of those decisions. The first gap in critical thinking occurs when the decision making process ignores the principles of MicroEconomics. This gap is further reinforced when the probabilistic nature of all project work is also ignored.

The three core elements of all projects are the delivered capabilities, the cost of producing those capabilities, and the time frame over which those capabilities are delivered for the cost. Each of these acts as a random variable, interacting with the other two in statistically complex ways.

To manage in the presence of these random variables means making decisions in the presence of uncertainties - and the resulting risks - created by the randomness of the variables. These uncertainties are reducible - we can pay more to find out more information. Or they are irreducible - we can only manage in their presence with margin for cost, schedule, and technical performance.

To make decisions in the presence of this paradigm we need to Estimate. Making decisions in the absence of estimating first violates the principles of microeconomics, and secondly ignores the underlying statistical and probabilistic nature of all project work.
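A minimal sketch of what estimating in this paradigm can look like, with distribution shapes and parameters invented purely for illustration: sample the random variables, let them interact, and set margin from the resulting distribution rather than from a single point.

```python
# A minimal sketch of treating cost and duration as interacting random
# variables and setting margin from the resulting distribution.
# Distribution parameters are assumptions, not calibrated data.
import random

def one_trial() -> tuple[float, float]:
    duration = random.triangular(10, 18, 12)        # months: low, high, most likely
    burn_rate = random.triangular(80, 130, 100)     # $K per month
    cost = duration * burn_rate                     # cost and schedule interact
    return cost, duration

trials = [one_trial() for _ in range(20_000)]
costs = sorted(c for c, _ in trials)
point_estimate = costs[len(costs) // 2]             # 50% confidence
budget_with_margin = costs[int(len(costs) * 0.8)]   # 80% confidence target
print(f"median cost ${point_estimate:,.0f}K, 80% budget ${budget_with_margin:,.0f}K, "
      f"margin ${budget_with_margin - point_estimate:,.0f}K")
```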

When we hear that decisions can be made in the absence of estimates, ask how Microeconomics and statistical uncertainty are handled.

 

Categories: Project Management

Quote of the Day

Thu, 10/02/2014 - 15:33

We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time. - T.S. Eliot

Quoting 29 year old reports or referencing 30 year old books is not likely the best way to reveal the problems of the day. These problems have existed for 30 years, but new, more effective, and much more transparent solutions are available today.

Categories: Project Management

No Estimates Needs to Come In Contact With Those Providing the Money

Wed, 10/01/2014 - 17:36

For all the words written and posted around estimating or not estimating - and I've contributed my share - the basis of estimates has yet to be addressed outside of a few people. @PeterKretzman @aritanninen @kalapaistos @fscavo come to mind.

The gap here is simple. No one seems to ask - or even want to ask - who are the estimates for? They are not likely for developers, who, rightly so in some cases, see estimating as taking away from their valuable development duties.

Who Are Estimates For?

Estimates are for business managers providing the money that appears in the developers' paychecks. Estimates are for those same business managers accountable for the Profit & Loss statement of the firm employing the developers writing the code. Those estimates forecast confidence intervals of profit or loss on a project or service before that profit or loss arrives and is irrevocable.

Estimates are for the business marketing staff in a product firm, who are forecasting the "break even" plan for the sunk cost of developing software that will be sold in the market, and whose revenue will pay back the short term loan (line of credit) used to pay the salaries of the developers. Without this forecast, decisions about spending or further spending have to be made in the dark.
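A minimal sketch of that break even forecast, with every figure invented for illustration: the sunk development cost, the monthly contribution once the product is selling, and the carrying cost of the line of credit.

```python
# A minimal sketch of the marketing staff's break-even question: how many
# months of product revenue pay back the (estimated) sunk development cost
# funded from the line of credit? All figures are hypothetical.
sunk_development_cost = 900_000      # $ estimated to build the product
monthly_gross_margin = 60_000        # $ contribution per month once selling
monthly_interest = 0.01              # carrying cost of the line of credit

balance, months = sunk_development_cost, 0
while balance > 0 and months < 120:
    balance = balance * (1 + monthly_interest) - monthly_gross_margin
    months += 1
print(f"break even after about {months} months" if balance <= 0
      else "no break even within 10 years - rethink the investment")
```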

Estimates are for the business development staff in a professional services and development firm, to forecast the confidence that the contractual obligations to provide working software will not cost more - including management reserve and contingency - than they quoted the customer during the early phases of the project. Since all forecasts are probabilistic, this confidence is - or should be - discussed as the probability of completing at or below a cost, or on or before a date. The dysfunction of using estimates as commitments is recognized as just that - dysfunction. But as a dysfunction, it's classified as Bad Management. Don't Do Stupid Things on Purpose is good advice for any business.
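A minimal sketch of stating that confidence probabilistically, assuming a simulated cost distribution with made-up parameters: the quote is expressed as a probability of completing at or below a cost, not as a single-point promise.

```python
# A minimal sketch: given a simulated cost distribution (hypothetical
# lognormal parameters), what is the probability the work completes at or
# below the quoted price?
import random

quoted_price = 1_200_000
simulated_costs = [random.lognormvariate(13.8, 0.25) for _ in range(50_000)]  # assumed distribution
p_at_or_below = sum(c <= quoted_price for c in simulated_costs) / len(simulated_costs)
print(f"P(cost <= quote) = {p_at_or_below:.0%}")   # state the quote this way, with its confidence
```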

Estimates are for the internal business finance staff accountable for managing and forecasting costs for internal software development or procurement used to run the business - and likely used to generate revenue - and assure the senior finance people that the "value" produced by this software measured in monetized units of "money" will exceed the cost to achieve that value when the project completes. And some sense of when the date will be, so those monetized benefits can start to appear on the balance sheet using FASB 86 accounting rules.

The estimates are not for the developers

Those talking about #NoEstimates from the developers' point of view are talking to the wrong people. They appear to be talking to their own self-selected group and not to the group that provides the money for their work. As my former NASA Cost Director colleague reminds me, "follow the money." So follow the money. Unless the developers are providing the money themselves, the question of estimating or not estimating is a self-referencing conversation in the absence of these people. Because of that, those best placed to say whether estimates are of value or not are not in the conversation.

So Back To The Original Question

Ignore for the moment the observed or perceived dysfunctions - the misuse of estimates - found in low maturity software development organizations. Ignore for the moment the perception that making estimates of the future cost, duration, and probabilistic outcomes of development work is part of normal engineering processes. Ignore the emotional rhetoric of the Dilbert approach to management.

The core principle of Microeconomics of software development requires we  have some approximation of the future to make decisions about alternatives. The opportunity cost, the trade-space of decision making, requires we approximate the cost and outcomes of our decisions. 

Now add the core business process of managing expenditures against a planned and targeted Return on Investment, which has both Value and Cost in its equation.
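A minimal sketch of that decision, with hypothetical alternatives and numbers: each choice needs an estimate of both its value and its cost, and the opportunity cost is the net value of the best alternative not chosen.

```python
# A minimal sketch of the microeconomic choice: compare alternatives on
# estimated value and estimated cost. All names and numbers are invented
# for illustration.
alternatives = {
    "rewrite billing module": {"value": 400_000, "cost": 250_000},
    "new customer portal":    {"value": 550_000, "cost": 300_000},
    "do nothing":             {"value": 0,       "cost": 0},
}

def roi(name: str) -> float:
    v, c = alternatives[name]["value"], alternatives[name]["cost"]
    return (v - c) / c if c else 0.0

best = max(alternatives, key=lambda a: alternatives[a]["value"] - alternatives[a]["cost"])
for name, a in alternatives.items():
    print(f"{name}: net ${a['value'] - a['cost']:,}, ROI {roi(name):.0%}")
print("choose:", best, "- the forgone net of the runner-up is the opportunity cost")
```

Without estimates of value and cost for each alternative, none of these comparisons can be made.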

Then ask those conjecturing there are:

  • Decision making frameworks for projects that do not require estimates
  • Investment models for software projects that do not require estimates
  • Project management approaches for dealing with risk, scope management, and progress reporting that do not require estimates

to connect the dots between those conjectures and the Microeconomics of software development and the ROI assessments of standard business processes.

 

Related articles How NOT to Estimate Anything How To Fix Martin Fowler's Estimating Problem in 3 Easy Steps More #NoEstimates All Project Numbers are Random Numbers - Act Accordingly How To Estimate, If You Really Want To Resources for Moving Beyond the "Estimating Fallacy" Back To The Future How to "Lie" with Statistics

 

Categories: Project Management

What Can Lean Learn From Systems Engineering?

Tue, 09/30/2014 - 16:33

The Lean Aerospace Initiative and the Lean Aerospace Initiative Consortium define processes applicable in many domains for applying Lean. At first glance there is no natural connection between Lean and Systems Engineering. The ideas below are from a paper I gave at a Lean conference.

Key Takeaways

  • Lean and Systems engineering are cousins.
  • All but trivial projects are systems and many are systems of systems. Thinking like a systems engineer is the basis of implementing Lean processes. Thinking in the absence of systems, does little to add sustaining value to any process improvement.
  • Product development is a value stream process, but how the components interact at the technical, business, financial, and operational levels is a systems engineering process. Lean itself does not possess the vocabulary to speak to these systems complexity issues [1]

Core Concepts of Systems Engineering

  1. Capture and understand the requirements in terms of Capabilities assessed through Measures of Effectiveness (MOE) and Measures of Performance (MOP).
  2. Ensure requirements are consistent with what is predicted to be possible in a solution in these MOEs and MOPs.
  3. Treat goals as desired characteristics for what may not be possible.
  4. Define the MOE, MOP, goals, and solutions for the whole lifecycle of the project in units meaningful to the buyer.
  5. Maintain the distinction between the statement of the problem and the description of the solution.
  6. Baseline each statement of the problem and the statement of the solution.
  7. Identify descriptions of alternative solutions.
  8. Develop descriptions of the solution.
  9. Except for simple problems, develop a logical  solution description.
  10. Be prepared to iterate in design to drive up effectiveness.
  11. Base the solution on the evaluation of its effectiveness, in units of measure meaningful to the buyer.
  12. Independently verify all work products.
  13. Validate all work products from the perspective of the stakeholders.
  14. Some management is needed to plan and implement effective and efficient transformation of requirements and goals into a description of the solution.

Typical System Engineering Activities

  1. Technical management
  2. System design
  3. Product realization
  4. Technical analysis and evaluation
  5. Product control
  6. Process control
  7. Post implementation support

Steps to Lean Thinking [2]

  1. Specify value
  2. Identify value stream
  3. Make value flow continuously
  4. Let customers pull value
  5. Pursue perfection

Differences and Similarities between Lean and Systems Engineering

  1. Both emerged from practice. Only later were the principles and theories codified.
  2. Both have focused on different phases of the product lifecycle. SE is generally focused on product development and on planning; Lean is generally focused on product production and on empirical action.
  3. Unlike Lean, SE has less focus on quality, except for Integrated Product and Product Development (IPPD).

Despite these differences and similarities both Lean and Systems Engineering are focused on the same objectives – delivering products or lifecycle value to the stakeholders.

It is the lifecycle value that drives both paradigms and must drive any other process paradigm associated with Lean and Systems Engineering - paradigms like software development, the management of any form of project, and the very notion of agile. A critical understanding often missed is that Lifecycle Value includes the cost of delivering that value.

Value can't be determined in the absence of knowing the cost. ROI and Microeconomics of decision making require both variables to be used to make decisions.

What do we mean by lifecycle?

Generally, lifecycle value is a combination of product performance, quality, cost, and fulfillment of the buyer's needed capabilities. [3]

Lean and Systems Engineering share this common goal. The more complex the system, the more contribution there is from Lean and SE.

Putting Lean and Systems Engineering Together on Real Projects

First some success factors on complex projects [4]

  1. Dedicated and stable interdisciplinary teams
  2. Use of prototypes and models to generate tradeoffs
  3. Prioritizing product features
  4. Engagement with senior management and customers at every point in the project
  5. Some form of high performing front end decision process that reduces the instability of key inputs and improves the flow of work throughout the product lifecycle.

This last success factor is core to any complex environment, no matter what the process is called. In the absence of stable requirements and funding, improvement to the flow of work is constrained.

The notion of adapting to changing requirements is not the same as having the requirements – and the associated funding – be unstable.

Mapping the Value Stream to the work process requires some level of stability. It is in the search for this stability that Systems Engineering - as a paradigm - adds measurable value to any Lean initiative.

The standardization and commonality of processes across complex systems is the basis for this value. [5]

Conclusions

  1. Lean and SE are two sides of the same coin regarding the objective of creating value for the stakeholder.
  2. Lean and SE complement each other during different phases of the project – ideation, product trades for SE and production waste removal for Lean anchor both ends of the spectrum of improvement opportunities.
  3. Value stream thinking makes visible the paths to be taken in transitioning to a Lean paradigm while maintaining the principles of systems engineering. [6]
  4. The result is the combination of Speed and Robustness – systems are easily adaptable to change while maintaining fewer surprises, using leading indicators to make decisions and decreasing sensitivity to production and use variables.

[1] “The Lean Enterprise – A Management Philosophy at Lockheed Martin,” Joyce and Schechter, Defense Acquisition Review Journal, 2004.

[2] Lean Thinking, Womack and Jones, Simon and Schuster, 1996

[3] Lean Enterprise Value: Insights from MIT’s Lean Aerospace Initiative, Murman, et al, Palgrave 2002.

[4] “Lean Systems Engineering: Research Initiatives in Support of a New Paradigm,” Rebentisch, Rhodes, and Murman, Conference on Systems Engineering, April 2004.

[5] LM21 Best Practices, Jack Hugus, National Security Studies, Louis A. Bantle Symposium, Syracuse University Maxwell School, October 1999

[6] “Enterprise Transition to Lean Roadmap,” MIT Lean Aerospace Initiative, 2004 Plenary Conference.

 

Related articles Why Projects Fail, No Matter the Domain When We Say Risk What Do We Really Mean? How to Deal With Complexity In Software Projects? Big Systems Acquisitions - Lessons for ACA Web Site
Categories: Project Management

The Cognitive Illusion of Bad Software Project Outcomes

Tue, 09/30/2014 - 00:41

In their paper On the Reality of Cognitive Illusions, ‡ Daniel Kahneman and Amos Tversky suggest, through their research, that intuitive predictions and judgements are often mediated by a small number of distinctive mental operations, called judgement heuristics. These heuristics often lead to characteristic errors and biases.

For example, the effect of aerial perspective on apparent distance is confirmed by the observation that the same mountain appears closer on a clear day than on a hazy day. The intuitive prediction and judgement of probability are often based on the relations of similarity between evidence and possible outcomes. This representativeness is an assessment of the degree of correspondence between a sample and a population.

The next heuristic is the availability bias in which the probability is estimated by assessing availability or associative distance. † Experience shows and experiments confirm that large classes are recalled better and faster than instances of less frequent classes. That likely occurrences are easier to imagine than unlikely ones. And associative connections are strengthened when two events frequently co-occur. That these associative bonds are strengthened by repetition is the basis of memory. 

So Here's the Issue

When we hear or read that software projects fail often, or that the Standish report says ..., or a personal anecdote that resonates with our own personal experience, we recall that experience from memory. The actual data from the population of all data are not used for comparison. Rather we assume - by applying the cognitive illusion - that the sample data represents the larger class of population data, since our repeated observation of the sample data class has reinforced our illusion that that sample data IS the population.

This is the core issue with anecdotal information when making decisions in the presence of uncertainty. Or speaking about a condition in the absence of statistically testable hypothesis. Or attempting to convey a message in the absence of external confirmation that the message is on solid footing compared to the population of data.

Why This Is Not Good Management

When we hear we're all bad at making estimates, in the absence of actual population statistics about estimate making, we're using Cognitive Illusions and Availability Heuristics - because we have personal experience with making bad estimates, and the majority of people we associate with have the same experience.

This experience is in no way representative of the population of people tasked with making estimates. This would be irrelevant of course if the conversation were simple chatter at the bar. But once that conversation enters the realm of policy making, method development, or suggestions that the anecdotal observations need to result in changing how business conducts its business - we're bad at making estimates, so the solution is to stop making estimates - then both availability bias and Cognitive Illusions have displaced the actual conversation about the very validity of the anecdotal concepts. And they are replaced by a strong defense of the cognitively biased idea, no matter the credibility of the concept - which is most often weak at best and simply false at worst.

So next time you hear some statement about something involving observational and anecdotal data, ask a simple question.

What's the process by which these anecdotal observations have been tested against the broader population of conditions?

This is the core issue with the Standish Reports. They are self-selected samples of troubled projects, assessed in the absence of the population of projects both troubled and not troubled.
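A minimal sketch of why that self-selection matters, using invented numbers: simulate a population where 30% of projects are troubled, then sample only the projects that show up asking for help, and compare the two rates.

```python
# A minimal sketch of self-selection bias: troubled projects are far more
# likely to report in, so the sample's "failure rate" says little about the
# population's. The 30%, 80%, and 10% figures are hypothetical.
import random

random.seed(1)
population = [random.random() < 0.30 for _ in range(100_000)]        # True = troubled
self_selected = [p for p in population if p and random.random() < 0.8] + \
                [p for p in population if not p and random.random() < 0.1]

print(f"population troubled rate: {sum(population) / len(population):.0%}")
print(f"self-selected sample troubled rate: {sum(self_selected) / len(self_selected):.0%}")
```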

Always ask for references, data representative of the references, and an assessment of the statistical confidence that the anecdotal data is in fact correlated with the population data. Otherwise it's just an opinion, and very likely an uninformed opinion.

And if you're paying money to listen to someone tell you anecdotal data and don't speak up and ask those questions, you've participated in the availability heuristic and cognitive illusion along with the speaker.

† Availability: A Heuristic for Judging Frequency and Probability, Amos Tversky and Daniel Kahneman, a chapter appearing in Cognitive Psychology, 1973

‡ On the Reality of Cognitive Illusions, Daniel Kahneman and Amos Tversky, Psychological Review, Vol. 103, No. 3, pp. 582-591

Categories: Project Management

Don Yaeger's 16 Consistent Characteristics of Greatness

Mon, 09/29/2014 - 04:40

Don Yaeger has a small bookmark-sized card on 16 Consistent Characteristics of Greatness. I got my card at a PMI conference where he spoke. I'm repeating them here. Don's talk was about the sports people he interviewed for magazines and books. The audience was hard-bitten Government and Industry project and program managers - those accountable for millions and billions of dollars of high risk, high reward endeavors. After Don finished his talk, not a person in the room had dry eyes. Subscribe to Don's daily message at www.donyaeger.com

How They Think

1. It's personal - they hate to lose more than they love to win.

2. Rubbing elbows - they understand the value  of association.

3. Believe - they have faith in a higher power.

4. Contagious Enthusiasm - they are positive thinkers... They are enthusiastic... and that enthusiasm rubs off.

How They Prepare

5. Hope for the best, but ... They prepare for all possibilities before they step on the field.

6. What Off-Season? They are always working towards the next game... The goal is what's ahead, and there's always something ahead.

7. Visualize Victory - They see victory before the game begins.

8. Inner Fire - they use adversity as fuel.

How They Work

9. Ice in Their Veins - they are risk-takers and don't fear making a mistake.

10. When All Else Fails - they know how - and when - to adjust their game plan.

11. Ultimate Teammate - they will assume whatever role is necessary for the team to win.

12. Not Just About the Benjamins - they don't just play for the money.

How They Live

13. Do Unto Others - they know character is defined by how they treat those who cannot help them.

14. When No One Is Watching - they are comfortable in the mirror... They live their life with integrity.

15. When Everyone is Watching - they embrace the idea of being a role model.

16. Records Are Made To Be Broken - they know their legacy isn't what they did on the field. They are well rounded.

Categories: Project Management