
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Project Management

Mix Mashup

NOOP.NL - Jurgen Appelo - Mon, 10/20/2014 - 10:08


The MIX Mashup is a gathering of the vanguard of management innovators - pioneering leaders, courageous hackers, and agenda-setting thinkers from every realm of endeavor.


Categories: Project Management

Decision Making Without Estimates?

Herding Cats - Glen Alleman - Mon, 10/20/2014 - 02:56

In a recent post there are 5 suggestions for how decisions about software development can be made in the absence of estimates of the cost, duration, and impact of those decisions. Before looking at each in more detail, let's look at the basis of these suggestions.

A decision-making strategy is a model, or an approach, that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve the business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Decision making in the presence of the allocation of limited resources is called Microeconomics. These decisions - in the presence of limited resources - involve opportunity cost. That is: what is the cost of NOT choosing one of the alternative allocations? To know this, we need to know something about the outcome of NOT choosing. We can't wait until the work is done; we need to know what happens - to some level of confidence - if we DON'T do something. How can we do this? We estimate what happens if we don't choose one of the possible allocations, since all the outcomes are in the future.
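To make the idea concrete, here is a minimal sketch of an opportunity-cost comparison between two candidate allocations. The project names, probabilities, and dollar values are entirely hypothetical; real inputs would come from calibrated estimates:

```python
# Minimal sketch: expected-net-value comparison of two alternatives.
# All projects, probabilities, and dollar values are hypothetical.
alternatives = {
    "project_a": {"p_success": 0.70, "value_if_success": 900_000, "cost": 400_000},
    "project_b": {"p_success": 0.50, "value_if_success": 1_500_000, "cost": 450_000},
}

def expected_net_value(alt):
    # Expected value of the outcome, less the cost of the allocation.
    return alt["p_success"] * alt["value_if_success"] - alt["cost"]

env = {name: expected_net_value(a) for name, a in alternatives.items()}
chosen = max(env, key=env.get)

# Opportunity cost = the expected net value of the best foregone alternative.
opportunity_cost = max(v for name, v in env.items() if name != chosen)
print(f"choose {chosen}; opportunity cost of that choice: {opportunity_cost:,.0f}")
```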

But first, the post started with suggesting the five approaches are part of Strategy. I'm familiar with strategy making in the domain of software development, having been schooled by two Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications.

What is Strategy?

Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics.

Strategy is creating fit among a firm's activities. The success of a strategy depends on doing many things well – not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm.

Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.

Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves continual improvement of business processes that have no trade–offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.

In contrast, the strategic agenda is the place for making clear trade–offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company’s position in the marketplace.

“What Is Strategy?” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.

Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.

I'm going to suggest that each of the five decision processes described in the post is a proper one - one with many approaches - but each ignores the underlying principle of Microeconomics: decisions about future outcomes are informed by opportunity cost, and that cost requires - mandates, actually, since the outcomes are in the future - an estimate. This is the basis of Real Options, of forecasting, and of the very core of business decision making in the presence of uncertainty.

The post then asks:

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

 
The first question needs another question answered: what are our business goals, and what are the units of measure of those goals? In order to answer the first question we need a steering target, so we know how we are proceeding toward that goal.

The second question is about risk. All risk comes from uncertainty. Two types of uncertainty exist on projects:

Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge (epistemology is the study of the acquisition of knowledge). We can pay money to buy down this lack of knowledge; that is, epistemic uncertainty can be reduced with work - risk reduction work. But this leaves open the question: how much time, budget, and performance margin is needed?

ANSWER: We need an estimate of the probability of the risk coming true. Estimating the epistemic risk's probability of occurrence, the cost and schedule of the reduction efforts, and the probability of the residual risk is done with a probabilistic model. There are several such models and many tools. But estimating all the components - occurrence, impact, effort to mitigate, and residual risk - is required.
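As a hedged illustration of this answer, the sketch below uses a simple Monte Carlo model to estimate the exposure from a single epistemic risk before and after risk-reduction work. The probability of occurrence, the triangular impact distribution, and the mitigation cost are all hypothetical placeholders:

```python
import random

# Hedged sketch: Monte Carlo estimate of the exposure from one epistemic
# risk, before and after paying for risk-reduction work. The probability
# of occurrence, impact distribution, and mitigation cost are hypothetical.
P_OCCUR = 0.30        # estimated probability the risk comes true
P_RESIDUAL = 0.10     # estimated residual probability after mitigation
MITIGATION_COST = 10  # $K spent on the risk-reduction work

def expected_exposure(p_occur, trials=100_000):
    total = 0.0
    for _ in range(trials):
        if random.random() < p_occur:
            # Impact in $K: triangular(low, high, mode).
            total += random.triangular(50, 200, 90)
    return total / trials

before = expected_exposure(P_OCCUR)
after = expected_exposure(P_RESIDUAL) + MITIGATION_COST
print(f"exposure before: {before:.1f} $K, after mitigation: {after:.1f} $K")
```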

Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from aleatory uncertainty is with margin: cost margin, schedule margin, performance margin. But this leaves open the question: how do we know how much margin?

ANSWER: We need to estimate the needed margin from the Probability Distribution Function of the Underlying Statistical Process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or Method of Moments.
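A minimal sketch of the aleatory case, assuming (hypothetically) that each task's duration variability is captured by a triangular distribution; a real model would use the empirically observed distributions of the underlying processes:

```python
import random

# Minimal sketch: schedule margin from a Monte Carlo simulation of
# naturally varying task durations. The (low, mode, high) day estimates
# are hypothetical stand-ins for observed distributions.
TASKS = [(8, 12, 20), (5, 7, 12), (10, 15, 30)]  # (low, mode, high) in days

def simulated_total():
    # random.triangular takes (low, high, mode).
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in TASKS)

trials = sorted(simulated_total() for _ in range(100_000))
deterministic = sum(mode for _, mode, _ in TASKS)  # the single-point plan
p80 = trials[int(0.80 * len(trials))]              # 80% confidence duration
print(f"aleatory schedule margin at 80% confidence: {p80 - deterministic:.1f} days")
```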


So, one more look at the suggestions before examining further the 5 ways of making decisions in the absence of estimates of their impacts and of the cost to achieve those impacts.

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

All risk is probabilistic, based on underlying statistical processes: either the process of lack of knowledge (Epistemic) or the process of natural variability (Aleatory). In considering risk we must incorporate these probabilistic and statistical behaviors in our decision-making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances that have occurred in the past, we need to know how they might occur in the future. To answer these questions, we need a probabilistic model based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behavior.

Let's Look At The Five Decision Making Processes

1. Do the most important work first

If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is: validating the new strategy. Note that the goal is not "implement new strategy", but rather "validate new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.

    • Important work first is good strategy. But importance needs a unit of measure. That unit of measure should be found in the strategy. This is the purpose of the strategy. But the strategy needs units of measure as well. Simply saying do important work first doesn't provide a way to make that decision.
    • The notion of validating versus implementing the strategy is artificial. A read of the Strategy Making literature will clear this up. Strategy for business and especially strategy in IT is a very mature domain, with a long history.
    • One approach to generating the units of measure from the strategy is the Balanced Scorecard, where strategic objectives are mapped to Performance Goals, then to Critical Success Factors, then to Key Performance Indicators. The way to do that is with a Strategy Map, shown below.
    • This is the use of strategy as Porter defines it. 

[Figure: Strategy Map]

2. Do the Highest Technical Risk First

When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

    • This is likely dependent on the technical and programmatic architecture of the project or product. 
    • We may want to establish a platform on which to build riskier components - one that is known, trusted, stable, and bug free - before embarking on any high risk development.
    • High risk may mean high cost, so doing risky things first has consequences. What are those consequences? One is risking the budget before it's clear we have a stable platform on which to build follow-on capabilities. Knowing something is high risk may mean high cost, and this requires estimating something that will occur in the future - the cost to achieve and the cost of the consequences.
    • So doing the highest technical risk first is itself a risk that needs to be assessed. Without this assessment, this suggestion has no way of being tested in practice.

3. Do the Easiest Work First

Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.

    • This is also dependent on the technical and programmatic architecture of the project or product.
    • It's also counter to #2, since the highest risk work is not likely to be the easiest to do.
    • These assessments between risk and work sequence require some sort of trade space analysis, and since the outcomes and their impacts are in the future, estimating them is part of the Analysis of Alternatives approach for any non-trivial project where Systems Engineering guides the work processes.

4. Do the Legal Requirements First

In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to improve significantly the time-to-market for your product. A medical organization that successfully adopted agile used this project decision-making strategy with a considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

    • Medical devices are regulated by 21 CFR Parts 800-1299. The suggestion doesn't reference any regulations for medical software, which ranges from patient check-in at the front desk to surgical devices controlled by software.
    • Developing 21 CFR software components first may not be possible until the foundation on which they are built is established, tested, and verified.
    • This means - Quality Planning, Requirements, Design, Construction or Coding, Testing by the Software Developer, User Site Testing, and Maintenance and Software Changes. 
    • Once the plan - a required plan for validation - is in place, the order of the development will be more visible. 
    • Deciding which components to develop, just because they are impacted by Legal Requirements usually means ALL the components. So this approach - Do The Legal Requirements First - usually means do them all.
    • The notion of “Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market” fails to describe how they knew when they would be ready to test these ideas. And most importantly, how were they able to go to market in the absence of the certification?
    • As well, the type of testing - early trials, full 21 CFR release, human applications, animal testing, etc. - is not stated.
    • A colleague is the CEO of http://acumyn.com/ - I'll confirm the processes of validating this class of software with him.

5. Liability Driven Investment

This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.

    • It's not clear why this is called liability. A liability on the balance sheet is an obligation to pay. Deciding what work to do now to generate needed revenue is certainly a strategy - Value Stream Mapping or Impact Mapping is a way to define that - but liability seems to be the wrong term.
    • It's not clear how this connects with a securities exchange, or what problem is being solved using the term liabilities. Shorts are obligations to pay in the future when the short is called. Puts and calls are terms used in stock trading, but developing software products is not trading. The Real Options used by the poster in the past don't exercise the option, so the liability to pay doesn't seem to connect here.

References

  1. Risk Informed Decision Handbook, NASA/SP-2010-576 Version 1.0 April 2010.
  2. General Principles of Software Validation; Final Guidance for Industry and FDA Staff, US Food and Drug Administration.

  3. Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Robert Kaplan and David Norton, Harvard Business Press.
  4. Estimating Optimal Decision Rules in Presence of Model Parameter Uncertainty, Christopher Joseph Bennett, Vanderbilt University, June 6, 2012.
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Mon, 10/20/2014 - 01:08

To achieve great things, two things are needed: a plan, and not quite enough time.
− Leonard Bernstein

Plan of the Day (CV-41)

The notion that planning is a waste is common in domains where mission critical, high risk - high reward, must work, type projects do not exist.

Notice the plan and the planned delivery date. The notion that deadlines are somehow evil goes along with the lack of understanding that a business needs a set of capabilities to be in place on a date in order to start booking the value in the general ledger.

Plans are strategies. Strategies are hypotheses. Hypotheses are tested with experiments. Experiments show, from actual data, what the outcome of the work is. These outcomes are used as feedback to take corrective actions at the strategic and tactical levels of the project.

This is called Closed Loop Control. Set the strategy; define the units of measure for the desired outcome - Measures of Effectiveness and Measures of Performance. Perform work and assess these measures. Determine the variance between the planned outcomes and the needed outcomes. Take corrective action by adjusting the plan to keep the project moving toward the strategic goals. For Closed Loop Control we need the following (a minimal sketch follows the list):

  • A steering target for some future state.
  • A measure of the current state.
  • The variance between current and future.
  • Corrective actions to put the project back on track toward the desired state.
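Here is the minimal sketch promised above: one closed-loop step, assuming progress is measured in a single unit such as earned value in $K. The function name, tolerance, and numbers are illustrative, not a real PP&C system:

```python
# Minimal sketch of one closed-loop control step. Assumes progress is
# measured in a single unit (e.g. earned value in $K); the tolerance
# and numbers are illustrative.
def closed_loop_step(planned, actual, tolerance=0.10):
    variance = planned - actual  # current state vs. steering target
    if abs(variance) <= tolerance * planned:
        return "on track: no corrective action"
    # Corrective action: adjust the plan to steer back toward the target.
    return f"variance {variance:+.1f} $K: re-plan to close the gap"

# Steering targets per period vs. measured current state.
for planned, actual in [(100, 98), (200, 170), (300, 265), (400, 390)]:
    print(closed_loop_step(planned, actual))
```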

Categories: Project Management

What Is Systems Architecture And Why Should We Care?

Herding Cats - Glen Alleman - Sat, 10/18/2014 - 20:10

If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process. Grouping data and processes into information systems creates the rooms of the system architecture. Arranging the data and processes for the best utility is the result of deploying an architecture. Many of the attributes of building architecture are applicable to system architecture. Form, function, best use of resources and materials, human interaction, reuse of design, longevity of the design decisions, robustness of the resulting entities are all attributes of well designed buildings and well designed computer systems. [1]

In general, an architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how those parts fit and work together. An architecture may be conceptualized from a specific perspective focusing on an aspect or view of its subject. These architectural perspectives themselves can become components in a higher–level architecture serving to integrate and unify them into a higher level structure.

The architecture must define the rules, guidelines, or constraints for creating conformant implementations of the system. While this architecture does not specify the details on any implementation, it does establish guidelines that must be observed in making implementation choices. These conditions are particularly important for component architectures that embody extensibility features to allow additional capabilities to be added to previously specified parts. [2] This is the case where Data Management is the initial deployment activity followed by more complex system components.

By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result:

  • Business processes are streamlined – a fundamental benefit of building an enterprise information architecture is the discovery and elimination of redundancy in the business processes themselves. In effect, it can drive the reengineering of the business processes it is designed to support. This occurs during the construction of the information architecture. By revealing the different organizational views of the same processes and data, any redundancies can be documented and dealt with. The fundamental approach to building the information architecture is to focus on data, process, and their interaction.
  • Systems information complexity is reduced – the architectural framework reduces information system complexity by identifying and eliminating redundancy in data and software. The resulting enterprise information architecture will have significantly fewer applications and databases, as well as a resulting reduction in intersystem links. This simplification also leads to significantly reduced costs. Some of those recovered costs can and should be reinvested into further information system improvements. This reinvestment activity becomes the raison d’être for the enterprise–wide system deployment.
  • Enterprise–wide integration is enabled through data sharing and consolidation – the information architecture identifies the points to deploy standards for shared data. For example, most Kimball business units hold a wealth of data about products, customers, and manufacturing processes. However, this information is locked within the confines of the business unit specific applications. The information architecture forces compatibility for shared enterprise data. This compatible information can be stripped out of operational systems, merged to provide an enterprise view, and stored in data repositories. In addition, data standards streamline the operational architecture by eliminating the need to translate or move data between systems. A well–designed architecture not only streamlines the internal information value chain, but it can provide the infrastructure necessary to link information value chains between business units or allow effortless substitution of information value chains.
  • Rapid evolution to new technologies is enabled – client / server and object–oriented technology revolves around the understanding of data and the processes that create and access this data. Since the enterprise information architecture is structured around data and process and not redundant organizational views of the same thing, the application of client / server and object–oriented technologies is much cleaner. Attempting to move to these new technologies without an enterprise information architecture will result in the eventual rework of the newly deployed system.

[1] The Timeless Way of Building, C. Alexander, Oxford University Press, 1979.

[2] “How Architecture Wins Technology Wars,” C. Morris and C. Ferguson, Harvard Business Review, March–April 1993, pp. 86–96.

Categories: Project Management

Quote of the Month October 2014

From the Editor of Methods & Tools - Thu, 10/16/2014 - 11:48
Minimalism also applies in software. The less code you write, the less you have to maintain. The less you maintain, the less you have to understand. The less you have to understand, the less chance of making a mistake. Less code leads to fewer bugs. Source: “Quality Code: Software Testing Principles, Practices, and Patterns”, Stephen Vance, Addison-Wesley

The Results of My First OKRs (Running)

NOOP.NL - Jurgen Appelo - Wed, 10/15/2014 - 21:34

A popular topic in the new one-day Management 3.0 workshop is the OKRs system for performance measurement. (See Google’s YouTube video here.) Instead of explaining what OKRs are, I will just share with you the result of my first iteration. If you read this, you will get the general idea of how the OKRs system works.


Categories: Project Management

Podcast with Cesar Abeid Posted

Cesar Abeid interviewed me for his podcast, Project Management for You with Johanna Rothman. We talked about my tools for project management, whether you are managing a project for yourself or managing projects for others.

We talked about how to use timeboxes in the large and small, project charters, influence, servant leadership, a whole ton of topics.

I hope you listen. Also, check out Cesar’s Kickstarter campaign, Project Management for You.

Categories: Project Management

Connecting the Dots Between Technical Performance and Earned Value Management

Herding Cats - Glen Alleman - Wed, 10/15/2014 - 15:36

We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.

Categories: Project Management

Project Risk Management, PMBOK, DoD PMBOK and Edmund Conrow’s Book

Herding Cats - Glen Alleman - Tue, 10/14/2014 - 17:08

In a recent post, “Who Is Ed Conrow?”, a responder asked about the differences between the PMBOK® risk approach and the DoD PMBOK risk approach, as well as for a summary of the book Effective Risk Management: Some Keys to Success by Edmund Conrow. Ed worked the risk management processes for a NASA proposal I was on. I was the IMP/IMS lead, so integrating risk with the Integrated Master Plan / Integrated Master Schedule in the manner he prescribed was a life-changing experience. I was naive before, but no longer after that proposal won ~$7B for the client.

 

Let me start with a few positioning statements:

  1. Project risk management is a topic with varying levels of understanding, interest, and applicability. The PMI PMBOK® provides a “starter kit” view of project risk. It covers the areas of risk management but does not have guidance on actually “doing” risk management. This often results in the false sense that “if we’re following PMBOK® then we’re OK.”
  2. The separation of technical risk from programmatic risk is not clearly discussed in PMBOK®. While not a “show stopper” issue for some projects, programmatic risk management is critical for enterprise class projects. By enterprise I mean ERP, large product development, large construction, and aerospace and defense class programs. In fact, DI-MGMT-81861 mandates programmatic risk management processes for procurements greater than $20M. $20M sounds like a very large sum of money for the typical internal IT development project; it hardly keeps the lights on in an aerospace and defense program.
  3. The concepts around schedule variances are misunderstood and misapplied in almost every discussion of risk management in the popular literature. The common red herring is the ineffective use of Critical Path and PERT. This of course is simply a false claim in domains outside IT or small low risk projects. Critical Path, PERT, and Monte Carlo Simulation are mandated by government procurement guidelines. Not that these guides make them “correct.” What makes them correct is that programs measurably benefit from their application. This discipline is called Program Planning and Controls (PP&C) and is a profession in aerospace, defense, and large construction. No amount of “claiming the processes don’t work” removes the documented fact that they do, when properly applied. Anyone wishing to learn more about these approaches to programmatic risk management need only look to the NASA, Department of Energy, and Department of Defense risk management communities.

With all my biases out of the way, let’s look at the DoD PMBOK®

  1. First, the DoD PMBOK® is free and can be found at http://www.dau.mil/pubs/gdbks/pmbok.asp. It appears to be gone so now you can find it here. This is a US Department of Defense approved document. It provides supplemental guidance to the PMI PMBOK®, but in fact replaces a large portion of PMI PMBOK®.
  2. Chapter 11 of DoD PMBOK® is Risk. It starts with the Risk Management Process areas in Figure 11-1, page 125. (I will not put them here, because you should download the document and turn to that page.) The diagram contains six (6) process areas - the same number as the PMI PMBOK®. But what’s missing from PMI PMBOK® and present in DoD PMBOK® is how these processes interact to provide a framework for Risk Management.
  3. There are several critical concepts missing from PMI PMBOK® that are provided in DoD PMBOK®.
    • The Risk Management structure in Figure 11-1 is the first. Without connections between the process areas in some form other than “linear,” the management of technical and programmatic risk turns into a “check list.” This is the approach of PMI PMBOK® - to provide guidance on the process areas and leave it up to the reader to develop a process around these areas. This is also the approach of CMMI. This is an area too important to leave to the reader.
    • The concept of the Probability and Impact matrix is fatally flawed in PMI PMBOK®. It turns out you cannot multiply the probability of occurrence by the impact of that occurrence. These “numbers” are not numbers in the sense of integer or real values; they are probability distributions themselves. Multiplication is not an operation that can be applied to a probability distribution: distributions are integral equations, and the multiplication operator × is actually the convolution operator ⊗ (see the sketch after this list).
    • The second fatal flaw of the PMI PMBOK® approach to probability of occurrence and impact is that these “numbers” are uncalibrated cardinal values. That is, they have no actual meaning, since their “units of measure” are not calibrated.
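The sketch referenced above shows why the operation is convolution: the distribution of the sum of two independent cost impacts is the convolution of their distributions, not a product of point values. The two discrete distributions below are hypothetical:

```python
import numpy as np

# Sketch of why distributions convolve rather than multiply: the
# distribution of the sum of two independent cost impacts is the
# convolution of their distributions. Both pmfs below are hypothetical,
# defined over impacts of $0K..$4K.
risk_a = np.array([0.70, 0.15, 0.10, 0.05, 0.00])
risk_b = np.array([0.60, 0.20, 0.10, 0.05, 0.05])

total_impact = np.convolve(risk_a, risk_b)  # pmf over $0K..$8K

print(total_impact)        # distribution of the combined impact
print(total_impact.sum())  # still 1.0: a proper probability distribution
```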

Page 124 of DoD PMBOK® summarizes the principles of Risk Management as developed in two seminal sources.

  1. Effective Risk Management: Some Keys to Success, Edmund Conrow, American Institute of Aeronautics and Astronautics, 2000.
  2. Risk Management Guide for DoD Acquisition, Sixth Edition, (Version 1.0), August 2006, US Department of Defense, which is part of the Defense Acquisition Guide, §11.4), which is published within the Office of the Under Secretary of Defense, Acquisition, Technology and Logistics  (OUSD(AT&L)),  Systems and Software Engineering / Enterprise Development.

Now all these pedantic references are here for a purpose. This is how people who manage risk for a living manage risk. By risk, I mean technical risk that results in loss of mission or loss of life, and programmatic risk that results in loss of billions of taxpayer dollars. They are serious enough about risk management not to let the individual project or program manager interpret the vague notions in PMI PMBOK®. These may appear to be harsh words, but the road to the management of enterprise class projects is littered with disasters. You can read every day of IT projects that are 100% over budget and 100% behind schedule. From private firms to the US Government, the trail of destruction is front page news.

A Slight Diversion – Why are Enterprise Projects So Risky?

There are many reasons for failure - too many to mention - but one is the inability to identify and mitigate risk. The words “identify” and “mitigate” sound simple. They are listed in the PMI PMBOK® and the DoD PMBOK®. However, here is where the problem starts:

  1. The process of separating risk from issue.
  2. Classifying the statistical nature of the risk.
  3. Managing the risk process independently from project management and technical development.
  4. Incorporating the technical risk mitigation processes into the schedule.
  5. Modeling the impact of technical risk on programmatic risk.
  6. Modeling the programmatic risk independent from the technical risk.

Using Conrow as a Guide

Here is one problem. When you use the complete phrase “Project Risk Management” with Google, you get ~642,000 hits. There are so many books, academic papers, and commercial articles on Risk Management, where do we start? Ed Conrow’s book is probably not the starting point for learning how to practice risk management on your project. However, it might be the ending point. If you are in the software development business, a good starting point is – Managing Risk: Methods for Software Systems Development, Elaine M. Hall, Addison Wesley, 1998. Another broader approach is Continuous Risk Management Guidebook, Software Engineering Institute, August 1996. While these two sources focus on software, they provide the foundation for the discussion of risk management as a discipline.

There are public sources as well:

  1. Start with the NASA Risk Management page, http://www.hq.nasa.gov/office/codeq/risk/index.htm
  2. Software Engineering Institute’s Risk Management page, http://www.sei.cmu.edu/risk/index.html
  3. A starting point for other resources from NASA, http://www.hq.nasa.gov/office/hqlibrary/ppm/ppm22.htm
  4. Department of Energy’s Risk Management Handbook, http://www.oecm.energy.gov/Portals/2/RiskManagement.pdf (which uses the DoD Risk Process Map described above)

However, care needs to be taken once you go outside the government boundaries. There are many voices plying the waters of “risk management,” as well as other voices with “axes to grind” regarding project management methods and risk management processes. The result is often a confusing message full of anecdotes, analogies, and alternative approaches to the topic of Risk Management.

Conrow in his Full Glory

Before starting into the survey of the Conrow book, let me state a few observations:

  1. This book is tough going. I mean really tough going. Tough in the way a graduate statistical mechanics book is tough going. Or a graduate micro-economics of managerial finance book is tough going. It is “professional grade” stuff. By “professional grade” I mean written by professionals to be used by professionals.
  2. Not every problem has the same solution need. Conrow’s solutions may not be appropriate for a software development project with 4 developers and a customer in the same room. But for projects that have multiple teams, locations, and stakeholders, some type of risk management is needed.
  3. The book is difficult to read for another reason. It assumes you have “a reasonable understanding of the issues” around risk management. This means it is not a tutorial-style or “risk management for dummies” book. It is a technical reference book. There is little in the way of introductory material to bring the reader up to speed before presenting the material.

From the introduction:

The purpose of this book is two-fold: first, to provide key lessons learned that I have documented from performing risk management on a wide variety of programs, and second, to assist you, the reader, in developing and implementing an effective risk management process on your program.

A couple of things here. One is the practical experience in risk management. Many in the risk management “talking” community have limited experience with risk management of the kind Ed has. I first met Ed on a proposal for an $8B Manned Spaceflight program. He was responsible for the risk strategy and the conveying of that strategy in the proposal. The proposal resulted in an award, and now our firm provides Program Planning and Controls for a major subsystem of the program. In this role, programmatic and technical risk management is part of the Statement of Work flowed down from the prime contractor.

Second, Ed is a technical advisor to the US Arms Control and Disarmament Agency, as well as a consultant to industry and government on risk management. These “resume” items are meant to show that the practice of risk management is just that - a practice. Speaking about risk management and doing risk management on high risk programs are two different things.

One of Ed’s principal contributions to the discipline was the development of a micro-economic framework of risk management in which design feasibility (or technical performance) is traded against cost and schedule.

In the end, this is a reference text for the process of managing the risk of projects, written by a highly respected practitioner.

What does the Conrow Book have to offer over the Standard approach?

Ed’s book contains the current “best practices” for managing technical and programmatic risk. These practices are used on high risk, high value programs. The guidelines in Ed’s book are generally applicable to many other classes of projects as well. But there are several critical elements that differentiate this approach from the pedestrian approach to risk management.

  1. The introduction of the “ordinal risk scale.” This approach is dramatically different from the PMI PMBOK description of risk, in which the probability of occurrence is multiplied by the consequences to produce a “risk rating.” Neither the probability of occurrence nor the consequences are calibrated in any way. The result is a meaningless number that may satisfy the C-Level that “risk management” is being done on the program. By ordinal risk scales is meant a classification of risk, say A, B, C, D, E, F, that is descriptive in nature - not just numbers. By the way, the use of letters is intentional. If numbers are used for ordinal risk ranks, there is a tendency to do arithmetic on them: compute the average risk rank, or multiply them by the consequences. Letters remove this notion and prevent the first failure of the common risk management approach - producing an overall risk measure.

The ordinal approach works like this. Ed describes some classes of risk scales which include: maturity, sufficiency, complexity, uncertainty, estimative probability, and probability based scales.

A maturity risk scale would be:

Definition                                                   | Scale Level
Basic principles observed                                    | E
Concept design analyzed for performance                      | D
Breadboard or brassboard validation in relevant environment  | C
Prototype passes performance tests                           | B
Item deployed and operational                                | A

The critical concept is to relate the risk ordinal value to an objective measure. For a maturity risk assessment, some “calibration” of what it means to have “basic principles observed” must be developed. This approach can be applied to the other classes - sufficiency, complexity, uncertainty, estimative probability, and probability based scales.

It’s the estimative probability scale that is important to the cost and schedule people in our PP&C practice. The estimative probability scale attempts to relate a word to a probability value - “High” to 0.85, for example. An ordinal estimative probability scale, using point estimates derived from a statistical analysis of survey data, might look like this:

Definition        | Median probability value | Scale Level
Certain           | 0.95                     | E
High              | 0.85                     | D
Medium            | 0.45                     | C
Low               | 0.15                     | B
Almost no chance  | 0.05                     | A

Calibrating these risk scales is the primary analysis task of building a risk management system. What does it mean to have a “medium” risk, in the specific problem domain?
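One way to make these ideas concrete in code is sketched below: an ordinal scale as an enumeration (letters, not integers, so arithmetic on ranks is rejected outright) plus a separate, domain-specific calibration table echoing the estimative scale above. This is an illustrative sketch, not Conrow's own tooling:

```python
from enum import Enum

# Illustrative sketch (not Conrow's tooling): an ordinal risk scale as an
# enumeration. Letters, not integers, so "average risk rank" style
# arithmetic simply fails.
class EstimativeProbability(Enum):
    ALMOST_NO_CHANCE = "A"
    LOW = "B"
    MEDIUM = "C"
    HIGH = "D"
    CERTAIN = "E"

# Domain-specific calibration, echoing the survey-derived medians in the
# table above. These values must be re-derived for each problem domain.
CALIBRATION = {
    EstimativeProbability.ALMOST_NO_CHANCE: 0.05,
    EstimativeProbability.LOW: 0.15,
    EstimativeProbability.MEDIUM: 0.45,
    EstimativeProbability.HIGH: 0.85,
    EstimativeProbability.CERTAIN: 0.95,
}

rating = EstimativeProbability.HIGH
print(rating.value, CALIBRATION[rating])  # D 0.85
# rating * 2  -> TypeError: ordinal ranks do not support arithmetic
```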

  2. The second item is the formal use of a risk management process. Simply listing the risk process areas - as is done in PMBOK - is not only poor project management practice, it is poor risk management practice. The processes to be used are shown on page 135 of http://www.dau.mil/pubs/gdbks/pmbok.asp. The application of these processes is described in detail. No process area is optional. All are needed. All need to be performed in the right relationship to each other.

These two concepts are the ones that changed the way I perform risk management on the programs I’m involved with and how we advise our clients. They are paradigm-changing concepts. No more simple-minded arithmetic with probabilities and consequences. No more uncalibrated risk scales. No more tolerating those who claim PERT, Critical Path, and Monte Carlo are unproven, obsolete, or “wrong headed” approaches.

Get Ed’s book. It’ll cost way too much when compared to the “paperback” approach to risk. But for those tasked with “managing risk,” this is the starting point.

 

Categories: Project Management

Practices, Not Platitudes

NOOP.NL - Jurgen Appelo - Tue, 10/14/2014 - 15:27
Practices, Not Platitudes

I recently took part in a conversation about compensation of employees. Some readers offered criticism of the Merit Money practice, described in my new Workout book, claiming that Merit Money is just another way to incentivize people. The feedback I received was, “Money doesn’t motivate people”, followed by, “Don’t incentivize people” and “Just pay people well”.

Let me explain why I think this advice is useless.


Categories: Project Management

5 Decision-Making Strategies that require no estimates

Software Development Today - Vasco Duarte - Tue, 10/14/2014 - 04:00

One of the questions that I and other #NoEstimates proponents hear quite often is: How can we make decisions on what projects we should do next, without considering the estimated time it takes to deliver a set of functionality?

Although this is a valid question, I know there are many alternatives to the assumptions implicit in this question. These alternatives - which I cover in this post - have the side benefit of helping us focus on the most important work to achieve our business goals.

Below I list 5 different decision-making strategies (aka decision-making models) that can be applied to our software projects without requiring a long-winded, and error prone, estimation process up front.

What do you mean by decision-making strategy?

A decision-making strategy is a model, or an approach, that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve the business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Some possible goals for business strategies might be:

  • Growth: growing the number of customers or users, growing revenues, growing the number of markets served, etc.
  • Market segment focus/entry: entering a new market or increasing your market share in an existing market segment.
  • Profitability: improving or maintaining profitability.
  • Diversification: creating new revenue streams, entering new markets, adding products to the portfolio, etc.

Other types of business goals are possible, and it is also possible to mix several goals in one business strategy.

Different decision-making strategies should be considered for different business goals. The 5 different decision-making strategies listed below include examples of business goals they could help you achieve. But before going further, we must consider one key aspect of decision making: Risk Management.

The two questions that I will consider when defining a decision-making strategy are:

  • 1. How well does this decision proposal help us reach our business goals?
  • 2. Does the risk profile resulting from this decision fit our acceptable risk profile?

Are you taking into account the risks inherent in the decisions made with those frameworks?

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

A different risk profile requires different decisions

Each decision we make has an impact on the following risk dimensions:

  • Failing to meet the market needs (the risk of what).
  • Increasing your technical risks (the risk of how).
  • Contracting or committing to work which you are not able to staff or assign the necessary skills (the risk of who).
  • Deviating from the business goals and strategy of your organization (the risk of why).

The categorization above is not the only one possible. However, it is very practical, and maps well to decisions regarding which projects to invest in.

There may be good reasons to accept increasing your risk exposure in one or more of these categories. This is true if increasing that exposure does not go beyond your acceptable risk profile. For example, you may accept a larger exposure to technical risks (the risk of how) if you believe that the project has a very low risk of missing market needs (the risk of what).

An example would be migrating an existing product to a new technology: you understand the market (the product has been meeting market needs), but you will take a risk with the technology with the aim to meet some other business need.
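A hedged sketch of question 2 from above: represent a decision's risk exposure per dimension (what/how/who/why) and compare it against the acceptable profile. The 0-to-1 scores and thresholds are invented for illustration:

```python
# Hedged sketch of question 2: does the decision's risk profile fit the
# acceptable one? Scores are invented, on a 0 (no risk) to 1 (maximum
# exposure) scale, one per risk dimension.
ACCEPTABLE = {"what": 0.3, "how": 0.7, "who": 0.5, "why": 0.2}

def fits_profile(decision_risk, acceptable=ACCEPTABLE):
    # The decision fits only if no dimension exceeds its acceptable limit.
    return all(decision_risk[dim] <= limit for dim, limit in acceptable.items())

# Migrating a proven product to a new technology: low market risk (what),
# elevated but acceptable technical risk (how).
migration = {"what": 0.1, "how": 0.6, "who": 0.4, "why": 0.1}
print(fits_profile(migration))  # True
```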

Aligning decisions with business goals: decision-making strategies

When making decisions regarding what project or work to undertake, we must consider the implications of that work for our business or strategic goals; therefore we must decide on the right decision-making strategy for our company at any given time.

Decision-making Strategy 1: Do the most important strategic work first

If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is: validating the new strategy. Note that the goal is not "implement new strategy", but rather "validate new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.

Decision-making Strategy 2: Do the highest technical risk work first

When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

Decision-making Strategy 3: Do the easiest work first

Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.

Decision-making Strategy 4: Do the legal requirements first

In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to improve significantly the time-to-market for your product. A medical organization that successfully adopted agile used this project decision-making strategy with a considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

Decision-making Strategy 5: Liability driven investment model

This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.
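As a rough sketch of this idea (with hypothetical work items, amounts, and dates, and deliberately ignoring surplus carry-over between liabilities), one could select the earliest-delivering work whose expected cash inflows cover each liability by its due date:

```python
from datetime import date

# Rough sketch of liability-driven work selection. Work items, amounts,
# and dates are hypothetical; surplus carry-over between liabilities is
# deliberately ignored to keep the sketch short.
liabilities = [(date(2014, 12, 1), 30_000), (date(2015, 2, 1), 45_000)]

# (name, expected revenue, expected delivery date) per candidate item.
candidates = [
    ("feature_x", 25_000, date(2014, 11, 15)),
    ("support_contract", 20_000, date(2014, 11, 20)),
    ("feature_y", 50_000, date(2015, 1, 10)),
]

selected = []
for due, amount in sorted(liabilities):
    needed = amount
    # Earliest-delivering work first, so cash arrives before the due date.
    for name, revenue, delivery in sorted(candidates, key=lambda c: c[2]):
        if needed <= 0:
            break
        if delivery <= due and name not in selected:
            selected.append(name)
            needed -= revenue
    if needed > 0:
        print(f"funding gap of {needed} for the liability due {due}")

print(f"do this work now: {selected}")
```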

These are just 5 possible investment or decision-making strategies that can help you make project decisions, or even business decisions, without having to invest in estimation upfront.

None of these decision-making strategies guarantees success, but then again nothing does except hard work, perseverance and safe experiments!

In the upcoming workshops (Helsinki on Oct 23rd, Stockholm on Oct 30th) that Woody Zuill and I are hosting, we will discuss these and other decision-making strategies that you can take away and start applying immediately. We will also discuss how these decision-making models are applicable to day-to-day decisions as much as strategic decisions.

If you want to know more about what we will cover in our world-premiere #NoEstimates workshops don't hesitate to get in touch!

Your ideas about decision-making strategies that do not require estimation

You may have used other decision-making strategies that are not covered here. Please share your stories and experiences below so that we can start collecting ideas on how to make good decisions without the need to invest time and money into a wasteful process like estimation.

Managing Your Project With Dilbert Advice — Not!

Herding Cats - Glen Alleman - Mon, 10/13/2014 - 16:11

Scott Adams provides cartoons of what not to do for most things technical - software and hardware. I actually saw him once, when he worked for PacBell in Pleasanton, CA. I was on a job at a major oil company, deploying document management systems for OSHA 1910.119 - process safety management - and integrating CAD systems for control of safety critical documents.

The most popular use of Dilbert cartoons lately has been by the #NoEstimates community, in support of the notion that estimates are somehow evil, are used to make commitments that can't be met, and generally should be avoided when spending other people's money.

The cartoon below resonated with me for several reasons. What's happening here is classic misguidance: intentionally ignoring the established processes of Reference Class Forecasting and, in typical Dilbert fashion, doing stupid things on purpose.

[Dilbert cartoon]

Reference Class Forecasting is a well developed estimating process used across a broad range of technical, business, and finance domains. The characters above seem not to know anything about RCF. As a result they are DSTOP - Doing Stupid Things On Purpose.

Here's how not to DSTOP with cost and schedule estimates, their associated risks, and the technical risk that the product you're building can't do what it's supposed to do on or before the date it needs to, at or below the cost you need, in order to stay in business.

The approach below may be complete overkill for your domain. So start by asking: what's the Value at Risk? How much of our customer's money are we willing to write off if we don't have a sense of what DONE looks like, in units of measure meaningful to the decision makers? Don't know that? Then it's likely you've already put that money at risk; you're likely late and don't really know what capabilities will be produced when you run out of time and money.
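For readers unfamiliar with RCF, here is a minimal sketch of the core move: take the distribution of actual-to-estimate ratios from a reference class of similar past projects and use it to adjust a new inside-view estimate. The ratios below are hypothetical:

```python
import statistics

# Minimal sketch of Reference Class Forecasting: adjust an inside-view
# estimate by the distribution of actual/estimate ratios from similar
# past projects. The reference-class ratios below are hypothetical.
past_ratios = sorted([1.1, 1.4, 0.9, 1.6, 1.3, 1.2, 1.8, 1.0])

inside_view_estimate = 120  # days: the team's own bottom-up estimate

median_uplift = statistics.median(past_ratios)
p80_uplift = past_ratios[int(0.80 * len(past_ratios))]  # crude 80th percentile

print(f"median-adjusted forecast: {inside_view_estimate * median_uplift:.0f} days")
print(f"80% confidence forecast: {inside_view_estimate * p80_uplift:.0f} days")
```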

Don't end up a cartoon character in a Dilbert strip. Learn how to properly manage your efforts, and the efforts of others, when using your customer's money.

Categories: Project Management

Intentional Disregard for Good Engineering Practices?

Herding Cats - Glen Alleman - Mon, 10/13/2014 - 03:22

It seems lately there is an intentional disregard of the core principles of the business of developing software intensive systems. The #NoEstimates community does this, but other collections of developers do as well.

  • We'd rather be writing code than estimating how much it's going to cost to write the code.
  • Estimates are a waste.
  • The more precise the estimate, the more deceptive it is.
  • We can't predict the future and it's a waste trying to.
  • We can make decisions without estimating.

These notions are of course seriously misinformed about how probability and statistics work in the estimating paradigm. I've written about this in the past. But there are a few new books we're putting to work in our Software Intensive Systems (SIS) work that may be of interest to those wanting to learn more.

These are foundation texts for the profession of estimating. The continued disregard - ignoring, possibly - of these principles has become all too clear. Not just in the sole-contributor software development domain, but all the way up to multi-billion dollar programs in defense, space, infrastructure, and other high risk domains.

Which brings me back to a core conjecture: there is no longer any engineering discipline in the software development domain, at least outside embedded systems like flight controls, process control, telecommunications equipment, and the like. There was a conjecture a while back that the Computer Science discipline at the university level should be split into software engineering and coding.

Here's a sample of the Software Intensive System paradigm, where the engineering of the systems is a critical success factor. And Yes Virginia, the Discipline of Agile is applied in the Software Intensive Systems world - emphasis on the term DISCIPLINE.

Categories: Project Management

Principles Trump Practices

Herding Cats - Glen Alleman - Sat, 10/11/2014 - 15:23

Principles, Practices, and Processes are the basis of successful project management. It is popular in some circles to think that practices come before principles.

The principles of management - project management, software development and its management, product development management - are immutable.

What does done look like, what's our plan to reach done, what resources will we need along the way to done, what impediments will we encounter and how will we overcome them, and how are we going to measure our progress toward done in units meaningful to the decision makers?

These are immutable principles. They can be used to test practices and processes by asking: what is the evidence that the practice or process enables the principle to be applied, and how do we know that the principle is being fulfilled?

 

Categories: Project Management

What Informs My Project Management View

Herding Cats - Glen Alleman - Fri, 10/10/2014 - 15:44

In a recent discussion (of sorts) about estimating - Not Estimating, actually - I realized something that should have been obvious: I travel in a world not shared by the staunch advocates of #NoEstimates. They appear to be sole contributors. I came to this after reading Peter Kretzman's 3rd installment, where he re-quoted a statement by Ron Jeffries:

Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.

This is a sole contributor or small team paradigm.

So let's pretend we work at PricewaterhouseCoopers, playing our roles - Peter as CIO advisor, me as Program Performance Management advisor. We've been asked by our new customer to develop a product from scratch, estimate the cost and schedule, and provide some confidence level that the needed business capabilities will be available on or before a date and at or below a cost. Why, you ask? Because that's in the business plan for this new product, and if they're late or overrun the planned cost, that will be a balance sheet problem.

What would we do? Well, we'd start with the PWC resource management database - held by HR - and ask for people with “past performance experience” in the business domain and the problem domain. Our new customer did not “invent” a new business domain, so it's likely we'll find people who know what our new customer does for money. We'd look to see whether, among the 195,433 people in the database who work for PWC worldwide, there is someone, somewhere, who knows what the customer does for money and what kinds of business capabilities this new system needs to provide. If there is no one, then we'd look in our database of 10,000 or so partner relationships to find someone.

If we found no one who knows the business and the needed capabilities, we’d no bid.

This notion of “I've been asked to do something that's never been done before, so how can I possibly estimate it?” really means “I'm doing something I've never done before.” And since I'm a sole contributor, the population of experience in doing this new thing for the new customer is ONE - me. Since I don't know how the problem has been solved in the past, I can't possibly know how to estimate the cost, schedule, and needed capabilities. And of course I'm absolutely correct to say that new development with unknown requirements can't be estimated - because those requirements are unknown to me, though they may well be known to another. In the population of 195,000 other people in our firm, I'm no longer alone in my quest to come up with an estimate.

So the answer to the question, “what if we encounter new and unknown needs, how can we estimate?” is actually a core problem for the sole contributor or small team. It'd be rare for a sole contributor or small team to have encountered the broad spectrum of domains and technologies needed to establish the necessary Reference Classes to address this open-ended question. This is not the fault of the sole contributor. It is simply the situation of small numbers versus large numbers.

This is the reason the PWCs of the world exist. They get asked to do things the sole contributors never have an opportunity to see.
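To make the Reference Class idea concrete, here is a minimal sketch - not PWC's actual process, and the project durations below are invented - of how an estimate at a stated confidence level falls out of the historical record of comparable projects:

    # A minimal sketch of reference class forecasting. Assumes we have
    # actual durations (in months) from past projects in the same
    # business and problem domain - the "reference class."
    import statistics

    def reference_class_estimate(past_durations, confidence=0.80):
        """Estimate duration at a confidence level from the empirical
        distribution of comparable past projects."""
        data = sorted(past_durations)
        # Index into the sorted sample at the desired percentile.
        idx = min(int(confidence * len(data)), len(data) - 1)
        return statistics.median(data), data[idx]

    # Hypothetical durations from nine comparable past projects.
    median, p80 = reference_class_estimate([11, 9, 14, 12, 10, 16, 13, 12, 18])
    print(f"median: {median} months, 80% confidence: {p80} months or less")

The data does the estimating. The sole contributor's difficulty is not that estimating is impossible; it's that a sample of one has no reference class to draw from.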

Related articles Software Requirements Are Vague
Categories: Project Management

Software Requirements Are Vague

Herding Cats - Glen Alleman - Thu, 10/09/2014 - 22:23

Peter Kretzman has a nice post in his series on #NoEstimates. Peter and I share a skepticism of "making decisions in the absence of estimating the cost and impact" of those decisions. In Peter's current post there is a quote that is telling.

Let’s use Ron Jeffries’ statement as an example of this stance:

“Estimates are difficult. When requirements are vague — and it seems that they always are — then the best conceivable estimates would also be very vague. Accurate estimation becomes essentially impossible. Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before. “

One of my 3 half-time jobs is working in the space and defense program performance management domain, with both embedded systems and enterprise IT systems. DOD is the largest buyer of ERP on the planet. In this domain we have a formal process for determining what went wrong. The department looking after this is called Performance Assessment and Root Cause Analysis (PARCA). PARCA provides root cause analysis for programs that have “gone Nunn-McCurdy,” as we would say.

When you read the reports from RAND and the Institute for Defense Analyses (IDA) on Nunn-McCurdy breaches, requirements instability is among the top 5 root causes.

It seems to me - in my narrow-minded program performance management view of the world - that unstable requirements used as the reason for vague estimates is an obvious problem, one that has been completely ignored by the #NoEstimates advocates. It's like the olde saw:

Doctor, Doctor it hurts when I do this (make estimates in the presence of vague requirements). Then stop doing that!

The notion of Capabilities Based Planning is missing in many software organizations. Having vague requirements is a natural outcome of not having a definitive understanding of what capabilities the system must provide, in units of measure meaningful to the decision makers. These units are:

  • Measures of Effectiveness (MoE) - the operational measures of success, closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions. MoEs are stated in units meaningful to the buyer, focus on capabilities independent of any technical implementation, and are connected to mission success.
  • Measures of Performance (MoP) - characterize physical or functional attributes of the system's operation, measured or estimated under specific conditions. MoPs are attributes that assure the system has the capability and capacity to perform, and they assess the system to assure it meets the design requirements needed to satisfy the MoEs.

Without these, requirements have no home, are vague, and therefore become the root cause of bad estimates.
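As an illustration only - the capability, measures, and targets below are invented, not drawn from any DoD artifact - here is a sketch of how MoEs and MoPs can be recorded as testable statements in units meaningful to the buyer, rather than as vague prose:

    # Hypothetical example: capability measures as testable statements.
    from dataclasses import dataclass

    @dataclass
    class Measure:
        name: str            # what we measure
        unit: str            # units meaningful to the decision makers
        target: float        # required value
        higher_is_better: bool = True

        def is_met(self, observed: float) -> bool:
            if self.higher_is_better:
                return observed >= self.target
            return observed <= self.target

    # MoE: mission-level outcome, independent of any implementation.
    moe = Measure("orders processed per peak hour", "orders/hour", 5000)
    # MoP: functional attribute of the system that supports the MoE.
    mop = Measure("95th-percentile checkout latency", "seconds", 2.0,
                  higher_is_better=False)

    print(moe.is_met(5400), mop.is_met(1.7))  # True True

Once a measure carries a name, a unit, and a target, a requirement either has a home or it visibly doesn't.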

So what would a logical person do when working on a project that spends other people's money - sometimes lots of other people's money? Not estimate? Does that sound like the corrective action to the root cause of the software project success shortfall?

Not to me. It's the doctor, doctor it hurts paradigm. Until the root cause is determined and the corrective actions identified and applied, there can be no credible solution to the estimating problem. And there is a huge estimating problem in our domain - just read the Nunn-McCurdy reports at RAND and IDA (Google “Nunn-McCurdy RAND” or “Nunn-McCurdy IDA” to find them). Similar assessments of root causes can be found for enterprise IT from many sources.

The #NoEstimates advocates are attempting to solve the wrong problem with the wrong approach. They've yet to connect with the core process of writing software for money - the microeconomics of software development. Here's a starting point for addressing the root cause rather than the symptom. Fixing the symptoms does nothing in the end; it just spends money, with no actionable outcomes. And that would be very counter to the principles of Agile.

Capabilities based planning (v2) from Glen Alleman


Related articles: Who pays? | No Estimates Needs to Come In Contact With Those Providing the Money | #NoEstimates? #NoProjects? #NoManagers? #NoJustNo
Categories: Project Management

Small Internal Releases Lead to Happy Customers

If you saw Large Program? Release More Often, you might have noted that I said,

You want to release all the time inside your building. You need the feedback, to watch the product grow.

Some of my clients have said, “But my customers don’t want the software that often.” That might be true. You may have product constraints, too. If you are working on a hardware/software product, you can’t integrate the software with the hardware until the hardware is ready, and maybe not that often even then.

I’m not talking about releasing the product to the customers. I’m not talking about integrating the software with the hardware. I’m talking about small, frequent, fully functional releases that help you know that your software is actually done.

You don’t need hardening sprints. Or, if you do, you know it early. You know you have that technical debt now, not later. You can fix things when the problem is small. You see, I don’t believe in hardening sprints.

Hardening sprints mean you are not getting to done on your features. They might be too big. Your developers are not finishing the code, so the testers can’t finish the tests. Your testers might not be automating enough. Let’s not forget architectural debt. It could be any number of things. Hardening sprints are a sign that “the software is not done.” Wouldn’t you like to know that every three or four weeks, not every ten or twelve? You could fix it when the problem is small and easier to fix.

Here’s an example. I have a number of clients who develop software for the education market.  One of them said to me, “We can’t release all the time.”

I said, “Sure, you can’t release the grading software in the middle of the semester. You don’t want to upset the teachers. I get that. What about the how-to-buy-books module? Can you update that module?”

“Of course. That’s independent. We’re not sure anyone uses that in the middle of the semester anyway.”

I was pretty sure I knew better. Teachers are always asking students to buy books. Students procrastinate. Why do you think they call it “Student syndrome”? But I decided to keep my mouth shut. Maybe I didn’t know better. The client decided to try just updating the buy-the-book module as they fixed things.

The client cleaned up the UI and fixed irritating defects. They released internally every two weeks for about six weeks. They finally had the courage to release mid-semester. A couple of schools sent emails, asking why they waited so long to install these fixes. “Please fix the rest of these problems, as soon as you can. Please don’t wait.”

The client had never released this often before. It scared them. It didn’t scare their customers. Their customers were quite happy. And, the customers didn’t have all the interim releases; they had the mini-releases that the Product Owner planned.

My client still doesn’t release every day. They still have an internal process where they review their fixes for a couple of weeks before the fixes go live. They like that. But, they have a schedule of internal releases that is much shorter than what they used to have. They also release more often to their customers. The customers feel as if they have a “tighter” relationship with my client. Everyone is happier.

My client no longer has big-bang external releases. They have many small internal releases. They have happier customers.

That is what I invite you to consider.

Release externally whenever you want. That is a business decision. Separate that business decision from your ability to release internally all the time.

Consider moving to a continuous delivery model internally, or as close as you can get to continuous delivery internally. Now, you can decide what you release externally. That is a business decision.

What do you need to do to your planning, your stories, your technical practices to do so?

Categories: Project Management

How to Estimate Software Development - Update

Herding Cats - Glen Alleman - Thu, 10/09/2014 - 02:49

There is a popular quote used by many in the #NoEstimates community, that is sadly misinformed.

Those who have knowledge, don’t predict. Those who predict, don’t have knowledge. − Lao Tzu

This of course was from a 6th-century BC Chinese philosopher, who was not likely familiar with the notions of probability and statistics developed some two millennia later. The quoting and re-quoting of Lao Tzu as an example of why estimates can't be made brings to light one of the more troublesome aspects of our modern age.

The lack of understanding of basic probability and statistics when applied to human endeavors.

Or possibly the intentional ignorance of probability and statistics as it is applied to the development of software systems. I can't really say if it is for lack of understanding, lack of exposure, or just a simple intent to ignore. 

But for any of those reasons and more, here's a starting point for becoming a member of the modern statistical estimating community, once it is decided that is better than ignoring the basic knowledge needed to be a steward of other people's money.

Here are some starting points, in no particular order other than that's how they came off the office bookshelf.

These are just a small sample of the information readily available at your local bookstore or through the mail. If you google "software cost estimating" (all in quotes), you'll find hundreds more articles, papers, and web sites. As well, tools for estimating software are used every single day in a variety of domains.

The Value at Risk is a starting point as well. Value is defined by those providing the money, not by those doing the work; risk is usually defined by those doing the work, not by those providing the money - at least in the domains we work in. This Value at Risk sets the tone. Low value and low risk - and this is in absolutely no way an assessment of the relative value and risk - usually doesn't need much estimating.

Got a 6-week, 2-person database update project? Just do it. Got a 38-month, 400-person national asset software project? Estimating is probably called for. Everything and anything in between needs to ask and answer that value-at-risk question before deciding.
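As a toy sketch of that triage rule - the thresholds below are invented for illustration, since the real ones come from those providing the money - the decision might look like:

    # A toy value-at-risk triage rule; thresholds are invented.
    def estimating_rigor(cost_at_stake_usd: float, duration_months: float) -> str:
        if cost_at_stake_usd < 50_000 and duration_months < 2:
            return "just do it - a coarse estimate at most"
        if cost_at_stake_usd > 10_000_000 or duration_months > 24:
            return "full reference-class / parametric estimate required"
        return "estimate to a confidence level agreed with the buyer"

    print(estimating_rigor(30_000, 1.5))      # the 6-week, 2-person project
    print(estimating_rigor(400_000_000, 38))  # the 38-month, 400-person program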

So poor Lao Tzu was sadly uninformed when he made his statement. As are those repeating it. In the 21st century:

Those who have knowledge of probability, statistics, and the processes described by them can predict future behaviour. Those without that knowledge, skill, or experience cannot.
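Here is a minimal sketch of what that knowledge buys in practice - a Monte Carlo pass over three-point task estimates (the tasks and numbers are invented), yielding a duration at a stated confidence rather than a single guess:

    # Monte Carlo over three-point (optimistic, most likely, pessimistic)
    # estimates, in days. Tasks and numbers are invented for illustration.
    import random

    tasks = {
        "design": (3, 5, 10),
        "build":  (8, 13, 25),
        "test":   (4, 6, 14),
    }

    def simulate_once():
        # The triangular distribution is a common, simple choice
        # for turning three-point estimates into a random sample.
        return sum(random.triangular(lo, hi, mode)
                   for lo, mode, hi in tasks.values())

    trials = sorted(simulate_once() for _ in range(10_000))
    p50, p80 = trials[5_000], trials[8_000]
    print(f"50% confidence: {p50:.0f} days; 80% confidence: {p80:.0f} days")

A statement like "80% confidence of finishing in N days or less" is exactly the kind of prediction the quote claims is impossible.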

Related articles How to Estimate Software Development
Categories: Project Management

Large Program? Release More Often

I’m working on the release planning chapter for Agile and Lean Program Management: Collaborating Across the Organization. There are many ways to plan releases. But the key? Release often. How often? I suggest once a month.

Yes, have a real, honest-to-goodness release once a month.

I bet that for some of you, this is counter-intuitive. “We have lots of teams. Lots of people. Our iterations are three weeks long. How can we release once a month?”

Okay, release every three weeks. I’m easy.

Look, the more people and teams on your program, the more feedback you need - and the more chances you have of getting stuck, of falling into a death spiral of lost momentum. What you want is to gain momentum.

Large programs magnify this problem.

If you want to succeed with a large agile program, you need to see progress, wherever it is. Hopefully, it’s all over the program. But, even if it’s not, you need to see it and get feedback. Waiting for feedback is deadly.

Here’s what you do:

  1. Shorten all iterations to two weeks or less. You then have a choice to release every two or four weeks.
  2. If you have three-week iterations, plan to release every three weeks.
  3. Make all features sufficiently small so that they fit into an iteration. This means you learn how to make your stories very small. Yes, you learn how. You learn what a feature set (also known as a theme) is. You learn to break down epics. You learn how to have multiple teams collaborate on one ranked backlog. Your teams start to swarm on features, so the teams complete one feature in one iteration or in flow.
  4. The teams integrate all the time. No staged integration.

Remember this picture, the potential for release frequency?

Potential for Release Frequency

That’s the release frequency outside your building.

I’m talking about your internal releasing right now. You want to release all the time inside your building. You need the feedback, to watch the product grow.

In agile, we’re fond of saying, “If it hurts, do it more often.” That might not be so helpful. Here’s a potential translation:  “Your stuff is too big. Make it smaller.”

Make your release planning smaller. Make your stories smaller. Integrate smaller chunks at one time. Move one story across the board at one time. Make your batches smaller for everything.

When you make everything smaller (remember Short is Beautiful?), you can go bigger.

Categories: Project Management

Lean Change Management: A Truly Agile Change Management approach

Software Development Today - Vasco Duarte - Wed, 10/08/2014 - 04:00

"I've been working in this company for a long time, we've tried everything. We've tried involving the teams, we've tried training senior management, but nothing sticks! We say we want to be agile, but..."

Many people in organizations that try to adopt agile will have said this at some point. Not every company fails to adopt agile, but many do.

Why does this happen, what prevents us from successfully adopting agile practices?

Learning from our mistakes

Actually, this section should be called learning from our experiments. Why? Because every change in an organization is an experiment. It may work, it may not work - but for sure it will help you learn more about the organization you work for.

I learned this approach from reading Jason Little's Lean Change Management. Probably the most important book about Agile adoption to be published this year. I liked his approach to how change can be implemented in an organization.

He describes a framework for change that is cyclical (just like agile methods):

  • Generate or gain insights: in this step, we - those involved in the change - run small experiments (for example, asking questions) to generate insights into how the organization works, and what we could use to help people embrace the next steps of the change.
  • Define options: in this step we list the options we have - the experiments we could run that would help us move toward our vision for the change.
  • Select and run experiments: each option, after being selected, is transformed into an experiment. Each experiment has a set of actions, people to involve, expected outcomes, etc.
  • Review, learn and...: after the experiments are concluded (and sometimes right after starting them), we gain even more insights that we can feed right back into what Jason calls the Lean Change Management Cycle.

The Mojito method of change

The overall Lean Change Management cycle is complemented in the book with concrete practices that Jason used and explains how to apply. Jason uses the story of The Commission to show the different practices in action. For example, in Chapter 8 he goes into detail on how he used the Change Canvas to create alignment during a major change in a large (and slow-moving) organization.

Jason also reviews several change frameworks (Kotter's 8 steps, McKinsey's 7S, OCAI, ADKAR, etc.) and shows how he took the best of each framework to help him walk through the Lean Change Management cycle.

The most important book about Agile adoption right now

After having worked on this book for almost a year together with Jason, I can say that I am very proud to be part of what I think is a critical knowledge area for any Agile Coach out there. Jason's book describes a very practical approach to changing any organization - which is what Agile adoption is all about.

For this reason I'd say that any Agile Coach out there should read the book and learn the practices and methods that Jason describes. The practices and ideas he describes will be key tools for anyone wanting to change their organization and adopt Agile in the process.

Here's where you can find more details about what the book includes.