Herding Cats - Glen Alleman
Performance-Based Project Management® Principles, Practices, and Processes to Increase Probability of Success. Доверяй, но проверяй (Trust, but verify).

Why Trust is Hard

Thu, 10/30/2014 - 14:06

Hugh MacLeod's art for Zappos provides the foundation for trust in that environment.

If I'm the head of HR, I'm responsible for filling the desks at my company with amazing employees. I can hold people to all the right standards. But ultimately I can't control what they do. This is why hiring for culture works. What Zappos does is radical because it trusts. It says "Go do the best job you can do for the customer, without policy". And leaves employees to come up with human solutions. Something it turns out they're quite good at, if given the chance.

Now let's take another domain, one I'm very familiar with - fault tolerant process control systems. Software and support hardware applied to emergency shutdown of exothermic chemical reactors - those that make the unleaded gasoline for our cars - nuclear reactors and conventional fired power generation, gas turbine controls, and other machines that must work properly. And a similar domain of DO-178C flight control systems, which must equally work without fail and provide all the needed capabilities on day one.

At Zappos, the HR Director describes a work environment where employees are free to do the best job they can for the customer. In the domains above, employees also work to do the best job for the customer they can, but flight safety, life safety, and equipment safety are also part of that best job. In other domains where we work, doing the best job for the customer means processing, with extremely low error rates, transactions worth hundreds of millions of dollars in the enterprise IT paradigm: medical insurance provider services, HHS enrollment, enterprise IT in a variety of domains.

Zappos can recover from an error; other domains can't. Nonrecoverable errors mean serious loss of revenue, or even loss of life. In the other domains, failure has similar consequences. I come from those domains, and they inform my view of the software development world - where fail-safe software and fault tolerance are the basis of business success.

So when we hear about the freedom to fail early and fail often in the absence of a domain or context, care is needed. Without a domain and context, it is difficult to assess the credibility of any concept, let alone one that is untested outside of personal anecdote. It comes down to Trust alone or Trust But Verify. I could also guarantee that Zappos has some of the verify process. It is doubtful employees are left to do anything they wish for their customer. The simple reason is that there is a business governance process at any firm, no matter the size. Behavior, even full-trust behavior, fits inside that governance process.
Categories: Project Management

Quote of the Day

Wed, 10/29/2014 - 16:56

Vision without Execution is Hallucination. - Jeffrey E. Garten, The Mind of the CEO

All the rhetoric around any idea needs actionable outcomes that can be tested in the market place, beyond the personal anecdotes of self-selected conversations.


Categories: Project Management

Quote of the Day

Wed, 10/29/2014 - 15:56

The Sky's the limit when you don't know what you don't know.

Categories: Project Management

Should I Be Estimating My Work?

Mon, 10/27/2014 - 15:47

The question asked by #NoEstimates is in the form of a statement. 

No Estimates

On the surface this statement sounds interesting until the second sentence. The microeconomics of writing software for money is based on estimating future outcomes that result from current-day decisions. But let's pretend for a moment that microeconomics is beyond consideration (this is never true, but let's pretend).

The next approach is to construct a small decision tree that can invert the question. Forget the exploring, since all business effort is a zero sum game, in that someone has to pay for everything we do: exploring, coding, testing, installing, training, even operating.

[Decision tree flow chart]

 So let's start down the flow chart.

Is It Your Money?

In the crass world of capitalism, money talks, BS walks. While this may be abhorrent to some, it's the way the world works, and unless you've got your own bank, you're likely going to have to use other people's money to produce software - either for yourself or for someone else. Self-funded startup? No problem, but even the best-known names in software today went on to raise more money to move the firm forward. Then self-funded became venture funded, private equity funded, and then publicly funded.

If you're writing software for money, and it's not your money, those providing the money have - or should have, if they're savvy investors - a vested interest in knowing how much this will cost, when it will be done, and most importantly, what will be delivered during the work effort and at the end.

This requires estimating

Is There A Governance Policy Where You Work?

Governance of software development, either internal projects, external projects, or product development is a subset of corporate governance. 

If you work at a place that has no corporate governance, then estimating is probably a waste.

If, however, you work somewhere that does have a corporate governance process - no matter how simple - and this is likely the case when there is a non-trivial amount of money at risk, then someone, somewhere in the building has an interest in knowing how much things will cost before you spend the money to do them or buy them.

This requires estimating

What's the Value at Risk for Your Project?

If the value at risk for a project is low - that is, if you spend all the money and consume all the time and produce nothing, and the management of your firm writes that off as a loss without much concern - then estimating probably doesn't add much value.

But if those providing you the money have an expectation that something of value will be returned, and that something is needed for the business, then writing off the time and cost is not likely to be seen as favorable to you, the provider.

We trusted you because you said "trust me," and you didn't show up on or before the planned time, at or below the planned budget, with the needed capabilities - and you didn't want to estimate those up front and keep us informed about your new and updated Estimate To Complete and Estimate At Complete so we could take corrective actions to help you out - so we're going to suggest you look for work somewhere else.

On low-value projects, estimating the probability of success, the probability of the cost of that success, and the probability of the completion date of that success is not likely of much value.

But using the Value at Risk paradigm, the risk of loss of a specific asset (in this case the value produced by the project) is described by a threshold loss value, such that the probability that the loss on the value from the project, over the given time horizon, exceeds that threshold is set at some given level.

As an aside, the notion of slicing does not reduce the Value at Risk. Slicing is an estimating normalization process where the exposure to risk is reduced to same-sized chunks. But the natural and event-based variability of the work is still present in the chunks, and the probability of impacting the outcome of the project due to changes in demand, productivity, defects, rework, and unfavorable or even unanticipated interactions between the produced chunks needs to be accounted for in some estimating process. As well, the sunk cost of breaking down the work into same-sized chunks needs to be accounted for.
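
To make the Value at Risk idea concrete, here is a minimal Monte Carlo sketch (Python with numpy). Every distribution and number in it is invented for illustration; the point is only that a common-cause uncertainty such as productivity is not diversified away by slicing the demand into same-sized chunks, so the tail percentile - the Value at Risk - remains.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Common-cause (systemic) uncertainty, e.g. team productivity:
# slicing the work into smaller chunks does not diversify this away.
productivity = rng.lognormal(mean=0.0, sigma=0.25, size=n)   # cost multiplier

# Work demand for 12 same-sized chunks, each with its own small variability.
chunk_demand = rng.lognormal(mean=np.log(1.0 / 12), sigma=0.10, size=(n, 12))

cost = chunk_demand.sum(axis=1) * productivity   # normalized cost, point estimate ~1.0

budget = 1.2
var_95 = np.percentile(cost, 95)     # 95th-percentile cost threshold
p_over = (cost > budget).mean()      # probability of exceeding the budget

print(f"95th percentile cost: {var_95:.2f} x the point estimate")
print(f"P(cost > {budget} x point estimate): {p_over:.1%}")
```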

In our space and defense world, there is the 44 Day Rule, where chunks of work are broken down into 44 days (2 financial months) with tangible deliverables. The agile community would consider this too long, but they don't work on National Asset, billion dollar, software intensive programs, so ignore that for the moment.

So yes, slicing is a good project management process. But using the definition of No Estimates in the opening, slicing is Estimating, and it is done in every credible project management process, usually through the Work Breakdown Structure guide.

The Five Immutable Principles of Project Success

To increase the probability of project success, five immutable principles must be in place and have credible answers to their questions. Each requires some form of an estimate, since the outcomes from these principles are in the future. No amount of slicing and dicing is going to produce a non-statistical or non-probabilistic outcome. All slicing does - as mentioned before - is reduce the variance of the work demand, not the work productivity, the variance in that work productivity process, the rework due to defects, or any unidentified dependencies between those work products that will create uncertainty and therefore create risk to showing up on time, on budget, and on specification.

5 Immutable Principles

The Devil Made Me Do It

Those of us seeking an evidence-based discussion about the issues around estimating - and there is an endless supply of real issues with real solutions - have pushed back on using Dilbert cartoons. But I just couldn't resist today's cartoon.

When we need to make a decision between options - microeconomics and opportunity costs - about some outcome in the future, we need an estimate of the cost and benefit of that choice. To suggest that decisions can be made without estimates has little merit in the real world of spending other people's money.

[Dilbert cartoon]

Related articles: No Estimates Needs to Come In Contact With Those Providing the Money
Categories: Project Management

Anecdotal Evidence is not Actually Evidence

Sun, 10/26/2014 - 16:25

When we hear I know a CEO that uses this method and she's happy with the outcomes, there are several core fallacies wrapped into one.

The first is the self-selection problem of statistics. This is the Standish problem. Send out a survey, tally the results from those that were returned. Don't publish how many surveys went out and how many came back.

The next is the Anecdotal sample. I know a guy that... in support of the suggestion that by knowing someone who supports your conjecture, your conjecture is somehow supported.

These are both members of the cherry picking process. The result is lots of exchanges of questions to the original conjecture that have no basis in evidence for the conjecture.

When you encounter such a conjecture, apply Sagan's BS detection kit:

  • Seek independent confirmation of alleged facts.
  • Encourage an open debate about the issue and the available evidence.
  • In our domain and most others there are no authorities. At most, there are experts.
  • Come up with a variety of competing hypotheses explaining a given outcome. Considering many different explanations will lower the risk of confirmation bias.
  • Quantify whenever possible, allowing for easier comparisons between hypotheses' relative explanatory power.
  • Every step in an argument must be logically sound; a single weak link can doom the entire chain.
  • When the evidence is inconclusive, use Occam's Razor to discriminate between hypotheses.
  • Pay attention to falsifiability. Science does not concern itself with unfalsifiable propositions.

When there is push back from hard questions, you'll know those making the claims have no evidence and are essentially BS'ing their constituents.

Categories: Project Management

Trust but Verify

Sat, 10/25/2014 - 18:28

There is this notion in some circles that trust trumps all business management processes.

The Russian proverb is

"Доверяй, но проверяй, Лучше перебдеть, чем недобдеть"

Whose butchered translation is Trust but Verify - don't rely on chance.

President Reagan used that proverb, reflected back to the Russians, during the INF treaty negotiations. So what does it mean to trust that people can think for themselves and decide if it applies to them ... that estimates of the cost, performance, and schedule for the project are not needed?

The first question is - what's the value at risk? Trust alone is likely workable when the value at risk is low. In that case, not showing up on or before the needed time, at or below the needed cost, or with all the needed capabilities for the mission or business case fulfillment has much less impact and is therefore acceptable.


Trust Alone | Trust but Verify
6 week DB update with 3 developers | 18 month ERP integration with 87 developers whose performance is reported to the BoD on a quarterly basis
Water filter install in kitchen using the local handyman | Water filter install in kitchen with wife testing to see if it does what it said in the brochure
Install the finch feeder on the pole attached to the back deck in front of the kitchen window overlooking the golf course | Design and build the 1,200 square foot deck attached to the second floor on the back of the house using the architect's plans, and schedule the county inspection certificate so it can be used this summer
Arrange for a ride in a glider at the county airport sometime Saturday afternoon | Plan departure from DIA and check for departure delay of SWA flight DEN to DCA

In the first instances (left column), trust us, we'll be done in the 6 week window probably means that team doesn't need to do much estimating, other than to agree among themselves that the promise made to the manager has a good chance of coming true.

The second (right column) is the $178M ERP integration project in a publicly traded firm, filing their 10-K and subject to FASB 86, having promised the shareholders, the insured, and the provider network that the new system will remove all the grief of the several dozen legacy apps and make it all better on or before the Go Live date announced at the Board Meeting and in the Press. Here the question is whether that promise has a good chance of coming true.

To assess that chance, more than Trust is needed. Evidence of the probability of completing on or before the go live date and at or below the target budget is needed. That probability is developed with an estimating process and updated on a periodic basis - in this case every month, with a mid-month assessment of the month end's reportable data.
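
How that probability might be produced is sketched below with a Monte Carlo pass over the remaining work (Python with numpy). The work packages, their triangular duration ranges, and the 18-month target are all invented for illustration, and the sketch assumes the packages run serially.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Remaining major work packages, modeled with triangular (low, mode, high) durations in months.
work_packages = {
    "data migration":       (3, 4, 7),
    "interface build":      (4, 5, 9),
    "integration testing":  (2, 3, 6),
    "cutover and training": (1, 2, 4),
}

total = np.zeros(n)
for low, mode, high in work_packages.values():
    total += rng.triangular(low, mode, high, size=n)

go_live_target = 18  # months from now, illustrative
p_on_time = (total <= go_live_target).mean()
p80 = np.percentile(total, 80)

print(f"P(complete on or before month {go_live_target}): {p_on_time:.0%}")
print(f"80% confidence completion: month {p80:.1f}")
```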

So next time you hear...


...think of the Value at Risk and the fiduciary responsibility to those funding your work to ask for, and produce an answer to, the question of how much, when, and what will be delivered. And possibly even the compliance responsibility - SOX, CMS, FAR/DFARS, ITIL - for knowing, to some degree of confidence, the Estimate To Complete and the Estimate At Complete for your project. Writing 6 week warehouse apps? Probably not much value. Spending hundreds of millions of stockholders' money and betting the company? Knowing something like those numbers is likely needed.
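
For reference, the Estimate To Complete and Estimate At Complete mentioned above are standard earned value quantities. A minimal sketch of one common formulation, with all numbers invented for illustration:

```python
# Standard earned value relationships (one common formulation):
#   CPI = EV / AC              cost performance index
#   ETC = (BAC - EV) / CPI     estimate to complete the remaining work
#   EAC = AC + ETC             estimate at complete
bac = 10_000_000   # budget at completion, illustrative
ev  = 3_000_000    # earned value of work performed to date
ac  = 3_600_000    # actual cost of that work

cpi = ev / ac
etc = (bac - ev) / cpi
eac = ac + etc

print(f"CPI = {cpi:.2f}")
print(f"Estimate To Complete = {etc:,.0f}")
print(f"Estimate At Complete = {eac:,.0f}")
```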

Trust Alone is Open Loop. Trust but Verify is Closed Loop.

Without knowing the Value at Risk it's difficult, if not impossible, to have a conversation about applying any method of managing the spend of other people's money. Here's a clip from another book that needs to be on the shelf of anyone accountable for spending money in the presence of a governance process: Practical Spreadsheet Risk Modeling. Don't do risk management of other people's money? Then you don't need this book or similar ones, and likely don't need to estimate the impact of decisions made using other people's money. Just keep going, your customer will tell you when to stop.
Categories: Project Management

Happy Permutation Day

Fri, 10/24/2014 - 15:05


1024 - 2014

Thanks to Mr. Honner, a mathematics teacher at Brooklyn Technical High School. If you like mathematics and appreciate the contribution a good teacher can make to mathematical understanding - which is woefully lacking in our project management domain - sign up to get his blog posts.

Categories: Project Management

Basis of #NoEstimates are 27 Year Old Reports

Thu, 10/23/2014 - 17:52

The #NoEstimates movement appears to be based on a 27 year old report† that provides examples of FORTRAN and PASCAL programs as the basis on which estimating is done.



A lot has happened since 1987. For a short critique of the Software Crisis report - which is referenced in the #NoEstimates argument - see "There is No Software Engineering Crisis."

  • Parametric modeling tools - the structure of software projects is constrained by the external structures of their components. These can be parameterized for estimating purposes.
  • COCOMO - an example of a parametric estimating tool. There are several others. (A minimal worked example follows this list.)
  • Reference Class Forecasting models have been developed as the result of overruns and disasters in many areas, including software development. Now we know, and we know better than to succumb to all the biases discovered in the past.
  • Monte Carlo Simulation, using Reference Class Forecasting - there are simple and cheap tools available; for $125.00 all the handwaving around forecasting cost, schedule, and other project variables can be modeled with ease.
  • Object Oriented Programming - old news but no more debugging of FORTRAN "Unnamed COMMON" overwriting floating point numbers!!
  • Component Based Software, where we can buy parts and assemble them.
  • SOA and CORBA (TIBCO) where ETL and Enterprise Bus are the part of the Enterprise Architecture. Stop writing application code and start writing scripts to integrate. BTW the example of having developers write database apps for what is essentially a warehousing app, has missed the COTS, component based solutions bus.
  • FPA (Function Point Analysis) - a bit long in the tooth, but the idea is still valid.
  • Databases of Reference Classes
  • The Web and Web components, same as above.
  • CORBA, same as above.
  • All the web based languages
  • All the runtime interpreted languages with built-in debuggers, rather than compiled code with stop-dead runtime debugging.
  • ERP and COTS products and components, with out-of-the-box functions that remove the need to write any code. Configure the system, sure. Write some scripting code (ABAP, for example), but no coding in the developer sense.
  • Software as a Service, where you can buy the solution. That was unheard of in 1986 and 1987.
  • DevOps, another unheard-of idea back then.
  • Open Source and Reuse
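
As a worked example of the parametric tools in the list above, here is the original Basic COCOMO model (Boehm, 1981), which estimates effort and duration from delivered source size alone. A minimal sketch in Python; the 50 KSLOC input is illustrative, and a real estimate today would use a calibrated modern model rather than these 1981 coefficients.

```python
# Basic COCOMO (Boehm, 1981): Effort = a * KLOC^b person-months, Duration = c * Effort^d months.
COEFFICIENTS = {
    # project mode: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, duration in months) for a given size."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    return effort, duration

if __name__ == "__main__":
    effort, duration = basic_cocomo(50, "semi-detached")  # 50 KSLOC, illustrative
    print(f"Effort: {effort:.0f} person-months, Duration: {duration:.1f} months")
```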

Thousands of research and practitioner books and papers on how to estimate software projects have been published. Maybe it's time to catch up with the 21st century approach of estimating the time, cost, and capabilities needed to deliver value for those paying for our work. These approaches answer the mail in the 1987 report, along with the much-referenced NATO Software Crisis report published in 1986.

While estimates have always been needed to make decisions in the paradigm of the microeconomics of software development, the techniques, tools, and data have improved dramatically in the last 27 years. Let's acknowledge that and start taking advantage of the efforts to improve our lot in life of being good stewards of other people's money. And when we hear #NoEstimates can be used to forecast completion times and costs at the end, test that idea with the activities in the Baloney Claims checklist.


† #NoEstimates is an approach to software development that arose from the observation that large amounts of time were spent over the years in estimating and improving those estimates, but we see no value from that investment. Indeed, according to scholars Conte, Dunsmore and Shen [1] a good estimate is one that is within 25% of the actual cost, 75% of the time.

As a small aside, that's not what the statement actually says in the context of statistical estimating. It says there is a 75% confidence that there will be an overage of up to 25%, which needs to be covered with a 25% management reserve to protect the budget. Since all project work is probabilistic, uncertainty is both naturally occurring and event based. Event-based uncertainty can be reduced by spending money. This is a core concept of agile development: do small things to discover what will and won't work. Naturally occurring uncertainty can only be handled with margin. In this statement - remember it's 27 years old - there is a likelihood that a 25% management reserve will be needed 25% of the time a project estimate is produced. If you know that ahead of time, it won't be a disappointment when it occurs 25% of the time.
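
Read as a statistical statement, the "within 25%, 75% of the time" criterion maps directly onto a reserve calculation. A minimal sketch (Python with numpy) under an assumed lognormal cost outcome whose spread was chosen so that roughly 75% of outcomes land within 25% of the point estimate; both the distribution and the spread are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

point_estimate = 1.0   # normalized point estimate of cost
# Illustrative spread: sigma = 0.22 puts roughly 75% of outcomes within 25% of the estimate.
cost = rng.lognormal(mean=0.0, sigma=0.22, size=200_000) * point_estimate

within_25 = (np.abs(cost - point_estimate) <= 0.25 * point_estimate).mean()
p75 = np.percentile(cost, 75)            # cost not exceeded 75% of the time
reserve = p75 - point_estimate           # management reserve for 75% confidence

print(f"Outcomes within 25% of the estimate: {within_25:.0%}")
print(f"75% confidence cost: {p75:.2f}  ->  reserve = {reserve:.0%} of the point estimate")
```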

This is standard best management practice in mature organizations. In some domains, it's mandatory to have Management Reserve built from Monte Carlo Simulations using Reference Classes of past performance.

Related articles: How to Estimate Software Development | Software Requirements Are Vague | How Not To Make Decisions Using Bad Estimates | #NoEstimates? #NoProjects? #NoManagers? #NoJustNo
Categories: Project Management

Baloney Claims: Pseudo–science and the Art of Software Methods

Wed, 10/22/2014 - 16:40

Ascertaining the success and applicability of any claims that are outside the accepted practices of business, engineering, or governance processes requires careful testing of ideas through tangible evidence that they are actually going to do what it is conjectured they're supposed to do.

The structure of this checklist is taken directly from Scientific American's essay on scientific baloney, but it sure feels right for many of the outrageous claims found in today's software development community about approaches to estimating the cost, schedule, and likely outcomes.

How reliable is the source of the claim?

Self-pronounced experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, long out of date, mathematically wrong, missing critical domain and context basis, or occasionally even fabricated.

In many instances the data used to support the claims are weak or poorly formed, relying on surveys of friends or hearsay, small population samples, classroom experiments, or, worse, anecdotal evidence where the expert extends personal experience to a larger population.

Does this source often make similar claims?

Self-pronounced experts have a habit of going well beyond the facts and generalizing their claims to a larger population of problems or domains. Many proponents of ideas make claims that cannot be substantiated within a testable framework. This is the nature of early development in the engineering world. Of course, some great thinkers do frequently go beyond the data in their creative speculations.

But when those creative thinkers are used to support the new claims, it's more likely that the hard work of testing the claim outside of personal experience hasn't been performed.

They said agile wouldn't work, so my conjecture is getting the same criticism and I'll be considered just like those guys when I'm proven right.

Have the claims been verified by another source?

Typically self-pronounced experts make statements that are unverified, or verified only by a source within their own private circle, or whose conclusions are based primarily on anecdotal information.

We must ask, who is checking the claims, and even who is checking the checkers? Outside verification is as crucial to good business decisions as it is to good methodology development.

How does the claim fit with what we know about how the world works?


Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method, approach, or technique results in significant benefits, dramatic changes in an outcome, etc. they are usually not presenting the specific context for the application of their idea.

Such a claim is typically not supported by quantitative statistics as well. There may be qualitative data, but this is likely to be biased by the experimental method as well as the underlying population of the sample statistics.

In most cases to date, the sample size is minuscule compared to that needed to draw correlations and causations to the conjectured outcomes.

Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?


This is the confirmation bias, or the tendency to seek confirmatory evidence and to reject or ignore dis–confirmatory evidence. The confirmation bias is powerful, pervasive and almost impossible to avoid.

It is why the methods that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical.

When self-selected communities see external criticism as harassment, or respond with you're simply not getting it or those people are just like talking to a box of rocks, the confirmation bias is in full force.

Does the preponderance of evidence point to the claimant's conclusion or to a different one?

Evidence is the basis of all confirmation processes. The problem is having evidence alone is necessary but not sufficient. The evidence must somehow be "predicted" by the process, fit the process model, or somehow participate in the process in a supportive manner.

Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?

Unique and innovative ways of conducting research, processing data, and "conjecturing" about the results are not statistically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods for experiments. This course sets the ground rules for conducting research in the field.

Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation? 

This is a classic debate strategy—criticize your opponent and never affirm what you believe to avoid criticism.

Show us your data is the starting point for engaging in a conversation about a speculative idea.

If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did? 

This concept is usually lost on "innovative" claims. The need to explain previous results is mandatory. Without this bridge to past results, a new suggested approach has no foundation for acceptance.

Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?

All claimants hold social, political and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice?

Usually during some peer-review system, such biases and beliefs are rooted out, or the paper or book is rejected.

In the absence of peer review - self publishing is popular these days - there is no external assessment of the ideas, and the author simply reinforces the confirmation bias.

So the next time you hear a suggestion that appears to violate a principle of business, economics, or even physics, think of these questions. Let's move to the #NoEstimates suggestion that we can make decisions in the absence of estimates - that is, that we can make decisions about a future outcome without estimating the cost to achieve that outcome and the impact of that outcome.

The core question is: how can this conjecture be tested beyond the personal anecdotes of those proffering the notion that decisions can be made in the absence of estimates? Certainly those making the claim have no interest in performing that test. It's incumbent on those attempting to apply the notion to first test it for validity, applicability, and simple credibility.

A final recommendation is Ken Schwaber's talk and slides on evidence-based discussions around improving the business of software development, and the book he gave away at the end of the talk, Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management.

Related articles: Falsifiability and Junk Science
Categories: Project Management

Quote of the Day

Tue, 10/21/2014 - 16:23

If a man will begin with certainties, he shall end in doubts, but if he will be content to begin with doubts, he shall end in certainties. - Francis Bacon

Everything in project work is uncertainty. Estimating is required to discover the extent of the range of these uncertainties, the impacts of the uncertainties on cost, schedule, and the performance of the products or services.

To suggest that decisions can be made in the presence of these uncertainties without knowledge of the future outcomes and the cost of achieving those outcomes, or that we can decide between alternatives with future outcomes, completely ignores the notion of uncertainty and the microeconomics of decision making in the presence of these uncertainties.

Categories: Project Management

Decision Making Without Estimates?

Mon, 10/20/2014 - 02:56

In a recent post there are 5 suggestions of how decisions about software development can be made in the absence of estimating the cost, duration, and impact of these decisions. Before looking at each in more detail, let's see what the basis is of these suggestions from the post.

A decision-making strategy is a model, or an approach that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However I would add one more characteristic: a decision-making strategy that helps you chose which software project to start must help you achieve business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Decision making in the presence of the allocation of limited resources is called microeconomics. These decisions - in the presence of limited resources - involve opportunity costs. That is, what is the cost of NOT choosing one of the alternatives - the allocations? To know this means we need to know something about the outcome of NOT choosing. We can't wait to do the work; we need to know what happens - to some level of confidence - if we DON'T do something. How can we do this? We need to estimate what happens if we don't choose one of the possible allocations, since all the outcomes are in the future.
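
A toy illustration of why the opportunity-cost comparison forces estimating: to compare allocations we need estimated, probabilistic outcomes for the path not taken as well as the one taken. The alternatives, costs, values, and probabilities below are all invented:

```python
# Two competing allocations of the same team for the next quarter.
# Each needs an estimate of cost and of the probability-weighted value it returns.
alternatives = {
    "refactor billing platform": {"cost": 300_000, "value": 900_000, "p_success": 0.55},
    "build partner API":         {"cost": 250_000, "value": 600_000, "p_success": 0.80},
}

def expected_net(alt):
    """Expected value of the outcome minus the cost of producing it."""
    return alt["p_success"] * alt["value"] - alt["cost"]

scores = {name: expected_net(alt) for name, alt in alternatives.items()}
best = max(scores, key=scores.get)
forgone = min(scores, key=scores.get)

# Opportunity cost of the chosen path = expected net value of the best forgone alternative.
print(f"choose: {best} (expected net {scores[best]:,.0f})")
print(f"opportunity cost: {scores[forgone]:,.0f} (forgone '{forgone}')")
```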

But first, the post started with suggesting the five approaches are part of Strategy. I'm familiar with strategy making in the domain of software development, having been schooled by two Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications.

What is Strategy?

Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics.

Strategy is creating fit among a firm's activities. The success of a strategy depends on doing many things well – not just a few. The things that are done well must operate within a close-knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm.

Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.

Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves continual improvement of business processes that have no trade-offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.

In contrast, the strategic agenda is the place for making clear trade offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company’s position in the market place. 

“What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.

Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.

I'm going to suggest that each of the five decision processes described in the post is a proper one - one with many approaches - but each has ignored the underlying principle of Microeconomics. This principle is that decisions about future outcomes are informed by the opportunity cost, and that cost requires an estimate - mandates one, actually, since the outcomes are in the future. This is the basis of Real Options, forecasting, and the very core of business decision making in the presence of uncertainty.

The post then asks

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

The 1st question needs another question to be answered first: what are our business goals, and what are the units of measure of these goals? In order to answer the 1st question we need a steering target to know how we are proceeding toward that goal.

The second question is about risk. All risk comes from uncertainty. Two types of uncertainty exist on projects:

Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge. Epistemology is the study of the acquisition of knowledge. We can pay money to buy down this lack of knowledge; that is, Epistemic uncertainty can be reduced with work - risk reduction work. But this leaves open how much time, budget, and performance margin is needed.

ANSWER: We need an Estimate of the Probability of the Risk Coming True. Estimating the Epistemic risk probability of occurrence, the cost and schedule for the reduction efforts, and the probability of the residual risk is done with a probabilistic model. There are several models and many tools. But estimating all the components - occurrence, impact, effort to mitigate, and residual risk - is required.

Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from Aleatory uncertainty is with margin: cost margin, schedule margin, performance margin. But this leaves open the question of how much margin is needed.

ANSWER: We need to estimate the needed margin from the Probability Distribution Function of the underlying statistical process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or the Method of Moments.
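
A minimal sketch of both answers (Python with numpy): aleatory variability handled with margin read off the simulated duration distribution, and an epistemic risk bought down with funded mitigation. Every distribution and number here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# Aleatory: natural variability of the work, handled only with margin.
base_work = rng.triangular(10, 12, 18, size=n)           # months

# Epistemic: a knowable risk ("interface spec may be wrong"), reducible by spending money.
p_occur, impact_if_occurs = 0.30, 4.0                    # probability, months of rework
risk_hit = rng.random(n) < p_occur
duration = base_work + risk_hit * impact_if_occurs

p50, p80 = np.percentile(duration, [50, 80])
print(f"Schedule margin for 80% confidence: {p80 - p50:.1f} months on top of the median plan")

# After a funded mitigation that cuts the probability of occurrence to 10%:
risk_hit_mitigated = rng.random(n) < 0.10
duration_m = base_work + risk_hit_mitigated * impact_if_occurs
print(f"80% confidence duration before/after mitigation: "
      f"{np.percentile(duration, 80):.1f} / {np.percentile(duration_m, 80):.1f} months")
```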

So, one more look at the suggestions before examining further the 5 ways of making decisions in the absence of estimating their impacts and the cost to achieve those impacts.

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

All risk is probabilistic, based on underlying statistical processes - either the process of lack of knowledge (Epistemic) or the process of natural variability (Aleatory). In the consideration of risk we must incorporate these probabilistic and statistical behaviours in our decision making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances that have occurred in the past, we need to know how they might occur in the future. To answer these questions, we need a probabilistic model based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behaviour.

Let's Look At The Five Decision Making Processes

1. Do the most important work first - If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is: validating the new strategy. Note that the goal is not "implement new strategy", but rather "validate new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.

    • Important work first is good strategy. But importance needs a unit of measure. That unit of measure should be found in the strategy. This is the purpose of the strategy. But the strategy needs units of measure as well. Simply saying do important work first doesn't provide a way to make that decision.
    • The notion of validating versus implementing the strategy is artificial. A read of the Strategy Making literature will clear this up. Strategy for business and especially strategy in IT is a very mature domain, with a long history.
    • One approach to generating the units of measure from the strategy is Balanced Score Card, where strategic objectives are mapped to Performance Goals, then to Critical Success Factors, then to the Key Performance Indicators. The way to do that is with a Strategy Map, shown below. (A toy example of the mapping follows this list.)
    • This is the use of strategy as Porter defines it. 
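
A toy illustration of that Strategy Map chain as a simple data structure; the objectives, goals, factors, and indicators below are invented, not taken from any real scorecard:

```python
# Strategy Map as a traceable chain:
# strategic objective -> performance goal -> critical success factor -> key performance indicators
strategy_map = {
    "grow recurring revenue": {
        "performance_goal": "increase subscription renewals to 90%",
        "critical_success_factor": "customers reach first value within 30 days",
        "key_performance_indicators": ["renewal rate", "median days to first value"],
    },
    "reduce cost of service": {
        "performance_goal": "cut cost per support ticket by 20%",
        "critical_success_factor": "self-service resolves routine requests",
        "key_performance_indicators": ["cost per ticket", "self-service resolution rate"],
    },
}

for objective, chain in strategy_map.items():
    print(objective)
    for level, value in chain.items():
        print(f"  {level}: {value}")
```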

[Strategy Map]

2. Do the Highest Technical Risk First - When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

    • This is likely dependent on the technical and programmatic architecture of the project or product. 
    • We may want to establish a platform on which to build riskier components. A platform that is known and trusted, stable, bug free - before embarking on any high risk development.
    • High risk may mean high cost. So doing risky things first has consequences. What are those consequences? One is risking the budget before it's clear we have a stable platform on which to build follow-on capabilities. Knowing something is high risk may mean high cost, and this requires estimating something that will occur in the future - the cost to achieve and the cost of the consequences.
    • So doing the highest technical risk first is itself a risk that needs to be assessed. Without this assessment, this suggestion has no way of being tested in practice.

3. Do the Easiest Work First - Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.

    • This is also dependent on the technical and programmatic architecture of the project or product.
    • It's also counterintuitive relative to #2, since high risk work is not likely to be the easiest to do.
    • These assessments between risk and work sequence require some sort of trade space analysis, and since the outcomes and their impacts are in the future, estimating these is part of the Analysis of Alternatives approach for any non-trivial project where Systems Engineering guides the work processes.

4. Do the Legal Requirements First - In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to improve significantly the time-to-market for your product. A medical organization that successfully adopted agile used this project decision-making strategy with a considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

    • Medical devices are regulated under 21 CFR Parts 800-1299. The suggestion doesn't reference any regulations for medical software, which ranges from patient check-in at the front desk to surgical devices controlled by software.
    • Developing 21 CFR software components first may not be possible until the foundation on which they are built is established, tested, and verified.
    • This means - Quality Planning, Requirements, Design, Construction or Coding, Testing by the Software Developer, User Site Testing, and Maintenance and Software Changes. 
    • Once the plan - a required plan for validation - is in place, the order of the development will be more visible. 
    • Deciding which components to develop, just because they are impacted by Legal Requirements usually means ALL the components. So this approach - Do The Legal Requirements First - usually means do them all.
    • The notion of "Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market" fails to describe how they knew when they would be ready to test out these ideas, and most importantly, how they were able to go to market in the absence of the certification.
    • As well, what type of testing - early trials, full 21 CFR release, human applications, animal testing, etc. - is not stated. I have some experience in the medical device domain.
    • A colleague is the CEO of a medical device firm; I'll confirm the processes of validating software with him.

5. Liability Driven Investment - This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.

    • It's not clear why this is called liability. Liability on the balance sheet is an obligation to pay. Deciding what work to do now to generate needed revenue is certainly a strategy. Value Stream Mapping or Impact Mapping is a way to define that. But liability seems to be the wrong term.
    • Not sure how that connects with a securities exchange and what problem they are solving using the term liabilities. Shorts are obligations to pay in the future when the short is called. Puts and calls are terms used in stock trading, but developing software products is not trading. The Real Options used by the poster in the past don't exercise the Option, so the liability to pay doesn't seem to connect here.


  1. Risk Informed Decision Handbook, NASA/SP-2010-576 Version 1.0 April 2010.
  2. General Principles of Software Validation; Final Guidance for Industry and FDA Staff, US Food and Drug Administration.

  3. Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Robert Kaplan and David Norton, Harvard Business Press.
  4. Estimating Optimal Decision Rules in Presence of Model Parameter Uncertainty, Christopher Joseph Bennett, Vanderbilt University, June 6, 2012.
Related articles: Making Choices in the Absence of Information
Categories: Project Management

Quote of the Day

Mon, 10/20/2014 - 01:08

To achieve great things, two things are needed; a plan, and not quite enough time.
− Leonard Bernstein

Plan of the Day (CV-41)

The notion that planning is a waste is common in domains where mission-critical, high-risk / high-reward, must-work type projects do not exist.

Notice the Plan and the Planned delivery date. The notion that deadlines are somehow evil goes along with the lack of understanding that a business needs a set of capabilities to be in place on a date in order to start booking the value in the general ledger.

Plans are strategies. Strategies are hypotheses. A hypothesis is tested with experiments. Experiments show from actual data what the outcome of the work is. These outcomes are used as feedback to take corrective actions at the strategic and tactical level of the project.

This is called Closed Loop Control. Set the strategy, define the units of measure for the desired outcome - Measures of Effectiveness and Measures of Performance. Perform work and assess these measures. Determine the variance between the planned outcomes and the needed outcomes. Take corrective action by adjusting the plan to keep the project moving toward the strategic goals. For Closed Loop Control we need (a minimal sketch in code follows the list):

  • A steering target for some future state.
  • A measure of the current state.
  • The variance between current and future.
  • Corrective actions to put the project back on track toward the desired state.
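
Here is that minimal sketch of the loop in code, assuming a simple steering target of cumulative percent complete per month; the numbers and the 5-point trigger threshold are invented for illustration:

```python
# Closed-loop project control: compare planned vs. actual progress each period,
# compute the variance, and take a corrective action that feeds back into the plan.
planned   = [10, 20, 30, 40, 50, 60]   # steering target: cumulative % complete per month
actual    = [ 8, 17, 24, 33, 44, 55]   # measured current state
threshold = 5                          # variance (in % points) that triggers corrective action

for month, (p, a) in enumerate(zip(planned, actual), start=1):
    variance = p - a
    if variance > threshold:
        action = "re-plan remaining work / add margin"   # corrective action
    elif variance > 0:
        action = "watch item"
    else:
        action = "on or ahead of plan"
    print(f"month {month}: planned {p}%, actual {a}%, variance {variance:+d} -> {action}")
```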

Related articles: Project Risk Management, PMBOK, DoD PMBOK and Edmund Conrow's Book
Categories: Project Management

What Is Systems Architecture And Why Should We Care?

Sat, 10/18/2014 - 20:10

If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process. Grouping data and processes into information systems creates the rooms of the system architecture. Arranging the data and processes for the best utility is the result of deploying an architecture. Many of the attributes of building architecture are applicable to system architecture. Form, function, best use of resources and materials, human interaction, reuse of design, longevity of the design decisions, robustness of the resulting entities are all attributes of well designed buildings and well designed computer systems. [1]

In general, an architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how those parts fit and work together. An architecture may be conceptualized from a specific perspective focusing on an aspect or view of its subject. These architectural perspectives themselves can become components in a higher–level architecture serving to integrate and unify them into a higher level structure.

The architecture must define the rules, guidelines, or constraints for creating conformant implementations of the system. While this architecture does not specify the details on any implementation, it does establish guidelines that must be observed in making implementation choices. These conditions are particularly important for component architectures that embody extensibility features to allow additional capabilities to be added to previously specified parts. [2] This is the case where Data Management is the initial deployment activity followed by more complex system components.

By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result:

  • Business processes are streamlined – a fundamental benefit of building an enterprise information architecture is the discovery and elimination of redundancy in the business processes themselves. In effect, it can drive the reengineering of the business processes it is designed to support. This occurs during the construction of the information architecture. By revealing the different organizational views of the same processes and data, any redundancies can be documented and dealt with. The fundamental approach to building the information architecture is to focus on data, process, and their interaction.
  • Systems information complexity is reduced – the architectural framework reduces information system complexity by identifying and eliminating redundancy in data and software. The resulting enterprise information architecture will have significantly fewer applications and databases as well as a resulting reduction in intersystem links. This simplification also leads to significantly reduced costs. Some of those recovered costs can and should be reinvested into further information system improvements. This reinvestment activity becomes the raison d'être for the enterprise–wide system deployment.
  • Enterprise–wide integration is enabled through data sharing and consolidation – the information architecture identifies the points to deploy standards for shared data. For example, most Kimball business units hold a wealth of data about products, customers, and manufacturing processes. However, this information is locked within the confines of the business unit specific applications. The information architecture forces compatibility for shared enterprise data. This compatible information can be stripped out of operational systems, merged to provide an enterprise view, and stored in data repositories. In addition, data standards streamline the operational architecture by eliminating the need to translate or move data between systems. A well–designed architecture not only streamlines the internal information value chain, but it can provide the infrastructure necessary to link information value chains between business units or allow effortless substitution of information value chains.
  • Rapid evolution to new technologies is enabled – client / server and object–oriented technology revolves around the understanding of data and the processes that create and access this data. Since the enterprise information architecture is structured around data and process and not redundant organizational views of the same thing, the application of client / server and object–oriented technologies is much cleaner. Attempting to move to these new technologies without an enterprise information architecture will result in the eventual rework of the newly deployed system.

[1] The Timeless Way of Building, C. Alexander, Oxford University Press, 1979.

[2] “How Architecture Wins Technology Wars,” C. Morris and C. Ferguson, Harvard Business Review, March–April 1993, pp. 86–96.

Categories: Project Management

Connecting the Dots Between Technical Performance and Earned Value Management

Wed, 10/15/2014 - 15:36

We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.

Categories: Project Management

Project Risk Management, PMBOK, DoD PMBOK and Edmund Conrow’s Book

Tue, 10/14/2014 - 17:08

In a recent post to "Who Is Ed Conrow?" a responder asked about the differences between the PMBOK® Risk approach and the DoD PMBOK risk approach, as well as a summary of the book Effective Risk Management: Some Keys to Success, by Edmund Conrow. Ed worked the risk management processes for a NASA proposal I was on. I was the IMP/IMS lead, so integrating Risk with the Integrated Master Plan / Integrated Master Schedule in the manner he prescribed was a life-changing experience. I was naive before, but no longer after that proposal won ~$7B for the client.


Let me start with a few positioning statements:

  1. Project risk management is a topic with varying levels of understanding, interest, and applicability. The PMI PMBOK® provides a “starter kit” view of project risk. It covers the areas of risk management but does not have guidance on actually “doing” risk management. This results many times in the false sense that “if we’re following PMBOK® then we’re OK.”
  2. The separation of technical risk from programmatic risk is not clearly discussed in PMBOK®. While not a “show stopper” issue for some projects, programmatic risk management is critical for enterprise class projects. By enterprise I mean, ERP, large product development, large construction, aerospace and defense class programs. In fact, DI-MGMT-81861 mandates programmatic risk management processes for procurements greater than $20M. $20M sounds like a very large sum of money for the typical internal IT development project. It hardly keeps the lights on an aerospace and defense program.
  3. The concepts around schedule variances are misunderstood and misapplied in almost every discussion of risk management in the popular literature. The common red-herring is the ineffective use of Critical Path and PERT. This of course is simply a false statement in domains outside IT or small low risk projects. Critical Path, PERT and Monte Carlo Simulation are mandated by government procurement guidelines. Not that these guides make them “correct.” What makes them correct is that programs measurably benefit from their application. This discipline is called Program Planning and Controls (PP&C) and is a profession in aerospace, defense, and large construction. No amount of “claiming the processes don’t work” removes the documented facts they do, when properly applied. Anyone wishing to learn more about these approaches to programmatic risk management need only look to the NASA, Department of Energy, and Department of Defense Risk Management communities.

With all my biases out of the way, let’s look at the DoD PMBOK®

  1. First, the DoD PMBOK® is free. The original download location appears to be gone, so now you can find it here. This is a US Department of Defense approved document. It provides supplemental guidance to the PMI PMBOK®, but in fact replaces a large portion of PMI PMBOK®.
  2. Chapter 11 of DoD PMBOK® is Risk. It starts with the Risk Management Process areas – in Figure 11-1, page 125. (I will not put them here, because you should download the document and turn to that page.) The diagram contains six (6) process areas, the same number as the PMI PMBOK®. But what's missing from PMI PMBOK® and present in DoD PMBOK® is how these processes interact to provide a framework for Risk Management.
  3.  There are several missing critical concepts in PMI PMBOK® that are provided in DoD PMBOK®.
    • The Risk Management structure in Figure 11-1 is the first. Without the connections between the process areas in some form other than "linear," the management of technical and programmatic risk turns into a "check list." This is the approach of PMI PMBOK® - to provide guidance on the process areas and leave it up to the reader to develop a process around these areas. This is also the approach of CMMI. This is an area too important to leave it up to the reader to develop the process.
    • The concept of the Probability and Impact matrix is fatally flawed in PMI PMBOK®. It turns out you cannot multiply the probability of occurrence by the impact of this occurrence. These "numbers" are not numbers in the sense of integer or real valued numbers. They are probability distributions themselves. Multiplication is not an operation that can be applied to a probability distribution. Distributions are integral equations, and the multiplication operator × is actually the convolution operator ⊗. (A minimal sketch of the distributional view follows this list.)
    • The second fatal flaw of the PMI PMBOK® approach to probability of occurrence and impact is that these “numbers” are uncalibrated cardinal values. That is, they have no actual meaning, since their “units of measure” are not calibrated.
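To make the point concrete, here is a minimal simulation sketch, not drawn from the DoD PMBOK® or from Conrow’s book, of treating both probability of occurrence and impact as distributions and combining them by simulation. The distribution choices and parameters are illustrative assumptions only; the point is that the result is itself a distribution to be read at percentiles, not a single multiplied score.

```python
# A minimal sketch of why a single "probability x impact" score loses information.
# Both inputs are treated as distributions, and the combined exposure is itself a
# distribution that must be summarized by percentiles, not one multiplied number.
# All distribution choices and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain probability of occurrence (Beta) and uncertain cost impact ($K, lognormal)
p_occurrence = rng.beta(a=4, b=6, size=n)                       # centered near 0.4
cost_impact = rng.lognormal(mean=np.log(250), sigma=0.5, size=n)

# Simulate whether the risk fires in each trial, then the realized exposure
occurred = rng.random(n) < p_occurrence
exposure = np.where(occurred, cost_impact, 0.0)

naive_point_score = 0.4 * 250                                   # the "P x I" single number
print(f"Naive P x I point score : {naive_point_score:8.1f} $K")
print(f"Mean simulated exposure : {exposure.mean():8.1f} $K")
print(f"80th percentile exposure: {np.quantile(exposure, 0.80):8.1f} $K")
print(f"95th percentile exposure: {np.quantile(exposure, 0.95):8.1f} $K")
```

The naive point score and the simulated mean differ, and neither says anything about the tail of the exposure, which is where the program-killing outcomes live.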

Page 124 of DoD PMBOK® summarizes the principles of Risk Management as developed in two seminal sources.

  1. Effective Risk Management: Some Keys to Success, Edmund Conrow, American Institute of Aeronautics and Astronautics, 2000.
  2. Risk Management Guide for DoD Acquisition, Sixth Edition (Version 1.0), August 2006, US Department of Defense (part of the Defense Acquisition Guidebook, §11.4), published by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD(AT&L)), Systems and Software Engineering / Enterprise Development.

Now all these pedantic references are here for a purpose. This is how people who manage risk for a living manage risk. By risk, I mean technical risk that results in loss of mission or loss of life, and programmatic risk that results in the loss of billions of taxpayer dollars. They are serious enough about risk management not to let the individual project or program manager interpret the vague notions in the PMI PMBOK®. These may appear to be harsh words, but the road to the management of enterprise class projects is littered with disasters. You can read every day about IT projects that are 100% over budget and 100% behind schedule. From private firms to the US Government, the trail of destruction is front page news.

A Slight Diversion – Why are Enterprise Projects So Risky?

There are many reasons for failure – too many to mention – but one is the inability to identify and mitigate risk. The words “identify” and “mitigate” sound simple. They are listed in the PMI PMBOK® and the DoD PMBOK®. However, here is where the problem starts:

  1. The process of separating risk from issue.
  2. Classifying the statistical nature of the risk.
  3. Managing the risk process independently from project management and technical development.
  4. Incorporating the technical risk mitigation processes into the schedule.
  5. Modeling the impact of technical risk on programmatic risk (a minimal simulation sketch follows this list).
  6. Modeling the programmatic risk independent from the technical risk.
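Here is a minimal sketch of item 5, my own illustration rather than a method from the DoD PMBOK® or Conrow: discrete technical risks, each with a probability of occurrence and an uncertain schedule impact, are propagated onto a baseline schedule estimate. The activities, probabilities, and durations are hypothetical assumptions.

```python
# A minimal sketch of propagating discrete technical risks into the programmatic
# picture. A baseline schedule estimate is combined with risk events that each
# have a probability of occurring and an uncertain schedule impact if they do.
# All risk names, probabilities, and durations are hypothetical assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Baseline duration (months), modeled here as a triangular distribution
baseline = rng.triangular(left=10, mode=12, right=18, size=n)

# Hypothetical technical risks: (name, probability of occurrence, min delay, max delay)
risks = [
    ("Sensor integration fails first test", 0.30, 1.0, 4.0),
    ("Supplier slips qualification",        0.20, 2.0, 6.0),
    ("Flight software rework",              0.15, 1.0, 3.0),
]

total = baseline.copy()
for _, p, lo, hi in risks:
    fired = rng.random(n) < p
    delay = rng.uniform(lo, hi, size=n)
    total += np.where(fired, delay, 0.0)

print(f"Baseline P50      : {np.quantile(baseline, 0.5):5.1f} months")
print(f"Risk-adjusted P50 : {np.quantile(total, 0.5):5.1f} months")
print(f"Risk-adjusted P80 : {np.quantile(total, 0.8):5.1f} months")
```

The gap between the baseline percentile and the risk-adjusted percentile is the schedule margin the risks demand; a plan that ignores it is a plan to be late.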

Using Conrow as a Guide

Here is one problem. When you use the complete phrase “Project Risk Management” with Google, you get ~642,000 hits. There are so many books, academic papers, and commercial articles on Risk Management, where do we start? Ed Conrow’s book is probably not the starting point for learning how to practice risk management on your project. However, it might be the ending point. If you are in the software development business, a good starting point is – Managing Risk: Methods for Software Systems Development, Elaine M. Hall, Addison Wesley, 1998. Another broader approach is Continuous Risk Management Guidebook, Software Engineering Institute, August 1996. While these two sources focus on software, they provide the foundation for the discussion of risk management as a discipline.

There are public sources as well:

  1. Start with the NASA Risk Management page.
  2. The Software Engineering Institute’s Risk Management page.
  3. A starting point for other resources from NASA.
  4. The Department of Energy’s Risk Management Handbook (which uses the DoD Risk Process Map described above).

However, care needs to be taken once you go outside the government boundaries. There are many voices plying the waters of “risk management,” as well as other voices with “axes to grind” regarding project management methods and risk management processes. The result is often a confusing message full of anecdotes, analogies, and alternative approaches to the topic of Risk Management.

Conrow in his Full Glory

Before starting into the survey of the Conrow book, let me state a few observations:

  1. This book is tough going. I mean really tough going. Tough in the way a graduate statistical mechanics book is tough going, or a graduate micro-economics of managerial finance book is tough going. It is “professional grade” stuff. By “professional grade” I mean written by professionals to be used by professionals.
  2. Not every problem has the same solution need. Conrow’s solutions may not be appropriate for a software development project with 4 developers and a customer in the same room. But for projects that have multiple teams, locations, and stakeholders, some type of risk management is needed.
  3. The book is difficult to read for another reason. It assumes you have “a reasonable understanding of the issues” around risk management. This means it is not a tutorial style or “risk management for dummies” book. It is a technical reference book. There is little in the way of introductory material or bringing the reader up to speed before presenting the material.

From the introduction:

The purpose of this book is two-fold: first, to provide key lessons learned that I have documented from performing risk management on a wide variety of programs, and second, to assist you, the reader, in developing and implementing an effective risk management process on your program.

A couple of things here. One is the practical experience in risk management. Many in the risk management “talking” community have limited experience with risk management of the kind Ed has. I first met Ed on a proposal for an $8B manned spaceflight program. He was responsible for the risk strategy and for conveying that strategy in the proposal. The proposal resulted in an award, and now our firm provides Program Planning and Controls for a major subsystem of the program. In this role, programmatic and technical risk management is part of the Statement of Work flowed down from the prime contractor.

Second, Ed is a technical advisor to the US Arms Control and Disarmament Agency, as well as a consultant to industry and government on risk management. These “resume” items are meant to show that the practice of risk management is just that – a practice. Speaking about risk management and doing risk management on high risk programs are two different things.

One of Ed’s principal contributions to the discipline was the development of a micro-economic framework of risk management in which design feasibility (or technical performance) is traded against cost and schedule.

In the end, this is a reference text for the process of managing the risk of projects, written by a highly respected practitioner.

What does the Conrow Book have to offer over the Standard approach?

Ed’s book contains the current “best practices” for managing technical and programmatic risk. These practices are used on high risk, high value programs. The guidelines in Ed’s book are generally applicable to many other classes of projects as well. But there are several critical elements that differentiate this approach from the pedestrian approach to risk management.

  1. The introduction of the “ordinal risk scale.” This approach is dramatically different from the PMI PMBOK® description of risk, in which the probability of occurrence is multiplied by the consequences to produce a “risk rating.” Neither the probability of occurrence nor the consequences are calibrated in any way. The result is a meaningless number that may satisfy the C-level that “risk management” is being done on the program. By ordinal risk scales is meant a classification of risk – say A, B, C, D, E, F – whose levels are descriptive in nature, not just numbers. By the way, the use of letters is intentional. If numbers are used for ordinal risk ranks, there is a tendency to do arithmetic on them: compute the average risk rank, or multiply them by the consequences. Letters remove this temptation and prevent the first failure of the common risk management approach – producing an overall risk measure.

The ordinal approach works like this. Ed describes some classes of risk scales which include: maturity, sufficiency, complexity, uncertainty, estimative probability, and probability based scales.

A maturity risk scale would be (ordered from lowest to highest scale level):

    • Basic principles observed
    • Concept design analyzed for performance
    • Breadboard or brassboard validation in relevant environment
    • Prototype passes performance tests
    • Item deployed and operational


 The critical concept is to relate the risk ordinal value to an objective measure. For a maturity risk assessment, some “calibration” of what it means to have the “basic principles observed” must be developed. This approach can be applied to the other classes – sufficiency, complexity, uncertainty, estimative probability and probability based scales.
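Here is a minimal sketch of what such an ordinal scale looks like in code. It is my own illustration, not Conrow’s implementation; the letter assignments and the hypothetical `assess` helper are assumptions, and the level definitions simply paraphrase the maturity scale above. The point is that levels are letters with calibrated definitions, so nothing tempts anyone to average or multiply them.

```python
# A minimal sketch (my illustration, not Conrow's) of an ordinal maturity risk
# scale. Levels are letters with calibrated, objective definitions, deliberately
# not numbers, so nothing invites averaging or multiplication.
from enum import Enum

class MaturityRisk(Enum):
    E = "Basic principles observed"
    D = "Concept design analyzed for performance"
    C = "Breadboard or brassboard validation in relevant environment"
    B = "Prototype passes performance tests"
    A = "Item deployed and operational"

def assess(evidence: str) -> MaturityRisk:
    """Map objective evidence to an ordinal level; the calibration lives in the
    definitions, not in any numeric score. (Hypothetical helper for illustration.)"""
    for level in MaturityRisk:
        if evidence == level.value:
            return level
    raise ValueError(f"No calibrated level matches: {evidence!r}")

risk = assess("Prototype passes performance tests")
print(risk.name, "-", risk.value)   # B - Prototype passes performance tests
# Note: expressions like MaturityRisk.B * 3, or averaging levels, raise a
# TypeError by design - ordinal levels are not arithmetic quantities.
```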

It’s the estimative probability scale that is important to the cost and schedule people in our PP&C practice. The estimative probability scale attempts to relate a word to a probability value – “High” to 80%, for example. An ordinal estimative probability scale, using point estimates derived from a statistical analysis of survey data, might look like this:


[Table: ordinal estimative probability scale – columns “Scale Level” and “Median probability value”, with rows mapping phrases such as “Almost no chance” to their median probability values.]



Calibrating these risk scales is the primary analysis task of building a risk management system. What does it mean to have a “medium” risk, in the specific problem domain?
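A minimal sketch of that calibration step follows, my own illustration rather than the book’s procedure: derive the median probability value for each estimative phrase from survey responses, so that words like “High” or “Almost no chance” carry a domain-specific, calibrated meaning. The phrases and survey numbers are hypothetical assumptions.

```python
# A minimal sketch (my illustration) of calibrating an estimative probability
# scale: each phrase gets the median of the probabilities respondents assigned
# to it. The phrases and survey numbers below are hypothetical assumptions.
from statistics import median

# Hypothetical survey: each respondent assigns a probability to each phrase
survey_responses = {
    "Almost no chance": [0.02, 0.05, 0.03, 0.08, 0.04],
    "Unlikely":         [0.20, 0.25, 0.15, 0.30, 0.22],
    "Likely":           [0.60, 0.70, 0.65, 0.75, 0.68],
    "High":             [0.80, 0.85, 0.78, 0.90, 0.82],
}

calibrated_scale = {
    phrase: median(values) for phrase, values in survey_responses.items()
}

for phrase, p in calibrated_scale.items():
    print(f"{phrase:18s} -> median probability {p:.2f}")
```

The survey and its analysis are where the real work is; the code is trivial once the domain has agreed on what the words mean.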

  2. The second item is the formal use of a risk management process. Simply listing the risk process areas – as is done in the PMBOK® – is not only poor project management practice, it is poor risk management practice. The processes to be used are shown on page 135 of the book, and the application of these processes is described in detail. No process area is optional. All are needed. All need to be performed in the right relationship to each other.

These two concepts are the ones that changed the way I perform risk management on the programs I’m involved with and how we advise our clients. They are paradigm changing concepts. No more simple-minded arithmetic with probabilities and consequences. No more uncalibrated risk scales. No more tolerating those who claim PERT, Critical Path, and Monte Carlo are unproven, obsolete, or “wrong headed” approaches.

Get Ed’s book. It’ll cost way too much when compared to the “paperback” approach to risk. But for those tasked with “managing risk,” this is the starting point.


Categories: Project Management

Managing Your Project With Dilbert Advice — Not!

Mon, 10/13/2014 - 16:11

Scott Adams provides cartoons of what not to do for most things technical – software and hardware. I actually saw him once, when he worked for PacBell in Pleasanton, CA. I was on a job at a major oil company deploying document management systems for OSHA 1910.119 – process safety management – and integrating CAD systems for control of safety critical documents.

The most popular use of Dilbert cartoons lately has been by the #NoEstimates community, in support of the notion that estimates are somehow evil, are used to make commitments that can't be met, and generally should be avoided when spending other people's money.

The cartoon below resonated with me for several reasons. What's happening here is the classic misguided behavior of intentionally ignoring the established processes of Reference Class Forecasting. As well, in typical Dilbert fashion, doing stupid things on purpose.

[Dilbert cartoon]

Reference Class Forecasting is a well developed estimating process used across a broad range of technical, business, and finance domains. The characters above seem not to know anything about RCF. As a result, they are DSTOP – Doing Stupid Things On Purpose.

Here's how not to DSTOP when producing cost and schedule estimates, their associated risks, and the technical risk that the product you're building can't do what it's supposed to do on or before the date it needs to, at or below the cost you need, in order to stay in business.

The approach below may be complete overkill for your domain. So start by asking what's the Value at Risk. How much of our customer's money are we willing to write off if we don't have a sense of what DONE looks like, in units of measure meaningful to the decision makers? Don't know that? Then it's likely you've already put that money at risk, you're likely late, and you don't really know what capabilities will be produced when you run out of time and money.
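For those who have never seen Reference Class Forecasting in action, here is a minimal sketch. It is my own illustration, not a procedure from any RCF standard: take the outcomes of completed projects that resemble the new one, build an empirical distribution of cost growth, and read the estimate at a chosen confidence level. The reference-class data and base estimate below are hypothetical assumptions.

```python
# A minimal sketch (my illustration) of Reference Class Forecasting: use the
# observed cost growth of similar completed projects to adjust a new estimate,
# read at a chosen confidence level. The data below is hypothetical.
import numpy as np

# Hypothetical reference class: actual cost / estimated cost for similar past projects
cost_growth_ratios = np.array([
    1.05, 1.10, 1.22, 0.98, 1.35, 1.15, 1.48, 1.08, 1.25, 1.60,
    1.12, 1.30, 1.02, 1.18, 1.40,
])

base_estimate = 2_400  # bottom-up estimate for the new project, $K (assumed)

p50_uplift = np.quantile(cost_growth_ratios, 0.50)
p80_uplift = np.quantile(cost_growth_ratios, 0.80)

print(f"P50 forecast: {base_estimate * p50_uplift:7.0f} $K")
print(f"P80 forecast: {base_estimate * p80_uplift:7.0f} $K")
```

The hard part is not the arithmetic; it is assembling an honest reference class instead of pretending the new project has no relatives.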

Don't end up a cartoon character in a Dilbert strip. Learn how to properly manage your efforts, and the efforts of others, using your customer's money.

Managing in the presence of uncertainty from Glen Alleman
Categories: Project Management

Intentional Disregard for Good Engineering Practices?

Mon, 10/13/2014 - 03:22

It seems lately there is an intentional disregard of the core principles of business development of software intensive systems. The #NoEstimates community does this, but other collections of developers do as well.

  • We'd rather be writing code than estimating how much it's going to cost to write the code.
  • Estimates are a waste.
  • The more precise the estimate, the more deceptive it is.
  • We can't predict the future and it's a waste trying to.
  • We can make decisions without estimating.

These notions are of course seriously misinformed about how probability and statistics work in the estimating paradigm. I've written about this in the past. But there are a few new books we're putting to work in our Software Intensive Systems (SIS) work that may be of interest to those wanting to learn more.

These are foundation texts for the profession of estimating. The continued disregard – ignoring, possibly – of these principles has become all too clear. Not just in the sole contributor software development domain, but all the way to multi-billion dollar programs in defense, space, infrastructure, and other high risk domains.

Which brings me back to a core conjecture – there is no longer any engineering discipline in the software development domain, at least outside embedded systems like flight controls, process control, telecommunications equipment, and the like. There was a conjecture a while back that the Computer Science discipline at the university level should be split into software engineering and coding.

Here's a sample of the Software Intensive System paradigm, where the engineering of the systems is a critical success factor. And Yes Virginia, the Discipline of Agile is applied in the Software Intensive Systems world - emphasis on the term DISCIPLINE.

Categories: Project Management

Principles Trump Practices

Sat, 10/11/2014 - 15:23

Principles, Practices, and Processes are the basis of successful project management. It is popular in some circles to think that practices come before Principles.

The principles of management, project management, software development and its management, product development management are immutable.

What does done look like, what's our plan to reach done, what resources will we need along the way to done, what impediments will we encounter and how will we overcome them, and how are we going to measure our progress toward done in units meaningful to the decision makers?

These are immutable principles. These immutable principles can then be used to test practices and processes, by asking what evidence there is that the practice or process enables the principle to be applied, and how we know that the principle is being fulfilled.


Categories: Project Management

What Informs My Project Management View

Fri, 10/10/2014 - 15:44

In a recent discussion (of sorts) about estimating – Not Estimating, actually – I realized something that should have been obvious. I travel in a world not shared by the staunch advocates of #NoEstimates. They appear to be sole contributors. I came to this after reading Peter Kretzman's 3rd installment, where he re-quoted a statement by Ron Jeffries:

Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before.

This is a sole contributor or small team paradigm.

So let's pretend we work at Price/Waterhouse/Cooper and we're playing our roles – Peter as CIO advisor, me as Program Performance Management adviser. We've been asked by our new customer to develop a product from scratch, estimate the cost and schedule, and provide some confidence level that the needed business capabilities will be available on or before a date and at or below a cost. Why, you ask? Because that's in the business plan for this new product, and if they're late or overrun the planned cost, that will be a balance sheet problem.

What would we do? Well, we'd start with the PWC resource management database – held by HR – and ask for people with “past performance experience” in the business domain and the problem domain. Our new customer did not “invent” a new business domain, so it's likely we'll find people who know what our new customer does for money. We'd look to see whether, among the 195,433 people in the database who work for PWC worldwide, there is someone, somewhere, who knows what the customer does for money and what kinds of business capabilities this new system needs to provide. If there is no one, then we'd look in our database of 10,000 or so partner relationships to find someone.

If we found no one who knows the business and the needed capabilities, we’d no bid.

This notion of “I've been asked to do something that’s never been done before, so how can I possibly estimate it” really means “I'm doing something I’ve never done before.” And since “I’m a sole contributor, the population of experience in doing this new thing for the new customer is ONE – me." So since I don't know how the problem has been solved in the past, I can't possibly know how to estimate the cost, schedule, and needed capabilities. And of course I'm absolutely correct to say - new development with unknown requirements can't be estimated. Because those unknown requirements are actually Unknown to me, but may be known to another. But in the population of 195,000 other people in our firm, I'm no longer alone in my quest to come up with an estimate.

So the answer to the question, “what if we encounter new and unknown needs, how can we estimate?” is actually a core problem for the sole contributor, or small team. It'd be rare that the sole contributor or small team would have encountered the broad spectrum of domains and technologies needed to establish the necessary Reference Classes to address this open ended question. This is not the fault of the sole contributor. It is simply the situation of small numbers versus large numbers.
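Here is a minimal sketch of that small-numbers problem, my own illustration with hypothetical firm-wide data: with only one or two relevant past efforts, the reference class reveals almost nothing about the range of outcomes, while with many it stabilizes.

```python
# A minimal sketch (my illustration) of the small-numbers problem: the spread of
# outcomes visible in a reference class depends on how many relevant past efforts
# you can draw on. The "firm-wide" history below is a hypothetical assumption.
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical firm-wide history of duration growth (actual / planned) for
# projects in the same business and problem domain.
firm_history = rng.lognormal(mean=0.15, sigma=0.30, size=500)

for sample_size in (1, 3, 10, 50, 200):
    reference_class = rng.choice(firm_history, size=sample_size, replace=False)
    p10, p90 = np.quantile(reference_class, [0.10, 0.90])
    print(f"n={sample_size:3d}  P10={p10:4.2f}  P90={p90:4.2f}  spread={p90 - p10:4.2f}")

# With n=1 the spread is zero - not because there is no uncertainty, but because
# a single data point (the sole contributor's own experience) cannot reveal it.
```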

This is the reason the PWC’s of the world exist. They get asked to do things the sole contributors never have an opportunity to see.

Related articles Software Requirements Are Vague
Categories: Project Management