Hugh MacLeod's art for Zappos provides the foundation for trust in that environment.
If I'm the head of HR, I'm responsible for filling the desks at my company with amazing employees. I can hold people to all the right standards. But ultimately I can't control what they do. This is why hiring for culture works. What Zappos does is radical because it trusts. It says "Go do the best job you can do for the customer, without policy." And it leaves employees to come up with human solutions. Something it turns out they're quite good at, if given the chance.
Now let's take another domain, one I'm very familiar with - fault tolerant process control systems. Software and support hardware applied to emergency shutdown of exothermic chemical reactors - those that make the unleaded gasoline for our cars - nuclear reactors and conventional fired power generation, gas turbine controls, and other must work properly machines. And the similar domain of DO-178C flight control systems, which must equally work without fail and provide all the needed capabilities on day one.
At Zappos, the HR Director describes a work environment where employees are free to do the best job they can for the customer. In the domains above, employees also work to do the best job for the customer they can, but flight safety, life safety, and equipment safety are also part of that best job. In other domains where we work, doing the best job for the customer means processing, with extremely low error rates, transactions worth hundreds of millions of dollars in the enterprise IT paradigm. Medical insurance provider services, HHS enrollment, enterprise IT in a variety of domains.
Zappos can recover from an error; other domains can't. Nonrecoverable errors mean serious loss of revenue, or even loss of life. In those other domains, failure has similar consequences. I come from those domains, and they inform my view of the software development world - where software fail safe and fault tolerance are the basis of business success.
So when we hear about the freedom to fail early and fail often in the absence of a domain or context, care is needed. Without a domain and context, it is difficult to assess the credibility of any concept, let alone one that is untested outside of personal anecdote. It comes down to Trust alone or Trust But Verify. I can also guarantee that Zappos has some of the verify process. It is doubtful employees are left to do anything they wish for their customer, for the simple reason that there is a business governance process at any firm, no matter the size. Behavior, even full trust behavior, fits inside that governance process.
All the rhetoric around any idea needs actionable outcomes that can be tested in the marketplace, beyond the personal anecdotes of self-selected conversations.
The question asked by #NoEstimates is in the form of a statement.
On the surface this statement sounds interesting until the second sentence. The microeconomics of writing software for money is based on estimating future outcomes that result from current day decisions. But let's pretend for a moment that microeconomics is beyond consideration - this is never true, but let's pretend.
The next approach is to construct a small decision tree that can invert the question. Forget the exploring, since all business effort is a zero-sum game, in that someone has to pay for everything we do: exploring, coding, testing, installing, training, even operating.
So let's start down the flow chart.
Is It Your Money?
In the crass world of capitalism, money talks, BS walks. While this may be abhorrent to some, it's the way the world works, and unless you've got your own bank, you're likely going to have to use other people's money to produce software - either for yourself or for someone else. Self-funded startup? No problem, but even the best known names in software today went on to raise more money to move the firm forward. Then self-funded became venture funded, private equity funded, and then publicly funded.
If you're writing software for money, and it's not your money, those providing the money have - or should have if they're savvy investors - a vested interest in knowing how much this will cost, as well as when it will be done and, most importantly, what will be delivered during the work effort and at the end.
This requires estimating.
Is There A Governance Policy Where You Work?
Governance of software development - whether internal projects, external projects, or product development - is a subset of corporate governance.
If you work at a place that has no corporate governance, then estimating is probably a waste.
If, however, you work somewhere that does have a corporate governance process - no matter how simple - and this is likely the case when there is a non-trivial amount of money at risk, then someone, somewhere in the building has an interest in knowing how much things will cost before you spend the money to do them or buy them.
This requires estimating.
What's the Value at Risk for Your Project?
If the value at risk for a project is low - that is, if you spend all the money and consume all the time and produce nothing, and the management of your firm writes that off as a loss without much concern - then estimating probably doesn't add much value.
But if those providing you the money have an expectation that something of value will be returned, and that something is needed for the business, then writing off the time and cost is not likely to be seen as favorable to you, the provider.
We trusted you because you said "trust me," and you didn't show up on or before the planned time, at or below the planned budget, with the needed capabilities - and you didn't want to estimate those up front and keep us informed about your new and updated Estimate To Complete and Estimate At Complete so we could take corrective actions to help you out - so we're going to suggest you look for work somewhere else.
On low value projects, estimating the probability of success, the probability of the cost of that success, and the probability of the completion date of that success is not likely of much value.
But using the Value at Risk paradigm, the risk of loss of a specific asset (in this case the value produced by the project) is defined with a threshold loss value: the probability that the loss of value from the project over a given time horizon exceeds that threshold.
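To make the Value at Risk idea concrete, here is a minimal sketch (not from the original post; the distributions and dollar figures are hypothetical) that simulates uncertain project cost and delivered value, then estimates the probability that the loss exceeds a chosen threshold:

```python
import random

random.seed(7)

# Hypothetical project: uncertain delivered value and uncertain spend.
def simulate_loss():
    value = random.gauss(2_000_000, 800_000)  # delivered value, uncertain
    cost = random.gauss(1_500_000, 300_000)   # spend, uncertain
    return cost - value                       # positive = loss

trials = [simulate_loss() for _ in range(100_000)]
threshold = 500_000  # the threshold loss value in the VaR definition

# Probability that the loss over the horizon exceeds the threshold.
p_exceed = sum(1 for loss in trials if loss > threshold) / len(trials)
print(f"P(loss > ${threshold:,}) = {p_exceed:.1%}")
```

The point is not the specific numbers, but that the probability of exceeding the loss threshold can only be produced by estimating; it cannot be read off after the actuals arrive.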
As an aside, the notion of slicing does not reduce the Value at Risk. Slicing is an estimating normalization process where the exposure to risk is reduced to same-sized chunks. But the natural and event-based variability of the work is still present in the chunks, and the probability of impacting the outcome of the project due to changes in demand, productivity, defects, rework, and unfavorable or even unanticipated interactions between the produced chunks needs to be accounted for in some estimating process. As well, the sunk cost of breaking down the work into same-sized chunks needs to be accounted for.
In our space and defense world, there is the 44 Day Rule, where chunks of work are broken down into 44 working days (2 financial months) with tangible deliverables. The agile community would consider this too long, but they don't work on National Asset billion dollar software intensive programs, so ignore that for the moment.
So yes, slicing is a good project management process. But using the definition of No Estimates in the opening, slicing is estimating, and it is done in every credible project management process, usually guided by the Work Breakdown Structure.
The Five Immutable Principles of Project Success
To increase the probability of project success, five immutable principles must be in place and have credible answers to their questions. Each requires some form of an estimate, since the outcomes from these principles are in the future. No amount of slicing and dicing is going to produce a non-statistical or non-probabilistic outcome. All slicing does - as mentioned before - is reduce the variance of the work demand. It does not reduce the variance in the work productivity process, the rework due to defects, or any unidentified dependencies between those work products that will create uncertainty and therefore create risk to showing up on time, on budget, and on specification.
The Devil Made Me Do It
Those of us seeking an evidence based discussion about the issues around estimating - and there is an endless supply of real issues with real solutions - have pushed back on using Dilbert cartoons. But I just couldn't resist today's cartoon.
When we need to make a decision between options - microeconomics and opportunity costs - about some outcome in the future, we need an estimate of the cost and benefit of that choice. To suggest that decisions can be made without estimates has little merit in the real world of spending other people's money.
No Estimates Needs to Come In Contact With Those Providing the Money
The first is the self-selection problem of statistics. This is the Standish problem. Send out a survey, tally the results from those that were returned. Don't publish how many surveys went out and how many came back.
These are both members of the cherry-picking process. The result is lots of exchanges of questions to the original conjecture that have no basis in evidence for the conjecture.
When you encounter such a conjecture, apply Sagan's baloney detection kit.
When there is push back from hard questions, you'll know those making the claims have no evidence and are essentially BS'ing their constituents.
There is this notion in some circles that trust trumps all business management processes.
"Доверяй, но проверяй. Лучше перебдеть, чем недобдеть."
Whose butchered translation is: Trust but Verify, don't rely on chance.
President Reagan used that proverb, reflected back to the Russians, during arms control treaty negotiations. So what does it mean to trust that people can think for themselves and decide if it applies to them ... that making estimates of the cost, performance, and schedule for the project is not needed?
The first question is: what's the value at risk? Trust alone is likely workable when the value at risk is low. In that case the impact of not showing up on or before the needed time, at or below the needed cost, and with or without all the needed capabilities for the mission or business case fulfillment is much smaller and is therefore acceptable.
Trust but Verify
Trust alone (left column):
- 6 week DB update with 3 developers
- Water filter install in kitchen using the local handyman
- Install the finch feeder on the pole attached to the back deck in front of the kitchen window overlooking the golf course
- Arrange for a ride in a glider at the county airport sometime Saturday afternoon

Trust but verify (right column):
- 18 month ERP integration with 87 developers whose performance is reported to the BoD on a quarterly basis
- Water filter install in kitchen with wife testing to see if it does what it said in the brochure
- Design and build the 1,200 square foot deck attached to the second floor on the back of the house using the architect's plans, and schedule the county for the inspection certificate so it can be used this summer
- Plan departure from DIA and check for departure delay of SWA flight DEN to DCA
In the first instances (left column), trust us, we'll be done in the 6 week window probably means that the team doesn't need to do much estimating other than to agree among themselves that the promise made to the manager has a good chance of coming true.
The second (right column) is a $178M ERP integration project in a publicly traded firm, filing its 10-K and subject to FASB 86, having promised the shareholders, the insured, and the provider network that the new system will remove all the grief of the several dozen legacy apps - on or before the Go Live date announced at the Board Meeting and in the press.
To assess the chance of that promise coming true, more than trust is needed. Evidence of the probability of completing on or before the go live date and at or below the target budget is needed. That probability is developed with an estimating process and updated on a periodic basis - in this case every month, with a mid-month assessment of the month end's reportable data.
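As an illustration of such an estimating process, here is a hedged sketch (the phase durations and deadline are invented for the example) that runs a Monte Carlo simulation over three-point estimates of the remaining phases to produce the probability of completing on or before the go live date:

```python
import random

random.seed(11)

# Hypothetical remaining phases: (optimistic, most likely, pessimistic) weeks.
phases = [(4, 6, 10), (8, 12, 20), (3, 4, 8), (2, 3, 6)]
deadline_weeks = 30  # the announced go live date, in weeks from now

def one_run():
    # Draw each phase from a triangular distribution and sum the durations.
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in phases)

runs = [one_run() for _ in range(50_000)]
p_on_time = sum(1 for total in runs if total <= deadline_weeks) / len(runs)
print(f"P(complete on or before week {deadline_weeks}) = {p_on_time:.0%}")
```

Rerunning this each month with updated phase estimates is what produces the periodically refreshed probability the text describes.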
So next time you hear...
...think of the Value at Risk, the fiduciary responsibility to those funding your work to ask for and produce an answer to the question of how much, when, and what will be delivered. And even possibly the compliance responsibility - SOX, CMS, FAR/DFARS, ITIL - for knowing to some degree of confidence the Estimate To Complete and the Estimate At Complete for your project. Writing 6 week warehouse apps? Probably not much value. Spending hundreds of millions of stockholders' money and betting the company? Then knowing something like those numbers is likely needed.
Trust Alone is Open Loop, Trust but Verify is Closed Loop
Without knowing the Value At Risk it's difficult, if not impossible, to have a conversation about applying any method of managing the spend of other people's money. Here's a clip from another book that needs to be on the shelf of anyone accountable for spending money in the presence of a governance process: Practical Spreadsheet Risk Modeling. Don't do risk management of other people's money? Then you don't need this book or similar ones, and likely don't need to estimate the impact of decisions made using other people's money. Just keep going; your customer will tell you when to stop.
1024 - 2014
Thanks to Mr. Honner, a mathematics teacher at Brooklyn Technical High School. If you like mathematics and appreciate the contribution a good teacher can make to mathematical understanding - which is woefully lacking in our project management domain - sign up to get his blog posts.
The #NoEstimates movement appears to be based on a 27 year old report† that provides examples of FORTRAN and PASCAL programs as the basis on which estimating is done.
A lot has happened since 1987. For a short critique of the Software Crisis report - which is referenced in the #NoEstimates argument - see "There is No Software Engineering Crisis."
Thousands of research and practicum books and papers on how to estimate software projects have been published. Maybe it's time to catch up with the 21st century approach of estimating the time, cost, and capabilities needed to deliver value for those paying for our work. These approaches answer the mail in the 1987 report, along with the much referenced NATO Software Crisis report published in 1968.
While estimates have always been needed to make decisions in the paradigm of the microeconomics of software development, the techniques, tools, and data have improved dramatically in the last 27 years. Let's acknowledge that and start taking advantage of the efforts to improve our lot in life of being good stewards of other people's money. And when we hear that #NoEstimates can be used to forecast completion times and costs at the end, test that idea with the activities in the Baloney Claims checklist.
† #NoEstimates is an approach to software development that arose from the observation that large amounts of time were spent over the years in estimating and improving those estimates, but we see no value from that investment. Indeed, according to scholars Conte, Dunsmore and Shen, a good estimate is one that is within 25% of the actual cost, 75% of the time. In http://www.mystes.fi/agile2014/
As a small aside, that's not what the statement actually says in the context of statistical estimating. It says there is a 75% confidence that there will be an overage of 25%, which needs to be covered with management reserve to protect the budget. Since all project work is probabilistic, uncertainty is both naturally occurring and event based. Event based uncertainty can be reduced by spending money. This is a core concept of Agile development: do small things to discover what will and won't work. Naturally occurring uncertainty can only be handled with margin. In this statement - remember it's 27 years old - there is a likelihood that a 25% management reserve will be needed 25% of the time a project estimate is produced. If you know that ahead of time, it won't be a disappointment when it occurs 25% of the time.
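A small sketch of that reading (with a hypothetical point estimate and an assumed lognormal cost uncertainty) shows how management reserve falls out of a confidence level: take the 75th-percentile cost from the distribution of outcomes, and the reserve is the amount above the point estimate needed to protect the budget at 75% confidence:

```python
import random

random.seed(3)

# Hypothetical point estimate with assumed lognormal cost uncertainty.
point_estimate = 1_000_000
outcomes = sorted(point_estimate * random.lognormvariate(0, 0.25)
                  for _ in range(100_000))

# 75th-percentile cost: 75% of simulated actuals fall at or below it.
p75 = outcomes[int(0.75 * len(outcomes))]

# Management reserve: the amount above the point estimate that protects
# the budget at the 75% confidence level.
reserve = p75 - point_estimate
print(f"75% confidence cost: ${p75:,.0f}, reserve: ${reserve:,.0f}")
```

The width of the assumed distribution is what drives the reserve; a reference class of past project overruns would be the credible source for it.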
This is standard best management practice in mature organizations. In some domains, it's mandatory to have Management Reserve built from Monte Carlo Simulations using Reference Classes of past performance.
Ascertaining the success and applicability of any claims made outside the accepted practices of business, engineering, or governance processes requires careful testing of ideas through tangible evidence that they are actually going to do what it is conjectured they're supposed to do.
The structure of this checklist is taken directly from Scientific American's essay on scientific baloney, but it sure feels right for many of the outrageous claims found in today's software development community about approaches to estimating the cost, schedule, and likely outcomes.
How reliable is the source of the claim?
Self-pronounced experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, long out of date, mathematically wrong, missing critical domain and context basis, or occasionally even fabricated.
In many instances the data used to support the claims are weak or poorly formed, relying on surveys of friends or hearsay, small population samples, classroom experiments, or worse, anecdotal evidence where the expert extends personal experience to a larger population.
Does this source often make similar claims?
Self-pronounced experts have a habit of going well beyond the facts and generalizing their claims to a larger population of problems or domains. Many proponents of ideas make claims that cannot be substantiated within a testable framework. This is the nature of early development in the engineering world. Of course, some great thinkers do frequently go beyond the data in their creative speculations.
But when those creative thinkers are used to support new claims, there is more reason to suspect that the hard work of testing the claim outside of personal experience hasn't been performed.
They said agile wouldn't work, so my conjecture is getting the same criticism and I'll be considered just like those guys when I'm proven right.
Have the claims been verified by another source?
Typically self-pronounced experts make statements that are unverified, or verified only by a source within their own private circle, or whose conclusions are based primarily on anecdotal information.
We must ask: who is checking the claims, and even who is checking the checkers? Outside verification is as crucial to good business decisions as it is to good methodology development.
How does the claim fit with what we know about how the world works?
Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method, approach, or technique results in significant benefits, dramatic changes in an outcome, etc. they are usually not presenting the specific context for the application of their idea.
Such a claim is typically not supported by quantitative statistics either. There may be qualitative data, but this is likely to be biased by the experimental method as well as by the underlying population of the sample statistics.
In most cases to date, the sample size is minuscule compared to that needed to draw correlations and causations to the conjectured outcomes.
Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?
This is the confirmation bias, or the tendency to seek confirmatory evidence and to reject or ignore disconfirmatory evidence. The confirmation bias is powerful, pervasive and almost impossible to avoid.
It is why the methods that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical.
When self-selected communities see external criticism as harassment, or "you're simply not getting it," or "those people are just like talking to a box of rocks," the confirmation bias is in full force.
Does the preponderance of evidence point to the claimant's conclusion or to a different one?
Evidence is the basis of all confirmation processes. The problem is having evidence alone is necessary but not sufficient. The evidence must somehow be "predicted" by the process, fit the process model, or somehow participate in the process in a supportive manner.
Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?
Unique and innovative ways of conducting research, processing data, and "conjecturing" about the results are not statistically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods for experiments. This course sets the ground rules for conducting research in the field.
Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation?
This is a classic debate strategy: criticize your opponent and never affirm what you believe, to avoid criticism.
"Show us your data" is the starting point for engaging in a conversation about a speculative idea.
If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did?
This concept is usually lost on "innovative" claims. The need to explain previous results is mandatory. Without this bridge to past results, a new suggested approach has no foundation for acceptance.
Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?
All claimants hold social, political and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice?
Usually, through some peer-review system, such biases and beliefs are rooted out, or the paper or book is rejected.
In the absence of peer review - self publishing is popular these days - there is no external assessment of the ideas, and therefore the author reinforces the confirmation bias.
So the next time you hear a suggestion that appears to violate the principles of business, economics, or even physics, think of these questions. Let's move to the #NoEstimates suggestion that we can make decisions in the absence of estimates - that is, that we can make decisions about a future outcome without estimating the cost to achieve that outcome and the impact of that outcome.
The core question is: how can this conjecture be tested beyond the personal anecdotes of those proffering the notion that decisions can be made in the absence of estimates? Certainly those making the claim have no interest in performing that test. It's incumbent on those attempting to apply the notion to first test it for validity, applicability, and simple credibility.
A final recommendation is Ken Schwaber's talk and slides about evidence based discussions around improving the business of software development. And the book he gave away at the end of the talk: Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management.
If a man will begin with certainties, he shall end in doubts, but if he will be content to begin with doubts, he shall end in certainties. - Francis Bacon
Everything in project work is uncertain. Estimating is required to discover the extent of the range of these uncertainties, and the impacts of the uncertainties on cost, schedule, and the performance of the products or services.
To suggest that decisions can be made in the presence of these uncertainties without knowledge of the future outcomes and the cost of achieving those outcomes, or that we can decide between alternatives with future outcomes without that knowledge, completely ignores the notion of uncertainty and the microeconomics of decision making in the presence of these uncertainties.
In a recent post there are 5 suggestions for how decisions about software development can be made in the absence of estimating the cost, duration, and impact of these decisions. Before looking at each in more detail, let's see what the basis of these suggestions is, from the post.
A decision-making strategy is a model, or an approach that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.
Decision making in the presence of the allocation of limited resources is called microeconomics. These decisions - in the presence of limited resources - involve opportunity costs. That is, what is the cost of NOT choosing one of the alternatives - the allocations? To know this means we need to know something about the outcome of NOT choosing. We can't wait to do the work; we need to know what happens - to some level of confidence - if we DON'T do something. How can we do this? We need to estimate what happens if we don't choose one of the possible allocations, since all the outcomes are in the future.
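A minimal sketch of that microeconomics, with invented numbers, shows why estimates are unavoidable: to know the opportunity cost of NOT choosing an allocation, the probability-weighted payoff of each option must be estimated before the work is done:

```python
# Hypothetical allocation choice for one team. Each option's payoff must be
# estimated (probability of success times value) before the opportunity cost
# of NOT choosing it can be known.
options = {
    "A: new reporting module": {"cost": 200_000, "p_success": 0.8, "payoff": 500_000},
    "B: platform migration":   {"cost": 350_000, "p_success": 0.6, "payoff": 900_000},
}

def expected_net(o):
    return o["p_success"] * o["payoff"] - o["cost"]

nets = {name: expected_net(o) for name, o in options.items()}
chosen = max(nets, key=nets.get)
forgone = min(nets, key=nets.get)

# The opportunity cost of the choice is the expected net of the road not taken.
print(f"Choose {chosen}: expected net ${nets[chosen]:,.0f}")
print(f"Opportunity cost: ${nets[forgone]:,.0f} forgone from {forgone}")
```

Every number in the table is an estimate of a future outcome; remove the estimates and the comparison, and the opportunity cost, cannot be computed at all.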
But first, the post started by suggesting the five approaches are part of strategy. I'm familiar with strategy making in the domain of software development, having been schooled by two Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications.
What is Strategy?
Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics.
Strategy is creating fit among a firm's activities. The success of a strategy depends on doing many things well - not just a few. The things that are done well must operate within a closely knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm.
Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.
Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves the continual improvement of business processes that have no trade-offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.
In contrast, the strategic agenda is the place for making clear trade-offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company's position in the marketplace.
"What is Strategy?", M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61-78.
Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.
I'm going to suggest that each of the five decision processes described in the post is a proper one - with many possible approaches - but each has ignored the underlying principle of microeconomics. This principle is that decisions about future outcomes are informed by the opportunity cost, and that cost requires - mandates actually, since the outcomes are in the future - an estimate. This is the basis of Real Options, forecasting, and the very core of business decision making in the presence of uncertainty.
The post then asks
The 1st question needs another question to be answered first: what are our business goals, and what are the units of measure of these goals? In order to answer the 1st question we need a steering target to know how we are proceeding toward that goal.
The second question is about risk. All risk comes from uncertainty. Two types of uncertainty exist on projects:
Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge. Epistemology is the study of the acquisition of knowledge. We can pay money to buy down this lack of knowledge; that is, Epistemic uncertainty can be reduced with work - risk reduction work. But this leaves open the question: how much time, budget, and performance margin is needed?
ANSWER: We need an estimate of the probability of the risk coming true. Estimating the Epistemic risk's probability of occurrence, the cost and schedule of the reduction efforts, and the probability of the residual risk is done with a probabilistic model. There are several methods and many tools. But estimating all the components - occurrence, impact, effort to mitigate, and residual risk - is required.
Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from Aleatory uncertainty is with margin: cost margin, schedule margin, performance margin. But this leaves open the question: how much margin is needed?
ANSWER: We need to estimate the needed margin from the probability distribution function of the underlying statistical process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or the Method of Moments.
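For the Method of Moments, a short sketch (the task means and standard deviations are hypothetical) shows how an aleatory schedule margin falls out: means add, independent variances add, and the margin is the distance from the 50% estimate to a chosen confidence level, here 80% under a normal approximation:

```python
import math

# Hypothetical tasks: (mean duration in weeks, standard deviation).
tasks = [(6.0, 1.0), (12.0, 2.5), (4.0, 0.8), (3.0, 0.6)]

# Method of Moments: means add; for independent tasks, variances add.
total_mean = sum(mean for mean, sd in tasks)
total_sd = math.sqrt(sum(sd * sd for mean, sd in tasks))

# Under a normal approximation (central limit theorem), the 80% confidence
# duration sits about 0.8416 standard deviations above the mean.
z80 = 0.8416
p80 = total_mean + z80 * total_sd
schedule_margin = p80 - total_mean  # aleatory margin above the 50% estimate

print(f"50% estimate: {total_mean:.1f} weeks, 80% estimate: {p80:.1f} weeks")
print(f"Schedule margin: {schedule_margin:.1f} weeks")
```

A Monte Carlo Simulation over the same task distributions would produce the same margin without the normality assumption, at the cost of more computation.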
All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?
All risk is probabilistic, based on underlying statistical processes - either the process of lack of knowledge (Epistemic) or the process of natural variability (Aleatory). In the consideration of risk we must incorporate these probabilistic and statistical behaviors in our decision making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances that have occurred in the past, we need to know how they might occur in the future. To answer these questions, we need a probabilistic model based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behavior.
Let's Look At The Five Decision Making Processes
1. Do the most important work first - If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is: validating the new strategy. Note that the goal is not "implement new strategy", but rather "validate new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.
2. Do the Highest Technical Risk First - When you want to transition to a new architecture or adopt a new technology, you may want to start with the work that validates that technical decision. For example, if you are adopting a new technology to increase the scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology, then test whether the gains in scalability are in line with your needs and expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step, in order of importance. This should be done in short implementation cycles that you can easily validate by releasing or testing the new implementation.
3. Do the Easiest Work First - Suppose you just expanded your team and want to make sure the members get to know each other and learn to work together, perhaps because of a strategic decision to start a new site in a new location. Selecting the easiest work first gives the new teams an opportunity to get to know each other and establish the processes they need to be effective, while still delivering concrete, valuable working software in a safe way.
4. Do the Legal Requirements First - In medical software there are regulations that must be met. Those regulations affect certain parts of the work and architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still make to the original implementation. This can significantly improve the time-to-market for your product. A medical organization that successfully adopted agile used this decision-making strategy to considerable business advantage: they were able to start selling their product many months ahead of the scheduled release. They could go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market, gaining a significant advantage over their direct competitors.
5. Liability-Driven Investment - This approach is borrowed from an investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.
General Principles of SoftwareĀ Validation; Final Guidance forĀ Industry and FDA Staff, US Food and Drug Administration.
To achieve great things, two things are needed; a plan, and not quite enough time.
- Leonard Bernstein
The notion that planning is a waste is common in domains where mission-critical, high-risk / high-reward, must-work projects do not exist.
Notice the plan and the planned delivery date. The notion that deadlines are somehow evil goes along with a lack of understanding that the business needs a set of capabilities to be in place on a date, in order to start booking the value in the general ledger.
Plans are strategies. Strategies are hypotheses. Hypotheses are tested with experiments. Experiments show, from actual data, what the outcome of the work is. These outcomes are used as feedback to take corrective action at the strategic and tactical levels of the project.
This is called Closed Loop Control. Set the strategy; define the units of measure for the desired outcomes - Measures of Effectiveness and Measures of Performance. Perform work and assess these measures. Determine the variance between the planned outcomes and the needed outcomes. Take corrective action by adjusting the plan to keep the project moving toward the strategic goals. For Closed Loop Control, we need:
Control systems from Glen Alleman
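That closed-loop cycle can be sketched in a few lines of code. This is an illustrative toy, with names and numbers that are my assumptions, not taken from the slides: set the target, measure actual performance each period, compute the variance against the baseline plan, and take corrective action by re-planning the remaining periods.

```python
def control_cycle(target, periods, do_work):
    """Closed-loop control sketch: set the strategy (target), measure
    performance each period, compute the variance to plan, and take
    corrective action by re-planning. Illustrative only."""
    baseline = target / periods        # planned progress per period
    plan = baseline
    done = 0.0
    for period in range(1, periods + 1):
        done += do_work(plan)                  # perform work, measure it
        variance = baseline * period - done    # planned minus actual to date
        remaining = periods - period
        if remaining:
            # corrective action: recover the variance over what's left
            plan = baseline + variance / remaining
    return done

# Usage: each period yields only 90% of plan; the corrective action
# repeatedly re-plans to close the gap, ending near the target.
print(f"Delivered: {control_cycle(100.0, 10, lambda p: p * 0.9):.1f} of 100")
```

Without the feedback branch this is open-loop control: the same 90% yield would deliver only 90 of 100, with no mechanism to notice or correct the shortfall.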
If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process; it is architecture. Moving from home design to IT system design does not change the process. Grouping data and processes into information systems creates the rooms of the system architecture. Arranging the data and processes for the best utility is the result of deploying an architecture. Many of the attributes of building architecture are applicable to system architecture. Form, function, best use of resources and materials, human interaction, reuse of design, longevity of the design decisions, and robustness of the resulting entities are all attributes of well-designed buildings and well-designed computer systems.
In general, an architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how those parts fit and work together. An architecture may be conceptualized from a specific perspective focusing on an aspect or view of its subject. These architectural perspectives themselves can become components in a higher-level architecture serving to integrate and unify them into a higher-level structure.
The architecture must define the rules, guidelines, or constraints for creating conformant implementations of the system. While this architecture does not specify the details of any implementation, it does establish guidelines that must be observed in making implementation choices. These conditions are particularly important for component architectures that embody extensibility features to allow additional capabilities to be added to previously specified parts. This is the case where Data Management is the initial deployment activity, followed by more complex system components.
By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result:
"How Architecture Wins Technology Wars," C. Morris and C. Ferguson, Harvard Business Review, March-April 1993, pp. 86-96.
We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.
In a recent post, "Who Is Ed Conrow?", a responder asked about the differences between the PMI PMBOK® risk approach and the DoD PMBOK risk approach, as well as for a summary of the book Effective Risk Management: Some Keys to Success by Edmund Conrow. Ed worked the risk management processes for a NASA proposal I was on. I was the IMP/IMS lead, so integrating risk with the Integrated Master Plan / Integrated Master Schedule in the manner he prescribed was a life-changing experience. I was naive before, but no longer after that proposal won ~$7B for the client.
Let me start with a few positioning statements:
With all my biases out of the way, let's look at the DoD PMBOK®.
Page 124 of the DoD PMBOK® summarizes the principles of Risk Management as developed in two seminal sources.
Now all these pedantic references are here for a purpose. This is how people who manage risk for a living manage risk. By risk, I mean technical risk that results in loss of mission or loss of life, and programmatic risk that results in loss of billions of taxpayer dollars. These organizations are serious enough about risk management not to let the individual project or program manager interpret the vague notions in the PMI PMBOK®. These may appear to be harsh words, but the road to the management of enterprise-class projects is littered with disasters. You can read every day of IT projects that are 100% over budget and 100% behind schedule. From private firms to the US Government, the trail of destruction is front-page news.
A Slight Diversion - Why are Enterprise Projects So Risky?
There are many reasons for failure - too many to mention - but one is the inability to identify and mitigate risk. The words "identify" and "mitigate" sound simple. They are listed in the PMI PMBOK® and the DoD PMBOK®. However, here is where the problem starts:
Using Conrow as a Guide
Here is one problem. When you search Google for the complete phrase "Project Risk Management," you get ~642,000 hits. With so many books, academic papers, and commercial articles on Risk Management, where do we start? Ed Conrow's book is probably not the starting point for learning how to practice risk management on your project; it might, however, be the ending point. If you are in the software development business, a good starting point is Managing Risk: Methods for Software Systems Development, Elaine M. Hall, Addison Wesley, 1998. Another, broader approach is the Continuous Risk Management Guidebook, Software Engineering Institute, August 1996. While these two sources focus on software, they provide the foundation for the discussion of risk management as a discipline.
There are public sources as well:
However, care needs to be taken once you go outside the government boundaries. There are many voices plying the waters of "risk management," as well as other voices with "axes to grind" regarding project management methods and risk management processes. The result is often a confusing message full of anecdotes, analogies, and alternative approaches to the topic of Risk Management.
Conrow in his Full Glory
Before starting into the survey of the Conrow book, let me state a few observations:
From the introduction:
The purpose of this book is two-fold: first, to provide key lessons learned that I have documented from performing risk management on a wide variety of programs, and second, to assist you, the reader, in developing and implementing an effective risk management process on your program.
A couple of things here. One is the practical experience in risk management. Many in the risk management "talking" community have limited experience with risk management done the way Ed does it. I first met Ed on a proposal for an $8B manned spaceflight program. He was responsible for the risk strategy and for conveying that strategy in the proposal. The proposal resulted in an award, and our firm now provides Program Planning and Controls for a major subsystem of the program. In this role, programmatic and technical risk management is part of the Statement of Work flowed down from the prime contractor.
Second, Ed is a technical advisor to the US Arms Control and Disarmament Agency, as well as a consultant to industry and government on risk management. These "resume" items are meant to show that the practice of risk management is just that - a practice. Speaking about risk management and doing risk management on high-risk programs are two different things.
One of Ed's principal contributions to the discipline was the development of a micro-economic framework of risk management in which design feasibility (or technical performance) is traded against cost and schedule.
In the end, this is a reference text for the process of managing the risk of projects, written by a highly respected practitioner.
What does the Conrow Book have to offer over the Standard approach?
Ed's book contains the current "best practices" for managing technical and programmatic risk. These practices are used on high-risk, high-value programs. The guidelines in Ed's book are generally applicable to many other classes of projects as well. But there are several critical elements that differentiate this approach from the pedestrian approach to risk management.
The ordinal approach works like this. Ed describes several classes of risk scales, which include: maturity, sufficiency, complexity, uncertainty, estimative probability, and probability-based scales.
A maturity risk scale would be:
Basic principles observed
Concept design analyzed for performance
Breadboard or brassboard validation in relevant environment
Prototype passes performance tests
Item deployed and operational
The critical concept is to relate the ordinal risk value to an objective measure. For a maturity risk assessment, some "calibration" of what it means to have the "basic principles observed" must be developed. This approach can be applied to the other classes - sufficiency, complexity, uncertainty, estimative probability, and probability-based scales.
It's the estimative probability that is important to the cost and schedule people in our PP&C practice. The estimative probability scale attempts to relate a word to a probability value - "High" to 80%, for example. An ordinal estimative probability scale, using point estimates derived from a statistical analysis of survey data, maps each phrase - from "Almost no chance" upward - to a median probability value.
Calibrating these risk scales is the primary analysis task in building a risk management system. What does it mean to have a "medium" risk in the specific problem domain?
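As an illustration of why calibration matters, here is a small sketch with an assumed word-to-probability mapping. The values are example calibrations of mine, not Conrow's published scales:

```python
# Illustrative calibration of an ordinal estimative-probability scale.
# These word-to-probability values are assumptions for the example,
# not figures from Conrow's book.
ESTIMATIVE_SCALE = {
    "almost no chance": 0.05,
    "low": 0.20,
    "medium": 0.50,
    "high": 0.80,
    "almost certain": 0.95,
}

def risk_exposure(likelihood_word, consequence_cost):
    """Convert a calibrated ordinal likelihood to a probability, then
    compute the expected consequence. This arithmetic is only meaningful
    because the scale has been calibrated to real probabilities; raw
    ordinal ranks (1..5) must never be multiplied as if they were
    probabilities."""
    return ESTIMATIVE_SCALE[likelihood_word.lower()] * consequence_cost

print(risk_exposure("High", 250_000))  # expected exposure in dollars
```

The design point is in the docstring: the mapping from word to probability is where the analysis effort goes, and it must be redone for each problem domain.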
These two concepts are the ones that changed the way I perform risk management on the programs I'm involved with and how we advise our clients. They are paradigm-changing concepts. No more simple-minded arithmetic with probabilities and consequences. No more uncalibrated risk scales. No more tolerating those who claim PERT, Critical Path, and Monte Carlo are unproven, obsolete, or "wrong-headed" approaches.
Get Ed's book. It'll cost way too much when compared to the "paperback" approach to risk. But for those tasked with "managing risk," this is the starting point.
Scott Adams provides cartoons of what not to do for most things technical, software and hardware. I actually saw him once, when he worked for PacBell in Pleasanton, CA. I was on a job at a major oil company, deploying document management systems for OSHA 1910.119 - process safety management - and integrating CAD systems for control of safety-critical documents.
The most popular use of the Dilbert cartoon lately has been in the #NoEstimates community, in support of the notion that estimates are somehow evil, are used to make commitments that can't be met, and generally should be avoided when spending other people's money.
The cartoon below resonated with me for several reasons. What's happening here is classic: misguided, intentional ignoring of the established processes of Reference Class Forecasting. As well, in typical Dilbert fashion, it is doing stupid things on purpose.
Reference Class Forecasting is a well-developed estimating process used across a broad range of technical, business, and finance domains. The characters above seem not to know anything about RCF. As a result, they are DSTOP (Doing Stupid Things On Purpose).
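The mechanics of RCF can be sketched in a few lines, under illustrative assumptions (the overrun ratios, the 80% percentile choice, and the base estimate below are mine, not from a published reference class): take the distribution of actual-versus-estimated outcomes from completed projects similar to yours, pick the ratio at your acceptable-risk percentile, and apply it as an uplift to the inside-view estimate.

```python
def rcf_uplift(reference_overruns, base_estimate, percentile=0.8):
    """Reference Class Forecasting sketch: given cost-overrun ratios
    (actual / estimated) from completed projects in the same reference
    class, take the ratio at the chosen confidence percentile and
    apply it as an uplift to the inside-view base estimate.
    Illustrative, not a definitive implementation."""
    ratios = sorted(reference_overruns)
    idx = min(int(percentile * len(ratios)), len(ratios) - 1)
    return base_estimate * ratios[idx]

# Usage: overrun ratios observed in eight comparable past projects
# (assumed example data).
past_ratios = [0.95, 1.05, 1.10, 1.20, 1.25, 1.40, 1.60, 2.10]
budget = rcf_uplift(past_ratios, base_estimate=1_000_000, percentile=0.8)
print(f"Budget at ~80% confidence: ${budget:,.0f}")
```

The outside view is the whole trick: the estimate is anchored in how projects like this one actually turned out, not in how optimistic the team feels about this one.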
Here's how not to DSTOP for cost and schedule estimates, their associated risks, and the technical risk that the product you're building can't do what it's supposed to do, on or before the date it needs to do it, at or below the cost you need, in order to stay in business.
The approach below may be complete overkill for your domain. So start by asking: what's the Value at Risk? How much of our customer's money are we willing to write off if we don't have a sense of what DONE looks like, in units of measure meaningful to the decision makers? Don't know that? Then it's likely you've already put that money at risk, you're likely late, and you don't really know what capabilities will be produced when you run out of time and money.
Don't end up a cartoon character in a Dilbert strip. Learn how to properly manage your efforts, and the efforts of others, when using your customer's money.
Managing in the presence of uncertainty from Glen Alleman
It seems lately there is an intentional disregard of the core principles of the business of developing software-intensive systems. The #NoEstimates community does this, but other collections of developers do as well.
These notions, of course, are seriously misinformed about how probability and statistics work in the estimating paradigm. I've written about this in the past. But there are a few new books we're putting to work in our Software Intensive Systems (SIS) practice that may be of interest to those wanting to learn more.
These are foundation texts for the profession of estimating. The continued disregard - ignoring, possibly - of these principles has become all too clear. Not just in the sole-contributor software development domain, but all the way up to multi-billion dollar programs in defense, space, infrastructure, and other high-risk domains.
Which brings me back to a core conjecture: there is no longer any engineering discipline in the software development domain, at least outside embedded systems like flight controls, process control, telecommunications equipment, and the like. There was a conjecture a while back that the Computer Science discipline at the university level should be split into software engineering and coding.
Here's a sample of the Software Intensive System paradigm, where the engineering of the systems is a critical success factor. And yes, Virginia, the discipline of agile is applied in the Software Intensive Systems world - emphasis on the term DISCIPLINE.
The principles of management, project management, software development and its management, and product development management are immutable.
What does done look like, what's our plan to reach done, what resources will we need along the way to done, what impediments will we encounter and how will we overcome them, and how are we going to measure our progress toward done in units meaningful to the decision makers?
These are immutable principles. These immutable principles can then be used to test practices and processes, by asking: what is the evidence that the practice or process enables the principle to be applied, and how do we know that the principle is being fulfilled?
In a recent discussion (of sorts) about estimating - Not Estimating, actually - I realized something that should have been obvious. I travel in a world not shared by the staunch advocates of #NoEstimates. They appear to be sole contributors. I came to this after reading Peter Kretzman's third installment, where he re-quoted a statement by Ron Jeffries:
Even with clear requirements - and it seems that they never are - it is still almost impossible to know how long something will take, because we've never done it before.
This is a sole contributor or small team paradigm.
So let's pretend we work at PricewaterhouseCoopers and we're playing our roles - Peter as CIO advisor, me as Program Performance Management advisor. We've been asked by our new customer to develop a product from scratch, estimate the cost and schedule, and provide some confidence level that the needed business capabilities will be available on or before a date and at or below a cost. Why? Because that's in the business plan for this new product, and if it's late or overruns the planned cost, that will be a balance sheet problem.
What would we do? Well, we'd start with the PwC resource management database - held by HR - and ask for people with past performance experience in the business domain and the problem domain. Our new customer did not "invent" a new business domain, so it's likely we'll find people who know what our new customer does for money. We'd look to see whether, among the 195,433 people in the database who work for PwC worldwide, there is someone, somewhere, who knows what the customer does for money and what kinds of business capabilities this new system needs to provide. If there is no one, then we'd look in our database of 10,000 or so partner relationships to find someone.
If we found no one who knows the business and the needed capabilities, we'd no-bid.
This notion of "I've been asked to do something that's never been done before, so how can I possibly estimate it" really means "I'm doing something I've never done before." And since "I'm a sole contributor, the population of experience in doing this new thing for the new customer is ONE - me," I don't know how the problem has been solved in the past, so I can't possibly know how to estimate the cost, schedule, and needed capabilities. And of course I'm absolutely correct to say that new development with unknown requirements can't be estimated, because those unknown requirements are actually unknown to me, though they may be known to another. In the population of 195,000 other people in our firm, I'm no longer alone in my quest to come up with an estimate.
So the answer to the question "what if we encounter new and unknown needs - how can we estimate?" is actually a core problem for the sole contributor or small team. It would be rare for a sole contributor or small team to have encountered the broad spectrum of domains and technologies needed to establish the necessary Reference Classes to address this open-ended question. This is not the fault of the sole contributor. It is simply the situation of small numbers versus large numbers.
This is the reason the PwCs of the world exist. They get asked to do things the sole contributors never have an opportunity to see.