
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Project Management

Trust but Verify

Herding Cats - Glen Alleman - Sat, 10/25/2014 - 18:28

There is this notion in some circles that trust trumps all business management processes.

The Russian proverb is

"Доверяй, но проверяй, Лучше перебдеть, чем недобдеть"

The rough translation is: Trust, but verify - better to be over-vigilant than under-vigilant. Don't rely on chance.

President Reagan used that proverb, reflected back to the Russians, during the INF treaty negotiations. So what does it mean to trust that people can think for themselves and decide if it applies to them ... that estimates of the cost, performance, and schedule for the project are not needed?

The first question is - what's the value at risk? Trust alone likely works when the value at risk is low. In that case the impact of not showing up on or before the needed time, at or below the needed cost, and with or without all the needed capabilities for the mission or business case fulfillment is much smaller, and therefore acceptable.

Trust | Trust but Verify
------|-----------------
6 week DB update with 3 developers | 18 month ERP integration with 87 developers whose performance is reported to the BoD on a quarterly basis
Water filter install in kitchen using the local handyman | Water filter install in kitchen with wife testing to see if it does what it said in the brochure
Install the finch feeder on the pole attached to the back deck in front of the kitchen window overlooking the golf course | Design and build the 1,200 square foot deck attached to the second floor on the back of the house using the architect's plans, and schedule the county inspection certificate so it can be used this summer
Arrange for a ride in a glider at the county airport sometime Saturday afternoon | Plan departure from DIA and check for departure delay of SWA flight DEN to DCA

In the first instances (left column), trust us, we'll be done in the 6 week window probably means the team doesn't need to do much estimating other than to agree among themselves that the promise made to the manager has a good chance of coming true.

The second (right column) is a $178M ERP integration project in a publicly traded firm, filing its 10-K and subject to FASB 86, which has promised the shareholders, the insured, and the provider network that the new system will remove all the grief of the several dozen legacy apps on or before the Go Live date announced at the Board Meeting and in the press - and that this promise has a good chance of coming true.

To assess that chance, more than trust is needed. Evidence of the probability of completing on or before the go live date and at or below the target budget is needed. That probability is developed with an estimating process and updated on a periodic basis - in this case every month, with a mid-month assessment of the month end's reportable data.
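As a minimal sketch of what such an estimating process produces - the lognormal distributions below are notional assumptions, not data from any actual program - the probability of hitting the date and budget can be read straight off a simulation:

    # Minimal sketch: probability of completing on or before go-live and at
    # or below the target budget. Distribution parameters are notional.
    import random

    TRIALS = 10_000
    GO_LIVE_MONTHS = 18.0
    TARGET_BUDGET_M = 178.0   # $M, the figure cited above

    hits = 0
    for _ in range(TRIALS):
        duration = random.lognormvariate(2.85, 0.20)   # median ~17.3 months
        cost = random.lognormvariate(5.15, 0.15)       # median ~$172M
        if duration <= GO_LIVE_MONTHS and cost <= TARGET_BUDGET_M:
            hits += 1

    print(f"P(on or before go-live AND at or below budget) = {hits / TRIALS:.0%}")

Re-running this each month with updated parameters is the "updated on a periodic basis" part of the process.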

So next time you hear...


...think of the Value at Risk, the fiduciary responsibility to those funding your work to ask and produce an answer to the question of how much, when, and what will be delivered. And possibly the compliance responsibility - SOX, CMS, FAR/DFARS, ITIL - for knowing to some degree of confidence the Estimate To Complete and the Estimate At Complete for your project. Writing 6 week warehouse apps? Probably not much value at risk. Spending hundreds of millions of stockholders' money and betting the company? Likely, knowing something like those numbers is needed.
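For reference, the conventional earned value relationships behind those last two terms - the standard textbook formulas, not anything specific to one program - are:

    \mathrm{CPI} = \frac{EV}{AC}, \qquad \mathrm{ETC} = \frac{BAC - EV}{\mathrm{CPI}}, \qquad \mathrm{EAC} = AC + \mathrm{ETC}

where EV is the earned value to date, AC the actual cost to date, and BAC the budget at completion.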

Trust Alone is Open Loop; Trust but Verify is Closed Loop

Without knowing the Value At Risk it's difficult, if not impossible, to have a conversation about applying any method of managing the spend of other people's money. Here's a clip from another book that needs to be on the shelf of anyone accountable for spending money in the presence of a governance process: Practical Spreadsheet Risk Modeling. Don't do risk management of other people's money? Then you don't need this book or similar ones, and likely don't need to estimate the impact of decisions made using other people's money. Just keep going, your customer will tell you when to stop.
Categories: Project Management

Happy Permutation Day

Herding Cats - Glen Alleman - Fri, 10/24/2014 - 15:05

10/24/2014

1024 - 2014

Thanks to Mr. Honner, a mathematics teacher at Brooklyn Technical High School. If you like mathematics and appreciate the contribution a good teacher can make to mathematical understanding - which is woefully lacking in our project management domain - sign up to get his blog posts.

Categories: Project Management

Basis of #NoEstimates are 27 Year Old Reports

Herding Cats - Glen Alleman - Thu, 10/23/2014 - 17:52

The #NoEstimates movement appears to be based on a 27 year old report† that uses examples of FORTRAN and PASCAL programs as the basis on which estimating is discussed.


A lot has happened since 1987. For a short critique of the Software Crisis report - which is referenced in the #NoEstimates argument - see "There is No Software Engineering Crisis."

  • Parametric modeling tools - the structure of software projects is constrained by the external structures of their components. These can be parameterized for estimating purposes.
  • COCOMO - an example of a parametric estimating tool. There are several others.
  • Reference Class Forecasting models, developed as the result of overruns and disasters in many areas, including software development. We now know better than to succumb to the biases discovered in the past.
  • Monte Carlo Simulation, using Reference Class Forecasting. There are simple and cheap tools - http://www.riskamp.com/ for example. For $125.00 all the handwaving around forecasting cost, schedule, and other project variables can be replaced with models built with ease (see the sketch after this list).
  • Object Oriented Programming - old news, but no more debugging of FORTRAN "Unnamed COMMON" overwriting floating point numbers!!
  • Component Based Software, where we can buy parts and assemble them.
  • SOA and CORBA (TIBCO), where ETL and the Enterprise Bus are part of the Enterprise Architecture. Stop writing application code and start writing scripts to integrate. BTW, the example of having developers write database apps for what is essentially a warehousing app has missed the COTS, component-based solutions bus.
  • FPA - while a bit long in the tooth, the idea is still valid.
  • Databases of Reference Classes.
  • The Web and Web components, same as above.
  • CORBA, same as above.
  • All the web based languages.
  • All the runtime interpretive languages with built-in debuggers, rather than compiled code with stop-dead runtime debugging.
  • ERP and COTS products and components, with out of the box functions that remove the need to write any code. Configure the system, sure. Write some scripting code (ABAP, e.g.), but no coding in the developer sense.
  • Software as a Service, where you can buy the solution. That was unheard of in 1986 and 1987.
  • DevOps, another idea unheard of back then.
  • Open Source and Reuse.
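As a sketch of how cheap this machinery has become - the reference class below is invented for illustration, not a real database - Reference Class Forecasting combined with simulation reduces to resampling historical actual-to-estimate ratios against a new estimate:

    # Minimal sketch of Reference Class Forecasting: scale a raw estimate by
    # actual/estimate ratios observed on past, similar projects. The ratios
    # here are illustrative, not a real dataset.
    import random

    reference_class = [1.05, 1.30, 0.95, 1.60, 1.10, 1.25, 1.45, 1.00, 1.15, 1.35]
    raw_estimate_weeks = 26

    outcomes = sorted(raw_estimate_weeks * random.choice(reference_class)
                      for _ in range(10_000))
    p50, p80 = outcomes[5_000], outcomes[8_000]
    print(f"P50 = {p50:.1f} weeks, P80 = {p80:.1f} weeks")

A real application would draw the ratios from a curated database of completed projects in the same domain.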

Thousands of research and practicum books and papers on how to estimate software projects have been published. Maybe it's time to catch up with the 21st Century approach of estimating the time, cost, and capabilities needed to deliver value for those paying for our work. These approaches answer the mail in the 1987 report, along with the much referenced NATO Software Crisis report published in 1968.

While estimates have always been needed to make decisions in the paradigm of the microeconomics of software development, the techniques, tools, and data have improved dramatically in the last 27 years. Let's acknowledge that and start taking advantage of the efforts to improve our lot in life as good stewards of other people's money. And when we hear #NoEstimates can be used to forecast completion times and costs at the end, test that idea against the Baloney Claims checklist.

————————————

† #NoEstimates is an approach to software development that arose from the observation that large amounts of time were spent over the years in estimating and improving those estimates, but we see no value from that investment. Indeed, according to scholars Conte, Dunsmore, and Shen [1], a good estimate is one that is within 25% of the actual cost, 75% of the time. (From http://www.mystes.fi/agile2014/)

As a small aside, that's not what the statement actually says in the context of statistical estimating. It says there is 75% confidence that the actual will fall within 25% of the estimate, and that overage needs to be covered with management reserve to protect the budget. Since all project work is probabilistic, uncertainty is both naturally occurring and event based. Event based uncertainty can be reduced by spending money. This is a core concept of Agile development: do small things to discover what will and won't work. Naturally occurring uncertainty can only be handled with margin. In this statement - remember it's 27 years old - there is a likelihood that a 25% management reserve will be needed 25% of the time a project estimate is produced. If you know that ahead of time, it won't be a disappointment when it occurs 25% of the time.

This is standard best management practice in mature organizations. In some domains, it's mandatory to have Management Reserve built from Monte Carlo Simulations using Reference Classes of past performance.
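A minimal sketch of sizing such a reserve from a distribution of outcomes - the Gaussian cost model below is an illustrative assumption, not program data:

    # Minimal sketch: size management reserve so the budget is protected at
    # a chosen confidence level. The cost distribution is illustrative.
    import random

    point_estimate = 100.0                      # budget, in any currency unit
    costs = sorted(random.gauss(point_estimate, 15.0) for _ in range(10_000))

    p75 = costs[int(0.75 * len(costs))]         # 75th percentile outcome
    reserve = max(0.0, p75 - point_estimate)
    print(f"Reserve for 75% confidence: {reserve:.1f} ({reserve / point_estimate:.0%})")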

Related articles: How to Estimate Software Development · Software Requirements Are Vague · How Not To Make Decisions Using Bad Estimates · #NoEstimates? #NoProjects? #NoManagers? #NoJustNo
Categories: Project Management

How to Be a 10x Better Speaker with 20 Benefits

NOOP.NL - Jurgen Appelo - Thu, 10/23/2014 - 04:37

Last week, one conference attendee told me my presentation at Agile Tour Toulouse was perfect.

It was very kind, but I didn’t believe her.

In five years, I have spoken at almost 100 conferences and joined a similar number of community and company events. Thanks to the many discussions I had with event organizers, I think I now understand how to be a more valuable speaker.

The post How to Be a 10x Better Speaker with 20 Benefits appeared first on NOOP.NL.

Categories: Project Management

Baloney Claims: Pseudo–science and the Art of Software Methods

Herding Cats - Glen Alleman - Wed, 10/22/2014 - 16:40

Ascertaining the success and applicability of any claims made outside the accepted practices of business, engineering, or governance processes requires careful testing of ideas through tangible evidence that they are actually going to do what it is conjectured they're supposed to do.

The structure of this checklist is taken directly from Scientific American's essay on scientific baloney, but it sure feels right for many of the outrageous claims found in today's software development community about approaches to estimating cost, schedule, and likely outcomes.

How reliable is the source of the claim?

Self-pronounced experts often appear credible at first glance, but when examined more closely, the facts and figures they cite are distorted, taken out of context, long out of date, mathematically wrong, missing critical domain and context basis, or occasionally even fabricated.

In many instances the data used to support the claims are weak or poorly formed - relying on surveys of friends or hearsay, small population samples, classroom experiments, or, worse, anecdotal evidence where the expert extends personal experience to a larger population.

Does this source often make similar claims?

Self-pronounced experts have a habit of going well beyond the facts and generalizing their claims to a larger population of problems or domains. Many proponents of ideas make claims that cannot be substantiated within a testable framework. This is the nature of early development in the engineering world. Of course, some great thinkers do frequently go beyond the data in their creative speculations.

But when those creative thinkers are used to support new claims, it's a sign that the hard work of testing the claim outside of personal experience probably hasn't been performed.

They said agile wouldn't work, so my conjecture is getting the same criticism and I'll be considered just like those guys when I'm proven right.

Have the claims been verified by another source?

Typically, self-pronounced experts make statements that are unverified, or verified only by a source within their own private circle, or whose conclusions are based primarily on anecdotal information.

We must ask, who is checking the claims, and even who is checking the checkers? Outside verification is crucial to good business decisions as it is crucial to good methodology development.

How does the claim fit with what we know about how the world works?

Any specific claim must be placed into a larger context to see how it fits. When people claim that a specific method, approach, or technique results in significant benefits, dramatic changes in an outcome, etc. they are usually not presenting the specific context for the application of their idea.

Such a claim is typically not supported by quantitative statistics as well. There may be qualitative data, but this is likely to be biased by the experimental method as well as the underlying population of the sample statistics.

In most cases to date, the sample size is minuscule compared to that needed to draw correlation and causation conclusions about the conjectured outcomes.
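As a back-of-the-envelope yardstick - the standard textbook formula, not anything from the original essay - the sample size needed to estimate a mean to within margin of error E, given standard deviation σ and confidence coefficient z, is:

    n = \left( \frac{z\,\sigma}{E} \right)^{2}

Pinning a mean down to within one tenth of a standard deviation at 95% confidence (z ≈ 1.96) already requires n ≈ (1.96 × 10)² ≈ 384 observations - far more than a handful of anecdotes.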

Has anyone gone out of the way to disprove the claim, or has only supportive evidence been sought?

This is the confirmation bias, or the tendency to seek confirmatory evidence and to reject or ignore dis–confirmatory evidence. The confirmation bias is powerful, pervasive and almost impossible to avoid.

It is why the methods that emphasize checking and rechecking, verification and replication, and especially attempts to falsify a claim, are critical.

When self-selected communities see external criticism as harassment, or respond with "you're simply not getting it" or "those people are like talking to a box of rocks," the confirmation bias is in full force.

Does the preponderance of evidence point to the claimant's conclusion or to a different one?

Evidence is the basis of all confirmation processes. The problem is that evidence alone is necessary but not sufficient. The evidence must somehow be "predicted" by the process, fit the process model, or otherwise participate in the process in a supportive manner.

Is the claimant employing the accepted rules of reason and tools of research, or have these been abandoned in favor of others that lead to the desired conclusion?

Unique and innovative ways of conducting research, processing data, and "conjecturing" about the results are not statistically sound. In almost every discipline there are accepted mechanisms for conducting research. One of the first courses taken in graduate school is quantitative methods for experiments. This course sets the ground rules for conducting research in the field.

Is the claimant providing an explanation for the observed phenomena or merely denying the existing explanation? 

This is a classic debate strategy—criticize your opponent and never affirm what you believe to avoid criticism.

"Show us your data" is the starting point for engaging in a conversation about a speculative idea.

If the claimant proffers a new explanation, does it account for as many phenomena as the old explanation did? 

This concept is usually lost on "innovative" claims. The need to explain previous results is mandatory. Without this bridge to past results, a new suggested approach has no foundation for acceptance.

Do the claimant's personal beliefs and biases drive the conclusions, or vice versa?

All claimants hold social, political and ideological beliefs that could potentially slant their interpretations of the data, but how do those biases and beliefs affect their research in practice?

Usually, during a peer-review process, such biases and beliefs are rooted out, or the paper or book is rejected.

In the absence of peer review - self publishing is popular these days - there is no external assessment of the ideas, and the author simply reinforces his own confirmation bias.

So the next time you hear a suggestion that appears to violate a principle of business, economics, or even physics, think of these questions. Then let's apply them to the #NoEstimates suggestion that we can make decisions in the absence of estimates - that is, that we can make decisions about a future outcome without estimating the cost to achieve that outcome and the impact of that outcome.

The core question is: how can this conjecture be tested, beyond the personal anecdotes of those proffering the notion that decisions can be made in the absence of estimates? Certainly those making the claim have no interest in performing that test. It's incumbent on those attempting to apply the notion to first test it for validity, applicability, and simple credibility.

A final recommendation: Ken Schwaber's talk and slides on evidence-based discussions around improving the business of software development, and the book he gave away at the end of the talk, Hard Facts, Dangerous Half-Truths And Total Nonsense: Profiting From Evidence-Based Management.

Related articles: Falsifiability and Junk Science
Categories: Project Management

Software Development Linkopedia October 2014

From the Editor of Methods & Tools - Wed, 10/22/2014 - 15:04
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about managing code duplication, product backlog prioritization, engineering management, organizational culture, mobile testing, code reuse and big data. Blog: Practical Guide to Code Clones (Part 1) Blog: Practical Guide to Code Clones (Part 2) Blog: Selecting Backlog Items By Cost of Delay Blog: WIP and Priorities – how to get fast and focused! Blog: 44 engineering management lessons Article: Do’s and Don’ts of Agile Retrospectives Article: Organizational Culture Considerations with Agile Article: ...

Quote of the Day

Herding Cats - Glen Alleman - Tue, 10/21/2014 - 16:23

If a man will begin with certainties, he shall end in doubts, but if he will be content to begin with doubts, he shall end in certainties. - Francis Bacon

Everything in project work is uncertainty. Estimating is required to discover the extent of the range of these uncertainties, the impacts of the uncertainties on cost, schedule, and the performance of the products or services.

To suggest decisions can be made in the presence of these uncertainties - without knowledge of the future outcomes, the cost of achieving those outcomes, or the trade-offs between alternatives with future outcomes - is to ignore completely the notion of uncertainty and the microeconomics of decision making in its presence.

Categories: Project Management

Is Your Culture Working the Way You Think it Is?

Long ago, I was a project manager and senior engineer for a company undergoing a Change Transformation. You know the kind, where the culture changes, along with the process. The senior managers had bought into the changes. The middle managers were muddling through, implementing the changes as best they could.

Us project managers and the technical staff, we were the ones doing the bulk of the changes. The changes weren’t as significant as an agile transformation, but they were big.

One day, the Big Bosses, the CEO and the VP Engineering spoke at an all-hands meeting. “You are empowered,” they said. No, they didn’t say it as a duet. They each said it separately. They had choreographed speeches, with great slide shows, eight by ten color glossies, and pictures. They had a vision. They just knew what the future would hold.

I managed to keep my big mouth shut.

The company was not doing well. We had too many managers for not enough engineers or contracts. If you could count, you could see that.

I was traveling back and forth to a client in the midwest. At one point, the company owed me four weeks of travel expenses. I quietly explained that no, I was not going to book any more airline travel or hotel nights until I was paid in full for my previous travel.

“I’m empowered. I can refuse to get on a plane.”

That did not go over well with anyone except my boss, who was in hysterics. He thought it was quite funny. My boss agreed I should be reimbursed before I racked up more charges.

Somehow, they did manage to reimburse me. I explained that from now on, I was not going to float the company more than a week’s worth of expenses. If they wanted me to travel, I expected to be reimbursed within a week of travel. I got my expenses in the following Monday. They could reimburse me four days later, on Friday.

“But that’s too fast for us,” explained one of the people in Accounting.

“Then I don’t have to travel every other week,” I explained. “You see, I’m empowered. I’ll travel after I get the money for the previous trip. I won’t make a new reservation until I receive all the money I spent for all my previous trips. It’s fine with me. You’ll just have to decide how important this project is. It’s okay.”

The VP came to me and tried to talk me out of it. I didn’t budge. (Imagine that!) I told him that I didn’t need to float the company money. I was empowered.

“Do you like that word?”

“Sure I do.”

“Do you feel empowered?”

“Not at all. I have no power at all, except over my actions. I have plenty of power over what I choose to do. I am exercising that power. I realized that during your dog and pony show.

“You’re not changing our culture. You’re making it more difficult for me to do my job. That’s fine. I’m explaining how I will work.”

The company didn’t get a contract it had expected. It had a layoff. Guess who got laid off? Yes, I did. It was a good thing. I got a better job for more money. And, I didn’t have to travel every other week.

Change can be great for an organization. But telling people the culture is one thing and then living up to that thing can be difficult. That’s why this month’s management myth is Myth 34: You’re Empowered Because I Say You Are.

I picked on empowerment. I could have chosen “open door.” Or “employees are our greatest asset.” (Just read that sentence. Asset???)

How you talk about culture says a lot about what the culture is. Remember, culture is how you treat people, what you reward, and what is okay to talk about.

Go read Myth 34: You’re Empowered Because I Say You Are.

Categories: Project Management

Velocity-Driven Sprint Planning

Mike Cohn's Blog - Tue, 10/21/2014 - 15:00

There are two general approaches to planning sprints: velocity-driven planning and commitment-driven planning. Over the next three posts, I will describe each approach, and make my case for the one I prefer.

Let’s begin with velocity-driven sprint planning because it’s the easiest to describe. Velocity-driven sprint planning is based on the premise that the amount of work a team will do in the coming sprint is roughly equal to what they’ve done in prior sprints. This is a very valid premise.

This does assume, of course, things such as constant team size, similar work from sprint to sprint, consistent sprint lengths, and so on. Each of these assumptions is generally valid, and violations of the assumptions are easily identified - that is, when a sprint changes from 10 to nine working days because of a holiday, the team knows this in advance.

The steps in velocity-driven sprint planning are as follows:

1. Determine the team’s historical average velocity.
2. Select a number of product backlog items equal to that velocity.

Some teams stop there. Others include the additional step of:

3. Identify the tasks involved in the selected user stories and see if it feels like the right amount of work.

And some teams will go even further and:

4. Estimate the tasks and see if the sum of the work is in line with past sprints.

Let’s look in more detail at each of these steps.

Step 1: Determine the Team's Average Velocity

The first step in velocity-driven sprint planning is determining the team’s average velocity. Some agile teams prefer to look only at their most recent velocity. Their logic is that the most recent sprint is the single best indicator of how much will be accomplished in the next sprint. While this is certainly valid, I prefer to look back over a longer period if the team has the data.

I prefer to take the average of the eight most recent sprints. I’ve found that to be fairly predictive of the future. I certainly wouldn’t take the average over the last 10 years. How fast a team was going during the Bush Administration is irrelevant to that team’s velocity today. But, similarly, if you have the data, I don’t think you should look back only one sprint.

Step 2: Select Product Backlog Items

Armed with an average velocity, team members select the top items from the product backlog. They grab items up to or equal to the average velocity.

At this point, velocity-driven planning in the strictest sense is finished. Rather than taking an hour or two per week of the sprint, sprint planning is done in a matter of seconds. As quickly as the team can add up to their average velocity, they’re done.
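A minimal sketch of those two steps - the velocities and backlog below are invented for illustration:

    # Minimal sketch of velocity-driven selection: average the last eight
    # sprints, then take backlog items top-down until the next one no longer
    # fits. All values are illustrative.
    recent_velocities = [23, 27, 25, 30, 22, 26, 28, 24]   # last 8 sprints
    average_velocity = sum(recent_velocities) / len(recent_velocities)

    backlog = [("story A", 8), ("story B", 5), ("story C", 8),
               ("story D", 3), ("story E", 5), ("story F", 13)]  # priority order

    selected, total = [], 0
    for story, points in backlog:
        if total + points > average_velocity:
            break
        selected.append(story)
        total += points

    print(f"velocity {average_velocity:.1f}: selected {selected} ({total} points)")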

Step 3: Identify Tasks (Optional)

Most teams who favor velocity-driven sprint planning realize that planning a sprint in a few seconds is probably insufficient. And so many will add in this third step of identifying the tasks to be performed.

To do this, team members discuss each of the product backlog items that have been selected for the sprint and attempt to identify the key steps necessary in delivering each product backlog item. Teams can’t possibly think of everything so they instead make a good faith effort to think of all they can.

After having identified most of the tasks that will ultimately be necessary, team members look at the selected product backlog items and the tasks for each, and decide if the sprint seems full. They may conclude that the sprint is overfilled or insufficiently filled, and add or drop product backlog items. At this point, some teams are done, and they conclude sprint planning with a set of selected product backlog items and a list of the tasks to deliver them. Some teams, though, proceed to a final step:

Step 4: Estimate the Tasks (Optional)

With a nice list of tasks in front of them, some teams decide to go ahead and estimate those tasks in hours, as an aid in helping them decide if they’ve selected the right amount of work.

In next week’s post, I’ll describe an alternative approach, commitment-driven sprint planning. And after that, I’ll share some recommendations.

Mix Mashup

NOOP.NL - Jurgen Appelo - Mon, 10/20/2014 - 10:08

<promotion>

The MIX Mashup is a gathering of the vanguard of management innovators—pioneering leaders, courageous hackers, and agenda-setting thinkers from every realm of endeavor.

The post Mix Mashup appeared first on NOOP.NL.

Categories: Project Management

Decision Making Without Estimates?

Herding Cats - Glen Alleman - Mon, 10/20/2014 - 02:56

In a recent post there are 5 suggestions for how decisions about software development can be made in the absence of estimates of the cost, duration, and impact of these decisions. Before looking at each in more detail, let's see the basis of these suggestions from the post.

A decision-making strategy is a model, or an approach, that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve business goals that you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Decision making in the presence of the allocation of limited resources is called Microeconomics. These decisions - in the presence of limited resources - involve opportunity cost. That is, what is the cost of NOT choosing one of the alternatives - the allocations? To know this means we need to know something about the outcome of NOT choosing. We can't wait for the work to be done; we need to know what happens - to some level of confidence - if we DON'T do something. How can we do this? We need to estimate what happens if we don't choose one of the possible allocations, since all the outcomes are in the future.
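A small sketch of that microeconomic framing - the probabilities and payoffs below are invented, and obtaining them in practice is exactly the estimating problem under discussion, since both outcomes lie in the future:

    # Minimal sketch: opportunity cost of choosing alternative A over B.
    # Probabilities and payoffs are illustrative assumptions.
    def expected_value(p_success, payoff, cost):
        return p_success * payoff - cost

    ev_a = expected_value(p_success=0.70, payoff=500_000, cost=200_000)
    ev_b = expected_value(p_success=0.40, payoff=1_200_000, cost=300_000)

    opportunity_cost_of_a = max(ev_b - ev_a, 0.0)
    print(f"EV(A) = {ev_a:,.0f}, EV(B) = {ev_b:,.0f}, "
          f"opportunity cost of choosing A = {opportunity_cost_of_a:,.0f}")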

But first, the post starts by suggesting the five approaches are part of Strategy. I'm familiar with strategy making in the domain of software development, having been schooled by two Balanced Scorecard leaders while working as a program manager for a large Department of Energy site, where we pioneered the use of agile development in the presence of highly formal nuclear safety and safeguards applications.

What is Strategy?

Before proceeding with the 5 suggestions, let's look at what strategy is, since it is common to confuse strategy with tactics.

Strategy is creating fit among a firm's activities. The success of a strategy depends on doing many things well - not just a few. The things that are done well must operate within a closely knit system. If there is no fit among the activities, there is no distinctive strategy and little to sustain the strategic deployment process. Management then reverts to the simpler task of overseeing independent functions. When this occurs, operational effectiveness determines the relative performance of the firm.

Improving operational effectiveness is a necessary part of management, but it is not strategy. In confusing the two, managers will be unintentionally backed into a way of thinking about competition that drives the business processes (IT) away from the strategic support and toward the tactical improvement of operational effectiveness.

Managers must be able to clearly distinguish operational effectiveness from strategy. Both are essential, but the two agendas are different. The operational effectiveness agenda involves continual improvement business processes that have no trade–offs associated with them. The operational effectiveness agenda is the proper place for constant change, flexibility, and relentless efforts to achieve best practices.

In contrast, the strategic agenda is the place for making clear trade-offs and tightening the fit between the participating business components. Strategy involves the continual search for ways to reinforce and extend the company's position in the marketplace.

“What is Strategy,” M. E. Porter, Harvard Business Review, Volume 74, Number 6, pp. 61–78.

Using Porter's notion of strategy in a business context, the post seems more about tactics. But ignoring that for the moment, let's look further into the ideas presented in the post.

I'm going to suggest that each of the five decision processes described in the post is a proper one - each with many possible approaches - but that each has ignored the underlying principle of Microeconomics: decisions about future outcomes are informed by opportunity cost, and that cost requires - mandates actually, since the outcomes are in the future - an estimate. This is the basis of Real Options, of forecasting, and the very core of business decision making in the presence of uncertainty.

The post then asks

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

 
The 1st question needs another question answered first: what are our business goals, and what are the units of measure of those goals? In order to answer the 1st question we need a steering target, to know how we are proceeding toward that goal.

The second question is about risk. All risk comes from uncertainty. Two types of uncertainty exist on projects:

Reducible (Epistemic) and Irreducible (Aleatory). Epistemic uncertainty comes from lack of knowledge. Epistemology is the study of the acquisition of knowledge. We can pay money to buy down this lack of knowledge - that is, Epistemic uncertainty can be reduced with work: risk reduction work. But this leaves open how much time, budget, and performance margin is needed.

ANSWER: We need an estimate of the probability of the risk coming true. Estimating the Epistemic risk's probability of occurrence, the cost and schedule of the reduction efforts, and the probability of the residual risk is done with a probabilistic model. There are several methods and many tools. But estimating all the components - occurrence, impact, effort to mitigate, and residual risk - is required.

Aleatory uncertainty comes from the naturally occurring variances of the underlying processes. The only way to reduce the risk arising from Aleatory uncertainty is with margin: cost margin, schedule margin, performance margin. But this leaves open the question of how much margin is needed.

ANSWER: We need to estimate the needed margin from the Probability Distribution Function of the underlying statistical process. Estimating the needed aleatory margin (cost, schedule, and performance) can be done with Monte Carlo Simulation or the Method of Moments.
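A minimal sketch of sizing that schedule margin - the triangular duration distribution is an assumption for illustration, not project data:

    # Minimal sketch: schedule margin for aleatory (irreducible) uncertainty,
    # taken as the gap between a high-confidence percentile and the median of
    # a simulated duration.
    import random

    samples = sorted(random.triangular(10, 22, 14) for _ in range(10_000))
    p50, p80 = samples[5_000], samples[8_000]

    print(f"P50 = {p50:.1f} wk, P80 = {p80:.1f} wk, margin = {p80 - p50:.1f} wk")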


So, one more look at the suggestions before examining further the 5 ways of making decisions in the absence of estimates of their impacts and of the cost to achieve those impacts.

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

All risk is probabilistic, based on underlying statistical processes - either the process of lack of knowledge (Epistemic) or the process of natural variability (Aleatory). In the consideration of risk we must incorporate these probabilistic and statistical behaviours into our decision making activities. Since the outcomes of these processes occur in the future, we need to estimate them based on knowledge - or lack of knowledge - of their probability of occurrence. For the naturally occurring variances that have occurred in the past, we need to know how they might occur in the future. To answer these questions, we need a probabilistic model, based on the underlying statistical processes. And since the underlying model is statistical, we need to estimate the impact of this behaviour.

Let's Look At The Five Decision Making Processes

1. Do the most important work first - If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, or find a more valuable niche in your current segment, etc. The focus in this decision-making approach is: validating the new strategy. Note that the goal is not "implement new strategy", but rather "validate new strategy". The difference is fundamental: when trying to validate a strategy you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each experiment.

    • Important work first is good strategy. But importance needs a unit of measure. That unit of measure should be found in the strategy. This is the purpose of the strategy. But the strategy needs units of measure as well. Simply saying do important work first doesn't provide a way to make that decision.
    • The notion of validating versus implementing the strategy is artificial. A read of the Strategy Making literature will clear this up. Strategy for business and especially strategy in IT is a very mature domain, with a long history.
    • One approach to generating the units of measure from the strategy is the Balanced Scorecard, where strategic objectives are mapped to Performance Goals, then to Critical Success Factors, then to Key Performance Indicators. The way to do that is with a Strategy Map, shown below.
    • This is the use of strategy as Porter defines it. 

[Figure: Strategy Map - strategic objectives mapped to Performance Goals, Critical Success Factors, and Key Performance Indicators]

2. Do the Highest Technical Risk First - When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

    • This is likely dependent on the technical and programmatic architecture of the project or product. 
    • We may want to establish a platform on which to build riskier components. A platform that is known and trusted, stable, bug free - before embarking on any high risk development.
    • High risk may mean high cost. So doing risky things first has consequences. What are those consequences? One is risking the budget before it's clear we have a stable platform on which to build follow-on capabilities. Knowing something is high risk may mean high cost, and this requires estimating things that will occur in the future - the cost to achieve and the cost of the consequences.
    • So doing the highest technical risk first is itself a risk that needs to be assessed. Without this assessment, this suggestion has no way of being tested in practice.

3. Do the Easiest Work First - Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, and still deliver concrete, valuable working software in a safe way.

    • This is also dependent on the technical and programmatic architecture of the project or product.
    • It also runs counter to #2, since the highest risk work is not likely to be the easiest to do.
    • These assessments of risk versus work sequence require some sort of trade space analysis, and since the outcomes and their impacts are in the future, estimating them is part of the Analysis of Alternatives approach for any non-trivial project where Systems Engineering guides the work processes.

4. Do the Legal Requirements First - In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first you can start the legal certification for your product before the product is fully implemented, and later - if needed - certify the changes you may still need to make to the original implementation. This allows you to improve significantly the time-to-market for your product. A medical organization that successfully adopted agile used this project decision-making strategy to considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market - gaining a significant advantage over their direct competitors.

    • Medical devices are regulated under 21 CFR Parts 800-1299. The suggestion doesn't reference any regulations for medical software, which ranges from patient check-in at the front desk to surgical devices controlled by software.
    • Developing 21 CFR software components first may not be possible until the foundation on which they are built is established, tested, and verified.
    • This means - Quality Planning, Requirements, Design, Construction or Coding, Testing by the Software Developer, User Site Testing, and Maintenance and Software Changes. 
    • Once the plan - a required plan for validation - is in place, the order of the development will be more visible. 
    • Deciding which components to develop, just because they are impacted by Legal Requirements usually means ALL the components. So this approach - Do The Legal Requirements First - usually means do them all.
    • The notion that "rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market" fails to describe how they knew when they would be ready to test out these ideas. And most importantly, how they were able to go to market in the absence of the certification.
    • As well, what type of testing - early trials, full 21 CFR release, human applications, animal testing, etc. - is not stated. This observation comes with some experience in the medical device domain.
    • A colleague is the CEO of http://acumyn.com/  I'll confirm the processes of validating software from him.

5. Liability Driven Investment - This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.

    • It's not clear why this is called liability. Liability on the balance sheet is an obligation to pay. Deciding what work to do now to generate needed revenue is certainly a strategy. Value Stream Mapping or Impact Mapping is a way to define that. But liability seems to be the wrong term.
    • Not sure how that connects with a securities exchange, or what problem is being solved using the term liabilities. Shorts are obligations to pay in the future when the short is called. Puts and Calls are terms used in stock trading, but developing software products is not trading. The Real Options used by the poster in the past don't exercise the Option, so the liability to pay doesn't seem to connect here.

References

  1. Risk Informed Decision Handbook, NASA/SP-2010-576, Version 1.0, April 2010.
  2. General Principles of Software Validation; Final Guidance for Industry and FDA Staff, US Food and Drug Administration.
  3. Strategy Maps: Converting Intangible Assets into Tangible Outcomes, Robert Kaplan and David Norton, Harvard Business Press.
  4. Estimating Optimal Decision Rules in Presence of Model Parameter Uncertainty, Christopher Joseph Bennett, Vanderbilt University, June 6, 2012.
Related articles: Making Choices in the Absence of Information
Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Mon, 10/20/2014 - 01:08

To achieve great things, two things are needed; a plan, and not quite enough time.
− Leonard Bernstein

[Image: Plan of the Day (CV-41)]

The notion that planning is a waste is common in domains where mission critical, high risk / high reward, must-work type projects do not exist.

Notice the Plan and the planned delivery date. The notion that deadlines are somehow evil goes along with the lack of understanding that a business needs a set of capabilities to be in place on a date in order to start booking the value in the general ledger.

Plans are strategies. Strategies are hypotheses. Hypotheses are tested with experiments. Experiments show from actual data what the outcome of the work is. These outcomes are used as feedback to take corrective actions at the strategic and tactical levels of the project.

This is called Closed Loop Control. Set the strategy; define the units of measure for the desired outcome - Measures of Effectiveness and Measures of Performance. Perform work and assess these measures. Determine the variance between the planned outcomes and the needed outcomes. Take corrective action by adjusting the plan to keep the project moving toward the strategic goals. For Closed Loop Control, we need:

  • A steering target for some future state.
  • A measure of the current state.
  • The variance between current and future.
  • Corrective actions to put the project back on track toward the desired state.
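A sketch of that loop in code - the plan values, threshold, and corrective action are placeholders for illustration, not a prescription:

    # Minimal sketch of closed-loop project control: compare the measure of
    # the current state against the steering target each period, then act on
    # the variance. All values are illustrative.
    def closed_loop(plan, actuals, threshold=0.10):
        # plan / actuals: per-period planned and measured values
        for period, (planned, actual) in enumerate(zip(plan, actuals), start=1):
            variance = (planned - actual) / planned   # variance to the target
            if abs(variance) > threshold:
                print(f"period {period}: variance {variance:+.0%} -> corrective action")
            else:
                print(f"period {period}: on track ({variance:+.0%})")

    closed_loop(plan=[10, 20, 30, 40], actuals=[9, 17, 31, 33])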

Related articles: Project Risk Management, PMBOK, DoD PMBOK and Edmund Conrow's Book
Categories: Project Management

What Is Systems Architecture And Why Should We Care?

Herding Cats - Glen Alleman - Sat, 10/18/2014 - 20:10

If we were setting out to build a home, we would first lay out the floor plans, grouping each room by function and placing structural items within each room according to their best utility. This is not an arbitrary process – it is architecture. Moving from home design to IT system design does not change the process. Grouping data and processes into information systems creates the rooms of the system architecture. Arranging the data and processes for the best utility is the result of deploying an architecture. Many of the attributes of building architecture are applicable to system architecture. Form, function, best use of resources and materials, human interaction, reuse of design, longevity of the design decisions, robustness of the resulting entities are all attributes of well designed buildings and well designed computer systems. [1]

In general, an architecture is a set of rules that defines a unified and coherent structure consisting of constituent parts and connections that establish how those parts fit and work together. An architecture may be conceptualized from a specific perspective focusing on an aspect or view of its subject. These architectural perspectives themselves can become components in a higher–level architecture serving to integrate and unify them into a higher level structure.

The architecture must define the rules, guidelines, or constraints for creating conformant implementations of the system. While this architecture does not specify the details on any implementation, it does establish guidelines that must be observed in making implementation choices. These conditions are particularly important for component architectures that embody extensibility features to allow additional capabilities to be added to previously specified parts. [2] This is the case where Data Management is the initial deployment activity followed by more complex system components.
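As an illustration of rules that conformant implementations must observe - the interface below is invented for this sketch, not taken from the cited sources - an architecture can be expressed as a contract components must satisfy:

    # Minimal sketch: an architectural rule expressed as a contract.
    # Components may be implemented any way their builders like, but must
    # declare what they provide and require; a connection rule checks the
    # fit. Names are illustrative.
    from abc import ABC, abstractmethod

    class Component(ABC):
        @abstractmethod
        def provides(self):   # set of data/services this component offers
            ...

        @abstractmethod
        def requires(self):   # set of data/services it needs from others
            ...

    def conforms(components):
        # Connection rule: every requirement is met by some provider.
        provided = set().union(*(c.provides() for c in components))
        return all(c.requires() <= provided for c in components)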

By adopting a system architecture motivation as the basis for the IT Strategy, several benefits result:

  • Business processes are streamlined – a fundamental benefit of building enterprise information architecture is the discovery and elimination of redundancy in the business processes themselves. In effect, it can drive the reengineering of the business processes it is designed to support. This occurs during the construction of the information architecture. By revealing the different organizational views of the same processes and data, any redundancies can be documented and dealt with. The fundamental approach to building the information architecture is to focus on data, process, and their interaction.
  • Systems information complexity is reduced – the architectural framework reduces information system complexity by identifying and eliminating redundancy in data and software. The resulting enterprise information architecture will have significantly fewer applications and databases as well as a resulting reduction in intersystem links. This simplification also leads to significantly reduced costs. Some of those recovered costs can and should be reinvested into further information system improvements. This reinvestment activity becomes the raison d'être for the enterprise–wide system deployment.
  • Enterprise–wide integration is enabled through data sharing and consolidation – the information architecture identifies the points to deploy standards for shared data. For example, most Kimball business units hold a wealth of data about products, customers, and manufacturing processes. However, this information is locked within the confines of the business unit specific applications. The information architecture forces compatibility for shared enterprise data. This compatible information can be stripped out of operational systems, merged to provide an enterprise view, and stored in data repositories. In addition, data standards streamline the operational architecture by eliminating the need to translate or move data between systems. A well–designed architecture not only streamlines the internal information value chain, but it can provide the infrastructure necessary to link information value chains between business units or allow effortless substitution of information value chains.
  • Rapid evolution to new technologies is enabled – client / server and object–oriented technology revolves around the understanding of data and the processes that create and access this data. Since the enterprise information architecture is structured around data and process and not redundant organizational views of the same thing, the application of client / server and object–oriented technologies is much cleaner. Attempting to move to these new technologies without an enterprise information architecture will result in the eventual rework of the newly deployed system.

[1] The Timeless Way of Building, C. Alexander, Oxford University Press, 1979.

[2] “How Architecture Wins Technology Wars,” C. Morris and C. Ferguson, Harvard Business Review, March–April 1993, pp. 86–96.

Categories: Project Management

Quote of the Month October 2014

From the Editor of Methods & Tools - Thu, 10/16/2014 - 11:48
Minimalism also applies in software. The less code you write, the less you have to maintain. The less you maintain, the less you have to understand. The less you have to understand, the less chance of making a mistake. Less code leads to fewer bugs. Source: “Quality Code: Software Testing Principles, Practices, and Patterns”, Stephen Vance, Addison-Wesley

The Results of My First OKRs (Running)

NOOP.NL - Jurgen Appelo - Wed, 10/15/2014 - 21:34

A popular topic in the new one-day Management 3.0 workshop is the OKRs system for performance measurement. (See Google’s YouTube video here.) Instead of explaining what OKRs are, I will just share with you the result of my first iteration. If you read this, you will get the general idea of how the OKRs system works.

The post The Results of My First OKRs (Running) appeared first on NOOP.NL.

Categories: Project Management

Podcast with Cesar Abeid Posted

Cesar Abeid interviewed me: Project Management for You with Johanna Rothman. We talked about my tools for project management, whether you are managing a project for yourself or managing projects for others.

We talked about how to use timeboxes in the large and small, project charters, influence, servant leadership, a whole ton of topics.

I hope you listen. Also, check out Cesar’s kickstarter campaign, Project Management for You.

Categories: Project Management

Connecting the Dots Between Technical Performance and Earned Value Management

Herding Cats - Glen Alleman - Wed, 10/15/2014 - 15:36

We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.

Categories: Project Management

Project Risk Management, PMBOK, DoD PMBOK and Edmund Conrow’s Book

Herding Cats - Glen Alleman - Tue, 10/14/2014 - 17:08

In a recent post, "Who Is Ed Conrow?", a responder asked about the differences between the PMBOK® risk approach and the DoD PMBOK risk approach, as well as for a summary of the book Effective Risk Management: Some Keys to Success by Edmund Conrow. Ed worked the risk management processes for a NASA proposal I was on. I was the IMP/IMS lead, so integrating risk with the Integrated Master Plan / Integrated Master Schedule in the manner he prescribed was a life changing experience. I was naive before, but no longer after that proposal won ~$7B for the client.

Let me start with a few positioning statements:

  1. Project risk management is a topic with varying levels of understanding, interest, and applicability. The PMI PMBOK® provides a “starter kit” view of project risk. It covers the areas of risk management but does not have guidance on actually “doing” risk management. This results many times in the false sense that “if we’re following PMBOK® then we’re OK.”
  2. The separation of technical risk from programmatic risk is not clearly discussed in PMBOK®. While not a "show stopper" issue for some projects, programmatic risk management is critical for enterprise class projects. By enterprise I mean ERP, large product development, large construction, and aerospace and defense class programs. In fact, DI-MGMT-81861 mandates programmatic risk management processes for procurements greater than $20M. $20M sounds like a very large sum of money for the typical internal IT development project. It hardly keeps the lights on for an aerospace and defense program.
  3. The concepts around schedule variances are misunderstood and misapplied in almost every discussion of risk management in the popular literature. The common red-herring is the ineffective use of Critical Path and PERT. This of course is simply a false statement in domains outside IT or small low risk projects. Critical Path, PERT and Monte Carlo Simulation are mandated by government procurement guidelines. Not that these guides make them “correct.” What makes them correct is that programs measurably benefit from their application. This discipline is called Program Planning and Controls (PP&C) and is a profession in aerospace, defense, and large construction. No amount of “claiming the processes don’t work” removes the documented facts they do, when properly applied. Anyone wishing to learn more about these approaches to programmatic risk management need only look to the NASA, Department of Energy, and Department of Defense Risk Management communities.

With all my biases out of the way, let’s look at the DoD PMBOK®

  1. First, the DoD PMBOK® is free and can be found at http://www.dau.mil/pubs/gdbks/pmbok.asp. It appears to be gone so now you can find it here. This is a US Department of Defense approved document. It provides supplemental guidance to the PMI PMBOK®, but in fact replaces a large portion of PMI PMBOK®.
  2. Chapter 11 of DoD PMBOK® is Risk. It starts with the Risk Management Process areas – in Figure 11-1, page 125. (I will not put them here, because you should download the document and turn to that page.) The diagram contains six (6) process areas - the same number as the PMI PMBOK®. But what's missing from PMI PMBOK® and present in DoD PMBOK® is how these processes interact to provide a framework for Risk Management.
  3.  There are several missing critical concepts in PMI PMBOK® that are provided in DoD PMBOK®.
    • The Risk Management structure in Figure 11-1 is the first. Without the connections between the process areas in some form other than "linear," the management of technical and programmatic risk turns into a "check list." This is the approach of PMI PMBOK® - to provide guidance on the process areas and leave it up to the reader to develop a process around these areas. This is also the approach of CMMI. This is an area too important to leave up to the reader.
    • The concept of the Probability and Impact matrix is fatally flawed in PMI PMBOK®. It turns out you cannot multiply the probability of occurrence by the impact of that occurrence. These "numbers" are not numbers in the sense of integer or real valued numbers. They are probability distributions themselves. Multiplication is not an operation that can be applied directly to probability distributions. Distributions are combined through integral equations: the multiplication operator × is actually the convolution operator ⊗ (see the equations after this list).
    • The second fatal flaw of the PMI PMBOK® approach to probability of occurrence and impact is that these “numbers” are uncalibrated cardinal values. That is, they have no actual meaning, since their “units of measure” are not calibrated.
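
To make the first flaw concrete, here is a minimal sketch (my own illustration, with made-up numbers, not from either PMBOK®) comparing the naive point-score multiplication with a Monte Carlo treatment that respects the fact that both probability and impact are distributions:

    import random

    random.seed(42)

    # Naive PMBOK-style point score: one uncalibrated scalar.
    p_point, impact_point = 0.4, 100_000        # "probability" and "impact in $"
    naive_score = p_point * impact_point        # 40,000 -- a single meaningless number

    # Monte Carlo: both quantities are uncertain, so sample them as distributions.
    N = 100_000
    losses = []
    for _ in range(N):
        p = random.betavariate(4, 6)            # occurrence probability, mean ~0.4
        impact = random.triangular(50_000, 250_000, 90_000)  # low, high, most likely
        losses.append(impact if random.random() < p else 0.0)

    losses.sort()
    mean_loss = sum(losses) / N
    p80_loss = losses[int(0.80 * N)]            # 80th-percentile loss

    print(f"naive point score : {naive_score:,.0f}")
    print(f"simulated mean    : {mean_loss:,.0f}")
    print(f"80th percentile   : {p80_loss:,.0f}")

Under these assumed distributions, the simulated mean differs from the point score, and the 80th-percentile loss, something a single number cannot express, becomes visible.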

Page 124 of DoD PMBOK® summarizes the principles of Risk Management as developed in two seminal sources.

  1. Effective Risk Management: Some Keys to Success, Edmund Conrow, American Institute of Aeronautics and Astronautics, 2000.
  2. Risk Management Guide for DoD Acquisition, Sixth Edition (Version 1.0), August 2006, US Department of Defense. It is part of the Defense Acquisition Guide (§11.4) and is published by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OUSD(AT&L)), Systems and Software Engineering / Enterprise Development.

Now all these pedantic references are here for a purpose. This is how people who manage risk for a living manage risk. By risk, I mean technical risk that results in loss of mission or loss of life, and programmatic risk that results in the loss of billions of taxpayer dollars. These organizations are serious enough about risk management not to let the individual project or program manager interpret the vague notions in PMI PMBOK®. These may appear to be harsh words, but the road to the management of enterprise class projects is littered with disasters. You can read every day about IT projects that are 100% over budget and 100% behind schedule. From private firms to the US Government, the trail of destruction is front page news.

A Slight Diversion – Why are Enterprise Projects So Risky?

There are many reasons for failure – too many to mention – but one is the inability to identify and mitigate risk. The words “identify” and “mitigate” sound simple. They are listed in the PMI PMBOK® and the DoD PMBOK®. However, here is where the problem starts:

  1. The process of separating risk from issue.
  2. Classifying the statistical nature of the risk.
  3. Managing the risk process independently from project management and technical development.
  4. Incorporating the technical risk mitigation processes into the schedule.
  5. Modeling the impact of technical risk on programmatic risk.
  6. Modeling the programmatic risk independent from the technical risk.

Using Conrow as a Guide

Here is one problem: when you use the complete phrase “Project Risk Management” with Google, you get ~642,000 hits. With so many books, academic papers, and commercial articles on Risk Management, where do we start? Ed Conrow’s book is probably not the starting point for learning how to practice risk management on your project; however, it might be the ending point. If you are in the software development business, a good starting point is Managing Risk: Methods for Software Systems Development, Elaine M. Hall, Addison Wesley, 1998. Another, broader approach is the Continuous Risk Management Guidebook, Software Engineering Institute, August 1996. While these two sources focus on software, they provide the foundation for the discussion of risk management as a discipline.

There are public sources as well:

  1. Start with the NASA Risk Management page, http://www.hq.nasa.gov/office/codeq/risk/index.htm
  2. Software Engineering Institute’s Risk Management page, http://www.sei.cmu.edu/risk/index.html
  3. A starting point for other resources from NASA, http://www.hq.nasa.gov/office/hqlibrary/ppm/ppm22.htm
  4. Department of Energy’s Risk Management Handbook, http://www.oecm.energy.gov/Portals/2/RiskManagement.pdf (which uses the DoD Risk Process Map described above)

However, care needs to be taken once you go outside the government boundaries. There are many voices plying the waters of “risk management,” as well as other voices with “axes to grind” regarding project management methods and risk management processes. The result is often a confusing message full of anecdotes, analogies, and alternative approaches to the topic of Risk Management.

Conrow in his Full Glory

Before starting into the survey of the Conrow book, let me state a few observations:

  1. This book is tough going. I mean really tough going. Tough in the way a graduate statistical mechanics book is tough going, or a graduate text on the micro-economics of managerial finance is tough going. It is “professional grade” stuff. By “professional grade” I mean written by professionals to be used by professionals.
  2. Not every problem has the same solution need. Conrow’s solutions may not be appropriate for a software development project with 4 developers and a customer in the same room. But for projects that have multiple teams, locations, and stakeholders, some type of risk management is needed.
  3. The book is difficult to read for another reason. It assumes you have “a reasonable understanding of the issues” around risk management. This is not a tutorial-style or “risk management for dummies” book; it is a technical reference book. There is little in the way of introductory material to bring the reader up to speed before presenting the material.

From the introduction:

The purpose of this book is two-fold: first, to provide key lessons learned that I have documented from performing risk management on a wide variety of programs, and second, to assist you, the reader, in developing and implementing an effective risk management process on your program.

A couple of things here. One is the practical experience in risk management. Many in the risk management “talking” community have limited experience with risk management in the way Ed does it. I first met Ed on a proposal for an $8B Manned Spaceflight program. He was responsible for the risk strategy and for conveying that strategy in the proposal. The proposal resulted in an award, and now our firm provides Program Planning and Controls for a major subsystem of the program. In this role, programmatic and technical risk management is part of the Statement of Work flowed down from the prime contractor.

Second, Ed is a technical advisor to the US Arms Control and Disarmament Agency, as well as a consultant to industry and government on risk management. These “resume” items are meant to show that the practice of risk management is just that – a practice. Speaking about risk management and doing risk management on high risk programs are two different things.

One of Ed’s principal contributions to the discipline was the development of a micro-economic framework of risk management in which design feasibility (or technical performance) is traded against cost and schedule.
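
As a toy illustration of that idea (my own construction, with invented numbers; Conrow’s actual framework is far richer), one can picture screening feasible design options against cost and schedule constraints, then trading among the survivors for performance:

    # Candidate design options: (name, performance score, cost $M, schedule months).
    # All values here are invented for illustration.
    options = [
        ("baseline design",  0.70,  80, 18),
        ("upgraded sensors", 0.85, 110, 22),
        ("new architecture", 0.95, 150, 30),
    ]

    budget, deadline = 120, 24   # programmatic constraints

    # Keep only options that are feasible on cost and schedule ...
    feasible = [o for o in options if o[2] <= budget and o[3] <= deadline]

    # ... then trade among them for the best technical performance.
    best = max(feasible, key=lambda o: o[1])
    print(best)   # ('upgraded sensors', 0.85, 110, 22)

The point is only that feasibility, cost, and schedule are traded together, rather than optimizing technical performance in isolation.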

In the end, this is a reference text for the process of managing the risk of projects, written by a highly respected practitioner.

What does the Conrow Book have to offer over the Standard approach?

Ed’s book contains the current “best practices” for managing technical and programmatic risk. These practices are used on high risk, high value programs. The guidelines in Ed’s book are generally applicable to many other classes of projects as well. But there are several critical elements that differentiate this approach from the pedestrian approach to risk management.

  1. The introduction of the “ordinal risk scale.” This approach is dramatically different from the PMI PMBOK® description of risk, in which the probability of occurrence is multiplied by the consequences to produce a “risk rating.” Neither the probability of occurrence nor the consequences are calibrated in any way. The result is a meaningless number that may satisfy the C-Level that “risk management” is being done on the program. By ordinal risk scales is meant a classification of risk – say A, B, C, D, E, F – whose ranks are descriptive in nature, not just numbers. By the way, the use of letters is intentional. If numbers are used for ordinal risk ranks, there is a tendency to do arithmetic on them: compute the average risk rank, or multiply them by the consequences. Letters remove this notion and prevent the first failure of the common risk management approach – producing an overall risk measure.

The ordinal approach works like this. Ed describes several classes of risk scales, including: maturity, sufficiency, complexity, uncertainty, estimative probability, and probability based scales.

A maturity risk scale would be:

Definition                                                    Scale Level
Basic principles observed                                     E
Concept design analyzed for performance                       D
Breadboard or brassboard validation in relevant environment   C
Prototype passes performance tests                            B
Item deployed and operational                                 A

The critical concept is to relate the risk ordinal value to an objective measure. For a maturity risk assessment, some “calibration” of what it means to have the “basic principles observed” must be developed. This approach can be applied to the other classes – sufficiency, complexity, uncertainty, estimative probability, and probability based scales.

It’s the estimative probability scale that is important to the cost and schedule people in our PP&C practice. The estimative probability scale attempts to relate a word to a probability value – “high” to something like 0.85, for example. An ordinal estimative probability scale, using point estimates derived from a statistical analysis of survey data, might look like this:

Definition          Median probability value   Scale Level
Certain             0.95                       E
High                0.85                       D
Medium              0.45                       C
Low                 0.15                       B
Almost no chance    0.05                       A

Calibrating these risk scales is the primary analysis task in building a risk management system. What does it mean to have a “medium” risk in the specific problem domain?
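
As a minimal sketch (my own illustration, not code from Conrow’s book), the estimative probability table above could be encoded so that the ranks stay as letters, preventing accidental arithmetic, while each rank still carries its calibrated definition and median value:

    from enum import Enum

    class EstimativeProbability(Enum):
        """Ordinal ranks from the table above; values are (definition, median)."""
        A = ("Almost no chance", 0.05)
        B = ("Low", 0.15)
        C = ("Medium", 0.45)
        D = ("High", 0.85)
        E = ("Certain", 0.95)

        @property
        def definition(self) -> str:
            return self.value[0]

        @property
        def median_probability(self) -> float:
            return self.value[1]

    risk = EstimativeProbability.D
    print(risk.name, risk.definition, risk.median_probability)   # D High 0.85

    # Because ranks are enum members, an expression like
    #   EstimativeProbability.D * 100_000
    # raises a TypeError -- exactly the arithmetic the ordinal approach forbids.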

  2. The second item is the formal use of a risk management process. Simply listing the risk process areas – as is done in PMBOK® – is not only poor project management practice, it is poor risk management practice. The processes to be used are shown on page 135 of http://www.dau.mil/pubs/gdbks/pmbok.asp. The application of these processes is described in detail. No process area is optional. All are needed, and all need to be performed in the right relationship to each other.

These two concepts are the ones that changed the way I perform risk management on the programs I’m involved with, and how we advise our clients. They are paradigm changing concepts. No more simple-minded arithmetic with probabilities and consequences. No more uncalibrated risk scales. No more tolerating those who claim PERT, Critical Path, and Monte Carlo are unproven, obsolete, or “wrong headed” approaches.

Get Ed’s book. It’ll cost way too much when compared to the “paperback” approach to risk. But for those tasked with “managing risk,” this is the starting point.

 

Categories: Project Management

Practices, Not Platitudes

NOOP.NL - Jurgen Appelo - Tue, 10/14/2014 - 15:27

I recently took part in a conversation about compensation of employees. Some readers offered criticism of the Merit Money practice described in my new Workout book, claiming that Merit Money is just another way to incentivize people. The feedback I received was, “Money doesn’t motivate people,” followed by, “Don’t incentivize people” and “Just pay people well.”

Let me explain why I think this advice is useless.

The post Practices, Not Platitudes appeared first on NOOP.NL.

Categories: Project Management

5 Decision-Making Strategies that require no estimates

Software Development Today - Vasco Duarte - Tue, 10/14/2014 - 04:00

One of the questions that I and other #NoEstimates proponents hear quite often is: How can we make decisions on what projects we should do next, without considering the estimated time it takes to deliver a set of functionality?

Although this is a valid question, I know there are many alternatives to the assumptions implicit in this question. These alternatives - which I cover in this post - have the side benefit of helping us focus on the most important work to achieve our business goals.

Below I list 5 different decision-making strategies (aka decision-making models) that can be applied to our software projects without requiring a long-winded, and error-prone, estimation process up front.

What do you mean by decision-making strategy?

A decision-making strategy is a model or approach that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve the business goals you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Some possible goals for business strategies might be:

  • Growth: growing the number of customers or users, growing revenues, growing the number of markets served, etc.
  • Market segment focus/entry: entering a new market or increasing your market share in an existing market segment.
  • Profitability: improving or maintaining profitability.
  • Diversification: creating new revenue streams, entering new markets, adding products to the portfolio, etc.

Other types of business goals are possible, and it is also possible to mix several goals in one business strategy.

Different decision-making strategies should be considered for different business goals. The 5 different decision-making strategies listed below include examples of business goals they could help you achieve. But before going further, we must consider one key aspect of decision making: Risk Management.

The two questions that I will consider when defining a decision-making strategy are:

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

Are you taking into account the risks inherent in the decisions made with those frameworks?

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

A different risk profile requires different decisions

Each decision we make has an impact on the following risk dimensions:

  • Failing to meet the market needs (the risk of what).
  • Increasing your technical risks (the risk of how).
  • Contracting or committing to work which you are not able to staff or assign the necessary skills (the risk of who).
  • Deviating from the business goals and strategy of your organization (the risk of why).

The categorization above is not the only one possible. However, it is very practical, and it maps well to decisions regarding which projects to invest in.

There may be good reasons to accept increasing your risk exposure in one or more of these categories, as long as the increased exposure does not go beyond your acceptable risk profile. For example, you may accept a larger exposure to technical risks (the risk of how) if you believe that the project has a very low risk of missing market needs (the risk of what).

An example would be migrating an existing product to a new technology: you understand the market (the product has been meeting market needs), but you will take a risk with the technology with the aim of meeting some other business need.
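
As a toy sketch of this reasoning (my own construction, not a method from the post), one could encode an acceptable risk profile across the four dimensions above and screen candidate decisions against it:

    from dataclasses import dataclass, field

    @dataclass
    class Proposal:
        name: str
        goal_fit: float                            # 0..1, judged contribution to business goals
        risks: dict = field(default_factory=dict)  # dimension -> exposure, 0..1

    # The acceptable risk profile: maximum tolerated exposure per dimension.
    # These caps are invented for illustration.
    ACCEPTABLE = {"what": 0.6, "how": 0.8, "who": 0.5, "why": 0.3}

    def within_profile(p: Proposal) -> bool:
        return all(p.risks.get(dim, 0.0) <= cap for dim, cap in ACCEPTABLE.items())

    proposals = [
        Proposal("rewrite on new stack",  0.4, {"how": 0.9, "what": 0.2}),
        Proposal("enter adjacent market", 0.8, {"what": 0.5, "why": 0.2}),
    ]

    # Keep proposals inside the risk profile, then rank them by goal fit.
    viable = sorted((p for p in proposals if within_profile(p)),
                    key=lambda p: p.goal_fit, reverse=True)
    for p in viable:
        print(p.name, p.goal_fit)    # enter adjacent market 0.8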

Aligning decisions with business goals: decision-making strategies

When making decisions regarding what project or work to undertake, we must consider the implications of that work in our business or strategic goals, therefore we must decide on the right decision-making strategy for our company at any time.

Decision-making Strategy 1: Do the most important strategic work first

If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine-tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, find a more valuable niche in your current segment, etc. The focus in this decision-making approach is validating the new strategy. Note that the goal is not “implement new strategy,” but rather “validate new strategy.” The difference is fundamental: when trying to validate a strategy, you will want to create short-term experiments that are designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run short-term experiments and re-prioritize your backlog of experiments based on the results of each one.

Decision-making Strategy 2: Do the highest technical risk work first

When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

Decision-making Strategy 3: Do the easiest work first

Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.

Decision-making Strategy 4: Do the legal requirements first

In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first, you can start the legal certification for your product before the product is fully implemented, and later, if needed, certify the changes you may still need to make to the original implementation. This allows you to significantly improve the time-to-market for your product. A medical organization that successfully adopted agile used this decision-making strategy to considerable business advantage: they were able to start selling their product many months ahead of the scheduled release. They could go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market, gaining a significant advantage over their direct competitors.

Decision-making Strategy 5: Liability driven investment model

This approach is borrowed from an investment strategy, used for example by pension funds, that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach, we make decisions with the aim of generating the cash flows needed to fund future liabilities.
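
As a minimal sketch of the idea (my own construction with invented numbers, not a method from the post), one can check whether the cash generated by shipped work covers each upcoming liability:

    from dataclasses import dataclass

    @dataclass
    class WorkItem:
        name: str
        expected_monthly_revenue: float   # rough expected value once shipped

    liabilities = {1: 8_000, 2: 8_000, 3: 12_000}   # month -> cash needed
    candidates = [
        WorkItem("premium tier", 6_000),
        WorkItem("annual billing", 4_000),
        WorkItem("marketplace listing", 2_500),
    ]

    # Greedy plan: ship the highest-revenue item first; one item ships per
    # month (a deliberately crude throughput assumption, not an estimate).
    plan = sorted(candidates, key=lambda w: w.expected_monthly_revenue, reverse=True)

    cash, shipped = 10_000.0, []
    for month in sorted(liabilities):
        if month - 1 < len(plan):
            shipped.append(plan[month - 1])          # item ships at start of month
        cash += sum(w.expected_monthly_revenue for w in shipped)
        cash -= liabilities[month]
        status = "ok" if cash >= 0 else "SHORTFALL"
        print(f"month {month}: balance {cash:>9,.0f}  {status}")

If any month shows a shortfall, the plan is reordered, which is a decision about what to do next, made from cash flows rather than from task-level estimates.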

These are just 5 possible investment or decision-making strategies that can help you make project decisions, or even business decisions, without having to invest in estimation upfront.

None of these decision-making strategies guarantees success, but then again nothing does except hard work, perseverance and safe experiments!

In the upcoming workshops (Helsinki on Oct 23rd, Stockholm on Oct 30th) that Woody Zuill and I are hosting, we will discuss these and other decision-making strategies that you can start applying immediately. We will also discuss how these decision-making models are applicable to day-to-day decisions as much as to strategic decisions.

If you want to know more about what we will cover in our world-premiere #NoEstimates workshops don't hesitate to get in touch!

Your ideas about decision-making strategies that do not require estimation

You may have used other decision-making strategies that are not covered here. Please share your stories and experiences below so that we can start collecting ideas on how to make good decisions without the need to invest time and money into a wasteful process like estimation.