Software Development Blogs: Programming, Software Testing, Agile Project Management

Feed aggregator

Agile Risk Management: Ad-hoc Processes

The CFO here wants to move away from vague generic invoices because he feels (rightly so) that the agency interprets the relationship as having carte blanche…

There are many factors that cause variability in the performance of projects and releases, including complexity, the size of the work, people and process discipline. Consistency and predictability are difficult when the process is being made up on the spot. Agile has come to reflect (at least in practice) a wide range of approaches, from an emphasis on faster delivery to more structured frameworks such as Scrum, Extreme Programming and the Scaled Agile Framework (SAFe). Lack of at least some structure nearly always increases the variability in delivery and therefore the risk to the organization.

I recently received the following note from a reader (and listener to the podcast) who will remain nameless (all names redacted at the request of the reader).

“All of the development is outsourced to a company with many off-shore and a few on-site resources.

The development agency has, somehow, sold the business on the idea that because they are “Agile”, their ability to dynamically/quickly react and implement requires a lack of formal “accounting.”  The CFO here wants to move away from vague generic invoices because he feels (rightly so) that the agency interprets the relationship as having carte blanche to work on anything and everything ever scratched out on a cocktail napkin without proper project charters, buy-in, and SOW.”

This observation reflects the risk an ill-defined process poses to the organization in terms of the value delivered to the business, financial exposure and customer satisfaction. Repeatability and consistency of process are not dirty words.

Scrum and other Agile frameworks are lightweight empirical models. At their most basic level, they can be summarized as:

  1. Agree upon what you are going to do (build a backlog),
  2. Plan work directly ahead (sprint/iteration planning),
  3. Build a little bit while interacting with the customer (short time box development),
  4. Review what has been done with the stakeholders (demonstration),
  5. Make corrections to the process (retrospective),
  6. Repeat as needed until the goals of the work are met.
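As a rough illustration, the six steps above can be sketched as a loop. This is a minimal sketch; all names are hypothetical, not part of any framework's API:

```python
# Minimal sketch of the empirical loop described above.

def run_project(backlog, goal_met, sprint_size=3):
    """Repeat plan / build / review until the goal of the work is met."""
    done = []
    while backlog and not goal_met(done):
        # 2. Plan work directly ahead: pull a small batch from the backlog.
        sprint, backlog = backlog[:sprint_size], backlog[sprint_size:]
        # 3. Build a little bit within a time box.
        increment = [f"built:{item}" for item in sprint]
        # 4./5. Demonstrate the increment and accumulate it; then repeat.
        done.extend(increment)
    return done

result = run_project(["login", "search", "checkout"], lambda d: len(d) >= 3)
print(result)
```

The retrospective (step 5) is where a real team would adjust `sprint_size` or the process itself between iterations.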

Deming would have recognized the embedded plan-do-check-act cycle. There is nothing ad-hoc about the framework even though it is not overly prescriptive.

I recently toured a research facility for a major snack manufacturer. The people in the labs were busy dreaming up the next big snack food. Personnel were involved in both “pure” and applied research, both highly creative endeavors. When I asked about the process they were using, what was described was something similar to Scrum: creativity being pursued within a framework to reduce risk.

Ad-hoc software development and maintenance was never in style. In today’s business environment, where software is integral to the delivery of value, winging the development process increases the risk of an already risky proposition.

Categories: Process Management

Intellectual Honesty

Software Requirements Blog - Thu, 10/16/2014 - 17:00
During a recent discussion in the office, the term “intellectual honesty” was bandied about. At Seilevel, intellectual honesty is part of our stated core values, but it’s a term that’s easily misunderstood and misused. Feeling that I needed to understand better what this term really means, I hit the search engines hard. I also, as I […]
Categories: Requirements

Quote of the Month October 2014

From the Editor of Methods & Tools - Thu, 10/16/2014 - 11:48
Minimalism also applies in software. The less code you write, the less you have to maintain. The less you maintain, the less you have to understand. The less you have to understand, the less chance of making a mistake. Less code leads to fewer bugs. Source: “Quality Code: Software Testing Principles, Practices, and Patterns”, Stephen Vance, Addison-Wesley

Agile Risk Management: Dealing With Size

Overall project size influences variability.

Risk is a reflection of possibilities, things that could happen if the stars align. Projects are therefore highly influenced by variability. There are many factors that influence variability, including complexity, process discipline, people and the size of the work. The impact of size is felt in two separate but equally important ways: the size of the overall project and the size of any particular unit of work.

Overall project size influences variability by increasing the sheer number of moving parts that have to relate to each other. As an example, the assembly of an automobile is a large endeavor and is the culmination of a number of relatively large subprojects. Any significant variance in how the subprojects are assembled along the path of building the automobile will cause problems in the final deliverable. Large software projects require extra coordination, testing and integration to ensure that all of the pieces fit together, behave properly and deliver the functionality customers and stakeholders expect. All of these extra steps increase the possibility of variance.

Similarly, large pieces of work (user stories in Agile) cause the problems noted for large projects, but at the team level. For example, when a piece of work enters a sprint, the first step in transforming that story into value is planning. Large pieces of work are more difficult to plan, if for no other reason than that they take longer to break down into tasks, increasing the likelihood that something will be overlooked and generate a “gotcha” later in the sprint.

Whether at a project or sprint level, smaller is generally simpler, and simpler generates less variability. There are a number of techniques for managing size.

  1. Short, complete sprints or iterations. The impact of time boxing on reducing project size has been discussed and understood in the mainstream since the 1990s (see Hammer and Champy, Reengineering the Corporation). Breaking a project into smaller parts reduces the overhead of coordination and provides faster feedback and more synchronization points, which reduces variability. The duration of a sprint acts as a constraint on the amount of work that can be taken into a sprint and reasonably be completed.
  2. Team sizes of 5 to 9 are large enough to tackle real work while maintaining the stable team relationships needed to coordinate the development and integration of functionality that can potentially be shipped at the end of each sprint. Team size constrains the amount of work that can enter a sprint and be completed.
  3. Splitting user stories is a mechanism to reduce the size of a specific piece of work so that it can be completed faster with fewer of the potential complications that cause variability. The process of splitting user stories breaks stories down into smaller independent pieces (INVEST) that can be developed and tested faster. Smaller stories are less likely to block the entire team if anything goes wrong. This reduces variability and generates feedback more quickly, thereby reducing the potential for large batches of rework if the solution does not meet stakeholder needs. Small stories increase flow through the development process, reducing variability.
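The effect of overall size on variability can be illustrated with a toy Monte Carlo simulation. The uniform 1-3 day task-duration model below is an invented assumption, purely for illustration:

```python
import random

random.seed(42)

def total_duration(n_tasks, trials=2000):
    """Monte Carlo estimate of the mean and spread of a project's total
    duration, assuming each task takes 1-3 days uniformly at random."""
    totals = [sum(random.uniform(1, 3) for _ in range(n_tasks))
              for _ in range(trials)]
    mean = sum(totals) / trials
    sd = (sum((t - mean) ** 2 for t in totals) / trials) ** 0.5
    return mean, sd

small_mean, small_sd = total_duration(5)    # small project: 5 tasks
large_mean, large_sd = total_duration(50)   # large project: 50 tasks
# The absolute spread (standard deviation) of the delivery date
# grows with the number of moving parts.
print(small_sd, large_sd)
```

Even in this simplistic model, the larger project's completion date varies over a much wider range of days, which is the variability the techniques above are meant to constrain.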

I learned many years ago that supersizing fries at the local fast food establishment was a bad idea in that it increased the variability in my caloric intake (and waistline). Similarly, large projects are subject to increased variability. There are just too many moving parts, which leads to variability and risk. Large user stories have exactly the same issues as large projects, just on a smaller scale. The Agile techniques of short sprints and small team size provide constraints so that teams can control the size of work they are considering at any point in time. Teams need to take the additional step of breaking down stories into smaller pieces to continue to minimize the potential impact of variability.

Categories: Process Management

The Results of My First OKRs (Running)

NOOP.NL - Jurgen Appelo - Wed, 10/15/2014 - 21:34

A popular topic in the new one-day Management 3.0 workshop is the OKRs system for performance measurement. (See Google’s YouTube video here.) Instead of explaining what OKRs are, I will just share with you the result of my first iteration. If you read this, you will get the general idea of how the OKRs system works.

The post The Results of My First OKRs (Running) appeared first on NOOP.NL.

Categories: Project Management

Big data and analytics – Interesting tidbits for business analysts and product managers

Software Requirements Blog - Wed, 10/15/2014 - 20:12
I’m at Strata+Hadoop World today as a first timer as part of the data driven business day tutorial. I got to present in the middle of it on requirements analytics. But this whole day is awesome, like a crash course in big data and kinds of results it can get you. The schedule is here […]
Categories: Requirements

Testing CDN and geolocation with WebPageTest

Agile Testing - Grig Gheorghiu - Wed, 10/15/2014 - 19:31
Assume you want to migrate to a new CDN provider. Eventually you'll have to point your site's hostname as a CNAME to a domain name handled by the CDN provider. To test this setup before you put it in production, the usual way is to get an IP address corresponding to the CDN provider's domain, then associate your site's hostname with that IP address in your local /etc/hosts file.
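A small helper for the /etc/hosts approach might look like this. The hostnames are placeholders; substitute your own site and your CDN provider's test domain:

```python
import socket

def hosts_override_line(site_host: str, cdn_host: str) -> str:
    """Return the /etc/hosts line that points site_host at the IP
    currently serving cdn_host (the CDN provider's test domain)."""
    ip = socket.gethostbyname(cdn_host)  # resolve the CDN edge to an IPv4 address
    return f"{ip} {site_host}"

# Hypothetical names, for illustration only:
line = hosts_override_line("www.example.com", "localhost")
print(line)  # append this line to /etc/hosts
```

Remember to remove the override once testing is done, or your machine will keep hitting the old IP after the CDN changes its edge addresses.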

This works well for testing most of the functionality of your web site, but it doesn't work when you want to test geolocation-specific features such as displaying the currency based on the user's country of origin. For this, you can use a nifty feature from the amazing free service WebPageTest.

On the main page of WebPageTest, you can specify the test location from the dropdown. It contains a generous list of locations across the globe. To fake your DNS settings and point your site's hostname at the CDN provider's domain, you can specify something like this in the Script tab:

setDNSName example.cdnprovider.com
navigate
This will effectively associate the page you want to test with the CDN provider-specified URL, so you will hit the CDN first from the location you chose.

Building Better Business Cases for Digital Initiatives

It’s hard to drive digital initiatives and business transformation if you can’t create the business case. Stakeholders want to know what their investment is supposed to get them.

One of the simplest ways to think about business cases is to think in terms of stakeholders, benefits, KPIs, costs, and risks over time frames.
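That frame can be jotted down as a simple data structure; the field names and sample values below are my own shorthand, not from the book:

```python
from dataclasses import dataclass

@dataclass
class BusinessCase:
    """One business case per initiative and time frame:
    who benefits, what it costs, and how success is measured."""
    stakeholders: list[str]
    benefits: list[str]
    kpis: list[str]
    costs: float
    risks: list[str]
    time_frame: str

# Hypothetical example values, purely for illustration:
case = BusinessCase(
    stakeholders=["CFO", "Head of Digital"],
    benefits=["higher online conversion"],
    kpis=["engagement-to-sales conversion"],
    costs=250_000.0,
    risks=["low adoption"],
    time_frame="FY1",
)
```

Building one such record per initiative and time frame makes the benefits, costs and risks explicit enough to compare against each other.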

While that’s the basic frame, there’s a bit of art and science when it comes to building effective business cases, especially when it involves transformational change.

Lucky for us, in the book, Leading Digital: Turning Technology into Business Transformation, George Westerman, Didier Bonnet, and Andrew McAfee, share some of their lessons learned in building better business cases for digital initiatives.

What I like about their guidance is that it matches my experience.

Link Operational Changes to Tangible Business Benefits

The more you can link your roadmap to benefits that people care about and can measure, the better off you are.

Via Leading Digital:

“You need initiative-based business cases that establish a clear link from the operational changes in your roadmap to tangible business benefits.  You will need to involve employees on the front lines to help validate how operational changes will contribute to strategic goals.”

Work Out the Costs, the Benefits, and the Timing of Return

On a good note, the same building blocks that apply to any business case, apply to digital initiatives.

Via Leading Digital:

“The basic building blocks of a business case for digital initiatives are the same as for any business case.  Your team needs to work out the costs, the benefits, and the timing of the return.  But digital transformation is still uncharted territory.  The cost side of the equation is easier, but benefits can be difficult to quantify, even when, intuitively, they seem crystal clear.”

Start with What You Know

Building a business case is an art and a science.   To avoid getting lost in analysis paralysis, start with what you know.

Via Leading Digital:

“Building a business case for digital initiatives is both an art and a science.  With so many unknowns, you'll need to take a pragmatic approach to investments in light of what you know and what you don't know.

Start with what you know, where you have most of the information you need to support a robust cost-benefit analysis.  A few lessons learned from our Digital Masters can be useful.”

Don’t Build Your Business Case as a Series of Technology Investments

If you only consider the technology part of the story, you’ll miss the bigger picture.  Digital initiatives involve organizational change management as well as process change.  A digital initiative is really a change in terms of people, process, and technology, and adoption is a big deal.

Via Leading Digital:

“Don't build your business case as a series of technology investments.  You will miss a big part of the costs.  Cost the adoption efforts--digital skill building, organizational change, communication, and training--as well as the deployment of the technology.  You won't realize the full benefits--or possibly any benefits--without them.”

Frame the Benefits in Terms of Business Outcomes

If you don’t work backwards from the end-in-mind, you might not get there.  You need clarity on the business outcomes so that you can chunk up the right path to get there, while flowing continuous value along the way.

Via Leading Digital:

“Frame the benefits in terms of the business outcomes you want to reach.  These outcomes can be the achievement of goals or the fixing of problems--that is, outcomes that drive more customer value, higher revenue, or a better cost position.  Then define the tangible business impact and work backward into the levers and metrics that will indicate what 'good' looks like.  For instance, if one of your investments is supposed to increase digital customer engagement, your outcome might be increasing engagement-to-sales conversion.  Then work back into the main metrics that drive this outcome, for example, visits, likes, inquiries, ratings, reorders, and the like.

When the business impact of an initiative is not totally clear, look at companies that have already made similar investments.  Your technology vendors can also be a rich, if somewhat biased, source of business cases for some digital investments.”

Run Small Pilots, Evaluate Results, and Refine Your Approach

To reduce risk, start with pilots to live and learn.   This will help you make informed decisions as part of your business case development.

Via Leading Digital:

“But, whatever you do, some digital investment cases will be trickier to justify, be they investments in emerging technologies or cutting-edge practices.  For example, what is the value of gamifying your brand's social communities?  For these types of investment opportunities, experiment with a test-and-learn approach.  State your measures of success, run small pilots, evaluate results, and refine your approach.  Several useful tools and methods exist, such as hypothesis-driven experiments with control groups, or A/B testing.  The successes (and failures) of small experiments can then become the benefits rationale to invest at greater scale.  Whatever the method, use an analytical approach; the quality of your estimated return depends on it.

Translating your vision into strategic goals and building an actionable roadmap is the first step in focusing your investment.  It will galvanize the organization into action.  But if you needed to be an architect to develop your vision, you need to be a plumber to develop your roadmap.  Be prepared to get your hands dirty.”

While practice makes perfect, business cases aren’t about perfect.  Their job is to help you get the right investment from stakeholders so you can work on the right things, at the right time, to make the right impact.

You Might Also Like

Cloud Changes the Game from Deployment to Adoption

How Digital is Changing Physical Experiences

McKinsey on Unleashing the Value of Big Data Analytics

Categories: Architecture, Programming

Using an SSD Cache in Front of EBS Boosted Throughput by 50%, for Free

Using EBS has lots of advantages--reliability, snapshotting, resizing--but overcoming the performance problems by using Provisioned IOPS is expensive. 

Swrve, an integrated marketing and A/B testing and optimization platform for mobile apps, did something clever. They are using the c3.xlarge EC2 instances, that have two 40GB SSD devices per instance, as a cache.

They found, through testing, that RAID-0 striping (a 4-way stripe) along with enhanceio effectively increased throughput by over 50%, for free, with no filesystem corruption problems.

How is it free? "We were planning on upgrading to the C3 class of instance anyway, and sticking with EBS as the backing store. Once you’re using an instance which has SSD ephemeral storage, there are no additional fees to use that hardware."

For great analysis, lots of juicy details, graphs, and configuration commands, please take a look at How we increased our EC2 event throughput by 50%, for free.

Categories: Architecture

Podcast with Cesar Abeid Posted

Cesar Abeid interviewed me, Project Management for You with Johanna Rothman. We talked about my tools for project management, whether you are managing a project for yourself or managing projects for others.

We talked about how to use timeboxes in the large and small, project charters, influence, servant leadership, a whole ton of topics.

I hope you listen. Also, check out Cesar’s kickstarter campaign, Project Management for You.

Categories: Project Management

Connecting the Dots Between Technical Performance and Earned Value Management

Herding Cats - Glen Alleman - Wed, 10/15/2014 - 15:36

We gave a recent College of Performance Management webinar on using technical progress to inform Earned Value. Here are the annotated charts.

Categories: Project Management

Docker and Microsoft: Integrating Docker with Windows Server and Microsoft Azure

ScottGu's Blog - Scott Guthrie - Wed, 10/15/2014 - 14:30

I’m excited to announce today that Microsoft is partnering with Docker, Inc to enable great container-based development experiences on Linux, Windows Server and Microsoft Azure.

Docker is an open platform that enables developers and administrators to build, ship, and run distributed applications. Consisting of Docker Engine, a lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.

Earlier this year, Microsoft released support for Docker containers with Linux on Azure.  This support integrates with the Azure VM agent extensibility model and Azure command-line tools, and makes it easy to deploy the latest and greatest Docker Engine in Azure VMs and then deploy Docker based images within them.

Docker Support for Windows Server + Docker Hub integration with Microsoft Azure

Today, I’m excited to announce that we are working with Docker, Inc to extend our support for Docker much further.  Specifically, I’m excited to announce that:

1) Microsoft and Docker are integrating the open-source Docker Engine with the next release of Windows Server.  This release of Windows Server will include new container isolation technology, and support running both .NET and other application types (Node.js, Java, C++, etc) within these containers.  Developers and organizations will be able to use Docker to create distributed, container-based applications for Windows Server that leverage the Docker ecosystem of users, applications and tools.  It will also enable a new class of distributed applications built with Docker that use Linux and Windows Server images together.


2) We will support the Docker client natively on Windows.  Developers and administrators running Windows will be able to use the same standard Docker client and interface to deploy and manage Docker based solutions with both Linux and Windows Server environments.



3) Docker for Windows Server container images will be available in the Docker Hub alongside the Docker for Linux container images available today.  This will enable developers and administrators to easily share and automate application workflows using both Windows Server and Linux Docker images.

4) We will integrate Docker Hub with the Microsoft Azure Gallery and Azure Management Portal.  This will make it trivially easy to deploy and run both Linux and Windows Server based Docker images in Microsoft Azure.

5) Microsoft is contributing code to Docker’s Open Orchestration APIs.  These APIs provide a portable way to create multi-container Docker applications that can be deployed into any datacenter or cloud provider environment. This support will allow a developer or administrator using the Docker command line client to launch either Linux or Windows Server based Docker applications directly into Microsoft Azure from his or her development machine.

Exciting Opportunities Ahead

At Microsoft we continue to be inspired by technologies that can dramatically improve how quickly teams can bring new solutions to market. The partnership we are announcing with Docker today will enable developers and administrators to use the best container tools available for both Linux and Windows Server based applications, and to run all of these solutions within Microsoft Azure.  We are looking forward to seeing the great applications you build with them.

You can learn more about today’s announcements here and here.

Hope this helps,

Scott

Categories: Architecture, Programming

Agile Risk Management: Dealing With Complexity

Does the raft have a peak load?

Software development is inherently complex and therefore risky. Historically, many techniques have been leveraged to identify and manage risk. As noted in Agile and Risk Management, much of the risk in projects can be put at the feet of variability, and complexity is one of the factors that drives variability. Spikes, prioritization and fast feedback are important techniques for recognizing and reducing the impact of complexity.

  1. Spikes provide a tool to develop an understanding of an unknown within a time box. Spikes are a simple tool used to answer a question, gather information, perform a specific piece of basic research, address project risks or break a large story down. Spikes generate knowledge, which both reduces the complexity intrinsic in unknowns and provides information that can be used to simplify the problem being studied in the spike.
  2. Prioritization is a tool used to order the project or release backlog. Prioritization is also a powerful tool for reducing the impact of variability. It is generally easier to adapt to a negative surprise earlier in the lifecycle. Teams can allocate part of their capacity to the most difficult stories early in the project, using complexity as a criterion for prioritization.
  3. Fast feedback is the single most effective means of reducing complexity (short of avoidance). Core to the DNA of Agile and lean frameworks is the “inspect and adapt” cycle. Deming’s Plan-Do-Check-Act cycle is one representation of “inspect and adapt,” as are retrospectives, pair programming and peer reviews. Short iterative cycles provide a platform to effectively apply “inspect and adapt” to reduce complexity based on feedback. When teams encounter complexity, they have a wide range of tools for sharing and seeking feedback, ranging from daily stand-up meetings to demonstrations and retrospectives.
    Two notes on feedback:

    1. While powerful, feedback only works if it is heard.
    2. The more complex (or important) a piece of work is, the shorter the feedback cycle should be.
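As a rough sketch of item 2 above, using complexity as a prioritization criterion can be as simple as a sort. The backlog items and the 1-5 scoring scheme here are invented for illustration:

```python
# Hypothetical backlog items: (story, business_value, complexity), each scored 1-5.
backlog = [
    ("export report", 3, 1),
    ("payment integration", 5, 5),
    ("search filters", 4, 3),
]

# Tackle the most complex, highest-value stories first so that surprises
# surface early in the release, while there is still capacity to adapt.
ordered = sorted(backlog, key=lambda s: (s[2], s[1]), reverse=True)
first_story = ordered[0][0]
print([s[0] for s in ordered])
```

Real teams would weigh more factors than two scores, but the principle is the same: confront the riskiest work while the feedback it generates can still change the plan.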

Spikes, prioritization and feedback are common Agile and lean techniques. The fact that they are common has led some Agile practitioners to feel that frameworks like Scrum have negated the need to deal specifically with risks at the team level. These are powerful tools for identifying and reducing complexity and the variability complexity generates; however, they need to be combined with other tools and techniques to manage the risk that is part of all projects and releases.

Categories: Process Management

24hrs in F#

Phil Trelford's Array - Tue, 10/14/2014 - 18:55

The easiest place to see what’s going on in the F# community is to follow the #fsharp hash tag on Twitter. The last 24hrs have been as busy as ever, to the point where it can be hard to keep up these days.

Here are some of the highlights:


Build Stuff conference to feature 8 F# speakers:

@c4fsharp + @silverSpoon & the @theburningmonk too :)

— headintheclouds (@ptrelford) October 13, 2014

and workshops including:

FP Days programme is now live, and features key notes from Don Syme & Christophe Grand, and presentations from:

.@dsyme and @cgrand are Keynoting #fpdays this year!! #fsharp #clojure

— FP Days (@FPDays) October 14, 2014

New Madrid F# meetup group announced:

#fsharp I just created a Madrid F# Meetup Group. Join and let's organize our first meetup! Pls RT

— Alfonso Garcia-Caro (@alfonsogcnunez) October 14, 2014

F# MVP Rodrigo Vidal announces DNA Rio de Janeiro:

Encontro do #DNA Rio de Janeiro dia 21/10/2014 na @caelum #csharp #fsharp @netarchitects

— Rodrigo Vidal (@rodrigovidal) October 13, 2014

Try out the new features in FunScript at MF#K Copenhagen:

MF#K - Let's make some fun stuff with F#unScript #MFK #fsharp

— Ramón Soto Mathiesen (@genTauro42) October 14, 2014

Mathias Brandewinder will be presenting some of his work on F# & Azure in the Bay Area

Pretty excited to show some F# & #Azure stuff I have been working on at BayAzure tomorrow Tues Oct 14: #fsharp

— Mathias Brandewinder (@brandewinder) October 13, 2014

Let’s get hands on session in Portland announced:

#Portland #fsharp meetup next week: Classics Mashup hands-on. #ComeJoinUs!

— James D'Angelo (@jjvdangelo) October 14, 2014

Riccardo Terrell will be presenting Why FP? in Washington DC

#fsharp @dcfsharp I will present at the DC 10.14.2014 6:30 pm @DCFsharp Riccardo Terrell

— DC F# (@DCFSharp) October 13, 2014

Sessions featuring FsCheck, Paket & SpecFlow scheduled in Vinnitsa (Ukraine)

We'll be talking about #FSharp with @skalinets RT @dou_calendar: Vinnitsa Ciklum .NET Saturday, October 18, Vinnitsa

— Akim Boyko (@AkimBoyko) October 14, 2014


Get Doctor Who stats with F# Data HTML Type Provider:

Get Doctor Who stats using #fsharp

— FSharp.Data (@FSharpData) October 13, 2014

Major update to FSharp.Data.SqlClient:

FSharp.Data.SqlClient major update: SqlEnumProvider merged in. #fsharp #typeprovider

— Dmitry Morozov (@mitekm) October 13, 2014

ProjectScaffold now uses Paket:

Excited to announce ProjectScaffold now uses @PaketManager by default! #dotNET #fsharp

— Paulmichael Blasucci (@pblasucci) October 14, 2014

Microsoft Research presentation on new DBpedia Type Provider:

Ultimate powerful data access with ``F# Type Provider Combinators`` video demo by Andrew Stevenson #fsharp #DBpedia

— Functional Software (@Functional_S) October 13, 2014


More F#, Xamarin.Forms and MVVM by Brad Pillow

More #fsharp, #xamarinForms and MVVM, blog:, code:

— Brad Pillow (@BradPillow) October 14, 2014

Cross-platform MSBuild API by Robin Neatherway

New blog post: Cross-platform MSBuild API usage in #fsharp at

— Robin Neatherway (@rneatherway) October 14, 2014

Hacking the Dream Cheeky Thunder Missile Launcher by Jamie Dixon:

Want more?

Check out Sergey Tihon’s F# Weekly!

Categories: Programming

Project Risk Management, PMBOK, DoD PMBOK and Edmund Conrow’s Book

Herding Cats - Glen Alleman - Tue, 10/14/2014 - 17:08

In a recent post, “Who Is Ed Conrow?”, a responder asked about the differences between the PMBOK® Risk approach and the DoD PMBOK risk approach, as well as for a summary of the book Effective Risk Management: Some Keys to Success by Edmund Conrow. Ed worked the risk management processes for a NASA proposal I was on. I was the IMP/IMS lead, so integrating risk with the Integrated Master Plan / Integrated Master Schedule in the manner he prescribed was a life-changing experience. I was naive before, but no longer after that proposal won ~$7B for the client.


Let me start with a few positioning statements:

  1. Project risk management is a topic with varying levels of understanding, interest, and applicability. The PMI PMBOK® provides a “starter kit” view of project risk. It covers the areas of risk management but does not provide guidance on actually “doing” risk management. This often results in the false sense that “if we’re following PMBOK® then we’re OK.”
  2. The separation of technical risk from programmatic risk is not clearly discussed in PMBOK®. While not a “show stopper” issue for some projects, programmatic risk management is critical for enterprise class projects. By enterprise I mean ERP, large product development, large construction, and aerospace and defense class programs. In fact, DI-MGMT-81861 mandates programmatic risk management processes for procurements greater than $20M. $20M sounds like a very large sum of money for the typical internal IT development project; it hardly keeps the lights on in an aerospace and defense program.
  3. The concepts around schedule variances are misunderstood and misapplied in almost every discussion of risk management in the popular literature. The common red herring is the claim that Critical Path and PERT are ineffective. This is simply a false statement in domains outside IT or small low-risk projects. Critical Path, PERT and Monte Carlo Simulation are mandated by government procurement guidelines. Not that these guides make them “correct.” What makes them correct is that programs measurably benefit from their application. This discipline is called Program Planning and Controls (PP&C) and is a profession in aerospace, defense, and large construction. No amount of claiming the processes don’t work removes the documented fact that they do, when properly applied. Anyone wishing to learn more about these approaches to programmatic risk management need only look to the NASA, Department of Energy, and Department of Defense Risk Management communities.

With all my biases out of the way, let’s look at the DoD PMBOK®

  1. First, the DoD PMBOK® is free. The original download location appears to be gone, but the document can still be found online. This is a US Department of Defense approved document. It provides supplemental guidance to the PMI PMBOK®, but in fact replaces a large portion of PMI PMBOK®.
  2. Chapter 11 of DoD PMBOK® is Risk. It starts with the Risk Management Process areas in Figure 11-1, page 125. (I will not put them here, because you should download the document and turn to that page.) The diagram contains six (6) process areas, the same number as the PMI PMBOK®. But what’s missing from PMI PMBOK® and present in DoD PMBOK® is how these processes interact to provide a framework for Risk Management.
  3.  There are several missing critical concepts in PMI PMBOK® that are provided in DoD PMBOK®.
    • The Risk Management structure in Figure 11-1 is the first. Without connections between the process areas in some form other than “linear,” the management of technical and programmatic risk turns into a checklist. This is the approach of the PMI PMBOK®: provide guidance on the process areas and leave it to the reader to develop a process around them. This is also the approach of CMMI. The area is too important to leave the process development to the reader.
    • The concept of the Probability and Impact matrix is fatally flawed in the PMI PMBOK®. It turns out you cannot multiply the probability of occurrence by the impact of that occurrence. These “numbers” are not numbers in the sense of integer or real values; they are probability distributions themselves. Multiplication is not an operation that can be applied to a probability distribution. Combining distributions requires integration: the multiplication operator × must be replaced by the convolution operator ⊗.
    • The second fatal flaw of the PMI PMBOK® approach to probability of occurrence and impact is that these “numbers” are uncalibrated cardinal values. That is, they have no actual meaning, since their “units of measure” are not calibrated.
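The point can be made concrete with a small simulation. In this sketch, every distribution and number is invented for illustration: the probability of occurrence and the impact are each treated as distributions and combined by sampling, a numerical stand-in for convolution, so the resulting risk exposure is itself a distribution rather than the single number that point-estimate multiplication produces.

```python
import random

rng = random.Random(7)

# Point-estimate approach: one "probability x impact" number.
p_point, impact_point = 0.4, 100_000    # illustrative values
point_score = p_point * impact_point     # 40,000, a single number

# Distribution approach: both inputs are uncertain, so the risk exposure is
# itself a distribution. Sampling here is a numerical stand-in for convolving
# the two input distributions.
exposures = sorted(
    rng.betavariate(4, 6) * rng.triangular(50_000, 300_000, 80_000)
    for _ in range(20_000)
)
# Beta(4, 6) has mean 0.4; the triangular impact is right-skewed.

mean_exposure = sum(exposures) / len(exposures)
p90_exposure = exposures[int(0.9 * len(exposures))]
print(f"point score:   {point_score:,.0f}")
print(f"mean exposure: {mean_exposure:,.0f}  P90 exposure: {p90_exposure:,.0f}")
```

With a right-skewed impact distribution, even the mean exposure exceeds the point score, and the tail (P90) is far larger still: the single multiplied number hides exactly the information risk management needs.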

Page 124 of DoD PMBOK® summarizes the principles of Risk Management as developed in two seminal sources.

  1. Effective Risk Management: Some Keys to Success, Edmund Conrow, American Institute of Aeronautics and Astronautics, 2000.
  2. Risk Management Guide for DoD Acquisition, Sixth Edition, (Version 1.0), August 2006, US Department of Defense, which is part of the Defense Acquisition Guide, §11.4), which is published within the Office of the Under Secretary of Defense, Acquisition, Technology and Logistics  (OUSD(AT&L)),  Systems and Software Engineering / Enterprise Development.

Now all these pedantic references are here for a purpose. This is how people who manage risk for a living manage risk. By risk, I mean technical risk that results in loss of mission or loss of life, and programmatic risk that results in the loss of billions of taxpayer dollars. They are serious enough about risk management not to let the individual project or program manager interpret the vague notions in the PMI PMBOK®. These may appear to be harsh words, but the road to the management of enterprise-class projects is littered with disasters. You can read every day of IT projects that are 100% over budget and 100% behind schedule. From private firms to the US Government, the trail of destruction is front-page news.

A Slight Diversion – Why are Enterprise Projects So Risky?

There are many reasons for failure – too many to mention – but one is the inability to identify and mitigate risk. The words “identify” and “mitigate” sound simple. They are listed in both the PMI PMBOK® and the DoD PMBOK®. However, here is where the problem starts:

  1. The process of separating risk from issue.
  2. Classifying the statistical nature of the risk.
  3. Managing the risk process independently from project management and technical development.
  4. Incorporating the technical risk mitigation processes into the schedule.
  5. Modeling the impact of technical risk on programmatic risk.
  6. Modeling the programmatic risk independent from the technical risk.

Using Conrow as a Guide

Here is one problem. When you use the complete phrase “Project Risk Management” with Google, you get ~642,000 hits. There are so many books, academic papers, and commercial articles on Risk Management, where do we start? Ed Conrow’s book is probably not the starting point for learning how to practice risk management on your project. However, it might be the ending point. If you are in the software development business, a good starting point is – Managing Risk: Methods for Software Systems Development, Elaine M. Hall, Addison Wesley, 1998. Another broader approach is Continuous Risk Management Guidebook, Software Engineering Institute, August 1996. While these two sources focus on software, they provide the foundation for the discussion of risk management as a discipline.

There are public sources as well:

  1. Start with the NASA Risk Management page,
  2. Software Engineering Institute’s Risk Management page,
  3. A starting point for other resources from NASA,
  4. Department of Energy’s Risk Management Handbook, (which uses the DoD Risk Process Map described above)

However, care needs to be taken once you go outside the government boundaries. There are many voices plying the waters of “risk management,” as well as other voices with “axes to grind” regarding project management methods and risk management processes. The result is often a confusing message full of anecdotes, analogies, and alternative approaches to the topic of Risk Management.

Conrow in his Full Glory

Before starting into the survey of the Conrow book, let me state a few observations:

  1. This book is tough going. I mean really tough going. Tough in the way a graduate statistical mechanics book is tough going, or a graduate micro-economics of managerial finance book is tough going. It is “professional grade” stuff. By “professional grade” I mean written by professionals to be used by professionals.
  2. Not every problem has the same solution need. Conrow’s solutions may not be appropriate for a software development project with 4 developers and a customer in the same room. But for projects that have multiple teams, locations, and stakeholders, some type of risk management is needed.
  3. The book is difficult to read for another reason. It assumes you have “a reasonable understanding of the issues” around risk management. This means it is not a tutorial-style or “risk management for dummies” book. It is a technical reference book. There is little in the way of introductory material, or bringing the reader up to speed before presenting the material.

From the introduction:

The purpose of this book is two-fold: first, to provide key lessons learned that I have documented from performing risk management on a wide variety of programs, and second, to assist you, the reader, in developing and implementing an effective risk management process on your program.

A couple of things here. One is the practical experience in risk management. Many in the risk management “talking” community have limited experience with risk management in the way Ed does it. I first met Ed on a proposal for an $8B manned spaceflight program. He was responsible for the risk strategy and for conveying that strategy in the proposal. The proposal resulted in an award, and our firm now provides Program Planning and Controls for a major subsystem of the program. In this role, programmatic and technical risk management is part of the Statement of Work flowed down from the prime contractor.

Second, Ed is a technical advisor to the US Arms Control and Disarmament Agency, as well as a consultant to industry and government on risk management. These “resume” items are meant to show that the practice of risk management is just that – a practice. Speaking about risk management and doing risk management on high-risk programs are two different things.

One of Ed’s principal contributions to the discipline was the development of a micro-economic framework of risk management in which design feasibility (or technical performance) is traded against cost and schedule.

In the end, this is a reference text for the process of managing the risk of projects, written by a highly respected practitioner.

What does the Conrow Book have to offer over the Standard approach?

Ed’s book contains the current “best practices” for managing technical and programmatic risk. These practices are used on high risk, high value programs. The guidelines in Ed’s book are generally applicable to many other classes of projects as well. But there are several critical elements that differentiate this approach from the pedestrian approach to risk management.

  1. The introduction of the “ordinal risk scale.” This approach is dramatically different from the PMI PMBOK® description of risk, in which the probability of occurrence is multiplied by the consequences to produce a “risk rating.” Neither the probability of occurrence nor the consequences are calibrated in any way. The result is a meaningless number that may satisfy the C-level that “risk management” is being done on the program. By ordinal risk scales is meant a classification of risk – say A, B, C, D, E, F – that is descriptive in nature, not just numbers. By the way, the use of letters is intentional. If numbers are used for ordinal risk ranks, there is a tendency to do arithmetic on them: compute the average risk rank, or multiply them by the consequences. Letters remove this temptation and prevent the first failure of the common risk management approach – producing an overall risk measure.

The ordinal approach works like this. Ed describes some classes of risk scales which include: maturity, sufficiency, complexity, uncertainty, estimative probability, and probability based scales.

A maturity risk scale would look like this (levels lettered per the ordinal convention above):

  Scale Level | Definition
  A | Basic principles observed
  B | Concept design analyzed for performance
  C | Breadboard or brassboard validation in relevant environment
  D | Prototype passes performance tests
  E | Item deployed and operational

 The critical concept is to relate the risk ordinal value to an objective measure. For a maturity risk assessment, some “calibration” of what it means to have the “basic principles observed” must be developed. This approach can be applied to the other classes – sufficiency, complexity, uncertainty, estimative probability and probability based scales.
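A lettered ordinal scale of this kind can be sketched in code. The level definitions follow the maturity scale above; the implementation details are my own illustration. The point is that ranks can be compared for order but not averaged or multiplied:

```python
from enum import Enum

class Maturity(Enum):
    """Lettered ordinal maturity scale: comparable, but no arithmetic."""
    A = "Basic principles observed"
    B = "Concept design analyzed for performance"
    C = "Breadboard or brassboard validation in relevant environment"
    D = "Prototype passes performance tests"
    E = "Item deployed and operational"

    def __lt__(self, other):
        # Ordering comes from declaration position, not from a number that
        # someone could be tempted to average or multiply.
        members = list(type(self))
        return members.index(self) < members.index(other)

risk = Maturity.B
assert risk < Maturity.D          # ranking works ...
try:
    risk * 3                      # ... but arithmetic on ranks is rejected
except TypeError:
    print("arithmetic on ordinal ranks is not defined")
```

Because the ranks carry no numeric value, attempting to compute an “overall risk score” from them fails loudly instead of producing a meaningless number.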

It’s the estimative probability scale that is important to the cost and schedule people in our PP&C practice. An estimative probability scale relates a word to a probability value: “high” to 80%, for example. An ordinal estimative probability scale using point estimates derived from a statistical analysis of survey data might look like this:


[Table: estimative probability scale pairing each scale level with a median probability value; only the lowest level, “Almost no chance,” survives in this copy.]

Calibrating these risk scales is the primary analysis task of building a risk management system. What does it mean to have a “medium” risk, in the specific problem domain?

  2. The second item is the formal use of a risk management process. Simply listing the risk process areas – as is done in the PMBOK® – is not only poor project management practice, it is poor risk management practice. The processes to be used are shown on page 135 of the book, and their application is described in detail. No process area is optional. All are needed, and all need to be performed in the right relationship to each other.

These two concepts are the ones that changed the way I perform risk management on the programs I’m involved with and how we advise our clients. They are paradigm-changing concepts. No more simple-minded arithmetic with probabilities and consequences. No more uncalibrated risk scales. No more tolerating those who claim PERT, Critical Path, and Monte Carlo are unproven, obsolete, or “wrong headed” approaches.

Get Ed’s book. It’ll cost far more than the “paperback” approach to risk, but for those tasked with “managing risk,” it is the starting point.


Categories: Project Management


The Business Analyst and The Agile Team

Software Requirements Blog - - Tue, 10/14/2014 - 17:00
The Agile Extension to the BABOK® Guide is well worth the read, and I’d encourage any business analyst heading into an Agile project to take the time to give it serious consideration. The section entitled “What makes a BA Successful on an Agile Team” starts off with this paragraph: The very nature of agile approaches […]
Categories: Requirements

Practices, Not Platitudes

NOOP.NL - Jurgen Appelo - Tue, 10/14/2014 - 15:27
Practices, Not Platitudes

I recently took part in a conversation about compensation of employees. Some readers offered criticism of the Merit Money practice, described in my new Workout book, claiming that Merit Money is just another way to incentivize people. The feedback I received was, “Money doesn’t motivate people”, followed by, “Don’t incentivize people” and “Just pay people well”.

Let me explain why I think this advice is useless.

The post Practices, Not Platitudes appeared first on NOOP.NL.

Categories: Project Management

AngularJS Training Week

Xebia Blog - Tue, 10/14/2014 - 07:00

Just a few more days and it's the AngularJS Training Week at Xebia in Hilversum (The Netherlands): 4 days full of AngularJS content, from 17 to 20 October 2014. Over these days we will cover the AngularJS basics, AngularJS advanced topics, Tooling & Scaffolding, and Testing with Jasmine, Karma and Protractor.

If you already have some experience or if you are only interested in one or two of the topics, then you can sign up for just the days that are of interest to you.

Visit for a full overview of the days and topics or sign up on the Xebia Training website using the links below.

5 Decision-Making Strategies that require no estimates

Software Development Today - Vasco Duarte - Tue, 10/14/2014 - 04:00

One of the questions that I and other #NoEstimates proponents hear quite often is: How can we make decisions on what projects we should do next, without considering the estimated time it takes to deliver a set of functionality?

Although this is a valid question, I know there are many alternatives to the assumptions implicit in this question. These alternatives - which I cover in this post - have the side benefit of helping us focus on the most important work to achieve our business goals.

Below I list 5 different decision-making strategies (aka decision-making models) that can be applied to our software projects without requiring a long-winded, and error-prone, estimation process up front.

What do you mean by decision-making strategy?

A decision-making strategy is a model or approach that helps you make allocation decisions (where to put more effort, or spend more time and/or money). However, I would add one more characteristic: a decision-making strategy that helps you choose which software project to start must help you achieve the business goals you define for your business. More specifically, a decision-making strategy is an approach to making decisions that follows your existing business strategy.

Some possible goals for business strategies might be:

  • Growth: growing the number of customers or users, growing revenues, growing the number of markets served, etc.
  • Market segment focus/entry: entering a new market or increasing your market share in an existing market segment.
  • Profitability: improving or maintaining profitability.
  • Diversification: creating new revenue streams, entering new markets, adding products to the portfolio, etc.

Other types of business goals are possible, and it is also possible to mix several goals in one business strategy.

Different decision-making strategies should be considered for different business goals. The 5 different decision-making strategies listed below include examples of business goals they could help you achieve. But before going further, we must consider one key aspect of decision making: Risk Management.

The two questions that I will consider when defining a decision-making strategy are:

  1. How well does this decision proposal help us reach our business goals?
  2. Does the risk profile resulting from this decision fit our acceptable risk profile?

Are you taking into account the risks inherent in the decisions made with those frameworks?

All decisions have inherent risks, and we must consider risks before elaborating on the different possible decision-making strategies. If you decide to invest in a new and shiny technology for your product, how will that affect your risk profile?

A different risk profile requires different decisions

Each decision we make has an impact on the following risk dimensions:

  • Failing to meet the market needs (the risk of what).
  • Increasing your technical risks (the risk of how).
  • Contracting or committing to work which you are not able to staff or assign the necessary skills (the risk of who).
  • Deviating from the business goals and strategy of your organization (the risk of why).

The categorization above is not the only one possible. However, it is very practical, and it maps well to decisions about which projects to invest in.

There may be good reasons to accept an increase in your risk exposure in one or more of these categories. This is true as long as the increased exposure does not go beyond your acceptable risk profile. For example, you may accept a larger exposure to technical risks (the risk of how) if you believe that the project has a very low risk of missing market needs (the risk of what).

An example would be migrating an existing product to a new technology: you understand the market (the product has been meeting market needs), but you will take a risk with the technology with the aim to meet some other business need.
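The second question, whether a decision's risk profile fits the acceptable profile, can be sketched as a simple check across the four dimensions above. The dimension names follow the list in the text; the numeric exposures and thresholds are invented for illustration.

```python
# Toy sketch: dimension names follow the four risk categories above (what,
# how, who, why); the numeric exposures and thresholds are invented.
ACCEPTABLE_PROFILE = {"what": 0.3, "how": 0.6, "who": 0.4, "why": 0.2}

def fits_profile(decision_risks, acceptable=ACCEPTABLE_PROFILE):
    """True if every risk dimension stays within its acceptable threshold."""
    return all(decision_risks[dim] <= limit for dim, limit in acceptable.items())

# Migrating a proven product to a new technology: low market risk ("what"),
# elevated but acceptable technical risk ("how").
migration = {"what": 0.1, "how": 0.55, "who": 0.3, "why": 0.1}
print(fits_profile(migration))   # -> True
```

The useful part is not the arithmetic but the forcing function: every decision is checked against all four dimensions, so a trade such as “more technical risk for less market risk” is made explicitly rather than by omission.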

Aligning decisions with business goals: decision-making strategies

When making decisions regarding what project or work to undertake, we must consider the implications of that work for our business and strategic goals; therefore, we must choose the right decision-making strategy for our company at any given time.

Decision-making Strategy 1: Do the most important strategic work first

If you are starting to implement a new strategy, you should allocate enough teams and resources to the work that helps you validate and fine-tune the selected strategy. This might take the form of prioritizing work that helps you enter a new segment, find a more valuable niche in your current segment, etc. The focus of this decision-making approach is validating the new strategy. Note that the goal is not “implement the new strategy” but rather “validate the new strategy”. The difference is fundamental: when trying to validate a strategy, you will want to create short-term experiments designed to validate your decision, instead of planning and executing a large project from start to end. The best way to run your strategy validation work is to run these short-term experiments and re-prioritize your backlog of experiments based on the results of each one.

Decision-making Strategy 2: Do the highest technical risk work first

When you want to transition to a new architecture or adopt a new technology, you may want to start by doing the work that validates that technical decision. For example, if you are adopting a new technology to help you increase scalability of your platform, you can start by implementing the bottleneck functionality of your platform with the new technology. Then test if the gains in scalability are in line with your needs and/or expectations. Once you prove that the new technology fulfills your scalability needs, you should start to migrate all functionality to the new technology step by step in order of importance. This should be done using short-term implementation cycles that you can easily validate by releasing or testing the new implementation.

Decision-making Strategy 3: Do the easiest work first

Suppose you just expanded your team and want to make sure they get to know each other and learn to work together. This may be due to a strategic decision to start a new site in a new location. Selecting the easiest work first will give the new teams an opportunity to get to know each other, establish the processes they need to be effective, but still deliver concrete, valuable working software in a safe way.

Decision-making Strategy 4: Do the legal requirements first

In medical software there are regulations that must be met. Those regulations affect certain parts of the work/architecture. By delivering those parts first, you can start the legal certification for your product before the product is fully implemented, and later – if needed – certify the changes you still need to make to the original implementation. This allows you to significantly improve the time-to-market for your product. A medical organization that successfully adopted agile used this decision-making strategy to considerable business advantage, as they were able to start selling their product many months ahead of the scheduled release. They were able to go to market earlier because they successfully isolated and completed the work necessary to certify the key functionality of their product. Rather than trying to predict how long the whole project would take, they implemented the key legal requirements first, then started to collect feedback about the product from the market – gaining a significant advantage over their direct competitors.

Decision-making Strategy 5: Liability driven investment model

This approach is borrowed from a stock exchange investment strategy that aims to tackle a problem similar to what every bootstrapped business faces: what work should we do now, so that we can fund the business in the near future? In this approach we make decisions with the aim of generating the cash flows needed to fund future liabilities.
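The liability-driven idea can be sketched as a toy selection check: pick only work whose expected cash arrives in time, then verify that cumulative inflows cover each liability as it falls due. All work items, dates, and figures below are invented for illustration.

```python
# Toy sketch: all work items, dates, and figures below are invented.
liabilities = [(3, 40_000), (6, 90_000)]           # (month due, amount owed)
candidates = [
    ("maintenance contract", 2, 50_000),           # (name, payout month, cash in)
    ("new product feature", 8, 200_000),
    ("consulting gig", 5, 90_000),
]

def covers_liabilities(selected, liabilities):
    """True if cumulative cash received by each due date covers what is owed."""
    events = sorted(
        [(month, amount, "in") for _, month, amount in selected] +
        [(month, amount, "out") for month, amount in liabilities]
    )
    cash = owed = 0
    for _, amount, kind in events:
        if kind == "in":
            cash += amount
        else:
            owed += amount
            if cash < owed:
                return False
    return True

# Liability-driven selection: prefer work that pays out before the last due date.
selection = [c for c in candidates if c[1] <= 6]
print(covers_liabilities(selection, liabilities))   # -> True
```

Note how the model deliberately passes over the largest-value item (the new product feature) because its payout arrives after the liabilities fall due: the decision is driven by timing of cash flows, not by estimated total value.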

These are just 5 possible investment or decision-making strategies that can help you make project decisions, or even business decisions, without having to invest in estimation upfront.

None of these decision-making strategies guarantees success, but then again nothing does except hard work, perseverance and safe experiments!

In the upcoming workshops (Helsinki on Oct 23rd, Stockholm on Oct 30th) that Woody Zuill and I are hosting, we will discuss these and other decision-making strategies that you can start applying immediately. We will also discuss how these decision-making models apply to day-to-day decisions as much as to strategic decisions.

If you want to know more about what we will cover in our world-premiere #NoEstimates workshops don't hesitate to get in touch!

Your ideas about decision-making strategies that do not require estimation

You may have used other decision-making strategies that are not covered here. Please share your stories and experiences below so that we can start collecting ideas on how to make good decisions without investing time and money in a wasteful process like estimation.