
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Don't be an Accidental Project Manager

Herding Cats - Glen Alleman - Fri, 06/27/2014 - 18:44

A common problem in our development of the Program Management Office is getting so caught up in putting out fires - Covey's "addiction of the urgent" - that we lose the big-picture perspective. This note is about the big-picture view of the project management process as it pertains to our collection of projects. These are very rudimentary principles, but they are important to keep in mind.

5 Basic Principles

1. Be conscious of what you're doing, don’t be an accidental manager. Learn PM theory and practice. Realize you don't often have direct control. Focus on being a professional and the PM's mantra:

"I am a project professional. I work on projects. Projects are undertakings that are goal-oriented, complex, finite, and unique. They pass through a life cycle, which begins with project selection and ends with project termination."

2. Invest in front-end work; get it right the first time. We often leap before we look, over-focusing on results-oriented processes and on simple, often simple-minded, platitudes about project management and the technical processes, while ignoring basic steps. Trailblazers often achieve breakthroughs, but projects need forethought. Projects are complex, and the planning, structure, and time spent with stakeholders are required for success. Doing things right takes time and effort, but that time and effort is much cheaper than rework.

3. Anticipate the problems that will inevitably arise. Most problems are predictable. Well-known examples are:

  • Little direct control over staff, little staff commitment to the project.
  • Staff workers are not precisely what we want or need.
  • Functional managers have different goals, and these will suboptimize the project.
  • Variances to schedule and budget will occur, and customer needs will shift.
  • Project requirements will be misinterpreted.
  • Overplanning and overcontrol are as bad as underplanning and weak control.
  • There are hidden agendas, and these are probably more important than the stated one.

4. Go beneath surface illusions; dig deep to find the real situation. Don't accept things at face value. Don't treat the symptom; treat the root cause, and the symptoms will be corrected. Our customers usually understand their own needs, but further probing will bring out new needs. Robert Block [1] suggests a series of steps:

  • Identify all the players, in particular those who can impact project outcome.
  • Determine the goals of each player and organization, focusing on hidden goals.
  • Assess your own situation and start to define the problems.

5. Be as flexible as possible; don't get sucked into unnecessary rigidity and formality. Project management works against the second law of thermodynamics: we're trying to create order out of chaos. But in this effort:

  • More formal structure & bureaucracy doesn't necessarily reduce chaos.
  • We need the flexibility to bend but not break when dealing with surprises, especially with the intangibles of our information-technology projects.
  • The goal is to have both order and flexibility at the same time.
  • Heavy formality is appropriate on large-budget or low-risk projects with lots of communication expense and few surprises. Information-age projects have a low need for this because they deal more with information and intangibles, and have a high degree of uncertainty.

[1] The Politics of Projects, Robert Block, Yourdon Press, 1983.

Categories: Project Management


Herding Cats - Glen Alleman - Fri, 06/27/2014 - 16:11

When we hear about a process, a technique, or a tool, ask in what unit of measure are you assessing the beneficial outcome of applying those?

This idea started with P. W. Bridgman's principle that the meaning of any concept lies in its measurement or other test. It was put forth in the 1930s, when Bridgman made a famous, useful, and very operational statement, usually remembered as:

The scientific method is doing your damnedest, no holds barred. †

Developing software is not a scientific process, even though Computer Science is a university-level discipline in which probability and statistics are taught (see the IEEE/ACM Computer Science Education Curricula).

When we want to make choices about a future outcome, we can apply statistical thinking using the mathematics used in scientific discussions of cost, schedule, and performance (C, S, P), which are random variables.

These decisions are based on the probabilistic and statistical behavior of the underlying processes that create the alternatives for our decisions. Should we spend $X on a system that will return $Y value? Since both X and Y are random variables - they are in the future - our decision-making process needs to estimate the behaviour of these random variables and determine their impact on our outcomes.
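The $X-versus-$Y decision above can be sketched with a minimal Monte Carlo simulation. The triangular cost and value distributions below are invented for illustration, not drawn from any real project:

```python
import random

def simulate_decision(n=100_000, seed=7):
    """Treat cost X and value Y as random variables, so the decision
    metric (Y - X) has a distribution rather than a point value.
    The (low, high, most-likely) ranges are illustrative assumptions."""
    random.seed(seed)
    net = []
    for _ in range(n):
        x = random.triangular(80, 150, 100)   # cost: low, high, most likely
        y = random.triangular(90, 220, 140)   # value: low, high, most likely
        net.append(y - x)
    net.sort()
    p_loss = sum(1 for v in net if v < 0) / n
    return p_loss, net[n // 2]  # probability of losing money, median net value

p_loss, median_net = simulate_decision()
print(f"P(value < cost) = {p_loss:.2f}, median net value = {median_net:.1f}")
```

The point is that even a "good" investment carries a non-zero probability of loss, which only shows up when X and Y are modeled as distributions.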

Probability and Statistics

When we hear that there are alternatives to making decisions about the future - decisions impacted by cost, schedule, and technical performance - without estimating the impact of those decisions, we need to ask: what are those alternatives, what are their units of measure, and where can we find them described?

For those interested, there is further reading on the topic of Decision Making in the Presence of Uncertainty.

† Reflections of a Physicist, P. W. Bridgman, pp. 535. The passage reads, "The scientific method, as far as it is a method, is nothing more than doing one's damnedest with one's mind, no holds barred."


Coming out of the closet - the life and adventure of a traditional project manager turned Agilist

Software Development Today - Vasco Duarte - Fri, 06/27/2014 - 04:00

I’m coming out of the closet today. No, not that closet. Another closet, the taboo closet in the Agile community. Yes, I was (and to a point still am) a control freak, traditional, command-and-control project manager. Yes, that’s right, you read it correctly. Here’s why this is important: in 2003, when I first started to consider Agile in any shape or form, I was a strong believer of the Church of Order. I did all the rites of passage, I did my Gantt charts, my PERT charts, my EVM charts and, of course, my certification.

I was a certified Project Manager by IPMA, the European cousin of the PMI.

I too was a control freak, order junkie, command and control project manager. And I've been clean for 9 years and 154 days.

Why did I turn to Agile? No, it wasn’t because I was a failed project manager, just ask anyone who worked with me then. It was the opposite reason. I was a very successful project manager, and that success made me believe I was right. That I had the recipe. After all, I had been successful for many years already at that point.

I was so convinced I was right, that I decided to run our first Agile project. A pilot project that was designed to test Agile - to show how Agile fails miserably (I thought, at that time). So I decided to do the project by the book. I read the book and went to work.

I was so convinced I was right that I wanted to prove Agile was wrong. Turned out, I was wrong.

The project was a success... I swear, I did not see that coming! After that project I could never look back. I found - NO! - I experienced a better way to develop software that spoiled me forever. I could no longer look back to my past as a traditional project manager and continue to believe the things I believed then. I saw a new land, and I knew I was meant to continue my journey in that land. Agile was my new land.

Many of you have probably experienced a similar journey. Maybe it was with Test-Driven Development, or maybe it was with Acceptance Testing, or even Lean Startup. All these methods have one thing in common: they represent a change in context for software development. This means: they fundamentally change the assumptions on which the previous methods were based. They were, in our little software development world a paradigm shift.

Test-driven development, acceptance testing, lean startup are methods that fundamentally change the assumptions on which the previous software development methods were based.

NoEstimates is just another approach that challenges basic assumptions of how we work in software development. It wasn’t the first, it will not be the last, but it is a paradigm shift. I know this because I’ve used traditional, Agile with estimation, and Agile with #NoEstimates approaches to project management and software delivery.

A world premiere?

That’s why Woody Zuill and I will be hosting the first ever (unless someone jumps the gun ;) #NoEstimates public workshop in the world. It will happen in Finland, of course, because that’s the country most likely to change the world of software development. A country of only five million people, yet with a huge track record of innovation: the first ever mobile phone throwing world championship was created in Finland. The first ever wife-carrying world championship was created in Finland. The first ever swamp football championship was created in Finland. And my favourite: the Air Guitar World Championship is hosted in Finland.

#NoEstimates being such an exotic approach to software development, it must, of course, have its first world-premiere workshop in Finland as well! Woody Zuill (his blog) and I will host a workshop on #NoEstimates in the week of October 20th in Helsinki. So whether you love it or hate it, you can meet us both in Helsinki!

In this workshop we will cover topics such as:

  • Decision making frameworks for projects that do not require estimates.
  • Investment models for software projects that do not require estimates.
  • Project management (risk management, scope management, progress reporting, etc.) approaches that do not require estimates.
  • We will give you the tools and arguments you need to prove the value of #NoEstimates to your boss, and how to get started applying it right away.
  • We will discuss where we see #NoEstimates going and what are the likely changes to software development that will come next. This is the future delivered to you!

Which of these topics interest you the most? What topics would you like us to cover in the workshop? Tell us now and you have a chance to affect the topics we will cover.

Contact us at and tell us. We will reply to all emails, even flame bombs! :)

You can receive exclusive content (not available on the blog) on the topic of #NoEstimates; just subscribe to the #NoEstimates mailing list below. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.

Picture credit: John Hammink, follow him on twitter

Are You Going to ALE 2014?

NOOP.NL - Jurgen Appelo - Thu, 06/26/2014 - 15:57

Did you know ALE is the only event where I pay so that I can attend?
I hope to see you in August in Kraków.

The post Are You Going to ALE 2014? appeared first on NOOP.NL.


Software Development Conferences Forecast June 2014

From the Editor of Methods & Tools - Thu, 06/26/2014 - 07:22
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine.

  • AGILE2014, July 28 – August 1, Orlando, USA
  • Agile on the Beach, September 4-5 2014, Falmouth in Cornwall, UK
  • SPTechCon, September 16-19 2014, Boston, USA
  • STARWEST, October 12-17 2014, Anaheim, USA
  • JAX London, October 13-15 2014, London, UK
  • Pacific Northwest ...

All Project Numbers are Random Numbers — Act Accordingly

Herding Cats - Glen Alleman - Wed, 06/25/2014 - 15:11

The numbers that appear in projects — cost, schedule, performance — are all random variables drawn from an underlying statistical process. This process is officially called a non-stationary stochastic process. It has several important behaviours that create problems for those trying to make decisions in the absence of understanding how these processes work in practice.

The first issue is that all point estimates for projects are wrong, in the absence of a confidence interval and an error band on that confidence.

"How long will this project take?" is a common question asked by those paying for the project. The technically correct answer is: there is an 80% confidence of completing on or before some date, with a 10% error on that confidence. This is a cumulative probability, collecting all the possible completion dates and describing the cumulative probability - the 80% - of finishing on or before that date, since the project can complete before that final probabilistic date as well.

The same conversation applies to cost: the cost of the project will be at or below some amount with an 80% confidence.
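The "on or before" answer can be sketched with a small simulation: sum uncertain task durations many times and read off the 80th percentile of the resulting distribution. The three tasks and their (low, high, most-likely) day ranges below are made-up examples:

```python
import random

def completion_forecast(n_sims=20_000, seed=11):
    """Monte Carlo sketch of a cumulative 'on or before' forecast.
    Each simulation draws a duration for every task and sums them;
    the sorted totals approximate the completion-date distribution."""
    random.seed(seed)
    tasks = [(5, 15, 8), (10, 30, 14), (3, 12, 6)]  # days: low, high, most likely
    totals = sorted(
        sum(random.triangular(lo, hi, mode) for lo, hi, mode in tasks)
        for _ in range(n_sims)
    )
    p50 = totals[int(0.50 * n_sims)]
    p80 = totals[int(0.80 * n_sims)]
    return p50, p80

p50, p80 = completion_forecast()
print(f"50% confidence: {p50:.1f} days; 80% confidence: on or before {p80:.1f} days")
```

Note that the 80% answer is always later than the median: the confidence level, not a single date, is the deliverable.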

The performance of products or services is the third random variable. By technical performance we mean anything and everything that is not cost or schedule. This is the wrapper term for the old concept of scope. In modern terms there are two general-purpose categories of Performance, plus one set of parameters.

  • Measures of Effectiveness - are the operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions. The Measures of Effectiveness:
    • Are stated in units meaningful to the buyer,
    • Focus on capabilities independent of any technical implementation,
    • Are connected to the mission success.
  • Measures of Performance - are the measures that characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions. The Measures of Performance are:
    • Attributes that assure the system has the capability and capacity to perform,
    • Assessment of the system to assure it meets design requirements to satisfy the MoE.
  • Key Performance Parameters - represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessing, or termination of the program. Key Performance Parameters:
    • Have a threshold or objective value,
    • Characterize the major drivers of performance,
    • Are considered Critical to Customer (CTC).

These measures are all random numbers with confidence intervals and error bands.

So What's The Point?

When we hear "you can't forecast the future," that's not true. The person saying that didn't pay attention in the high-school statistics class. You can forecast the future. You can make estimates of anything. The answers you get may not be useful, but they are estimates all the same. If it is unclear how to do this, here's a reading assignment of the books we use nearly every month to make our estimates at completion and estimates to complete for software-intensive projects, starting with the simplest:

While on the topic of books, here are some that should be on your shelf that put probability and statistics to work.

  • Facts and Fallacies of Software Engineering, Robert Glass - speaks to the common fallacies in software development. The most common is "we can't possibly estimate when we'll be done or how much it will cost." Read the book and start calling BS on anyone using that excuse to not do their homework. And a nice update by Jeff Atwood, co-founder of Stack Overflow.
  • Estimating Software-Intensive Systems, Richard Stutzke - this is the book that started the revolution of statistical modeling of software projects. When you hear "oh, this is so olde school," that person didn't take the HS stats class either.
  • Software Engineering Economics, Barry Boehm - is how to pull all this together. And when you hear this concept is olde school, you'll know better as well.

There are several tools that make use of these principles and practices:

Here's the End

  • Learn to estimate.
  • Teach others to estimate.
  • When the Dilbert boss comes around, you'll have the tools for a credible discussion about why the Estimate to Complete number he's looking for is bogus. He may not listen or even understand, but you will.

And that's a start in fixing the dysfunction of bad estimating when writing software for money. Start with the person who can actually make a change: you.




Do You Need to Create Virtual Teams with Freelancers?

Have you seen Esther Schindler’s great article yet? Creating High-Performance Virtual Teams of Freelancers and Contractors.

Here’s the blurb:

Plenty has been written about telecommuting for employees: how to encourage productivity, build a sense of “we’re all in this together,” and the logistics (such as tools and business processes) that streamline a telework lifestyle. But what about when your team is neither employees nor on-site? That gives any project manager extra challenges.

Lots of good tips.



Teams Should Go So Fast They Almost Spin Out of Control

Mike Cohn's Blog - Tue, 06/24/2014 - 15:00

Yes, I really did refer to guitarist Alvin Lee in a Certified Scrum Product Owner class last week. Here's why.

I was making a point that Scrum teams should strive to go as fast as they can without going so fast they spin out of control. Alvin Lee of the band Ten Years After was a talented guitarist known for his very fast solos. Lee's ultimate performance was of the song "I'm Going Home" at Woodstock. During the performance, Lee was frequently on the edge of flying out of control, yet he kept it all together for some of the best 11 minutes in rock history.

I want the same of a Scrum team--I want them going so fast they are just on the verge of spinning out of control yet are able to keep it together and deliver something classic and powerful.

Re-watching Ten Years After's Woodstock performance I'm struck by a couple of other lessons, which I didn't mention in class last week:

One: Scrum teams should be characterized by frequent, small hand-offs. A programmer gets eight lines of code working and yells, "Hey, Tester, check it out." The tester has been writing automated tests while waiting for those eight lines and runs the tests. Thirty minutes later the programmer has the next micro-feature coded and ready for testing. Although a good portion of the song is made up of guitar solos, they aren't typically long solos. Lee plays a solo and soon hands the song back to his bandmates, repeating for four separate solos through the song.

Two: Scrum teams should minimize work in progress. While "I'm Going Home" is a long song (clocking in at over eleven minutes), there are frequent "deliveries" of interpolated songs throughout the performance. Listen for "Blue Suede Shoes," "Whole Lotta Shakin'" and others, some played for just a few seconds.

OK, I'm probably nuts, and I certainly didn't make all these points in class. But Alvin Lee would have made one great Scrum teammate. Let me know what you think in the comments below.

We're All Looking for the Simple Fix - There Isn't One

Herding Cats - Glen Alleman - Tue, 06/24/2014 - 14:39


Every project domain is looking for a simple answer to complex problems. There isn't one. There are answers, but they require hard work, understanding, skill, experience, and tenacity to address the hard problems: showing up on time, at or near the planned cost, and with some acceptable probability that the products or services produced by the project will work and will actually provide the needed capabilities to fulfill the business case or mission of the project.

So It Comes Down To This

  • If we don't know what done looks like in some unit of measure meaningful to the decision makers, we'll never recognize it before we run out of time and money.
  • If we don't know what it will cost to reach done, we're over budget before we start.
  • If we don't have some probabilistic notion of when the project will be complete, we're late before we start.
  • If we don't measure progress to plan in some units of physical percent complete we have no idea if we are actually making progress. These measures include two classes:
    • Effectiveness - is the thing we're building actually effective at solving the problem.
    • Performance - is the solution performing in a way that allows it to be effective.
  • If we don't know what impediments we'll encounter along the way to done, those impediments will encounter us. They don't go away just because we don't know about them.
  • If we don't have any idea about what resources we'll be needing on the project, we will soon enough when we start to fall behind schedule or our products or services suffer from lack of skills, experience, or capacity for work.

Doing project work is about many things. But it's not just about writing code or bending metal. It's about the synergistic collaboration between all the participants. The notion that we don't need project management is one of those nonsense notions stated in the absence of a domain and context. The Product Owner in agile is the glue that pulls the development team together. But someone somewhere needs to fund that development, assure the logistics of deploying the resulting capabilities are in place, users are trained, the help desk is manned and trained, and regulations are complied with. The Program Manager on a mega-project in construction or defense does many of the same things.

Core information is needed as well: cost, planned deliverables, risk management, resource management, and other housekeeping functions.

Delivering on or near the planned time, at or near the planned budget, and more or less with the needed capabilities is hard work.


Book Tour Schedule 2014

NOOP.NL - Jurgen Appelo - Tue, 06/24/2014 - 10:21

Last week was Sweden-week in the Management 3.0 Book Tour, with workshops in Stockholm and Gothenburg. (Check out the videos!)

This week is Germany-Week, where I am visiting Munich, Frankfurt, and Berlin.

We have a lot of other countries on the list as well. Check out the complete schedule until December. Registration will open soon! (Sorry, no other countries will be added at this time.)

The post Book Tour Schedule 2014 appeared first on NOOP.NL.


Humans suck at statistics - how agile velocity leads managers astray

Software Development Today - Vasco Duarte - Tue, 06/24/2014 - 04:00

Humans are highly optimized for quick decision making: the so-called System 1 that Kahneman refers to in his book "Thinking, Fast and Slow". One specific area of weakness for the average human is understanding statistics. A very simple exercise to review this is the coin-toss simulation.

Humans are highly optimized for quick decision making.

Get two people to run this experiment (or one computer and one person if you are low on humans :). One person throws a coin in the air and notes down the results. For each "heads" the person adds one to the total; for each "tails" the person subtracts one from the total. Then she graphs the total as it evolves with each throw.

The second person simulates the coin-toss by writing down "heads" or "tails" and adding/subtracting to the totals. Leave the room while the two players run their exercise and then come back after they have completed 100 throws.

Look at the graphs that each person produced: can you detect which one was created by the real coin and which was "imagined"? Test your knowledge by looking at the graph below (don't peek at the solution at the end of the post). Which of these lines was generated by a human, and which by a pseudo-random process (computer simulation)?
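If you are low on humans, the coin-tossing half of the exercise can be simulated directly. Here is a minimal sketch of the random walk described above, plus a helper that measures the same-direction streaks a human simulator tends to avoid:

```python
import random

def coin_walk(n_throws=100, seed=3):
    """Simulate the coin-toss exercise: +1 for heads, -1 for tails,
    recording the running total after each throw."""
    random.seed(seed)
    total, path = 0, []
    for _ in range(n_throws):
        total += 1 if random.random() < 0.5 else -1
        path.append(total)
    return path

def longest_streak(path):
    """Length of the longest run of consecutive moves in the same
    direction - real coins produce longer streaks than people expect."""
    steps = [b - a for a, b in zip([0] + path[:-1], path)]
    best = run = 1
    for prev, cur in zip(steps, steps[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

path = coin_walk()
print(f"final total: {path[-1]}, longest same-direction streak: {longest_streak(path)}")
```

Run it a few times with different seeds: streaks of five or more consecutive heads or tails are routine, which is exactly what the human player refuses to write down.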

One common characteristic in this exercise is that the real random walk, which was produced by actually throwing a coin in the air, is often more repetitive than the one simulated by the player. For example, the coin may generate a sequence of several consecutive heads or tails throws. No human (except you, after reading this) would do that because it would not "feel" random. We, humans, are bad at creating randomness and understanding the consequences of randomness. This is because we are trained to see meaning and a theory behind everything.

Take the velocity of the team. Did it go up in the latest sprint? Surely they are getting better! Or, it's the new person that joined the team, they are already having an effect! In the worst case, if the velocity goes down in one sprint, we are running around like crazy trying to solve a "problem" that prevented the team from delivering more.

The fact is that a team's velocity is affected by many variables, and its variation is not predictable. However, and this is the most important, velocity will reliably vary over time. Or, in other words, it is predictable that the velocity will vary up and down with time.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project. For us as managers it is more important to understand what that throughput capability is, rather than to guess frantically at what might have caused a "dip" or a "peak" in the project's delivery rate.

The velocity of a team will vary over time, but around a set of values that are the actual "throughput capability" of that team or project.

When you look at a graph of a team's velocity don't ask "what made the velocity dip/peak?", ask rather: "based on this data, what is the capability of the team?". This second question will help you understand what your team is capable of delivering over a long period of time and will help you manage the scope and release date for your project.
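One way to make "what is the capability of the team?" concrete is to summarize past velocities as a band of typical values instead of chasing individual dips and peaks. A small sketch, with invented sprint data:

```python
def capability_band(velocities, low_pct=0.25, high_pct=0.75):
    """Summarize a team's 'throughput capability' as the range between
    two percentiles of its historical velocity, ignoring outlier noise."""
    ordered = sorted(velocities)
    n = len(ordered)
    lo = ordered[int(low_pct * (n - 1))]
    hi = ordered[int(high_pct * (n - 1))]
    return lo, hi

# Invented velocities for ten past sprints
sprints = [21, 34, 28, 19, 30, 25, 27, 33, 22, 29]
lo, hi = capability_band(sprints)
print(f"Typical sprint throughput: {lo} to {hi} points")
```

A band like this supports scope and release-date conversations far better than an explanation of any single sprint's number.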

The important question for your project is not, "how can we improve velocity?" The important question is: "is the velocity of the team reliable?"

Picture credit: John Hammink, follow him on twitter

Solution to the question above: the black line is the one generated by a pseudo-random simulation in a computer. The human-generated line is more "regular", because humans expect that random processes "average out". Indeed, that's the theory. But not the reality. Humans are notoriously bad at distinguishing real randomness from what we believe is random, but isn't.

As you know I've been writing about #NoEstimates regularly on this blog. But I also send more information about #NoEstimates and how I use it in practice to my list. If you want to know more about how I use #NoEstimates, sign up to my #NoEstimates list. As a bonus you will get my #NoEstimates whitepaper, where I review the background and reasons for using #NoEstimates.

Quantifying the Value of Information

Herding Cats - Glen Alleman - Mon, 06/23/2014 - 15:00

From the book How To Measure Anything, there is a notion starting from the McNamara Fallacy.

The first step is to measure whatever can be easily measured. This is okay as far as it goes. The second step is to disregard that which can't easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide. — Charles Handy, The Empty Raincoat (1995), describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara

There are three reasons to seek information in the process of making business decisions:

  1. Information reduces uncertainty about decisions that have economic consequences.
  2. Information affects the behaviour of others, which has economic consequences.
  3. Information sometimes has its own market value.
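Reason 1 can be made concrete with the expected value of perfect information (EVPI) from Hubbard's book: the most it is rational to spend on reducing uncertainty before deciding. A toy calculation with invented numbers:

```python
def value_of_information(p_success, payoff, loss):
    """Sketch of expected value of perfect information (EVPI).
    All three inputs are illustrative assumptions, not real figures."""
    # Expected value of deciding now: invest only if the expected value is positive
    ev_now = max(p_success * payoff - (1 - p_success) * loss, 0)
    # With perfect information we would invest only in the success case
    ev_perfect = p_success * payoff
    return ev_perfect - ev_now

evpi = value_of_information(p_success=0.6, payoff=100_000, loss=80_000)
print(f"Maximum rational spend on more information: ${evpi:,.0f}")
```

If a study, prototype, or estimate costs less than the EVPI, buying that information has positive expected value; if it costs more, decide with what you have.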

When we read ...

No Estimates

... and there are no alternatives described, then it's time to realize this is an empty statement. To be successful in the software development business we need information about the cost of developing value, the duration of the work effort that produces this value for those paying for the outcomes of our efforts, and the confidence that we can produce the needed capabilities on or near the planned delivery date, at or below the planned budget. (And fixing the budget just leaves the two other variables open, so that is an empty approach as well.)

The solution to the first has been around since the 1950s: decision theory. The answer to the second is provided by measuring productivity in regard to uncertainty about investments - an options or analysis of alternatives (AoA) process. The notion of market information is based on Return on Investment, where the value produced in exchange for the cost to produce that value is a fundamental principle of all successful businesses.

If we can somehow separate the writing of software from the discussion of determining the cost of that effort, it may become clearer that the software development community needs to consider the needs of those funding their work over their own self-interest of not wanting to estimate the cost of that work. In the end those with the money need to know. If the development community isn't interested in providing viable - credible business processes - to answer how much, when, and what - then it'll be done without them, because to stay in business, the business must know the cost of their products or services.


Kanban, Developer Career & Mobile UX in Methods & Tools Summer 2014 issue

From the Editor of Methods & Tools - Mon, 06/23/2014 - 14:54
Methods & Tools – the free e-magazine for software developers, testers and project managers – has just published its Summer 2014 issue that discusses objections to Kanban implementation, how to use a model to evaluate and improve mobile user experience, balancing a software development job and a meaningful life, Scrum agile project management tools, JavaScript unit testing and static analysis for BDD. Methods & Tools Summer 2014 contains the following articles:

  • Kanban for Skeptics
  • Using a Model To Systematically Evaluate and Improve Mobile User Experience
  • Developer Careers Considered Harmful
  • TargetProcess – ...

Quote of the Day

Herding Cats - Glen Alleman - Sun, 06/22/2014 - 18:30


“Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science.”

Along with those assertions (made with no evidence) that this or that will not be solved comes the assertion that "I have a solution for your complex problem" - one that is simple and straightforward, and that usually involves NOT doing something that is being performed improperly, labeling it a dysfunction, and ignoring the search for the root cause.

Which brings us to the next quote about simple and simple-minded solutions to complex problems.

"For every complex problem there is an answer that is clear, simple, and wrong." - H. L. Mencken

Categories: Project Management

How to "Lie" with Statistics

Herding Cats - Glen Alleman - Sat, 06/21/2014 - 20:20

The book How To Lie With Statistics, Darrell Huff, 1954, should be on the bookshelf of everyone who spends other people's money, for a very simple reason.

Everything on every project is part of an underlying statistical process. Those expecting any number associated with any project in any domain to be a single-point estimate will be sorely disappointed to find out, after reading the book, that this is not the case.

As well, those expecting to make decisions about how to spend other people's money will be disappointed to learn that statistical information is needed to determine the impact of a decision: that impact is influenced by the cost of the decision, the value obtained by the decision, the effect on the schedule of the work needed to produce that value, and even the statistical outcomes of the benefits produced by making the decision.
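To see why a single-point estimate misleads, here is a minimal Monte Carlo sketch. The task durations and distributions are invented for illustration, not taken from any project data: summing the "most likely" values understates what a confidence-based completion number looks like.

```python
import random

random.seed(7)

# Hypothetical triangular distributions (low, most-likely, high) for three tasks, in days.
tasks = [(4, 6, 10), (8, 12, 20), (3, 5, 9)]

trials = 10_000
totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for _ in range(trials)
)

point_estimate = sum(mode for _, mode, _ in tasks)  # sum of "most likely" values: 23 days
p80 = totals[int(0.80 * trials)]                    # 80th-percentile completion

print(f"point estimate: {point_estimate} days, 80% confidence: {p80:.1f} days")
```

With right-skewed task distributions like these, the 80th-percentile completion sits well above the sum of the most-likely values - which is exactly the gap a single-point estimate hides.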

One prime example of How To Lie (though unlikely an intentional lie - more likely just a poor application of statistical processes) is Todd Little's "Schedule Estimation and Uncertainty Surrounding the Cone of Uncertainty." In this paper, the following figure is illustrative of the How to Lie paradigm.

[Figure: initial estimate versus actual completion for the sampled projects]

This figure shows 106 sampled projects, their actual completion, and their ideal completion. First let's start with another example of Bad Statistics - the Standish Report - often referenced when trying to sell the idea that software projects are always in trouble. Here's a summary of posts about the Standish Report, which speaks to a few Lies in the How to Lie paradigm.

  • The samples are self-selected, so we don't get to see the correlation between the sampled projects and the larger population of projects at the firms.
    • Those returning the Standish survey stating they had problems, and those not having problems, can't be compared to those not returning the survey - nor to the larger population of IT projects that was not sampled.
    • This is a Huff example - limit the sample space to those examples that support your hypothesis.
  • The credibility of the original estimate is not stated or even mentioned.
    • Another good Huff example - there is no way to test what the root cause of the trouble was, so no way to tell the statistical inference from the suggested solution to the possible corrected outcome.
  • The Root Cause of the over-budget, over-schedule, and less-than-promised delivery of features is not investigated, nor are any corrective actions suggested, other than hiring Standish.
    • Maybe the developers at these firms are not very good at their jobs and can't stay on cost and schedule.
    • Maybe the sampled projects were much harder than first estimated, and the initial estimate was not updated - with a new estimate to complete - when this was discovered.
    • Maybe management forced the estimate onto the development team, so the project was doomed from day one.
    • Maybe those making the estimate had no estimating process, skills, or experience in the domain they were asked to estimate for.
    • Maybe a few dozen other Root Causes were in place to create the Standish charts, but these were not separated from the statistical samples to reveal the underlying data.
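Huff's sample-selection problem is easy to demonstrate. This is a hypothetical simulation - the base failure rate and survey-response rates are invented for illustration, not Standish's actual numbers - showing how self-selected survey returns inflate the apparent rate of troubled projects.

```python
import random

random.seed(1)

# Hypothetical population: 10,000 projects, 30% truly "challenged" (True).
population = [random.random() < 0.30 for _ in range(10_000)]

# Self-selection: challenged projects are assumed far more likely to return the survey.
def returns_survey(challenged: bool) -> bool:
    return random.random() < (0.60 if challenged else 0.15)

sample = [p for p in population if returns_survey(p)]

true_rate = sum(population) / len(population)
sampled_rate = sum(sample) / len(sample)

print(f"true challenged rate: {true_rate:.0%}, self-selected sample: {sampled_rate:.0%}")
```

With these assumed response rates, roughly 30% of projects are challenged, yet the self-selected sample reports well over half - the "limit the sample space" lie in action.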

So let's look at Mr. Little's chart.

There is likely good data at his firm, Landmark Graphics, for assessing the root cause of the projects finishing above the line in the chart. But the core issue is that the line is not calibrated. It represents the ideal data - that is, using the original estimate, what did the project do? - as stated on page 49 of the paper.

For the Landmark data, the x-axis shows the initial estimate of project duration, and the y-axis shows the actual duration that the projects required.

There is no assessment of the credibility of the initial estimate for the project. This initial estimate might accurately represent the projected time and cost, with a confidence interval. Or this initial estimate could be completely bogus - a guess made up by uninformed estimators, or worse yet, an estimate that was cooked in all the ways possible, from bad management to bad math.

So if the baseline we make comparisons from is bogus from the start, it's going to be hard to draw any conclusions from the actual data on the projects. Both initial estimates and actual measurements must be statistically sound before any credible decisions can be made about the Root Cause of the overage and any possible Corrective Actions that can be taken to prevent these unfavorable outcomes.

This is classic How To Lie - present a bogus scale or baseline, then show some data that supports the conjecture that something is wrong.

In the case of the #NoEstimates approach, that conjecture starts with the Twitter clip below, which can be interpreted as: we can make decisions without having to estimate the independent and dependent variables that go into that decision.

[Image: #NoEstimates tweet]

So if estimates are the smell of dysfunction, as the popular statement goes, what is the dysfunction? Let me count the ways:

  • The estimates in many software development domains are bogus to start with. That causes management to be unhappy with the results and lowers trust in those making the estimates, which in turn creates distrust between those providing the money and those spending the money - a dysfunction.
  • The management in these domains doesn't understand the underlying statistical nature of software development and has an unfounded desire for facts about the cost, duration, and probability of delivering the proper outcomes in the absence of the statistical processes driving those outcomes. That causes the project to be in trouble from day one.
  • The insistence that estimating is somehow the source of these dysfunctions, and that the corrective action is to Not Estimate, is a false tradeoff - in the same way as the Standish Report saying "look at all these bad IT projects, hire us to help you fix them." This also causes the project to fail from day one, since those paying for the project have little or no understanding of what they are going to get in the end, or for what estimated cost, if there is one.

So next time you hear that estimates are the smell of dysfunction, or that we can make decisions without estimating:

  • Ask if there is evidence of the root cause of the problem.
  • Ask to read - in simple bullet-point examples - some of these alternatives, so you can test them in your domain.
  • Ask in what domains not estimating would be applicable. There are likely some. I know of some. Let's hear some others.
  • Ask to be shown how Not Estimating is the corrective action for the dysfunction.
Related articles:
  • Averages Without Variances are Meaningless - Or Worse, Misleading
  • Statistics, Bad Statistics, and Damn Lies
  • How To Estimate Almost Any Software Deliverable
  • Let's Stop Guessing and Learn How to Estimate
  • Probabilistic Cost and Schedule Processes
  • How to lie with statistics: the case of female hurricanes
  • How to Fib With Statistics
  • To explain or predict?
Categories: Project Management

Why We Must Learn to Estimate

Herding Cats - Glen Alleman - Fri, 06/20/2014 - 22:18

[Image: men starting to build a bridge without a plan]

The notion of making any decision without knowing something about the cost of that decision, its schedule impacts, or the resulting impacts on delivered capabilities is like the guys here in the picture. They started building their bridge. They will run out of materials, can't see the destination, and likely have the wrong tools.

The continued insistence that we can make decisions in the absence of estimates needs to be tested in the marketplace by those providing the money, not by those consuming the money.

[Image: #NoEstimates tweet]

When it is mentioned that cost can be fixed through a budget process, this still leaves the schedule and the delivered capabilities as two random variables that need to be estimated if we are to provide those funding our work with credible confidence that we'll show up on time with the needed capabilities.
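As a sketch of what estimating those two random variables looks like - with wholly assumed distributions and thresholds, since the post gives no numbers - a Monte Carlo run turns the estimates into a confidence statement for those funding the work:

```python
import random

random.seed(42)

# Cost is fixed by the budget; schedule (weeks) and delivered feature count
# remain random variables. Both distributions below are assumptions.
trials = 20_000
deadline_weeks, required_features = 26, 40

on_time_with_features = 0
for _ in range(trials):
    schedule = random.gauss(mu=24, sigma=3)   # assumed schedule distribution
    features = random.gauss(mu=45, sigma=6)   # assumed scope distribution
    if schedule <= deadline_weeks and features >= required_features:
        on_time_with_features += 1

confidence = on_time_with_features / trials
print(f"probability of showing up on time with the needed capabilities: {confidence:.0%}")
```

A statement like "about a 60% chance of making the date with the needed scope" is exactly the credible confidence the budget alone cannot provide.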


Related articles:
  • How To Fix Martin Fowler's Estimating Problem in 3 Easy Steps
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • Making Estimates For Your Project Require Discipline, Skill, and Experience
  • First Comes Theory, Then Comes Practice
Categories: Project Management

Run! Jurgen, Run!

NOOP.NL - Jurgen Appelo - Fri, 06/20/2014 - 15:03

I tried running, and it didn’t work. I suffered from shin splints. No matter what kind of shoes I wore, what stretching exercises I did, or how far I ran, it usually ended up with a sharp stinging pain in my shins. Not good.

I tried Pilates and yoga exercises, and they also didn’t work. It wasn’t the pain this time that made me stop, but a severe lack of motivation to roll out the mat and thrust my skinny legs up in the air. It was so boring. Not good.

I tried swimming, and it didn’t work either. Driving up and down to the local swimming pool cost me far too much time, and I noticed that swimming pools are difficult to carry around when I’m traveling. Not good.

Still, I want to do something in order to improve my health.

The post Run! Jurgen, Run! appeared first on NOOP.NL.

Categories: Project Management

Quote of the Month June 2014

From the Editor of Methods & Tools - Fri, 06/20/2014 - 06:39
A UX team that deals with only the details of radio buttons and check boxes is committing a disservice to its organization. Today UX groups must deal with strategy. Source: Institutionalization of UX (2nd Edition), Eric Schaffer & Apala Lahiri, Addison-Wesley

How To Measure Anything

Herding Cats - Glen Alleman - Wed, 06/18/2014 - 21:41

When it is said that we can't forecast or estimate, it brings a smile, since in fact forecasting and estimating are done all the time. Not always correctly, and not always properly used once the estimate is made, but done all the same - every day in some domains, every week and every month in the domains I work in.

In our domains the Estimate at Complete is submitted to the customer every month, and the Estimate at Completion quarterly, on most projects we work. These are software-intensive projects, and sometimes software-only projects. All innovative development - sometimes never been done before, sometimes inventing new physics.

Some of these estimates are very formal, using tools, reference class forecasting, Autoregressive Integrated Moving Average (ARIMA) projections of risk-adjusted past performance, and compliance with Systems Engineering Measures of Effectiveness (MOE) and Measures of Performance (MOP), traceable to Technical Performance Measures (TPM) and Key Performance Parameters (KPP). Some are simple linear projections of what it will cost given a few parameters - the "is it bigger than a bread box" type of estimate. Here's how to estimate any software deliverable in an informal way.
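As an illustration of the reference class forecasting idea - the history table and the helper function are invented for this sketch - a raw inside-view estimate can be scaled by the empirical estimate-to-actual ratios of similar past projects:

```python
# Hypothetical reference class: (initial estimate, actual) durations in weeks
# for past projects similar to the one being estimated.
history = [(10, 13), (8, 9), (20, 29), (12, 14), (16, 21)]

ratios = sorted(actual / estimate for estimate, actual in history)

def reference_class_forecast(raw_estimate: float, percentile: float) -> float:
    """Scale a raw estimate by the empirical growth ratio at the given percentile."""
    idx = min(int(percentile * len(ratios)), len(ratios) - 1)
    return raw_estimate * ratios[idx]

raw = 15.0  # weeks, the inside-view estimate
print(f"50th percentile: {reference_class_forecast(raw, 0.5):.1f} weeks")
print(f"80th percentile: {reference_class_forecast(raw, 0.8):.1f} weeks")
```

The outside view corrects the inside view: instead of trusting the raw 15 weeks, the forecast carries the growth history of the reference class with it.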

At last week's ICEAA conference, where a colleague and I presented two papers - Cure of Cost and Schedule Growth and Earned Value Management Meets Big Data, along with the briefing deck - we were introduced to this book. As it says in its name: you can measure anything.

Chapter 2 opens with a powerful quote:

"Success is a function of persistence and doggedness and the willingness to work hard for twenty minutes to make sense of something that most people would give up on after thirty seconds." - Malcolm Gladwell, Outliers: The Story of Success

That chapter and others speak to making estimates about the things we want to measure, along with Monte Carlo Simulation - another powerful estimating tool we use on our programs. The process entering our domain (space and defense) is Bayesian estimating - adding to what we already know.

The instinctive Bayesian approach is very simple:

  • Start with a calibrated estimate
  • Gather additional information
  • Update the calibrated estimate subjectively, without doing additional calculations
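For readers who do want the calculations behind these three steps, one formal counterpart is a conjugate normal-normal update of a calibrated estimate; the cost figures below are hypothetical:

```python
# Conjugate normal-normal update of a cost estimate (hypothetical numbers).
# Prior: calibrated estimate of total cost. Evidence: trend from early work.

prior_mean, prior_var = 100.0, 20.0 ** 2   # $100k, sigma $20k
obs_mean, obs_var = 130.0, 15.0 ** 2       # observed burn trend suggests $130k

# Posterior precision is the sum of the precisions; posterior mean is the
# precision-weighted average of the prior and the observation.
post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)

print(f"updated estimate: ${post_mean:.1f}k +/- ${post_var ** 0.5:.1f}k")
```

The posterior lands between the prior and the new evidence, and its variance is smaller than either input - gathering information always tightens the calibrated estimate.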

So if we hear that we can't forecast the future, that estimates are a waste, that we can't know anything about the future until it arrives - stop, and think about all the estimating and forecasting activities you interact with every day: the weather, the stock market, your drive to work, the estimated cost of repainting your house, or the estimated cost of a kitchen remodel.

Anything can be estimated or forecast. All that has to happen is the desire to learn how. Since the purpose of estimates is to improve the probability of success for the project, the estimates start by providing information to those paying for the project. This is an immutable principle of business:

Value is exchanged for the cost of that value. We can't know the value of something until we know its cost - from the kitchen cabinets, to the garden upgrade, to the software for Medicaid enrollment. It's this simple:

ROI = (Value — Cost) / Cost
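In code, with hypothetical kitchen-remodel numbers, that formula reads:

```python
def roi(value: float, cost: float) -> float:
    """Return on investment: net value produced per unit of cost."""
    return (value - cost) / cost

# Hypothetical kitchen remodel: $60k of added home value for $40k of cost.
print(f"ROI: {roi(60_000, 40_000):.0%}")  # prints "ROI: 50%"
```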


Related articles:
  • Making Estimates For Your Project Require Discipline, Skill, and Experience
  • First Comes Theory, Then Comes Practice
  • How NOT to Estimate Anything
  • Resources for Moving Beyond the "Estimating Fallacy"
  • The Metropolis Algorithm
  • Critical Thinking Insight
  • Software Cost Estimating Information
  • Yes, checking calibration of probability forecasts is part of Bayesian statistics
  • How to Forecast the Future
Categories: Project Management

Domain and Context Are King, Then Comes Process and Experience

Herding Cats - Glen Alleman - Tue, 06/17/2014 - 17:51

In many discussions of process, a solution to a problem is proposed in the absence of domain and context. Overgeneralization is usually the result. This is so common in the agile development world that I built a short briefing to communicate the issues with making suggestions in the absence of domain and context.

Most ideas are credible once the domain and context are established. There are, though, a few immutable principles of all project management, and with those principles come a few practices that must be in place if success is going to have a chance of appearing before we run out of time and money. The briefing below provides one set of immutable Principles, and the Practices and Processes needed to increase the probability of project success.

Principles and Practices of Performance-Based Project Management® from Glen Alleman

Related articles:
  • Performance-Based Project Management(sm) Released
  • It's Not the People, It's the Process?
  • Seven Immutable Activities of Project Success
  • Domain and Context King
  • Agile Requires Discipline, In Fact Successful Projects Require Discipline
  • Three Kinds of Uncertainty About the Estimated Cost and Schedule
  • The Connection Between Domain Expertise and Successful Startups
Categories: Project Management