
Software Development Blogs: Programming, Software Testing, Agile Project Management


Staying Ahead of the Curve

From the Editor of Methods & Tools - Tue, 05/26/2015 - 15:45
We all want to stay ahead of the curve – after all, that’s what you go to a conference for. But have you ever considered how being ahead of the curve might be dangerous? Using a new language before you understand it, putting a technology into production so you can learn it, abandoning “old practices” […]

Early Release of Agile and Lean Program Management Available

I have finished integrating comments from the early review of Agile and Lean Program Management: Scaling Collaboration Across the Organization. I decided that the book was good enough to release to the general public.

I find it difficult to release books in progress. The in-progress part challenges my perfection rules. I also know that some of you who want this book will wait until it’s done, or worse, available in paper.

However, since this is an agile and lean book, it seems nuts to not release it, even though it is not quite done.

If you get the book, please send me comments about what confused you, what you thought was crazy, and anything else.

Thanks so much!

 

Categories: Project Management

Memorial Day

Herding Cats - Glen Alleman - Sun, 05/24/2015 - 19:27

In case you thought it was about the 3-day weekend, parties, and the beach...

Thanks to all my neighbors, friends, and colleagues for their service.

Categories: Project Management

The Dysfunctional Approach to Using "5 Whys"

Herding Cats - Glen Alleman - Fri, 05/22/2015 - 18:29

It's been popular recently in some agile circles to say we use the 5 Whys when asking about dysfunction. This common and misguided approach assumes - wrongly - that causal relationships are linear and problems come from a single source. For example:

Estimates are the smell of dysfunction. Let's ask the 5 Whys to reveal these dysfunctions

The natural tendency is to assume that in asking the 5 Whys there is a single thread connecting cause and effect from beginning to end. This single source of the problem - the symptom - is labeled the Root Cause. The question is: is that root cause the actual root cause? The core problem is that the 5 Whys is not really seeking a solution, just eliciting more symptoms masked as causes.

A simple example illustrates the problem from Apollo Root Cause Analysis.

Say we're in the fire prevention business. If preventing fires is our goal, let's look for the causes of the fire and determine the corrective actions needed to actually prevent a fire from occurring. In this example let's say we've identified 3 potential causes of fire. There is ...

  1. An ignition source
  2. Combustible material
  3. Oxygen

So what is the root cause of the fire? To prevent the fire - and in the follow-on example prevent a dysfunction - we must find at least one cause of the fire that can be acted on to meet the goals and objectives of preventing the fire AND that is within our control.

If we decide to control combustible materials, then the root cause is the combustibles. Same for the oxygen, which can be controlled by inerting a confined space, say with nitrogen. Same for the ignition sources. This traditional Root Cause Analysis pursues a preventative solution that is within our control and meets the goals and objectives - prevent fire. But this is not actually the pursuit of the Root Cause. By pursuing this approach, we stop on a single cause that may or may not result in the best solution. We're misled into a categorical thinking process that looks for solutions. This doesn't mean there is no root cause. It means a root cause cannot be labeled until we have decided which solutions we are able to implement. The root cause is actually secondary to and contingent on the solution, not the inverse. Only after solutions have been established can we identify the actual root cause of the fire to be prevented.
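
To make that point concrete, here is a minimal sketch in Python - with hypothetical data, not taken from the book - showing that the effect requires the conjunction of causes, and that the "root cause" label is only assigned after we pick a cause we are actually able to act on.

    # Hypothetical data: each cause of the fire, and whether it is within our control.
    causes = {
        "ignition source":      {"present": True, "within_our_control": True},
        "combustible material": {"present": True, "within_our_control": True},
        "oxygen":               {"present": True, "within_our_control": False},
    }

    # The effect needs the conjunction of causes, not a single linear chain of whys.
    fire_occurs = all(c["present"] for c in causes.values())

    # Prevention needs only one cause removed, but the candidate must be actionable.
    candidates = [name for name, c in causes.items() if c["within_our_control"]]
    print("fire occurs:", fire_occurs)
    print("root cause candidates (contingent on feasible solutions):", candidates)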

The notion that Estimates are the smell of dysfunction in a software development organization, and that asking the 5 Whys will reveal the Root Cause, is equally flawed.

The need to estimate or not estimate has not been established. It is presumed that it is the estimating process that creates the dysfunction, and then the search - through the 5 Whys - is a false attempt to categorize the root causes of this dysfunction. The supposed dysfunction is then reverse engineered to be connected to the estimating process. This is not only a naïve approach to solving the dysfunction, it inverts the logic by ignoring the need to estimate. Without confirmation that estimates are needed or not needed, the search for the cause of the dysfunction has no purposeful outcome.

The decision that estimates are needed or not needed does not belong to those being asked to produce the estimates. That decision belongs to those consuming the estimate information in the decision-making process of the business - those whose money is being spent.

And of course those consuming the estimates need to confirm they are operating their decision-making processes in some framework that requires estimates. It could very well be that those providing the money don't actually need an estimate. The value at risk may be low enough - say 100 hours of development for a DB upgrade. But when the value at risk is sufficiently large - and that determination is again made by those providing the money - then the business has a legitimate need to know how much, when, and what. In this case, decisions are based on the Microeconomics of opportunity cost for uncertain outcomes in the future.

This is the basis of estimating and the determination of the real root causes of the problems with estimates. Saying we're bad at estimating is NOT the root cause. And it is never the reason not to estimate. If we are bad at estimating, and if we do have confirmation and optimism biases, then fix them. Remove the impediments to produce credible estimates. Because those estimates are needed to make decisions in any non-trivial value at risk work. 

 

Related articles: Let's Get The Dirt On Root Cause Analysis; Essential Reading List for Managing Other People's Money; The Fallacy of the Planning Fallacy; Mr. Franklin's Advice
Categories: Project Management

Software for the Mind

Herding Cats - Glen Alleman - Fri, 05/22/2015 - 00:21

The book Software for Your Head was a seminal work when we were setting up our Program Management Office in 2002 for a mega-project to remove nuclear waste from a very contaminated site in Golden, Colorado.

Here's an adaptation of those ideas to the specifics of our domain and problems

Software for your mind from Glen Alleman

This approach was a subset of a much larger approach to managing in the presence of uncertainty, very high risk, and even higher rewards, all on a deadline and fixed budget. As was stated in the Plan of the Week:
  • Monday - Where are we going this week?
  • Daily - What are we doing along the way?
  • Friday - Where have we come to?

Do this every week, guided by the 3 year master plan and make sure no one is injured or killed.

That project is documented in the book Making the Impossible Possible summarized here.

Making the impossible possible from Glen Alleman

Related articles: The Reason We Plan, Schedule, Measure, and Correct; The Flaw of Empirical Data Used to Make Decisions About the Future; There is No Such Thing as Free
Categories: Project Management

We've Been Doing This for 20 Years ...

Herding Cats - Glen Alleman - Thu, 05/21/2015 - 03:58

We've been doing this for 20 years and therefore you can as well

Is a common phrase used when asked, in what domain does your approach work? Of course, without a test of that idea outside the domain in which the anecdotal example is used, it's going to be hard to know if that idea is actually credible beyond those examples.

So if we hear we've been successful in our domain doing something - or better yet NOT doing something, like say NOT estimating - ask: in what domain have you been successful? Then the critical question: is there any evidence that the success in that domain is transferable to another domain? This briefing provides a framework - from my domain of aircraft development - illustrating that domains vary widely in their needs, constraints, governance processes, and applicable and effective approaches to delivering value.

Paradigm of agile project management from Glen Alleman

Google seems to have forgotten how to advance the slides on the Mac, so click on the presentation title (Paradigm of Agile PM) to do that. Safari works.

Related articles: The Reason We Plan, Schedule, Measure, and Correct; The Flaw of Empirical Data Used to Make Decisions About the Future; There is No Such Thing as Free; Root Cause Analysis; Domain is King, No Domain Defined, No Way To Test Your Idea; Mr. Franklin's Advice
Categories: Project Management

Software Development Linkopedia May 2015

From the Editor of Methods & Tools - Wed, 05/20/2015 - 15:24
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about Agile software development, giving feedback, managing technical debt, normalizing user stories, dependency injection, developer griefs, behavior driven development (BDD) and software architecture. Web site: The GROWS Method Blog: The Failure […]

Variations in Iteration - New Lecture Posted

10x Software Development - Steve McConnell - Tue, 05/19/2015 - 18:26

I've posted this week's lecture in my Understanding Software Projects series at https://cxlearn.com. Most of the lectures that have been posted are still free. Lectures posted so far include:  

0.0 Understanding Software Projects - Intro

     0.1 Introduction - My Background
     0.2 Reading the News

1.0 The Software Lifecycle Model - Intro
     1.1 Variations in Iteration (New this week) 
     1.2 Lifecycle Model - Defect Removal

2.0 Software Size

Check out the lectures at http://cxlearn.com!

Understanding Software Projects - Steve McConnell

 

Impostor Syndrome: Why Some ScrumMasters Feel Like They’re Faking It

Mike Cohn's Blog - Tue, 05/19/2015 - 15:00

Geoff Watts is one of the leading Scrum thinkers in the world, and one of the few to hold both the Certified Scrum Trainer and Certified Scrum Coach designations. In addition to his book, "Scrum Mastery: From Good To Great Servant-Leadership," Geoff has written a new book, "The Coach's Casebook". His new book is not specifically about Scrum or agile, but since a great deal of a ScrumMaster's job is indeed coaching a team to better performance, I think you'll find the book applicable to your work. In the following guest post, Geoff shares a story about a feeling I think we can all relate to. I know I've felt the way he describes many times. --Mike Cohn

 

"One of these days, I'm going to be found out. They will realise I have no idea what I'm talking about and I'll be fired."

I remember standing in my kitchen 15 years ago saying these words to my wife. We were thinking about buying something for the house and I was worried that my position as a project manager wasn't secure enough for us to make the purchase. I was worried because I was not a technical person, yet I was project managing a technical project full of technical people.

I didn't understand databases or SQL or Java; I didn't even know how a phone worked yet, and I was working for a telecom company! How could I possibly manage a project in this context with so little domain or technical knowledge?!

I was bluffing as best I could, but I thought that one day they would realise I didn't know what I was talking about.

What I didn't realise at the time was that I didn't need to know that stuff because the people that mattered did. In fact, I believe that if I had known that stuff, then I would have been much less successful in my job.

You see, the development team knew about databases and Java and SQL, and the customer knew how phones worked. If I wanted to run the project, I mean really micromanage it, then I would have needed to know all that stuff. But that wasn't my goal.

My goal was to enable the people around me to be able to do what they needed to do, to apply their knowledge, intelligence and experience to making the project and product work. Being an impostor actually helped me avoid being a micromanager.

What I needed to know and learn was how to enable those people. And then I needed to learn to be comfortable with being an impostor.

“Impostor syndrome” is an actual phenomenon – yes, it is widespread enough to have a name – and you may be suffering from Impostor syndrome if you believe:

  • You are a fraud, that you are faking it and that one day soon, you will be "found out"
  • That everyone else knows more than you do
  • The faith that others have in your ability is misplaced
  • That you aren't as good as they seem to believe you are
  • That your successes are largely down to luck, being in the right place at the right time or because of other people

Ever felt like that? It's completely normal. In fact, it would be abnormal if you haven't felt like that because it is believed that up to 70 percent of people have. This certainly matches my personal empirical evidence.

ScrumMasters are perhaps more prone to feelings of Impostor syndrome than anyone else in an agile team or organisation. This is partly because of the lack of authority inherent in the role. They have no power, and so often find themselves doubting themselves and their position.

Their role is also quite loosely defined in terms of their responsibilities and that increases the lack of clarity and confidence that ScrumMasters can have.

Impostor syndrome, like all of the traits that come up in my coaching practice, is not a bad thing. People with Impostor syndrome are generally quite humble, reflective and diligent.

They are constantly trying hard to prove themselves worthy (to themselves and others) and rarely settle for mediocrity because of their anxiety about being found out. As a result, people with a high degree of Impostor syndrome are often high achievers.

It's not therefore a trait to try and get rid of altogether, but rather, it's important to bring it into balance. Harness the positive aspects while trying to mitigate the anxiety and stress that can come from this trait when overdone.

The first step in bringing Impostor syndrome into balance is normalisation – accepting that this is common. Then, consciously appreciating your strengths and bringing others down from the often-overinflated pedestal that you have put them on, because in all likelihood, they are “faking it” just as much as you are!

Bringing this trait into balance for yourself in your role as ScrumMaster may also help you coach others on their Impostor syndrome as well.

There are some great techniques for dealing with this and other traits that can trap us in my new book, “The Coach's Casebook” available now from Amazon here.

Introducing: Me 3.0

NOOP.NL - Jurgen Appelo - Tue, 05/19/2015 - 12:48

Some people noticed that my avatar pictures on the social networks were deviating from the real-life version at a slow but steady pace.

Yes, I’m getting older!
Thanks for pointing it out.

The post Introducing: Me 3.0 appeared first on NOOP.NL.

Categories: Project Management

Essential Reading List for Managing Other People's Money

Herding Cats - Glen Alleman - Mon, 05/18/2015 - 15:58

Education is not the learning of facts, but the training of the mind to think - Albert Einstein 

So if we're going to learn how to think about managing the spending of other people's money in the presence of uncertainty, we need some basis of education.

Uncertainty is a fundamental and unavoidable feature of daily life. Personal life and the life of projects. To deal with this uncertainty intelligently we represent and reason about these uncertainties. There are formal ways of reasoning (logical systems for reasoning found in the Formal Logic and Artificial Intelligence domain) and informal ways of reasoning (based on probability and statistics of cost, schedule, and technical performance in the Systems Engineering domain).

If Twitter, LinkedIn, and other forum conversations have taught me anything, it's that many participants base their discussion on personal experience and opinion. Experience informs opinion. That experience may be based on gut feel learned from the school of hard knocks. But there are other ways to learn as well. Ways to guide your experience and inform your opinion. Ways based on education and frameworks for thinking about solutions to complex problems.

Samuel Johnson has served me well with his quote...

There are two ways to knowledge: we know a subject ourselves, or we know where we can find information upon it.

Hopefully the knowledge we know ourselves has some basis in fact, theory, and practice, vetted by someone outside ourselves, someone beyond our personal anecdotal experience.

Here's my list of essential readings that form the basis of my understanding, opinion, principles, practices, and processes as they are applied in the domains I work in - Enterprise IT, defense and space, and their software-intensive systems.

  • Making Hard Decisions: An Introduction to Decision Analysis, Robert T. Clemen
    • Making decisions in the presence of uncertainty is part of all business and technical endeavors.
    • This book and several others should be the start when making decisions about how much, when, and what.
  • Apollo Root Cause Analysis: Effective Solutions to Everyday Problems, Every Time, Dean L. Gano.
    • There is a powerful quote from Chapter 1 of this book
      • STEP UP TO FAIL
      • Ignorance is a most wonderful thing.
      • It facilitates magic.
      • It allows the masses to be led.
      • It provides answers when there are none.
      • It allows happenings in the presence of danger.
      • All this, while the pursuit of knowledge can only destroy the illusion. Is it any wonder mankind chooses ignorance?
    • This book is the starting point for all that follows. I usually only come to an engagement when there is trouble.
    • No need for improvement if there's no trouble.
    • Without a Root Cause Analysis process and corrective actions, all problems are just symptoms. And treating the symptoms does little to make improvements to any situation.
    • So this is the seminal book, but any RCA process is better than none.
  • The Phoenix Handbook, William R. Corcoran, PhD, P.E., Nuclear Safety Review Concepts, 19 October 1997 version.
    • This was a book and process used at Rocky Flats for Root Cause Analysis
  • Effective Complex Project Management: An Adaptive Agile Framework for Delivering Business Value, Robert K. Wysocki, J. Ross
    • All project work is probabilistic.
    • All project work is complex. Agile software development is not the same as project management.
    • For agile software development beyond a handful of people in the same room as their customer, project management is needed.
    • This book tells you where to start in performing the functions of Project Management in the Enterprise domain.
  • The Art of System Architecting, 2nd Edition, Mark W. Maier and Eberhardt Rechtin, CRC Press
    • Systems have architecture. This architecture is purpose built.
    • The notion that the best architectures, requirements, and designs emerge from self-organizing teams needs to be tested in a specific domain.
    • Many domains have reference architectures; DODAF and TOGAF are two examples.
    • Architectures developed by self-organizing teams may or may not be useful over the life of the system. It depends on the skills and experience of the architects. Brian Foote has a term for self-created architectures - ball of mud. So care is needed not to skip testing the self-organizing team's ability to produce a good architecture.
    • The Rechtin book can be your guide for that test.
  • Systems Engineering: Coping with Complexity, Richard Stevens, Peter Brook, Ken Jackson, Stuart Arnold
    • All non-trivial projects are systems.
    • Systems are complex, they contain complexity
    • Defining complex, complexity, complicated needs to be done with care.
    • Much misinformation about these terms circulates in the agile community, usually used to make some point about how hard it is to manage software development projects.
    • In fact there is a strong case that much of the complexity and complex aspects in software development are simply bad management of the requirements.
  • Forecasting and Simulating Software Development Projects: Effective Modeling of Kanban & Scrum Projects using Monte Carlo Simulation, Troy Magennis
    • When we hear Control in a non-deterministic paradigm is an illusion at best, delusion at worst, start with Troy's book to see that the conjecture is actually not true.
    • If the system you're working on is truly non-deterministic - that is, chaotic - you've got yourself a long road because you're on a Death March project. Run away as fast as you can.
  • Probability Methods for Cost Uncertainty Analysis: A Systems Engineering Perspective, Paul R. Garvey, CRC Press.
    • All project variables are probabilistic. Managing in the presence of the uncertainty created by the statistical processes that produce those probabilities is part of all project management.
    • This book speaks to the uncertainty in cost.
    • Uncertainty in schedule and technical performance are the other two variables.
    • Assuming deterministic variables or assuming you can't manage in the presence of uncertainty are both naive and ignore the basic mathematics of making decisions in the presence of uncertainty
  • Estimating Software-Intensive Systems: Projects, Products and Processes, Richard D. Stutzke, Addison Wesley.
    • A Software Intensive System is any system where software contributes essential influences to the design, construction, deployment, and evolution of the system as a whole. [IEEE 42010:2011]
    • Such systems are by their nature complex, but estimating the attributes of such systems is a critical success factor in all modern business and technology functions.
    • For anyone conjecturing that estimates can't be made for complex systems, this book is mandatory reading.
    • Estimates are hard, but can be done, and are done.
    • So when you hear that conjecture, ask how do you know those estimates can't be made? Where's your evidence that counters the work found in this book? Not anecdotes, opinions, or conjectures, but actual engineering assessment with the mathematics.
  • Effective Risk Management: Some Keys to Success, 2nd Edition, Edmund H. Conrow.
  • Project Risk Management: Processes, Techniques and Insight, Chris Chapman and Stephen Ward.
    • These two books are the core of Tim Lister's quote:
    • Risk Management is How Adults Manage Projects
    • Risk management involves estimating
  • The Economics of Iterative Software Development: Steering Toward Business Results, Walker Royce, Kurt Bittner, and Mike Perrow.
    • All software development is a MicroEconomics paradigm.
    • Microeconomics studies the behavior of individuals and small organizations in making decisions on the allocation of limited resources.
    • When you hear about conjectures for improving software development processes that violate Microeconomics, ignore them.
    • These limited resources are people, time, and money
  • Assessment and Control of Software Risks, Capers Jones.
    • Since all management is risk management, here's a book that clearly states how to manage in the presence of uncertainty.
  • Software Cost Estimating with COCOMO II, Barry Boehm et al.
    • This is the original basis of estimating with parametric processes
    • Numerous tools and processes are based on COCOMO
    • Parametric estimating makes use of Reference Classes, same as Monte Carlo Simulation
    • With a parametric model or a Reference Class model estimates of future outcomes can be made in every domain where we're not inventing new physics. This means there is no reason not to estimate for any software system found in any normal business environment.
    • This is not to say everyone can estimate. Nor should they. The excuse of we've never done this before really means you should go find someone who has. (A minimal sketch of the parametric idea appears after this list.)
  • Facts and Fallacies of Software Engineering, Robert L. Glass
    • There are many fallacies in the development of software
    • This book exposes most of them and provides corrective actions
  • How to Measure Anything: Finding the Value of Intangibles in Business, Douglas Hubbard
    • When we hear we can't measure that, read this book.
    • This book has a great description of Monte Carlo Simulation (used everywhere in our domains).
      • Monte Carlo started at Los Alamos during the bomb development process
      • MCS samples a large number of values from a Probability Distribution Function that represents the statistical processes being modeled.
      • MCS has some relatives; Bootstrapping is one, but it operates in a different manner, using past performance as a sample population.
  • Hard Facts, Dangerous Half-Truths & Total Nonsense, Jeffrey Pfeffer and Robert Sutton
    • This book was handed out at Ken Schwaber's "The State of Agile".
    • The key here is decisions are best made using facts. When facts aren't directly available, estimates of those facts are needed.
    • Making those estimates is part of every business decision, based on the Microeconomics of writing software for money.
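
As referenced in the COCOMO item above, here is a minimal sketch of the parametric idea. It uses the published Basic COCOMO (Boehm, 1981) coefficients rather than the full COCOMO II model with its scale factors and effort multipliers; the 32 KLOC input is an assumed example, not data from any of these books.

    # Basic COCOMO (Boehm, 1981): Effort = a * KLOC**b person-months,
    # Schedule = c * Effort**d calendar months. Published coefficients per mode.
    COEFFICIENTS = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode="organic"):
        a, b, c, d = COEFFICIENTS[mode]
        effort = a * kloc ** b        # person-months
        schedule = c * effort ** d    # calendar months
        return effort, schedule

    effort, months = basic_cocomo(32, "semi-detached")  # hypothetical 32 KLOC system
    print(f"effort ~ {effort:.0f} person-months, schedule ~ {months:.1f} months")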

So In The End

This list is the tip of the iceberg for access to the knowledge needed to manage in the presence of uncertainty while spending other people's money.

Related articles: Mr. Franklin's Advice; Want To Learn How To Estimate?; Two Books in the Spectrum of Software Development
Categories: Project Management

Is Agile Working for Your Project?

My column is up on projectmanagement.com. It’s called Is Agile Working for Your Project?

I hope you enjoy it.

Categories: Project Management

How Can I Help You Enjoy Your Job?

NOOP.NL - Jurgen Appelo - Mon, 05/18/2015 - 14:08

People from all over the world sign up to join the Happy Melly business network because–apparently–they think we’re doing a good job. That’s so awesome. It also increases the pressure on us to report on what we’re doing. And it makes me think harder: What can I do to help people enjoy their job?

The post How Can I Help You Enjoy Your Job? appeared first on NOOP.NL.

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Fri, 05/15/2015 - 21:33

Any process that does not have provisions for its own refinement will eventually fail or be abandoned

- W. R. Corcoran, PhD, P.E., The Phoenix Handbook: The Ultimate Event Evaluation Manual for Finding Profit Improvement in Adverse Events, Nuclear Safety Review Concepts, 19 October 1997.

Categories: Project Management

Complex, Complexity, Complicated

Herding Cats - Glen Alleman - Thu, 05/14/2015 - 18:47

In the agile community it is popular to use the terms complex, complexity, and complicated interchangeably, and many times wrongly. These terms are often overloaded with an agenda used to push a process or even a method.

First, some definitions:

  • Complex - consisting of many different and connected parts. Not easy to analyze or understand. Complicated or intricate. When a system or problem is considered complex, analytical approaches, like dividing it into parts to make the problem tractable, are not sufficient, because it is the interactions of the parts that make the system complex, and without these interconnections the system no longer functions.
  • Complex System - a functional whole, consisting of interdependent and variable parts. Unlike conventional systems, the parts need not have fixed relationships, fixed behaviors or fixed quantities, and their individual functions may be undefined in traditional terms.
  • Complicated - containing a number of hidden parts, which must be revealed separately because they do not interact. Mutual interaction of the components creates nonlinear behaviors of the system. In principle all systems are complex. The number of parts or components is irrelevant in the definition of complexity. There can be complexity - nonlinear behaviour - in small systems or large systems.
  • Complexity - there is no standard definition of complexity. It is a view of systems that suggests simple causes result in complex effects. Complexity as a term is generally used to characterize a system with many parts whose interactions with each other occur in multiple ways. Complexity can occur in a variety of forms:
    • Complex behaviour
    • Complex mechanisms
    • Complex situations
    • Complex systems
    • Complex data
  • Complexity Theory - states that critically interacting components self-organize to form potentially evolving structures exhibiting a hierarchy of emergent system properties. This theory takes the view that systems are best regarded as wholes, and studied as such, rejecting the traditional emphasis on simplification and reduction as inadequate techniques on which to base this sort of scientific work.

One more item we need is the types of Complexity

  • Type 1 - fixed systems, where the structure doesn't change as a function of time.
  • Type 2 - systems where time causes changes. This can be repetitive cycles or change with time.
  • Type 3 - moves beyond repetitive systems into organic where change is extensive and non-cyclic in nature.
  • Type 4 - self-organizing systems, where we can combine the internal constraints of closed systems, like machines, with the creative evolution of open systems, like people.

And Now To The Point

When we hear complex, complexity, complex systems, or complex adaptive systems, pause to ask: what kind of complex are you talking about? What Type of complex system? To what system are you applying the term complex? Have you classified that system in a way that actually matches a real system?

It is common for the terms complex, complicated, and complexity to be interchanged, and for software development to be classified or mis-classified as one, both, or all three. It is also common to toss around these terms with no actual understanding of their meaning or application.

We need to move beyond buzz words. Words like Systems Thinking. Building software is part of a system. There are interacting parts that, when assembled, produce an outcome - hopefully a desired outcome. In the case of software the interacting parts are more than just the parts. Software has emergent properties: a Type 4 system, built from Type 1, 2, and 3 systems. With changes in time and uncertainty, modeling these systems requires stochastic processes. These processes depend on estimating behaviors as a starting point.

The understanding that software development is an uncertain (stochastic) process is well known, starting in the 1980s [1] with COCOMO. Later models, like the Cone of Uncertainty, made it clear that these uncertainties themselves evolve with time. The current predictive models based on stochastic processes include Monte Carlo Simulation of networks of activities, Real Options, and Bayesian Networks. Each is directly applicable to modeling software development projects.

[1] Software Engineering Economics, Barry Boehm, Prentice-Hall, 1981.
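
A small sketch of that time-evolving uncertainty, with assumed numbers only: one Monte Carlo sample of total duration is drawn from a triangular PDF whose spread narrows as the project progresses, in the spirit of the Cone of Uncertainty.

    import random

    def duration_sample(point_estimate_days, fraction_complete):
        """One sample of total duration; the PDF's spread shrinks with progress."""
        spread = 0.6 * (1.0 - fraction_complete) + 0.1   # assumed: ~70% early, 10% late
        low = point_estimate_days * (1 - spread)
        high = point_estimate_days * (1 + spread)
        return random.triangular(low, high, point_estimate_days)

    random.seed(7)
    for done in (0.0, 0.5, 0.9):
        samples = sorted(duration_sample(100.0, done) for _ in range(10_000))
        p10, p90 = samples[1_000], samples[9_000]
        print(f"{done:.0%} complete: 80% of samples fall between {p10:.0f} and {p90:.0f} days")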

Related articles: Decision Analysis and Software Project Management; Making Decisions in the Presence of Uncertainty; Some More Background on Probability, Needed for Estimating; Approximating for Improved Understanding; The Microeconomics of a Project Driven Organization; How to Avoid the "Yesterday's Weather" Estimating Problem; Hope is not a Strategy
Categories: Project Management

Monte Carlo Simulation of Project Performance

Herding Cats - Glen Alleman - Thu, 05/14/2015 - 17:30

Project work is random. Most everything in the world is random: the weather, commuter traffic, the productivity of writing and testing code. Few things actually take as long as they are planned. Cost is less random, but there are variances in the cost of labor and the availability of labor. Mechanical devices have variances as well.

The exact fit of a water pump on a Toyota Camry is not the same for each pump. There is a tolerance in the mounting holes and in the volume of water pumped. This is a variance in technical performance.

Managing in the presence of these uncertainties is part of good project management. But there are two distinct paradigms of managing in the presence of these uncertainties.

  1. We have empirical data of the variances. We have samples of the hole positions and sizes of the water pump mounting plate for the last 10,000 pumps that were installed. We have samples of how long it took to write a piece of code and the attributes of the code that are correlated to that duration. We have empirical measures.
  2. We have a theoretical model of the water pump in the form of a 3D CAD model, with the materials modeled for expansion, drilling errors of the holes, and other static and dynamic variances. We model the duration of work using a Probability Distribution Function and a Three Point estimate of the Most Likely, Pessimistic, and Optimistic durations. These can be derived from past performance, but we don't have enough actual data to produce the PDF with a low enough Sample Error for our needs.

In the first case we have empirical data. In the second case we don't. There are two approaches to modeling what the system will do in terms of cost and schedule outcomes.

Bootstrapping the Empirical Data

With samples of past performance and the proper statistical assessment of those samples, we can re-sample them to produce a model of future performance. This bootstrap resampling shares the principle of the second method - Monte Carlo Simulation - but with several important differences.

  • The researcher - and we are researching what the possible outcomes might be from our model - does not know, nor have any control of, the Probability Distribution Function that generated the past samples. You take what you got.
  • As well, we don't have any understanding of why those samples appear as they do. They're just there. We get what we get.
  • This last piece is critical because it prevents us from defining what performance must be in place to meet some future goal. We can't tell what performance we need because we have no model of the needed performance, just samples from the past.
  • This results from the statistical condition that there is a PDF for the process that is unobserved. All we have are a few samples of this process.
  • With these few samples, we're going to resample them to produce a modeled outcome. This resampling locks in any behavior of the future using the samples from the past, which may or may not actually represent the true underlying behavior. This may be all we can do because we don't have any theoretical model of the process.

This bootstrapping method is quick, easy, and produces a quick and easy result. But it has issues that must be acknowledged.

  • There is a fundamental assumption that the past empirical samples represent the future. That is, the samples contained in the bootstrapped list and their resampling are also contained in all the future samples.
  • Said in a more formal way:
    • If the sample of data we have from the past is a reasonable representation of the underlying population of all samples from the work process, then the distribution of parameter estimates produced from the bootstrap model on a series of resampled data sets will provide a good approximation of the distribution of that statistic in the population.
    • With this sample data and its parameters (statistical moments) we can make a good approximation of the future.
  • There are some important statistical assumptions, though, that must be considered, starting with the assumption that future samples are statistically identical to the past samples:
    • Nothing is going to change in the future
    • The past and the future are identical statistically
    • In the project domain that is very unlikely
  • With all these conditions, for a small project with few if any interdependencies and a static work process with little variance, bootstrapping is a nice quick and dirty approach to forecasting (estimating the future) based on the past. A minimal sketch follows this list.
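
Here is that minimal bootstrap sketch, assuming the hypothetical past task durations below (in days) are a fair sample of the work process. It resamples the past, with replacement, to forecast the total duration of 20 similar remaining tasks - and it bakes in exactly the assumption called out above: the future is statistically identical to the past.

    import random

    past_durations = [3, 5, 2, 8, 4, 6, 3, 7, 5, 4]   # hypothetical empirical samples (days)
    remaining_tasks = 20
    trials = 10_000

    random.seed(42)
    totals = sorted(
        sum(random.choices(past_durations, k=remaining_tasks))   # resample with replacement
        for _ in range(trials)
    )
    print("P50 total duration:", totals[trials // 2], "days")
    print("P80 total duration:", totals[int(trials * 0.8)], "days")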

Monte Carlo Simulation

This approach is more general and removes many of the restrictions on the statistical confidence of bootstrapping.

Just as a reminder, in principle both the parametric and the non-parametric bootstrap are special cases of Monte Carlo simulation used for a very specific purpose: estimating some characteristics of the sampling distribution. But like all principles, in practice there are larger differences when modeling project behaviors.

In the more general approach of Monte Carlo Simulation, the algorithm repeatedly creates random data in some way, performs some modeling with that random data, and collects some result. For example:

  • The duration of a set of independent tasks.
  • The probabilistic completion date of a series of tasks connected in a network (schedule), each with a different Probability Distribution Function evolving as the project moves into the future.
  • A probabilistic cost correlated with the probabilistic schedule model. This is called the Joint Confidence Level. Both cost and schedule are random variables with time-evolving changes in their respective PDFs.

In practice, when we hear Monte Carlo simulation we are talking about a theoretical investigation, e.g. creating random data with no empirical content - or from reference classes - used to investigate whether an estimator can represent known characteristics of this random data, while the (parametric) bootstrap refers to an empirical estimation and is not necessarily a model of the underlying processes, just a small sample of observations independent from the actual processes that generated that data.

The key advantage of MCS is we don't necessarily need  past empirical data. MCS can be used to advantage if we do, but we don't need it for the Monte Carlo Simulation algorithm to work.

This approach could be used to estimate some outcome, as in the bootstrap, but also to theoretically investigate some general characteristic of a statistical estimator (cost, schedule, technical performance) which is difficult to derive from empirical data.

MCS removes the roadblock heard in many critiques of estimating - we don't have any past data on which to estimate. No problem: build a model of the work and the dependencies between that work, assign statistical parameters to the individual or collected PDFs, and run the MCS to see what comes out. A minimal sketch of this is shown below.
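
Here is that minimal sketch, assuming a hypothetical network of four serial tasks, each modeled with a three-point (optimistic / most likely / pessimistic) duration in days and a triangular PDF. No past empirical data is needed - only the modeled PDFs.

    import random

    tasks = [            # (optimistic, most likely, pessimistic) durations - assumed values
        (5, 8, 15),
        (3, 5, 10),
        (8, 12, 25),
        (2, 4, 9),
    ]
    trials = 10_000

    random.seed(42)
    totals = sorted(
        sum(random.triangular(low, high, mode) for (low, mode, high) in tasks)
        for _ in range(trials)
    )
    print(f"P50 finish: {totals[trials // 2]:.0f} days")
    print(f"P80 finish: {totals[int(trials * 0.8)]:.0f} days")

Changing any task's PDF - say, to reflect the performance needed to hit a date - and re-running the simulation shows whether the plan still closes, which is the "tune the PDFs" point made below.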

This approach has several critical advantages:

  • The first is a restatement - we don't need empirical data, although it will add value to the modeling process.
    • This is the primary purpose of Reference Classes.
    • They are the raw material for defining possible future behaviors from the past.
  • We can make a judgement of what the future will be like, or most importantly what the future MUST be like to meet our goals, run the simulation, and determine if our planned work will produce the desired result.

So Here's the Killer Difference

Bootstrapping models make several key assumptions, which may not be true in general. So they must be tested before accepting any of the outcomes.

  • The future is like the past.
  • The statistical parameters are static - they don't evolve with time. That is, the future is like the past, an unlikely prospect on any non-trivial project.
  • The sampled data is identical to the population data both in the past and in the future.

Monte Carlo Simulation models provide key value that bootstrapping can't.

  • Different Probability Distribution Functions can be assigned to work as it progresses through time.
  • The shape of that PDF can be defined from past performance, or defined from the needed performance.

The critical difference between Bootstrapping and Monte Carlo Simulation is MCS can show what the future performance has to be to stay on schedule (within variance), on cost, and have the technical performance meet the needs of the stakeholder.

Bootstrapping can only show what the future will be like if it is like the past, not what it must be like. In Bootstrapping this future MUST be like the past. In MCS we can tune the PDFs to show what performance has to be to manage to that plan. Bootstrapping is reporting yesterday's weather as tomorrow's weather - just like Steve Martin in LA Story. If tomorrow's weather turns out not to be like yesterday's weather, you're gonna get wet.

MCS can forecast tomorrow's weather by assigning PDFs to future activities that are different from past activities; then we can make any needed changes in that future model to alter the weather to meet our needs. This is in fact how weather forecasts are made - with much more sophisticated models, of course - here at the National Center for Atmospheric Research in Boulder, CO.

This forecasting (estimating the future state) of possible outcomes, and the alteration of those outcomes through management actions to change dependencies, add or remove resources, provide alternatives to the plan (on-ramps and off-ramps of technology, for example), buy down risk, apply management reserve, assess the impacts of rescoping the project, etc., is what project management is all about.

Bootstrapping is necessary but far from sufficient for any non-trivial project to show up on or before the need date (with schedule reserve), at or below the budgeted cost (with cost reserve), and have the product or service provide the needed capabilities (technical performance reserve).

Here's an example of that probabilistic forecast of project performance from an MCS tool (Risky Project). This picture shows the probability for cost, finish date, and duration. But it is built on time-evolving PDFs assigned to each activity in a network of dependent tasks, which models the work stream needed to complete as planned.

When that future work stream is changed to meet new requirements, unfavorable past performance and the needed corrective actions, or changes in any or all of the underlying random variables, the MCS can show us the expected impact on key parameters of the project so management intervention can take place - since Project Management is a verb.

[Figure: Monte Carlo Simulation output from Risky Project showing the probability distributions for cost, finish date, and duration]

The connection between the Bootstrap and Monte Carlo simulation of a statistic is simple.

Both are based on repetitive sampling and then direct examination of the results.

But there are significant differences between the methods (hence the difference in names and algorithms). Bootstrapping uses the original, initial sample as the population from which to resample. Monte Carlo Simulation uses a data generation process, with known values of the parameters of the Probability Distribution Function. The common algorithm for MCS is Lurie-Goldberg. Monte Carlo is used to test that the results of the estimators produce the desired outcomes on the project and, if not, to allow the modeler and her management to change those estimators and then manage to the changed plan.

Bootstrap can be used to estimate the variability of a statistic and the shape of its sampling distribution from past data. Assuming the future is like the past, it can make forecasts of throughput, completion, and other project variables.

In the end the primary difference (and again the reason for the name difference) is that Bootstrapping is based on unknown distributions. Sampling and assessing the shape of the distribution in Bootstrapping adds no value to the outcomes. Monte Carlo is based on known or defined distributions, usually from Reference Classes.

Related articles: Do The Math; Complex, Complexity, Complicated; The Fallacy of the Planning Fallacy
Categories: Project Management

Just Because You Say Words, It Doesn't Make Them True

Herding Cats - Glen Alleman - Wed, 05/13/2015 - 22:33

When we hear words about any topic - my favorite of course is all things project management - it doesn't make them true.

  • Earned Value is a bad idea in IT projects because it doesn't measure business value
    • Turns out this is actually true. The confusion was with the word VALUE.
    • In Earned Value, Value is Budgeted Cost of Work Performed (BCWP) in the DOD vocabulary, or Earned Value in the PMI vocabulary. (A small numeric sketch follows this list.)
  • Planning is a waste
    • Planning is a Strategy for the successful completion of the project
    • It'd be illogical not to have a Strategy for the success of the project
    • So we need a plan.
    • As Ben Franklin knew "Failure to Plan, means we're Planning to Fail"
  • The Backlog is a waste and grooming the Backlog is a bigger waste
    • The backlog is a list of planned work to produce the value of the project
    • The backlog can change and this is the "change control paradigm" for agile.
    • Change control is a critical processes for all non-trivial projects
    • Without change control we don't have a stable description of what "Done" looks like for the project. Without having some sense of "Done" we're on a Death March project
  • Unit Testing is a waste
    • Unit testing is the first step of Quality Assurance and Independent Verification and Validation
    • Without UT, even in the presence of a QA and IV&V process, it will be "garbage in, garbage out" for the software.
    • Assuming the developers can do the testing is naive at best on any non-trivial project
  • Decisions can be made in the presence of uncertainty without estimates
    • This violates the principles of Microeconomics
    • Microeconomics is a branch of economics that studies the behavior of individuals and small impacting organizations in making decisions on the allocation of limited resources
    • All projects have uncertainty - reducible and irreducible.
    • This uncertainty creates risk. This risk impacts the behaviors of the project work.
    • Making decisions - choices - in the presence of these uncertainties and resulting risks requires assessing some behavior that is probabilistic.
    • This probabilistic behavior is driven by underlying statistical processes.
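
As noted in the Earned Value item above, here is a small numeric sketch with hypothetical figures: "Value" here is Budgeted Cost of Work Performed, not business value, and the standard indices fall out directly.

    bcws = 100_000   # Budgeted Cost of Work Scheduled (planned value to date)
    bcwp =  80_000   # Budgeted Cost of Work Performed (earned value to date)
    acwp =  95_000   # Actual Cost of Work Performed (actual cost to date)

    cpi = bcwp / acwp   # cost performance index: value earned per dollar spent
    spi = bcwp / bcws   # schedule performance index: value earned vs. value planned

    print(f"CPI = {cpi:.2f}  (below 1.0 means over cost)")
    print(f"SPI = {spi:.2f}  (below 1.0 means behind schedule)")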

So when we hear some phrase, idea, or conjecture - ask for evidence. Ask for the domain. Ask for examples. If you hear we're just exploring, ask who's paying for that. Because it is likely those words are unsubstantiated conjecture from personal experience, and not likely very useful outside that personal experience.

Related articles Root Cause Analysis The Reason We Plan, Schedule, Measure, and Correct The Flaw of Empirical Data Used to Make Decisions About the Future Want To Learn How To Estimate? There is No Such Thing as Free Estimates
Categories: Project Management

New ''Understanding Software Projects'' Lectures Posted

10x Software Development - Steve McConnell - Wed, 05/13/2015 - 18:25

Two new lectures have been posted in my Understanding Software Projects lecture series at http://cxlearn.com. All the lectures that have been posted are still free (though this won't last forever). Lectures posted so far include: 

0.0 Understanding Software Projects - Intro

     0.1 Introduction - My Background (new this week)

     0.2 Reading the News

1.0 The Software Lifecycle Model - Intro

     1.1 Lifecycle Model - Defect Removal (new this week)

2.0 Software Size

Check out the lectures at http://cxlearn.com!

Understanding Software Projects - Steve McConnell

The Fallacy of the Planning Fallacy

Herding Cats - Glen Alleman - Wed, 05/13/2015 - 15:06

The Planning Fallacy is well documented in many domains. Bent Flyvbjerg has documented this issue in one of his books, Megaprojects and Risk. But the Planning Fallacy is more complex than just the optimism bias. Many of the root causes for cost overruns are based in the politics of large projects.

The  planning fallacy is ...

...a phenomenon in which predictions about how much time will be needed to complete a future task display an optimistic bias (underestimate the time needed). This phenomenon occurs regardless of the individual's knowledge that past tasks of a similar nature have taken longer to complete than generally planned. The bias only affects predictions about one's own tasks; when outside observers predict task completion times, they show a pessimistic bias, overestimating the time needed. The planning fallacy requires that predictions of current tasks' completion times are more optimistic than the beliefs about past completion times for similar projects and that predictions of the current tasks' completion times are more optimistic than the actual time needed to complete the tasks.

The critical notion here is that the bias applies to one's own estimates.

With all that said, there still is a large body of evidence that estimating is still a major problem.† 

I have a colleague who is the former Cost Analysis Director of NASA. He has three reasons projects get in cost, schedule, and technical trouble:

  1. We couldn't know - we're working in a domain where discovery is actually the case. We're inventing new physics, discovering new drugs that have never been discovered before. We're doing unprecedented development. Most people using the term "we're exploring" don't likely know what they're doing, and those paying are paying for that exploring. Ask yourself if you're in the education business or actually the research and development business.
  2. We didn't know - we could have known, but we just didn't want to. We couldn't afford to know. We didn't have time to know. We were incapable of knowing because we're outside our domain. Would you hire someone who didn't do his homework when it comes to providing the solution you're paying for? Probably not. Then why accept we didn't know as an excuse?
  3. We don't want to know - we could have known, but if we knew that'd be information that would cause this project to be canceled.

The Planning Fallacy

Daniel Kahneman (Princeton) and Amos Tversky (Stanford) describe it as "the tendency to underestimate the time, costs, and risks of future actions and overestimate the benefit of those actions". The results are time and cost overruns as well as benefit shortfalls. The concept is not new: they coined the term in the 1970s and much research has taken place since; see the Resources below.

So the challenge is to not fall victim to this optimism bias and become a statistic in the Planning Fallacy.

How do we do that?

Here's our experience:

  • Start with a credible systems architecture with the topology of the delivered system:
    • By credible I mean not a paper drawing on the wall, but a SysML description of the system and its components. SysML tools can be had for free, along with commercial products.
    • Defining the interactions between the components is the critical issue to discover the location for optimism. The Big Visible Chart from sysML needs to hang on the wall for all to see where these connections take place.
    • Without this BVC, the optimism is It's not that complicated; what could possibly be the issue with our estimates?
    • It's the interfaces where the project goes bad. Self-contained components have problems for sure, but when connected to other components this becomes a system of systems and the result is an N2 problem.
  • Look for reference classes for the components
    • Has anyone here done this before?
    • No? Do we know anyone who knows anyone who's done this before?
    • Is there no system like this system in the world?
    • If the answer to that is NO, then we need another approach - we're inventing new physics and this project is actually a research project - act appropriately.
  • Do we have any experience doing this work in the past?
    • No? Then why would we get hired to work on this project?
    • Yes, but we've failed in the past?
      • No problem - did we learn anything?
      • Did we find the Root Cause of the past performance problems and take the corrective actions?
      • Did we follow a known process (Apollo) in that Root Cause Analysis and the corrective actions?
      • No? Then you're being optimistic that the problems won't come back.
  • Do we have any sense of the Measures of the system that will drive cost?
    • Effectiveness - the operational measures of success that are closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions.
    • Performance - measures that characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
    • Key Performance Parameters - represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.
    • Technical Performance Measures - determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal
    • All the ...ilities
    • Without understanding these we have no real understanding of where the problems are going to be and the natural optimism comes out.
  • Do we know what technical and programmatic risks are going to be encountered in this project?
    • Do we have a risk register?
    • Do we know both the reducible and irreducible risks to the success of the project?
    • Do we have mitigation plans for the reducible risks?
    • For reducible risks without mitigation plans, do we have Management Reserve?
    • For irreducible risks do we have cost and schedule margin?
  • Do we have a Plan showing the increasing maturing of the deliverables of the project?
    • Do we know what Accomplishments must be performed to increase the maturity of the deliverable?
    • Do we know the Criteria for each Accomplishment, so we can measure the actual progress to plan?
    • Have we arranged the work to produce the deliverables in a logical network - or some other method like Kanban - that shows the dependencies between the work elements and the deliverables?
    • This notion of dependencies is very underrated.
      • The Kanban paradigm assumes this up front.
      • Verifying there are actually NO dependencies is critical to all the processes based on having NO dependencies.
      • It seems rare that those verifications actually take place.
      • This is an Optimism Bias in the agile software development world.
  • Do we have a credible, statistically adjusted, cost and schedule model for assessing the impact of any changes?
    • I'm confident our costs will not be larger than our revenue - sure, right. Show me your probabilistic model.
    • No model? We're likely being optimistic and don't even know it.
    • Show Me The Numbers. (A minimal sketch of such a model follows this list.)
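
A minimal sketch of such a statistically adjusted cost model, with assumed numbers only: roll up probabilistic element costs with a Monte Carlo run, then size the cost margin as the gap between the P80 outcome and the sum of most likely values.

    import random

    elements = [      # (optimistic, most likely, pessimistic) cost in $K - assumed values
        (80, 100, 160),
        (40,  60, 120),
        (20,  30,  70),
    ]
    trials = 10_000

    random.seed(3)
    totals = sorted(
        sum(random.triangular(low, high, mode) for (low, mode, high) in elements)
        for _ in range(trials)
    )
    most_likely_total = sum(mode for (_, mode, _) in elements)
    p80 = totals[int(trials * 0.8)]
    print(f"most likely total: {most_likely_total} $K")
    print(f"P80 total: {p80:.0f} $K")
    print(f"cost margin to carry at P80: {p80 - most_likely_total:.0f} $K")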

So With These And Others...We can remove the fallacy of the Planning Fallacy.

This doesn't mean our project will be successful. Nothing can guarantee that. But the Probability of Success will be increased.

In the end we MUST know the Mission we are trying to accomplish and the units of measure of that Mission in terms meaningful to the decision makers. Without that we can't know what DONE looks like, and only our optimism will carry us along until it is too late to turn back.


Anyone using the Planning Fallacy as the excuse for project failure - not planning, not estimating, not actually doing their job as a project and business manager - will likely succeed in the quest for project failure and get what they deserve: Late, Over Budget, and a gadget that doesn't work as needed.

† Please note that just because estimating is a problem in all domains, that's NO reason not to estimate. Likewise, planning is a problem, but that's no reason NOT to plan. Any suggestion that estimating or planning is not needed in the presence of an uncertain future - as it is on all projects - willfully ignores the principles of Microeconomics: making choices in the presence of uncertainty based on opportunity cost. To suggest otherwise confirms this ignorance.

Resources

Here is some background on the Planning Fallacy problem, from the anchoring and adjustment point of view, that I've used over the years to inform our estimating processes for software-intensive systems. After reading through these I hope you come to a better understanding of many of the misconceptions about estimating and the fallacies of how it is done in practice.

Interestingly, there is a poster on Twitter in the #NoEstimates thread who objects when people post links to their own work or the work of others. Please do not fall prey to the notion that everyone has an equally informed opinion, unless you yourself have done all the research needed to cover the foundations of the topic. Outside resources are the very lifeblood of informed experience and of the opinions that come from that experience.

  1. Kahneman, Daniel; Tversky, Amos (1979). "Intuitive prediction: biases and corrective procedures". TIMS Studies in Management Science 12: 313-327.
  2.  "Exploring the Planning Fallacy" (PDF). Journal of Personality and Social Psychology. 1994. Retrieved 7 November 2014.
  3. Estimating Software Project Effort Using Analogies, 
  4. Cost Estimation of Software Intensive Projects: A Survey of Current Practices
  5. "If you don't want to be late, enumerate: Unpacking Reduces the Planning Fallacy". Journal of Experimental Social Psychology. 15 October 2003. Retrieved 7 November 2014.
  6. A Causal Model for Software Cost Estimating Error, Albert L. Lederer and Jayesh Prasad, IEEE Transactions On Software Engineering, Vol. 24, No. 2, February 1998.
  7. Assuring Software Cost Estimates? Is It An Oxymoron? 2013 46th Hawaii International Conference on System Sciences.
  8. A Framework for the Analysis of Software Cost Estimating Accuracy, ISESE'06, September 21-22, 2006, Rio de Janeiro, Brazil.
  9. "Overcoming the Planning Fallacy Through Willpower". European Journal of Social Psychology. November 2000. Retrieved 22 November 2014.
  10. Buehler, Roger; Griffin, Dale, & Ross, Michael (2002). "Inside the planning fallacy: The causes and consequences of optimistic time predictions". In Thomas Gilovich, Dale Griffin, & Daniel Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment, pp. 250-270. Cambridge, UK: Cambridge University Press.
  11. Buehler, Roger; Dale Griffin; Michael Ross (1995). "It's about time: Optimistic predictions in work and love". European Review of Social Psychology (American Psychological Association) 6: 1-32. doi:10.1080/14792779343000112.
  12. Lovallo, Dan; Daniel Kahneman (July 2003). "Delusions of Success: How Optimism Undermines Executives' Decisions". Harvard Business Review: 56-63.
  13. Buehler, Roger; Dale Griffin; Michael Ross (1994). "Exploring the "planning fallacy": Why people underestimate their task completion times". Journal of Personality and Social Psychology (American Psychological Association) 67 (3): 366-381. doi:10.1037/0022-3514.67.3.366.
  14. Buehler, Roger; Dale Griffin; Johanna Peetz (2010). "The Planning Fallacy: Cognitive, Motivational, and Social Origins" (PDF). Advances in Experimental Social Psychology (Academic Press) 43: 9.
  15. "Hourglass Is Half Full or Half Empty: Temporal Framing and the Group Planning Fallacy". Group Dynamics: Theory, Research, and Practice. September 2005. Retrieved 22 November 2014.
  16. Stephanie P. Pezzo, Mark V. Pezzo, and Eric R. Stone. "The social implications of planning: How public predictions bias future plans". Journal of Experimental Social Psychology, 2006, 221-227.
  17. "Underestimating the Duration of Future Events: Memory Incorrectly Used or Memory Bias?". American Psychological Association. September 2005. Retrieved 21 November 2014.
  18. "Focalism: A source of durability bias in affective forecasting.". American Psychological Association. May 2000. Retrieved 21 November 2014.
  19. Jones, Larry R.; Euske, Kenneth J. (October 1991). "Strategic misrepresentation in budgeting". Journal of Public Administration Research and Theory (Oxford University Press) 1 (4): 437-460. Retrieved 11 March 2013.
  20. Taleb, Nassim (2012-11-27). Antifragile: Things That Gain from Disorder. ISBN 9781400067824.
  21. "Allocating time to future tasks: The effect of task segmentation on planning fallacy bias". Memory & Cognition. June 2008. Retrieved 7 November 2014.
  22. "No Light at the End of his Tunnel: Boston's Central Artery/Third Harbor Tunnel Project". Project on Government Oversight. 1 February 1995. Retrieved 7 November 2014.
  23. "Denver International Airport" (PDF). United States General Accounting Office. September 1995. Retrieved 7 November 2014.
  24. Lev Virine and Michael Trumper. Project Decisions: The Art and Science, Vienna, VA: Management Concepts, 2008. ISBN 978-1-56726-217-9 
    • Michael and Lev provide the Risk Management tool we use - Risky Project.
    • Risky Project is a Monte Carlo Simulation tool for reducible and irreducible risk, driven by probability distribution functions of the uncertainties in the project.
    • Which, by the way, is an actual MCS tool, not one based on bootstrapping a small number of past samples many times over.
  25. Overcoming the planning fallacy through willpower: effects of implementation intentions on actual and predicted task-completion times.

  26. Anchoring and Adjustment in Software Estimation, Jorge Aranda and Steve Easterbrook, ESEC-FSE'05, September 5-9, 2005, Lisbon, Portugal.
  27. Anchoring and Adjustment in Software Estimation, Jorge Aranda, PhD Thesis, University of Toronto, 2005.
  28. Anchoring and Adjustment in Software Project Management: An Experimental Investigation, Timothy P. Costello, Naval Postgraduate School, September 1992.
  29. Anchoring Effect, Thomas Mussweiler, Birte Englich, and Fritz Strack
  30. Anchoring, Non-Standard Preferences: How We Choose by Comparing with a Nearby Reference Point.
  31. Reference points and redistributive preferences: Experimental evidence, Jimmy Charité, Raymond Fisman, and Ilyana Kuziemko
  32. Anchoring and Adjustment, (YouTube), Daniel Kahneman. This anchoring and adjustment discussion is critical to how we ask the question how much, how big, and when.
  33. Anchoring unbound, Nicholas Epley and Thomas Gilovich 

  34. Assessing Ranges and Possibilities, Decision Analysis for the Professional, Chapter 12, Strategic Decision and Risk Management, Stanford Certificate Program. 
    • This book, by the way, should be mandatory reading for anyone suggesting that decisions can be made in the absence of estimates.
    • They can't be, and don't accept claims that they can, because they can't.
  35. Attention and Effort, Daniel Kahneman, Prentice Hall, The Hebrew University of Jerusalem, 1973.
  36. Availability: A Heuristic for Judging Frequency and Probability, Amos Tversky and Daniel Kahneman.
  37. On the Reality of Cognitive Illusions, Daniel Kahneman, Princeton University, Amos Tversky, Stanford University.

  38. Efficacy of Bias Awareness in Debiasing Oil and Gas Judgments, Matthew B. Welsh, Steve H. Begg, and Reidar B. Bratvold.
  39. The framing effect and risky decisions: Examining cognitive functions with fMRI, Cleotilde Gonzalez, Jason Dana, Hideya Koshino, and Marcel Just, Journal of Economic Psychology, 26 (2005), 1-20.

  40. Discussion Note: Review of Tversky & Kahneman (1974): Judgment under uncertainty: heuristics and biases, Micheal Axelsen, UQ Business School, The University of Queensland, Brisbane, Australia.
  41. The Anchoring-and-Adjustment Heuristic, Why the Adjustments Are Insufficient, Nicholas Epley and Thomas Gilovich.

  42. Judgment under Uncertainty: Heuristics and Biases, Amos Tversky; Daniel Kahneman, Science, New Series, Vol. 185, No. 4157 (Sep. 27, 1974), pp. 1124-1131.

This should be enough to get you started and to set the stage for rejecting any half-baked ideas about anchoring and adjustment, planning fallacies, the supposed "no need to estimate," and the collection of other cockamamie ideas floating around the web on how to make credible decisions with other people's money.

Related articles The Reason We Plan, Schedule, Measure, and Correct Herding Cats: Five Estimating Pathologies and Their Corrective Actions Tunnel to Nowhere Root Cause Analysis The Flaw of Empirical Data Used to Make Decisions About the Future
Categories: Project Management

Quote of the Month May 2015

From the Editor of Methods & Tools - Wed, 05/13/2015 - 14:45
Agile methods ask practitioners to think, and frankly, that's a hard sell. It is far more comfortable to simply follow what rules are given and claim you're "doing it by the book." It's easy, it's safe from ridicule or recrimination; you won't get fired for it. While we might publicly decry the narrow confines of a set of rules, there is safety and comfort there. But of course, to be agile – or effective – isn't about comfort […]. And if you only pick a handful of rules that you feel ...