
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Project Management

Give the Power to the Programmers

From the Editor of Methods & Tools - Tue, 04/14/2015 - 14:22
Delegation of power to programmers is a smart idea. It is provably and measurably smarter than leaving the power with managers to design the developer's work environment, and with IT architects to design the technology that we should program. Devolving the power to create a better working-environment, and to design the technology for our stakeholders, […]

Cui Bono

Herding Cats - Glen Alleman - Tue, 04/14/2015 - 00:57

When we hear a suggestion about a process that inverts the normal process based on a governance framework - say the Microeconomics of Software Development - we need to ask: who benefits? How would that suggestion be tangibly beneficial to the party whose position is now inverted?

Estimates, for example, are for the business. Why would the business no longer want an estimate of the cost, schedule, or technical performance of the provided capabilities?

In the world of spending money to produce value, the one who benefits should be - must be - the one paying for that value, and who therefore has a compelling interest in the information needed to make decisions about how the money is spent.

When that relationship between paying and benefiting is inverted, the path to Cui Bono is inverted as well.

In the end, follow the money must be the basis for assessing the applicability of any suggestion. If it is suggested that decision making can be done in the absence of estimating the impacts of those decisions, ask: who benefits? If it's not those paying for the value, then Cui Bono no longer applies.

Categories: Project Management

Best Creativity Books

NOOP.NL - Jurgen Appelo - Mon, 04/13/2015 - 20:34

After my lists of mindfulness books and happiness books, here you can find the 20 Best Creativity Books in the World.

This list is created from the books on GoodReads tagged with “creativity”, sorted using an algorithm that favors number of reviews, average rating, and recent availability.
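The ranking approach described above can be sketched in a few lines. The weights and the recency penalty below are illustrative assumptions, not the algorithm the post actually used:

```python
# Hypothetical scoring sketch: the post doesn't publish its exact formula,
# so the log-scaling, weights, and recency term here are assumptions.
import math

def score(num_reviews, avg_rating, years_since_publication):
    # Favor many reviews (log-scaled) and a high average rating,
    # and penalize older titles slightly.
    return math.log10(num_reviews + 1) * avg_rating / (1 + 0.1 * years_since_publication)

books = [
    ("Book A", 12000, 4.1, 8),
    ("Book B", 900, 4.5, 1),
    ("Book C", 45000, 3.8, 15),
]
ranked = sorted(books, key=lambda b: score(b[1], b[2], b[3]), reverse=True)
for title, *_ in ranked:
    print(title)
```

With these made-up weights, a recent, well-rated book with fewer reviews can outrank an older bestseller, which matches the post's stated preference for recency.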

The post Best Creativity Books appeared first on NOOP.NL.

Categories: Project Management

The Flaw of Averages and Not Estimating

Herding Cats - Glen Alleman - Mon, 04/13/2015 - 16:04

There is a popular notion in the #NoEstimates paradigm that empirical data is the basis of forecasting the future performance of a development project. In principle this is true, but the concept is incomplete in the way it is used. Let's start with the data source used for this conjecture.

There are 12 samples in the example used by #NoEstimates - in this case, stickies completed per week. From this time series an average is calculated and projected into the future. This is the empirical data used to estimate in the #NoEstimates paradigm. The average is 18.1667, or just about 18 stickies per week.


But we all have read, or should have read, Sam Savage's The Flaw of Averages. This is a very nice populist book - by populist I mean an easily accessible text with little or no mathematics in it, although Savage's own work with his tool set is highly mathematical.

There is a simple set of tools that can be applied to time series analysis, using past performance to forecast the future performance of the system that created the time series. The tool is R, and it is free for all platforms.

Here's the R code for performing a statistically sound forecast, estimating the possible range of values the past empirical stickies data can take on in the future.

Put the time series in a plain text (CSV) file named Book1.txt, then in R:

> library(forecast)               # install.packages("forecast") if needed
> Book1 <- read.csv("Book1.txt")  # read the raw weekly samples
> SPTS <- ts(Book1)               # apply the time series function in R to convert the data to a time series
> SPFIT <- arima(SPTS)            # apply the simple ARIMA function to the time series
> SPFCST <- forecast(SPFIT)       # build a forecast from the ARIMA outcome
> plot(SPFCST)                    # plot the results

Here's that plot. These are the 80% and 90% confidence bands for the possible future outcomes derived from past performance - empirical data from the past.

The 80% range is 27 to 10 and the 90% range is 30 to 5.
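For readers who want to check the arithmetic behind the percentages quoted below, here is the calculation against the post's own numbers (the ~18.17 average and the 5-to-30 band):

```python
# Where the +65% / -72% figures come from, using the post's numbers:
# a 12-week average of ~18.17 stickies and a 90% band of 5 to 30 per week.
mean = 18.1667
hi90, lo90 = 30, 5

upside = (hi90 - mean) / mean    # fraction above the average
downside = (lo90 - mean) / mean  # fraction below the average

print(f"upside {upside:+.0%}, downside {downside:+.0%}")
```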


So the killer question.

Would you bet your future on a probability of success with a +65 to -72% range of cost, schedule, or technical performance of the outcomes?

I hope not. This is a flawed example, I know - too small a sample, no adjustment of the ARIMA factors, just a quick raw assessment of the data used in some quarters as a replacement for actually estimating future performance. But this assessment shows how empirical data COULD support making decisions about future outcomes in the presence of uncertainty using a past time series, once the naive assumptions of small sample size and wide variances are corrected.

The End

If you hear you can make decisions without estimating, that's pretty much a violation of all established principles of Microeconomics and statistical forecasting. When the answer comes back we used empirical data, take your time series of empirical data, download R, install the needed packages, put the data in a file, apply the functions above, and see if you really want to commit to spending other people's money with a confidence range of +65% to -72% of performing like you did in the past. I sure hope not!

Related articles

  • Flaw of Averages
  • Estimating Probabilistic Outcomes? Of Course We Can!
  • Critical Success Factors of IT Forecasting
  • Herding Cats: Empirical Data Used to Estimate Future Performance
  • Some More Background on Probability, Needed for Estimating
  • Forecast, Automatic Routines vs. Experience
  • Five Estimating Pathologies and Their Corrective Actions
Categories: Project Management

Who Builds a House without Drawings?

Herding Cats - Glen Alleman - Mon, 04/13/2015 - 05:46

This month's issue of Communications of the ACM has a Viewpoint article titled "Who Builds a House without Drawing Blueprints?" in which two ideas are presented:

  • It is a good idea to think about what we are about to do before we do it.
  • If we're going to write a good program, we need to think above the code level.

The example for the last bullet is that there are many coding methods - test-driven development, agile programming, and others ...

If the only sorting algorithm we know is bubble sort, no coding method will produce code that sorts in O(n log n) time.
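That point is easy to demonstrate: however carefully it is written and tested, bubble sort still performs on the order of n²/2 comparisons in the worst case. A small sketch:

```python
# Lamport's point in miniature: bubble sort is O(n^2) no matter what
# process produced it; only knowing a better algorithm changes the
# complexity class.
def bubble_sort(xs):
    xs = list(xs)
    comparisons = 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

data = list(range(200, 0, -1))      # worst case: reverse-sorted input
result, n_cmp = bubble_sort(data)
assert result == sorted(data)       # same answer as an O(n log n) sort
print(n_cmp)                        # 19900 comparisons, ~ n^2 / 2
```

No amount of test-driven development applied to `bubble_sort` will turn those 19,900 comparisons into the ~1,500 an O(n log n) sort would need for 200 items.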

Not only do we need to have some sense of what capabilities the software needs to deliver in exchange for its cost, but also: do those capabilities meet the needs? What are the Measures of Effectiveness and Measures of Performance the software must fulfill? In what order must these be fulfilled? What supporting documentation is needed for the resulting product or service in order to maintain it over its life cycle?

If we do not start with a specification, every line of code we write is a patch.†

This notion brings up several other gaps in our quest to build software that fulfills the needs of those paying. There are several conjectures floating around that willfully ignore the basic principles of providing solutions acceptable to the business. Since the business operates on the principles of Microeconomics of decision making, let's look at developing software from the point of view of those paying for our work. It is conjectured that ...

  • Good code is its own documentation.
  • We can develop code just by sitting down and doing it. Our mob of coders can come up with the best solution as they go.
  • We don't need to estimate the final cost and schedule; we'll just use some short-term, highly variable empirical data to show us the average progress and project that forward.
  • All elements of the software can be sliced to a standard size, and we'll use Kanban to forecast future outcomes.
  • We're bad at estimating and our managers misuse those numbers, so the solution is to Not Estimate, and that will fix the root cause of those symptoms of Bad Management.

There are answers to each of these in the literature on the immutable principles of project management, but I came across a dialog that illustrates the naïveté around spending other people's money to develop software without knowing how much, what, and when.

Here's a conversation - following Galileo Galilei's Dialogue Concerning the Two Chief World Systems - between Salviati, who argues for the principles of celestial mechanics, and Simplicio, a dedicated follower who holds that those principles have no value for him, since he sees them as an example of dysfunction.

I'll co-opt the actual social media conversation and use the words of Salviati and Simplicio as the actors. The two people on social media are both fully qualified to be Salviati. Galileo used Simplicio as a double entendre to make his point, so neither is Simplicio here:

  • Simplicio - My first born is a novice software developer but is really bad at math, especially statistics, and those pesky estimating requests asked by the managers he works for. He's thinking he needs to find a job where they let him develop code, where there is #NoMath needed to make those annoying estimates.
  • Salviati - Maybe you should tell him you're not suggesting he not learn math, but simply that he reduce his dependence on math in his work, since it is hard and he's not very good at it.
  • Simplicio - Yes, lots of developers struggle with answering estimate questions based on statistics and other known and tested approaches. I'm suggesting he find some alternative to having to make estimates, since he's so bad at them.
  • Salviati - I'll agree for the moment, since he doesn't appear to be capable of learning the needed math. Perhaps he should seek other ways of answering the questions asked of him by those paying his salary - ways in which he can apply #NoMath to answering those questions needed by the business people to make decisions.
  • Simplicio - Maybe he can just pick the most important thing to work on first, do that, then go back and start the next most important thing, and do that until he is done. Then maybe those paying him will stop asking when will you be done and how much will it cost, and, oh yes, all that code you developed will meet the needed capabilities I'm paying you to develop, right?
  • Salviati - Again, this might be a possible solution to your son's dilemma. After all, we're not all good at using statistics and other approaches to estimate those numbers needed to make business decisions. Since we really like to just start coding, maybe the idea of #NoMath is a good one and he can just be an excellent coder. Those paying for his work really only want it to work on the needed day, for the expected cost, and provide the needed capabilities - all within the confidence levels needed to fulfill their business case so they can stay in business.
  • Simplicio - He heard of this idea on the internet: collect old data and use it for projecting the new data. That of course would not be the same as analyzing the future risks, changing sizes of work, and all the other probabilistic outcomes. Yeah, that's work - just add up all the past estimates, find the average, and use that.
  • Salviati - That might be useful for him, but make sure you caution him that those numbers from the past may not represent the numbers in the future if he doesn't assess what capabilities are needed in the future and what the structure of the solution is for those capabilities. And while he's at it, make sure the uncertainties in the future are the same as the uncertainties in the past; otherwise those past numbers are pretty much worthless for making decisions about the future.
  • Simplicio - Sure, but at his work his managers abuse those numbers, taking them as point values and ignoring the probabilistic ranges he places on them. His supervisor - the one with the pointy hair - simply doesn't recognize that all project work is probabilistic and wants his developers to just do it.
  • Salviati - Maybe your son can ask his supervisor's boss - the one that provides the money for his work - the Five Whys as to why he even needs an estimate. Maybe that person will be happy to have your son spend his money with no need to know how much it will cost in the end, or when he'll be done, or what will really be done when the money and time run out.
  • Simplicio - Yes, that's the solution. All those books, classes, and papers he should have read, all those tools he could have used, really don't matter any more. He can go back and tell the person paying for the work that he can produce the result without using any math whatsoever. Just take whatever he is producing, one slice at a time, and eventually they'll get what they need to fulfill their business case - hopefully before time and money run out.

† Viewpoint: "Who Builds a House without Drawing Blueprints?", Leslie Lamport, CACM, Vol. 58, No. 4, pp. 38-41.

Categories: Project Management

How to Improve OKRs (Flow, not Sync)

NOOP.NL - Jurgen Appelo - Sat, 04/11/2015 - 09:37

After failing dramatically with my professional OKRs in the first quarter of this year (hint: I was no exception in my team), I want to take a moment to evaluate how the practice works for me, and how it doesn’t. Let’s do a Perfection Game with OKRs!

The post How to Improve OKRs (Flow, not Sync) appeared first on NOOP.NL.

Categories: Project Management

Top Project Management Blogs of 2015

Herding Cats - Glen Alleman - Fri, 04/10/2015 - 15:20


The list, in alphabetical order, includes this blog. Thanks!

 

Categories: Project Management

The Microeconomics of a Project Driven Organization

Herding Cats - Glen Alleman - Thu, 04/09/2015 - 16:30

The notion that we can ignore - many times willfully - the microeconomics of decision making is common in some development domains. Any project-driven paradigm has many elements, each interacting with the others in random, nonlinear ways - ways we may not even be able to understand when the organization's maturity has not yet developed to the level needed to manage in the presence of uncertainty.


So When We Say Project What Do We Mean?

The term project has an official meaning in many domains. Work that has a finite duration is a good start. But then, what is finite? Work that makes a change to an external condition. But what does change mean, and what is external? In most definitions, operations and maintenance are not budgeted as projects. There are accounting rules that describe projects as well. Once we land on an operational definition of project, here's a notional picture of the range of projects.

When we hear a suggestion about any process for project management, we first need to establish a domain, and a context in that domain, in which to test the idea.

My favorite questionable conjecture is that we can make decisions about spending other people's money without estimating the outcomes of those decisions. Making decisions about an uncertain future is the basis of Microeconomics.

One framework for making decisions in the presence of uncertainty is Organizational Governance. Without establishing a governance framework - ranging from one like that below to no governance at all, just DO IT - it's difficult to have a meaningful conversation about the applicability of any project management process.

So when we hear a new and possibly counterintuitive suggestion, start by asking: In What Governance Model Do You Think This Idea Might Be Applicable?


 

Related articles

  • Decision Analysis and Software Project Management
  • Incremental Delivery of Features May Not Be Desirable
Categories: Project Management

The Microeconomics of Decision Making in the Presence of Uncertainty - Re-Deux

Herding Cats - Glen Alleman - Thu, 04/09/2015 - 14:59

Microeconomics is a branch of economics that studies the behavior of individuals and small impacting organizations in making decisions on the allocation of limited resources.

All engineering is constrained optimization: how do we take the resources we've been given and deliver the best outcome? That's what microeconomics is. Unlike the models of mechanical engineering or classical physics, the models of microeconomics are never precise. They are probabilistic, driven by the underlying statistical processes of the two primary actors - suppliers and consumers.

Let's look at both in light of the allocation of limited resources paradigm.

  • Supplier = development resources - these are limited in both time and capacity for work, and likely in talent as well, and they produce latent defects, which cost time and money to remove.
  • Consumer = those paying for the development resources - they have limited time and money. Limited money is obvious: they have a budget. Limited time, since the time value of money is part of the return on capital equation used by the business. Committing capital (not real capital - software development is usually carried on the books as an expense) needs a time when that capital investment will start to return value.

In both cases time, money, and capacity for productive value are limited (scarce); they compete with each other and with the needs of both the supplier and the consumer. In addition, since the elasticity of labor costs is limited by the market, we can't simply buy cheaper to make up for time and capacity. It's done, of course, but always to the detriment of quality and actual productivity.

So cost is inelastic, time is inelastic, capacity for work is inelastic, and other attributes of the developed product are constrained. The market need is likewise constrained. Business needs are rarely elastic - oh, we really didn't need to pay people in the timekeeping system; let's just collect the time sheets and run payroll when that feature gets implemented.

Enough Knowing, Let's Have Some Doing

With the principles of Microeconomics applied to software development, there is one KILLER issue that, if willfully ignored, ends the conversation for any business person trying to operate in the presence of limited resources - time, money, and capacity for work.

The decisions being made about these limited resources are being made in the presence of uncertainty. This uncertainty - as mentioned - is based on random processes. Random processes produce imprecise data: data drawn from random variables - random variables with variances, instability (stochastic processes), and nonlinear stochastic behavior.

Quick Diversion Into Random Variables

There are many mathematical definitions of random variables, but for this post let's use a simple one.

  • A variable is an attribute of a system or project that can take on multiple values. The value of this variable may be fixed: for example, the number of people on the project can be known by counting them and writing that number down. When someone asks, you can count and say 16.
  • When the values of the variable are random, the variable can take on a range of values just like the non-random variable, but we don't know exactly what those values will be when we want to use that variable to answer a question. If the variable is a random variable and someone asks what will be the cost of this project when it is done, you'll have to provide a range of values and the confidence for each of the numbers in that range.

A simple example - silly but illustrative: HR wants to buy special shoes for the development team, with the company logo on them. If we could not for some reason (it doesn't matter why) measure the shoe size of all the males on our project, we could estimate how many shoes of what size would be needed from the statistical distribution of male shoe sizes for a large population of male coders.


This would get us close to how many shoes of what size we need to order. This is a notional example, so please don't place an order for actual shoes. But the underlying probability distribution of the values the random variable can take on can tell us about the people working on the project.
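The shoe-order estimate can be sketched directly from the distribution. The mean of 10.5 and standard deviation of 1.5 below are illustrative assumptions, not figures from the post:

```python
# Allocate a notional order of 100 pairs across half-size bins by the
# probability mass of each bin under an assumed normal distribution
# of male shoe sizes (mean 10.5, std dev 1.5 -- illustrative values).
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

mu, sigma, team = 10.5, 1.5, 100
order = {}
for size in [7, 7.5, 8, 8.5, 9, 9.5, 10, 10.5, 11, 11.5, 12, 12.5, 13]:
    # probability mass of the half-size bin centered on this size
    p = normal_cdf(size + 0.25, mu, sigma) - normal_cdf(size - 0.25, mu, sigma)
    order[size] = round(team * p)

print(order)
```

The order concentrates around the mean size, with the tails covered by a pair or two each - which is all the distribution, rather than a per-person measurement, can tell us.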

Since all the variables on any project are random variables, we can't know their exact values at any one time. But we can know their possible ranges and the probability of any specific value when asked to produce that value for making a decision.

The variability of the population values and its analysis should be seen not as a way of making precise predictions about project outcomes, but as a way of ensuring that all relevant outcomes produced by these variables have been considered, that they have been evaluated appropriately, and that we have a reasonable sense of what will happen across the multitude of values produced by a specific variable. It provides a way of structuring our thinking about the problem.

Making Decisions In The Presence of Random Variables

To make a decision - a choice among several options - means making an opportunity cost decision based on random data. And if there is only one option, then the choice is either to take it or not.

This means the factors that go into that decision are themselves random variables: labor, productivity, defects, capacity, quality, usability, functionality, produced business capability, time. Each is a random variable, interacting in nonlinear ways with the other random variables.

To make a choice in the presence of this paradigm, we must estimate not only the behavior of the variables, but also the behavior of the outcomes.

In other words

To develop software in the presence of limited resources driven by uncertain processes for each resource (time, money, capacity, technical outcomes), we must ESTIMATE the behaviors of these variables that inform our decision.

It's that simple, and it's that complex. Anyone conjecturing that decisions can be made in the absence of estimates of the future outcomes of those decisions is willfully ignoring the Microeconomics of business decision making in the software development domain.
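One common way to make such estimates over interacting random variables is Monte Carlo simulation. A minimal sketch, with wholly illustrative three-point task estimates:

```python
# Monte Carlo sketch of "estimate the behaviors of the variables":
# task durations are random variables (triangular best/likely/worst
# guesses, all illustrative), so project duration is a distribution,
# not a point value.
import random

random.seed(42)

# (best, most likely, worst) duration in days per task -- assumed values
tasks = [(3, 5, 10), (8, 12, 20), (2, 4, 9), (5, 8, 15)]

totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for _ in range(10_000)
)

p10, p50, p90 = (totals[int(len(totals) * q)] for q in (0.10, 0.50, 0.90))
print(f"10%: {p10:.1f}d  50%: {p50:.1f}d  90%: {p90:.1f}d")
```

The answer to "when will you be done?" is then a range with a confidence attached - e.g. "90% confident by the p90 date" - rather than a single number.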

For those interested in further exploring the core principles of the software development business beyond this willful ignorance, here's a starting point.

These are the tip of the big pile of books, papers, and journal articles on estimating software systems.

A Final Thought on Empirical Data

Making choices in the presence of uncertainty can be informed by several means:

  • We have data from the past
  • We have a model of the system that can simulated
  • We have¬†reference classes from which we can extract similar information

This is empirical data. But there are several critically important questions that must be answered if we are not going to be disappointed with our empirical data outcomes:

  • Is the past representative of the future?
  • Is the sample of data from the past sufficient to make sound forecasts of the future? The number of samples needed greatly influences the confidence intervals on the estimates of the future.

Calculating the number of samples needed for a specific level of confidence requires some statistics, but here's a place to start. Suffice it to say, those conjecturing estimates based on past performance (the number of story points in the past) will need to produce the confidence calculation before any non-trivial decisions should be made on their data. Without those calculations, the use of past performance will be very sporty when spending other people's money.
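For reference, the standard sample-size calculation for estimating a mean within a margin of error E at a given confidence is n = (z·σ/E)². The throughput figures below are illustrative:

```python
# Sample-size formula for estimating a mean within margin E:
# n = (z * sigma / E)^2, rounded up.
import math

def samples_needed(sigma, margin, z=1.96):  # z = 1.96 -> 95% confidence
    return math.ceil((z * sigma / margin) ** 2)

# e.g. weekly throughput with an (assumed) std dev of 6 stickies,
# estimated to within +/-2 stickies at 95% confidence:
n = samples_needed(sigma=6, margin=2)
print(n)
```

Even with these modest assumptions, the required sample count comes out well above the 12 weekly samples in the #NoEstimates example.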

Thanks to Richard Askew for suggesting the addition of the random variable background.

Categories: Project Management

CodersTrust

From the Editor of Methods & Tools - Thu, 04/09/2015 - 14:21
At a time when the crowdfunding and microfinance initiatives are growing in different domains, it is also important to ask ourselves what we could do to make the (software development) world a better place. In this domain, I would like to share with you the initiative of CodersTrust that works currently with software developers in […]

Learning Opportunities for All

If you are not on my Pragmatic Manager email list, you might not know about these opportunities to explore several topics with me this month:

An Estimation hangout with Marcus Blankenship this Friday, April 10, 2:30pm EDT. If you have questions, please email me or Marcus. See the Do You Have Questions About Estimation post. Think of this hangout as a clinic, where I can take your questions about estimation and help you address your concerns.

In the Kitchener-Waterloo area on April 29-30, I'm doing two workshops that promise to be quite fun as well as educational:

  • Discovering the Leader Inside of You
  • An Agile and Lean Approach to Managing Your Project Portfolio

To see the descriptions, see the KWSQA site.

You do not have to be a manager to participate in either of these workshops. You do need to be inquisitive and willing to try new things. I believe there is only room for two people for the leadership workshop. I think there is room for five people in the project portfolio workshop. Please do sign up now.

 

Categories: Project Management

Valuing Your Work as an Agile Coach

Mike Cohn's Blog - Tue, 04/07/2015 - 15:00

How should we value our work as agile coaches and consultants? The way I do it is to ask myself if I will have had a positive long-term impact on a team or organization. I’m not particularly interested in short-term impacts. In fact, many coaching engagements could have a negative impact in the short term if I’ve done or suggested anything disruptive.

It would be nice if these changes were always easily and directly measurable. Unfortunately, they really aren’t. To measure the impact of my coaching, we would need at least two identical teams, two identical products, and at least a handful of years.

One team would build their product without my coaching. The other team would build theirs with my coaching. Their sales forces and all other supporting functions would need to be identical.

If all other factors were made equal, though, we could measure the impact of my coaching on that team. We’d simply look at sales for the two products over the handful of years and know which had done better.

In some ways, of course, it will be your clients who determine your value as an agile coach. But sometimes clients are not in a good position to judge value. Some clients want you to parrot back to their teams what they’ve said—regardless of whether that is valuable advice or not. Other clients really do want their teams to receive the best possible advice. These are, of course, the clients that we, as coaches and consultants, treasure.

So ultimately, we are the best judges of the value we add. We can bring a proper long-term view, but we need to look critically at our work. Is our advice helping? Is it pushing people to improve? Is it too disruptive? Not disruptive enough? Is it appropriate for the situation?

A slide in my Certified ScrumMaster class says that a ScrumMaster “unleashes the energy and intelligence of others.” In class I often joke that I want to go home at the end of the day and answer my wife’s question of, “What did you do today?” with, “I unleashed the energy and intelligence of others.”

But, on the days I can do that, I find I’ve delivered value to my clients.

Capability Maturity Levels and Implications on Software Estimating

Herding Cats - Glen Alleman - Mon, 04/06/2015 - 14:52

An estimate is the most knowledgeable statement you can make at a particular point in time regarding cost/effort, schedule, staffing, risk, and the ...ilities of the product or service. [1]

CMMI for Estimates

 

Immature versus Mature Software Organizations [3]

Setting sensible goals for improving the software development processes requires understanding the difference between immature and mature organizations. In an immature organization, processes are generally improvised by practitioners and their management during the course of the project. Even if a process has been specified, it is not rigorously followed or enforced.

Immature organizations are reactionary with managers focused on solving immediate crises. Schedules and budgets are routinely exceeded because they are not based on realistic estimates. When hard deadlines are imposed, product functionality and quality are often compromised to meet the schedule.

In immature organizations, there is no objective basis for judging product quality or for solving product or process problems. The result is product quality is difficult to predict. Activities intended to enhance quality, such as reviews and testing, are often curtailed or eliminated when projects fall behind schedule.

A mature organization possesses an organization-wide ability to manage development and maintenance processes. The process is accurately communicated to existing staff and new employees, and work activities are carried out according to the planned process. The processes mandated are usable and consistent with the way the work actually gets done. These defined processes are updated when necessary, and improvements are developed through controlled pilot tests and/or cost-benefit analyses. Roles and responsibilities within the defined process are clear throughout the project and across the organization.

Let's look at the landscape of maturity on estimating the work for those providing the funding for the work.

1. Initial

Projects are small, short, and, while important to the customer, not likely critical to the success of the business in terms of cost and schedule.

  • Informal or no estimating
  • When there are estimates, they are manual, without any process, and likely considered guesses

The result of this level of maturity is poor forecasting of the cost and schedule of the planned work. And surprise for those paying for the work.

2. Managed

Projects may be small, short, and possibly important. Some form of estimating, either from past experience or from decomposition of the planned work, is used to make linear projections of future cost, schedule, and technical performance.

This past performance is usually not adjusted for the variances of the past - it's just an average. As well, the linear average usually doesn't consider changes in the demand for work, technical differences in the work, and other uncertainties in the future of that work.

This is the Flaw of Averages approach to estimating. As well, the effort needed to decompose the work into same-sized chunks is the basis of all good estimating processes. In the space and defense business the 44-day rule is used to bound the duration of work. This answers the question: how long are you willing to wait before you find out you're late? For us, the answer is no more than one accounting period. In practice, project status - physical percent complete - is taken every Thursday afternoon.

3. Defined

There is an estimating process, using recorded past performance and statistical adjustments of that past performance. Reference classes are used to model future performance from the past. Parametric estimates can be used with those reference classes or other estimating processes. Function Points are common in enterprise IT projects, where interfaces to legacy systems, database topology, user interfaces, and transactions are the basis of the business processes.
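As an illustration of what a parametric estimate looks like, here is the widely published Basic COCOMO organic-mode formula, effort in person-months = 2.4 × KLOC^1.05. It is used here only to show the shape of such a model; real use requires calibrating the coefficients against your own reference classes:

```python
# Basic COCOMO, organic mode: effort (person-months) = 2.4 * KLOC^1.05.
# The 32 KLOC system size below is a hypothetical input.
def cocomo_organic_effort(kloc):
    return 2.4 * kloc ** 1.05

effort = cocomo_organic_effort(32)
print(f"{effort:.1f} person-months")
```

The point of a parametric model is that the parameters (2.4 and 1.05 here) encode past performance from a reference class, so the estimate is grounded in data rather than in a guess.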

The notion that we've never done this before, so how can we estimate? begs the question: do you have the right development team? This is a past performance issue. Why hire a team that has no understanding of the problem and then ask them to estimate the cost of the solution? You wouldn't hire a plumber to install a water system if she hadn't done it before in some way.

4. Quantitatively Managed

Measures, metrics, and architecture assessments - the Design Structure Matrix is one we use - are used to construct a model of the future. External databases are referenced to compare internal estimates with external experience.

5. Optimized 

There is an estimating organization that supports development, starting with the proposal and continuing through project close out. As well there is a risk management organization helping inform the estimates about possible undesirable outcomes in the future.

Resources

[1] Improving Estimate Maturity for More Successful Projects, SEER/Tracer Presentation, March 2010.

[2] Software Engineering Information Repository, Search Results on Estimates

[3] The Capability Maturity Model for Software

Categories: Project Management

Hope is not a Strategy

Herding Cats - Glen Alleman - Sun, 04/05/2015 - 16:17

We don't need to hope the sun will come up tomorrow. That probability is 100%. The probability that the outpatient surgery you've scheduled next week will be successful is high. The surgeon has done this procedure thousands of times with a 98% success rate.

The probability that you'll be able to arrive on time with the needed features for the first release of the accounts payable and accounts receivable upgrade to the ERP system, is not like the sun coming up or the outpatient surgery. It's actually a probability that you'll have to think about. Actually have to calculate. Actually make an estimate of this probability before those paying for your work can make a decision about when to switch from the old system to the new system.

Hoping we can show up when needed is not a strategy. We need to show up on time, and on budget, with the needed capabilities. To do that, we need a Closed Loop Control System, in which we have a target performance and measures of actual performance, compared to estimates of the planned performance, to create an error signal used to take corrective action.
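
That closed loop can be sketched in a few lines. The numbers and the tolerance threshold below are invented for illustration:

```python
# Closed-loop control sketch: each period, compare planned physical percent
# complete with actual, and use the error signal to decide on action.

planned = [10, 20, 30, 40, 50]   # cumulative planned % complete per period
actual  = [ 8, 15, 24, 35, 47]   # cumulative measured % complete per period

TOLERANCE = 3  # assumed management threshold for the error signal

for period, (p, a) in enumerate(zip(planned, actual), start=1):
    error = p - a                # the error signal: planned minus actual
    if error > TOLERANCE:
        action = "corrective action needed"
    else:
        action = "within tolerance"
    print(f"period {period}: planned {p}%, actual {a}%, error {error} -> {action}")
```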

Categories: Project Management

Making Decisions in the Presence of Uncertainty

Herding Cats - Glen Alleman - Sat, 04/04/2015 - 15:57

If you're on a project that has certainty, then you're wasting your time estimating. If you are certain things are going to turn out the way you think they are when you have choices to make about the future, then estimating the impact of that choice is a waste of time.

If your future is clear as day far enough ahead that you can see what's going to happen long before you get there, estimating is a waste. If you live in Fantasyland you really don't need to estimate the impact of decisions made today for outcomes in the future.

Peter Pan and Tinker Bell will look after you and make sure nothing comes as a surprise.

If, however, you live in the real world of projects - then uncertainty is the dominant force driving your project.

Let's say some things about uncertainty. First, a tautology:

Uncertainties are things we cannot be certain about

Uncertainty is created by our incomplete knowledge of the future, present, or past. Uncertainty is not the same as ignorance of the future, past, or present. When we say uncertain, we speak about some state of knowledge of the system of interest that is not fixed or determined. If we are in fact ignorant of the future, then the only approach is to spend money to find things out before proceeding. Estimating is not needed in that case either. We can only estimate after we have acquired the needed knowledge. This knowledge will be probabilistic, of course. But starting a project in the presence of ignorance of the future is itself a waste, unless those paying for the project are also willing to pay to discover things they should have known before starting. At that point it's not really a project - which has bounded time and scope.

So when you hear we're exploring, ask first who's paying for that exploration. Is the exploration part of a plan for the project? Can that cost be counted in the cost of the project and therefore burdened on the project's ROI? Maybe finding someone who actually knows the project domain and can define the uncertainties would be faster, better, and cheaper than hiring someone who knows little about what they're doing and is going to spend your money finding out.

This is one reason for a Past Performance section in every proposal we submit - tell me (the buyer) you guys actually know WTF you're doing and that you're not learning on my dime.

Back to Uncertainty

Uncertainty is related to three aspects of the project management domain:

  • The external world - the activities of the project.
  • Our knowledge of this world - the planned and actual behaviours, past, present, and future of the project.
  • Our perception of this world - the data and information we receive about these behaviours.

There are many sources of uncertainty, here's a few:

  • Lack of precision about the underlying uncertainty.
  • Lack of accuracy about the possible values in the uncertainty probability distributions.
  • Undiscovered biases in defining the range of possible outcomes in the project's processes, technologies, staff, and other participants.
  • Unknowability of the range of probability distributions.
  • Absence of information about the probability distributions.

This project uncertainty creates Risk to the success of the project. Cost, Schedule, and Performance risk. Performance being the ability to deliver the needed capabilities in exchange for cost and schedule. There is a formal relationship between uncertainty and risk. 

  • Uncertainty is present when probabilities cannot be quantified in a rigorous or valid manner, but can be described as intervals within a probability distribution function (PDF).
  • Risk is present when the uncertainty of the outcome can be quantified in terms of probabilities or a range of possible values.
  • This distinction is important for modeling the future performance of cost, schedule, and technical outcomes of a project.

There are two types of uncertainty on all projects:

  • Aleatory - Pertaining to stochastic (non-deterministic) events, the outcome of which is described using probability.
    • From the Latin alea.
    • For example, in a game of chance, stochastic variabilities are the natural randomness of the process and are characterized by a probability density function (PDF) for their range and frequency.
    • Since these variabilities are natural they are therefore irreducible.
  • Epistemic (subjective or probabilistic) uncertainties are event based probabilities, are knowledge-based, and are reducible by further gathering of knowledge.
    • Pertaining to the degree of knowledge about models and their parameters.
    • From the Greek episteme (knowledge).

Separating these classes helps in the design of assessment calculations and in the presentation of results for an integrated program risk assessment.
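
Keeping the two classes separate in an assessment calculation might look like the following Monte Carlo sketch, with invented distribution parameters: aleatory variability is sampled every trial from a PDF, while the epistemic risk is a discrete event whose probability further knowledge could reduce:

```python
import random

random.seed(7)

def one_trial():
    # Aleatory: irreducible natural variability of the task duration,
    # modeled here with a triangular PDF (min 20, likely 30, max 45 days).
    duration = random.triangular(20, 45, 30)
    # Epistemic: an event-based risk (say, an unproven interface fails)
    # that occurs with an assumed 25% probability and adds rework.
    if random.random() < 0.25:
        duration += 15
    return duration

trials = sorted(one_trial() for _ in range(20_000))
p50 = trials[len(trials) // 2]
p85 = trials[int(0.85 * len(trials))]

print(f"50% confidence: {p50:.1f} days, 85% confidence: {p85:.1f} days")
```

Buying down the epistemic risk (testing, prototyping) lowers the 25% and shifts the tail in; the aleatory triangular spread stays, and can only be protected with schedule margin.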

Three Conditions for Aleatory Uncertainty

  • An aleatory model contains a single unknown parameter.
    • Duration
    • Cost
  • The prior information for this parameter is homogeneous and is known with certainty.
    • Reference Classes
    • Past Performance
    • Parametric models
  • The observed data are homogeneous and are known with certainty.
    • A set of information that is made up of similar constituents.
    • A homogeneous population is one in which each item is of the same type.

Aleatory uncertainty cannot be reduced - it is irreducible

Epistemic Uncertainty

Epistemic uncertainty is any lack of knowledge or information in any phase or activity of the project. This uncertainty and the resulting risks can be reduced through testing, modeling, past performance assessments, research, comparable systems, and other processes that increase the knowledge needed to reduce the uncertainty in the project's outcomes.

Epistemic uncertainty can be further classified into model, phenomenological, and behavioural uncertainty. (in "Risk-informed Decision-making In The Presence Of Epistemic Uncertainty," Didier Dubois, Dominique Guyonnet, International Journal of General Systems 40, 2 (2011) 145-167)

Dealing with Aleatory and Epistemic Uncertainty

  • Epistemic uncertainty results from gaps in knowledge. For example, we can be uncertain of an outcome because we have never used a particular technology before. Such uncertainty is essentially a state of mind and hence subjective.
  • Aleatory uncertainty results from variability that is intrinsic to the behavior of some systems. For example, we can be confident about the long-term frequency of throwing sixes but remain uncertain of the outcome of any given throw of a die. This uncertainty can be objectively determined.

So Now The Punch Line

To manage in the presence of these two uncertainties, reducible and irreducible, we need to know something about what will happen when we make decisions to mitigate the risks that result from them - the actions we need to take to reduce the reducible risks, or to provide margin for the irreducible ones.

What is the probability that our actions will produce desirable outcomes in the presence of these uncertainties? What is the probability that the residual uncertainties will be sufficiently low that we still have sufficient confidence of success - defined in any units of measure you want? Ours are on time, on budget, on specification. You can pick your own, but please write them down in units of measure meaningful to the decision makers.

So Here it is, Wait for It

You can't make decisions in the presence of uncertainty unless you estimate the outcomes of these decisions, influenced by the probabilistic nature of the drivers of the uncertainties. 

Let's make it clear - You can't Make Decisions For Uncertain Outcomes Without Estimating

If anyone says you can, ask to see exactly how. When they can't show you, move on; they don't know what they're talking about. ∴

Categories: Project Management

Do You Have Questions About Estimation?

I am doing a Google Hangout with Marcus Blankenship on April 10. We’ll be talking about estimation and my new book, Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule.

The book is about ways you can estimate and explain your estimates to the people who want to know. It also has a number of suggestions for when your estimates are inaccurate. (What a surprise!)

Marcus and I are doing a Google Hangout on April 10, 2015. There’s only room for 15 people on the hangout live. If you want me to answer your question, send it in advance by email to marcus at marcusblankenship dot com. Yes, you can send your questions to me, and I will forward them to Marcus.

The details you’ll need to attend:

Hangout link: https://plus.google.com/events/c3n6qpesrlq5b8192tkici26lcc

Youtube live streaming link: http://www.youtube.com/watch?v=82IXhj4oI0U

Time & Date: April 10th, 2015 @ 11:30am Pacific, 2:30 Eastern.

Hope you join us!

Categories: Project Management

Decision Analysis and Software Project Management

Herding Cats - Glen Alleman - Thu, 04/02/2015 - 16:54

An announcement came across in email today from AACEI (Association for the Advancement of Cost Engineering International) about a Denver meeting on Decision Analysis.

Decision Analysis provides ways to incorporate judgments about uncertainty into the evaluation of alternatives. Cost professionals using these methods can provide more credible analyses. The foundational calculation is expected value, usually solved with a decision tree or by Monte Carlo simulation. A formal definition is:

Decision analysis is the discipline comprising the philosophy, theory, methodology, and professional practice necessary to address important decisions in a formal manner.
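
The expected value calculation behind a simple decision tree can be sketched as follows (alternatives, probabilities, and costs are all invented for illustration):

```python
# Roll probabilistic outcome branches back to a single expected cost for
# each alternative, then choose the alternative with the lowest one.

alternatives = {
    # alternative: list of (probability, cost) outcome branches
    "build in-house": [(0.6, 500), (0.4, 900)],    # may overrun badly
    "buy and integrate": [(0.8, 600), (0.2, 800)],  # narrower spread
}

def expected_cost(branches):
    # Branch probabilities must sum to 1 for a well-formed tree.
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * cost for p, cost in branches)

for name, branches in alternatives.items():
    print(f"{name}: expected cost {expected_cost(branches):.0f}")

best = min(alternatives, key=lambda n: expected_cost(alternatives[n]))
print(f"choose: {best}")
```

Note that the expected values are themselves estimates: the branch probabilities and costs come from the uncertainty models discussed above, not from certainty.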

In project management and especially the software development project management domain, making decisions is always about making decisions in the presence of uncertainty. Uncertainty is always in place for the three core elements of any project, shown below:


In order to make decisions about the future outcomes of a project subject to these uncertainties, we need to know not only how these three variables randomly interact, but also how they behave as standalone processes. This behaviour - in the presence of uncertainty - comes in two types:

  • Event based behaviour - the probability that something will or will not happen in the future.
  • Naturally occurring random behaviour - the Probability Distribution Function that describes the possible outcomes.

Both these behaviours are present in all project work. If they are not present, the project is likely simple enough that you can just start working and be confident of completing on time, on budget, and having the outcome work.

When the behaviours are not deterministic and the interactions are not deterministic - then to make decisions we can only do one thing.

We need to estimate these behaviours

When it is suggested that we can make decisions in the presence of uncertainty, and no method is provided for making those decisions, the suggestion can't be taken as true.

Recording past performance and then taking the average of that performance as an estimate of future performance is seriously flawed, as discussed in How to Avoid Yesterday's Weather Estimating Problem.

So do the math. Random variables abound in our project domain - be it software, hardware, pouring concrete, or welding steel, it's all random. Don't fall prey to the simple-minded statement that we're exploring ways to make decisions without estimates without asking the direct question: show me how that nonexistent description does not violate the principles of Decision Analysis and the microeconomics of software development decision making.

Categories: Project Management

Software Development Conferences Forecast March 2015

From the Editor of Methods & Tools - Tue, 03/31/2015 - 16:00
Here is a list of software development related conferences and events on Agile ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software […]

Should You Share Details from the Retrospective?

Mike Cohn's Blog - Tue, 03/31/2015 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

During a sprint retrospective, team members gather and discuss ways in which they can improve. This should include the ScrumMaster and product owner, as each is part of the team. The team is not limited to finding improvements only within their process. They may, for example, decide they need to improve their skills with a given technology and to seek out training in that area.

A concern many teams have around the retrospectives is whether the improvements they identify should be shared with others or kept within the team.

I think it’s ideal when all retrospective results from all teams can always be shared. Transparency is, after all, a pillar on which the Scrum process is built. But as much as I would love a culture of openness to exist such that everything can always be shared with everyone, that is often not the case.

In my experience, most teams will have no issue making public most of the improvements they identify. The items will be predictable things often similar to those being done by other agile teams in the organization: get better at this, learn how to do that, figure out how to do more of this and how to do the other thing earlier each sprint.

Occasionally, however, a team may have something they don’t want to share. Some examples I’ve seen include:

  • Learn this new technology that is a bit astray from a stated corporate direction, but which the team thinks may have enough promise that they’d want to use it anyway
  • Clean up the code in this part of the system - something the product owner is aware of but, for some reason, doesn’t want to advertise as bad enough to need significant refactoring
  • Find ways to work better with that other group, who if they read this would want to know why they were being singled out for a better relationship

I want to repeat that while transparency is a virtue for agile teams, not all agile teams may be there yet. While perhaps keeping the results of their next few retrospectives private, those teams can work on feeling more comfortable opening up in the future.

Similarly, sometimes making a planned improvement known can impact the ability to make that improvement. I think that’s the case with the last example above.

Posting that you want to work better with the marketing group may make the marketing group either resist the change, or want to know what’s wrong with them that you need to change.

(Of course, they could also be quite open to changing to improve the relationship, so I would always discuss the situation with someone in the other group.)

Making this very practical, at the end of each sprint, the team has a list of changes they have agreed to make. I like to ask the team if there are any objections to making the list public. If there are not, I will hang the list in the team room or add it to the team’s home page.

When there are objections, I will see if we can leave perhaps one item off the publicly displayed list—usually that’s sufficient.

In the end, Scrum teams should have the courage to be transparent about their planned improvements whenever possible. But, occasionally there is more to be gained by keeping a retrospective item or two private.

Do You Know How to Say No?

Some of my coaching clients have way more to do than they can manage. Some of my project portfolio clients are struggling with how to say no.

My most recent Pragmatic Manager newsletter is all about what to do when you have too much to do. Read it at Do You Have Too Much to Do?

Categories: Project Management