When we hear a suggestion about a process that inverts the normal process based on a governance framework - say the Microeconomics of Software Development - we need to ask who benefits? How would that suggestion be tangibly beneficial to the recipient once the process is inverted?
Estimates, for example, are for the business. Why would the business no longer want an estimate of cost, schedule, or technical performance of the provided capabilities?
In the world of spending money to produce value, the one that benefits should be - must be - the one paying for that value, and therefore has a compelling interest in the information needed to make decisions about how the money is spent.
When that relationship between paying and benefit is inverted, then the path to Cui Bono is inverted as well.
In the end, follow the money must be the basis of assessing the applicability of any suggestion. If it is suggested that decision making can be done in the absence of estimating the impacts of those decisions, ask who benefits. If it's not those paying for the value, then Cui Bono no longer applies.
After my lists of mindfulness books and happiness books, here you can find the 20 Best Creativity Books in the World.
This list is created from the books on GoodReads tagged with “creativity”, sorted using an algorithm that favors number of reviews, average rating, and recent availability.
There is a popular notion in the #NoEstimates paradigm that empirical data is the basis of forecasting the future performance of a development project. In principle this is true, but the concept is incomplete in the way it is used. Let's start with the data source used for this conjecture.
There are 12 samples in the example used by #NoEstimates - in this case stickies per week. From this time series an average is calculated for the future. This is the empirical data used to estimate in the No Estimates paradigm. The average is 18.1667, or just 18 stickies per week.
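The post's 12 weekly values aren't reproduced here, so the sketch below uses a hypothetical series chosen to match the cited mean of 18.1667; it simply shows how that single-number average is produced.

```python
from statistics import mean

# Hypothetical 12-week "stickies per week" series (the actual values
# are not given in the post); chosen so the mean matches the cited 18.1667.
stickies = [15, 22, 18, 12, 25, 17, 20, 14, 21, 19, 16, 19]

avg = mean(stickies)
print(round(avg, 4))  # 18.1667
print(round(avg))     # 18
```

That single number hides the week-to-week variance, which is exactly the point of the analysis that follows.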
But we all have read - or should have read - Sam Savage's The Flaw of Averages. This is a very nice populist book. By populist I mean an easily accessible text with little or no mathematics in it, although Savage's own work, with his tool set, is highly mathematical.
There is a simple set of tools that can be applied to time series analysis, using past performance to forecast the future performance of the system that created that time series. The tool is R, and it is free for all platforms.
Here's the R code for performing a statistically sound forecast to estimate the possible range of values the past empirical stickies can take on in the future.
Put the time series in an Excel file and save it as text named Book1.txt - assume one value per line, no header. In R, first install and load the forecast package, then:
> library(forecast) - load the package that provides the forecast() function
> Book1=scan("Book1.txt") - read the saved time series values
> SPTS=ts(Book1) - apply the Time Series function in R to convert this data to a time series
> SPFIT=arima(SPTS) - apply the simple ARIMA function to the time series
> SPFCST=forecast(SPFIT) - build a forecast from the ARIMA outcome
> plot(SPFCST) - plot the results
Here's that plot. These are the 80% and 90% confidence bands for the possible outcomes in the future from the past performance - empirical data from the past.
The 80% range is 27 to 10 and the 90% range is 30 to 5.
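As a quick check, here is how the 90% band converts into a percentage spread around the forecast mean - a sketch using the numbers read off the plot above:

```python
# Convert the 90% confidence band into a percentage spread around
# the forecast mean (numbers read off the plot above).
mean_forecast = 18.1667   # stickies per week, the sample average
hi_90, lo_90 = 30, 5      # endpoints of the 90% confidence band

upside = (hi_90 - mean_forecast) / mean_forecast    # about +0.65
downside = (lo_90 - mean_forecast) / mean_forecast  # about -0.72
print(f"{upside:+.0%} / {downside:+.0%}")  # +65% / -72%
```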
So the killer question.
Would you bet your future on a probability of success with a +65% to -72% range of cost, schedule, or technical performance of the outcomes?
I hope not. This is a flawed example, I know. Too small a sample, no adjustment of the ARIMA factors, just a quick raw assessment of the data used in some quarters as a replacement for actually estimating future performance. But this assessment shows how empirical data COULD support making decisions about future outcomes in the presence of uncertainty using a past time series, once the naive assumptions of sample size and wide variances are corrected.
If you hear you can make decisions without estimating, that's pretty much a violation of all established principles of Microeconomics and statistical forecasting. When the answer comes back we used empirical data, take your time series of empirical data, download R, install the needed packages, put the data in a file, apply the functions above, and see if you really want to commit to spending other people's money with a confidence range of +65% to -72% of performing like you did in the past. I sure hope not!
This month's issue of Communications of the ACM has a Viewpoint article titled "Who Builds a House without Drawing Blueprints?" where two ideas are presented:
The example from the last bullet is that there are many coding methods - test driven development, agile programming, and others ...
If the only sorting algorithm we know is a bubble sort, no coding method will produce code that sorts in O(n log n) time.
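As a concrete illustration of that point, here is a bubble sort in Python (the language choice is mine, not the article's): no development process changes its O(n²) behavior; only knowing a better algorithm does.

```python
# A bubble sort - no coding method can turn this into an O(n log n)
# sort; that takes knowing a better algorithm, not a better process.
def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]  # swap out-of-order pair
    return xs  # O(n^2) comparisons in the worst case, regardless of process

print(bubble_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```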
Not only do we need to have some sense of what capabilities the software needs to deliver in exchange for the cost of the software, but also: do those capabilities meet the needs? What are the Measures of Effectiveness and Measures of Performance the software must fulfill? In what order must these be fulfilled? What supporting documentation is needed for the resulting product or service in order to maintain it over its life cycle?
If we do not start with a specification, every line of code we write is a patch.†
This notion brings up several other gaps in our quest to build software that fulfills the needs of those paying. There are several conjectures floating around that willfully ignore the basic principles of providing solutions acceptable to the business. Since the business operates on the principles of the Microeconomics of decision making, let's look at developing software from the point of view of those paying for our work. It is conjectured that ...
There are answers to each of these in the literature on the immutable principles of project management, but I came across a dialog that illustrates the naïvety around spending other people's money to develop software without knowing how much, what, and when.
Here's a conversation - following Galileo Galilei's Dialogue Concerning the Two Chief World Systems - between Salviati, who argues for the principles of celestial mechanics, and Simplicio, a dedicated follower convinced those principles have no value for him, as he sees them as an example of dysfunction.
I'll co-opt the actual social media conversation and use those words by Salviati and Simplicio as the actors. The two people on the social media thread are both fully qualified to be Salviati. Galileo used Simplicio as a double entendre to make his point, so neither is Simplicio here:
† Viewpoint: Who Builds a House without Drawing Blueprints?, Leslie Lamport, CACM, Vol. 58 No. 4, pp. 38-41.
After failing dramatically with my professional OKRs in the first quarter of this year (hint: I was no exception in my team), I want to take a moment to evaluate how the practice works for me, and how it doesn’t. Let’s do a Perfection Game with OKRs!
The notion that we can ignore - many times willfully - the microeconomics of decision making is common in some development domains. Any project-driven paradigm has many elements, each interacting with the others in random ways, in nonlinear ways, in ways we may not even be able to understand when the maturity of the organization has not yet developed to the level needed to manage in the presence of uncertainty.
So When We Say Project What Do We Mean?
The term project has an official meaning in many domains. Work that has a finite duration is a good start. But then, what is finite? Work that makes a change to an external condition. But what does change mean, and what is external? In most definitions, operations and maintenance are not budgeted as projects. There are accounting rules that describe projects as well. Once we land on an operational definition of a project, here's a notional picture of the range of projects.
My favorite questionable conjecture is that we can make decisions about spending other people's money without estimating the outcomes of those decisions. Making decisions about an uncertain future is the basis of Microeconomics.
One framework for making decisions in the presence of uncertainty is Organizational Governance. Without establishing a governance framework - ranging from one like that below to no governance at all, just DO IT - it's difficult to have a meaningful conversation about the applicability of any project management process.
So when we hear a new and possibly counterintuitive suggestion, start by asking In What Governance Model Do You Think This Idea Might Be Applicable?
Microeconomics is a branch of economics that studies the behavior of individuals and small impacting organizations in making decisions on the allocation of limited resources.
All engineering is constrained optimization. How do we take the resources we've been given and deliver the best outcomes? That's what microeconomics is. Unlike models of mechanical engineering or classical physics, the models of microeconomics are never precise. They are probabilistic, driven by the underlying statistical processes of the two primary actors - suppliers and consumers.
Let's look at both in light of the allocation of limited resources paradigm.
In both cases time, money, and capacity for productive value are limited (scarce); they compete with each other and with the needs of both the supplier and the consumer. In addition, since the elasticity of labor costs is limited by the market, we can't simply buy cheaper to make up for time and capacity. It's done, of course, but always to the detriment of quality and actual productivity.
So cost is inelastic, time is inelastic, capacity for work is inelastic, and other attributes of the developed product are constrained. The market need is likely constrained as well. Business needs are rarely elastic - oh, we really didn't need to pay people in the time keeping system; let's just collect the time sheets, and we'll run payroll when that feature gets implemented.
Enough Knowing, Let's Have Some Doing
With the principles of Microeconomics applied to software development, there is one KILLER issue that, if willfully ignored, ends the conversation for any business person trying to operate in the presence of limited resources - time, money, and capacity for work.
The decisions being made about these limited resources are being made in the presence of uncertainty. This uncertainty - as mentioned - is based on random processes. Random processes produce imprecise data: data drawn from random variables, random variables with variances, instability (stochastic processes), and nonlinear stochastic processes.
Quick Diversion Into Random Variables
There are many mathematical definitions of random variables, but for this post let's use a simple one.
A simple example - silly but illustrative - would be HR wanting to buy special shoes for the development team, with the company logo on them. If we could not for some reason (it doesn't matter why) measure the shoe size of all the males on our project, we could estimate how many shoes of what size would be needed from the statistical distribution of male shoe sizes for a large population of male coders.
This would get us close to how many shoes of what size we need to order. This is a notional example, so please don't place an order for actual shoes. But the underlying probability distribution of the values the random variable can take on can tell us about the people working on the project.
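A sketch of that idea in Python, using the statistics module's NormalDist; the mean, standard deviation, and head count are illustrative assumptions, not measured data for any real team:

```python
from statistics import NormalDist

# Hypothetical distribution of male shoe sizes - the mean, standard
# deviation, and head count are illustrative assumptions, not data.
sizes = NormalDist(mu=10.5, sigma=1.5)
team = 40  # hypothetical number of developers

# Expected orders per size: probability mass of each half-size-wide bucket
for size in [8, 9, 10, 11, 12, 13]:
    p = sizes.cdf(size + 0.5) - sizes.cdf(size - 0.5)
    print(f"size {size}: order about {round(p * team)} pairs")
```

The distribution stands in for measurements we couldn't take - exactly what an estimate does.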
Since all the variables on any project are random variables, we can't know their exact values at any one time. But we can know about their possible ranges and the probabilities of any specific value when asked to produce that value for making a decision.
The variability of the population values and its analysis should be seen not as a way of making precise predictions about project outcomes, but as a way of ensuring that all relevant outcomes produced by these variables have been considered, that they have been evaluated appropriately, and that we have a reasonable sense of what will happen across the multitude of values produced by a specific variable. It provides a way of structuring our thinking about the problem.
Making Decisions In The Presence of Random Variables
To make a decision - a choice among several alternatives - means making an opportunity cost decision based on random data. And if there is only one choice, then the decision is either take the choice or don't.
This means the factors that go into that decision are themselves random variables: labor, productivity, defects, capacity, quality, usability, functionality, produced business capability, time. Each is a random variable, interacting in nonlinear ways with the other random variables.
To make a choice in the presence of this paradigm, we must estimate not only the behavior of the variables, but also the behavior of the outcomes.
In other words
To develop software in the presence of limited resources driven by uncertain processes for each resource (time, money, capacity, technical outcomes), we must ESTIMATE the behaviors of these variables that inform our decision.
It's that simple and it's that complex. Anyone conjecturing decisions can be made in the absence of estimates of the future outcomes of that decision is willfully ignoring the Microeconomics of business decision making in the software development domain.
For those interested in further exploring the core principles of the software development business beyond this willful ignorance, here's a starting point.
These are the tip of the big pile of books, papers, and journal articles on estimating software systems.
Making choices in the presence of uncertainty can be informed by several means:
This is empirical data. But there are several critically important questions that must be answered if we are not going to be disappointed with our empirical data outcomes
Calculating the number of samples needed for a specific level of confidence requires some statistics, but here's a place to start. Suffice it to say, those conjecturing estimates based on past performance (the number of story points in the past) will need to produce the confidence calculation before any non-trivial decisions are made on their data. Without those calculations, the use of past performance will be very sporty when spending other people's money.
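One standard starting point - an illustration, not the only method - is the normal-approximation sample size formula n = (z·σ/E)², sketched here in Python with hypothetical numbers:

```python
import math
from statistics import NormalDist

# Sample size needed to estimate a mean within +/- margin at a given
# confidence, assuming an estimated standard deviation sigma
# (normal approximation; the numbers below are illustrative).
def samples_needed(sigma, margin, confidence=0.95):
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided z-score
    return math.ceil((z * sigma / margin) ** 2)

# e.g. past throughput sigma of ~6 story points, mean wanted within +/- 2 points
print(samples_needed(sigma=6.0, margin=2.0))  # 35 samples - far more than a few sprints
```

Twelve stickies-per-week samples fall well short of that, which is the heart of the objection above.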
Thanks to Richard Askew for suggesting the addition of the random variable background.
If you are not on my Pragmatic Manager email list, you might not know about these opportunities to explore several topics with me this month:
An Estimation hangout with Marcus Blankenship this Friday, April 10, 2:30pm EDT. If you have questions, please email me or Marcus. See the Do You Have Questions About Estimation post. Think of this hangout as a clinic, where I can take your questions about estimation and help you address your concerns.
In the Kitchener-Waterloo area April 29&30, I’m doing two workshops that promise to be quite fun as well as educational:
To see the descriptions, see the KWSQA site.
You do not have to be a manager to participate in either of these workshops. You do need to be inquisitive and willing to try new things. I believe there is only room for two people for the leadership workshop. I think there is room for five people in the project portfolio workshop. Please do sign up now.
How should we value our work as agile coaches and consultants? The way I do it is to ask myself if I will have had a positive long-term impact on a team or organization. I’m not particularly interested in short-term impacts. In fact, many coaching engagements could have a negative impact in the short term if I’ve done or suggested anything disruptive.
It would be nice if these changes were always easily and directly measurable. Unfortunately, they really aren’t. To measure the impact of my coaching, we would need at least two identical teams, two identical products, and at least a handful of years.
One team would build their product without my coaching. The other team would build theirs with my coaching. Their sales forces and all other supporting functions would need to be identical.
If all other factors were made equal, though, we could measure the impact of my coaching on that team. We’d simply look at sales for the two products over the handful of years and know which had done better.
In some ways, of course, it will be your clients who determine your value as an agile coach. But sometimes clients are not in a good position to judge value. Some clients want you to parrot back to their teams what they’ve said—regardless of whether that is valuable advice or not. Other clients really do want their teams to receive the best possible advice. These are, of course, the clients that we, as coaches and consultants, treasure.
So ultimately, we are the best judges of the value we add. We can bring a proper long-term view, but we need to look critically at our work. Is our advice helping? Is it pushing people to improve? Is it too disruptive? Not disruptive enough? Is it appropriate for the situation?
A slide in my Certified ScrumMaster class says that a ScrumMaster “unleashes the energy and intelligence of others.” In class I often joke that I want to go home at the end of the day and answer my wife’s question of, “What did you do today?” with, “I unleashed the energy and intelligence of others.”
But, on the days I can do that, I find I’ve delivered value to my clients.
An estimate is the most knowledgeable statement you can make at a particular point in time regarding cost/effort, schedule, staffing, risk, and the ...ilities of the product or service.
Immature versus Mature Software Organizations 
Setting sensible goals for improving software development processes requires understanding the difference between immature and mature organizations. In an immature organization, processes are generally improvised by practitioners and their management during the course of the project. Even if a process has been specified, it is not rigorously followed or enforced.
Immature organizations are reactionary with managers focused on solving immediate crises. Schedules and budgets are routinely exceeded because they are not based on realistic estimates. When hard deadlines are imposed, product functionality and quality are often compromised to meet the schedule.
In immature organizations, there is no objective basis for judging product quality or for solving product or process problems. The result is product quality is difficult to predict. Activities intended to enhance quality, such as reviews and testing, are often curtailed or eliminated when projects fall behind schedule.
Mature organizations possess processes that guide the organization-wide ability to manage development and maintenance. The process is accurately communicated to existing staff and new employees, and work activities are carried out according to the planned process. The processes mandated are usable and consistent with the way the work actually gets done. These defined processes are updated when necessary, and improvements are developed through controlled pilot tests and/or cost-benefit analyses. Roles and responsibilities within the defined process are clear throughout the project and across the organization.
Let's look at the landscape of maturity on estimating the work for those providing the funding for the work.
Projects are small, short, and while important to the customer, not likely critical to the success of the business in terms of cost and schedule.
The result of this level of maturity is poor forecasting of the cost and schedule of the planned work. And surprises for those paying for the work.
Projects may be small, short, and possibly important. Some form of estimating, either from past experience or from decomposition of the planned work, is used to make linear projections of future cost, schedule, and technical performance.
This past performance is usually not adjusted for the variances of the past - it's just an average. As well, the linear average usually doesn't consider changes in the demand for work, technical differences in the work, and other uncertainties in the future of that work.
This is the Flaw of Averages approach to estimating. As well, the effort needed to decompose the work into same-sized chunks is the basis of all good estimating processes. In the space and defense business, the 44-day rule is used to bound the duration of work. This answers the question how long are you willing to wait before you find out you're late? For us, the answer is no more than one accounting period. In practice, project status - physical percent complete - is taken every Thursday afternoon.
There is an estimating process, using recorded past performance and statistical adjustments of that past performance. Reference classes are used to model future performance from the past. Parametric estimates can be used with those reference classes or other estimating processes. Function Points are common in enterprise IT projects, where interfaces to legacy systems, database topology, user interfaces, and transactions are the basis of the business processes.
The notion of we've never done this before, so how can we estimate? begs the question do you have the right development team? This is a past performance issue. Why hire a team that has no understanding of the problem and then ask them to estimate the cost of the solution? You wouldn't hire a plumber to install a water system if she hadn't done this before in some way.
4. Quantitatively Managed
Measures, metrics, and architecture assessments - Design Structure Matrix is one we use - are used to construct a model of the future. External databases are referenced to compare internal estimates with external experiences.
There is an estimating organization that supports development, starting with the proposal and continuing through project close-out. As well, there is a risk management organization helping inform the estimates about possible undesirable outcomes in the future.
Improving Estimate Maturity for More Successful Projects, SEER/Tracer Presentation, March 2010.
 Software Engineering Information Repository, Search Results on Estimates
 The Capability Maturity Model for Software
We don't need to hope the sun will come up tomorrow. That probability is 100%. The probability that the outpatient surgery you've scheduled next week will be successful is high - the surgeon has done this procedure 1,000s of times with a 98% success rate.
The probability that you'll be able to arrive on time with the needed features for the first release of the accounts payable and accounts receivable upgrade to the ERP system is not like the sun coming up or the outpatient surgery. It's a probability that you'll actually have to think about. Actually have to calculate. Actually make an estimate of, before those paying for your work can make a decision about when to switch from the old system to the new system.
Hoping we can show up when needed is not a strategy. We need to show up on time, on budget, with the needed capabilities. To do that, we need a Closed Loop Control System, in which we have a target performance and measures of actual performance, compared against estimates of the planned performance, to create an error signal used to take corrective actions.
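A minimal sketch of that error-signal loop, with illustrative numbers (the plan and actuals are made up for the example):

```python
# Minimal sketch of a closed-loop control cycle for project work.
# The plan and actuals below are illustrative numbers, not real data.
planned = [10, 20, 30, 40, 50]   # planned cumulative features per period
actual  = [9, 17, 26, 33, 41]    # measured cumulative features per period

for period, (plan, act) in enumerate(zip(planned, actual), start=1):
    error = plan - act           # the error signal: plan minus actual
    if error > 0:
        print(f"period {period}: {error} behind plan - take corrective action")
    else:
        print(f"period {period}: on or ahead of plan")
```

Without the planned (estimated) values there is nothing to compare actuals against, so no error signal and no control.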
If you're on a project that has certainty, then you're wasting your time estimating. If you are certain things are going to turn out the way you think they will when you have choices to make about the future, then estimating the impact of those choices is a waste of time.
If your future is clear as day far enough ahead that you can see what's going to happen long before you get there, estimating is a waste. If you live in Fantasyland, you really don't need to estimate the impact of decisions made today on outcomes in the future.
Peter Pan and Tinker Bell will look after you and make sure nothing comes as a surprise.
If, however, you live in the real world of projects, then uncertainty is the dominant force driving your project.
Let's say some things about uncertainty. First, a tautology:
Uncertainties are things we cannot be certain about.
Uncertainty is created by our incomplete knowledge of the future, present, or past. Uncertainty is not about our ignorance of the future, past, or present. When we say uncertain, we speak about some state of knowledge of the system of interest that is not fixed or determined. If we are in fact ignorant of the future, then the only approach is to spend money to find things out before proceeding. Estimating is not needed in this case either. We can only estimate after we have acquired the needed knowledge. This knowledge will be probabilistic of course. But starting a project in the presence of ignorance of the future is itself a waste, unless those paying for the project are also willing to pay to discover things they should have known before starting. At that point it's not really a project - which has bounded time and scope.
So when you hear we're exploring, first ask who's paying for that exploration. Is the exploration part of a plan for the project? Can that cost be counted in the cost of the project and therefore be burdened on the ROI of the project? Maybe finding someone who actually knows the project domain and can define the uncertainties would be faster, better, and cheaper than hiring someone who knows little about what they're doing and is going to spend your money finding out.
This is one reason for a Past Performance section in every proposal we submit - tell me (the buyer) you guys actually know WTF you're doing and that you're not learning on my dime.
Back to Uncertainty
Uncertainty is related to three aspects of the project management domain:
There are many sources of uncertainty; here are a few:
This project uncertainty creates risk to the success of the project: cost, schedule, and performance risk - performance being the ability to deliver the needed capabilities in exchange for cost and schedule. There is a formal relationship between uncertainty and risk.
There are two types of uncertainty on all projects:
Separating these classes helps in the design of assessment calculations and in the presentation of results for the integrated program risk assessment.
Three Conditions for Aleatory Uncertainty
Aleatory uncertainty cannot be reduced - it is Irreducible.
Epistemic Uncertainty is any lack of knowledge or information in any phase or activity of the project. This uncertainty and the resulting risks can be reduced, through testing, modeling, past performance assessments, research, comparable systems, and other processes to increase the knowledge needed to reduce the uncertainty in the knowledge of the project outcomes.
Epistemic uncertainty can be further classified into model, phenomenological, and behavioural uncertainty. (in "Risk-informed Decision-making In The Presence Of Epistemic Uncertainty," Didier Dubois, Dominique Guyonnet, International Journal of General Systems 40, 2 (2011) 145-167)
Dealing with Aleatory and Epistemic Uncertainty
So Now The Punch Line
To manage in the presence of these two uncertainties - reducible and irreducible - we need to know something about what will happen when we make decisions to mitigate the risks that result from the uncertainties: the actions we need to take to reduce the reducible risks, or to provide margin for the irreducible ones.
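For the irreducible (aleatory) part, margin can be sized with a simple Monte Carlo sketch; the task durations below are illustrative assumptions, not data from any real program:

```python
import random

random.seed(7)  # reproducible illustration

# Three tasks with (min, most likely, max) durations in days. The spread
# is aleatory - it does not shrink with more study, so we protect against
# it with schedule margin rather than more analysis.
tasks = [(4, 5, 9), (8, 10, 16), (3, 4, 7)]

totals = sorted(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for _ in range(10_000)
)
most_likely = sum(mode for _, mode, _ in tasks)   # naive single-point plan
p80 = totals[int(0.8 * len(totals))]              # 80th percentile duration

print(f"most likely: {most_likely} days, plan with margin: {p80:.1f} days")
```

The gap between the single-point plan and the 80th percentile is the margin the irreducible uncertainty demands.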
The probability that our actions will produce desirable outcomes in the presence of these uncertainties. The probabilities that the residual uncertainties will be sufficiently low, so we will still have sufficient confidence of success - defined in any units of measure you want. Ours are on time, on budget, on specification. You can pick your own, but please write them down in units of measure meaningful to the decision makers.
So Here it is, Wait for It
You can't make decisions in the presence of uncertainty unless you estimate the outcomes of these decisions, influenced by the probabilistic nature of the drivers of the uncertainties.
Let's make it clear - You can't Make Decisions For Uncertain Outcomes Without Estimating
If anyone says you can, ask to see exactly how. They can't show you. Move on; they don't know what they're talking about.
I am doing a google hangout with Marcus Blankenship on April 10. We’ll be talking about estimation and my new book, Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule.
The book is about ways you can estimate and explain your estimates to the people who want to know. It also has a number of suggestions for when your estimates are inaccurate. (What a surprise!)
Marcus and I are doing a google hangout, April 10, 2015. There's only room for 15 people on the hangout live. If you want me to answer your question, go ahead and send your question in advance by email. Send your questions to marcus at marcusblankenship dot com. Yes, you can send your questions to me, and I will forward them to Marcus.
The details you‚Äôll need to attend:
Youtube live streaming link: http://www.youtube.com/watch?v=82IXhj4oI0U
Time & Date: April 10th, 2015 @ 11:30am Pacific, 2:30 Eastern.
Hope you join us!
An announcement came across in email today from AACEI (Association for the Advancement of Cost Engineering International) about a Denver meeting on Decision Analysis.
Decision Analysis provides ways to incorporate judgments about uncertainty into the evaluation of alternatives. Cost professionals using these methods can provide more-credible analyses. The foundation calculation is expected value, usually solved by a decision tree or by Monte Carlo simulation. A formal definition is:
Decision analysis is the discipline comprising the philosophy, theory, methodology, and professional practice necessary to address important decisions in a formal manner.
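A minimal expected-value calculation - the foundation computation mentioned above - with illustrative probabilities and payoffs that are my assumptions, not figures from the announcement:

```python
# Expected-value comparison of two alternatives; the probabilities and
# payoffs are illustrative assumptions, not data from the article.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

build = [(0.6, 500_000), (0.4, -200_000)]   # risky in-house build
buy = [(0.9, 250_000), (0.1, -50_000)]      # safer off-the-shelf buy

print(round(expected_value(build)))  # 220000
print(round(expected_value(buy)))    # 220000
```

Here the two alternatives happen to have the same expected value, so the choice turns on risk attitude - exactly the judgment decision analysis makes explicit.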
In project management and especially the software development project management domain, making decisions is always about making decisions in the presence of uncertainty. Uncertainty is always in place for the three core elements of any project, shown below:
In order to make decisions about future outcomes of a project subject to these uncertainties, we need to not only know how these three variables randomly interact, but also how they behave as standalone processes. This behaviour - in the presence of uncertainty - has two types:
Both these behaviours are present in all project work. If they are not present the project is likely simple enough, you can just start working and have the confidence of completing on time, on budget and have the outcome work.
When neither the behaviours nor their interactions are deterministic, then to make decisions we can do only one thing.
We need to estimate these behaviours
When it is suggested that we can make decisions in the presence of uncertainty, yet no method is provided for making those decisions, the suggestion cannot be taken as true.
Recording past performance and then taking the average of that performance as an estimate of future performance is seriously flawed, as discussed in How to Avoid Yesterday's Weather Estimating Problem.
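A quick sketch of why the naive average misleads, using hypothetical right-skewed task durations (most work finishes quickly; a few items take far longer). The lognormal parameters below are assumptions for illustration only, but the shape is typical of cycle-time data:

```python
import random
import statistics

def yesterdays_weather_demo(samples=50_000, seed=7):
    """Hypothetical right-skewed task durations.

    Returns the sample mean, the share of tasks that exceed that mean
    (i.e., that blow the 'yesterday's weather' average forecast), and
    the 80th-percentile duration a credible forecast would quote instead.
    """
    rng = random.Random(seed)
    durations = [rng.lognormvariate(2.0, 0.6) for _ in range(samples)]
    mean = statistics.fmean(durations)
    share_over_mean = sum(d > mean for d in durations) / samples
    p80 = sorted(durations)[int(0.8 * samples)]
    return mean, share_over_mean, p80

mean, share_over_mean, p80 = yesterdays_weather_demo()
print(f"mean duration        : {mean:.1f}")
print(f"share over the mean  : {share_over_mean:.0%}")
print(f"80th percentile      : {p80:.1f}")
```

With these assumed parameters, roughly four tasks in ten run past the "average" forecast, and the 80th-percentile duration sits well above the mean. A single average hides exactly the tail behaviour that drives schedule risk, which is the point of the linked post.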
So do the math: random variables abound in our project domain, be it software, hardware, pouring concrete, or welding steel. It's all random. Don't fall prey to the simple-minded statement that "we're exploring ways to make decisions without estimates" without asking the direct question: show me how that nonexistent description does not violate the principles of Decision Analysis and the Microeconomics of software development decision making.
Related articles:
- Calculating Value from Software Projects - Estimating is a Risk Reduction Process
- Five Estimating Pathologies and Their Corrective Actions
- The Cost Estimating Problem
- Why We Need Governance
The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.
During a sprint retrospective, team members gather and discuss ways in which they can improve. This should include the ScrumMaster and product owner, as each is part of the team. The team is not limited to finding improvements only within their process. They may, for example, decide they need to improve their skills with a given technology and to seek out training in that area.
A concern many teams have around retrospectives is whether the improvements they identify should be shared with others or kept within the team.
I think it’s ideal when all retrospective results from all teams can always be shared. Transparency is, after all, a pillar on which the Scrum process is built. But as much as I would love a culture of openness to exist such that everything can always be shared with everyone, that is often not the case.
In my experience, most teams will have no issue making public most of the improvements they identify. The items will be predictable things, often similar to those being done by other agile teams in the organization: get better at this, learn how to do that, figure out how to do more of this and how to do the other thing earlier each sprint.
Occasionally, however, a team may have something they don’t want to share. Some examples I’ve seen include:
I want to repeat that while transparency is a virtue for agile teams, not all agile teams may be there yet. While perhaps keeping the results of their next few retrospectives private, those teams can work on feeling more comfortable opening up in the future.
Similarly, sometimes making a planned improvement known can impact the ability to make that improvement. I think that’s the case with the last example above.
Posting that you want to work better with the marketing group may make the marketing group either resist the change, or want to know what’s wrong with them that you need to change.
(Of course, they could also be quite open to changing to improving the relationship, so I would always discuss the situation with someone in the other group.)
Making this very practical, at the end of each sprint, the team has a list of changes they have agreed to make. I like to ask the team if there are any objections to making the list public. If there are not, I will hang the list in the team room or add it to the team’s home page.
When there are objections, I will see if we can leave perhaps one item off the publicly displayed list—usually that’s sufficient.
In the end, Scrum teams should have the courage to be transparent about their planned improvements whenever possible. But, occasionally there is more to be gained by keeping a retrospective item or two private.
Some of my coaching clients have way more to do than they can manage. Some of my project portfolio clients are struggling with how to say no.