
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Project Management

Software Development Conferences Forecast July 2015

From the Editor of Methods & Tools - 22 hours 21 min ago
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & […]


10x Software Development - Steve McConnell - Thu, 07/30/2015 - 22:13

I've posted a YouTube video that gives my perspective on #NoEstimates. 

This is in the new Construx Brain Casts video series. 


Inversion of Control In Commercial Off The Shelf Products (COTS)

Herding Cats - Glen Alleman - Thu, 07/30/2015 - 19:25

The architecture of COTS products comes fixed from the vendor. As standalone systems this is not a problem. When integration starts, it is a problem. 

Here's a white paper from the past that addresses this critical enterprise IT issue

Inversion of Control from Glen Alleman
Categories: Project Management

Architecture-Centered ERP Systems in the Manufacturing Domain

Herding Cats - Glen Alleman - Thu, 07/30/2015 - 13:51

I found another paper, presented in a newspaper systems journal, on architecture in manufacturing and ERP systems.

One of the 12 Principles of Agile says The best architectures, requirements, and designs emerge from self-organizing teams. This is a developer's point of view of architecture. The architect's point of view looks like this:

Architectured Centered Design from Glen Alleman
Categories: Project Management

Embracing the Zen of Program Management

The lovely folks at Thoughtworks interviewed me for a blog post, Embracing the Zen of Program Management.  I hope you like the information there.

If you want to know about agile and lean program management, see Agile and Lean Program Management: Scaling Collaboration Across the Organization. In beta now.

Categories: Project Management

Great Review of Predicting the Unpredictable

Ryan Ripley “highly recommends” Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule. See his post: Pragmatic Agile Estimation: Predicting the Unpredictable.

He says this:

This is a practical book about the work of creating software and providing estimates when needed. Her estimation troubleshooting guide highlights many of the hidden issues with estimating such as: multitasking, student syndrome, using the wrong units to estimate, and trying to estimate things that are too big. — Ryan Ripley

Thank you, Ryan!

See Predicting the Unpredictable: Pragmatic Approaches to Estimating Cost or Schedule for more information.

Categories: Project Management

IT Risk Management

Herding Cats - Glen Alleman - Wed, 07/29/2015 - 02:46

I was sorting through a desk drawer and came across a collection of papers from book chapters and journals written in the early 2000s, when I was the architect of an early newspaper editorial system.

Here's one on Risk Management

Information Technology Risk Management from Glen Alleman. This work was done early in the risk management development process. Tim Lister's quote came later: Risk management is how adults manage projects.
Categories: Project Management

Estimates on Split Stories Do Not Need to Equal the Original

Mike Cohn's Blog - Tue, 07/28/2015 - 15:00

It is good practice to first write large user stories (commonly known as epics) and then to split them into smaller pieces, a process known as product backlog refinement or grooming. When product backlog items are split, they are often re-estimated.

I’m often asked if the sum of the estimates on the smaller stories must equal the estimate on the original, larger story.


Part of the reason for splitting the stories is to understand them better. Team members discuss the story with the product owner. As a product owner clarifies a user story, the team will know more about the work they are to do.

That improved knowledge should be reflected in any estimates they provide. If those estimates don’t sum to the same value as the original story, so be it.

But What About the Burndown?

But, I hear you asking, what about the release burndown chart? A boss, client or customer was told that a story was equal to 20 points. Now that the team split it apart, it’s become bigger.

Well, first, and I always feel compelled to say this: We should always stress to our bosses, clients and customers that estimates are estimates and not commitments.

When we told them the story would be 20 points, that meant perhaps 20, perhaps 15, perhaps 25. Perhaps even 10 or 40 if things went particularly well or poorly.

OK, you’ve probably delivered that message, and it may have gone in one ear and out the other of your boss, client or customer. So here’s something else you should be doing that can protect you against a story becoming larger when split and its parts are re-estimated.

I’ve always written and trained that the numbers in Planning Poker are best thought of as buckets of water.

You have, for example, an 8 card and a 13 card but not a 10 card. If you have a story that you think is a 10, you need to estimate it as a 13. This slight rounding up (which only occurs on medium to large numbers) will mitigate the effect of stories becoming larger when split.

Consider the example of a story a team thinks is a 15. If they play Planning Poker the way I recommend, they will call that large story a 20.

Later, they split it into multiple smaller stories. Let’s say they split it into stories they estimate as 8, 8 and 5. That’s 21. That’s significantly larger than the 15 they really thought it was, but not much larger at all than the 20 they put on the story.

In practice, I’ve found this slight pessimistic bias works well to counter the natural tendency I believe many developers have to underestimate, and to provide a balance against those who will be overly shocked when any actual result overruns its estimate.
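This bucket rule is easy to mechanize. Here is a minimal sketch (the card values are the common Planning Poker deck; the function name is my own, not from the post) that rounds a raw gut-feel estimate up to the next card:

```python
# Round a raw gut-feel estimate up to the next Planning Poker card,
# as described above: a story felt to be a 10 is played as a 13.
PLANNING_POKER_CARDS = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def to_poker_bucket(raw_estimate):
    """Return the smallest card that is at least the raw estimate."""
    for card in PLANNING_POKER_CARDS:
        if card >= raw_estimate:
            return card
    return PLANNING_POKER_CARDS[-1]

print(to_poker_bucket(10))  # 13  (the 10 -> 13 example)
print(to_poker_bucket(15))  # 20  (the 15 -> 20 example)
```

Note that exact card values, like 8, stay where they are; only in-between values round up.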

Why Guessing is not Estimating and Estimating is not Guessing

Herding Cats - Glen Alleman - Mon, 07/27/2015 - 19:00

I hear all the time that estimating is the same as guessing. This is not true mathematically, nor is it true in business process terms. Guessing is an approach used by many who do not understand that making decisions in the presence of uncertainty requires we understand the impact of that decision. When the future is uncertain, we need to know that impact in probabilistic terms. And with this comes the confidence, precision, and accuracy of the estimate.

What’s the difference between an estimate and a guess? The distinction between the two words is the degree of care taken in arriving at a conclusion.

The word estimate is derived from the Latin aestimare, meaning to value. It is the origin of estimable, which means capable of being estimated or worthy of esteem, and of course esteem, which means regard, as in held in high regard.

To estimate means to judge the extent, nature, or value of something, with the implication that the result is based on expertise or familiarity. An estimate is the resulting calculation or judgment. A related term is approximation, meaning close or near.

In between a guess and an estimate is an educated guess, a more casual estimate. An idiomatic term for this type of middle-ground conclusion is ballpark figure. The origin of this American English idiom, which alludes to a baseball stadium, is not certain, but one conclusion is that it is related to in the ballpark, meaning close in the sense that one at such a location may not be in a precise location but is in the stadium.

To guess is to believe or suppose, to form an opinion based on little or no evidence, or to be correct by chance or conjecture. A guess is a thought or idea arrived at by one of these methods. Synonyms for guess include conjecture and surmise, which like guess can be employed both as verbs and as nouns.

We could have a hunch or an intuition, or we can engage in guesswork or speculation. Dead reckoning is now the same thing as guesswork, although it originally referred to a navigation process based on reliable information. Near synonyms describing thoughts or ideas developed with more rigor include hypothesis and supposition, as well as theory and thesis.

A guess is a casual, perhaps spontaneous conclusion. An estimate is based on intentional  thought processes supported by data.

What Does This Mean For Projects?

If we're guessing, we're making uninformed conclusions, usually in the absence of data, experience, or any evidence of credibility. If we're estimating, we are making informed conclusions based on data, past performance, and models, including Monte Carlo models and parametric models.
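As a sketch of what estimating with models can mean in practice, here is a minimal Monte Carlo schedule model. The task numbers and the choice of triangular distributions are illustrative assumptions, not from the post:

```python
import random

# Minimal Monte Carlo schedule model: each task's duration is sampled
# from a triangular distribution defined by optimistic, most-likely,
# and pessimistic values taken from past performance (numbers are
# hypothetical).
tasks = [
    (5, 8, 15),   # (optimistic, most likely, pessimistic) days
    (3, 5, 10),
    (8, 12, 25),
]

def simulate(tasks, trials=10_000, rng=random.Random(7)):
    """Return sorted totals of `trials` sampled project durations."""
    return sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks)
        for _ in range(trials)
    )

totals = simulate(tasks)
p50 = totals[len(totals) // 2]
p80 = totals[int(len(totals) * 0.8)]
print(f"P50 = {p50:.1f} days, P80 = {p80:.1f} days")
```

The informed conclusion is then a probabilistic statement - "an 80% chance of finishing within the P80 value" - rather than a single guessed number.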

When we hear that decisions can be made without estimates, or that all estimating is guessing, we now know that neither mathematically nor in business process terms is this true.

This post is derived from Daily Writing Tips 

Related articles: Making Conjectures Without Testable Outcomes; Strategy is Not the Same as Operational Effectiveness; Are Estimates Really The Smell of Dysfunction?; Information Technology Estimating Quality
Categories: Project Management

Software Development Linkopedia July 2015

From the Editor of Methods & Tools - Mon, 07/27/2015 - 14:50
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about Agile retrospectives, remote teams, Agile testing, Cloud architecture, exploratory testing, software entropy, introverted software developers and Scrum myths. Blog: 7 Best Practices for Facilitating Agile Retrospectives Blog: How Pairing Powers […]

Making Conjectures Without Testable Outcomes

Herding Cats - Glen Alleman - Fri, 07/24/2015 - 21:05

I never met Carl Sagan. I've read his materials (both technical and popular) and listened to his talks. Dr. Sagan was a fierce questioner of many things. But in that questioning process is a framework by which answers can be discovered. Here are two nice quotes.

Nice Hypothesis

When we hear some conjecture about a solution to a problem, and there is no stated problem (root cause) and no stated test of how that solution is going to fix that problem, think of Carl.

The conjecture that there is cause and effect, without confirming cause and effect, is another common naive thought process. We get that from the anti-vaccine movement and global warming deniers, to name a few. We also get this from those who conjecture that estimates are the smell of dysfunction without ever stating the dysfunctions, or discovering the cause and effect connections for the (unstated) dysfunction.

Dr. Sagan's primary message was and still is:

I don't want to believe, I want to know.

If we seek to improve the probability of success for our software intensive systems, we can't just believe the unsubstantiated conjecture of a group of unhappy developers tired of being abused by bad managers. We need tangible evidence that their conjectures are not only testable outside their personal anecdotes, but also that those conjectures are not violations of the basis of all business decision making.

And just for the record.

  • Estimating is hard, so are many things. That does not remove the need to estimate when making decisions in the presence of uncertainty.
  • Estimates are misused by bad managers and even well-intentioned managers. That does not remove their need when making non-trivial business and technical decisions. By non-trivial I mean the value at risk is at a level that unfavorably impacts the business when the decision turns out to be wrong.
  • All project work is probabilistic, with uncertainty in the future outcomes. Making decisions requires making estimates.
  • Assessing value requires we know the cost to achieve that value. Since both value and cost are probabilistic, no assessment of value can take place without estimating both. This is the basis of the microeconomics of business management.

A final thought about unsubstantiated opinions masquerading as personal anecdotes (thanks for this, Peter):


Categories: Project Management

Climbing Mountains Requires Good Estimates

Herding Cats - Glen Alleman - Thu, 07/23/2015 - 21:50

There was an interesting post on the #NoEstimates thread that triggered memories of our hiking and climbing days with our children (now grown and gone) and of our neighbor who has summited many of the highest peaks around the world.

The quote was Getting better at estimates is like using time to plan the Everest climb instead of climbing smaller mountains for practice.

A couple background ideas:

  • The picture above is Longs Peak. We can see Longs Peak from our back deck in Niwot, Colorado. It's one of 53 14,000-foot mountains in Colorado - the Fourteeners. Longs is one of 4 along the Front Range.

In our neighborhood are several semi-pro mountain climbers. Like them, we moved to Colorado for the outdoor life - skiing, mountain and road biking, hiking, and climbing. 

Now to the tweet suggesting that getting better at estimating can be replaced by doing (climbing) smaller projects. It turns out estimates are needed for those smaller mountains; estimates are needed for all hiking and climbing. But first...

  • No one is going to climb Everest - and live to tell about it - without first having summited many other high peaks.
  • Anyone interested in the trials and tribulations of Everest should start with Jon Krakauer's Into Thin Air: A Personal Account of the Mt. Everest Disaster.
  • Before attempting - and attempting is the operative word here - any significant peak, several things have to be in place.

Let's start with those Things.

No matter how prepared you are, you need a plan. Practice on lower peaks is necessary but far from sufficient for success. Each summit requires planning in depth. For Longs Peak you need a Plan A, a Plan B, and possibly a Plan C. Most of all you need strong estimating skills and the accompanying experience to determine when to invoke each plan. People die on Longs because they foolishly think they can beat the odds and proceed with Plan B, only to discover Plan C - abandon the day - was what they should have done.

So the suggestion that you can summit something big, like any of the Seven Summits, without both deep experience and deep planning likely means never being heard from again.

The OP is likely speaking from not having summited much of anything; it's hard to tell, since no experience resume is attached.

The estimating part is basic. Can we make it to the Keyhole on Longs Peak before the afternoon storms come in? On Everest, can we make it to the Hillary Step before 1:00 PM? No? Turn back; you're gonna die if you continue.

Can we make it to the delivery date at the pace we're on now, AND with the emerging situation for the remaining work, AND for the cost we're trying to keep, AND with the capabilities the customer needs? Remember, the use of past performance is fine if and only if the future is something like the past, or we know something about how the future is different from the past.
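The "can we make the date at the pace we're on now" question can be sketched as a simple velocity check. The numbers and names below are hypothetical:

```python
# Can we make the delivery date at our current pace? Remaining scope in
# points, observed velocity per sprint, and sprints left before the date
# are all hypothetical numbers.
def sprints_needed(remaining_points, velocity_per_sprint):
    # Ceiling division: a partial sprint still consumes a whole sprint.
    return -(-remaining_points // velocity_per_sprint)

remaining, velocity, sprints_left = 120, 18, 6
needed = sprints_needed(remaining, velocity)
print(f"Need {needed} sprints, have {sprints_left}:",
      "on track" if needed <= sprints_left else "invoke Plan B")
```

A real projection would also carry the variance of the velocity, not just its average, which is exactly where the estimating skill comes in.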

When the future is not like the past, we need a Plan B. And that plan has to have estimates of our future capabilities, our cost expenditure rate, and our ability to produce the needed capabilities.


Ask any hiker, climber, development manager, or business person. It's time to stop managing by platitudes and start managing by the principles of good management.

Related articles: There is No Such Thing as Free; The Fallacy of the Planning Fallacy; Systems Thinking, System Engineering, and Systems Management; Myth's Abound; Eyes Wide Shut - A View of No Estimates; Estimating Processes in Support of Economic Analysis; Applying the Right Ideas to the Wrong Problem; Making Decisions In The Presence of Uncertainty
Categories: Project Management

Deadlines Always Matter

Herding Cats - Glen Alleman - Thu, 07/23/2015 - 17:04

There is a popular notion in the Agile world that with continuous deployment and frequent shipping, dates should cease to matter. The next big feature should be 'just' one release of many.

But the capabilities provided by the software system many times have dependencies on other capabilities. Here's an example from a health insurance provider network system. There is a minimum number of features needed to provide a single capability that the business can put to work making money. Certainly continuous delivery of features is always a good idea. But the business is looking for capabilities, not just features - the capability to do something of value. This is the Value used - and many times misused - in Agile.

It's not about working software (which is necessary). It's about that working software being able to produce measurable value for the business. That can be revenue, services, or operational processes. In the enterprise, these capabilities need to be delivered in the right order, at the right time, and for the right cost for the business to meet its business goals. Rarely are they independent in practice.

Capabilities Flow

To discover what capabilities are needed, here's one approach taken from our Capabilities Based Planning paradigm



Here's a more detailed process description.

The notion that deadlines are no longer needed in agile domains is technically not possible for any non-trivial system when the business is expecting a set of capabilities that are themselves dependent on other capabilities. An accounting system that can issue Purchase Orders but not do Accounts Payable is of little use. An HR system that can screen resumes and hire people, but not stand up the payroll records, is of little use.

When we hear about how agile can do all these wonderful things, make sure you test it with those paying for those wonderful things first. This of course goes for the #NoEstimates notion as well - ask those paying if they have no interest in knowing when those capabilities will be available and how much they will cost. You may get a different answer than the one provided by the developer, who does not own the money, the business accountability, or the balance sheet performance goals.

Related articles: Capabilities Based Planning; Root Cause of Project Failure; What's the Smell of Dysfunction?; Capabilities Based Planning - Part 2; Are Estimates Really The Smell of Dysfunction?; Estimating Processes in Support of Economic Analysis; Applying the Right Ideas to the Wrong Problem
Categories: Project Management

7 Tips for Valuing Features in a Backlog

Many product owners have a tough problem. They need so many of the potential features in the roadmap, that they feel as if everything is #1 priority. They realize they can’t actually have everything as #1, and it’s quite difficult for them to rank the features.

This is the same problem as ranking for the project portfolio. You can apply similar thinking.

Once you have a roadmap, use these tips to help you rank the features in the backlog:

  1. Should you do this feature at all? I normally ask this question about small features, not epics. However, you can start with the epic (or theme) and apply this question there. Especially if you ask, “Should we do this epic for this release?”
  2. Use Business Value Points to see the relative importance of a feature. Assign each feature/story a unique value. If you do this with the team, you can explain why you rank this feature in this way. The discussion is what’s most valuable about this.

  3. Use Cost of Delay to understand the delay that not having this feature would incur for the release.

  4. Who has Waste from not having this feature? Who cannot do their work, or has a workaround because this feature is not done yet?

  5. Who is waiting for this feature? Is it a specific customer, or all customers, or someone else?

  6. Pair-wise and other comparison methods work. You can use single or double elimination as a way to say, “Let’s do this one now and that feature later.”

  7. What is the risk of doing this feature or not doing this feature?
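Tip 6, pair-wise comparison with single elimination, can be sketched as follows. The feature names, value points, and the `prefer` function are illustrative assumptions - `prefer` is whatever comparison rule the product owner and team agree on (business value points, cost of delay, and so on):

```python
# Single-elimination pair-wise comparison: compare features in pairs,
# winners advance until one remains ("do this one now, that one later").
def single_elimination(features, prefer):
    """Return the last feature standing after pairwise elimination."""
    round_ = list(features)
    while len(round_) > 1:
        next_round = [
            prefer(round_[i], round_[i + 1])
            for i in range(0, len(round_) - 1, 2)
        ]
        if len(round_) % 2:        # an odd feature out gets a bye
            next_round.append(round_[-1])
        round_ = next_round
    return round_[0]

value_points = {"export": 8, "sso": 13, "audit": 5, "search": 20}
winner = single_elimination(
    list(value_points),
    lambda a, b: a if value_points[a] >= value_points[b] else b,
)
print(winner)  # search
```

As the post notes, the discussion behind each comparison is what's most valuable; the mechanics just force the comparisons to happen.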

Don Reinertsen advocates doing the Weighted Shortest Job First. That requires knowing the cost of delay for the work and the estimated duration of the work. If you keep your stories small, you might have a good estimate. If not, you might not know what the weighted shortest job is.

And, if you keep your stories small, you can just use the cost of delay.
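Weighted Shortest Job First itself is a one-line computation once you have those two inputs. Here is a sketch with hypothetical numbers:

```python
# Weighted Shortest Job First: sequence jobs by cost of delay divided by
# estimated duration, highest ratio first. Numbers are hypothetical.
jobs = [
    {"name": "A", "cost_of_delay": 10, "duration": 5},
    {"name": "B", "cost_of_delay": 8,  "duration": 2},
    {"name": "C", "cost_of_delay": 12, "duration": 8},
]

ranked = sorted(
    jobs,
    key=lambda j: j["cost_of_delay"] / j["duration"],
    reverse=True,
)
print([j["name"] for j in ranked])  # ['B', 'A', 'C']
```

Note that B wins despite having the lowest cost of delay, because it is short - which is the whole point of weighting by duration.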

Jason Yip wrote Problems I have with SAFe-style WSJF, which is a great primer on Weighted Shortest Job First.

I’ll be helping product owners work through how to value their backlogs in Product Owner Training for Your Agency, starting in August. When you are not inside the organization, but supplying services to the organization, these decisions can be even more difficult to make. Want to join us?

Categories: Project Management

Root Cause of Project Failure

Herding Cats - Glen Alleman - Wed, 07/22/2015 - 19:18

There is much discussion around project failures. Looking for the root cause of failure is the approach taken by organizations seeking to establish processes for avoiding failure in the future.

Many voices in the IT Project Failure domain reference the Standish Reports as the starting point. 

These reports have serious flaws in their approach - not the least of which is that the respondents are self-selected, meaning the population of IT projects is not represented in the returned sample. Another popular misrepresentation is the software crisis: using a 30-year-old NATO report, it is conjectured that the crisis can only be fixed by applying a method, without determining the root cause - if there ever was one.

These approaches can be found in How to Lie With Statistics. That aside, there is another serious flaw in this project failure discussion.

There are solutions looking for a problem to solve: tools, processes, practices, vendors, consultants. But nearly always the needed Root Cause Analysis is not the starting point; instead, the symptom is used as the target for the solution. But first let's establish the framing assumptions for project success.

Successful execution of Enterprise IT, Aerospace, Defense, and Government Software Intensive Systems (SIS) requires management discipline to identify what "Done" looks like, provide measures of progress toward "Done," identify and remove risks and impediments to reaching "Done," and assure timely corrective actions to maintain the planned progress toward "Done."

I work in a domain where Performance Assessment and Root Cause Analysis is a standard function of program management. Increasing the Probability of Program Success is a business strategy. There are many approaches to increasing the probability of program success. But first, what are some root causes of failure? Here are the top 4 from research:

  1. Unrealistic Performance Expectations, missing Measures of Effectiveness (MOE) and Measures of Performance (MOP).
  2. Unrealistic Cost and Schedule estimates based on inadequate risk adjusted growth models.
  3. Inadequate assessment of risk and unmitigated exposure to these risks without proper handling plans.
  4. Unanticipated technical issues without alternative plans and solutions to maintain programmatic and technical effectiveness.

There are dozens more from the Root Cause Analysis efforts in software intensive systems, but these four occur most often. Before suggesting any corrective action to any observed problem (undesirable effect), we need to know the Root Cause. Asking 5 Whys is a start, but without some framework for that process, it too becomes a cause for failure. A method we use is Reality Charting. It forces the conversation to cause and effect and prevents the storytelling approach, where Dilbert cartoons are descriptions of the cause - the smell - of the problem.

One common offender in this tell me a story and I'll tell you a solution approach is the No Estimates paradigm. Estimates are conjectured to be the smell of dysfunction. No dysfunctions are named, but making decisions with No Estimates is suggested as the solution. Besides violating the principles of microeconomics, not knowing the outcomes of our work in the presence of uncertainty means we have an open loop control system. With open loop control we don't know where we're going, we don't know if we're getting there, and we don't know when we're done. This in turn lays the groundwork for the top four root causes of project failure listed above.

  1. The performance expectations are just that - unsubstantiated expectations. What is the system capable of doing? Since the system under development is not yet developed, we have to make an estimate. This is an engineering problem. What's the model of the system's functions? How do those elements interact? There are simple ways to do this, and there are tools used for more complex systems. Mathematics for Dynamic Modeling is a good start for those complex projects.
  2. Unrealistic cost and schedule estimates are very common. Any business that's going to stay in business needs to know something about the cost to develop its products and when that cost will turn into revenue. This is the very core of business decision making. Poor estimating is a root cause of many project failures. Estimating Software Intensive Systems, Projects, Products, and Processes is a good place to start.
  3. Inadequate risk assessment many times means ZERO risk assessment. What could possibly go wrong? Let's just get started. Agile is billed as a risk management process. It is not. It provides information to the risk management process, but alone it is not risk management. The Continuous Risk Management Guidebook is a starting place for managing risks. As Tim Lister says, Risk Management is How Adults Manage Projects.
  4. Unanticipated technical issues are part of all projects. Managing in the presence of uncertainty deals with both programmatic and technical uncertainty. Both are present in the Top 4 Root Causes. As a result of risk management, these technical issues may or may not be revealed. The uncertainties found on projects are reducible and irreducible. For reducible uncertainty we need to spend money to buy down the resulting risk. For irreducible uncertainty we need margin. Both these require we make estimates because they are both about outcomes in the future. Here's a start to managing in the presence of uncertainty.

So here's the punch line. Dealing directly with the Top 4 Root Causes of project failure starts with making estimates. Estimates of the probability of meeting the expected performance goals, when they are needed for project success. 

Estimates of cost and schedule assure we have enough money, that the cost is not more than the revenue, and that doing the work for the needed cost will be done at the needed time so our revenue stream will pay back that cost. Showing up late and over budget, even with a working product, is not project success.

Estimates of risk are the very basis of risk management - managing like an adult. What could go wrong requires we estimate the probability of the risk occurring or the probability distribution function of the natural variances, the probability of impact, the probability of the effectiveness of our mitigation, and the probability of any residual risk.
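A minimal sketch of that arithmetic follows. The probabilities and costs are hypothetical, and real risk models use probability distributions and Monte Carlo sampling rather than single point values:

```python
# Expected risk exposure before and after mitigation: probability of
# occurrence times impact cost, scaled down by how effective the
# mitigation is (all numbers hypothetical).
def exposure(p_occur, impact_cost, mitigation_effectiveness=0.0):
    """Expected cost of a risk; mitigation reduces the residual."""
    return p_occur * impact_cost * (1.0 - mitigation_effectiveness)

raw = exposure(0.3, 100_000)            # unmitigated expected loss
residual = exposure(0.3, 100_000, 0.6)  # residual after mitigation
print(f"raw={raw:,.0f} residual={residual:,.0f}")
```

Every input here is an estimate: the occurrence probability, the impact, and the mitigation effectiveness. That is the sense in which risk management is impossible without estimating.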

Unanticipated technical issues are harder. But if we know anything about the technical domain, we can come up with some problems that can be solved before they become problems. This is called Design. If we know nothing about the technical domain, nothing about how to deliver a solution for the customer, and nothing about the cost to provide that solution - we're the wrong people for the project.


Related articles: Information Technology Estimating Quality; Populist Books Can Be Misleading; Are Estimates Really The Smell of Dysfunction?; Estimating and Making Decisions in Presence of Uncertainty
Categories: Project Management

Human Variation Introduction - New Lecture Posted

10x Software Development - Steve McConnell - Tue, 07/21/2015 - 17:58

In this week's lecture I introduce the topic of human variation. I start by describing the general phenomenon of 10x variation. I briefly overview the research on 10x. I describe the problems that 10x variation presents for research in software engineering. I go into the specific examples of the Chrysler C3 project and the New York Times Chief Programmer Team project. And I summarize a few of the software development issues that are strongly affected by human variation.

Lectures posted so far include:  

0.0 Understanding Software Projects - Intro
     0.1 Introduction - My Background
     0.2 Reading the News
     0.3 Definitions and Notations 

1.0 The Software Lifecycle Model - Intro
     1.1 Variations in Iteration 
     1.2 Lifecycle Model - Defect Removal
     1.3 Lifecycle Model Applied to Common Methodologies
     1.4 Lifecycle Model - Selecting an Iteration Approach  

2.0 Software Size
     2.05 Size - Comments on Lines of Code
     2.1 Size - Staff Sizes 
     2.2 Size - Schedule Basics 
     2.3 Size - Debian Size Claims (New) 

3.0 Human Variation - Introduction (New)

Check out the lectures!

Understanding Software Projects - Steve McConnell


Not Everything Needs to Be a User Story: Using FDD Features

Mike Cohn's Blog - Tue, 07/21/2015 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

User stories are great. When you’ve got users, that is. Sometimes, though, the users of a system or product are so far removed that a team struggles to put users into their stories. A sign that this is happening is when teams write stories that begin with “As a developer...” or “as a product owner....”

There are usually better approaches than writing stories like those. And a first step in exploring alternative approaches is realizing that not everything on your product backlog has to be a user story.

A recent look at a product backlog on a product for which I am the product owner revealed that approximately 85% of the items (54 out of 64) were good user stories, approximately 10% (6 of 64) were more technical items, and about 5% (4 of 64) were miscellaneous poorly worded junk.

Since I’m sure you’ll want to know about the junk, let’s dispense with those first. These are items that I or someone else on the project added while in a hurry. Some will later be rewritten as good stories but were initially just tossed into the backlog so they wouldn’t be forgotten. Others are things like “Upgrade Linux server” that could be rewritten as a story. But I find little benefit to doing that. Also, items like that tend to be well understood and are not on the product backlog for long.

My point here: No one should be reading a product backlog and grading it. A little bit of junk on a product backlog is totally fine, especially when it won’t be there long.

What I really want to focus on are the approximately 10% of the items that were more technical and were not written as user stories using the canonical “As a <type of user>, I want <some goal> so that <some reason>” syntax.

The product in question here is a user-facing product but not all parts of it are user facing. I find that to be fairly common. Most products have users somewhere in sight but there are often back-end aspects of the product that users are nowhere near. Yes, teams can often write user stories to reflect how users benefit from these system capabilities. For example: As a user, I want all data backed up so that everything can be fully recovered.

I’ve written plenty of stories like that, and sometimes those are great. Other times, though, the functionality being described starts to get a little too distant from real users and writing user stories when real users are nowhere to be found feels artificial or even silly.

In situations like these I’m a fan of the syntax from the Feature-Driven Development agile process. Feature-Driven Development (FDD) remains a minor player on the overall agile stage despite having been around since 1997. Originally invented by Jeff De Luca, FDD has much to recommend it in an era of interest in scaling agile.

Wikipedia has a good description of FDD so I’m only going to describe one small part of it: features. Features are analogous to product backlog items for a Scrum project. And just like many teams find it useful to use the “As a <user>, I want <goal> so that <benefit>” syntax for user stories as product backlog items, FDD has its own recommended syntax for features.

An FDD feature is written in this format:

<action> the <result> <by|for|of|to> <object>

As examples, consider these:

  • Estimate the closing price of stock
  • Generate a unique identifier for a transaction
  • Change the text displayed on a kiosk
  • Merge the data for duplicate transactions

In each case, the feature description starts with the action (a verb) and ends with what would be an object within the system. (FDD is particularly well suited for object-oriented development.)
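To make the template concrete, here is a minimal sketch (my own illustration, not part of FDD itself) of checking whether a backlog item follows the feature syntax. The regex and helper name are invented for this example; "on" is added to the preposition list so the kiosk example above also parses.

```python
import re

# Sketch of the FDD feature template:
#   <action> the <result> <by|for|of|to> <object>
# "on" is added beyond the canonical prepositions to cover phrases
# like "Change the text displayed on a kiosk".
FDD_FEATURE = re.compile(
    r"^(?P<action>\w+)\s+(?:the|a|an)\s+(?P<result>.+?)\s+"
    r"(?P<prep>by|for|of|to|on)\s+(?P<object>.+)$",
    re.IGNORECASE,
)

def parse_feature(text):
    """Return the action/result/prep/object parts of an FDD feature, or None."""
    m = FDD_FEATURE.match(text.strip())
    return m.groupdict() if m else None
```

A check like this could flag backlog items that are neither user stories nor well-formed features during backlog grooming.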

This can be a particularly good syntax when developing something like an Application Programming Interface (API). But I find it works equally well on other types of back-end functionality. As I said at the beginning, about 10% of my own recent product backlog I examined was in this syntax.

If you find yourself writing product backlog items for parts of a system and are stretching to think of how to write decent user stories for those items, you might want to consider using FDD’s features. I think you’ll find them as helpful as I do.

Thinking, Talking, Doing on the Road to Improvement

Herding Cats - Glen Alleman - Mon, 07/20/2015 - 19:54

When there is a discussion around making improvements to anything, trouble starts when we don't have a shared understanding of the outcomes. For example, speculating that something can be done or that something should be stopped in pursuit of improvement has difficulty maintaining traction in the absence of a framework for that discussion.

The discussion devolves into a “he said, she said” style, or into “I’ll tell you a story (an anecdote) of how this worked for me, so it’ll work for you.”

Over the years I've been trained to work on proposals, produce training materials, write guidance documents, and use other outlets - podcasts, conference presentations - all designed to convey a new and sometimes controversial topic. Connecting agile and earned value management is the latest.

There are several guides that have formed the basis of my work. The critical success factor for this work is to move away from personal anecdotes - although those are many times used inside a broader context to make the message more personal. Rather, start with a framework for the message.

A good place to start is Cliff Atkinson's Beyond Bullet Points. It's not so much about making PowerPoint briefings as about the process of sorting through what you are trying to say. Version 1 of the book is my favorite, because it was simple and actually changed how we thought about communication. Here's a framework from Cliff's 1st edition.

There is a deeper framework, though. Our daughter is a teacher, and she smiled when I mentioned one time that we're starting to use Bloom's Taxonomy for building our briefing materials that are designed to change how people do their work. "Dad, I do that every week. It's called a lesson plan." Here's an approach we use, from a revised Bloom's handout.

When we start a podcast effort, or any information conveyance effort, we start by asking: what will the listener, attendee, or reader be able to do when they go back to their desk or place of work? This helps us avoid the open-ended "taking out your brain and playing with it" syndrome. This is the basis of actionable outcomes.

So when we hear "we're just exploring" or "all we want is a conversation," and at the same time the suggestion - conjecture, actually - that what we're talking about is a desire to change an existing paradigm, make some dysfunction go away, or take some corrective action, ask some important questions:

  • Is there a framework for discussing these topics? Are we trying to understand the problem before applying a solution?
  • When applying the solution based on that understanding, is there any way to assess the effectiveness of the solution? Is this solution applicable beyond our personal anecdotal experience?
  • Can we analyze the outcomes of the solution applied to the problem and determine if the solution results in correcting the problem?
  • Do we have some means of evaluating this effectiveness? What are the units of measure by which we can confirm this effectiveness? Anecdotes aren't evidence.
  • And finally, can this solution be syndicated outside the personal experience? That is, are other problem areas subject to the same solution?
Related articles:
  • Capabilities Based Planning
  • Estimating Processes in Support of Economic Analysis
  • Applying the Right Ideas to the Wrong Problem
  • The Art of Systems Architecting
  • Are Estimates Really The Smell of Dysfunction?
  • Strategy is Not the Same as Operational Effectiveness
Categories: Project Management

Quote of the Month July 2015

From the Editor of Methods & Tools - Mon, 07/20/2015 - 15:16
Many users blame themselves for errors that occur when using technology, thinking that maybe they did something wrong. You must reverse this belief if you want to be an effective tester. Here is a rule of thumb: If something unexpected occurs, don't blame yourself; blame the technology. Source: Tap Into Mobile Application Testing, Jonathan Koh, […]

Estimating and Making Decisions in Presence of Uncertainty

Herding Cats - Glen Alleman - Fri, 07/17/2015 - 18:03

There is a nice post from Trent Hone on No Estimates. This triggered some more ideas about why we estimate, what the root cause is of the problem #NoEstimates is trying to solve, and a summary of the problem.

A Few Comments

All project work is probabilistic, driven by the underlying statistical uncertainties. These uncertainties are of two types - reducible and irreducible. Reducible uncertainty is driven by a lack of information. This information can be increased with direct work: we can "buy down" the uncertainty with testing, alternative designs, and redundancy. Reducible uncertainty is "event based" - a power outage, for example, or D-Day being pushed back a day by weather.

Irreducible uncertainty is just "part of the environment." It's the natural variability embedded in all project work - the "vibrations" of all the variables. This is handled by margin: schedule margin, cost margin, technical margin.
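The margin idea can be sketched numerically. Below is a hedged illustration (mine, not from the post): model each task's irreducible variability with a triangular distribution, run a Monte Carlo simulation, and take margin as the gap between a confidence-level completion duration and the sum of most-likely durations. The function name and task numbers are invented for the example.

```python
import random

def schedule_margin(tasks, confidence=0.8, trials=10_000, seed=1):
    """tasks: list of (low, most_likely, high) duration triples.

    Returns the schedule margin needed to hit the requested confidence
    level, i.e. the confidence-percentile total duration minus the sum
    of most-likely durations (the naive deterministic plan).
    """
    rng = random.Random(seed)
    totals = sorted(
        # random.triangular takes (low, high, mode)
        sum(rng.triangular(lo, hi, ml) for lo, ml, hi in tasks)
        for _ in range(trials)
    )
    most_likely_total = sum(ml for _, ml, _ in tasks)
    p_level = totals[int(confidence * trials) - 1]
    return p_level - most_likely_total
```

With right-skewed tasks (high well above most-likely), the margin is positive: a plan that just sums most-likely durations will overrun more often than not, which is exactly why margin is carried rather than wished away.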

Here's an approach to "managing in the presence of uncertainty."

From my experience with software-intensive systems in a variety of domains (ERP, realtime embedded systems, defense, space, nuclear power, pulp and paper, new drug development, heavy manufacturing, and more), #NE is a reaction to Bad Management. This inverts the cause-and-effect model of root cause analysis. The conjecture that "estimates are the smell of dysfunction" - without stating the dysfunction, stating the corrective action for that dysfunction, applying that corrective action, and then reassessing the conjecture - is a hollow statement. So the entire notion of #NE is a house built on sand.

Lastly, the microeconomics of decision making in software development in the presence of uncertainty means estimating is needed to "decide" between alternatives - opportunity costs. This paradigm is the basis of any non-trivial business governance process.
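The opportunity-cost point can be shown with a toy expected-value comparison. This is my own sketch with invented numbers, not an analysis from the post: each alternative is a set of (probability, payoff) outcomes, and choosing between them requires estimating both.

```python
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

# Hypothetical alternatives: build a feature in-house vs. buy a COTS package.
build = expected_value([(0.6, 500_000), (0.4, -200_000)])  # risky, big upside
buy = expected_value([(0.9, 300_000), (0.1, -50_000)])     # safer, capped upside
best = "build" if build > buy else "buy"
```

Without estimates of the probabilities and payoffs - however coarse - there is no basis for this comparison, which is the microeconomic argument that estimating cannot simply be skipped.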

No Estimates is a solution looking for a problem to solve.

Categories: Project Management