
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Project Management

Estimating on Agile Projects

Herding Cats - Glen Alleman - Wed, 11/26/2014 - 15:40

The current issue of ICEEAWorld has an article on estimating on agile projects. 


Categories: Project Management

Constructing a Credible Estimate

Herding Cats - Glen Alleman - Tue, 11/25/2014 - 17:56

To build a credible estimate for any project, in any domain, to produce a solution to any problem, we need to start with a few core ideas.

  • Gather historical data.
    • Unless you're inventing new physics, it is very unlikely that what you want to do hasn't been done already, somewhere, by someone.
    • We hear all the time that this project is unique. Really?
    • This has NEVER been done before?
    • There is no reference design for what we want to do?
    • We are actually inventing the solution out of whole cloth?
  • Gather information about this specific project.
    • This doesn't mean full detailed requirements. That's just not going to happen on any real project.
    • Gather the needed Capabilities. Follow the Capabilities Based Planning advice.
    • Sort these capabilities using whatever method you want, but sort them in some priority so the Analysis of Alternatives can be performed.
    • Capabilities are not requirements. Capabilities state what you'll be doing with the results of the project and how what you'll be doing will produce the planned value from the project.
  • Break out some statistical tools - Excel will work.
    • Does the historical data give you statistical confidence that it represents the actual past performance?
    • I see all the time 20 samples of stories that have ±50% variances over the period of performance. The average is then used. Don't do this.
      • First, the Most Likely is the number you want. That's the Mode, the most recurring value, of all the numbers you have.
      • Next, read The Flaw of Averages on how you can be fooled by averages.
  • Finally, to produce a credible estimate, you'll need:
    • Experience
    • Skills
    • Knowledge
    • Data
    • Tools
    • People
    • Process
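
The Mode-versus-average advice above can be sketched in a few lines of Python, using the standard library's statistics module and a hypothetical sample of story durations:

```python
from statistics import mean, mode

# Hypothetical sample: 20 completed story durations (days) with wide variance
durations = [3, 5, 3, 8, 2, 3, 13, 5, 3, 21, 5, 3, 8, 3, 2, 5, 3, 34, 5, 3]

avg = mean(durations)          # pulled upward by a few large outliers
most_likely = mode(durations)  # the Mode: the most recurring value

print(f"average: {avg:.2f} days, most likely: {most_likely} days")
# average: 6.85 days, most likely: 3 days
```

The handful of large outliers pulls the average more than twice as high as the most recurring value, which is exactly how averaging a ±50% sample misleads.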


If you're missing any of the items in this list, it's going to be a disappointing effort. Some may even call it a waste to estimate. But not for the reasons you think. It is a waste to estimate if you don't know how to estimate. But estimates are not for you, unless you're the one providing the money. They're for those providing the money, who expect the outcomes from that expense to show up on some needed date, with the needed value that provides them with the ability to earn back the money.

Categories: Project Management

Local Firm Has Critical Message for Project Managers

Herding Cats - Glen Alleman - Tue, 11/25/2014 - 16:47

Rally Software is a local firm providing tools for the management of agile projects. Project managers provide the glue for all human endeavors involving complex work processes. Rally has those tools, as do many others. Rally also has a message that needs to be addressed by the project management community. Organizing, planning, and executing social projects is one of the roles project managers can contribute to.

SIMposium 2014 - Denver, from Ryan Martens
Categories: Project Management

Software Development Conferences Forecast November 2014

From the Editor of Methods & Tools - Tue, 11/25/2014 - 15:20
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine.

QCon London, March 2-6 2015, London, UK - exclusive 50 pounds Methods & Tools discount with promo code “softdevconf50”
Mobile Dev + Test Conference, April 12-17 2015, San Diego, USA
ProgSCon London, April 17 2015, London, UK - The Call for Submissions ...

Visualizing My Objectives

NOOP.NL - Jurgen Appelo - Tue, 11/25/2014 - 14:00

I love the concept of OKRs (Objectives & Key Results) and while experimenting with this idea at Happy Melly, I’m trying to figure out how to adapt the practice to fit my own context. One thing my team members and I have noticed over the last few months is that it’s hard to remember what we have committed to.

The post Visualizing My Objectives appeared first on NOOP.NL.

Categories: Project Management

Mike Cohn's Agile Quotes

Herding Cats - Glen Alleman - Mon, 11/24/2014 - 17:30

Mike Cohn of Mountain Goat Software has a collection of 101 Agile Quotes. 

There are a few I have heartburn with, but the vast majority are right on. 

Some of my favorites:

  • Planning is everything, plans are nothing - Field Marshall Helmuth von Moltke. This is a much misused quote. The military business, like all businesses that spend lots of money with high risk and high reward, needs a plan. That plan is a strategy for the success of the project, be it D-Day or an ERP deployment. That strategy is actually a hypothesis, and the hypothesis needs to have tests. That's what the plan describes: the tests that confirm the strategy is working. To conduct the tests, we need to perform work. When the tests show the strategy is not working, we need a new strategy. That is, we change the plan.
  • To be uncertain is to be uncomfortable, but to be certain is to be ridiculous - Chinese Proverb. Another misused quote. All project work is uncertain. Managing in the presence of uncertainty is part of good management. Uncertainty creates risk, and risk management is how adults manage projects - Tim Lister.
  • Scrum without automation is like driving a sports car on a dirt track - you won't experience the full potential, you will get frustrated, and you will probably end up blaming the car - Ilan Goldstein. Tools are the basis of all process and process improvement: paper on the wall, software management tools. Those who suggest that tools are ruining agile aren't working on complex projects.
  • If you define the problem successfully, you almost have the solution - Steve Jobs. This is the role of Plans, Integrated Master Planning in our domain, where the outcomes are described in units of Effectiveness and Performance in an increasing maturity cycle.
Categories: Project Management

Quote of the Month November 2014

From the Editor of Methods & Tools - Mon, 11/24/2014 - 14:57
Walking on water and developing software from a specification are easy if both are frozen. Source: Edward V. Berard (1993) Essays on object-oriented software engineering. Volume 1, Prentice Hall

Book Celebration (and Invitation)!

NOOP.NL - Jurgen Appelo - Mon, 11/24/2014 - 11:51
Management 3.0 #Workout

By now all the different editions of my new book Management 3.0 #Workout are finished and published. The book, easily the most colorful and practical management book in the world, is available as PDF, ePub, Kindle and in a printed edition. And, although writing a book is great fun, finishing a book feels even better! Especially since it’s worth a little celebration. If you’ve read any of my work, you know I love to celebrate. :o)

The post Book Celebration (and Invitation)! appeared first on NOOP.NL.

Categories: Project Management

Software Estimating for Non Trivial Projects

Herding Cats - Glen Alleman - Sun, 11/23/2014 - 16:26

When we read on a blog post that estimates are not meaningful unless you are doing very trivial work,† I wonder if the poster has worked in any non-trivial software domain. Places like GPS OCX, SAP consolidation, Manned Space Flight Avionics, or maybe Health Insurance Provider Networks. Because without some hands-on experience in those non-trivial domains, it would be hard to actually know what you're talking about when it comes to estimating the spend of other people's money.

Maybe some background on estimates for non-trivial work will shed light on this ill-informed notion that only trivial projects can be estimated.

These are a small sample of papers from one journal on software estimating for mission critical, sometimes National Asset, projects. 

Go to Cross Talk, The Journal of Defense Software Engineering, and search for "estimating" to get 10 pages of 10 articles on this topic alone. This notion of estimating in non-trivial domains is well developed and well documented, with many examples of tools, processes, and principles. 

Do your homework, and the test is much easier.

It could be that the original poster has little experience in mission critical, national asset, enterprise class, software intensive systems. Or it could be the poster simply doesn't know what making estimates for projects that spend other people's money, many times significant amounts of money, is all about.

And of course most of the problems described as the basis for Not Estimating - the illogical notion that if we can't do something well, let's stop doing it - start with not knowing what Done Looks Like in any units of measure meaningful to the decision makers. 

So start here with my favorite enterprise architect blog and his list of books when you follow the link to the bottom.


So when you have some sense of what DONE looks like in terms of capabilities, the estimating process is now on solid ground. From that solid ground you can ask: have we done anything like this before? Or better yet, can we find someone who has done something like this before? Or maybe, can we look around to see what looks like our problem and figure out how long it took them by simply asking them?

If the answer to any of those questions is NO and you're NOT working in a research and development domain, then don't start the project, because you're not qualified to do the work, you don't know what you're doing, and you're going to waste your customer's money.


† Scroll to the bottom and search for "A Thing I Can Estimate" to see the phrase, and remember the questions and the answers above. If you're not answering those in some positive way, you're on a death march project starting day one, because you don't know what done looks like for the needed capabilities. Not the requirements, not the code, not the testing - that's all straightforward. Without some notion of what the system is supposed to do, you'll never recognize it if it were ever to come into view. And since the customer doesn't know either, all the money they're spending to find out has to be written off as IRAD or flushed down the toilet as a waste of time and effort in the end. And then you'll know why Standish (improperly) reports projects fail. 


Categories: Project Management

Show Me Your Math

Herding Cats - Glen Alleman - Sat, 11/22/2014 - 21:02

In a recent email exchange, the paper by Todd Little showing projects that exceeded their estimates was used as an example of how poorly we estimate, and ultimately as one of the reasons to adopt the #NoEstimates paradigm of making decisions in the absence of estimates of cost, schedule, and the probability that the needed capabilities will show up on time and be what the customer wanted.

Sherlock here had it right. This picture, by the way, is borrowed from Mike Cohn's eBook of 101 quotes for agile.

I've written about Little's paper before, but it's worth repeating.

It's very sporty to use examples of bad mathematics, bad management, bad processes, and bad practices as the basis for something new. This is essentially the basis of the book Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management. When we start talking about something new and disruptive in the absence of the data, facts, root causes, and underlying governance and principles, we're treading on very thin ice in terms of credibility.

Here's the core principle of all software development

Customers exchange money for value. The definition of value needs to be in units of measure that allow them to make decisions about the future value of that value. That value is exchanged for a cost - a future cost as well. 

Both this future cost and future value are probabilistic in nature, due to the uncertainties in the work processes, technologies, markets, productivity, and all the ...ilities associated with project work. In the presence of uncertainty, nothing is for certain - a tautology. There are two types of uncertainty - reducible and irreducible. Reducible uncertainty we can do something about: we can spend money to reduce the risk associated with it. Irreducible uncertainty we can't. We can only have margin, management reserve, or a Plan B.

To make decisions in the presence of these uncertainties - reducible and irreducible - we need to estimate the uncertainty, the cost of handling the uncertainty, and the value produced by the work driven by these uncertainties. When we fail to make these estimates, the uncertainties don't go away. When we slice the work into small chunks, we might also slice the uncertainties into small chunks - this is the basis of agile and the paradigm of Little Bets. But the uncertainties are still there, unless we've explicitly bought them down or installed margin and reserve. They didn't go away. And what you don't know - or choose to explicitly ignore - can hurt you.
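
The point that slicing work into small chunks does not make the uncertainty go away can be sketched with a small Monte Carlo simulation, assuming hypothetical three-point (min, most likely, max) estimates for each chunk:

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical: 20 small chunks of work, each with a
# (min, most likely, max) duration in days
tasks = [(2, 3, 8)] * 20

def simulate_total():
    # random.triangular takes (low, high, mode)
    return sum(random.triangular(lo, hi, ml) for lo, ml, hi in tasks)

trials = sorted(simulate_total() for _ in range(10_000))
p50 = trials[len(trials) // 2]        # median total duration
p80 = trials[int(len(trials) * 0.8)]  # 80th-percentile total duration
print(f"P50: {p50:.1f} days, P80: {p80:.1f} days")
```

The spread between the P50 and P80 totals is still there after slicing; it is the uncertainty that margin and management reserve have to cover.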

So Sherlock is right: don't put forth a theory without the data. 


Categories: Project Management

Make Stories Small When You Have “Wicked” Problems

If you read my Three Alternatives to Making Smaller Stories, you noticed one thing. In each of these examples, the problem was in the teams’ ability to show progress and create interim steps. But, what about when you have a “wicked” problem, when you don’t know if you can create the answer?

If you are a project manager, you might be familiar with the idea of “wicked” problems from the book Wicked Problems, Righteous Solutions: A Catalog of Modern Engineering Paradigms. If you are a designer/architect/developer, you might be familiar with the term from Rebecca Wirfs-Brock’s book, Object Design: Roles, Responsibilities, and Collaborations.

You see problems like this in new product development, in research, and in design engineering. You see it when you have to do exploratory design, where no one has ever done something like this before.

Your problem requires innovation. Maybe your problem requires discussion with your customer or your fellow designers. You need consensus on what is a proper design.

When I taught agile to a group of analog chip designers, they created landing zones, where they kept making tradeoffs to fit the timebox they had for the entire project, to make sure they made the best possible design in the time they had available.

If you have a wicked problem, you have plenty of risks. What do you do with a risky project?

  1. Staff the project with the best people you can find. In the past, I have used a particular kind of “generalizing specialist,” the kind where the testers wrote code. The kind of developers who were also architects. These are not people you pick off the street. These are people who are—excuse me—awesome at their jobs. They are not interchangeable with other people. They have significant domain expertise in how to solve the problem. That means they understand how to write code and test.
  2. Help those generalizing specialists learn how to ask questions at frequent points in the project. In my inch-pebble article, I said that with a research project, you use questions to discover what you need to know. The key is to make those questions small enough, so you can show progress every few days or at least once a week. Everyone in the project needs to build trust. You build trust by delivering. The project team builds trust by delivering answers, even if they don’t deliver working code.
  3. You always plan to replan. The question is how often? I like replanning often. If you subscribed to my Reflections newsletter (before the Pragmatic Manager), back in 1999, I wrote an article about wicked projects and how to manage the risks.
  4. Help the managers stop micromanaging. The job of a project manager is to remove obstacles for the team. The job of a manager is to support the team. Either of those manager-types might help the team by helping them generate new questions to ask each week. Neither has the job of asking “when will you be done with this?” See Adam Yuret’s article The Self-Abuse of Sprint Commitment.

Now, in return, the team solving this wicked problem owes the organization an update every week, or, at the most, every two weeks about what they are doing. That update needs to be a demo. If it’s not a demo, they need to show something. If they can’t in an agile project, I would want to know why.

Sometimes, they can’t show a demo. Why? Because they encountered a Big Hairy Problem.

Here’s an example. I suffer from vertigo due to loss of (at least) one semi-circular canal in my inner ear. My otoneurologist is one of the top guys in the world. He’s working on an implantable gyroscope. When I started seeing him four years ago, he said the device would be available in “five more years.”

Every year he said that. Finally, I couldn’t take it anymore. Two years ago, I said, “I’m a project manager. If you really want to make progress, start asking questions each week, not each year. You won’t like the fact that it will make your project look like it’s taking longer, but you’ll make more progress.” He admitted last year that he took my advice. He thinks they are down to four years and they are making more rapid progress.

I understand if a team learns that they don’t receive the answers they expect during a given week. What I want to see from a given week is some form of a deliverable: a demo, answers to a question or set of questions, or the fact that we learned something and we have generated more questions. If I, as a project manager/program manager, don’t see one of those three outcomes, I wonder if the team is running open loop.

I’m fine with any one of those three outcomes. They provide me value. We can decide what to do with any of those three outcomes. The team still has my trust. I can provide information to management, because we are still either delivering or learning. Either of those outcomes provides value. (Do you see how a demo, answers or more questions provides those outcomes? Sometimes, you even get production-quality code.)

Why do questions work? The questions work like tests. They help you see where you need to go. Because you, my readers, work in software, you can use code and tests to explore much more rapidly than my otoneurologist can. He has to develop a prototype, test in the lab and then work with animals, which makes everything take longer.

Even if you have hardware or mechanical devices or firmware, I bet you simulate first. You can ask the questions you need answers to each week. Then, you answer those questions.

Here are some projects I’ve worked on in the past like this:

  • Coding the inner loop of an FFT in microcode. I knew how to write the inner loop. I didn’t know if the other instructions I was also writing would make the inner loop faster or slower. (This was in 1979 or so.)
  • Lighting a printed circuit board for a machine vision inspection application. We had no idea how long it would take to find the right lighting. We had no idea what algorithm we would need. The lighting and algorithm were interdependent. (This was in 1984.)
  • With clients, I’ve coached teams working on firmware for a variety of applications. We knew the footprint the teams had to achieve and the dates that the organizations wanted to release. The teams had no idea if they were trying to push past the laws of physics. I helped the team generate questions each week to direct their efforts and see if they were stuck or making progress.
  • I used the same approach when I coached an enterprise architect for a very large IT organization. He represented a multi-thousand-person IT organization that wanted to revamp its entire architecture. I certainly don’t know architecture. I know how to make projects succeed and that’s what he needed. He used the questions to drive the projects.

The questions are like your tests. You take a scientific approach, asking yourself, “What questions do I need to answer this week?” You have a big question. You break that question down into smaller questions, one or two that you can answer (you hope) this week. You explore like crazy, using the people who can help you explore.

Exploratory design is tricky. You can make it agile, also. Don’t assume that the rest of your project can wait for your big breakthrough. Use questions like your tests. Make progress every day.

I thank Rebecca Wirfs-Brock for her review of this post. Any remaining mistakes are mine.

Categories: Project Management

Populist versus Technical View of Problems

Herding Cats - Glen Alleman - Thu, 11/20/2014 - 04:17

In twitter discussions and email exchanges there is a notion of populist books versus technical books used to address issues and problems encountered in our project management domains. My recent book Performance-Based Project Management® is a populist book. There are principles, practices, and processes in the book that can be put to use on real projects, but very few equations and numbers. It's mostly narrative about increasing the probability of project success. But how to calculate that probability based on other numbers, processes, and systems is not there. That's the realm of technical books and journal papers.

The content of the book was developed with the help of editors at American Management Association, the publisher. The Acquisition Editor contacted me about writing a book for the customers of AMA. He explained up front AMA is in the money making business of selling books. And that although I may have many good ideas, even ideas that people might want to read about, it's an AMA book and I'll be getting lots of help developing those ideas into a book that will make money for AMA.

The distinction between a populist book and a technical book is the difference between a book that addresses a broad audience with a general approach to the topic and a deep dive book focused on a narrow audience.

But one other distinction is that for most of the technical approaches, some form of calculation takes place to support the materials found in the populist material. One simple example is estimating. There are estimating articles and some books that lay out the principles of estimates. We have those in our domain in the form of guidelines and a few texts. But to calculate the Estimate To Complete in a statistically sound manner, technical knowledge and the underlying mathematics of non-linear, non-stationary, stochastic processes (Monte Carlo Simulation of the project's work structure) is needed. 
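
As a minimal illustration of the calculation gap, here is the single-point Estimate To Complete from the standard earned-value formulas, with hypothetical numbers; the technical treatment replaces the point-value CPI with a distribution produced by Monte Carlo simulation of the work structure:

```python
# Hypothetical project numbers, using the standard earned-value formulas
bac = 1_000_000  # Budget At Completion
ev = 400_000     # Earned Value: budgeted cost of work actually performed
ac = 500_000     # Actual Cost of that work

cpi = ev / ac           # Cost Performance Index: 0.80
etc = (bac - ev) / cpi  # Estimate To Complete: 750,000
eac = ac + etc          # Estimate At Completion: 1,250,000

print(f"CPI={cpi:.2f}  ETC={etc:,.0f}  EAC={eac:,.0f}")
# CPI=0.80  ETC=750,000  EAC=1,250,000
```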

Two examples of populist versus technical

Two from my past, two from my current work. 


These two books are about the same topic: General Relativity and its description of the shape of our universe. One is a best-selling popularization of the topic, found in many home libraries of those interested in this fascinating subject. The one on the left is on my shelf from a graduate school course on General Relativity, along with Misner, Thorne, and Wheeler's Gravitation.

Dense is an understatement for the math and the results of the book on the left. So if you want to calculate something about a rapidly spinning Black Hole, you're going to need that book. The book on the right will talk about those Black Holes in non-mathematical terms, but no numbers come out from that description.

The book on the right is about probabilistic processes in everyday life that we misunderstand or are biased to misunderstand. The many cognitive biases we use to convince ourselves we are making the right decisions on projects are illustrated through nice charts and graphs.

We use the book on the left in our work with non-stationary stochastic processes of complex project cost and schedule modeling. Making these decisions is critical to quantifying how technical and economic risks may affect a system's cost. This book is a treatment of how probability methods are applied to model, measure, and manage risk, schedule, and cost engineering for advanced systems. Garvey shows how to construct models, do the calculations, and make decisions with these calculations.

Here's The Point - Finally

If you come across a suggestion that decisions can be made in the absence of knowing anything about the future numbers, or without actually doing the math, put that suggestion in the class of populist descriptions of a complex topic.

If you can't calculate something, then you can't make a decision based on the evidence represented by numbers. If you can't decide based on the math, then the only way left is to decide on intuition, hunches, opinion, or some other seriously flawed non-analytical basis.

Just a reminder from Mr. Deming, stated in yesterday's post:


If it's not your money, there's likely an expectation that those providing the money are interested in the calculations needed to make those decisions. 

Act accordingly.

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Wed, 11/19/2014 - 19:39

All ideas require credible evidence to be tested, suspect ideas require that even more so - Deep Inelastic Scattering, thesis adviser, University of California, 1978

When the response to questions about the applicability of an idea is push back, with accusations that those asking the questions in an attempt to determine the applicability and truth of the statement are somehow afraid of that truth, it suggests there is little evidence as a test of those conjectures.

When there are proposals that ignore the principles of business, microeconomics, and control systems theory, and are based on well known bad management practices with well known and easy to apply corrective actions - there is no there, there. 


So without a testable process, in a testable domain, with evidence based assessment of applicability, outcomes, and benefits, any suggestion is opinion at best and blather at worst.

Categories: Project Management

Three Alternatives for Making Smaller Stories

When I was in Israel a couple of weeks ago teaching workshops, one of the big problems people had was large stories. Why was this a problem? If your stories are large, you can’t show progress, and more importantly, you can’t change.

For me, the point of agile is the transparency—hey, look at what we’ve done!—and the ability to change. You can change the items in the backlog for the next iteration if you are working in iterations. You can change the project portfolio. You can change the features. But, you can’t change anything if you continue to drag on and on and on for a given feature. You’re not transparent if you keep developing a feature. You become a black hole.

Managers start to ask, “What are you guys doing? When will you be done? How much will this feature cost?” Do you see where you need to estimate more if the feature is large? Of course, the larger the feature, the more you need to estimate and the more difficult it is to estimate well.

Pawel Brodzinski said this quite well last year at the Agile conference, with his favorite estimation scale. Anything other than a size 1 was either too big or the team had no clue.

The reason Pawel and I and many other people like very small stories—size of 1—is that you deliver something every day or more often. You have transparency. You don’t invest a ton of work without getting feedback on the work.

The people I met a couple of weeks ago felt (and were) stuck. One guy was doing intricate SQL queries. He thought that there was no value until the entire query was done. Nope, that’s where he is incorrect. There is value in interim results. Why? How else would you debug the query? How else would you discover if you had the database set up correctly for product performance?

I suggested that every single atomic transaction was a valuable piece. The way to build small stories was to separate this hairy SQL statement at the atomic transaction. I bet there are other ways, but that was a good start. He got that aha look, so I am sure he will think of other ways.

Another guy was doing algorithm development. Now, I know one issue with algorithm development is you have to keep testing performance or reliability or something else when you do the development. Otherwise, you fall off the deep end. You have an algorithm tuned for one aspect of the system, but not another one. The way I’ve done this in the past is to support algorithm development with a variety of tests.

This is the testing continuum from Manage It! Your Guide to Modern, Pragmatic Project Management. See the unit and component testing parts? If you do algorithm development, you need to test each piece of the algorithm—the inner loop, the next outer loop, repeat for each loop—with some sort of unit test, then component test, then as a feature. And, you can do system level testing for the algorithm itself.

Back when I tested machine vision systems, I was the system tester for an algorithm we wanted to go “faster.” I created the golden master tests and measured the performance. I gave my tests to the developers. Then, as they changed the inner loops, they created their own unit tests. (No, we were not smart enough to do test-driven development. You can be.) I helped create the component-level tests for the next-level-up tests. We could run each new potential algorithm against the golden master and see if the new algorithm was better or not.
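
A golden master check of the kind described above can be sketched in a few lines; the squaring function and the names here are hypothetical stand-ins, the point being that each candidate algorithm is compared against recorded reference outputs while its runtime is measured:

```python
import time

# Hypothetical golden master: inputs paired with outputs recorded from the
# trusted reference algorithm (a squaring function stands in here)
golden = [(x, x * x) for x in range(1000)]

def check(candidate, tolerance=0):
    """Run the candidate against every recorded case, timing the whole pass."""
    start = time.perf_counter()
    failures = [(x, expected, candidate(x))
                for x, expected in golden
                if abs(candidate(x) - expected) > tolerance]
    elapsed = time.perf_counter() - start
    return failures, elapsed

# A "faster" candidate must still match the recorded outputs
failures, elapsed = check(lambda x: x ** 2)
print(f"{len(failures)} mismatches in {elapsed * 1e3:.2f} ms")
```

Each new potential algorithm runs against the same recorded cases, so correctness and speed can be compared in one pass, exactly as we did with the machine vision inner loops.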

I realize that you don’t have a product until everything works. This is like saying in math that you don’t have an answer until you have finished the entire calculation. And, you are allowed—in fact, I encourage you—to show your interim work. How else can you know if you are making progress?

Another participant said that he was special. (Each and every one of you is special. Don’t you know that by now??) He was doing firmware development. I asked if he simulated the firmware before he downloaded to the device. “Of course!” he said. “So, simulate in smaller batches,” I suggested. He got that far-off look. You know that look, the one that says, “Why didn’t I think of that?”

He didn’t think of it because it requires changes to their simulator. He’s not an idiot. Their simulator is built for an entire system, not small batches. The simulator assumes waterfall, not agile. They have some technical debt there.

Here are the three ways, in case you weren’t clear:

  1. Use atomic transactions as a way to show value when you have a big honking transaction. Use tests for each atomic transaction to support your work and understand if you have the correct performance on each transaction.
  2. Break apart algorithm development, as in “show your work.” Support your algorithm development with tests, especially if you have loops.
  3. Simulate in small batches when you have hardware or firmware. Use tests to support your work.
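Point 1 can be sketched in code. The names below are hypothetical, not from any system mentioned above: a big honking "create order" transaction is decomposed into atomic steps, each small enough to carry its own test.

```python
# Hypothetical decomposition of one large transaction into atomic steps.

def reserve_stock(inventory, sku, qty):
    """Atomic step: fail fast if stock is short, otherwise decrement it."""
    if inventory.get(sku, 0) < qty:
        raise ValueError("insufficient stock")
    inventory[sku] -= qty
    return inventory

def compute_total(price, qty, tax_rate):
    """Atomic step: a pure calculation, trivially testable on its own."""
    return round(price * qty * (1 + tax_rate), 2)

# One test per atomic transaction, instead of only one end-to-end test.
def test_reserve_stock():
    assert reserve_stock({"widget": 5}, "widget", 2) == {"widget": 3}

def test_compute_total():
    assert compute_total(10.0, 3, 0.08) == 32.4
```

Each atomic step shows value on its own and tells you whether its performance and behavior are correct before the whole transaction exists.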

You want to deliver value in your projects. Short stories allow you to do this. Long stories stop your momentum. The longer your project, and the more teams (if you work on a program), the more you need to keep your stories short. Try these alternatives.

Do you have other scenarios I haven’t discussed? Ask away in the comments.

This turned into a two-parter. Read Make Stories Small When You Have “Wicked” Problems.

Categories: Project Management

Have You Signed Up for the Conscious Software Development Telesummit?

Do you know about the Conscious Software Development Telesummit? Michael Smith is interviewing more than 20 experts about all aspects of software development, project management, and project portfolio management. He’s releasing the interviews in chunks, so you can listen and not lose work time. Isn’t that smart of him?

If you haven’t signed up yet, do it now. You get access to all of the interviews, recordings, and transcripts for all the speakers. That’s the Conscious Software Development Telesummit. Because you should make conscious decisions about what to do for your software projects.

Categories: Project Management

Software Development Linkopedia November 2014

From the Editor of Methods & Tools - Tue, 11/18/2014 - 17:19
Here is our monthly selection of interesting knowledge material on programming, software testing and project management. This month you will find some interesting information and opinions about Agile coaching and management, positive quality assurance, product managers and owners, enterprise software architecture, functional programming, MySQL performance and software testing at Google.

Blog: Fewer Bosses. More Coaches. Please.
Blog: Advice for Agile Coaches on “Dealing With” Middle Management
Blog: Five ways to make testing more positive
Blog: 9 Things Every Product Manager Should Know about Being an Agile Product Owner
Article: Managing Trust in Distributed Teams
Article: Industrial ...

Are Vanity Metrics Really All That Bad?

Mike Cohn's Blog - Tue, 11/18/2014 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

I have a bit of a problem with all the hatred shown to so-called vanity metrics.

Eric Ries first defined vanity metrics in his landmark book, The Lean Startup. Ries says vanity metrics are the ones that most startups are judged by—things like page views, number of registered users, account activations, and things like that.

Ries says that vanity metrics are in contrast to actionable metrics. He defines an actionable metric as one that demonstrates clear cause and effect. If what causes a metric to go up or down is clear, the metric is actionable. All other metrics are vanity metrics.

I’m pretty much OK with all this so far. I’m big on action. I’ve written in my books and in posts here that if a metric will not lead to a different action, that metric is not worth gathering. I’ve said the same of estimates. If you won’t behave differently by knowing a number, don’t waste time getting that number.

So I’m fine with the definitions of “actionable” and “vanity” metrics. My problem is with some of the metrics that are thrown away as being merely vanity. For example, the number one hit on Google today when I searched for “vanity metrics” was an article on TechCrunch.

The article’s authors admit to being guilty of using them, citing metrics such as 1 million downloads and 10 million registered users.

But are such numbers truly vanity metrics?

One chapter in Succeeding with Agile is about metrics. In it, I wrote about creating a balanced scorecard and using both leading and lagging indicators. A lagging indicator is something you can measure after you have done something, and can be used to determine if you achieved a goal.

If your goal is improving quality, a lagging indicator could be the number of defects reported in the first 30 days after the release. That would tell you if you achieved your goal—but it comes with the drawback of not being available at all until 30 days after the release.

A leading indicator, on the other hand, is available in advance, and can tell you if you are on your way to achieving a goal.

The number of nightly tests that pass would be a leading indicator that a team is on its way to improving quality. The number of nightly tests passing, though, is a vanity metric in Ries’ terms. It can be easily manipulated; the team could run the same or similar tests many times to deliberately inflate the number of tests. Therefore, the linkage between cause and effect is weak. More passing tests do not guarantee improved quality.

But is the number of passing tests really a vanity metric? Is it really useless?

To show that it’s not, consider a few other metrics you’re probably familiar with: your cholesterol value, your blood pressure, your resting pulse, even your weight. A doctor can use these values and learn something about your health. If your total cholesterol value is 160, a heart attack is probably not imminent. A value of 300, though, and it’s a good thing you’re visiting your doctor.

These are leading indicators. They don’t guarantee anything. I could have a cholesterol value of 160 and have a heart attack as soon as I walk out of the doctor’s office. The only true lagging indicator would be the number of heart attacks I’ve had in the last year. Yes, absolutely a much better metric, but not available until the end of the year.

So should we avoid all vanity metrics? No. Vanity metrics can possess meaningful information. They are often leading indicators. If a website’s goal is to sell memberships, then the number of memberships sold is that company’s key actionable metric.

But number of unique new visitors—a vanity metric—can be a great leading indicator. More new visitors should lead to more memberships sold. Just like more passing tests should lead to higher quality. It’s not guaranteed, but it is indicative.

The TechCrunch article I mentioned has the right attitude. It says, “Vanity metrics aren’t completely useless, just don’t be fooled by them.” The real danger of vanity metrics is that they can be gamed. We can run tests that can’t fail. We can buy traffic to our site that we know will never translate into paid memberships, but make the traffic metrics look good.

As long as no one is doing things like that, vanity metrics can serve as good leading indicators. Just keep in mind that they don’t measure what you really care about. They merely indicate whether you’re on the right path.

How Do You Do It? (A Job Like a Tailored Suit)

NOOP.NL - Jurgen Appelo - Tue, 11/18/2014 - 14:21

I regularly get the question, “How Do You Do It?”

“How are you able to travel so much and not get sick of it?”

“How can you read 50+ books per year and also write your own?”

Gosh, I don’t know.

The post How Do You Do It? (A Job Like a Tailored Suit) appeared first on NOOP.NL.

Categories: Project Management

Measures of Program Performance

Herding Cats - Glen Alleman - Mon, 11/17/2014 - 14:29

In a sufficiently complex project, we need measures of progress that go beyond burning down our list of same-sized stories, which, by the way, require non-trivial work to make and keep same-sized. And of course, if this same-sizedness does not have a sufficiently small variance, all that effort is a waste.

But let's assume we're not working on a sufficiently small project where same-sized work efforts can be developed. Then we need measures of progress related to the Effectiveness of the deliverables and the Performance of those deliverables in producing that effectiveness for the customer.
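Whether work efforts really are same-sized with a small variance can be checked directly from cycle-time data. A minimal Python sketch, with made-up numbers, using the coefficient of variation as a scale-free measure of spread:

```python
import statistics

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean; scale-free measure of spread."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical cycle times, in days, for stories claimed to be "same sized".
cycle_times = [3.0, 2.5, 3.5, 9.0, 2.0, 3.0, 8.5, 2.5]

cv = coefficient_of_variation(cycle_times)
# A large CV suggests the "same size" assumption is unsafe for planning.
print(f"coefficient of variation: {cv:.2f}")
```

Averaging such a sample and planning on the mean hides exactly the variance this check exposes.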

Here's a recent webinar on this topic.

(Slides: Measurement News Webinar, from Glen Alleman.)

And of course we need to define in what domain this approach can be applied, in what domain it is too much, and in what domain it is actually not enough.

(Slides: Paradigm of Agile Project Management, from Glen Alleman.)

Then the actual conversation about any approach to Increasing the Probability of Success for our work efforts can start, along with identifying the underlying Root Causes of any impediments to that goal that exist today and the corrective actions needed to remove them. Without knowing the root cause and corrective actions, any suggested solution has little value, as it is speculative at best and nonsense at worst.
Categories: Project Management

When the Solution to Bad Management is a Bad Solution

Herding Cats - Glen Alleman - Mon, 11/17/2014 - 01:40

Much has been written about the Estimating Problem, the optimism bias, the planning fallacy, and other related issues with estimating in the presence of Dilbert-esque management. The notion that the solution to the estimating problem is not to estimate, but to start work, measure the performance of the work, and use that to forecast completion dates and efforts, is essentially falling into the trap Steve Martin did in LA Story.

He used yesterday's weather because he was too lazy to make tomorrow's forecast.

By the way, each of those issues has a direct and applicable solution. So next time you hear someone use them as the basis of a new idea, ask if they have tried the known-to-work solutions to the planning fallacy, the estimating bias, the optimism bias, and the myriad of other project issues with known solutions.

All measuring performance to date does is measure yesterday's weather. This yesterday's weather paradigm has been well studied. If in fact your project is based on Climate then yesterday's weather is likely a good indicator of tomorrow's weather.

The problem of course with the yesterday's weather approach, is the same problem Steve Martin had in LA Story when he used a previously recorded weather forecast for the next day. 

Today's weather turned out not to be like yesterday's weather.

Those posting that stories settle down to a rhythm assume - and we know what assume means: it makes an Ass out of U and Me - that the variances in the work efforts are settling down as well. That's a hugely naive approach without actual confirmation that the variances are small enough not to impact the past performance. When you have statistical processes looking like this, from small-sample projects in the absence of an actual reference class - in this case a self-reference class - you're also being hugely naive about the possible behaviours of stochastic processes.
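The stochastic-process point can be made concrete with a small Monte Carlo sketch (the throughput numbers and backlog size below are made up): resampling a short, high-variance throughput history spreads the forecasted completion date widely, which is exactly what the "settled rhythm" assumption ignores.

```python
import random
random.seed(1)  # fixed seed so the sketch is repeatable

# Hypothetical weekly throughput (stories finished) from a short history.
weekly_throughput = [4, 9, 2, 7, 3, 8]
backlog = 60

def simulate_weeks(history, remaining, trials=10_000):
    """Resample past weeks until the backlog empties; collect durations."""
    durations = []
    for _ in range(trials):
        left, weeks = remaining, 0
        while left > 0:
            left -= random.choice(history)
            weeks += 1
        durations.append(weeks)
    return durations

runs = sorted(simulate_weeks(weekly_throughput, backlog))
p10, p90 = runs[len(runs) // 10], runs[9 * len(runs) // 10]
# With high variance, the 10th and 90th percentile finish dates differ widely.
print(f"80% interval: {p10} to {p90} weeks")
```

A single-number forecast from the average hides that interval entirely; the wider the variance in the history, the wider the interval gets.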


Then when you slice the work into same-sized efforts - this is actually the process used in the domains where we work: DOD, DOE, ERP - you're estimating future performance based on a reference class and calling it Not Estimating.

So when you hear examples of Bad Management - over-commitment of work, assigning a project manager to a project that is 100's of times larger than any that PM has ever experienced and expecting success, getting a credible estimate and cutting it in half, or any other Dilbert-style management process - ask whether the proposed remedy starts by dropping the core processes needed to increase the probability of success.

This approach is itself contrary to good project management principles, which are quite simple:

(Slides: Principles and Practices of Performance-Based Project Management®, from Glen Alleman.)

If we start with a solution to a problem of Bad Management before assuring that the Principles and Practices of Good Management are in place, we'll be paving the cow path, as we say in our enterprise, space, and defense domain. This means the solution will not have actually fixed the problem. It will not have treated the root cause of the problem, just addressed the symptoms.

There is no substitute for Good Management.

And when you hear there is a smell of bad management, with no enumeration of the root causes and the corrective actions to those root causes, remember Inigo Montoya's retort to Vizzini's statement:

You keep using that word. I do not think it means what you think it means.

Those words are dysfunction, smell, and root cause - all used without the actual enumerated root causes, an assessment of the possible corrective actions, and the resulting removal of the symptoms.

I speak about this approach from my hands-on experience working the Performance Assessment and Root Cause Analysis on programs that are in the headlines.

Related articles:
Should I Be Estimating My Work?
Estimating Guidance
Assessing Value Produced By Investments
Basis of #NoEstimates are 27 Year Old Reports
Categories: Project Management