Software Development Blogs: Programming, Software Testing, Agile, Project Management

Methods & Tools

Subscribe to Methods & Tools if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Project Management

Software Development Conferences Forecast August 2016

From the Editor of Methods & Tools - Wed, 08/24/2016 - 15:34
Here is a list of software development related conferences and events on Agile project management (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

What Are Story Points?

Mike Cohn's Blog - Tue, 08/23/2016 - 15:00

Story points are a unit of measure for expressing an estimate of the overall effort that will be required to fully implement a product backlog item or any other piece of work.

When we estimate with story points, we assign a point value to each item. The raw values we assign are unimportant. What matters are the relative values. A story that is assigned a 2 should be twice as much as a story that is assigned a 1. It should also be two-thirds of a story that is estimated as 3 story points.

Instead of assigning 1, 2 and 3, that team could instead have assigned 100, 200 and 300. Or 1 million, 2 million and 3 million. It is the ratios that matter, not the actual numbers.
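As a toy illustration (not from Cohn's post, and the item names are invented), the scale-invariance of story points can be shown in a few lines: normalizing any team's point values by their total yields the same relative weights, whatever raw scale the team picked.

```python
# Only the ratios between story point values carry information, so any
# positive rescaling of a team's point scale is equivalent.
def relative_effort(points):
    """Return each item's effort as a fraction of the total effort."""
    total = sum(points.values())
    return {name: p / total for name, p in points.items()}

team_a = {"login": 1, "search": 2, "reports": 3}
team_b = {"login": 100, "search": 200, "reports": 300}  # same ratios, bigger numbers

print(relative_effort(team_a) == relative_effort(team_b))  # True: the scales agree
```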

What Goes Into a Story Point?

Because story points represent the effort to develop a story, a team’s estimate must include everything that can affect the effort. That could include:

  • The amount of work to do
  • The complexity of the work
  • Any risk or uncertainty in doing the work

When estimating with story points, be sure to consider each of these factors. Let’s see how each impacts the effort estimate given by story points.

The Amount of Work to Do

Certainly, if there is more of something to do, the estimate of effort should be larger. Consider the case of developing two web pages. The first page has only one field and a label asking the user to enter a name. The second page has 100 fields, each also simply filled with a bit of text.

The second page is no more complex. There are no interactions among the fields and each is nothing more than a bit of text. There’s no additional risk on the second page. The only difference between these two pages is that there is more to do on the second page.

The second page should be given more story points. It probably doesn’t get 100 times more points even though there are 100 times as many fields. There are, after all, economies of scale and maybe making the second page is only 2 or 3 or 10 times as much effort as the first page.

Risk and Uncertainty

The amount of risk and uncertainty in a product backlog item should affect the story point estimate given to the item.

If a team is asked to estimate a product backlog item and the stakeholder asking for it is unclear about what will be needed, that uncertainty should be reflected in the estimate.

If implementing a feature involves changing a particular piece of old, brittle code that has no automated tests in place, that risk should be reflected in the estimate.

Complexity

Complexity should also be considered when providing a story point estimate. Think back to the earlier example of developing a web page with 100 trivial text fields with no interactions between them.

Now think about another web page also with 100 fields. But some are date fields with calendar widgets that pop up. Some are formatted text fields like phone numbers or Social Security numbers. Other fields do checksum validations as with credit card numbers.

This screen also requires interactions between fields. If the user enters a Visa card, a three-digit CVV field is shown. But if the user enters an American Express card, a four-digit CVV field is shown.

Even though there are still 100 fields on this screen, these fields are harder to implement. They’re more complex. They’ll take more time. There’s more chance the developer makes a mistake and has to back up and correct it.

This additional complexity should be reflected in the estimate provided.

Consider All Factors: Amount of Work, Risk and Uncertainty, and Complexity

It may seem impossible to combine three factors into one number and provide that as an estimate. It’s possible, though, because effort is the unifying factor. Estimators consider how much effort will be required to do the amount of work described by a product backlog item.

Estimators then consider how much effort to include for dealing with the risk and uncertainty inherent in the product backlog item. Usually this is done by considering the risk of a problem occurring and the impact if the risk does occur. So, for example, more will be included in the estimate for a time-consuming risk that is likely to occur than for a minor and unlikely risk.
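That risk-weighting step can be sketched as an expected value, probability times impact, added to the base estimate. This is a simplified model for illustration, not Mike Cohn's prescribed mechanics, and the numbers are hypothetical:

```python
# Expected extra effort from a risk = probability of the problem * its impact.
def risk_adjusted_estimate(base_effort, risks):
    """risks: list of (probability, impact_in_points) tuples."""
    expected_risk = sum(p * impact for p, impact in risks)
    return base_effort + expected_risk

# A likely, time-consuming risk adds more than a minor, unlikely one.
print(risk_adjusted_estimate(5, [(0.8, 5)]))  # 9.0
print(risk_adjusted_estimate(5, [(0.1, 1)]))  # about 5.1
```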

Estimators also consider the complexity of the work to be done. Work that is complex will require more thinking, may require more trial-and-error experimentation, perhaps more back-and-forth with a customer, may take longer to validate and may need more time to correct mistakes.

All three factors must be combined.

Consider Everything in the Definition of Done

A story point estimate must include everything involved in getting a product backlog item all the way to done. If a team’s definition of done includes creating automated tests to validate the story (and that would be a good idea), the effort to create those tests should be included in the story point estimate.

Story points can be a hard concept to grasp. But the effort to fully understand that points represent effort as impacted by the amount of work, the complexity of the work and any risk or uncertainty in the work will be worth it.

Connecting Project Benefits to Business Strategy for Success

Herding Cats - Glen Alleman - Tue, 08/23/2016 - 05:01

The current PMI Pulse report, titled Delivering Value: Focus on Benefits during Project Execution, provides some guidance on how to manage the benefits side of an IT project. But the report misses the mark on an important concept. A chart in the paper suggests metrics for the benefits.

But where do these metrics come from?

[Figure: benefit metrics chart from the PMI Pulse report]

The question is where do the measures of the benefits listed in the above chart come from? 

The answer is that they come from the strategy of the IT function. Where is the strategy defined? In the Balanced Scorecard. This is how all connections are made in enterprise IT projects. Why are we doing something? How will we recognize that it's the right thing to do? What are the measures of the outcomes, and how are they connected to each other and to the top-level strategic needs of the firm?

When you hear that we can't forecast the future benefits of our work, you can count on the firm spending a pile of money for probably not much value. Follow the steps starting on page 47 in the presentation above, build the four perspectives, and connect the initiatives:
  • Stakeholder - what does the business need in terms of beneficial outcomes?
  • Internal Processes - what governance processes will be used to produce these outcomes?
  • Learning and Growth - what people, information, and organizational elements will be needed to execute the processes that produce the beneficial outcomes?
  • Budget - what are you willing to spend to achieve these beneficial outcomes?

As always, each of these is a random variable operating in the presence of uncertainty, creating risk that it will not be achieved. As always, this means making estimates of both the beneficial outcomes and the cost to achieve them.

As on all non-trivial projects, estimating is a critical success factor. Uncertainty is unavoidable. Making decisions in the presence of uncertainty is unavoidable. Having some hope that a decision will result in a beneficial outcome requires estimating that outcome and choosing the most likely beneficial one.

Anyone telling you otherwise is working on a de minimis project.

So Let's Apply These Principles to a Recent Post

A post, Uncertainty of benefits versus costs, has some ideas that need addressing ...

  • The return on an investment is the benefits minus the costs.
    • And both are random variables subject to reducible and irreducible uncertainties.
    • Start by building a model of these uncertainties.
    • Apply that model and update it with data from the project as it proceeds.
  • Most people focus way too much on costs and not enough on benefits.
    • Why? This is bad management. This is naive management. Stop doing stupid things on purpose.
    • Risk management (of the underlying uncertainties) is how adults manage projects - Tim Lister.
    • Behave like an adult: manage the risk.
  • If you are working on anything innovative, your benefit uncertainty is crazy high.
    • Says who?
    • If you don't have some statistically confident sense of what the payoff is going to be, you'd better be ready to spend money to find out before you spend all the money.
    • This is naive project management and naive business management.
    • It's also counter to the first bullet: ROI = (Value - Cost)/Cost.
    • Having an acceptable level of confidence in both value and cost is part of the adult management of other people's money.
  • But we can't estimate based on data, it has to be guesses!
    • No. Estimates are not guesses unless they are made by a child.
    • Estimates ARE based on data. This is called Reference Class Forecasting. Parametric models also use past performance to project future performance.
    • If your cost estimate might be off by +/- 50%, but your benefit estimate could be off by +/- 95% (or more), you're pretty much clueless about what the customer wants. Or you're spending money on an R&D project to find out. This is one of those examples conjectured by inexperienced estimators. This is not how it works in any mature firm.
    • Adults don't guess, they estimate.
    • Adults know how to estimate. There are lots of books, papers, and tools.
  • So we should all stop doing estimates, right?
    • No - an estimate is a forecast and a commitment.
    • The commitment MUST have a confidence level.
    • We have 80% confidence of launching on or before the 3rd week of November 2014, carrying 4 astronauts in our vehicle to the International Space Station. This was a VERY innovative system. This is why a contract for $3.5B was awarded. This approach is applicable to ALL projects.
    • Only de minimis projects have no deadline or Not to Exceed target cost.
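Reference Class Forecasting, mentioned above, can be sketched in a few lines: collect the observed outcomes of a class of similar past items, then read the forecast off that distribution at a stated confidence level rather than offering a bare guess. The durations below are hypothetical data:

```python
# Forecast a new item from the empirical distribution of similar past items.
def forecast(reference_class, confidence):
    """Return the value not exceeded by `confidence` fraction of past outcomes."""
    ordered = sorted(reference_class)
    index = min(int(confidence * len(ordered)), len(ordered) - 1)
    return ordered[index]

past_durations_days = [12, 15, 9, 22, 14, 18, 11, 16, 20, 13]  # hypothetical data
print(forecast(past_durations_days, 0.80))  # 20: 80% of past items finished in <= 20 days
```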

All projects are probabilistic. All projects have uncertainty in cost and benefits. Estimating both cost and benefit, continuously updating those estimates, and taking action to correct unfavorable variances from plan, is how adults manage projects.

Categories: Project Management

Invoking "Laws" Without a Domain or Context

Herding Cats - Glen Alleman - Thu, 08/18/2016 - 22:31

It seems to be common to invoke Laws in place of actual facts when trying to support a point. Two I've encountered recently from some Agile and #NoEstimates advocates are:

  • Goodhart's Law
  • Hofstadter's Law

These are not Laws in the same way as the laws of physics, chemistry, or queuing theory - which is why it's so easy to misapply them, misuse them, and use them to obfuscate the situation and hide behind fancy terms that have no meaning for the problem at hand. Here are some real laws:

  • Newton's Law(s), there are three of them:
    • First law: When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by a net force.
    • Second law: In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration vector a of the object: F = ma.
    • Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.
  • Boyle's Law - For a fixed mass of gas at constant temperature, the volume is inversely proportional to the pressure: PV = constant.
  • Charles's Law - For a fixed mass of gas at constant pressure, the volume is directly proportional to the kelvin temperature: V = constant × T.
  • The 2nd Law of Thermodynamics states that the total entropy of an isolated system always increases over time, or remains constant in ideal cases where the system is in a steady state or undergoing a reversible process. The increase in entropy accounts for the irreversibility of natural processes and the asymmetry between future and past.
  • Little's Law - L = λW, which asserts that the time-average number of customers in a queueing system, L, equals the rate at which customers arrive and enter the system, λ, times the average sojourn time of a customer, W. And to be clear, the statistics of the processes in Little's Law must be IID - Independent, Identically Distributed - and stationary. That is rarely the case in software development, where Little's Law is often misused.
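As a quick hypothetical check of Little's Law on steady-state numbers (the arrival rate and sojourn time below are invented):

```python
# Little's Law: L = λW. If work items arrive at λ = 3 per day and each spends
# W = 4 days in the system on average, the average number in the system is L.
arrival_rate = 3.0    # λ: items arriving per day
sojourn_time = 4.0    # W: average days an item spends in the system
items_in_system = arrival_rate * sojourn_time  # L
print(items_in_system)  # 12.0
```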

Misuse of Goodhart's Law

This post, like many others, was stimulated by a conversation on social media. Sometimes the conversations trigger ideas that have lain dormant for a while. Sometimes I get a new idea from a word or a phrase. But most of the time, they come from a post that was either wrong, misinformed, or, worse, misrepresenting principles.

The OP claimed Goodhart's Law was the source of most of the problems with software development. See the law below. 

But the real issue with invoking Goodhart's Law has several dimensions. Goodhart's Law is named after the economist who originated it, Charles Goodhart. Its most popular formulation is: "When a measure becomes a target, it ceases to be a good measure." The law comes from a broader discussion of making policy decisions based on macroeconomic models.

Given that the structure of an econometric model consists of optimal decision rules of economic agents, and that optimal decision rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models.

What this says, again, is that when the measure becomes the target, optimizing for the target changes the very behavior the measure was meant to capture.

So first, a big question:

Is this macroeconomic model a correct operational model for software development processes - does measuring change the target?

Setting targets and measuring performance against those targets is the basis of all closed loop control systems used to manage projects. In our domain this control system is the Risk Adjusted Earned Value Management System (EVMS). EVM is a project management technique for measuring project performance and progress in an objective manner. A baseline of the planned value is established, work is performed, physical percent complete is measured, and the earned value is calculated. This process provides actionable information about the performance of the project using Quantifiable Backup Data (QBD) for the expected outcomes of the work, for the expected cost, at the expected time, all adjusted for the reducible and irreducible uncertainties of the work.
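The earned value bookkeeping described above reduces to a few arithmetic steps. This is a simplified sketch with invented numbers, not a full EVMS:

```python
# Core EVM indices: earned value (EV), cost performance index (CPI),
# and schedule performance index (SPI).
def earned_value_metrics(planned_value, actual_cost, budget_at_completion, pct_complete):
    earned_value = budget_at_completion * pct_complete  # value of work actually done
    cpi = earned_value / actual_cost     # cost efficiency: > 1 means under budget
    spi = earned_value / planned_value   # schedule efficiency: > 1 means ahead of plan
    return earned_value, cpi, spi

ev, cpi, spi = earned_value_metrics(planned_value=100_000, actual_cost=120_000,
                                    budget_at_completion=500_000, pct_complete=0.18)
print(round(ev), round(cpi, 2), round(spi, 2))  # 90000 0.75 0.9
```

Here the project is both over cost (CPI < 1) and behind schedule (SPI < 1), which is exactly the kind of unfavorable variance a closed loop control system exists to detect and correct.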

Without setting a target to measure against, we have:

  • No baseline control.
  • No measures of effectiveness.
  • No measures of performance.
  • No technical performance measures.
  • No Key Performance Parameters.

With no target and no measures of progress toward the target ... 

We have no project management, no program controls, we have no closed loop control system.

With these missing pieces, the project is doomed on day one. And then we're surprised it runs over cost and schedule, and doesn't deliver the needed capabilities in exchange for the cost and time invested.

When you hear that Goodhart's Law is the cause of project failure, you're likely talking to someone with little understanding of managing projects with a budget and a due date for the needed capabilities - you know, an actual project. What the law means in economics, not project management, is ...

... when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it. - Mario Biagioli, Nature (volume 535, page 201, 2016)

Note the term economy - not the cost, schedule, and technical performance measures of projects. For measuring the goals and activities of monetary policy, Goodhart's Law might be applicable. For managing the development of products with other people's money, probably not.

Gaming of the system is certainly possible on projects. But unlike in the open economy, those gaming the project measures can be made to stop with a simple command: stop gaming or I'll find someone else to take your place.

Misuse of Hofstadter's Law

My next favorite misused law is this one, popular among the #NoEstimates advocates who claim estimating can't be done:

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law. - Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid

Hofstadter's Law is about the development and use of self-referencing systems. The statement about how long things take is itself a self-referencing statement. Hofstadter was writing about the development of a chess-playing program - and doing so from the perspective of 1978-style software development. Game-playing programs use a look-ahead tree with branches for the moves and countermoves. The art of the program is to avoid exploring every branch of the look-ahead tree down to the terminal nodes. In actual chess, people - not computers - have the skill to know which branches to explore and which to ignore.

In the early days (before 1978), people estimated that it would be ten years until a computer was world champion. But after ten years (1988), it was estimated that that day was still ten years away.

This notion is part of the recursive Hofstadter's Law, which is what the whole book is about. The principle of Recursion and Unpredictability is described at the bottom of page 152.

For a set to be recursively enumerable (the condition for traversing the look-ahead tree of all possible moves) means it can be generated from a set of starting points (axioms) by the repeated application of rules of inference. Thus the set grows and grows, each new element being compounded somehow out of previous elements, in a sort of mathematical snowball. But this is the essence of recursion - something being defined in terms of simpler versions of itself, instead of explicitly.

Recursive enumeration is a process in which new things emerge from old things by fixed rules. There seem to be many surprises in such processes ...
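That generate-from-axioms-by-rules process can be sketched directly. The doubling and add-three rules below are hypothetical stand-ins for rules of inference:

```python
# Recursive enumeration: start from axioms and repeatedly apply rules, so each
# new element is compounded out of previously generated elements.
def enumerate_set(axioms, rules, steps):
    generated = set(axioms)
    for _ in range(steps):
        new = {rule(x) for x in generated for rule in rules}
        generated |= new
    return generated

# Numbers reachable from 1 in three steps of doubling or adding 3.
print(sorted(enumerate_set({1}, [lambda x: 2 * x, lambda x: x + 3], steps=3)))
# [1, 2, 4, 5, 7, 8, 10, 11, 14, 16]
```

Note that even with two fixed, trivial rules the set's growth is irregular, which is the "surprises in such processes" Hofstadter describes.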

So if you work on the development of software systems based on recursive enumeration, then yes - estimating when you'll have your program working is likely to be hard. Or if you work on the development of software that has no stated capabilities, no product roadmap, no release plan, and no product owner or customer with even the slightest notion of what done looks like in units of measure meaningful to the decision makers, then you can probably apply Hofstadter's Law. Yourdon calls this type of project a Death March project - good luck with that.

If not, then DO NOT fall prey to the misuse of Hofstadter's Law by those who likely have not actually read Hofstadter's book, nor have the skills and experience to understand the processes needed to produce credible estimates.

So once again, it's time to call BS when quotes are misused.

Categories: Project Management


A Growth Job

Herding Cats - Glen Alleman - Thu, 08/18/2016 - 16:23
  • Is never permanent.
  • Makes you like yourself.
  • Is fun.
  • Is sometimes tedious, painful, frustrating, monotonous, and at the same time gives a sense of accomplishment.
  • Bases compensation on productivity.
  • Is complete: One thinks, plans, manages and is the final judge of one's work.
  • Addresses real needs in the world at large - people want what you do because they need it.
  • Involves risk-taking.
  • Has a few sensible entrance requirements.
  • Ends automatically when a task is completed.
  • Encourages self-competitive excellence.
  • Causes anxiety because you don't necessarily know what you're doing.
  • Is one where you manage your time, money and people, and where you are accountable for specific results, which are evaluated by people you serve.
  • Never involves saying Thank God It's Friday.
  • Is where the overall objectives of the organizations are supported by your work.
  • Is where good judgment is one, maybe the only, job qualification.
  • Gives every jobholder the chance to influence, sustain or change organizational objectives.
  • Is when you can quit or be fired at any time.
  • Encourages reciprocity and parity between the boss and the bossed.
  • Is when we work from a sense of mission and desire, not obligation and duty.

From If Things Don't Improve Soon I May Ask You to Fire Me - Richard K. Irish

Related articles IT Risk Management Applying the Right Ideas to the Wrong Problem Build a Risk Adjusted Project Plan in 6 Steps
Categories: Project Management

The Problems with Schedules #Redux #Redux

Herding Cats - Glen Alleman - Wed, 08/17/2016 - 17:35

Here's an article, recently referenced by a #NoEstimates twitter post. The headline is deceiving: the article DOES NOT suggest we don't need deadlines, but that deadlines set without an assessment of their credibility are the source of many problems on large programs...


The Core Problem with Project Success

There are many core Root Causes of program problems. Here are 4 from research at PARCA.

Bliss Chart

  • Unrealistic performance expectations missing Measures of Effectiveness and Measures of Performance.
  • Unrealistic Cost and Schedule estimates based on inadequate risk adjusted growth models.
  • Inadequate assessment of risk and unmitigated exposure to these risks without proper handling plans.
  • Unanticipated Technical issues with no alternative plans and solutions to maintain effectiveness.

Before diving into the details of these, let me address another issue that has come up around project success and estimates. There is a common chart used to show poor performance of projects that compares the Ideal project performance with the Actual project performance. Here's a notional replica of that chart.

[Notional chart comparing Ideal and Actual project performance]

This chart shows several things

  • The notion of Ideal is just that - notional. All that line says is that this was the baseline Estimate at Completion for the project work. It says nothing about the credibility of that estimate, or the possibility that one or all of the Root Causes above are in play.
  • Then the chart shows that many projects cost more or take longer (costing more) in the sample population of projects.
  • The term Ideal is a misnomer. There is no ideal in the estimating business. Just the estimate.
    • The estimate has two primary attributes - accuracy and precision.
  • The chart (even the notional chart) usually doesn't say what the accuracy or precision is of the values that make up the line.

So let's look at the estimating process and the actual project performance.

  • There is no such thing as the ideal cost estimate. Estimates are probabilistic. They have a probability distribution function (PDF) around the Mode of the possible values from the estimate. This Mode is the Most Likely value of the estimate. If the PDF is symmetric (as shown above), the upper and lower limits are usually stated as 20/80 bounds. This is typical in our domain. Other domains may vary.
  • This says: here's our estimate with an 80% confidence.
  • So now if the actual cost, schedule, or some technical parameter falls inside the acceptable range (the confidence interval), it's considered GREEN. This range of variances addresses the uncertainty in both the estimate and the project performance.
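As an illustration of what that looks like in practice, here is a minimal Monte Carlo sketch of a probabilistic estimate with 20/80 bounds; the three-point numbers are invented for the example, not data from any program:

```python
import random

random.seed(1)  # repeatable illustration

# Assumed three-point cost estimate (in $K): low, Most Likely (mode), high
low, mode, high = 80.0, 100.0, 150.0

# Sample the probability distribution function around the Mode
samples = sorted(random.triangular(low, high, mode) for _ in range(100_000))

def percentile(sorted_vals, p):
    """Nearest-rank percentile of a pre-sorted sample."""
    return sorted_vals[round(p * (len(sorted_vals) - 1))]

p20, p80 = percentile(samples, 0.20), percentile(samples, 0.80)
print(f"Most Likely: {mode:.0f}K, 20/80 confidence bounds: [{p20:.1f}K, {p80:.1f}K]")
```

An actual cost landing between those two bounds is inside the acceptable range - GREEN in the sense described above.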

But here are three problems. First, there is no cause stated for that variance. Second, the ideal line can never be ideal. The straight line is the estimate of the cost (and schedule), and that estimate is probabilistic. So the line HAS to have a probability distribution around it - the confidence interval on the range of the estimate. The resulting actual cost or schedule may well be within the acceptable range of the estimate. Third, are the estimates being updated when work is performed or new work is discovered, and are those updates the result of changing scope? You can't state you made your estimate if the scope is changing. This is core Performance Measurement Baseline stuff we use every week where we work.

As well, since the ideal line has no probabilistic attributes in the original paper(s), as shown above, here's how we think about cost, schedule, and technical performance modeling in the presence of the probabilistic and statistical processes of all project work. †

So let's be clear. NO point estimate can be credible. The Ideal line is a point estimate. It's bogus on day one and continues to mislead as more data is captured from projects claimed to not match the original estimate. Without the underlying uncertainties (aleatory and epistemic) in the estimating model, the ideal estimates are worthless. So when the actual numbers come in and don't match the ideal estimate, there is NO way to know why. 

Was the estimate wrong (and all point estimates are wrong), or was one or all of Mr. Bliss's root causes the cause of the actual variance?

So another issue with the Ideal Line is that there are no confidence intervals around the line. What if the actual cost came in inside the acceptable range of the ideal cost? Then would the project be considered on cost and on schedule? Add to that the coupling between cost, schedule, and the technical performance as shown above. 

The use of the Ideal is Notional. That's fine if your project is Notional.

What's the reason a project or a collection of projects doesn't match the baselined estimate? That estimate MUST have an accuracy and precision number before being useful to anyone. 

  • Essentially that straight line is likely an unquantified point estimate. And ALL point estimates are WRONG, BOGUS, WORTHLESS. (Yes, I am shouting on the internet.)
  • Don't ever make decisions in the presence of uncertainty with point estimates.
  • Don't ever do analysis of cost and schedule variances without first understanding the accuracy and precision of the original estimate.
  • Don't ever make suggestions to change the processes without first finding the root cause of why the actual performance has a variance from the planned performance.

 So what's the summary so far:

  • All project work is probabilistic, driven by the underlying uncertainty of many processes. These processes are coupled - they have to be for any non-trivial project. What are the coupling factors? The non-linear couplings? Don't know these? Then there's no way to suggest much of anything about the time-phased cost and schedule.
  • Knowing the reducible and irreducible uncertainties of the project is the minimal critical success factor for project success.
  • Don't know these? You've doomed the project on day one.

So in the end, any estimate we make at the beginning of the project MUST be updated as the project proceeds. With this past performance data we can make improved estimates of future performance, as shown below. By the way, when the #NoEstimates advocates suggest using past data (empirical data) but don't apply a statistical assessment of that data to produce a confidence interval for the future estimate (a forecast is an estimate of a future outcome), they have done only half the work needed to inform those paying of the likelihood of the future cost, schedule, or technical performance.
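That other half of the work can be sketched with a simple bootstrap: resample the past throughput data to get a confidence interval on the forecast, not a single number. The sprint history and backlog size below are invented for illustration:

```python
import random

random.seed(7)  # repeatable illustration

# Assumed past performance: stories completed in each of the last 10 sprints
throughput_history = [6, 8, 5, 9, 7, 4, 8, 6, 7, 5]
backlog = 60  # stories remaining

def sprints_to_finish(history, remaining):
    """One simulated future, drawing each sprint's throughput from the past data."""
    sprints = 0
    while remaining > 0:
        remaining -= random.choice(history)
        sprints += 1
    return sprints

trials = sorted(sprints_to_finish(throughput_history, backlog) for _ in range(10_000))
p50, p80 = trials[len(trials) // 2], trials[int(0.80 * len(trials))]
print(f"50% confident in {p50} sprints, 80% confident in {p80} sprints")
```

The forecast is then a statement like "done within that many sprints with 80% confidence" - which is exactly an estimate of a future outcome.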

[Chart: improved estimates of future performance using past performance data]

So Now To The Corrective Actions of The Causes of Project Variance

If we take the 4 root causes in the first chart - courtesy of Mr. Gary Bliss, Director, Performance Assessment and Root Cause Analysis (PARCA) - let's see what the first approach to fixing them is.

Unrealistic performance expectations missing Measures of Effectiveness and Measures of Performance

  • Defining the Measures of Performance, the resulting Measures of Effectiveness, and the Technical Performance Measures of the resulting project outcomes is a critical success factor.
  • Along with the Key Performance Parameters, these measures define what DONE looks like in units of measure meaningful to the decision makers.
  • Without these measures, those decision makers and those building the products that implement the solution have no way to know what DONE looks like.

Unrealistic Cost and Schedule estimates based on inadequate risk adjusted growth models

  • Here's where estimating comes in. All project work is subject to uncertainty: Reducible (Epistemic) uncertainty and Irreducible (Aleatory) uncertainty.
  • Here's how to Manage in the Presence of Uncertainty.
  • Both these cause risk to cost, schedule, and technical outcomes.
  • Determining the range of possible values for aleatory and epistemic uncertainties means making estimates from past performance data or parametric models.

Inadequate assessment of risk and unmitigated exposure to these risks without proper handling plans

  • This type of risk is held in the Risk Register.
  • This means making estimates of the probability of occurrence, the probability of impact, the probability of the cost to mitigate, the probability of any residual risk, and the probability of the impact of this residual risk.
  • Risk management means making estimates.
  • Risk management is how adults manage projects. No risk management, no adult management. No estimating, no adult management.
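Those estimates can be sketched as a toy Risk Register entry; every number below is an illustrative assumption, and the arithmetic is the standard probability-times-impact expected-value calculation:

```python
# A toy Risk Register: probability of occurrence, cost impact if it occurs ($K),
# cost to mitigate ($K), and the probability and impact of the residual risk ($K)
risks = [
    {"name": "vendor slip", "p": 0.30, "impact": 400.0,
     "mitigation": 50.0, "p_residual": 0.10, "residual_impact": 150.0},
    {"name": "rework",      "p": 0.20, "impact": 250.0,
     "mitigation": 30.0, "p_residual": 0.05, "residual_impact": 100.0},
]

def expected_exposure(risk, mitigated):
    """Expected monetary value of a risk, with or without its handling plan."""
    if not mitigated:
        return risk["p"] * risk["impact"]
    return risk["mitigation"] + risk["p_residual"] * risk["residual_impact"]

for r in risks:
    print(f'{r["name"]}: unmitigated EMV {expected_exposure(r, False):.0f}K, '
          f'with handling plan {expected_exposure(r, True):.0f}K')
```

Comparing the two numbers per risk is what tells you whether a handling plan is worth its cost - and none of it is possible without estimating.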

Unanticipated Technical issues with no alternative plans and solutions to maintain effectiveness

  • Things go wrong - it's called development.
  • When things go wrong, where's Plan B? Maybe even Plan C.

When we hear we can't estimate, planning is hard or maybe not even needed, we can't forecast the future, let's ask some serious questions.

  • Do you know what DONE looks like in meaningful units of measure?
  • Do you have a plan to get to Done when the customer needs you to, for the cost the customer can afford?
  • Do you have the needed resources to reach Done for the planned cost and schedule?
  • Do you know something about the risk to reaching Done and do you have plans to mitigate those risks in some way?
  • Do you have some way to measure physical percent complete toward Done, again in units meaningful to the decision makers, so you can get feedback (variance) from your work to take corrective actions to keep the project going in the right direction?

The answers should be YES to these Five Immutable Principles of Project Success

If not, you're late, over budget, and have a low probability of success on Day One.

†NRO Cost Group Risk Process, Aerospace Corporation, 2003

Related articles Applying the Right Ideas to the Wrong Problem Herding Cats: #NoEstimates Book Review - Part 1 Some More Background on Probability, Needed for Estimating A Framework for Managing Other Peoples Money Are Estimates Really The Smell of Dysfunction? Five Estimating Pathologies and Their Corrective Actions Qualitative Risk Management and Quantitative Risk Management
Categories: Project Management

Software Development Linkopedia August 2016

From the Editor of Methods & Tools - Wed, 08/17/2016 - 15:25
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about team management, the (new) software crisis, saying no, software testing, user experience, data modeling, Scrum retrospectives, java microservices, Selenium tests and product backlog refinement. Blog: The Principles of Quantum Team […]

Range of Domains in Software Development

Herding Cats - Glen Alleman - Tue, 08/16/2016 - 17:45

Once again I've encountered a conversation about estimating where there was a broad disconnect between the world I work in - Software Intensive System of Systems - and our approach to Agile software development, and someone claiming things that would be unheard of here.

Here's a briefing I built to help sort out where on the spectrum you are before proceeding further, since what works in your domain may actually be forbidden in mine. 

So when someone starts stating what can or can't be done, what can or can't be known, what can or can't be a process - ask what domain do you work in?

Paradigm of agile project management from Glen Alleman Related articles Agile Software Development in the DOD How Think Like a Rocket Scientist - Irreducible Complexity Herding Cats: Value and the Needed Units of Measure to Make Decisions The Art of Systems Architecting Complex Project Management
Categories: Project Management

Value and the Needed Units of Measure to Make Decisions

Herding Cats - Glen Alleman - Sun, 08/14/2016 - 20:18

For some reason the notion of value is a big mystery in the agile community. Many blogs, tweets, and books speak about Value as the priority in agile software development:

We focus on Value over cost. We produce Value at the end of every Sprint. Value is the most important aspect of Scrum-based development.

Without units of measure of Value beyond time and money, there can be no basis of comparison between one value-based choice and another.

In the Systems Engineering world where we work, there are four critical units of measure for all we do.

  • Measures of Effectiveness - these are operational measures of success that are closely related to the achievement of the mission or operational objectives evaluated in the operational environment, under a specific set of conditions.
    • MOE's are stated in units of measure meaningful to the buyer
    • They focus on capabilities independent of any technical implementation
    • They are connected with mission success
    • MOE's belong to the End User
  • Measures of Performance - characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
    • MOP's are attributes that assure the system's capability to perform
    • They are an assessment of the system to assure it meets the design requirements to satisfy the Measures of Effectiveness
    • MOP's belong to the project
  • Technical Performance Measures - are attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal
    • TPMs assess the design process
    • They define compliance to performance requirements
    • They identify technical risk
    • They are limited to critical thresholds, and
    • They include projected performance
  • Key Performance Parameters - represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessing, or termination of the program
    • KPP's have a threshold or objective value
    • They characterize the major drivers of performance
    • The buyer defines the KPP's during the operational concept development process - KPP's say what DONE looks like

So when you read about value and don't hear about the units of measure of Effectiveness and Performance and their TPM's and KPP's, it's going to be hard to have any meaningful discussion about the return on investment for the cost needed to produce that value.

Here's how these are related...

[Chart: how MOE's, MOP's, TPM's, and KPP's are related]

Related articles Capabilities Based Planning First Then Requirements Systems Thinking, System Engineering, and Systems Management What Can Lean Learn From Systems Engineering?
Categories: Project Management

Quote of the Month August 2016

From the Editor of Methods & Tools - Wed, 08/10/2016 - 09:29
Are there unbreakable laws ruling the process of software development? I asked myself this question while reflecting on a recent project, and the answer leads to many conclusions, some already known and some more revealing. Scientific laws reflect reality and cannot be broken. They have strong implications for how we build things. For instance, there […]

The Dangers of a Definition of Ready

Mike Cohn's Blog - Tue, 08/09/2016 - 15:00

Although not as popular as a Definition of Done, some Scrum teams use a Definition of Ready to control what product backlog items can enter an iteration.

You can think of a Definition of Ready as a big, burly bouncer standing at the door of the iteration. Just as a bouncer at a nightclub only lets certain people in—the young, the hip, the stylishly dressed—our Definition-of-Ready bouncer only allows certain user stories to enter the iteration.

And, as each nightclub is free to define who the bouncers should let into the club, each team or organization is free to define its own definition of ready. There is no universal definition of ready that is suggested for all teams.

A Sample Definition of Ready

So what types of stories might our bouncer allow into an iteration? Our bouncer might let stories in that meet rules such as these:

  • The conditions of satisfaction have been fully identified for the story.
  • The story has been estimated and is under a certain size. For example, if the team is using story points, a team might pick a number of points and only allow stories of that size or smaller into the iteration. Often this maximum size is around half of the team’s velocity.
  • The team’s user interface designer has mocked up, or even fully designed, any screens affected by the story.
  • All external dependencies have been resolved, whether the dependency was on another team or on an outside vendor.
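The bouncer metaphor can be sketched as a small checklist function; the rules mirror the sample list above, and the field names and half-of-velocity threshold are illustrative assumptions, not part of Scrum:

```python
def failed_ready_rules(story, team_velocity):
    """Return the Definition of Ready rules this story fails (empty list = let it in)."""
    failures = []
    if not story.get("conditions_of_satisfaction"):
        failures.append("conditions of satisfaction not fully identified")
    points = story.get("points")
    if points is None or points > team_velocity / 2:
        failures.append("not estimated, or larger than half the team's velocity")
    if not story.get("screens_designed", True):   # only relevant if the story has screens
        failures.append("affected screens not mocked up")
    if story.get("open_dependencies"):
        failures.append("unresolved external dependencies")
    return failures

story = {"conditions_of_satisfaction": ["..."], "points": 13, "open_dependencies": []}
print(failed_ready_rules(story, team_velocity=20))  # 13 > 10: the bouncer objects
```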
A Definition of Ready Defines Pre-Conditions

A Definition of Ready enables a team to specify certain pre-conditions that must be fulfilled before a story is allowed into an iteration. The goal is to prevent problems before they have a chance to start.

For example, by saying that only stories below a certain number of story points can come into an iteration, the team avoids the problem of having brought in a story that is too big to be completed in an iteration.

Similarly, not allowing a story into the iteration that has external dependencies can prevent those dependencies from derailing a story or an entire iteration if the other team fails to deliver as promised.

For example, suppose your team is occasionally dependent on some other team to provide part of the work. Your user stories can only be finished if that other team also finishes their work—and does so early enough in the iteration for your team to integrate the two pieces.

If that team has consistently burned you by not finishing what they said they’d do by the time they said they’d do it, your team might quite reasonably decide to not bring in any story that has a still-open dependency on that particular team.

A Definition of Ready that requires external dependencies to be resolved before a story could be brought into an iteration might be wise for such a team.

A Definition of Ready Is Not Always a Good Idea

So some of the rules our bouncer establishes seem like good ideas. For example, I have no objection against a team deciding not to bring into an iteration stories that are over a certain size.

But some other rules I commonly see on a Definition of Ready can cause trouble—big trouble—for a team. I’ll explain.

A Definition of Ready can be thought of like a gate into the iteration. A set of rules is established and our bouncer ensures that only stories that meet those rules are allowed in.

If these rules include saying that something must be 100 percent finished before a story can be brought into an iteration, the Definition of Ready becomes a huge step towards a sequential, stage-gate approach. This will prevent the team from being agile.

A Definition of Ready Can Lead to Stages and Gates

Let me explain. A stage-gate approach is characterized by a set of defined stages for development. A stage-gate approach also defines gates, or checkpoints. Work can only progress from one stage to the next by passing through the gate.

When I was a young kid, my mom employed a stage-gate approach for dinner. I only got dessert if I ate all my dinner. I was not allowed to eat dinner and dessert concurrently.

As a product development example, imagine a process with separate design and coding stages. To move from design to coding, work must pass through a design-review gate. That gate is put in place to ensure the completeness and thoroughness of the work done in the preceding stage.

When a Definition of Ready includes a rule that something must be done before the next thing can start, it moves the team dangerously close to stage-gate process. And that will hamper the team’s ability to be agile. A stage-gate approach is, after all, another way of describing a waterfall process.

Agile Teams Should Practice Concurrent Engineering

When one thing cannot start until another thing is done, the team is no longer overlapping their work. Overlapping work is one of the most obvious indicators that a team is agile. An agile team should always be doing a little analysis, a little design, a little coding, and a little testing. Putting gates in the development process prevents that from happening.

Agile teams should practice concurrent engineering, in which the various activities to deliver working software overlap. Activities like analysis, design, coding, and testing will never overlap 100%—and that’s not even the goal. The goal is to overlap activities as much as possible.

A stage-gate approach prevents that by requiring certain activities to be 100% complete before other activities can start. And a definition of ready can lead directly to a stage-gate approach if such mandates are included in the Definition of Ready.

That’s why, for most teams, I do not recommend using a Definition of Ready. It’s often unnecessary process overhead. And worse, it can be a large and perilous step backwards toward a waterfall approach.

In some cases, though, I do acknowledge that a Definition of Ready can solve problems and may be worth using.

Using a Definition of Ready Correctly

To use a Definition of Ready successfully, you should avoid including rules that require something be 100 percent done before a story is allowed into the iteration—with the possible exception of dependencies on certain teams or vendors. Further, favor guidelines rather than rules on your Definition of Ready.

So, let me give you an example of a Definition of Ready rule I’d recommend that a team rewrite: “Each story must be accompanied by a detailed mock up of all new screens.”

A rule like this is a gate. It prevents work from overlapping. A team with this rule cannot practice concurrent engineering. No work can occur beyond the gate until a detailed design is completed for each story.

A better variation of this would be something more like: “If the story involves significant new screens, rough mock ups of the new screens have been started and are just far enough along that the team can resolve remaining open issues during the iteration.”

Two things occur with a change like that.

  1. The rule has become a guideline.
  2. We’re allowing work to overlap by saying the screen mockups are sufficiently far along rather than done.

These two changes introduce some subjectivity into the use of a definition of ready. We’re basically telling the bouncer that we still want young, hip and stylishly dressed people in the nightclub. But we’re giving the bouncer more leeway in deciding what exactly “stylishly dressed” means.

Large Programs Require Special Processes

Herding Cats - Glen Alleman - Wed, 08/03/2016 - 15:51

Acquisition Category 1 (ACAT 1) programs are large - large means greater than $5B, yes Billion with a B. This Integrated Program Management Conference (IPMC) 2013 presentation addresses the issues of managing these programs in the presence of uncertainty. 

Like all projects or programs, uncertainty comes in two forms - Irreducible (Aleatory) and Reducible (Epistemic). Both of these uncertainties create risk. Both are present no matter the size - from small to mega. When we hear estimates are hard, we're not good at estimating, estimates are possibly misused, or any other dysfunction around estimating, these are just symptoms of the problem. Trying to fix the symptom does little to actually fix the problem. Find the root cause, fix that, and the symptom then has a probability of going away.

This may look like a unique domain, but the core principles of managing in the presence of uncertainty are immutable.

Forecasting cost and schedule performance from Glen Alleman
Categories: Project Management

Agile for Large Scale Government Programs

Herding Cats - Glen Alleman - Wed, 08/03/2016 - 01:32

It would seem counterintuitive to apply Agile (Scrum) to large Software Intensive Systems of Systems. But it's not. Here's how we do it with success.

Agile in the government from Glen Alleman Related articles The Microeconomics of a Project Driven Organization GAO Reports on ACA Site All Project Work is Probabilistic Work
Categories: Project Management

Board Tyranny in Iterations and Flow

I was at an experience report at Agile 2016 last week, Scaling Without Frameworks-Ultimate Experience Report. One of the authors, Daniel Vacanti, said this:

Flow focuses on unblocking work. Iterations (too often) focus on the person doing the work.

At the time, I did not know Daniel’s twitter handle. I now do. Sorry for not cc’ing you, Daniel.


Possible Scrum Board. Your first column might say “Ready”

Here’s the issue. Iteration-based agile, such as Scrum, limits work in progress by managing the scope of work the team commits to in an iteration. Scrum does not say, “Please pair, swarm or mob to get the best throughput.”

When the team walks the board asking the traditional three questions, it can feel as if people are pointing fingers. “Why aren’t you done with your work?” Or, “You’ve been working on that forever…” Neither of those questions/comments is helpful. In Manage It! I suggested iteration-based teams change the questions to:

  • What did you complete today?
  • What are you working on now?
  • What impediments do you have?

Dan and Prateek discussed the finger-pointing, blame, and inability to estimate properly as problems. The teams decided to move to flow-based agile.


Possible Kanban board. You might have a first column, “Analysis”

In flow-based agile, the team creates a board of their flow and WIP (work in progress) limits. The visualization of the work and the WIP limits manage the scope of work for the team.
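A minimal sketch of how WIP limits enforce that scope on a flow board (the column names and limits here are illustrative):

```python
class FlowBoard:
    """Toy kanban board: a story may enter a column only while under its WIP limit."""
    def __init__(self, wip_limits):
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def pull(self, story, column):
        """Try to pull a story into a column; refuse when the WIP limit is reached."""
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False   # at the limit: swarm on existing work instead of starting more
        self.columns[column].append(story)
        return True

board = FlowBoard({"Analysis": 2, "Develop": 3, "Test": 2})
for story in ["story-1", "story-2", "story-3"]:
    assert board.pull(story, "Develop")
print(board.pull("story-4", "Develop"))  # False: the limit pushes the team to finish first
```

Iteration-based agile caps work in progress by limiting the scope committed to the sprint instead; the mechanism differs, but the intent is the same.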

Often—and not all the time—the team learns to pair, swarm, or mob because of the WIP limits.

Iteration-based agile and flow-based agile both manage the team’s work in progress. Sometimes, iteration-based agile is more helpful because the iterations provide a natural cadence for demos and retrospectives.

Sometimes, flow-based agile is more helpful because the team can manage interruptions to the project-based work.

Neither is better in all cases. Both have their uses. I use personal kanban inside one-week iterations to manage my work and make sure I reflect on a regular basis. (I recommend this approach in Manage Your Job Search.)

In the experience report, Daniel and Prateek Singh spoke about the problems they encountered with iteration-based agile. In iterations, the team focused on the person doing the work. People took stories alone. The team had trouble estimating the work so that it would fit into one iteration. When the team moved to flow-based agile, the stories settled into a more normalized pattern. (Their report is so interesting. I suggest you read it. Page down to the attachment and read the paper.)

The tyranny was that the people in teams each took a story alone. One person was responsible for a story. That person might have several stories open. When they walked the board, it was about that one person’s progress. The team focused on the people, not on moving stories across the board.

When they moved to flow, they focused on moving stories across the board, not the person doing the stories. They moved from one person/one story to one team/a couple of stories. Huge change.

One of the people who read that tweet was concerned that it was an unfair comparison between bad iterations and good flow. What would bad flow look like?

I’ve seen bad flow look like waterfall: the team does analysis, architecture, design specs, functional specs, coding, testing in that order. No, that’s not agile. The team I’m thinking of had no WIP limits. The only good thing about their board was that they visualized the work. They did not have WIP limits. The architect laid down the law for every feature. The team felt as if they were straightjacketed. No fun in that team.

You can make agile work for you, regardless of whether you use iterations or kanban. You can also subvert agile regardless of what you use. It all depends on what you measure and what the management rewards. (Agile is a cultural change, not merely a project management framework.)

If you have fallen into the “everyone takes their own story” trap, consider a kanban board. If you have a ton of work in progress, consider using iterations and WIP limits to see finished features more often. If you never retrospect as a team, consider using iterations to provide you a natural cadence for retrospectives.

As you think about how you use agile in your organization, know that there is no one right way for all teams. Each team needs the flexibility to design its own board and see how to manage the scope of work for a given time, and how to see the flow of finished features. I recommend you consider what iterations and flow will buy you.

Categories: Project Management

Why We Don't Need to Question Everything

Herding Cats - Glen Alleman - Tue, 08/02/2016 - 16:15

It's popular in some agile circles to question everything. This raises the question - is there a governance process in place? No? Then you're pretty much free to do whatever you want with the money provided to you to build software. If there is a Governance process in place, then that means there are decision rights in place as well. These decision rights almost always belong to the people providing the money for you to do your work. 

Questioning those governance processes, and questioning the principles, processes, and procedures that implement them, usually starts with the owners of the governance process. If there is a mechanism for assessing the efficacy of the governance, that's where the questioning starts. Go find that place, put in your suggestions for improvement, become engaged with the Decision Rights Owners, and then provide your input.

Standing outside the governance process shouting challenge everything is tilting at windmills.

So when you hear that phrase, ask do you have the right to challenge the governance process?

Related articles Planning is the basis of decision making in the presence of uncertainty What is Governance? Why We Need Governance
Categories: Project Management

Systems, Systems Engineering, Systems Thinking

Herding Cats - Glen Alleman - Fri, 07/29/2016 - 15:27

On our morning road bike ride, the conversation came around to Systems. Some of our group are like me - techies - a few others are business people in finance and ops. The topic was what's a system and how does that notion impact our world. The retailer in the group had a notion of a system - grocery stores are systems that manage the entire supply chain from field to basket.

Here's the reading list that has served me well, for those interested in Systems:

  • Systems Engineering: Coping with Complexity, Richard Stevens, Peter Brook, Ken Jackson, Stuart Arnold
  • The Art of Systems Architecting, Mark Maier and Eberhardt Rechtin
  • Systems Thinking: Coping with 21st Century Problems, John Boardman and Brian Sauser
  • Systemantics: How Systems Work and Especially How They Fail, John Gall
  • The Art of Modeling Dynamic Systems: Forecasting for Chaos, Randomness and Determinism, Foster Morrison
  • Systems Thinking: Building Maps for Worlds of Systems, John Boardman and Brian Sauser
  • The Systems Bible: The Beginner's Guide to Systems Large and Small, John Gall
  • A Primer for Model-Based Systems Engineering, 2nd Edition, David Long and Zane Scott
  • Thinking in Systems: A Primer, Donella Meadows

These are all books with actionable outcomes.

Systems of information-feedback control are fundamental to all life and human endeavor, from the slow pace of biological evolution to the launching of the latest space satellite ... Everything we do as individuals, as an industry, or as a society is done in the context of an information-feedback system. - Jay W. Forrester

Related articles

  • Systems Thinking, System Engineering, and Systems Management
  • Estimating Guidance
  • Can Enterprise Agile Be Bottom Up?
  • Essential Reading List for Managing Other People's Money
  • Systems Thinking and Capabilities Based Planning
  • Herding Cats: Systems, Systems Engineering, Systems Thinking
  • What Can Lean Learn From Systems Engineering?

Categories: Project Management

Assessing Value Produced in Exchange for the Cost to Produce the Value

Herding Cats - Glen Alleman - Fri, 07/29/2016 - 04:56

A common assertion in the Agile community is "we focus on Value over Cost."

Both are equally needed. Both must be present to make informed decisions. Both are random variables. As random variables, both need estimates to make informed decisions.

To assess the value produced by the project, we first must have targets to steer toward. A target Value must be measured in units meaningful to the decision makers - Measures of Effectiveness and Performance that can monetize this Value.

Value cannot be determined without knowing the cost to produce that Value. This is fundamental to the Microeconomics of Decision making for all business processes.

The Value must be assessed using...

  • Measures of Effectiveness - operational measures of success that are closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions.
  • Measures of Performance - measures that characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
  • Key Performance Parameters - measures that represent capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.
  • Technical Performance Measures - attributes that determine how well a system or system element satisfies or is expected to satisfy a technical requirement or goal.

Without these measures attached to the Value, there is no way to confirm that the cost to produce the Value will break even. The Return on Investment to deliver the needed Capability is, of course:

ROI = (Value - Cost)/Cost

So the numerator and the denominator must have the same units of Measure - usually dollars, perhaps hours. So when we hear ...

The focus on value is what makes the #NoEstimates idea valuable - ask: in what units of measure is that Value? Are those units of measure meaningful to the decision makers? Are those decision makers accountable for the financial performance of the firm?
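Since both Value and Cost are random variables needing estimates, the ROI above is itself a random variable. As an illustrative sketch only (not the author's method), a small Monte Carlo simulation can turn ranged estimates of Value and Cost into an expected ROI and a probability of breaking even; the triangular distributions and the dollar figures below are assumptions chosen for demonstration.

```python
import random

def roi(value: float, cost: float) -> float:
    """ROI = (Value - Cost) / Cost, with both terms in the same units."""
    return (value - cost) / cost

def simulate_roi(value_low, value_high, cost_low, cost_high,
                 trials=10_000, seed=42):
    """Monte Carlo estimate of ROI when Value and Cost are uncertain.

    Triangular distributions (mode at the midpoint) are an illustrative
    assumption; any distribution fitted to the estimates would do.
    Returns (mean ROI, probability that ROI > 0, i.e. break-even or better).
    """
    rng = random.Random(seed)
    samples = [
        roi(rng.triangular(value_low, value_high),
            rng.triangular(cost_low, cost_high))
        for _ in range(trials)
    ]
    mean = sum(samples) / trials
    p_breakeven = sum(s > 0 for s in samples) / trials
    return mean, p_breakeven

# Hypothetical project: Value estimated at $400k-$700k, Cost at $300k-$500k.
mean_roi, p_positive = simulate_roi(400_000, 700_000, 300_000, 500_000)
print(f"expected ROI: {mean_roi:.2f}, P(ROI > 0): {p_positive:.2f}")
```

The point of the sketch is the one the post makes: without estimates of both the numerator and the denominator, in the same units, there is nothing to compute and no way to confirm break-even.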


Related articles

  • The Reason We Plan, Schedule, Measure, and Correct
  • Estimating Processes in Support of Economic Analysis
  • The Microeconomics of Decision Making in the Presence of Uncertainty

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Fri, 07/29/2016 - 04:54

If you can't explain what you are doing as a process, then you don't know what you are doing - Deming

Process is the answer to the question "How do we do things around here?" All organizations should have a widely accepted Process for making decisions. "A New Engineering Profession is Emerging: Decision Coach," IEEE Engineering Management Review, Vol. 44, No. 2, Second Quarter, June 2016

Related articles

  • Plan Management
  • Three Increasingly Mature Views of Estimate Making in IT Projects
  • What's in a Domain?

Categories: Project Management