
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Herding Cats - Glen Alleman
Performance-Based Project Management® - an integrated approach to Principles, Practices, and Processes to increase the Probability of Project Success. ab actu ad posse valet illatio

Can There Be Actionable Suggestions Without Evidence of Them Working?

Sat, 07/19/2014 - 01:38

Visiting the Montana State Museum of the Rockies this weekend, I came across this sign in an exhibit.

Now writing software for money is not this kind of science, but it is closely related to engineering and the enablement of engineering processes in our domain - things that fly away, swim away, drive away, and the enterprise IT systems that support those outcomes.

Evidence

When we hear about some new way to do something around managing projects that spend other people's money, we need to ask the questions posed by the sign above.

Is there any evidence that the suggested way - this new alternative of doing something - has the desired outcomes?

No? Then it's going to be difficult for those of us working in a domain that provides mission critical solutions - ERP, embedded software, infrastructure that other systems depend on - to know how to assess those suggestions.

The process of asking and answering a question like that is found in the Governance paradigm. Since our role is to be stewards of our customer's money in the delivery of value in exchange for that money, it's a legitimate question and deserves a legitimate answer. Without an answer, or at least an answer that can be tested outside the personal anecdotal experience of the proposer, it tends to be unsubstantiated opinion.

Categories: Project Management

The Myth of Incremental Development

Thu, 07/17/2014 - 17:07

In the agile world there is a common notion that incremental delivery is a desired approach. Many tout rapid release, even multiple releases a day.

The question is twofold: can the customer accept the release into use, and does the customer have the ability to make use of the incremental capabilities of these releases?

Let's start with the incremental release. I know the picture to the left is considered a metaphor by some. But as a metaphor it's weak. Let's look at a few previous posts: Another Bad Agile Analogy and Use, Misuse, and Danger of Metaphor. Each of these presents some issues with using metaphors.

But let's be crystal clear here. Incremental development in the style of the bottom picture may be a preferred method, once the customer agrees. Much of the rhetoric around agile assumes the customer can behave in this way, without looking outside the anecdotal and many times narrow experiences of those making that suggestion. For agile to succeed in the enterprise and mission critical product and project domain, testing the applicability of both pictures is needed.

Ask the customer: are they willing to use the skateboard while waiting for the car? Otherwise you have a solution looking for a problem to solve.

Now to the bigger issue. In the picture above, the top series is a linear development and the bottom an iterative or incremental one, depending on where you work. Iterating on the needed capabilities to arrive at the car. Or incrementally delivering a mode of transportation.

The bottom increment shows five vehicles produced by the project. The core question that remains unanswered is: does the customer want a skateboard, scooter, bicycle, motorcycle, and then a car for transportation? If yes, no problem. But what if the customer actually needs a car to conduct business, drive the kids to school, or get to the airport for a vacation trip?

The failure of this metaphor, and most metaphors, is that they don't address the reason for writing software for money:

Provide capabilities for the business to accomplish something - Capabilities Based Planning

The customer didn't buy requirements, software, hardware, stories, features, or even the agile development process. They bought a capability to do something. Here's how to start that process.

Capabilities Based Planning

Here's the outcome for an insurance provider network enrollment ERP system.

Capabilities Map

Producing skateboards, then scooters, then bicycles, and then finally the car isn't going to meet the needs of the customer if they want a car or a fleet of cars. In the figure above, the Minimal Viable Features aren't really features, they are capabilities. A statement like "this is a minimal viable product" is likely a good description of a Beta Feature. It could be connected to a business capability, but without a Capabilities Based Plan like the one above, we can't really tell.
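The distinction can be made concrete. Here's a minimal sketch in Python (the capability and feature names are hypothetical placeholders, not taken from the map above) of the key point: done is assessed at the capability level, and a pile of finished features delivers nothing until the capability they roll up to is whole.

```python
# Minimal sketch of a capabilities-based plan: features roll up to
# capabilities, and "done" is assessed at the capability level.
# The capability and feature names are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    done: bool = False

@dataclass
class Capability:
    name: str
    features: list = field(default_factory=list)

    def is_delivered(self) -> bool:
        # A capability is usable only when every feature it needs is done.
        return all(f.done for f in self.features)

enroll = Capability("Enroll a provider in the network",
                    [Feature("Sign up", done=True),
                     Feature("Verify credentials", done=True),
                     Feature("Publish to provider directory")])  # not done

print(enroll.is_delivered())  # False - partial features, no usable capability
```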


So How Did We Get In This Situation?

Here's a biased opinion, informed by my several decades of experience writing code and managing others who write code: we're missing the systems engineering paradigm in commercial software development. That is, for software development of mission critical systems - and Enterprise IT is an example of a mission critical system.


The paradigm of Systems Engineering fills thousands of pages and dozens of books, but it boils down to this.

You need to know what DONE looks like in units of measure meaningful to the decision makers. Those units start with Measures of Effectiveness and Measures of Performance.

Each of those measures is probabilistic, driven by the underlying statistical processes of the system. This means you must be able to estimate not only cost and schedule, but how that cost and schedule will deliver the needed system capabilities, measured in MOEs and MOPs.

SE in One Page

Categories: Project Management

Control Systems - Their Misuse and Abuse

Wed, 07/16/2014 - 03:23

Project Management is a control system, subject to the theory and practice of control systems. The Project Management Control System provides for the management of systems and processes - cost estimating, work scope structuring and authorization, scheduling, performance measurement, and reporting - for assessing the progress of spending other people's money.

The level of formality for these processes varies according to domain and context - from sticky notes on the wall for a 3-person internal warehouse locator website at a plastic shoe manufacturer, to a full DCMA-validated ANSI-748C Earned Value Management System (EVMS) on a $1B software development project, and everything in between.

The key here is that if we're going to say we have a control system, it needs to be a Closed Loop control system, not an Open Loop control system. An Open Loop system is called train watching: we sit by the side of the tracks, count the trains going by, and report that number. How many trains should go by, or could go by? We don't know. That's what's shown in the first picture. We sample the data, we apply that data to the process, and it generates an output. There is no corrective action; it's just a signal based on the past performance of the system. Some examples of Open Loop control as implemented in the first picture:

  • A light switch. Turn it on and the light goes on. Turn it off and the light goes off. Turn it on and the light doesn't go on? We don't know why. It could be the switch, the bulb could be burned out, or the power could be out in the neighborhood.
  • Same for a faucet, the burner on the stove, or a simple clothes dryer when you use the timer rather than the clothes-are-dry sensor.
  • The really cool shade we just installed for the upper deck. Push the button and it lowers to a preset position; push it again and it goes back to the storage position.

The key attributes of Open Loop Control:

  • It is a non-feedback system - a type of continuous control system in which the output has no influence or effect on the control action of the input signal.
  • In an open-loop control system the output is neither measured nor fed back for comparison with the input.
  • An open-loop system is expected to faithfully follow its input command or set point regardless of the final result.
  • An open-loop system has no knowledge of the output condition so cannot self-correct any errors it could make when the preset value drifts, even if this results in large deviations from the preset value.

The key disadvantage of an open-loop system is that it is poorly equipped to handle disturbances or changes in conditions that reduce its ability to complete the desired task.

A closed loop system behaves differently. Here are some examples of controllers used in the second picture:

  • Thermostat for the furnace or air conditioner - set a target temperature and it holds that temperature essentially constant.
  • Refrigerator cold/hot setting - keeps the food in the refrigerator at a preset temperature.
  • Same for the temperature setting of the oven.

The key attributes of Closed Loop Control, shown in the second picture:

  • Closed-loop systems are designed to automatically achieve and maintain the desired output condition by comparing it with the actual condition.
  • This is done by generating an error signal which is the difference between the output and the reference input.
  • A "closed-loop system" is a fully automatic control system in which the control action depends on the output in some way.

Because the closed-loop system has knowledge of the output condition - in the case of projects, the desired cost, schedule, and technical performance - it is equipped to handle system disturbances or changes in conditions that may reduce its ability to complete the desired task.

When we have a target cost - defined on day one by the target budget - a planned need date, and some technical performance target, closed loop control provides the feedback needed to make decisions along the way, when actual performance is not meeting our planned or needed performance.

Open Closed Loop

In the end it comes back to the immutable principle of microeconomics. When we are spending money to produce value, we need to make decisions about which is the best path to take - which of multiple options to choose. In order to do this we need to know something about the cost, schedule, and performance forecasts for each of the choices. Then we need feedback from actual performance to compare with our planned performance to create an error signal. With this error signal, we can then DECIDE what corrective actions to take.

Without this error signal, derived from the planned values compared with the actual values, there is no information with which to decide. Sure, we can measure what happened in the past and decide, just like we can count trains and make some decision. But that decision is not based on a planned outcome, a stated need, or an Estimated Arrival Time, for example.

Without that estimated arrival time, we can't tell if the train is late or early, just that it arrived. Same with the project measurements.

  • We averaged 4.5 stories per iteration. But how many stories per iteration did we need to finish the project on the planned day with the planned capabilities?

Open Loop provides no feedback, so you're essentially driving in the rear view mirror, when you should be looking out the windshield deciding where to go next to escape the problem.
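Here's a minimal sketch of closing that loop, using the stories-per-iteration measure from above. The planned rate and the actuals are illustrative numbers, not from any real project.

```python
# Sketch: closed-loop project control. The baseline supplies a planned rate;
# the error signal (plan minus actual) drives a corrective decision.
# The numbers are illustrative only.

planned_rate = 4.5                    # stories per iteration, from the baseline
actuals = [5, 4, 3, 3, 2]             # measured each iteration

for i, actual in enumerate(actuals, start=1):
    error = planned_rate - actual     # the error signal
    if error > 0:
        # Closed loop: the error drives an action, not just a report.
        print(f"Iteration {i}: {error:.1f} stories behind plan -> "
              f"add staff, descope, or re-baseline")
    else:
        print(f"Iteration {i}: on or ahead of plan")

# Open loop would stop at printing 'actuals' - counting the trains going by,
# with no target to compare them against.
```

The actuals alone are the rear-view mirror; the error signal against the baseline is the windshield.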


Categories: Project Management

Quote of the Day - Writing SW For Money Is Micro-Economics

Wed, 07/16/2014 - 03:12

We find no sense in talking about something unless we specify how we measure it. A definition by the method of measuring a quantity is the one sure way of avoiding talking nonsense - Sir Hermann Bondi

So when we hear a suggested approach to solving any problem, what are the units of measure of the discussion elements, the inputs to that discussion, and the outcomes?

Micro-economics is defined as

A branch of economics that studies the behavior of individuals and small impacting players in making decisions on the allocation of limited resources. Typically, it applies to markets where goods or services are bought and sold.

Certainly in the software development business, goods are bought and sold. Software is developed in exchange for money. The resulting product is then put to work to generate some monetized value for the buyer. The value exchanged for the cost of that value is usually assessed as the Return on Investment:

ROI = (Value - Cost of the Value) / Cost of the Value
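As a tiny worked example (the dollar figures are hypothetical):

```python
def roi(value: float, cost: float) -> float:
    """ROI = (Value - Cost of the Value) / Cost of the Value."""
    return (value - cost) / cost

# Hypothetical numbers: $1.2M of monetized value for $800K of cost.
print(f"ROI = {roi(1_200_000, 800_000):.0%}")  # ROI = 50%
```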

Let's start with some basic concepts of writing software for money. I'd suggest these are immutable concepts in a for-profit business world. The book on the left, Economics of Iterative Development, is a good start, but there are other materials about the economics of software development. The one that comes to mind is Software Engineering Economics, Barry Boehm, Prentice Hall. While some have suggested this book is dated and no longer applicable to our modern software development paradigm, that could only be true if our modern paradigm were not subject to the principles of micro-economics. That is unlikely, so let's proceed with applying the principles of micro-economics to the problem at hand. That problem is:

How can we know the cost, schedule, and performance of a software product with sufficient confidence in order to make business (micro-economic) decisions about how to manage the limited resources of the project?

Those resources are of course the variables we are trying to determine: Cost, Schedule, and Performance. Each of these involves resources. Cost in software development is primarily driven by staff. Schedule is driven by the efficacy of that staff's ability to write code. And the performance of the resulting outcomes is driven by the skills and experience of the staff, who are consuming the funds (cost) provided to the project.

So if we look at the basics of the economics of writing software for money, we'll see some simple principles.

If it's a product development effort, someone in the marketing and sales department has a planned release date. This date and the projected revenue from that release are in the sales plan. These are random numbers of course - so I won't repeat that - but all numbers in business are random numbers until they get entered into the General Ledger.

  • These projected revenue numbers are based on releasing the product with the features needed for customers to buy it.
  • The cost to develop that product is subtracted from the revenue - usually in some complex manner - to produce the retained earnings attributed to the product.
  • This of course is the simple formula:
    • Earnings = Revenue - Cost
    • Where cost is categorized in many ways: some attributable to the development of the product, some to overhead, benefits, fringe, and other direct costs (ODC).
    • Revenue recognition is a continuously evolving issue with taxes and performance reporting in actual firms.
    • But for the purposes here, the simple formula will do. Managerial Finance, Brigham and Weston, is a good place to look for the details.

If it's a service effort, the customer has engaged the firm to perform some work: writing software, doing consulting around that software, integrating existing software, or some combination of these and other software related services. Managing the Professional Services Firm was mandatory reading, along with other internally written company books, when I worked for a large Professional Services (PS) firm. In our small firm now, we still refer to that book.

  • Some type of problem needs to be solved involving a solution that uses software (and maybe hardware), processes, and people.
  • The cost, schedule, and capabilities of the solution need to be worked out in some way in order to know what DONE looks like. Anyone subscribing to this blog knows that Knowing What Done Looks Like is a critical success factor for any effort.
  • But in the case of a services solution, this knowing is a bit more difficult than for the product solution, since the customer may not know themselves.
  • This is the golden opportunity for incremental and iterative development.
  • But in the end the PS customer still needs to know the cost, schedule, and what will be delivered, because that person has a similar ROI calculation to do for those funding the PS work.
Categories: Project Management

The Power of Misattributed and Misquoted Quotes

Tue, 07/15/2014 - 15:30

Warning: this is an opinion piece.

In a conversation this week, the quote "Insanity is doing everything the same way and expecting a different outcome" - or some variant of it - was attributed to Einstein. As if attributing it to Einstein makes it somehow more credible than attributing it to Dagwood Bumstead.

Well it turns out it is not a quote from dear olde Albert. It is also mis-attributed to Ben Franklin, Confucius, and a Chinese proverb.

The first printed record of this quote is in the 1981 Narcotics Anonymous approval version of their handbook. No other printed record is found. 

Why is this Seemingly Trivial Point Important?

We toss around platitudes, quotes, and similar phrases in a weak and useless attempt to establish the credibility of an idea by referencing some other work. Like quoting a 34-year-old software report from NATO, when only mainframes and FORTRAN 77 were used, to show the software crisis and try to convince people it's the same today. Or using un-vetted, un-reviewed charts and graphs from an opinion piece in a popular technical magazine as the basis of statistical analysis of self-selected data.

Is it world shaking news? No. Is the balance of the universe disrupted? Hardly.

But it shows a lack of mental discipline that leaks into the next level of thought process. It's always the little things that count; get those right and the big things follow - that is a quote from somewhere. It also shows laziness of thought, the use of platitudes in place of the hard work needed to solve nearly intractable problems, and an all-around disdain for working on those hard problems. It's a sign of our modern world: look for the fun stuff, the easy stuff, and the stuff we don't really want to be held accountable for if it goes wrong.

I will use the Edwin Land quote, though, which is properly attributed to him:

Don't undertake a project unless it is manifestly important and nearly impossible. 

That doesn't sound like much fun, let's work on small, simple, and easy projects and tell everyone how those successful processes we developed can be scaled to the manifestly important and nearly impossible ones.

Categories: Project Management

The Failure of Open Loop Thinking

Mon, 07/14/2014 - 14:12

When there are charts showing an Ideal line, or a chart of samples of past performance - say, software delivered - in the absence of a baseline for what the performance of the work effort or duration should have been, was planned to be, or even better could have been, this is called Open Loop control.

The issue of forecasting the Should, Will, Must cost problem has been around for a long time. This work continues in DOD, NASA, Heavy Construction, BioPharma, and other high risk, software intensive domains.

When we see graphs where the baseline to which the delays or cost overages are compared is simply labeled Ideal (like the chart below), it's a prime example of How to Lie With Statistics, Darrell Huff, 1954. This can be overlooked in an un-refereed opinion paper in an IEEE magazine, or a self-published presentation, but a bit of homework will reveal that charts like the one below are simply bad statistics.


This chart is now being used as the basis of several #NoEstimates presentations, which further propagates the misunderstandings of how to do statistics properly.

Todd does have other papers that are useful - Context Adaptive Agility is one example from his site. But this often used and misused chart is not an example of how to properly identify problems with estimates.

Here are some core issues:

  • If we want to determine something about a statistical process, we of course need to collect data about that process. This data is empirical - a much misused term itself - showing what happened over time. A time series of samples.
  • To compute a trend, we can of course draw a line through the population of data, like above.
  • Then we can compare this data with some reference data to determine the variances between the reference data and the data under measurement.

Here's where the process goes in the ditch - literally.

  • The reference data has no basis of reference. It's just labeled ideal, meaning a number that was established with no basis of estimate. Just "this is what was estimated; now let's compare actuals to it, and if the actuals matched the estimate, let's call it ideal."
  • Was that ideal credible? Was it properly constructed? What's the confidence level of that estimate? What's the allowable variance of that estimate that can still be considered OK (within the upper and lower limits of OK)? None of those questions and their answers are there. It's just a line. (A sketch of such an allowable-variance band follows.)
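For contrast, here's a minimal sketch of what a baseline with a basis looks like: a reference value with an allowable-variance band derived from past performance. The sample data and the two-sigma band are illustrative assumptions, borrowing the usual control-chart convention.

```python
# Sketch: a credible baseline carries an allowable-variance band, not a bare
# "ideal" line. Reference data and the 2-sigma band are illustrative.
import statistics

past_durations = [11, 13, 12, 14, 12, 13, 11, 12]   # reference class (days)
baseline = statistics.mean(past_durations)
sigma = statistics.stdev(past_durations)
lower, upper = baseline - 2 * sigma, baseline + 2 * sigma

for sample in [12, 15, 19]:                          # new observations
    status = "OK" if lower <= sample <= upper else "investigate"
    print(f"{sample} days vs baseline {baseline:.1f} "
          f"[{lower:.1f}, {upper:.1f}]: {status}")
```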

We can use the ne plus ultra put-down of theoretical physicist Wolfgang Pauli: "This isn't right. It's not even wrong." As well, the projects were self-selected, and like the Standish Report, self-selected statistics are covered in the How to Lie book.

It's time to look at these sorts of conjectures in the proper light. They are Bad Statistics, and we can't draw any conclusion from the data, since the baseline to which the sampled values are compared "isn't right - it's not even wrong." We have no way of knowing why the sampled data has a variance from the bogus ideal:

  • Was the original estimate simply naïve?
  • Was the project poorly managed?
  • Did the project change direction, with the ideal estimate never updated?
  • Were the requirements, productivity, risks, funding stability, and all the other project variables held constant while assessing the completion date? If not, the fundamental principles of experiment design were violated. These principles are taught in every design of experiments class in every university on the planet. Statistics for Experimenters is still on my shelf, George Box being one of its authors - the source of the often misused and hugely misunderstood statement "all models are wrong, some are useful."

So time to stop using these charts and start looking for the Root Causes for the estimating problem.

  • No reference classes
  • No past performance
  • No parametric models
  • No skills or experience constructing credible estimates
  • No experience with estimating tools, processes, and databases (and there are many, for both commercial and government software intensive programs)
  • Political pressure to come up with the right number
  • Misunderstanding of the purpose of estimating - provide information to make decisions.

A colleague (former NASA cost director) has three reasons for cost, schedule, and technical shortfalls

  1. They didn't know
  2. They couldn't know
  3. They didn't want to know

Only the 2nd is a credible reason for project shortfalls in performance.

Without a credible, calibrated, statistically sound baseline, the measurements and the decisions based on those measurements are Open Loop.

You're driving your car with no feedback other than knowing you ran off the road after you ran off the road, or you arrived at your destination after you arrived at your destination.

Just like the post Control Systems - Their Misuse and Abuse.

 

Categories: Project Management

Software Development is Like Play Writing

Sun, 07/13/2014 - 23:23

I stole the idea for this blog post from Stephen Wilson's post of a similar name. And like all good borrowings, I've added, subtracted, and made some changes, because everything is a remix.

I don't know Stephen, but his post is provocative. I'm assigned to a client outside my normal Defense Department and NASA comfort zone. The client needs a Release Management System integrated with a Change Control Board. Both are the basis of our defense and space software world. This client is trying to use agile, but has little in the way of the discipline needed to actually make it work.

"Software development is like play writing" is a beautiful concept that can be applied to the chaos of the new client and also connected back to our process driven space and defense world - which, by the way, makes heavy use of agile, but without all the drama of the "it's all about me" developer community.

Let's start here:

In both software and play writing, structure is almost entirely arbitrary. Because neither obey the laws of physics, the structure of software and plays comes from the act of composition. A good software engineer will know their composition from end to end. But another programmer can always come along and edit the work, inserting their own code as they see fit. It is received wisdom in programming that most bugs arise from imprudent changes made to old code.

It turns out of course neither of those statements is correct in the sense we may think. There is the act of composition, but that composition needs a framework in which to be developed. Otherwise we wouldn't know what we're watching or know what we're developing until it is over. And neither is actually how plays are written or software is written. It may be an act of composition, but it is not an act of spontaneous creation.

Let's start with play writing. It may be that writing a play where the structure is entirely arbitrary is possible, but it's unlikely that would be a play you'd pay money to see. A Harold Pinter play may be unstructured, Waiting for Godot may be unstructured, but that's not really how plays are written. They follow a structured approach - there is a law of physics for play writing.

  • You need characters - a protagonist and the supporting cast
  • You need a setting - where are we and what's going on that supports the story
  • You need some sort of imbalance between the characters and the setting
  • You need some way to restore the balance, or to leave it hanging
  • You need to engage your audience in some way that will resonate with their personal feelings

That's the story of the play. To actually write a play, here's a well-traveled path to success. These guidelines are for the outcome of the writing effort, starting at the beginning.

  • A rough title is needed to anchor the play in the reader's mind.
  • An action statement, describing what the characters are going to do in the play, as a group and as individuals. How are they going to change, and who changes?
  • The form of the play, describing the organization of the characters, the situation, and the environment. How does the play's action relate to the emotional qualities of the characters and, most importantly, to the audience?
  • The circumstances of the time and place of the action, and other important conditions.
  • The subject as an informational platform.
  • The characters of course, describing the forces driving them and their relationships with each other, the circumstances, and the environment.
  • The conflict. A play without conflict is boring. What are the obstacles that have to be overcome by the characters?
  • The meaning of the play. What is the point of view? What are the key thoughts for the play as a whole and for the characters in it?
  • The style of the dialogue, its composition and manner.
  • And finally a schedule for writing the play. When will it be done?

When we talk about writing software, there is a similar story line:

  • Who are the protagonists? - they're the end users of the software.
  • What is the setting? - there is a stated need for something new.
  • What is the imbalance? - something is not getting done and it needs to be improved.
  • How do we restore the balance? - by closing the gaps between the imbalance and the balance with a software solution.

The story line is the basis of Capabilities Based Planning. With the capabilities, the requirements can be elicited. From those requirements, decisions can be made about the order in which to deliver them to produce the best value for the business or the mission.

This process is about decision making. And decision making uses information about the future. This future information comes many times from estimates about the future. 

 

 

 

Categories: Project Management

Quote of the Day

Fri, 07/11/2014 - 23:44

Gentlemen, we have run out of money; now we have to think - Winston Churchill

The role of estimating in project and product development is manifold...

  • For products, the cost of development must be recouped over the life cycle of the product. Knowing the sunk cost of the product gives the business the decision-making information to see whether the target margin will be achieved, and on what day.
  • For projects, the cost of development is part of the ROI equation: ROI = (Value - Cost) / Cost.
  • For day-to-day business operations, cash flow is the actual cost of producing outcomes. Budget is not the same as cost. We may have defined a budget for our work, but a forecast of the cost of that work, gathered from current operations or past performance, lets us know whether we have sufficient budget.
  • For products, when marginal cost exceeds marginal profit, we're going to go out of business if we don't do something about controlling the cost. Our cost forecast and revenue forecast are the steering points that provide feedback for making choices.
  • For projects, the marginal cost and the marginal benefits obey the same rules of microeconomics.

In both cases the future cost and future monetized value are probabilistic numbers.

  • This project or product will cost $257,000 or less with an 80% confidence
  • This project or product will complete on or before May 2015 with a 75% confidence

With both these numbers and their Probability Distribution Function, decisions can be made about options - choices that can be made to influence the probability of project or product success.
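Here's a minimal sketch of where numbers like those can come from: a Monte Carlo simulation over assumed triangular distributions for cost and duration. The three-point inputs are hypothetical; in practice they come from past performance or reference classes.

```python
# Sketch: producing "cost <= X with 80% confidence" style numbers by
# Monte Carlo over triangular (min, most-likely, max) distributions.
# The three-point inputs below are hypothetical.
import random

random.seed(1)
N = 100_000
costs = [random.triangular(180_000, 320_000, 240_000) for _ in range(N)]
months = [random.triangular(8, 16, 11) for _ in range(N)]

def percentile(samples, p):
    return sorted(samples)[int(p * len(samples))]

print(f"Cost     <= ${percentile(costs, 0.80):,.0f} with 80% confidence")
print(f"Duration <= {percentile(months, 0.75):.1f} months with 75% confidence")
```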

Without this information, the microeconomics of writing software for money is not possible and the foundation of business processes abandoned.

In order to make these estimates of cost, schedule, and the technical performance of the project or product, some model is needed, along with the underlying uncertainty of the elements of that model. These uncertainties come in two forms:

  • Reducible (epistemic uncertainty) - money can be spent to reduce this uncertainty: testing, prototypes, incremental development.
  • Irreducible (aleatory uncertainty) - this is the normal variance in the process or technical components. The Deming uncertainty. The only protection against this uncertainty is margin: cost margin, schedule margin, and technical margin. The cost margin is then part of the total project or product budget, and the schedule margin part of the total period of performance for the project or the planned release date for the product. (A sketch of sizing such a margin follows.)
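One common convention for sizing that schedule margin - assumed here for illustration, not the only approach - is the gap between the 80th and 50th percentiles of the simulated duration:

```python
# Sketch: schedule margin for irreducible (aleatory) variance, taken here
# as P80 minus P50 of a simulated duration. The distribution is hypothetical.
import random

random.seed(2)
months = sorted(random.triangular(8, 16, 11) for _ in range(100_000))
p50, p80 = months[50_000], months[80_000]
print(f"Commit to P50 = {p50:.1f} months and carry "
      f"{p80 - p50:.1f} months of schedule margin")
```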

To suggest decisions can be made without knowing this future information violates the principles of the microeconomics of business.

Categories: Project Management

It's Not Bottoms Up or Top Down, It's Capabilities Based Delivery

Tue, 07/08/2014 - 21:36

There is a popular notion that agile is bottoms up and traditional is top down. Neither is actually effective in delivering value to the customer based on the needed capabilities, time-phased to match the business or mission need.

The traditional - read PMI, and an over-generalization - project life cycle is requirements elicitation based. Go gather the requirements, arrange them in some order that makes sense, and start implementing them. The agile approach (this is another over-generalization) is to let the requirements emerge and implement them in the priority the customer states - or discovers.

Both these approaches have serious problems, as evidenced by the statistics of software development:

  • Traditional approaches take too long
  • Agile approaches ignore the underlying architectural impacts of mashing up the requirements
Categories: Project Management

Critical Thinking Skills Needed for Any Change To Be Effective

Tue, 07/08/2014 - 15:40

Why is it hard to think beyond our short term vision? Rapid delivery of incremental value is common sense; no one would object to that - within the ability of the business to absorb this value, of course. This is called the Business Rhythm.

But that rapid delivery of incremental value is only a means to an end. The end is a set of capabilities that allows the business to accomplish its Mission - to do something as a whole with those incremental features. That is, to turn the features into a capability.

Think about a voice over IP system whose feature set was incrementally delivered to 5,000 users at a nationwide firm. This week we can call people and receive calls from people, but we don't have the Hold feature yet. Are you really interested in taking that product and putting it to use?

How about an insurance enrollment system, where you can sign up, provide your financial and health background, choose between policies, but can't see which doctors in your town take the insurance, because the Provider Network piece isn't complete yet.

These are not notional examples; they're real projects I work on. For these types of projects - most projects in the enterprise IT world - an All In feature set is needed. Not the Minimum Viable Product (MVP), but the set of Required Capabilities to meet the business case goals of providing a service or product to customers. No half-baked release with missing market features.

You might say that incremental release of features could be a market strategy, but looking at actual products or integrated services, it seems there is little room for partial capabilities in anything, let alone Enterprise class products. Either the target market gets the set of needed capabilities to capture market share or provide the business service, or it doesn't and someone else does.

An internal system may have different behaviours, I can't say since I don't work in that domain. But we've heard loud and strident voices telling us deliver fast and deliver often when there is no consideration for the Business Rhythm of the market or user community for those incremental - which is a code word for partially working - capabilities.

Of course the big bang design-code-test paradigm was nonsense to start with. That's not what I'm talking about here. I'm talking about the lack of critical assessment of what the value flow of the business is, and only then applying a specific set of processes to deliver that value. Outcome first, then method.

So Now The Hard Part

The conversation around software delivery seems to be dominated by those writing software, rather than by those paying for the software to be written. Where are the critical thinking skills to ask those hard-nosed business questions:

  • When will you be done with all the features I need to implement my business strategy?
  • How much will it cost for all those features that provide the capabilities that fulfill my business plan?

Questions like that have been replaced with platitudes and simple, many times simple-minded, phrases:

  • Deliver early and often - without consideration of the business needs
  • Unit testing is a waste - because those tests, like the internal documentation that provides a long-term maintainability platform, aren't what the customer bought
  • We can decide about all kinds of things in the software business without having to estimate anything - a complete violation of the principles of microeconomics, which require we know the impact of our choices in some unit of measure meaningful to the decision maker. You know, something like Money.
Categories: Project Management

Quote of the Day

Mon, 07/07/2014 - 07:02

Movement without direction will create a hole in the ground - Sophia Bedford-Pierce

Categories: Project Management

Quote of the Day

Fri, 07/04/2014 - 16:23

It is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits, and not to seek exactness where only an approximation is possible - Aristotle (384 B.C. - 322 B.C.)

When we hear someone say "estimates are guesses," or "when we estimate we act as if we believe the plan will not change," or similar uninformed nonsense, think of Aristotle. Such statements lack the understanding - from education, experience, and skill - that all project variables are random variables, varying both naturally and from external events.

As such, in order to determine the future impacts of decisions that involve cost, schedule, and performance, we need to estimate the impact of those random processes on the outcome of our decision.

This is the basis of all decision making in the presence of uncertainty. It's been claimed decisions can be made without estimates, but until someone comes up with the way to make decisions without estimating those impacts, statistical estimating is the way.

Categories: Project Management

The Value of Information †

Thu, 07/03/2014 - 15:18

Since all variables on all projects are random - cost, schedule, and delivered capabilities - in the economics of projects, the chance of being wrong times the cost of being wrong is the expected opportunity cost. When we write software for money, we are participating in the microeconomic process of decision making based on information about the future:

  • What value will be returned in exchange for the cost to produce that value? This value can be tangible - revenue from the sales of our product, revenue from the efforts produced through our contracting firm for our client, cost savings to the internal IT operation, increased sales from our ability to be faster and better than our competition - or intangible value in some broader sense, a public good for example.
  • What is the cost to produce that value? This cost is almost always tangible: money spent over some period of time.

Information is needed to assess both the cost and the value in order to DECIDE what to do. The formula for the value of this information can be mathematical as well as intuitive.

We make better decisions when we can reduce uncertainty about those decisions. Knowing the value of the information used to make those decisions is part of the microeconomics of writing software for money. 

If we are uncertain about a business decision, or a decision for the business based on technology, that means we have a chance of making a wrong decision. By wrong, we mean the consequences of the alternatives cannot be assessed, and one choice that might have been preferable was not chosen. The cost of being wrong is the difference between the wrong choice and the best alternative.
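As a sketch of that last sentence (the probabilities and payoffs below are hypothetical, and expected value is used as the scoring rule):

```python
# Sketch: the cost of being wrong as the expected-value gap between a
# choice and the best alternative. Probabilities/payoffs are hypothetical.
options = {
    "build": {"succeeds": (0.6, 500_000), "fails": (0.4, -300_000)},
    "buy":   {"succeeds": (0.9, 200_000), "fails": (0.1, -50_000)},
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes.values())

best = max(options, key=lambda name: expected_value(options[name]))
for name, outcomes in options.items():
    loss = expected_value(options[best]) - expected_value(outcomes)
    print(f"{name}: EV ${expected_value(outcomes):,.0f}, "
          f"cost of choosing it over '{best}': ${loss:,.0f}")
```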

In order to make an informed decision we need information - as mentioned above. This information itself has uncertainty, and therefore most times we need to estimate the actual numbers from the source of the information:

  • How much will it cost? Good question. We can't really tell unless we have a firm fixed price quote from a vendor. Cost is not the same as budget. We might be able to fix the budget, but cost is always variable in practice. We can stop when cost reaches budget, but then there is a chance we're not done - we have an incomplete deliverable, missing needed features.
  • What value will be produced? Good question. Can't really tell unless we have revenue from the same product or similar products, or have price quotes from our competitors for the same product we are trying to sell.
  • How long will it take to produce a value-producing outcome? Good question. We can't really tell, since we haven't done this before.

These questions and their answers are critical to the successful operation of any business, whose fundamental principle of operations is to turn expense into revenue. Since the variables involved in our projects are actually random variables, we'll need to estimate the answers, leaving the bigger question unanswered to date...

Can we make decisions without estimating the future impact on cost, schedule, and performance of that decision?

Gathering information in support of decision making is decision risk reduction. The desire to reduce risk is good business practice. The decision maker needs information about the behaviour of the random variables involved in the decision making process. These must be estimated before the fact to make a decision about the future.

To develop the needed estimates we need a Basis of Estimate process, which means building the estimates from Reference Classes, parametric models, or similar cardinal-based processes that have been calibrated in some way. Ordinal (relative) estimates are not credible. This removes the ill-conceived notion that estimates are guesses.
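A parametric model, for instance, has the one-line cardinal form below. The sketch uses the constants from Boehm's basic COCOMO, organic mode - a dated calibration, shown only to illustrate the shape; real use means recalibrating against your own past performance.

```python
# Sketch: a calibrated parametric estimate - basic COCOMO, organic mode.
# Constants a=2.4, b=1.05 are Boehm's published calibration; recalibrate
# against your own history before relying on the output.
def cocomo_effort(ksloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Effort in person-months for size in thousands of source lines."""
    return a * ksloc ** b

print(f"{cocomo_effort(32):.0f} person-months for a 32 KSLOC system")
```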

† Extracted from How To Measure Anything, Douglas W. Hubbard.

Categories: Project Management

Economics of Iterative Software Development

Tue, 07/01/2014 - 15:41

Software development is microeconomics. Microeconomics is about making decisions - choices - based on knowing something about the cost, schedule, and technical impacts of those decisions. In the microeconomics paradigm, this information is part of the normal business process.

This is why the conjecture that you can make decisions in the absence of estimates of the impacts of those decisions ignores the principles of business. The same goes for the notion that when numbers are flying around in an organization, they lay the seeds for dysfunction: we need to stop and think about how business actually works. Money is used to produce value, which is then exchanged for more money. No business will survive for long in the absence of knowing the numbers contained in the balance sheet and general ledger.

This book should be mandatory reading for anyone thinking about making improvements in what they see as dysfunctions in their work environment. No need to run off and start inventing new untested ideas; they're right here for the using. With this knowledge comes the understanding of why estimates are needed to make decisions. In the microeconomics paradigm, making a choice is about opportunity cost. What will it cost me to NOT do something? The set of choices that can be acted on given their economic behaviour. Value produced from the invested cost. Opportunities created from the cost of development. And other trade space discussions.

To make those decisions with any level of confidence, information is needed. This information is almost always about the future - return on investment, opportunity, risk reduction strategies. That information is almost always probabilistically driven by an underlying statistical process. This is the core motivation for learning to estimate - to make decisions about the future that are most advantageous for the invested cost.

That's the purpose of estimates, to support business decisions.

This decision making processes is the basis of Governance which is the structure, oversight, and management process that ensures delivery of the needed benefits of IT in a controlled way to enhance the long term sustainable success of the enterprise.

Categories: Project Management

Everything's a Remix

Mon, 06/30/2014 - 23:12

In the estimating discussion there is a popular notion that we can't possibly estimate something we haven't done before. So we have to explore - using the customer's money, by the way - to discover what we don't know.

So when we hear "we've never done this before and estimating is a waste of time," think about the title of this post.

Everything's a Remix

Other than inventing new physics, all software development has been done in some form or another before. The only truly original thing in the universe is the Big Bang. Everything else is derived from something that came before.

Now, we may not know about this thing in the past, but that's a different story. It was done before in some form; we just didn't realize it. There are endless examples of copying ideas from the past while thinking they are innovative, new, and breakthrough. The iPad and all laptops came from Alan Kay's 1972 paper, "A Personal Computer for Children of All Ages." Even how the touch screen on the iPhone works was done before Apple announced it as the biggest breakthrough in the history of computing.

In our formal defense acquisition paradigm there are many programs that are research. This follows the flow below. Making estimates about the effort and duration is difficult, so blocks of money are provided to find out. But these are not product production or systems development processes. Systems Design and Development (SDD) sits between MS-B and MS-C. We don't confuse exploring with developing. Want to explore? Work on a DARPA program. Want to develop? Work post-MS-B and know something about what came before.

5000.02

The pre-Milestone A work is to identify what capabilities will be needed in the final product. The DARPA programs I work on are even further to the left of Milestone A.

On the other end of the spectrum from this formal process, a collection of sticky notes on the wall could have similar flow of maturity. But the principles are still the same.

So How To Estimate in the Presence of We've Never Done This Before

  • The first thing to do is go find someone who has. Hire them, buy them dinner, pick their brain.
  • The next would be to find an example of what you want to build and take it apart. This is what every product designer does. In the fashion business they just copy. In the software business they copy and make it better.
  • Long ago I had an idea, along with several others, of writing a book of reusable code in our domain - algorithms that could be reused. The IBM FORTRAN Scientific Subroutine Library was our model. The remix of those code elements is now built into hardware chips for doing what we did - processing signals from radar systems. The Collected Algorithms of the ACM is a similar idea.

Here's a critical concept - we can't introduce anything new until we're fluent in the language of our domain, and we do that through emulation.† This means for us to move forward we have to have done something like this in the past. So if we haven't done something like this in the past, don't know anyone who has, or can't find an example of it being done, we will have little success being innovative. As well, we will hopelessly fail in trying to estimate the cost, schedule, and probability of delivering capabilities. In other words we'll fail and blame it on the estimating process and assume that we'll be successful if we stop estimating.

So stop thinking "we can't know what we don't know" and start thinking someone has done this before; we just need to find that someone, somewhere, something. Nobody starts out being original; we need copying to get started. Once copied, transformation is the next step. With the copy we can estimate size and effort. We can now transform it into something that is better, and since we now know about the thing we copied, we have a reference class. Yes, that famous Reference Class Forecasting used by all mature estimating shops. With the copy and its transformed item, we can then combine ideas into something new. The Alto from Xerox, and then the Xerox Star for executives, was the basis of the Lisa and the Mac.
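Here's a minimal sketch of what that reference class buys you: scale a raw estimate by the distribution of actual-to-estimate ratios observed on the similar work you found. The ratios below are hypothetical stand-ins for real reference class data.

```python
# Sketch: reference class forecasting - adjust a raw estimate by the
# actual/estimate ratios of similar past work. Ratios are hypothetical.
import statistics

overrun_ratios = [1.1, 1.4, 0.9, 1.3, 1.2, 1.6, 1.0]   # actual / estimated
raw_estimate_weeks = 20

adjusted = sorted(raw_estimate_weeks * r for r in overrun_ratios)
p50 = statistics.median(adjusted)
p80 = adjusted[int(0.8 * len(adjusted))]
print(f"Raw: {raw_estimate_weeks} wks, P50: {p50:.0f} wks, P80: {p80:.0f} wks")
```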

The End

You can estimate almost anything, and every software system, if you do some homework and suspend the belief that it can't be done. Why? Because it's not your money, and those providing the money have an interest in several things about their money: what will it cost, when will you be done, and, using the revenue side of the balance sheet, when will they break even on the exchange of money for value? This is the principle of every for-profit business on the planet. The not-for-profits have to pay the electric bill as well, as do the non-profits. So everyone, everywhere needs to know the cost of the value they asked us to produce BEFORE we've spent all their money and run out of time to reach the target market for that pesky break-even equation.

Anyone who tells you otherwise is not in the business of business, but just on the expense side - and that means not on the decision-making side either, just labor doing what they're told to do. Which is a noble profession, but unlikely to influence how decisions are made.

The notion of decision rights is the basis of governance. When you hear about doing or not doing something in the absence of who needs this information, ask: who needs this information, and is it your decision right to decide whether to fulfill the request for that information? As my colleague, the retired NASA Cost Director, says: follow the money; that's where you find the decider.

† Everything is a Remix, Part 3, Kirby Ferguson.


 

Categories: Project Management

Business Rhythm Drives Process

Mon, 06/30/2014 - 16:05

The agile notion of delivering early, delivering often is a wonderful platitude, but it ignores the underlying business rhythm for accepting software features into productive use, set by the dynamics of the business or market channel. Here are some examples of business rhythms I've worked:

  • PayPal - release continually as features arrive from development. I taught a class on software development management at Carnegie Mellon with the lead for their development process.
  • Medicaid enrollment - release minimally once a week in support of the enrollment agencies - states, counties, and health providers. Provide emergency releases when rules change without notice.
  • Oracle DB updates used to occur on a weekly basis. Our joke, when asked what version we had in production: look at your watch, tell me the time, and where is the second hand? Oracle figured that was a bad idea and went to announced release dates with a few weeks' notice.
  • Health Insurance Provider Network - release with the capabilities to move to the next business process, as shown in the figure below. This approach defines the needed business capabilities and their order of delivery, and the planning process defines when they will be available. This provides the basis for putting them to work, since more than the software is needed for the business benefits to be accrued: training, integration, data migration, promotion and advertising, and general Go Live activities.

Capabilities Flow

  • In the example above, the planning process for the needed deliverables, in the proper order, was worked out through a Capabilities Based Planning process.
  • Release updates to major defense systems on a planned schedule - with coordination of dozens of sites around the world and dozens of vehicles on orbit. Full verification and validation regression testing of tens of millions of lines of code, some legacy code going back to the late 1970s, which I actually worked on in FORTRAN 77, running on VAX 11/780s and Cyber 74 mainframes under a real-time standalone operating system.

Capabilities based planning (v2) from Glen Alleman
  • Next is a larger release process. The flight avionics for the Orion spacecraft is released periodically into a systems integration and test environment, coordinated with other software and hardware elements of the spacecraft in the Crew Exploration Vehicle Avionics Integration Laboratory (CAIL). This software is developed incrementally, with capabilities coming in chunks. But dropping this software on the CAIL requires coordination with other elements of the spacecraft at the pace those become available.
  • An SAP rollout has similar external dependencies for the business rhythm - a plan to roll out SAP to 53 sites worldwide for complete integration across a $37B industrial market. On the site where I worked, the go-live Monday was a non-event.

What's the Point of All This?

When we hear deploy fast, deploy often, maybe once a day, test that platitude against your business rhythm first to see if it matches.
Categories: Project Management

Why is Statistical Thinking Hard?

Sat, 06/28/2014 - 21:26

I'll admit up front I'm hugely biased toward statistical thinking. As one trained in physics and the mathematics that goes with it, and in Systems Engineering and the math that goes with that, thinking statistically is what we do in our firm. We work programs with cost and schedule development, do triage on programs for cost and schedule, guide the development of technology solutions using probabilistic models, assess risk to cost, schedule, and technical performance using probability and statistics, and build business cases, performance models, Estimates To Complete, Estimates At Completion, the probability of program success, the probability of a proposal win, and the probability that the Go-Live date will occur on or before the need date, at or below the planned cost, with the mandatory needed capabilities there as well.

We use probability and statistics not because we want to, but because we have to. Many intelligent, trained, and educated people in our domain - software intensive systems and the management of projects - find themselves frozen in fear when confronted by any mathematical problem beyond basic arithmetic, especially in the software development domain. The algorithm writers on the flight control systems we work with are not software developers in the common sense, but control system engineers who implement their algorithms in Handel-C - so they don't count, at least not in this sense.

We have to deal with probability and statistics for a simple reason - every variable on a project is a random variable. Only accountants deal with point numbers. The balance in your checking account is not subject to a statistical estimate. The price of General Electric stock in your 401(k) is a random variable. All the world is a non-stationary stochastic process, and many times a non-linear, non-stationary stochastic process.

Stochastic processes are everywhere. They are time series subject to random fluctuations: your heartbeat, the stock market, the productivity of your software team, the stability of technical requirements, the performance of the database server, the number of defects in the code you write.
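To make that concrete, here's a minimal sketch - my illustration, with made-up numbers, not data from any program - of a non-stationary stochastic process: weekly team throughput whose underlying mean drifts over time while random noise rides on top.

    # A minimal sketch of a non-stationary stochastic process:
    # weekly throughput with a drifting mean plus random fluctuation.
    # All numbers here are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(42)
    weeks = 26
    trend = np.linspace(20, 26, weeks)         # slowly drifting mean throughput
    noise = rng.normal(0, 3, weeks)            # week-to-week random fluctuation
    throughput = np.maximum(trend + noise, 0)  # story points completed per week
    print(throughput.round(1))

No single week in a series like this tells you the underlying trend - which is exactly why point numbers mislead.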

In our software development domain there is an overwhelming need to predict the future - not for the reason you may think, and in spite of a movement underway to Not Estimate the future. It turns out this is a need, not merely a desire. The need to predict - to have some sense of what is going to happen - is based on a few very simple principles of microeconomics:

  • It's not our money. If it were we could do with it as we please.
  • Those providing the money have a finite amount of money. They also have a finite amount of time in which to exchange that money for value produced by us, the development team.
  • If there were an unlimited amount of money and time, we wouldn't have to talk about things like estimates of when we'll be done, how much it will cost, or the probability that the produced outcomes will meet the needs of the users.

Our natural tendency is to focus on observation - empirical data - rather than the statistical data that drives the probabilistic aspects of our work.

Probability and Statistics

This approach - statistical processes with probabilistic outcomes - requires that we know something about our underlying processes: our capacity for work, the generated defect rate, the defect fix rate. Without that knowledge the probabilistic answers aren't forthcoming, and if they are forced out in the Dilbert style of management, they'll be bogus at best and downright lies at worst.
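Here's one way - a sketch of an approach, assuming you have even a short history of weekly throughput (the numbers below are hypothetical) - to turn knowledge of the underlying process into a probabilistic answer, using bootstrap resampling:

    # A minimal sketch: answer "when will we be done?" with a confidence
    # level by resampling observed weekly throughput. The observed values
    # and backlog size are hypothetical.
    import numpy as np

    rng = np.random.default_rng(7)
    observed = [18, 22, 15, 25, 20, 17, 23, 19]  # past weekly throughput (hypothetical)
    backlog = 240                                # remaining units of work (hypothetical)

    def weeks_to_finish():
        done, weeks = 0, 0
        while done < backlog:
            done += rng.choice(observed)         # resample a past week at random
            weeks += 1
        return weeks

    trials = np.array([weeks_to_finish() for _ in range(10_000)])
    print("80% confidence of finishing in", int(np.percentile(trials, 80)), "weeks or fewer")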

Let's stop here for some critically important points:

  • If we don't have some sense of the underlying processes driving our project, we're in much bigger trouble than we think - we don't know what done looks like in units of measure meaningful to the decision makers:
    • When will we be done? Approximately? - I don't know - then we're late before we start.
    • How much will this cost? Approximately? - I don't know - then we're over budget before we start.
    • What's the probability we'll deliver all the needed features - minimal or mandatory - for a cost and schedule goal? - I don't know - then this project is going to be a Death March before we run out of time and money.
  • If we don't know our capacity for work - which should be developed from empirical data - we can't make credible duration and cost estimates.
    • Knowing this once the project is underway is fine, but by then it's likely too late to make the business decisions needed to start the project.
    • The naive assumption that all the work can be broken down into same-sized chunks has no broad evidence behind it, and is likely to be highly domain dependent.

So let's look at the core problem of estimating

As humans we are poor at estimating. Fine - does that mean we should not estimate? Hardly. We need to become aware of our built-in biases and deal with them. The need for estimating in business is not going away; it is at the heart of business itself and core to all decision making.

Here's an example from Daniel Kahneman's Thinking, Fast and Slow:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which of these is more probable?

1. Linda is a bank clerk.
2. Linda is a bank clerk and an active member of a feminist movement.

In Kahneman's studies, 90% of all respondents selected the second option. Why? Because the description of Linda and the word feminist intuitively suggest compatibility - being outspoken and an activist are not usually associated with the job of a bank clerk. Thinking quickly, bank clerk seems the more probable option if she is at least also a feminist. But the second option is wrong, because a conjunction can never be more probable than either of its parts. The more conditions added, the smaller the group that satisfies them - there are many bank clerks who are not active feminists.
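The arithmetic behind that statement takes three lines. The probabilities below are assumptions for illustration only - the point is that the conjunction can never exceed either of its parts:

    # Conjunction rule check - illustrative numbers only.
    p_clerk = 0.02                 # assumed P(Linda is a bank clerk)
    p_feminist_given_clerk = 0.95  # even if feminism is near-certain given the description
    p_both = p_clerk * p_feminist_given_clerk
    print(p_both, "<=", p_clerk)   # 0.019 <= 0.02 - always true

However compatible the description feels, multiplying by a conditional probability can only shrink the number.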

So Now What, We've Confirmed We're Bad At Making Estimates

How do we make it better? First, come to realize that good - meaning credible - estimates are part of good business. Knowing the cost of the value delivered is at the core of all business success. Second, look for the root causes of poor estimating outcomes. These come in many forms. But the Dilbert excuse is not an excuse - it's a cartoon of bad management. So let's drop that right away. If you work for a Dilbert boss, or have a Dilbert boss for a customer, good estimating won't do much for you, so let's dispense with the charade of trying, too.

So what are some root causes of poor estimates:

  1. Poor understanding of what done looks like - what capabilities do we need and when do we need them. Without this understanding a list of requirements has no home, and the project becomes an endless series of emerging work efforts in an attempt to discover the needed capabilities.
  2. Inability to create a model of past performance and its statistical behavior:
    • All project variables are random variables.
    • Discover these variables and use them to model the delivery of capabilities.
  3. Lack of experience and knowledge in the basic processes of software estimating
  4. The mistaken belief that the future cannot be discovered before it arrives 

 

Related articles Statistical Process Control The Basis of All Modern Control Systems Four Critical Elements of Project Success All Project Numbers are Random Numbers - Act Accordingly Everything is a Random Variable Critical Thinking Insight Probability Theory - A Primer How to "Lie" with Statistics How To Fix Martin Fowler's Estimating Problem in 3 Easy Steps
Categories: Project Management

Don't be an Accidental Project Manager

Fri, 06/27/2014 - 18:44

A common problem in our development of the Program Management Office is getting so caught up in putting out fires - Covey's "addiction of the urgent" - that we lose the big-picture perspective. This note is about the big-picture view of the project management process as it pertains to our collection of projects. These are very rudimentary principles, but they are important to keep in mind.

5 Basic Principles

1. Be conscious of what you're doing; don't be an accidental manager. Learn PM theory and practice. Realize you often don't have direct control. Focus on being a professional and on the PM's mantra:

"I am a project professional. I work on projects. Projects are undertakings that are goal-oriented, complex, finite, and unique. They pass through a life cycle, which begins with project selection and ends with project termination."

2. Invest in front-end work; get it right the first time. We often leap before we look, due to an over-focus on results-oriented processes and simple - many times simple-minded - platitudes about project management and the technical processes, and we ignore basic steps. Trailblazers often achieve breakthroughs, but projects need forethought. Projects are complex, and the planning, structure, and time spent with stakeholders are required for success. Doing things right takes time and effort, but this time and effort is much cheaper than rework.

3. Anticipate the problems that will inevitably arise. Most problems are predictable. Well-known examples are:

  • Little direct control over staff, little staff commitment to the project.
  • Staff workers are not precisely what we want or need.
  • Functional managers have different goals, and these will suboptimize the project.
  • Variances to schedule and budget will occur, and customer needs will shift.
  • Project requirements will be misinterpreted.
  • Overplanning and overcontrol are as bad as underplanning and weak control.
  • There are hidden agendas, and these are probably more important than the stated one.

4. Go beneath surface illusions; dig deep to find the real situation. Don't accept things at face value. Don't treat the symptom; treat the root cause, and the symptoms will be corrected. Our customers usually understand their own needs, but further probing will bring out new needs. Robert Block suggests a series of steps: [1]

  • Identify all the players, in particular those who can impact project outcome.
  • Determine the goals of each player and organization, focusing on hidden goals.
  • Assess your own situation and start to define the problems.

5. Be as flexible as possible; don't get sucked into unnecessary rigidity and formality. Project management is the reverse of the second law of thermodynamics: we're trying to create order out of chaos. But in this effort:

  • More formal structure & bureaucracy doesn't necessarily reduce chaos.
  • We need flexibility to bend but not break when dealing with surprises, especially with the intangibles of our information-technology projects.
  • The goal is to have both order and flexibility at the same time.
  • Heavy formality is appropriate on large-budget or low-risk projects with high communication expense and few surprises. Information-age projects have a low need for this because they deal more with information and intangibles, and carry a high degree of uncertainty.

[1] The Politics of Projects, Robert Block, Yourdon Press, 1983.

Related articles Elements of Project Success How to Deal With Complexity In Software Projects?
Categories: Project Management

Operationalism

Fri, 06/27/2014 - 16:11

When we hear about a process, a technique, or a tool, ask: in what units of measure are you assessing the beneficial outcome of applying it?

This idea started with P. W. Bridgman's principle that the meaning of any concept is in its measurement or other test. Put forth in the 1930s, Bridgman's famous, useful, and very operational statement is usually remembered as:

The scientific method is doing your damnedest, no holds barred. †

Developing software is not a scientific process, even though Computer Science is a university-level discipline where probability and statistics are taught (see the IEEE/ACM Computer Science Education Curricula).

When we want to make choices about a future outcome, we can apply statistical thinking using the same mathematics used in scientific discussions - cost, schedule, and performance (C, S, P - all random variables).

These decisions are based on the probabilistic and statistical behavior of the underlying processes that create the alternatives for our decisions. Should we spend $X on a system that will return $Y value? Since both X and Y are random variables - they are in the future - our decision making process needs to estimate the behavior of these random variables and determine their impact on our outcomes.
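As a sketch of what that decision process can look like - with assumed distributions that stand in for whatever models your domain supports - we can simulate X and Y and ask directly for the probability the investment pays off:

    # A minimal sketch: estimate P(value > cost) by simulation.
    # The cost and value distributions are assumptions for illustration.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 100_000
    cost = rng.triangular(0.8e6, 1.0e6, 1.5e6, n)         # assumed cost model ($)
    value = rng.lognormal(mean=14.0, sigma=0.25, size=n)  # assumed value model ($)
    print("P(value > cost) =", round(float((value > cost).mean()), 3))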

Probability and Statistics

When we hear there are alternatives for making decisions about the future - decisions impacted by cost, schedule, and technical performance - without estimating the impact of those decisions, we need to ask: what are those alternatives, what are their units of measure, and where can we find them described?

For those interested in further reading on the topic of Decision Making in the Presence of Uncertainty

† Reflections of a Physicist, P. W. Bridgman, pp. 535. The passage reads, "The scientific method, as far as it is a method, is nothing more than doing one's damnedest with one's mind, no holds barred."

Categories: Project Management

All Project Numbers are Random Numbers - Act Accordingly

Wed, 06/25/2014 - 15:11

The numbers that appear in projects - cost, schedule, performance - are all random variables drawn from an underlying statistical process, officially called a non-stationary stochastic process. Such a process has several important behaviors that create problems for those trying to make decisions without understanding how these processes work in practice.

The first issue is that all point estimates for projects are wrong, in the absence of a confidence interval and an error band on that confidence.

How long will this project take? is a common question asked by those paying for the project. The technically correct answer is: there is an 80% confidence of completing on or before some date, with a 10% error on that confidence. This is a cumulative probability - it collects all the possible completion dates and states the probability, the 80%, of finishing on or before the date, since the project can complete before that final probabilistic date as well.

Same conversation for cost. The cost of the project will be at or below some amount with an 80% confidence.
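Both statements are just percentiles of a cumulative distribution. Here's a minimal sketch - the task count and durations are invented for illustration - showing where that 80% "on or before" number comes from:

    # A minimal sketch: the 80% confidence date is the 80th percentile
    # of simulated completion durations. Task durations are invented.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 50_000
    # 12 serial tasks, each triangular(min=4, most likely=6, max=12) days
    durations = sum(rng.triangular(4, 6, 12, n) for _ in range(12))
    p80 = np.percentile(durations, 80)
    print(f"80% confidence of finishing on or before day {p80:.0f}")
    print("cumulative P(done on or before that day) =", (durations <= p80).mean())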

The performance of the product or service is the third random variable. Technical performance means anything and everything that is not cost or schedule - the wrapper term for the old concept of scope. In modern terms there are two general-purpose categories of performance measures, plus one set of key parameters:

  • Measures of Effectiveness - the operational measures of success closely related to the achievement of the mission or operational objectives, evaluated in the operational environment under a specific set of conditions. The Measures of Effectiveness:
    • Are stated in units meaningful to the buyer,
    • Focus on capabilities independent of any technical implementation,
    • Are connected to mission success.
  • Measures of Performance - the measures that characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions. The Measures of Performance are:
    • Attributes that assure the system has the capability and capacity to perform,
    • Assessments of the system to assure it meets the design requirements that satisfy the MoE.
  • Key Performance Parameters - the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program. Key Performance Parameters:
    • Have a threshold or objective value,
    • Characterize the major drivers of performance,
    • Are considered Critical to Customer (CTC).

These measures are all random numbers with confidence intervals and error bands.
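Reporting a measure that way can be as simple as attaching a confidence interval to it. Here's a sketch, with hypothetical measurements, of reporting a Measure of Performance as a random variable rather than a point number:

    # A minimal sketch: report a performance measure with an 80%
    # confidence interval via bootstrap. The measurements are hypothetical.
    import numpy as np

    rng = np.random.default_rng(5)
    samples = rng.normal(250, 15, 40)  # e.g., measured response times (ms)
    boot = [rng.choice(samples, samples.size).mean() for _ in range(5_000)]
    lo, hi = np.percentile(boot, [10, 90])
    print(f"mean = {samples.mean():.1f} ms, 80% CI = [{lo:.1f}, {hi:.1f}] ms")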

So What's The Point?

When we hear you can't forecast the future, that's not true. The person saying that didn't pay attention in the high school statistics class. You can forecast the future. You can make estimates of anything. The answers you get may not be useful, but they're estimates all the same. If it is unclear how to do this, here's a reading assignment of the books we use nearly every month to make our estimates at completion and estimates to complete for software intensive projects, starting with the simplest:

While on the topic of books, here are some books that should be on your shelf that put those probability and statistics to work.

  • Facts and Fallacies of Software Engineering, Robert Glass - speaks to the common fallacies in software development. The most common is we can't possibly estimate when we'll be done or how much it will cost. Read the book and start calling BS on anyone using that excuse to not do their homework. And there's a nice update by Jeff Atwood, co-founder of Stack Overflow.
  • Estimating Software-Intensive Systems, Richard Stutzke - this is the book that started the revolution of statistical modeling of software projects. When you hear oh this is so olde school, that person didn't take the HS stats class either.
  • Software Engineering Economics, Barry Boehm - is how to pull all this together. And when you hear this concept is olde school, you'll know better as well.

There are several tools that make use of these principles and practices:

Here's the End

  • Learn to estimate.
  • Teach others to estimate.
  • When the Dilbert boss comes around, you'll have the tools to have a credible discussion about why the Estimate to Complete number he's looking for is bogus. He may not listen or even understand, but you will.

And that's a start in fixing the dysfunction of bad estimating when writing software for money. Start with the person who can actually make a change - You.

Related articles Averages Without Variances are Meaningless - Or Worse Misleading Elements of Project Success Can There Be Successful Projects With Fixed Dates, Fixed Budgets, and Promised Minimal Features? Four Critical Elements of Project Success To Stay In Business You Need to Know Both Value and Cost How to Forecast the Future Making Estimates For Your Project Require Discipline, Skill, and Experience How To Assure Your Project Will Fail Random Sample Calculations And My Prediction That 300,000 Lawyers Will Be Using Random Sampling By 2022 The Uncertainty of Predictions

 

Categories: Project Management