Software Development Blogs: Programming, Software Testing, Agile Project Management

Project Management

How to Estimate Software Development - Update

Herding Cats - Glen Alleman - Thu, 10/09/2014 - 02:49

There is a popular quote used by many in the #NoEstimates community that is sadly misinformed.

Those who have knowledge, don't predict. Those who predict, don't have knowledge. - Lao Tseu

This of course was from a 6th century BC Chinese philosopher, who was not likely familiar with the notions of probability and statistics developed more than two thousand years later. The quoting and re-quoting of Lao Tseu as an example of why estimates can't be made brings to light one of the more troublesome aspects of our modern age.

The lack of understanding of basic probability and statistics when applied to human endeavors.

Or possibly the intentional ignorance of probability and statistics as it is applied to the development of software systems. I can't really say if it is for lack of understanding, lack of exposure, or just a simple intent to ignore. 

But for any of those reasons and more, here's a starting point on how to actually become a member of the modern statistical estimating community, once it is decided that is better than ignoring the basic knowledge needed to be a steward of other people's money.

Here are some starting points, in no particular order other than that's how they came off the office book shelf.

These are just a small sample of the information readily available at your local book store or through the mail. If you google "software cost estimating" (all in quotes) there will be hundreds more articles, papers, and web sites. As well, tools for estimating software are used every single day in a variety of domains.

The Value at Risk is a starting point as well. Low value - defined by those providing the money, not by those doing the work - and low risk - usually defined by those doing the work, not by those providing the money - at least in the domains we work in. This Value at Risk sets the tone. Low value, low risk - and this is in absolutely no way an assessment of the relative value and risk - usually doesn't need much estimating.

Got a 6 week, 2 person database update project? Just do it. Got a 38 month, 400 person National Asset software project? That probably needs estimates. Everything and anything in between needs to ask and answer that value at risk question before deciding.

So poor Mr. Tzu was sadly misinformed when he wrote those words. As are those repeating them. In the 21st century:

Those who have knowledge of probability, statistics, and the processes described by them can predict their future behaviour. Those without this knowledge, skills, or experience cannot.
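For readers who want a concrete starting point, here is a minimal sketch of what probabilistic estimating looks like in practice: a Monte Carlo simulation in Python that turns three-point task estimates into confidence levels. The task numbers are invented for illustration only; real estimates should come from reference-class data for your domain.

    import random

    # Hypothetical three-point estimates (optimistic, most likely, pessimistic) in days.
    # These figures are illustrative only, not reference-class data.
    tasks = [
        (3, 5, 10),
        (2, 4, 8),
        (5, 8, 15),
        (1, 2, 4),
    ]

    def simulate_once():
        # Sample each task from a triangular distribution and sum the durations.
        return sum(random.triangular(low, high, mode) for low, mode, high in tasks)

    trials = sorted(simulate_once() for _ in range(10_000))

    # Report the durations we can quote at different confidence levels.
    for confidence in (0.50, 0.80, 0.95):
        index = int(confidence * len(trials)) - 1
        print(f"{int(confidence * 100)}% confident of finishing within {trials[index]:.1f} days")

The specific numbers don't matter; the form of the answer does - a probability of completing on or before a date, rather than a single point value.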

Related articles: How to Estimate Software Development
Categories: Project Management

Large Program? Release More Often

I’m working on the release planning chapter for Agile and Lean Program Management: Collaborating Across the Organization. There are many ways to plan releases. But the key? Release often. How often? I suggest once a month.

Yes, have a real, honest-to-goodness release once a month.

I bet that for some of you, this is counter-intuitive. “We have lots of teams. Lots of people. Our iterations are three weeks long. How can we release once a month?”

Okay, release every three weeks. I’m easy.

Look, the more people and teams on your program, the more feedback you need. The more chances you have for getting stuck, being in the death spiral of slowing inertia. What you want is to gain momentum.

Large programs magnify this problem.

If you want to succeed with a large agile program, you need to see progress, wherever it is. Hopefully, it’s all over the program. But, even if it’s not, you need to see it and get feedback. Waiting for feedback is deadly.

Here’s what you do:

  1. Shorten all iterations to two weeks or less. You then have a choice to release every two or four weeks.
  2. If you have three-week iterations, plan to release every three weeks.
  3. Make all features sufficiently small so that they fit into an iteration. This means you learn how to make your stories very small. Yes, you learn how. You learn what a feature set (also known as a theme) is. You learn to break down epics. You learn how to have multiple teams collaborate on one ranked backlog. Your teams start to swarm on features, so the teams complete one feature in one iteration or in flow.
  4. The teams integrate all the time. No staged integration.

Remember this picture, the potential for release frequency?

Potential for Release Frequency

That’s the release frequency outside your building.

I’m talking about your internal releasing right now. You want to release all the time inside your building. You need the feedback, to watch the product grow.

In agile, we’re fond of saying, “If it hurts, do it more often.” That might not be so helpful. Here’s a potential translation: “Your stuff is too big. Make it smaller.”

Make your release planning smaller. Make your stories smaller. Integrate smaller chunks at one time. Move one story across the board at one time. Make your batches smaller for everything.

When you make everything smaller (remember Short is Beautiful?), you can go bigger.

Categories: Project Management

Lean Change Management: A Truly Agile Change Management approach

Software Development Today - Vasco Duarte - Wed, 10/08/2014 - 04:00

"I've been working in this company for a long time, we've tried everything. We've tried involving the teams, we've tried training senior management, but nothing sticks! We say we want to be agile, but..."

Many people in organizations that try to adopt agile will have said this at some point. Not every company fails to adopt agile, but many do.

Why does this happen, what prevents us from successfully adopting agile practices?

Learning from our mistakes

Actually, this section should be called learning from our experiments. Why? Because every change in an organization is an experiment. It may work, it may not work - but for sure it will help you learn more about the organization you work for.

I learned this approach from reading Jason Little's Lean Change Management. Probably the most important book about Agile adoption to be published this year. I liked his approach to how change can be implemented in an organization.

He describes a framework for change that is cyclical (just like agile methods):

  • Generate or gain insights: in this step we - who are involved in the change - do small experiments (like for example asking questions) to generate insights into how the organization works, and what possible things we could use to help people embrace the next steps of change.
  • Define options: in this step we list the options we have: what experiments could we run that would help us move towards our vision for the change?
  • Select and run experiments: each option will, after being selected, be transformed into an experiment. Each experiment will have a set of actions, people to involve, expected outcomes, etc.
  • Review, learn and...: after the experiments are concluded (and sometimes right after starting those experiments) we gain even more insights that we can feed right back into what Jason calls the Lean Change Management Cycle.

The Mojito method of change

The overall cycle for Lean Change Management is then complemented with concrete practices that Jason used, and the book explains how to apply them. Jason uses the story of The Commission to describe how to apply the different practices he used. For example, in Chapter 8 he goes into detail on how he used the Change Canvas to create alignment in a major change for a large (and slow moving) organization.

Jason also reviews several change frameworks (Kotter's 8 steps, McKinsey's 7S, OCAI, ADKAR, etc.) and how he took the best out of each framework to help him walk through the Lean Change Management cycle.

The most important book about Agile adoption right now

After having worked on this book for almost a year together with Jason, I can say that I am very proud to be part of what I think is a critical knowledge area for any Agile Coach out there. Jason's book describes a very practical approach to changing any organization - which is what Agile adoption is all about.

For this reason I'd say that any Agile Coach out there should read the book and learn the practices and methods that Jason describes. The practices and ideas he describes will be key tools for anyone wanting to change their organization and adopt Agile in the process.

Here's where you can find more details about what the book includes.

What Is Quality?

Mike Cohn's Blog - Tue, 10/07/2014 - 15:00

Agile teams build high-quality products. Agile team members write high-quality code. Agile teams produce functionality quickly by not sacrificing quality.

Each of these is something I’ve said before. And if you haven’t said these exact things, you’ve likely said something similar.

Quality gets mentioned a lot in discussions about agile. And so, perhaps it’s worth clarifying my definition of quality. Of course, others have thought about quality more deeply than I’m capable of. And so, I won’t be providing a new definition of quality here. But I will explain how I think of quality.

One of the leading advocates for quality was Philip Crosby. In the 1970s he proclaimed that “quality is free” because doing something right the first time at a high level of quality was cheaper than fixing it later. Crosby defined quality as “conformance to requirements.”

I never really bought into Crosby's "conformance to requirements" approach (even before agile came around) because there was never a way to be confident requirements were accurate. Saying something like old Microsoft Bob was high quality because it complied with some ill-conceived requirements document never felt right to me.

Similarly, quality isn’t just being bug-free, as that suffers from the same problem.

Another approach to defining quality comes from Joseph Juran. He was one of a number of management theorists who worked in Japan in the 1950s. Juran defined quality as “fitness for use”:

"An essential requirement of these products is that they meet the needs of those members of society who will actually use them. This concept of fitness for use is universal. It applies to all goods and services, without exception. The popular term for fitness for use is Quality, and our basic definition becomes: quality means fitness for use."

This definition of quality really resonates with me. Quality is “fitness for use.” A high-quality product does what its customers want in such a way that they actually use the product. Something that conforms to ill-conceived requirements (such as Microsoft Bob) is not high quality. Something that is buggy isn’t high quality because it isn’t fit for use.

What do you think? Is quality best thought of as “conformance to requirements?” Or “fitness for use?” Or perhaps something else entirely? Please share your thoughts in the comments below.

Quote of the Day

Herding Cats - Glen Alleman - Mon, 10/06/2014 - 17:52

The unassisted hand, and the understanding left to itself, possess but little power. Effects are produced by the means of instruments and helps, which the understanding requires no less than the hand. And as instruments either promote or regulate the motion of the hand, so those that are applied to the mind prompt or protect the understanding. - Novum Organum Scientiarum (Aphorisms Concerning the Interpretation of Nature and the Kingdom of Man), Francis Bacon (1561 - 1626).

When we hear we can make decisions in the absence of estimating the impacts of those decisions, the cost when complete, or the lost opportunity cost of making alternative decisions, think of Bacon.

He essentially says show me the money.

Control systems from Glen Alleman
Categories: Project Management

Ten Thousand Baby Steps

NOOP.NL - Jurgen Appelo - Mon, 10/06/2014 - 15:35
Euro Disco/Dance

Last week, I added the last songs to my two big Spotify playlists: Euro Dance Heaven (2,520 songs) and Euro Disco Heaven (1,695 songs). I added the first of those songs to my playlists on December 16, 2010. That’s almost four years ago.

The post Ten Thousand Baby Steps appeared first on NOOP.NL.

Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Sun, 10/05/2014 - 15:33

Nothing is too wonderful to be true, if it is consistent with the laws of nature, and in such things as these, experiment is the best test of such consistency. - Michael Faraday's Diary, March 19, 1849

When we hear wonderful concepts conjectured that are untested outside personal anecdote, ask where this has worked, in what domain, in what governance framework, and what the value at risk was in applying the idea.

Categories: Project Management

Estimating Software-Intensive Systems

Herding Cats - Glen Alleman - Fri, 10/03/2014 - 23:27

There are numerous conjectures about the waste of software project estimates. Most are based on personal opinion, divorced from the business processes of writing software for money.

From the Introduction of the book:

Good estimates are key to project (and product) success. Estimates provide information to make decisions, define feasible performance objectives, and plans. Measurements provide data to gauge adherence to performance specifications and plans, make decisions, revise designs and plans, and improve future estimates and processes.

Engineers use estimates and measurements to evaluate the feasibility and affordability of proposed products, choose among alternative designs, assess risk, and support business decisions. Engineers and planners estimate the resources needed to develop, maintain, enhance, and deploy a product. Project planners use the estimated staffing level to identify needed facilities.

Planners and managers use the resource estimates to compute project cost and schedule, and prepare budgets and plans. Estimates of product, project and process characteristics provide "baselines" to assess progress during the execution of the project. Managers compare estimates and actual values to identify deviations from the project plan and to understand the causes of the variation.

For products, engineers compare estimates of the technical baseline to observed performance to decide if the product meets its functional and operational requirements. Process capability baselines establish norms for process performance. Managers use these norms to control the process and detect compliance problems. Process engineers use capability baselines to improve the production process.

Bad estimates affect everyone associated with the project - the engineers and managers, the customer who buys the product, and sometimes even the stockholders of the company responsible for delivering the software. Incomplete or inaccurate resource estimates for a project mean that the project may not have enough time and money to complete the required work.

If you work in a domain where none of these conditions are in place, then by all means don't estimate.

If you do recognize some or all of these conditions, then here's a summary of the reasons to estimate and measure, from the book.

  • Product Size, Performance, and Quality
    • Evaluate feasibility of requirements
    • Analyze alternative product designs
    • Determine the required capacity and performance of hardware.
    • Evaluate product performance - accuracy, speed, reliability, availability, and all the ...ilities. (ACA web site missed answering this question).
    • Identify and assess technical risks
    • Provide technical baselines for tracking and controlling - this is called Closed Loop Control (a small sketch of the variance calculation follows this list). Having no steering targets, with measures of actual performance assessed against desired performance, is called Open Loop Control.
  • Project Effort, Cost, and Schedule - yes Virginia, real business managers need to know when you'll be done, how much it will cost, and what you'll deliver on that day for that cost. And yes Virginia, there is no Santa Claus
    • Determine project feasibility in terms of cost and time
    • Identify and assess project risks - Risk Management is how Adults Manage Projects - Tim Lister
    • Negotiate achievable commitments
    • Prepare realistic plans and budgets
    • Evaluate business value - cost versus benefit is how business stay in business
    • Provide cost and schedule baseline for tracking and controlling
  • Process Capability and Performance
    • Predict resource consumption and efficiency
    • Establish norms of expected performance - back to the steering targets
    • Identify opportunities for improvement.
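As a concrete illustration of the Closed Loop Control bullet above, here is a minimal sketch using the standard earned value variance formulas (CV = EV - AC, SV = EV - PV). The planned value, earned value, actual cost, and threshold figures are invented for illustration; this is a sketch of the idea, not a prescription.

    # A tiny closed-loop check: compare actuals against the steering targets.
    # PV = planned value, EV = earned value, AC = actual cost, all in currency units.

    def closed_loop_check(pv, ev, ac, threshold=0.10):
        cv = ev - ac      # cost variance: negative means over budget
        sv = ev - pv      # schedule variance: negative means behind plan
        cpi = ev / ac     # cost performance index
        spi = ev / pv     # schedule performance index
        # Steering signal: flag the period if either index drifts past the threshold.
        needs_correction = abs(1 - cpi) > threshold or abs(1 - spi) > threshold
        return cv, sv, cpi, spi, needs_correction

    cv, sv, cpi, spi, act = closed_loop_check(pv=120_000, ev=100_000, ac=115_000)
    print(f"CV={cv:,} SV={sv:,} CPI={cpi:.2f} SPI={spi:.2f} corrective action needed: {act}")

Without the estimates that produce the planned values, there is nothing to compare the actuals against - which is the Open Loop Control problem named above.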

 

Related articles:
  • The Failure of Open Loop Thinking
  • Why Johnny Can't Estimate or the Dystopia of Poor Estimating Practices
  • An Agile Estimating Story
  • How To Make Decisions
  • Incremental Commitment Spiral Model
  • Probabilistic Cost and Schedule Processes
  • Project Finance
  • The Three Elements of Project Work and Their Estimates
  • How Not To Make Decisions Using Bad Estimates
Categories: Project Management

Critical Thinking - The Missing Element in Project Management Processes

Herding Cats - Glen Alleman - Thu, 10/02/2014 - 21:56

Complex and unstable environments encountered in project work - especially software development project work - call for critical thinking by all participants. Complexity comes in part from technical uncertainties, starting with requirements for software capabilities. If there is uncertainty in what capabilities are needed, the project is starting off on the path to failure on day one. Certainly the functional and operational requirements have emergent properties that create uncertainty. As do staffing, productivity, and risks created by reducible and irreducible uncertainty.

To deal with these complexities, critical thinking is needed on behalf of the project leaders and project participants alike.

The first responsibility of the project staff as well as management is to think. To think about what is being asked of them. To consider that they are being paid to produce value by someone other than themselves. To think about why they are there, what they are being asked to do, and how they can go about being stewards of the money provided by those paying for their work. To be true professionals applying their education, training, and experience through analysis and creative, informed thought to make daily decisions.

Failure to apply critical thinking creates a disconnect between those providing the value and those paying for the value. This is best illustrated in the notion that business decisions can be made in the absence of knowing the cost and outcomes of those decisions. The first gap in critical thinking occurs when the decision making process ignores the principles of Microeconomics. This gap is further reinforced when the probabilistic nature of all project work is also ignored.

The three core elements of all projects are the delivered capabilities, the cost of producing those capabilities, and the time frame over which those capabilities are delivered for the cost. Each of these acts as a random variable, interacting with the other two in statistically complex ways.

To manage in the presence of these random variables means making decisions in the presence of uncertainties - and resulting risks - created by the randomness of the variables. These uncertainties are reducible - we can pay more to find out more information. Or they are irreducible - we can only manage in their presence with margin for cost, schedule, and technical performance.

To make decisions in the presence of this paradigm we need to Estimate. Making decisions in the absence of estimating first violates the principles of microeconomics, and secondly ignores the underlying statistical and probabilistic nature of all project work.
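To make the interaction of these random variables concrete, here is a minimal sketch in Python. The distributions and the shared scope-growth driver are invented for illustration; the point is that the joint confidence of meeting both a budget and a deadline is lower than either confidence taken alone.

    import random

    BUDGET = 1_000_000    # dollars, invented for illustration
    DEADLINE = 12         # months, invented for illustration

    def one_project():
        # A shared driver (scope growth) couples cost and duration,
        # which is what "interacting random variables" means here.
        scope_growth = random.gauss(1.0, 0.15)
        cost = 850_000 * scope_growth * random.gauss(1.0, 0.10)
        duration = 10.0 * scope_growth * random.gauss(1.0, 0.08)
        return cost, duration

    samples = [one_project() for _ in range(20_000)]
    p_cost = sum(c <= BUDGET for c, d in samples) / len(samples)
    p_sched = sum(d <= DEADLINE for c, d in samples) / len(samples)
    p_both = sum(c <= BUDGET and d <= DEADLINE for c, d in samples) / len(samples)

    print(f"P(cost <= budget)       = {p_cost:.2f}")
    print(f"P(duration <= deadline) = {p_sched:.2f}")
    print(f"P(both, jointly)        = {p_both:.2f}")  # never greater than either marginal

This is why margin has to be set for cost, schedule, and technical performance together, not one variable at a time.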

When we hear that decisions can be made in the absence of estimates, ask how Microeconomics and statistical uncertainty are handled.


Categories: Project Management

Quote of the Day

Herding Cats - Glen Alleman - Thu, 10/02/2014 - 15:33

We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time. - T.S. Eliot

Quoting 29 year old reports or referencing 30 year old books is not likely the best way to reveal the problems of the day. These problems have existed for 30 years, but new, more effective, and much more transparent solutions are available today.

Categories: Project Management

Management Feedback: Are You Abrasive or Assertive?

Let me guess. If you are a successful woman, in the past, you’ve been told you’re too abrasive, too direct, maybe even too assertive. Too much. See The One Word Men Never See In Their Performance Reviews.

Here’s the problem. You might be.

I was.

But never in the “examples” my bosses provided. The “examples” they provided were the ones where I advocated for my staff. The ones where I made my managers uncomfortable. The examples where, if I had different anatomy, they would have relaxed afterwards, and we’d have gone out for a beer.

But we didn’t.

Because my bosses could never get over the fact that I was a woman, and “women didn’t act this way.” Now, this was more than 20 years ago. (I’ve been a consultant for 20 years.) But, based on the Fast Company article, it doesn’t seem like enough culture has changed.

Middle and senior managers, here’s the deal: At work, you want your managers to advocate for their people. You want this. This is a form of problem-solving. Your first-line and middle managers see a problem. If they don’t have the entire context, explain the context to them. Now, does that change anything?

If it does, you, senior or middle manager, have been derelict in your management responsibility. Your first-line manager might have been able to solve the problem with his/her staff without being abrasive if you had explained the context earlier. Maybe you need to have more one-on-ones. Maybe all your first-line managers could have solved this problem in your staff meeting, as a cross-functional team. Are you canceling one-on-ones or canceling problem-solving meetings? Don’t do that.

Do you have a first-line manager who doesn’t want to be a manager? Maybe you fell prey to the myth of promoting the best technical person into a management position. You are not alone. Find someone who wants to work with people, and ask that person to try management.

We all need feedback. Managers need feedback, too. Because managers leverage the work of others, they need feedback even more than technical people.

If you think a manager on your management team is “too abrasive” or “too assertive,” ask yourself: is this person female? Then ask yourself, “Would I say the same thing if this person looked as if she could be a large sports figure, male attributes and all?”

You see, the fact that I have the physical attributes of a short, kind-of cute woman has not bothered me one bit. I feel seven feet tall. I often act like it. I am not afraid to take chances or calculated risks. I am not afraid to talk to anyone in the organization about anything. How else would I accomplish the work that needs to be done? (You may have noticed that I write tall, too.)

Abrasive and assertive are code words for fearless problem solvers. Don’t penalize the women—or the men—in your organization who are fearless problem solvers.

Categories: Project Management

No Estimates Needs to Come In Contact With Those Providing the Money

Herding Cats - Glen Alleman - Wed, 10/01/2014 - 17:36

For all the words written and posted around estimating or not estimating - and I've contributed my share - the basis of estimates has yet to be addressed outside of a few people. @PeterKretzman, @aritanninen, @kalapaistos, and @fscavo come to mind.

The gap here is simple. No one seems to ask - or even want to ask - who are the estimates for? They are not likely for developers, who, rightly so in some cases, see estimating as taking away from their valuable development duties.

Who Are Estimates For?

Estimates are for business managers providing the money that appears in the developers' paychecks. Estimates are for those same business managers accountable for the Profit & Loss statement of the firm employing the developers writing the code. Those estimates forecast confidence intervals of profit or loss on a project or service before that profit or loss arrives and is irrevocable.

Estimates are for the business marketing staff in a product firm, who are forecasting the "break even" plan for the sunk cost of developing  software that will be sold in the market. Whose revenue will pay back the short term loan (line of credit) used to pay the salaries of the developers. Without this forecast, decisions about spending or further spending have to be made in the dark.

Estimates are for the business development staff in a professional services and development firm, to forecast the confidence that the contractual obligations to provide working software will not cost more - including management reserve and contingency - than was quoted to the customer during the early phases of the project. Since all forecasts are probabilistic, this confidence is - or should be - discussed as the probability of the cost being at or below a value, or of completing on or before a date. The dysfunction of using estimates as commitments is recognized as just that - dysfunction. But as a dysfunction, it's classified as Bad Management. Don't Do Stupid Things on Purpose is good advice for any business.

Estimates are for the internal business finance staff accountable for managing and forecasting costs for internal software development or procurement used to run the business - and likely used to generate revenue - and assure the senior finance people that the "value" produced by this software measured in monetized units of "money" will exceed the cost to achieve that value when the project completes. And some sense of when the date will be, so those monetized benefits can start to appear on the balance sheet using FASB 86 accounting rules.

The estimates are not for the developers

Those talking about #NoEstimates from the developers' point of view are talking to the wrong people. They appear to be talking to their own self-selected group and not the group that provides the money for their work. As my former NASA Cost Director colleague reminds me, "follow the money." So follow the money. Unless the developers are providing the money themselves, the question of estimating or not estimating is a self-referencing conversation in the absence of these people. Because of that, those best placed to say if estimates are of value or not are not in the conversation.

So Back To The Original Question

Ignore for the moment the observed or perceived dysfunctions of the misuse of estimates found in low maturity software development organizations. Ignore for the moment the perception that making estimates of the future cost, duration, and probabilistic outcomes of development work is part of normal engineering processes. Ignore the emotional rhetoric of the Dilbert approach to management.

The core principle of Microeconomics of software development requires we  have some approximation of the future to make decisions about alternatives. The opportunity cost, the trade-space of decision making, requires we approximate the cost and outcomes of our decisions. 

Now add the core business process of managing expenditures against a planned and targeted Return on Investment, which has both Value and Cost in its equation.

Then ask those conjecturing there are:

  • Decision making frameworks for projects that do not require estimates
  • Investment models for software projects that do not require estimates
  • Project management approaches for dealing with risk, scope management, and progress reporting that do not require estimates

To connect the dots between those conjectures and the Microeconomics of software development and the ROI assessments of standard business processes.
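As a hedged illustration of that microeconomics argument, the opportunity-cost comparison is simple arithmetic once estimates of cost and value exist for each alternative. All of the figures below are invented for illustration.

    # Opportunity cost requires an approximation of both cost and value per alternative.
    alternatives = {
        "Feature set A": {"estimated_cost": 200_000, "estimated_value": 320_000},
        "Feature set B": {"estimated_cost": 150_000, "estimated_value": 285_000},
    }

    for name, a in alternatives.items():
        roi = (a["estimated_value"] - a["estimated_cost"]) / a["estimated_cost"]
        print(f"{name}: ROI = {roi:.0%}")

    # Choosing one alternative forgoes the other's return. Without the estimates
    # above, neither the ROI nor the opportunity cost of the decision can be computed.
    best = max(alternatives,
               key=lambda n: alternatives[n]["estimated_value"] - alternatives[n]["estimated_cost"])
    print(f"Higher net value, given these estimates: {best}")

Remove the estimates and the comparison - and therefore the decision framework - has nothing to operate on.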

 

Related articles:
  • How NOT to Estimate Anything
  • How To Fix Martin Fowler's Estimating Problem in 3 Easy Steps
  • More #NoEstimates
  • All Project Numbers are Random Numbers - Act Accordingly
  • How To Estimate, If You Really Want To
  • Resources for Moving Beyond the "Estimating Fallacy"
  • Back To The Future
  • How to "Lie" with Statistics

 

Categories: Project Management

What Can Lean Learn From Systems Engineering?

Herding Cats - Glen Alleman - Tue, 09/30/2014 - 16:33

The Lean Aerospace Initiative and the Lean Aerospace Initiative Consortium define processes applicable in many domains for applying Lean. At first glance there is no natural connection between Lean and Systems Engineering. The ideas below are from a paper I gave at a Lean conference.

Key Takeaways

  • Lean and Systems engineering are cousins.
  • All but trivial projects are systems and many are systems of systems. Thinking like a systems engineer is the basis of implementing Lean processes. Thinking in the absence of systems does little to add sustaining value to any process improvement.
  • Product development is a value stream process, but how the components interact at the technical, business, financial, and operational levels is a systems engineering process. Lean itself does not possess the vocabulary to speak to these systems complexity issues. [1]

Core Concepts of Systems Engineering

  1. Capture and understand the requirements in terms of Capabilities assessed through Measures of Effectiveness (MOE) and Measures of Performance (MOP).
  2. Ensure requirements are consistent with what is predicted to be possible in a solution in these MOEs and MOPs.
  3. Treat goals as desired characteristics for what may not be possible.
  4. Define the MOE, MOP, goals, and solutions for the whole lifecycle of the project in units meaningful to the buyer.
  5. Maintain the distinction between the statement of the problem and the description of the solution.
  6. Baseline each statement of the problem and the statement of the solution.
  7. Identify descriptions of alternative solutions.
  8. Develop descriptions of the solution.
  9. Except for simple problems, develop a logical  solution description.
  10. Be prepared to iterate in design to drive up effectiveness.
  11. Base the solution on the evaluation of its effectiveness, in units of measure meaningful to the buyer.
  12. Independently verify all work products.
  13. Validate all work products from the perspective of the stakeholders.
  14. Some management is needed to plan and implement effective and efficient transformation of requirements and goals into a description of the solution.

Typical System Engineering Activities

  1. Technical management
  2. System design
  3. Product realization
  4. Technical analysis and evaluation
  5. Product control
  6. Process control
  7. Post implementation support

Steps to Lean Thinking [2]

  1. Specify value
  2. Identify value stream
  3. Make value flow continuously
  4. Let customers pull value
  5. Pursue perfection

Differences and Similarities between Lean and Systems Engineering

  1. Both emerged from practice. Only later were the principles and theories codified.
  2. Both have focused on different phases of the product lifecycle: SE generally on product development, with more focus on planning; Lean generally on product production, with more focus on empirical action.
  3. Unlike Lean, SE has less focus on quality, except for Integrated Product and Process Development (IPPD).

Despite these differences and similarities, both Lean and Systems Engineering are focused on the same objectives - delivering products or lifecycle value to the stakeholders.

It is the lifecycle value that drives both paradigms and must drive any other process paradigm associated with Lean and Systems Engineering - paradigms like software development, the management of any form of project, and the very notion of agile. A critical understanding often missed is that Lifecycle Value includes the cost of delivering that value.

Value can't be determined in the absence of knowing the cost. ROI and Microeconomics of decision making require both variables to be used to make decisions.

What do we mean by lifecycle?

Generally, lifecycle value is a combination of product performance, quality, cost, and fulfillment of the buyer's needed capabilities. [3]

Lean and Systems Engineering share this common goal. The more complex the system, the more contribution there is from Lean and SE.

Putting Lean and Systems Engineering Together on Real Projects

First some success factors on complex projects [4]

  1. Dedicated and stable interdisciplinary teams
  2. Use of prototypes and models to generate tradeoffs
  3. Prioritizing product features
  4. Engagement with senior management and customers at every point in the project
  5. Some form of high performing front end decision process that reduces instability of key inputs and improves the flow of work throughout the product lifecycle.

This last success factor is core to any complex environment, no matter what the process is called. In the absence of stability of requirements and funding, improvements to the flow of work are constrained.

The notion of adapting to changing requirements is not the same as having the requirements - and the associated funding - be unstable.

Mapping of the Value Stream to the work process requires some level of stability. It is the search for this stability where Systems Engineering - as a paradigm - adds measurable value to any Lean initiative.

The standardization and commonality of processes across complex systems is the basis for this value. [5]

Conclusions

  1. Lean and SE are two sides of the same coin regarding the objective of creating value for the stakeholder
  2. Lean and SE complement each other during different phases of the project - ideation and product trades for SE and production waste removal for Lean anchor both ends of the spectrum of improvement opportunities.
  3. Value stream thinking makes visible the paths to be taken in transitioning to a Lean paradigm while maintaining the principles of systems engineering. [6]
  4. The result is the combination of Speed and Robustness - systems are easily adaptable to change while maintaining fewer surprises, using leading indicators to make decisions and decreasing sensitivity to production and use variables.

[1] "The Lean Enterprise - A Management Philosophy at Lockheed Martin," Joyce and Schechter, Defense Acquisition Review Journal, 2004.

[2] Lean Thinking, Womack and Jones, Simon and Schuster, 1996

[3] Lean Enterprise Value: Insights from MIT’s Lean Aerospace Initiative, Murman, et al, Palgrave 2002.

[4] "Lean Systems Engineering: Research Initiatives in Support of a New Paradigm," Rebentisch, Rhodes, and Murman, Conference on Systems Engineering, April 2004.

[5] LM21 Best Practices, Jack Hugus, National Security Studies, Louis A. Bantle Symposium, Syracuse University Maxwell School, October 1999

[6] "Enterprise Transition to Lean Roadmap," MIT Lean Aerospace Initiative, 2004 Plenary Conference.

 

Related articles:
  • Why Projects Fail, No Matter the Domain
  • When We Say Risk What Do We Really Mean?
  • How to Deal With Complexity In Software Projects?
  • Big Systems Acquisitions - Lessons for ACA Web Site
Categories: Project Management

Handling Requests for Unnecessary Artifacts

Mike Cohn's Blog - Tue, 09/30/2014 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

“Working software over comprehensive documentation.” You’ve certainly seen that statement on the Agile Manifesto. It is perhaps the most important of the Manifesto’s four value statements—working software is, after all, the reason a team has undertaken a software development effort.

It is also one of the most misused parts of the Manifesto. This is the quote people cite when trying to get out of all documentation, which is not what the Manifesto says we should value.

Some documentation on a project can be great. But most non-agile teams write too much and talk too little. Some agile teams go to the opposite extreme, but many seem to find a good balance.

Occasionally, though, a team and product owner may disagree on the necessity of a document—usually with the product owner wanting a document and the team saying it’s not necessary. I’ve found two guidelines helpful in determining how to handle requests for various artifacts, especially documentation, on an agile project.

Guideline No. 1: If a team would produce an artifact while in the process of creating working software, that artifact is just naturally produced.

This guideline covers essentially everything a team would want to produce while on the way to building a system or product. It includes, for example, source code. It also includes any design documents, user guides and other items that the team wants to produce for the benefit of the current team, future teams maintaining the software or end users.

Guideline No. 2: If an artifact would not naturally be produced in the process of creating working software, the artifact is added to the product backlog.

The second guideline is there to cover cases when the product owner (or any other outside stakeholder) wants an artifact produced (usually a document) that the team would not normally produce.

For example, suppose the product owner asks the team to write a document describing every table and field in the database. I’ve certainly seen projects where such a document has been extremely helpful. (In fact, I’ve both requested and written such a document before.) But I’ve also seen projects where this would have been unnecessary.

If the team thought this database description document were helpful, they would produce it in the process of creating the working software. And Guideline No. 1 would apply. But if they don’t think this document is necessary, they won’t produce it. Unless, that is, the product owner insists, which is where Guideline No. 2 comes in.

If the product owner wants this document, the product owner creates a new product backlog item saying so. The team can then estimate the time it will take to develop this document, just like they’d estimate any other product backlog item.

Putting an estimate on creating the document makes its cost explicit. This forces a product owner to think about the opportunity cost of developing that document. The product owner will be able to ponder: This five-point document or five points worth of new features?

I don’t know which the product owner will choose, but this approach makes the cost of that artifact explicit, allowing it to be compared with the value of additional features instead.
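One way to make that comparison concrete - a sketch with invented velocity and cost figures, not numbers from the article - is to express the document's estimate in the same terms as everything else in the backlog:

    # Invented figures: a 5-point document for a team with a velocity of 25 points
    # per sprint and a fully loaded team cost of $40,000 per sprint.
    document_points = 5
    velocity_per_sprint = 25
    team_cost_per_sprint = 40_000

    fraction_of_sprint = document_points / velocity_per_sprint
    document_cost = fraction_of_sprint * team_cost_per_sprint
    print(f"The document consumes {fraction_of_sprint:.0%} of a sprint, roughly ${document_cost:,.0f},")
    print("which is also 5 points of features that will not be built this sprint.")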

I’d love to know your thoughts on this. How does your team handle product owner requests for artifacts the team wouldn’t naturally produce? What artifacts does your team find helpful? Please share your thoughts in the discussion below.

Is Agile Dead or Can Good Software Development Scale?

From the Editor of Methods & Tools - Tue, 09/30/2014 - 13:23
As Agile becomes widely accepted as a software development approach, many large organizations have adopted it, mainly in its Scrum form, to reduce development cycle time. There might even be a fair share of adopters that are really trying to apply Agile values. The topic of scaling Agile has been discussed for many years, and you can read the excellent books of Craig Larman and Bas Vodde on this topic. We have also recently seen the emergence of "proprietary" approaches, like SAFe, to achieve this goal. At the same time, ...

The Cognitive Illusion of Bad Software Project Outcomes

Herding Cats - Glen Alleman - Tue, 09/30/2014 - 00:41

Consider Daniel Kahneman and Amos Tversky's paper On the Reality of Cognitive Illusions. ‡ They suggest, through their research, that intuitive predictions and judgements are often mediated by a small number of distinctive mental operations, called judgement heuristics. These heuristics often lead to characteristic errors and biases.

For example, the effect of aerial perspective on apparent distance is confirmed by the observation that the same mountain appears closer on a clear day than on a hazy day. The intuitive prediction and judgement of probability are often based on the relations of similarity between evidence and possible outcomes. This representativeness is an assessment of the degree of correspondence between a sample and a population.

The next heuristic is the availability bias in which the probability is estimated by assessing availability or associative distance. † Experience shows and experiments confirm that large classes are recalled better and faster than instances of less frequent classes. That likely occurrences are easier to imagine than unlikely ones. And associative connections are strengthened when two events frequently co-occur. That these associative bonds are strengthened by repetition is the basis of memory. 

So Here's the Issue

When we hear or read that software projects fail often, or that the Standish report says ..., or a personal anecdote that resonates with our own personal experience, we recall that experience from memory. The actual data from the population of all data are not used for comparison. Rather we assume - by applying the cognitive illusion - that the sample data represents the large class of population data, since our repeated observations of the sample data class have reinforced our illusion that the sample data IS the population.

This is the core issue with anecdotal information when making decisions in the presence of uncertainty. Or speaking about a condition in the absence of a statistically testable hypothesis. Or attempting to convey a message in the absence of external confirmation that the message is on solid footing compared to the population of data.

Why This Is Not Good Management

When we hear we're all bad at making estimates, in the absence of actual population statistics about estimate making, we're using Cognitive Illusions and Availability Heuristics. Because we have personal experience with making bad estimates and the majority of people we associate with have the same experience.

This experience is in no way representative of the population of people tasked with making estimates. This would be irrelevant of course if the conversation were simple chatter at the bar. But once that conversation enters the realm of policy making, method development, or suggestions that the anecdotal observations need to result in changing how business conducts its business - we're bad at making estimates so the solution is to stop making estimates - then both availability bias and Cognitive Illusions have displaced the actual conversation about the very validity of the anecdotal concepts. And it is replaced by a strong defense of the cognitively biased idea, no matter the credibility of the concept - which is most often weak at best and simply false at worst.

So next time you hear some statement about something involving observational and anecdotal data, ask a simple question.

What's the process by which these anecdotal observations have been tested in the broader population of conditions?

This is the core issue with the Standish Reports. They are self-selected samples of troubled projects, assessed in the absence of the population of projects both troubled and not troubled.
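A minimal simulation - with invented numbers, not Standish data - shows how a self-selected sample can misrepresent the population it is drawn from:

    import random

    # Invented population: 20% of projects are troubled. Troubled projects are assumed
    # far more likely to respond to a survey about failure; that assumption is the
    # self-selection effect being illustrated, not a claim about any real survey.
    random.seed(7)
    population = [random.random() < 0.20 for _ in range(100_000)]   # True = troubled

    def responds(troubled):
        return random.random() < (0.60 if troubled else 0.10)

    sample = [t for t in population if responds(t)]

    print(f"Troubled rate in the population:           {sum(population) / len(population):.0%}")
    print(f"Troubled rate in the self-selected sample: {sum(sample) / len(sample):.0%}")

The population has not changed; only the sampling has. That is the question to ask of any anecdotal claim before it becomes the basis of policy.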

Always ask for references, data representative of the references, and an assessment of the statistical confidence that the anecdotal data is in fact correlated with the population data. Otherwise it's just an opinion, and very likely an uninformed opinion.

And if you're paying money to listen to someone tell you anecdotal data and don't speak up and ask those questions, you've participated in the availability heuristic and cognitive illusion along with the speaker.

† Availability: A Heuristic for Judging Frequency and Probability, Amos Tversky and Daniel Kahneman, a chapter appearing in Cognitive Psychology, 1973

‡ On the Reality of Cognitive Illusions, Daniel Kahneman and Amos Tversky, Psychological Review, Vol. 103, No. 3, pp. 582-591

Categories: Project Management

Scale Agile With Small-World Networks Posted

I posted my most recent Pragmatic Manager newsletter, Scale Agile With Small-World Networks on my site.

This is a way you can scale agile out, not up. No hierarchies needed.

Small-world networks take advantage of the fact that people want to help other people in the organization. Unless you have created MBOs (Management By Objectives) that make people not want to help others, people want to see the entire product succeed. That means they want to help others. Small-world networks also take advantage of the best network in your organization—the rumor mill.
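For readers who want to see the small-world property itself, here is a minimal sketch using the networkx library. The graph parameters are arbitrary, chosen only for illustration: a little random rewiring of a regular lattice collapses the average path length while clustering stays high.

    import networkx as nx

    # Watts-Strogatz model: n people, each linked to k neighbours, with probability p
    # of rewiring a link to a random person elsewhere in the organization.
    for p in (0.0, 0.05, 0.2):
        g = nx.connected_watts_strogatz_graph(n=200, k=6, p=p, seed=42)
        path_len = nx.average_shortest_path_length(g)
        clustering = nx.average_clustering(g)
        print(f"rewiring p={p:4.2f}: avg path length={path_len:5.2f}, clustering={clustering:.2f}")

A handful of cross-team links puts everyone within a few hops of anyone else, with no hierarchy added - which is the property the newsletter relies on.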

If you enjoy reading this newsletter, please do subscribe. I let my readers know first about specials that I run for my books and about new books when they come out.

Categories: Project Management