Software Development Blogs: Programming, Software Testing, Agile, Project Management


Project Management

Find the Root Cause Before Assuming Your Fix Can Actually Fix Anything

Herding Cats - Glen Alleman - Mon, 03/21/2016 - 17:47

A burst of tweets claiming No Estimates fixes these problems came across Twitter this morning. I won't repeat who they are attributed to, to protect the guilty. But here's a core concept that is totally missing not only from the No Estimates conjecture, but from almost every discussion where a solution is proposed before the problem has actually been defined. Let's start with Dean Gano's introduction to Apollo Root Cause Analysis, a George Bernard Shaw quote so fitting to the discussion of ignoring the root cause and going straight for a solution to the symptom - in many cases an unnamed symptom.

Ignorance is a most wonderful thing.
It facilitates magic.
It allows the masses to be led.
It provides answers when there are none.
It allows happiness in the presence of danger.
All this while, the pursuit of knowledge can only destroy the illusion.
Is it any wonder mankind chooses ignorance?
~ George Bernard Shaw

So until the symptom is named - and the smell of dysfunction is not a symptom - and until the root cause of that symptom is discovered - and applying the 5 Whys to an unnamed symptom is not a root cause analysis process - there is no chance that any suggested process, method, or change in behavior will have any impact on the symptom, named or unnamed.

To see how the statement Estimating is the smell of Dysfunction is seriously flawed, and the approach of asking 5 Whys equally flawed, please read RealityCharting®: Seven Steps to Effective Problem-Solving and Strategies for Personal Success, Dean L. Gano.

The Seven Steps are:

  1. Define the problem.
  2. Determine the known causal relationships, to include the actions and conditions of each effect.
  3. Provide a graphical representation of the causal relationships to include specific actions and conditional causes.
  4. Provide EVIDENCE to support the existence of each cause.
  5. Determine if each set of causes is sufficient and necessary to cause the effect.
  6. Provide effective solutions that remove, change, or control one or more causes of the event. Solutions must have been shown to prevent recurrence, meet our goals and objectives, be within our control, and not cause other problems.
  7. Implement and track the effectiveness of each solution.

When one or all of these steps are missing, anyone conjecturing their solution - or worse, conjecturing we're just exploring for the solution - that conjectured solution is NOT a solution; it's just unsubstantiated conjecture.
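As a minimal sketch of how steps 1 through 4 can be made concrete - a named problem, causes split into actions and conditions, and evidence attached to every cause - here is one possible representation in Python. The problem, causes, and helper names are illustrative assumptions, not part of Gano's RealityCharting tooling:

    from dataclasses import dataclass, field

    @dataclass
    class Cause:
        """A node in an Apollo-style causal chart: an action or a condition
        (step 2), backed by evidence (step 4) rather than conjecture."""
        description: str
        kind: str                                     # "action" or "condition"
        evidence: list = field(default_factory=list)  # citations, logs, data
        causes: list = field(default_factory=list)    # this effect's own causes

    def unsupported(effect):
        """Flag causes with no supporting evidence: any solution hung on
        these (step 6) is still unsubstantiated conjecture."""
        missing = [] if effect.evidence else [effect]
        for cause in effect.causes:
            missing.extend(unsupported(cause))
        return missing

    # Step 1: a named problem, not a "smell".
    late_release = Cause("Release slipped 3 weeks", "action", ["release log"])
    late_release.causes = [
        Cause("Scope grew mid-sprint", "action", ["backlog history"]),
        Cause("No schedule margin planned", "condition"),  # no evidence yet
    ]
    for cause in unsupported(late_release):
        print("needs evidence before proposing a fix:", cause.description)

Only when every cause in the chart carries evidence does it make sense to move on to steps 5 through 7.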

One of my favorite quotes when hearing unsubstantiated claims is this:

How many legs does a dog have if you call the tail a leg? Four. Calling a tail a leg doesn't make it a leg - Abraham Lincoln

Calling estimating the smell of Dysfunction doesn't make estimating the smell of dysfunction. You've only identified an unsubstantiated symptom. You haven't found the cause, and certainly can't suggest Not Estimating as the corrective action.

When we have willful ignorance of the Microeconomics of decision making, of managerial finance as a governance process for managing other people's money, and denial that the uncertainties of projects - aleatory and epistemic - can only be addressed by estimating the impact of those uncertainties, then we are no better than the people in George Bernard Shaw's quote above. And we are doomed to repeat the symptoms that result from ignoring these principles of managing in the presence of uncertainty.


Related articles:

  • The Dysfunctional Approach to Using "5 Whys"
  • Carl Sagan's BS Detector
  • Myth's Abound
  • Making Conjectures Without Testable Outcomes
  • Are Estimates Really The Smell of Dysfunction?
  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • The Fallacy of the Planning Fallacy
  • IT Risk Management
Categories: Project Management

Load Testing, Communication and #NoProjects in Methods & Tools Spring 2016 issue

From the Editor of Methods & Tools - Mon, 03/21/2016 - 12:01
Methods & Tools has published its Spring 2016 issue that discusses load testing scripts, communication in project teams, the Kano model for requirements and #NoProjects. Methods & Tools is a free e-magazine for software developers, testers and project managers. * Eradicating Load Test Errors – Part 1: Correlation Errors * Breaking Bad – The […]

Agile at Scale - A Reading List (Update 10)

Herding Cats - Glen Alleman - Fri, 03/18/2016 - 23:29

I'm working two programs where Agile at Scale is the development paradigm. When we start an engagement using other people's money, in this case the money of a sovereign, we make sure everyone is on the same page. When Agile at Scale is applied, it is usually applied on programs that have tripped the FAR 34.2/DFARS 234.2 levels for Earned Value Management. This means $20M programs are self-assessed and $100M and above are validated by the DCMA (Defense Contract Management Agency).

While these programs are applying Agile, usually Scrum, they are also subject to EIA-748-C compliance and a list of other DIDs (Data Item Descriptions) and other procurement, development, testing, and operational guidelines. This means there are multiple constraints on how the progress to plan is reported to the customer - the sovereign.

These programs are not 5 guys at the same table as their customer exploring what will be needed for mission success when they're done. These programs are not everyone's cup of tea, but agile is a powerful tool in the right hands on Software Intensive System of Systems for Mission Critical programs. Programs that MUST deliver the needed Capabilities, at the Needed Time, for the Planned Cost, within the planned Margins for cost, schedule, and technical performance.

One place to start to improve the probability that we're all on the same page is this reading list. This is not an exhaustive list, and it is ever growing. But it's a start. It's hoped this list is the basis of a shared understanding that while Agile is a near universal principle, there are practices that must be tailored to specific domains. And one's experience in one domain may or may not be applicable to other domains. 

Like it says in the Scrum Guide. 

Scrum (n): A framework within which people can address complex adaptive problems, while productively and creatively delivering products of the highest possible value.

And since Scrum is an agile software development framework, Scrum is a framework, not a methodology. Scrum of Scrums - Agile at Scale, and especially Agile at Scale inside EIA-748-C programs - has much different needs than 5 people sitting at the same table with their customer with an emerging set of requirements, where the needed capabilities are vague until they appear.

One of the classes every aspiring grad student has to take is research methods. This class teaches the PhD hopefuls (I didn't make the cut and got the consolation prize of an MS) all about doing research and preparing to be a real scientist. One topic in this class is the literature search. This makes sure you find out whether your clever idea for a research topic - in case your advisor hasn't gotten around to actually talking to you - has already been taken, researched, and solved. This is one problem in the physics world - you need an original idea. Replicating old ideas doesn't get you very far.

Here's a start of a literature search on merging Agile at Scale with Earned Value Management. I haven't gotten to the European and Far East journals yet. Instead of a list, I'll just type this once and repurpose the resources here. This PDF is the Resources section of a briefing being used with our clients who are integrating Agile onto EVM programs. Go to the LinkedIn SlideShare site - the LI logo in the lower right - to open the PDF and follow the links.

Agile at scale resources from Glen Alleman (slides)

Related articles:

  • Slicing Work Into Small Pieces
  • Agile Software Development in the DOD
  • Technical Performance Measures
  • What Do We Mean When We Say "Agile Community?"
  • Can Enterprise Agile Be Bottom Up?
  • How To Measure Anything
  • Business Rhythm Drives Process
  • Empirical Data Used to Estimate Future Performance
Categories: Project Management

Influential Agile Leader on Projects at Work

I spoke with Dave Prior on a podcast for Projects at Work. The podcast is titled “Influential Agile Leader.” We spoke about how leaders need to practice coaching and influence, and how to use experiential approaches to help people learn how.

You can learn with Gil Broza and me at the Influential Agile Leader. We are leading sessions in Boston April 6-7 and in London May 4-5. We will close registration March 24, so register now.

Categories: Project Management

Beautiful Example of the Disconnect Between Those who Pay and Those Who Spend

Herding Cats - Glen Alleman - Thu, 03/17/2016 - 15:38

Managers and Agile

Perhaps Mr. Elliott could provide answers to these questions to our clients, when he suggests that estimates are worthless. 

  • When are those working demos being sent to production so the business can start earning back its investment?
  • How many more working demos will be coming before we can go live with the software I'm paying you to build?
  • What's your Estimate to Complete and Estimate at Completion for those working demos for all the Capabilities I need to start down the path to breakeven for the software I'm paying you to build?
  • You do have a Product Roadmap, showing where this working demo and all the future working demos fit along the way to DONE, right?
  • You have a Cadence Release Plan (or maybe a Capabilities Release Plan) showing where this working demo fits into the overall plan that will result in a working production system capable of earning revenue at the rate I need to start the breakeven plan my CFO needs to show the Board of Directors at next month's board meeting, right?
  • Our main investor just called and asked about the revenue rate from the product you're showing in these working demos. How many more working demos before we can start shipping the production version of the software he's paying us to build?

Instead of just delivering demos, how about delivering capabilities in the order needed to earn the value in exchange for the cost to produce that value? And doing this according to the plan described in the Product Roadmap, using the Cadence or Capability Release Plan? That way those paying for the production of value can have confidence they'll be getting their investment returned as needed to fulfill the business plan or accomplish the mission.

Project maturity flow is the incremental delivery of business value, from Glen Alleman (slides)

Related articles:

  • Software Engineering is a Verb
  • No Estimates Needs to Come In Contact With Those Providing the Money
  • GAO Reports on ACA Site
  • What's the Smell of Dysfunction?
  • Closed Loop Control
Categories: Project Management

Writing Easy to Maintain Code

From the Editor of Methods & Tools - Wed, 03/16/2016 - 20:43
Wikimedia software developer and Software Craftsmanship advocate Jeroen De Dauw discusses how to maintain code. A significant amount of time is spent on reading code, sometimes more than on writing code. Jeroen asks questions like: how does elegant code tend to rot over time, and what can we do to make this clearer? In […]

Estimating is Part of Everyday Life and Everyday Projects

Herding Cats - Glen Alleman - Wed, 03/16/2016 - 17:48

A recent Twitter post started out with: I predict my train will depart from platform 12 in 10 minutes: degree of predictability correlates with length of time. I then asked: and what is the evidence on which you base this estimated time of departure? I got back: I didn't estimate departure, there is a timetable, but trains sometimes run late & the platform sometimes changes.

But in fact - mathematical fact - there was an estimate made. That the trains are sometimes late informs the probability of leaving as planned.

With the timetable there is a target departure time, but unless you're riding the S-Bahn from our Eching office to the downtown Munich office, as I did for a year, the train departure is approximate. The S-Bahn departure was "exactly" 8:04, first because it was Germany 1986, and second because the train was parked at the end of the line.

But no matter if the train is departing Eching station, a London station, or the station in Lower Downtown Denver, "margin" is needed for both you the traveler and the train. This "margin" protects against the Aleatory uncertainties that exist in ALL systems, even trivial systems. The airlines bake this into their schedules. I fly a lot. Many times we arrive early to no gate and have to wait. Rarely in good weather do we arrive late.

When we do arrive late on Southwest Airlines, it's almost always due to some event-based uncertainty. These are Epistemic uncertainties.

Both Aleatory and Epistemic uncertainties exist on projects and in real life. They are part of life - all life.

Aleatory uncertainty is handled by margin. Epistemic uncertainty is handled by buying down the risk. Here are the details on managing in the presence of uncertainty. Uncertainties that are ALWAYS present. Uncertainties that ALWAYS require making an estimate of the probability of occurrence (Epistemic), the range of variance (Aleatory), the probability of outcomes, the probability of the impact from those outcomes, and the probability of the residual uncertainty and associated risk when the initiating uncertainty is not 100% removed.

In other words, you can't make a decision in the presence of uncertainty without estimating all those variables - occurrence, outcome, impact, residual uncertainty. That's the way life - and projects - work. Saying decisions can be made without estimating - #NoEstimates - doesn't change the way nature and projects work, no matter how many times it is said. Especially how many times it is said without evidence of how to actually make those decisions without estimates.
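As a minimal sketch of that claim - using made-up probabilities and durations, not real timetable or project numbers - here is how the two kinds of uncertainty combine into the margin an estimate has to cover:

    import random

    def simulate_departure_delay(trials: int = 100_000) -> None:
        """Aleatory uncertainty is natural variability, handled with margin.
        Epistemic uncertainty is an event that may or may not occur, handled
        by buying down the risk. All parameters are illustrative assumptions."""
        delays = []
        for _ in range(trials):
            delay = random.gauss(0.0, 2.0)       # aleatory: minutes of natural jitter
            if random.random() < 0.05:           # epistemic: 5% chance of a signal fault
                delay += random.uniform(10, 30)  # the fault's impact is itself uncertain
            delays.append(max(delay, 0.0))
        delays.sort()
        print(f"margin covering 80% of outcomes: {delays[int(0.80 * trials)]:.1f} minutes")

    simulate_departure_delay()

Shrinking the epistemic probability (buying down the risk) or the aleatory spread shrinks the needed margin; neither ever reaches zero, which is why the estimate is always there, stated or not.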

Managing in the presence of uncertainty, from Glen Alleman (slides)

Related articles:

  • Intellectual Honesty of Managing in the Presence of Uncertainty
  • Managing in the Presence of Uncertainty
  • Some More Background on Probability, Needed for Estimating
Categories: Project Management

Common Ground for a Conversation about Estimating

Herding Cats - Glen Alleman - Sun, 03/13/2016 - 05:44

Neil Killick posted a good question: what's the common ground for talking about estimates?

In this post I'd suggest predictability is very difficult to achieve in the presence of uncertainty - and all projects operate in the presence of uncertainty. All estimates have two attributes: accuracy and precision. The values of these two attributes are what those needing the estimates are after. With those values in hand, the decision makers can assess the "value" of the estimate.

It may well be all they need is an order of magnitude (10X). "How much does it cost to install 50 sites of SAP, with 100 users at each site?" - a question asked of our firm in the past. 100M, 200M, 500M? Or something more accurate and precise: "Are these two Features, 2015007 and 2015008, going to make it into the next Cadence release, scheduled for the end of November?"

Predictable is not actually an attribute without knowing the "desired" accuracy and precision. That request comes from those asking for the estimate. In our domain that starts with a Broad Area Announcement to provide a "sanity check" for those asking for the solution, to "test the bounds of the budget" they may need to acquire the capabilities from the project.

Regarding the dysfunctions of business that are connected to estimates - there are many, in our domain and in most other domains. But that dysfunction is not "caused" by the estimate. It may be a "symptom" of poor estimating, but not the "cause." Without first identifying the Root Cause of the symptom, no suggestion for improvement will be effective. Root Cause Analysis is the key here. It's been suggested that estimates are the smell of dysfunction, but no root cause is defined, nor any Corrective Actions - just the conjecture of a symptom. This is bad root cause analysis; no matter how many times the 5 Whys are suggested, it's still bad RCA. So if we want to actually find the root cause of the dysfunction, it's going to have to follow a process.

Finally, without a context for asking for the estimate and providing the estimate, the needed precision and accuracy cannot be determined. Here's a paradigm for agile project management, where we can separate the domains and ask when it is appropriate to estimate and when it is not needed.

My suggestion for a common ground is to establish

  • The understanding that estimates are needed to make decisions by those providing the money
  • The needed precision and accuracy of the requested estimate
  • The available information (past performance) and the model (parametric or probabilistic) the estimate will use, and how that information will impact the accuracy and precision
  • Confirmation that those making the estimate have the skills, experience, and knowledge needed to produce an estimate accurate and precise enough to meet the needs of those requesting it
  • Acknowledgment on both sides of the conversation that making decisions in the presence of uncertainty requires making a decision based on an estimate of the outcomes of that decision. This estimate may be "quick and dirty," even to the point of "it's my feel this will be the outcome," or a detailed bottom-up Basis of Estimate for spending millions if not billions of the customer's money. This is the Value at Risk conversation: what are you willing to risk if you make that decision without the needed accuracy and precision to "protect" your decision from a loss? (See the sketch after this list.)
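As one minimal sketch of the third bullet - past performance feeding a parametric model, with both accuracy (the central value) and precision (the spread) reported back to the requester - consider the following; the durations and the scaling rule are illustrative assumptions only:

    import statistics

    def parametric_estimate(past_durations, scope_factor):
        """Scale the reference-class mean by relative scope, and report the
        spread so the requester can judge accuracy and precision together."""
        accuracy = statistics.mean(past_durations) * scope_factor
        precision = statistics.stdev(past_durations) * scope_factor
        return accuracy, precision

    # Five similar past features took these durations (days);
    # the new feature is judged to be about 1.5x their scope.
    mean, spread = parametric_estimate([8.0, 12.0, 9.0, 11.0, 10.0], scope_factor=1.5)
    print(f"estimate: {mean:.0f} days, +/- {spread:.0f} days (1 sigma)")

If the requester needs a 10X order-of-magnitude answer, this spread is plenty; if they need a Cadence-release commitment, it tells them whether the past-performance data is good enough to make one.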

Then those exchanging ideas about the need for estimates can have a common set of principles for that exchange. At this point there is no common set of principles. I'd suggest that the #NoEstimates advocates have not provided the principles by which decisions can be made without estimating the impact of those decisions. And until there are principles from the #NoEstimates advocates, it's going to be hard to actually have that conversation.

Related articles:

  • Critical Success Factors of IT Forecasting
  • Approximating for Improved Understanding
  • Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Successfully Implementing Change

Herding Cats - Glen Alleman - Wed, 03/09/2016 - 18:41

Two Critical Videos for Agile Transformation. Here's Part 1

 Here's Part 2


So when we hear estimates are the smell of Dysfunction - what are the actionable outcomes needed to address these dysfunctions? To date there have been ZERO suggestions. In fact, one of the originators claims he's not going to provide any - "we're just exploring."

For anyone accountable for delivering Value in exchange for the Cost of that Value, "we're just exploring" is the very definition of Muda for the business.

Related articles:

  • Architecture-Centered ERP Systems in the Manufacturing Domain
  • IT Risk Management
  • Why Guessing is not Estimating and Estimating is not Guessing
Categories: Project Management

Agile Transformation - Keys to Success

10x Software Development - Steve McConnell - Wed, 03/09/2016 - 15:44

I wanted to let you know that I've posted a two-part series on Construx's experience with Agile Transformations, pitfalls, keys to success, and so on. 

The videos focus on two models that describe the transformation issues we have seen on the ground. You might have seen one or both of the models before, but they aren't often applied specifically to Agile adoption work. The focus of the videos is on showing how these general models specifically apply to Agile transformations. We have found that these models predict very well the challenges to expect in a transformation initiative and contain good insights into how to successfully overcome the challenges. 

Part 1: Agile Transformation - Change Model

Part 2: Agile Transformation - Adoption Model

Check out the talks!


Agile Transformation - Keys to Success - Two Part Series by Steve McConnell

Software Development Linkopedia March 2016

From the Editor of Methods & Tools - Wed, 03/09/2016 - 10:03
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about retrospectives, keeping a zen programming attitude, learning organizations, project teams at Google, product backlog refinement, mobile testing, scaling agile, pair testing, software development automation and programming with Java. Web site: […]

How Agile Changes Testing, Part 4

In Part 1, I discussed the agile project system. In Part 2, I discussed the tester’s job in agile. In Part 3, I discussed expectations about documentation (which is what the original question was on Twitter). In this part, I’ll talk about how you “measure” testers.

I see a ton of strange measurement when it comes to measuring the value of testers. I’ve seen these measurements:

  • How many bugs did a single tester report?
  • How many times did a tester say, “This isn’t any good. I’m sending it back.”
  • How many test cases did a tester develop?

All of these measures are harmful. They are also surrogate measures.

The first rule of measurement is:

Measure what you want to see.

Anything other than what you want to see is a surrogate measurement.

Do you want to see many bug reports? (Notice I did not say defects. I said bug reports.) If you measure the number of bug reports, you will get that. You might not get working software, but you’ll get bug reports. (Rant on: Bug reports might not report unique problems, defects, in the product. Rant off)

Do you want testers to pass judgment on the code? Ask how many times they threw something back “over the wall” or rejected the product (or the build).

Do you want to measure test cases? You’ll get a large number. You might have terrible code coverage or even scenario coverage, but you’ll get test cases.

In waterfall or phase-gate, you might have measured those surrogate measures, because you could not see working product until very late in the project.

In agile, we want to see running tested features, working product. Why not measure that?

Running tested features give us the possibility of other measures:

  • Cycle time: how long it takes for a feature to get through the team.
  • Velocity: how many features we can finish over a time period.
  • If we look at a kanban board, we can see the flow through the team. That allows us to see where we have blockers for the team. What’s queued for test?
  • When can we see working software? If we only have running tested features every week or so, we can see new working software only that often. Is that often enough?
  • What is the team happiness? Is the team working together, making progress together?

You “measure” the team, looking for throughput. If the team doesn’t have throughput, do some root cause analysis and impediment removal. That’s because we have a team approach to product development. (See Part 1.)
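As a minimal sketch of measuring what you want to see - cycle time and throughput computed from the board's start and finish dates, rather than counting bug reports or test cases - with invented feature names and dates:

    from datetime import date

    # (feature, started, finished) pulled off the kanban board - illustrative data.
    features = [
        ("export report", date(2016, 3, 1), date(2016, 3, 4)),
        ("login audit",   date(2016, 3, 2), date(2016, 3, 9)),
        ("search filter", date(2016, 3, 7), date(2016, 3, 10)),
    ]

    cycle_times = [(done - start).days for _, start, done in features]
    period = (max(d for _, _, d in features) - min(s for _, s, _ in features)).days

    print(f"average cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")
    print(f"throughput: {len(features)} running tested features in {period} days")

A long cycle time on one feature points you at the blocker to remove; neither number belongs to any one developer or tester.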

Back in the 80’s and early 90’s, we learned we had a software “crisis.” Our systems got more complex. We, as developers, could not test what we created. The testing industry was born.

Some people thought testing software was like testing manufacturing. In manufacturing, you duplicate (I apologize to manufacturing people, this is a simplification) the same widget every time. It’s possible in manufacturing to separate the widget design and prototyping from widget manufacturing. The SEI and the CMM/CMMI used the metaphors of manufacturing when they described software development. We emphasized process before (remember structured design and CASE tools?), but now—wow—process was everything.

Software product development is nothing like manufacturing. It’s a lot more like the design and engineering of the widget, where we learn. We learn as we develop and test code.

That’s the point of agile. We can incorporate learning, to improve and better the product as we proceed.

If you measure people as if they are widgets, they will behave like widgets. If you measure people as individuals, they may well behave as individuals, maximizing their own returns.

When a team uses agile, they create working product. Look for and measure that. When a team uses agile, they work as a team. Reward them as a team.

Categories: Project Management

Advice on How to Split Reporting User Stories

Mike Cohn's Blog - Tue, 03/08/2016 - 16:00

I've had a handful of emails lately about the difficulty of completing a complex reporting user story in a sprint.

These emails made the claim that perhaps reports were not something well suited for development with agile because some reports are complicated and take more than a sprint to develop.

I'd argue that the opposite is the case. When something will take a long time to develop, that's exactly when I want to use a process that forces me to get early and frequent feedback. This will counter many developers’ tendency (including my own) to retreat into a cave thinking we know what our users want. So in this blog post, I want to look at a few ways that a complicated reporting user story can be split. I think it's a useful example because its lessons can be applied to other types of agile work.

One way to split a complicated report is to completely finish the presentation of the report but to bring in only a subset of its data sources. The resulting report at the end of this initial sprint will show incomplete data. But a report built in this way can be useful for gathering feedback on the presentation of the data.

In building a report this way, I'll often select one of the easiest data sources first. There is often a great deal of work just to make that happen independent of any source-specific complexities. So I'll save the complex data sources for second or third sprints once I know the report itself is working properly.

A second approach is the exact opposite: Bring in all data sources but prepare an incomplete report. Perhaps only summary values are shown. Or perhaps the itemized rows are displayed but they are not summarized and totaled (whichever is easier). Or perhaps they are shown in detail and total form but only in one way.

For example, suppose we are creating a report showing information on revenue earned at a movie theater. In the initial sprint, the report may show revenue split and totaled by movie title. But it may not yet show the detail by day of week or by start time each day.

Each of these approaches works because it is true to the goal of being done with something at the end of the iteration. The entire user story is not done. (We are assuming this report is too big to be completed in a single iteration.) Yet, even when the full story cannot be delivered, the team delivers some portion of it to a fully working state. They do not content themselves with delivering a full but untested version.

Instead, they deliver a version that works but uses data from only one source. Or that works fully but only fills in one portion of the eventually complete report.
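A minimal sketch of that first splitting approach: the report's presentation is complete from sprint one, and each data source plugs in as its own later story. The report, source names, and figures are hypothetical:

    def render_revenue_report(sources):
        """Full presentation from the first sprint; each data source is a
        function returning (line item, revenue) rows, added one story at
        a time in later sprints."""
        rows = [row for source in sources for row in source()]
        for title, revenue in rows:
            print(f"{title:<20} {revenue:>10.2f}")
        print(f"{'TOTAL':<20} {sum(r for _, r in rows):>10.2f}")

    def box_office():  # the easiest source, built first
        return [("Matinee", 1200.00), ("Evening", 4800.00)]

    # Sprint one ships a working report with one real source; concessions
    # and online sales arrive as their own stories in later sprints.
    render_revenue_report([box_office])

The report is fully working at every step, so feedback on the presentation starts immediately even though the totals are known to be incomplete.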

And this is how successful agile teams approach all of the work, not just reporting user stories. Each iteration, the team fully develops some portion of the user story.

In the comments below, please let me know what you think of the approaches I’ve described. And let me know about other approaches you’ve taken to splitting large stories about reports or any other type of functionality.

How Agile Changes Testing, Part 3

In Part 1, I discussed how an agile approach changes testing. In Part 2, I discussed how the testers’ job changes. In this part, I’ll talk about expectations.

Since the developers and testers partner in agile, the testers describe their approach to testing as they work with developers on the code. (This is the same way as the developers describe their approach to development.) You might not need any test planning documents. At all.

If you have acceptance criteria on stories, the developers know what unit tests to write. The testers know what kind of system tests to write. All from acceptance criteria. You don’t need test case definition—the acceptance criteria define what the tests need to do.
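As a minimal sketch of acceptance criteria standing in for test case definitions - a hypothetical story, a toy implementation, and pytest-style tests named directly after the criteria:

    # Story: "As a member, I can request a password reset from the login page."
    # Acceptance criteria (hypothetical):
    #   1. a known address gets a reset link
    #   2. an unknown address gets no link but sees the same message

    def request_reset(email, known_emails):
        """Toy stand-in for the real feature under development."""
        message = "If that address exists, a reset link is on its way."
        return email in known_emails, message

    def test_known_address_gets_link():           # criterion 1
        sent, msg = request_reset("ada@example.com", {"ada@example.com"})
        assert sent and "reset link" in msg

    def test_unknown_address_reveals_nothing():   # criterion 2
        sent, msg = request_reset("eve@example.com", {"ada@example.com"})
        assert not sent and "reset link" in msg

The criteria tell the developer which unit tests to write and the tester which system tests to write; the test names themselves are the traceability.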

What if your customer wants test documents? Show them working software.

What if your customer wants to see traceability? (If you are in a regulated environment, you might have this requirement.) Show them how the user stories encapsulate the unit tests and system tests.

What if your customer wants to know you are testing what the developers write? Well, I want to know the answer to this question: What else would you test?

If the developers have moved on to a new feature and you are not done, you need a kanban board to show the workflow. Your team might have too much work in progress queued for the testers. I see this in teams with six developers and one lonely tester. (That lonely tester is often time-space removed from the developers.)

Separating developers from testers and having developers run fast on coding provides the illusion of progress. In fact, in agile, you want to measure throughput, not what any given person has completed. (See my posts on resource vs. flow efficiency.)

Maybe you found a bunch of problems in a feature the developers thought was already done. If so, the developers should stop working on anything new, and work with you to fix this feature. If they don’t, they will context-switch and make more mistakes. Review your team’s definition of done.

If there is some other circumstance you encounter, let me know in the comments and I’ll update the post.

Remember, in agile, developers and testers work together to create finished features. Go back to Part 1 and look at the agile picture. You might need writers, DBAs, UX people, whatever. The team works together.

If you look for surrogate measures, such as test designs or test cases written, you will get surrogate measures. You will not get working software. That’s because people deliver the surrogate measures, not working software.

If you want working software, ask for working software. The team’s responsibility is to provide working software. What else would a customer need to see and why?

The closer the developers and testers work together, the faster the feedback to each other. The faster the customer can see working product. Isn’t that what we want?

In part 4, I’ll discuss how measurements about testers change.

Categories: Project Management

How Agile Changes Testing, Part 1

Last week, I attempted to have a Twitter conversation about agile and testing. I became frustrated because I need more than 140 characters to explain.

This is my general agile picture. For those of you who can’t see what I’m thinking, the idea is that a responsible person (often called a Product Owner) gathers the requirements for the project. The cross-functional product development team works on some of those features (which I like as small stories), and outputs small chunks of working product that have value to the customer. You might or might not have a customer with you. The Product Owner either is the customer or takes the customer role.

After some number of features complete, the team demonstrates the features and retrospects on the project and the team’s process. They get more features from the PO.

Now, let’s contrast agile with more traditional planning and testing.

If you used a phase-gate or a waterfall life cycle, you were accustomed to planning the entire project—including system test plans—at the beginning. You predicted what you needed to do.

If you were a tester, you often discovered that what you planned at the beginning was not what you needed by the time it was time for you to test. You might have been interrupted by the need to test current projects, with the promise you would return to this one. Testers were often at the end and possibly delayed by not-quite-finished work in flight.

If you could work on this project, you wrote a system test plan. In addition, you might have written system test specifications, designed test cases, and maybe even documented test cases. Why? So you wouldn’t forget what to do. And, your customer (management, maybe a real customer) would want to know what you had been doing all along and what they could expect. You planned for the future.

In agile, we don’t do a ton of upfront planning. I like to plan for no more than a couple of days, including generating the initial backlog for a cross-functional team to work on. Yes, two days. More planning might easily be waste. What might you do in those two days?

  • Write a project charter as a team, which includes the project vision and release criteria.
  • Generate a story map or product backlog for the first release’s worth of stories. (If you read my roadmap posts, you know I like internal releases at least as often as every month. I prefer releasing every day. I’ll take visual working software at least once a month.)
  • If you are a new team, you might develop working agreements and a definition of done.

Do you need a system test plan? Maybe. I often discover that when we discuss the project charter, we might say something like, “We want to focus our testing in these areas,” or “These are the areas of high risk that we can see now.” My system test plan is still only one page. (See the Manage It templates for all my templates of what you might need in a project, including the system test plan.)

Now, you start the project. What happened to all that test planning and test plans and test cases? They go with the stories—if you need that level of planning. See Part 2.

Categories: Project Management

How Agile Changes Testing, Part 2

In Part 1, I discussed the project system of agile. In this part, I’ll discuss the need for testing documentation.

In a waterfall or phase-gate life cycle, we needed documentation because we might have had test developers and test executors. In addition, we might have had a long time separating the planning from the testing. We needed test documentation to remember what the heck we were supposed to do.

In agile, we have just-in-time requirements. We implement—and test—those requirements soon after we write them. When I say “soon,” I mean “explain the requirements just before you start the feature” in kanban or “explain the requirements just before you start an iteration” if you timebox your feature development. That’s anywhere from hours to a few days before you start the feature. We have a cross-functional team who work together. They provide feedback in the moment about the features in development and test. The most adaptable teams are able to work together to move a feature across the board.

In waterfall or phase-gate, we might have had to show the test documentation to a customer (of some sort). In agile, we have working software (deliverables). What you see and measure changes in agile.

In waterfall or phase-gate, people often defined requirements as functional and non-functional. I bet you have read, “The system shall” until you fell asleep. In agile, we often use user stories—or something that looks like them—to define requirements via a persona, a specific user. We know who we are implementing this feature for, why they want it, and the value they expect to receive from the feature.

You’ve noticed I keep saying, “implementing” in these posts. That’s because for me, and for many agile teams, implementing means writing code-and-tests that might ship and code-and-tests that stay. The developers write the code and tests that ship. The testers write code and tests that stay.

The developers and testers are part of a product development team. They may well be attached to their roles. For example, when I am a dev, I don’t test my work nearly as well as I test other people’s work. Not all devs want to write system tests. Not all testers want to write product code. That’s okay. People can contribute in many ways.

The key is that developers and testers work together to create features. They need each other. What does that mean for test planning?

  • You might need something about the test strategy in the project charter. I often recommend that the team think about scenarios they want to test every day as they build.
  • You might need guidance for the testers: “We use this pre-configured database for this kind of testing.” Or, “We test performance once a day with pre-specified scenarios,” or whatever you need as team norms.
  • You do not need to separate the test planning from the testing. You might decide to automate tests you will use over and over. You might decide to explore/use manual testing for infrequent tests. This is a huge discussion that I will not delve into in this series. Make a conscious decision about automation and exploration.

I have yet to see a humongous document of test cases be useful in a waterfall team. That’s because we learn as we develop the software. The document is outdated as soon as the requirements change even a little. If you need to document (for traceability) test cases, it’s easy to document as you write and test features.

This changes the testers’ job. Testers partner with developers. They test, regardless of whether the code exists. They can write automated tests (in code or in pseudo-English) in advance of having code to test. They provide feedback to developers as soon as the test passes or fails.

When I was a tester, I checked in my code and told the developers where it was so they could run my code. I then explored the product and found nasty hairy defects. That was my job. It wasn’t my job to withhold information from the developers.

The testers’ job changes from judging the quality of the product to providing information about the product. I think of it as a little game: how fast can I provide useful feedback when I test?

This game means that as a tester, I will automate test cases I need to run more than once. I will automate anything that bores me (text input, for example). I will work with developers so they create hooks into the code for me to make my test automation easier. I have this conversation when we discuss the requirements.

Just as the stories and the code describe the stories, my tests describe my test cases.

In agile, we use deliverables—especially in the form of running tested features—to see progress throughout the project. Test cases lose their value as a deliverable.

In part 3, I’ll talk about what customers or managers need to see and when.

Categories: Project Management

Releases and Deadlines in Agile

Herding Cats - Glen Alleman - Mon, 03/07/2016 - 18:08

Had a Twitter conversation today about deadlines and how to remove the stigma of a deadline and replace it with a more Politically Correct term. Like most Twitter conversations there is a kernel of truth in there somewhere that sparks a thought that turns into a Blog post. This was one of those.

In our Software Intensive System of Systems world, we are not 5 people sitting around the table with the customer building a warehouse management application for our privately held gadget-making company. We work on large-scale, mission critical systems - critical to the Enterprise, or critical to the sovereign funding an Enterprise application, building a weapon, or accomplishing something that you'll read about in the paper. This is not to say those 5 developers sitting around the table with the Product Owner and the Scrum Master are not working on vitally important code. But Agile at Scale has a different paradigm than Agile at the Table.

One critical paradigm is the Product Roadmap and the resulting Release Plan. Release Plans come in two flavors.

  • Cadence Release - when a fixed period ends, go with what is ready to go. The Cadence Release paradigm is a flow-based approach. A predictive rhythm defines when the release will be released. The variability of the development work is minimized through the planned cadence. What is variable is what is in the Release. Focusing on meeting the needs of the customer or market is key to success in the Cadence approach. Releasing with less than the planned Capabilities reduces the ROI. These planned Capabilities are defined in the Product Roadmap. Success of the Cadence approach relies on:
    • Schedule margin to protect the deliverables.
    • A predictable - fixed, actually - wait time to receive the deliverables.
    • Small batch sizes for work, to help control variances in the work that fits inside the cadence.
  • Capabilities Release - a collection of needed business capabilities, ready to go when they are complete. This approach starts with a customer agreement that specifies what the product Capabilities must do in the context of the release plan. When those capabilities are ready, they are released. The timing of the release is dependent on the completion of the set of capabilities.

Capability Releases have variable delivery dates for the capabilities - Cadence Releases have fixed delivery dates for the capabilities.
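A minimal sketch of the two flavors as decision rules - fix the date and vary the content, or fix the content and vary the date. The capability names are illustrative:

    from dataclasses import dataclass

    @dataclass
    class Capability:
        name: str
        done: bool

    def cadence_release(backlog):
        """Cadence: the date is fixed, the content is variable -
        ship whatever is ready when the period ends."""
        return [c for c in backlog if c.done]

    def capabilities_release(backlog, needed):
        """Capabilities: the content is fixed, the date is variable -
        ship only when the whole agreed-upon set is complete."""
        ready = {c.name for c in backlog if c.done}
        if needed <= ready:
            return [c for c in backlog if c.name in needed]
        return None  # not ready: keep working, the date moves

    backlog = [Capability("invoicing", True), Capability("reporting", False)]
    print(cadence_release(backlog))                                   # ships invoicing now
    print(capabilities_release(backlog, {"invoicing", "reporting"}))  # None: wait

Either way the Product Roadmap supplies the needed set; the governance question is only which variable - date or content - the business has agreed to let move.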

What's the Schedule For Getting the Products in the Product Release Plan?

Customers have a fiduciary obligation to have some sense of when the work they are paying for will be completed, how much it will cost, and some assurance that what they are paying for will deliver the needed capabilities in exchange for that investment.

This is the basis of managerial finance and decision making in the presence of uncertainty - Microeconomics of Decision Making.

In both the Cadence and Capabilities Release Plans, the Product Roadmap speaks to what is planned to be released. The Release Plan is built during the Release Management process, which plans, manages, and governs the release of products resulting from the development effort. The owners of this governance process have the decision rights to determine what gets released.

The Capabilities are laid out in Cadence Releases in the chart below: the Sprints contain the Stories that implement the Features that produce the Capabilities, occurring on regular periods of performance.

The Product Roadmap connected to the Cadence Release Plan above shows what Capabilities are produced in what order for the business to meet its goals.


With the Product Backlog, the Product Roadmap, and the Cadence or Capabilities release plan, we've got all we need to define what Done looks like. 

By Done I mean the Capabilities needed to fulfill the business case or accomplish the mission delivered by the project. It doesn't mean requirements, it doesn't mean tasks, it doesn't mean the details of the work. But if you don't know what Done looks like in units of measure meaningful to the decision makers, the only other measure is we ran out of time and money. This has been shown through extensive research to be in the Top 5 Root Causes of project failure. The other 4 include Bad Estimates for the cost and schedule to reach Done.

And by the way, this use of Done is NOT the Definition of Done in Scrum. That's a term used to state what processes have been applied to the work - a list of criteria which must be met before a product increment is considered complete. It's a developer's definition of Done. A critical activity for sure, but still far removed from the Mission Accomplished definition of Done. Both are needed, both are Critical Success Factors, but at the business management level the developer's DoD is just the start of recognizing that the project has fulfilled the business case or accomplished the mission.

Done Done and Contrastive Reduplication

While I was the VP of Program Management at a nuclear weapons cleanup program, one of the Project Managers working for me introduced an idea. When we talked about being Done, he chimed in and said: no Glen, that's not the question. We need to know when we are Done Done. The term Done Done is an example of contrastive focus reduplication, and it is very useful in the Agile world, where the notion of the Definition of Done is NOT based on a formal technical specification, but on customer approval that the results will deliver the needed Capabilities.

Related articles:

  • What is a Team?
  • Making Decisions In The Presence of Uncertainty
  • Quote of the Day
  • GAO Reports on ACA Site
  • We're All Looking for the Simple Fix - There Isn't One
  • Scaling Agility - building an agile foundation
  • Process is King
  • New to the Project Management Bookshelf
Categories: Project Management

Quote of the Month March 2016

From the Editor of Methods & Tools - Mon, 03/07/2016 - 14:46
Large organizations present serious challenges, but for the manager of such an organization they typically revolve around managing managers, not just programmers. Source: Managing the Unmanageable, Mickey W. Mantle & Ron Lichty, Addison-Wesley
