Software Development Blogs: Programming, Software Testing, Agile, Project Management


Process Management

SPaMCAST 387 – Storytelling As A Tool, Critical Roles, QA Career Path

Software Process and Measurement Cast - Sun, 03/27/2016 - 22:00

The Software Process and Measurement Cast 387 includes three features.  The first is our essay on storytelling.  Storytelling is a tool that is useful in many scenarios: for giving presentations, for helping people frame their thoughts, and for gathering information. A story provides both a deeper and more nuanced connection with information than most lists of PowerPoint bullets or even structured requirements documents. The essay provides an excellent supplement to our interview with Jason Little (which you can listen to here).

The second feature this week is Steve Tendon discussing Chapter 9 of Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban, published by J. Ross Publishing. Chapter 9 is titled "Critical Roles, Leadership and More".  We discuss why leadership roles are important to achieving hyper-productive performance. In Agile and other approaches, it is easy to overlook the role of leaders outside of the team.

Remember, Steve has a great offer for SPaMCAST listeners. Check out https://tameflow.com/spamcast for a way to get Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach, and Its Application to Scrum and Kanban at 40% off the list price.

Anchoring the cast this week is a visit to the QA Corner.  Jeremy Berriault discusses whether a testing career, and the path it might take, is an individual or a team sport.  Jeremy dispenses useful advice even if you are not involved in testing.

Re-Read Saturday News

This week we are back with Chapter 14 of How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog.  Chapter 14 is titled "A Universal Measurement Method".  In this chapter, Hubbard provides readers with a process for applying Applied Information Economics.

We will read Commitment – Novel About Managing Project Risk by Olav Maassen and Chris Matts for our next Re-Read.  Buy your copy today and start reading (use the link to support the podcast). In the meantime, vote in our poll for the book after that.  As in past polls, please vote twice or suggest a write-in candidate in the comments.  We will run the poll for two more weeks.

Upcoming Events

I will be at QAI Quest 2016 in Chicago from April 18th through April 22nd.  I will be teaching a full-day class on Agile Estimation on April 18th and presenting "Budgeting, Estimating, Planning and #NoEstimates: They ALL Make Sense for Agile Testing!" on Wednesday, April 20th.  Register now!

I will be speaking at the CMMI Institute’s Capability Counts 2016 Conference in Annapolis, Maryland May 10th and 11th. Register Now!

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with Dr. Mark Bojeun.  Dr. Bojeun returns to the podcast to discuss how a PMO can be a strategic tool for an organization.  If a PMO is merely a control point or an administrative function, its value and longevity are at risk.  Mark suggests that there is a better way.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

Compile-Time Evaluation in Scala with macros

Xebia Blog - Sun, 03/27/2016 - 15:20
Many 'compiled' languages used to have a strict separation between what happens at 'compile-time' and what happens at 'run-time'. This distinction is starting to fade: JIT compilation moves more of the compile phase to run-time, while conversely various kinds of optimizations do 'run-time' work at compile time. Powerful type systems allow the expression of things previously…

How To Measure Anything, Chapter 14: A Universal Measurement Method: Applied Information Economics

HTMA

How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition

Chapter 14 of How to Measure Anything: Finding the Value of "Intangibles" in Business, Third Edition is the last chapter in the book.  Next week I will spend a few moments reflecting on the value I have gotten from this re-read; however, the last chapter continues to deliver content, so let's not get ahead of ourselves.  This chapter shows us:

  • A process for applying Applied Information Economics (which I used recently), and that
  • AIE is applicable in nearly every scenario.

Hubbard introduced Applied Information Economics (AIE) in Chapter One (page 9 to be exact).  The methodology includes five steps:

  1. Define the decision. 
  2. Determine what you know. 
  3. Compute the value of additional information. 
  4. Measure where the information value is high. 
  5. Make a decision; act upon it.
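Step 3 is the pivotal one. For a simple binary decision, the Expected Value of Perfect Information (EVPI) reduces to the chance of being wrong times the cost of being wrong; a minimal sketch (the function name and dollar figures are illustrative assumptions, not Hubbard's numbers):

```python
def evpi(p_wrong: float, cost_of_being_wrong: float) -> float:
    """Expected Value of Perfect Information for a binary decision:
    the probability that the current best choice is wrong, times the
    loss incurred if it is. This caps what any measurement is worth."""
    return p_wrong * cost_of_being_wrong

# If there is a 20% chance the project fails and failure costs $500k,
# perfect information is worth at most $100k.
print(evpi(0.2, 500_000))  # 100000.0
```

Any measurement that costs more than this ceiling is, by step 4's logic, not worth performing.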

AIE is the centerpiece of How To Measure Anything. Chapter 14 brings all the pieces together into an overall process populated with procedures and techniques. Hubbard lays out the application of AIE in four phases (0–3).

Phase 0 is a preparation phase which includes identifying workshop participants, developing the first cut of the measurement questions, and then assigning the workshop participants pre-reading (homework) based on those initial questions.  Maximizing the value of the workshops requires priming the participants with homework.  The homework makes sure everyone is prepared for the workshops so that time is not wasted coming up to speed. This also helps to reset any organizational anchoring bias.

Phase 1:  Hold workshop(s) for problem definition, building a decision model, and developing initial calibrated estimates. Calibration exercises help participants quantify the initial variables as a range at a 90% confidence interval, or as a probability distribution, rather than as a single number.
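A calibrated range can later feed a simulation once it is converted to distribution parameters. A minimal sketch of that conversion (the function name and figures are mine), using the rule of thumb that a 90% interval spans roughly 3.29 standard deviations of a normal distribution:

```python
import random

def ci90_to_normal(lower: float, upper: float):
    """Convert a calibrated 90% confidence interval into normal-distribution
    parameters; a 90% interval spans roughly 3.29 standard deviations."""
    mean = (lower + upper) / 2
    sd = (upper - lower) / 3.29
    return mean, sd

# An expert's calibrated 90% CI for annual maintenance cost: $100k-$200k.
mean, sd = ci90_to_normal(100_000, 200_000)
sample = random.gauss(mean, sd)  # one simulated value for a Monte Carlo run
```

A normal is only one reasonable choice; skewed quantities (costs, durations) are often better modeled with a lognormal fitted to the same interval.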

Phase 2: This phase focuses on analyzing the value of information, making the first cut at the measurement methods, refining the measurement methods, updating the decision model, and then re-running the value of information analysis to make sure we don't have to change the measurement approach. Hubbard points out (and my experience attests) that during this step you often determine that most variables have sufficient certainty, so the organization needs no further measurement beyond the calibrated estimate. This step ensures that the variables that move forward in the measurement process add value.

Phase 3: Run a Monte Carlo analysis to refine any of the metrics procedures needed, use the data to make the decisions identified, and generate a final report and presentation (Hubbard is a consultant, after all, thus a presentation).

The basic flow espoused by Hubbard is meant to cut through the standard rationalizations to find the real questions, then to determine how to answer those questions using measurement, with an emphasis on making sure the organization does not already have the data needed and on getting only the data that makes economic sense. The process sounds simple; however, as a practitioner, the problem I have observed is often that generating the initial involvement is difficult and that participants often have pet theories that are difficult to disarm.  For example, I once ran across an executive who was firmly convinced that having his software development teams work longer hours would increase productivity (he forgot that productivity equals output divided by input). Therefore, he wanted to measure which monitoring applications would make his developers work more hours.  It took several examples to retrain him to recognize that to increase productivity, he had to increase output (functionality) more than he increased input (effort). The process described by Hubbard is extremely useful, but remember that making it work requires both math and facilitation skills.
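Phases 1 through 3 can be strung together in a toy simulation. The sketch below (all numbers and names are illustrative assumptions, not Hubbard's) fits normal distributions to two calibrated 90% intervals and estimates the probability that a decision loses money:

```python
import random

random.seed(7)  # fixed seed so the run is repeatable

def p_loss_estimate(n: int = 10_000) -> float:
    """Monte Carlo sketch of a simple decision model: benefit and cost are
    each drawn from a normal fitted to a calibrated 90% CI, using the rule
    that a 90% interval spans about 3.29 standard deviations."""
    benefit_mean, benefit_sd = 150_000, (200_000 - 100_000) / 3.29
    cost_mean, cost_sd = 120_000, (140_000 - 100_000) / 3.29
    losses = sum(
        1
        for _ in range(n)
        if random.gauss(benefit_mean, benefit_sd)
        - random.gauss(cost_mean, cost_sd) < 0
    )
    return losses / n

p_loss = p_loss_estimate()
```

If the resulting loss probability is high enough to change the decision, that is exactly the variable Phase 2 says deserves further measurement.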

The remainder of the chapter provides examples that show the concepts in the book in action.  The cases cover a wide range of scenarios, from improving logistics (forecasting fuel needs for the Marine Corps) to measuring the value of a department.  Each case provides a lesson for the reader; however, three messages make my bottom line:

  • While some say that the data is too hard to get, it usually isn't.
  • Reducing uncertainty often requires only one or a few measures.
  • Framing the question as a reduction in uncertainty means that almost anything is measurable.

These three bottom line lessons summarize the philosophy of How To Measure Anything. But like the process to apply this philosophy, the devil is in the details.

Past installments of the Re-Read Saturday of How To Measure Anything, Third Edition:

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?

Chapter 6: Quantifying Risk Through Modeling

Chapter 7: Quantifying The Value of Information

Chapter 8 The Transition: From What to Measure to How to Measure

Chapter 9: Sampling Reality: How Observing Some Things Tells Us about All Things

Chapter 10: Bayes: Adding To What You Know Now

Chapter 11: Preferences and Attitudes: The Softer Side of Measurement

Chapter 12: The Ultimate Measurement Instrument: Human Judges

Chapter 13: New Measurement Instruments for Management

We continue with the selection process for the next-ish book for Re-Read Saturday.  We will read Commitment – Novel About Managing Project Risk by Olav Maassen and Chris Matts next.  Buy your copy today and start reading (use the link to support the podcast).  Mr. Adams has suggested that we will blow through this book, so doing the poll now will save time in a few weeks!  As in past polls, please vote twice or suggest a write-in candidate in the comments.  We will run the poll for two more weeks.

Take Our Poll
Categories: Process Management


Common Sense Agile Scaling (Part 1: Intro)

Xebia Blog - Sat, 03/26/2016 - 10:05
Agile is all about running experiments and seeing what works for you. Inspect and adapt. Grow. This also applies to scaling your agile organization. There is no out-of-the-box scaling framework that will fit your organization perfectly and instantly. Experiment, and combine what you need from as many agile models and frameworks as…

Refactoring to Microservices - Introducing a Process Manager

Xebia Blog - Fri, 03/25/2016 - 14:15
A while ago I described the first part of our journey to refactor a monolith to microservices (see here). While this was a useful first step, a lot can be improved. I was inspired by Greg Young's course at Skills Matter, see CQRS/DDD course. Because I think it's useful to reflect on the steps you…

Basics: Difference Between Models, Frameworks, and Methodologies


Nesting Easter eggs show each layer of the process improvement architecture

One of my children owned a matryoshka (Russian nesting doll) that is now somewhere in our attic.  I was always struck by how one piece fit within the other and how getting the assembly out of order generated a mess. I think I learned more from the toy than my children did.  The matryoshka doll is a wonderful metaphor for models, frameworks, and methodologies. A model represents the outside shell into which a framework fits, followed by the next doll, representing a methodology, placed inside the framework. Like models, frameworks, and methodologies, each individual doll is unique, but all are related to the whole group of dolls.

 

Models, the outside layer of our doll, are abstractions that provide a rough definition of the practices and inter-relationships needed by an organization to deliver a product or service. Models are valuable if they are theoretically consistent, fit the real world, and have predictive power. All firms or groups use a model as a pattern to generate a structure.  For example, the hierarchical corporation is a common model for commercial organizations (visualize an organization chart). The Capability Maturity Model Integration – Development (CMMI-DEV) is a model leveraged in many software development organizations. The CMMI provides a core set of practices that organizations have typically needed to deliver software. The CMMI defines an outline of what needs to be done, but not how or in what order. The model that is chosen influences the frameworks and methodologies that will be leveraged by different layers of the organization.  For example, an organization using a lean start-up model for its corporate governance might not see the CMMI-DEV model as viable for its development organization (we will discuss this common misperception later on the blog).

Frameworks, the next inner layer in our process architecture matryoshka doll, provide the structure needed to implement a model (or some part of a model).  Shifting our operational metaphor for a moment to that of a skyscraper, the framework is the lattice-like frame that supports all of the components in the superstructure.  The Scaled Agile Framework (SAFe) is a framework which leverages other frameworks and methods as tools to deliver value. SAFe defines the flow of work from an organization's portfolio to the Agile teams that will develop and integrate the code. The framework leverages other frameworks and methodologies such as DevOps, Scrum, Kanban, and Extreme Programming.

Methodologies, nestled inside of frameworks, provide an approach to achieve a specific goal.  In software development, methodologies define (or impose) a disciplined set of processes so that developing software is more predictable and more efficient. The difference between Agile and heavier methodologies is the amount of structure that is defined. Agile methodologies provide only just enough structure to allow the methodology to embrace the principles stated in the Agile Manifesto. Extreme Programming is one such software development methodology.

Each layer of our process architecture matryoshka doll is a step closer to a core set of steps and tasks. The doll metaphor is not perfect.  Some small start-up organizations may not seem to need the structure of a framework or may even eschew a methodology until they grow. In an interview for the Software Process and Measurement Cast, Vinay Patankar, CEO of Process Street, said that as he and his partner began their start-up they used a code-and-fix approach, but growth forced the adoption of Agile as a framework combined with Scrum and Extreme Programming (at least in part) as methodologies. A model provides an environment in which to implement a framework. You would not implement SAFe inside a waterfall model. Methodologies are the tools that translate a framework into something actionable. Without one or more of the layers in our doll, what remains might make a better rattle than a tool to deliver software.


Categories: Process Management

Basics: The Hierarchy of Models, Methods, Frameworks, and More


Similar, but not the same.

Models, frameworks, methods, processes, procedures; the list goes on and on.  Whether we are discussing Agile or plan-based software development, words like methods, models, frameworks, and processes are often used.  The people that use these terms in polite conversation often assume or imply a hierarchy among them.  For example, a model might include one or more frameworks and be instantiated in several methods. Each layer in the hierarchy breaks down into one or more items at the next level. Words and their definitions are an important tool to help understand how all the pieces and parts fit together and how to interpret conversations about how software is developed and maintained, whether in the lunch room or in the hallways at conferences like Agile 2016. The unfortunate part is that few people agree on the hierarchy of models, methods, and frameworks.  These words are often used synonymously, sowing seeds of confusion and mass hysteria (ok, that might be a teeny tiny overstatement).

A proposed process hierarchy or architecture is as follows:

Components focused on defining "what" steps or tasks are needed to build or deliver a product. This level of component often encompasses many patterns of work. For example, the Scaled Agile Framework (SAFe) includes tracks for both software and systems engineering and includes optional paths for very large value streams as well as programs.  Each path is a different combination of lower-level components to deliver a specific type of product.

The outer levels of the hierarchy that define what needs to be done include:

  1. Model – Abstractions of complex ideas and concepts that we find useful to explain the world around us. The CMMI is a model.
  2. Framework – A logical structure for organizing a set of processes, procedures, techniques, and tools for delivering products. SAFe is a framework.
  3. Methodology – An approach (usually documented and often branded) for performing activities consistently and repeatably to achieve a particular goal. Extreme Programming, as defined by Kent Beck, is a methodology.

Components focused on defining "how" to do a specific task or group of tasks.  The inner levels of the hierarchy translate the "whats" so they can be accomplished on a repeatable basis.  They explain how to accomplish work.  As the "how" components are decomposed, they become more granular. A process, as defined in ISO, would be a group of steps to generate an outcome. For example, an organization might define a process for continuous integration which would include many procedures (nightly code builds and smoke testing might be two procedures). At the lowest level, we might include specific techniques that could be used when executing the procedure.

The inner levels of the hierarchy that define how to accomplish what needs to be done include:

  1. Process – A defined set of steps for delivering a specific outcome. The activity triggered by pushing the garage door opener is a process.
  2. Procedure – A defined set of actions conducted in a specific order to achieve a specific step (or subgroup of steps) within a process. The set of instructions packed in the Ikea box for assembling the table in my living room is a procedure.
  3. Technique – A way of accomplishing a specific task or step in a procedure. The three questions are a technique for holding a stand-up meeting.
  4. Template – A pattern used to provide guidance or standardization for a specific deliverable.  The famous "persona-goal-benefit" format is a template for user stories.
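The template level is the easiest to make concrete. A trivial sketch of the "persona-goal-benefit" user-story template as a formatter (the function name is hypothetical):

```python
def user_story(persona: str, goal: str, benefit: str) -> str:
    """Render a user story from the persona-goal-benefit template."""
    return f"As a {persona}, I want {goal} so that {benefit}."

story = user_story("registered user", "to reset my password",
                   "I can regain access to my account")
print(story)
```

The point of a template is exactly this kind of enforced standardization: every deliverable produced from it has the same recognizable shape.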

When discussing the adoption of Agile or process improvement, the conversation invariably drifts into a discussion of the models, methodologies, and processes that the organization will leverage.  Often the first round of these discussions includes a lot of confusion because not everyone is using the same set of words, or at least not in the same way.  Just to make sure this confusion was not mine alone, I asked several people, including methodologists, developers, testers, and Scrum Masters, how they would structure the process architecture or hierarchy without providing definitional guidance.  The responses varied widely, although the "what" and the "how" components clustered together. Words matter because they affect how we understand and react when we hear them (assuming we are listening). Unless we have a common set of definitions, it is hard to have a valuable conversation.

Next articles in this theme include:

  1. The differences between models, frameworks, and methodologies
  2. The differences between processes, procedures, and techniques
  3. Process hierarchies: Is there a simpler way?

Categories: Process Management

Teams Don’t Need to Think of Everything During Sprint Planning

Mike Cohn's Blog - Tue, 03/22/2016 - 15:00

A couple of weeks ago, I participated in a painful sprint planning meeting. You might have been in the same type of meeting. The team was going to great lengths to identify every task they'd need to do in the sprint. And they were debating endlessly over the precise number of hours each task would take.

That level of detail is not necessary.

The purpose of sprint planning is to select a set of product backlog items to work on and have a rough idea of how to achieve them. Achieving this does not require the team to know every task they'll perform. And it certainly doesn't require the team to know if one of those tasks will take four hours instead of five hours.

The Answer Is Not “It Depends”

I’m frequently asked how long teams should spend in sprint planning. Rather than give some crappy answer like “It depends” or “Just enough that your sprint is well planned,” here’s a practical way to determine if you’re spending the right amount of time in sprint planning ...

From looking at planning meetings over many years and at teams I consider successful, my advice is that teams should identify about two-thirds of a sprint’s tasks during sprint planning. That means one-third of the tasks that will be done during the sprint will be left to be identified during the sprint.

An Example

Consider this as an example: At the end of a sprint, a team has finished 60 tasks that delivered some number of product backlog items. My recommendation is that about two-thirds of those tasks (40, in this case) should have been identified during the sprint planning meeting. The remaining one-third (20 tasks) should have been left to be discovered during the sprint.
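The arithmetic above generalizes to any sprint size; a one-function sketch (the function name is mine):

```python
def planning_split(total_tasks: int, planned_fraction: float = 2 / 3):
    """Split a sprint's final task count into the share that should have
    been identified during sprint planning vs. discovered mid-sprint,
    per the two-thirds rule of thumb."""
    planned = round(total_tasks * planned_fraction)
    discovered = total_tasks - planned
    return planned, discovered

print(planning_split(60))  # (40, 20)
```

Comparing the actual planned/discovered split at the end of each sprint against this target is a cheap retrospective check on whether planning is going too deep or too shallow.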

Sure, the ScrumMaster could have locked the door on the planning meeting and made the team think harder and longer, and they would have come up with perhaps another 10 tasks. But at what cost? It’s not worth it. The goal in sprint planning is to select a set of product backlog items to deliver during the sprint. The secondary goal is to get in and out of the meeting as quickly as possible.

If your team is accustomed to filling your sprint extremely full, you may need to back off of that a bit. Leave the sprint a bit less full. In Agile Estimating and Planning, I’ve referred to this as “unplanned time.”

The idea is that your team is going to plan a sprint by more quickly identifying all the big things they need to do. Some little tasks will remain unidentified. Some could be identified if the team thought harder and longer, but it’s not worth it. They’d never think of every task, anyway.

Get in. Get out. Get started.

Leave Space for Unplanned Work

Leave room in your sprint plan for the tasks the team hasn’t thought of. Leave room for the tasks they have thought of but that might get bigger. How much room? Take a guess. Next sprint, adjust that guess up or down and iterate to about the right amount over a few sprints.

Note that I am referring to the number of tasks, not the hours within the sprint. The tasks that a team does not think of during sprint planning will tend to be smaller tasks. No one forgets “program the thing.” They forget the smaller tasks associated with that.
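The guess-and-adjust loop described above can be sketched in a few lines. The starting share, step size, and function name below are illustrative assumptions, not figures or terms from the article:

```python
# Sketch of the "take a guess, then adjust each sprint" loop for the
# unplanned-task allowance. The numbers and step size are hypothetical.

def adjust_unplanned_share(current_share, planned_tasks, total_tasks_done,
                           step=0.05):
    """Nudge the reserved share of unplanned tasks toward what the
    last sprint actually needed.

    current_share    -- fraction of tasks we reserved for unplanned work
    planned_tasks    -- tasks identified during sprint planning
    total_tasks_done -- all tasks completed by sprint end
    """
    actual_share = 1 - planned_tasks / total_tasks_done
    if actual_share > current_share + step:
        return current_share + step   # we under-reserved; leave more room
    if actual_share < current_share - step:
        return current_share - step   # we over-reserved; plan a bit fuller
    return current_share              # close enough; keep the current guess

# Using the article's example: 40 planned tasks out of 60 done in total.
share = adjust_unplanned_share(0.25, planned_tasks=40, total_tasks_done=60)
print(round(share, 2))  # 0.3 -- the reserve moves up toward the observed 1/3
```

Iterated over a few sprints, the reserved share converges to roughly the one-third of tasks the article suggests leaving unplanned.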

What Do You Think?

In the comments below, please share your thoughts. What have you tried to shorten sprint planning meetings? What’s been successful? What hasn’t?

Load Testing, Communication and #NoProjects in Methods & Tools Spring 2016 issue

From the Editor of Methods & Tools - Mon, 03/21/2016 - 12:01
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Spring 2016 issue that discusses load testing scripts, communications in project teams, the Kano model for requirements and #NoProjects. * Eradicating Load Test Errors – Part 1: Correlation Errors * Breaking Bad – The Cult of […]

SPaMCAST 386 – Jason Little, Storytelling in Change Management

http://www.spamcast.net

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 386 features our interview with Jason Little. Jason and I discussed his exploration of storytelling in change management.  Stories are a powerful tool to develop and hone a big picture view of organizational change.

Jason began his career as a web developer when Cold Fusion roamed the earth. Over the following years, he moved into management, Agile Coaching and consulting. The bumps and bruises collected along the way brought him to the realization that helping organizations adopt Agile practices is less about the practices, and all about change.

In 2008, he attended an experiential learning conference about how people experience change, and since then he’s been writing and speaking all over the world about helping organizations discover more effective practices for managing organizational change. He is the author of Lean Change Management and an international speaker who has presented in Canada, the US, Finland, Germany, Australia, Belgium and more.

Contact Data:
http://www.agilecoach.ca/about/
http://ca.linkedin.com/in/jasonlittle/
http://www.twitter.com/jasonlittle

Re-Read Saturday News

This week we are back with Chapter 13 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter 13 we discuss New Measurement Instruments for Management. Hubbard shifts gears in this chapter to focus the reader on the new tools that our dynamic, electronically-tethered environment has created. Here is a summary of the chapter in a few bullet points:

 

  • Everyone creates data that is trackable and measurable.
  • The internet is a measurement instrument.
  • Prediction markets are a way to synthesize a wide variety of opinions.

 

It is time to begin the selection process for the next’ish book for the Re-Read Saturday. We will read Commitment – Novel About Managing Project Risk by Olav Maassen and Chris Matts first, based on the recommendation of Steven Adams, and then move to the next book. As in past polls, please vote twice or suggest a write-in candidate in the comments. We will run the poll for three weeks.

Upcoming Events

I will be at the QAI Quest 2016 in Chicago beginning April 18th through April 22nd.  I will be teaching a full day class on Agile Estimation on April 18 and presenting Budgeting, Estimating, Planning and #NoEstimates: They ALL Make Sense for Agile Testing! on Wednesday, April 20th.  Register now!

I will be speaking at the CMMI Institute’s Capability Counts 2016 Conference in Annapolis, Maryland May 10th and 11th. Register Now!

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on storytelling. In the Harvard Business Review article The Irresistible Power of Storytelling as a Strategic Business Tool by Harrison Monarth (March 11, 2014), Keith Quesenberry, a researcher from Johns Hopkins, notes “People are attracted to stories because we’re social creatures and we relate to other people.” The power of storytelling is that it helps us understand each other and develop empathy. Storytelling is a tool that is useful in many scenarios: for presentations, but also to help people frame their thoughts and to gather information. A story provides both a deeper and more nuanced connection with information than most lists of PowerPoint bullets or even structured requirements documents. The essay provides an excellent supplement to our interview with Jason Little.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


How To Measure Anything, Chapter 13: New Measurement Instruments for Management

How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

Chapter 13 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition is the second chapter in the final section of the book. Hubbard titled Chapter 13: New Measurement Instruments for Management. Hubbard shifts gears in this chapter to focus the reader on the new tools that our dynamic, electronically-tethered environment has created. Here is a summary of the chapter in a few bullet points:

  • Everyone creates data that is trackable and measurable.
  • The internet is a measurement instrument.
  • Prediction markets are a way to synthesize a wide variety of opinions.

One of the biggest takeaways of the post-Snowden era is that our technology provides businesses and governments with the ability to track us. All those options for tracking provide different mechanisms for data collection and measurement. The use of GPS is just one example: recently I attended a friend’s wife’s surprise birthday party. Her husband used the GPS on her phone to track her approach to the house so that everyone could hide at the appropriate moment (we also should have put the guacamole away before she got home). This is the heart of big data. We have no shortage of data to measure those items that we once thought of as intangible.

The internet is one of the most ubiquitous measurement instruments ever devised.  An interesting example of using internet behavior to measure real-world behavior is the use of Google searches prior to primary elections to determine how late-deciding voters are behaving. Bloomberg Business published an article on February 11 indicating that Google searches nailed the New Hampshire Primary. Nate Silver of FiveThirtyEight has also indicated he uses Google search data as an input into his models.  The internet is a data magnet and a measurement tool. Less Orwellian examples abound in the A/B testing field.  The article, 3 Real-Life Examples of Incredibly Successful A/B Tests, provides examples of using the internet as a measurement tool to reduce uncertainty.
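As a concrete illustration of how an A/B test reduces uncertainty, here is a standard two-proportion z-test. This is a textbook technique rather than anything from the article, and the conversion counts are invented:

```python
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two
    conversion rates, using the pooled-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; the p-value is the two-tailed area beyond |z|.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 156/2400 vs. A's 120/2400.
z, p = ab_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z={z:.2f}, p={p:.3f}")  # a small p-value reduces our uncertainty
                                # that the uplift is real rather than noise
```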

Prediction markets are a tool to harness the opinion of crowds; they aggregate the knowledge of a group of people. Most prediction markets use either exchange or betting mechanisms to attach a value to a decision. My two favorite prediction markets are BetFair (the UK betting website) and PredictWise. Both “markets” use different approaches to aggregate a wide range of opinions. Prediction markets provide a better forecast than almost any of the individual participants in the markets can deliver. Hubbard points out that in prediction markets prices seem to match the probability of occurrence. Involving money (BetFair, for example) incentivizes people to do research on their trades or bets.

Chapter 13 drives home the point that the evolution of technology provides a wide range of new possibilities to measure what once might have been thought of as intangible. Between tools like GPS, the internet and prediction markets, it is difficult to find anything that is truly intangible. For example, EA Games used an A/B test to determine, by measuring behavior, that direct pre-order promotional offers did not drive more business. Not all of the new measurement tools are accepted, as Hubbard points out in the sidebar on the DARPA “Terrorism Market” Affair. In this case, the prediction market was shut down before useful data was collected. None of these new measurement tools are magic, but they are often useful for translating the intangible into the measurable.
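The prices-match-probability observation can be made concrete. On a betting exchange, decimal odds imply a probability of roughly 1/odds; summed across all outcomes the raw figures exceed 1 (the market's margin), so they are usually normalized. A sketch with invented odds:

```python
# Turn decimal betting odds into implied probabilities -- one common way
# to read a betting market as a forecast. The odds below are invented.

def implied_probabilities(decimal_odds):
    raw = [1 / o for o in decimal_odds]   # price -> raw probability
    total = sum(raw)                      # > 1.0 when a margin is baked in
    return [p / total for p in raw]       # normalize the margin away

# A hypothetical three-outcome market (e.g. home win / away win / draw).
probs = implied_probabilities([1.80, 2.20, 9.00])
print([round(p, 3) for p in probs])  # shortest-priced outcome is most likely
```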

Past installments of the Re-read Saturday of  How To Measure Anything, Third Edition

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?

Chapter 6: Quantifying Risk Through Modeling

Chapter 7: Quantifying The Value of Information

Chapter 8: The Transition: From What to Measure to How to Measure

Chapter 9: Sampling Reality: How Observing Some Things Tells Us about All Things

Chapter 10: Bayes: Adding To What You Know Now

Chapter 11: Preferences and Attitudes: The Softer Side of Measurement

Chapter 12: The Ultimate Measurement Instrument: Human Judges

It is time to begin the selection process for the next’ish book for the Re-Read Saturday.  We will read Commitment – Novel About Managing Project Risk by Olav Maassen and Chris Matts first, based on the recommendation of Steven Adams, and then move to the next book.  As in past polls, please vote twice or suggest a write-in candidate in the comments.  We will run the poll for three weeks.

Take Our Poll


Categories: Process Management

Beginning Agile: Obstacles to Effective Listening

Are you listening?

Listening is important for anyone involved in developing, enhancing and maintaining software. Teams that don’t listen well have difficulty identifying and refining needs and coordinating their work. Given the demonstrated criticality of listening, organizations and teams should work hard to facilitate improved listening.  Every development methodology and framework ever advanced has made a great effort to improve communication, of which listening is a key component. However, significant obstacles to effective listening remain. For example:

  1. Inattentive / Distracted Listening.  There is a common fallacy that multitasking is a 21st Century productivity tool.  Even if you can brush your teeth and text emails at the same time, multitasking while trying to listen means you are not focusing entirely on the speaker and the content, so you’re not really listening. Just don’t do it.
  2. Know-It-All Listening.  Believing you know more than anyone else, or that your opinion is more important than the speaker’s, erects a barrier between you and the speaker.  In this scenario, the listener is often looking for a reason to reject or attack the speaker’s position rather than listening to gain deeper knowledge and understanding.  If you really do know it all, the best course of action is to disengage, leave and find your own platform to pontificate from.
  3. Selective Listening. The listener is only listening for the ideas and concepts that fit their interests or provide support for their position.  Selective listening is often a reflection of a cognitive bias.  For example, when exercising the Engineering preference (recently identified in How To Measure Anything, Chapter 12: The Ultimate Measurement Instrument: Human Judges), listeners change their minds about information to provide a supporting rationalization for a decision.  My dog provides a perfect example of selective listening.  If I whisper the word “treat” under my breath, he immediately trots up to me.  On the other hand, when I tell him to “sit”, he sometimes looks at me as if I have lost my grip on sanity.  Note:  Instead of the dog, I could have pointed to an example involving my children or coworkers with equal ease.
  4. Listening to Identify the Next Question.  This is a particularly pernicious form of selective listening, most often practiced in meetings.  The listener is focusing not on information and concepts but rather on how they can challenge the speaker, either to burnish their image as a critical thinker or to discredit the speaker. The shift in focus reduces the listener’s ability to hear and interpret what the speaker is saying.
  5. Talking Over the Speaker.  This listening obstacle is the ultimate in dismissals of the speaker.  Instead of listening, the anti-listener has decided to not listen.  The act of talking over someone removes not only your own ability to hear, but also that of the people around you.  I recently participated in a class in which one of the participants got bored and decided to strike up a conversation about a different topic with a colleague during one of the lecture segments in the class.  The better behavior would have been to focus on the speaker or to leave.

There are many other obstacles to effective listening, but there are two simple solutions to most of them.  The first is the golden rule, paraphrased for listening: listen unto others as you would have them listen to you.  Focus on the speaker, avoid distractions and listen to the message.  Second, unless your goal is confrontation, if you don’t want to listen, go somewhere else.

 


Categories: Process Management

Writing Easy to Maintain Code

From the Editor of Methods & Tools - Wed, 03/16/2016 - 20:43
Wikimedia software developer and Software Craftsmanship advocate Jeroen De Dauw discusses how to maintain code. A significant amount of time is spent on reading code, sometimes more than on writing it. Jeroen asks questions like: how does elegant code tend to rot over time, and what can we do to make this clearer? In […]

Beginning Agile: Active, Passive and Inattentive Listening

Actively listening

Listening is important. Like reading, it is fundamental to almost every activity needed to build, enhance or maintain a product. Our complicated business environment impacts how we listen through the situations we face and because of the interruptions we invite.  Examining three listening styles: active, passive and inattentive listening, is useful to understand how we can affect engagement and information transfer based on how we listen.

Active listening is generally the most effective means of listening. However, it is not fully applicable to all listening scenarios.  A simple definition of active listening is focusing on the speaker(s) and providing verbal and non-verbal signs of feedback.  The feedback loop is important, as it shows the speaker that you are attentive and encourages them to continue.  Feedback can include smiling at the speaker, making eye contact, posture (such as leaning toward the speaker) and not paying attention to distractions. The listener’s role in active listening requires engagement.  The listener should take notes, ask questions to elicit clarifications, and provide positive verbal and non-verbal reinforcement. The verbal reinforcement allows the listener to reflect on and process the information, and is often accomplished by reflecting, paraphrasing or summarizing what they have heard.  I have heard these activities called responding to listen (responding to gain clarity on what has been heard). The listener paraphrases what they heard to confirm it and to encourage the speaker to continue.  When paraphrasing, the listener should use emotive words to build bridges.  If interaction with the speaker is not possible, one slightly less active technique is to mentally repeat the conversation as it occurs. Active listening forces the listener to pay attention to the speaker.  Active listeners pay attention, show they are listening, provide supportive feedback (reflect or seek feedback), defer judgment (don’t interrupt with counterarguments) and finally respond appropriately.

Passive listening occurs in almost every circumstance unless the listener is locked in the cone of silence.  Passive listeners hear the sounds that flow around them. This type of listening is mechanical and often involuntary.  Passive listening is typical when you are listening to a lecture, or to a podcast while you jog.  The structure of many business scenarios causes passive listening.  Several years ago I addressed 500 colleagues to build awareness of my group’s plans for process improvement (picture a lecture hall full of people at 8:30 AM, most of whom had not had coffee yet – it was not an engaged audience). The lecture format was designed for passive listening, even though we included a question and answer period at the end.  Knowing what I now know about listening, I would have designed a different approach that was more conducive to active listening, such as scheduling multiple small group meetings.

In scenarios that evoke passive listening, you can, as a listener, take a more active approach that will benefit your comprehension and support the speaker.  Consider mentally repeating the ideas and concepts as you are listening; this is a passive form of reflecting.  Lean into the speaker or presenter; leaning in signals to both the speaker and your subconscious that you are engaged.  Focus on the speaker, paying attention to the words, ideas and body language.  Silence your internal and external dialog to enhance your ability to focus.

Inattentive / distracted listening is one of the banes of modern listening.  Whether you are listening to a teleconference while driving or checking your email while listening to a partner, you are breaking the cardinal rule of all forms of listening. Inattention or distraction acts as a filter, shoving whatever is not top of mind (driving or email in the examples above) into the background.  Distracted listening is ineffective and only makes sense when you do not care about what is happening in the background.

Active listening is very useful when you are interacting in small groups or in other scenarios where you can engage with the speaker.  Team meetings or one-on-one discussions with your product owner are great examples of where active listening techniques are valuable. Sitting in an auditorium listening to your CEO describe this quarter’s results is an example of where passive listening makes sense.  Driving the car with the radio on is a scenario where inattentive listening is perfect.  However, while driving with the radio on might provide a great background, I am fairly certain listening in your team’s stand-up meeting requires a more active approach.


Categories: Process Management

SPaMCAST 385 – Agile Portfolio Metrics, Why Diversity, Fast Is Not Enough

 

http://www.spamcast.net

Listen Now

Subscribe on iTunes

The Software Process and Measurement Cast 385 features our essay on Agile portfolio metrics.  Agile portfolio metrics are integral to prioritization and validating the flow of work. But, Agile portfolio metrics are only useful if they provide value. Metrics and measures add value if they reduce uncertainty so that we can make better decisions.

In the second segment, Kim Pries, the Software Sensei, asks the question, “Why should we care about diversity?” No spoilers here, but the answer might have something to do with value!

Anchoring the cast, Gene Hughson discusses Architecture and OODA Loops: Fast Is Not Enough from his blog Form Follows Function!  For those of you that don’t remember, OODA  stands for observe, orient, decide, and act.

Re-Read Saturday News

This week we are back with Chapter 12 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter 12 we discussed The Ultimate Measurement Instrument: Human Judges.  Humans can be a valuable measurement tool; however, that value requires using techniques to correct for the errors that are common in unaided human judgment.

 

Upcoming Events

I am facilitating the CMMI Capability Challenge. This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea which will be presented at the CMMI Institute’s Capability Counts 2016 conference.  The next CMMI Capability Challenge session will be held on March 15th at 1 PM EST.

http://cmmiinstitute.com/conferences#thecapabilitychallenge

 

I will be at the QAI Quest 2016 in Chicago beginning April 18th through April 22nd.  I will be teaching a full day class on Agile Estimation on April 18 and presenting Budgeting, Estimating, Planning and #NoEstimates: They ALL Make Sense for Agile Testing! on Wednesday, April 20th.  Register now!

 

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with Jason Little. Jason and I discussed his exploration of the use of storytelling in change management.  Stories are a powerful tool to develop and hone a big picture view of organizational change.

 

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


How To Measure Anything, Chapter 12: The Ultimate Measurement Instrument: Human Judges

How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition


Chapter 12 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition is the second chapter in the final section of the book.  Hubbard titled Chapter 12 The Ultimate Measurement Instrument: Human Judges.  The majority of HTMA has focused on different statistical tools and techniques.  This chapter examines the human as a measurement tool.  Here is a summary of the chapter in a few bullet points:

  • Expert judgement is often impacted by cognitive biases.
  • Improve unaided expert judgment by using simple (sic) statistical techniques.
  • Above all else, don’t use a method that adds more error to the initial estimate.

Hubbard begins the chapter by pointing out that the human mind has some remarkable advantages over the typical mechanical instrument. It has a unique ability to assess complex situations, but . . . the human mind often falls prey to a long list of common human biases and fallacies that generate error! If we want to use the human mind as a measurement instrument (and every shred of evidence is that we will), we need to develop techniques to exploit its strengths while adjusting for the mind’s weaknesses.

The rationale we humans use for many decisions is, in Hubbard’s words, “weird.” Cognitive biases hamper the decision-making process.  Cognitive biases are patterns of behavior that reflect a deviation in judgment that occurs under particular situations.  Biases affect how people perceive information, how teams and individuals behave, and even our perception of ourselves. Hubbard identifies a few biases that affect how we interpret and how we make decisions. They include:

  • Anchor bias¬†refers to the tendency to rely heavily on one piece of information when making decisions.¬† ¬†This type of bias is often seen in early estimates for a project or tasks.
  • Halo/horns effect¬†¬†is the tendency for either positive or negative traits of an individual to overwhelm the perception of other traits by those around him or her.
  • Bandwagon bias¬†(bandwagon effect) occurs when there is a tendency to adopt an idea (or to do something) because an external group or crowd believes the same thing.
  • Engineering preference is a form of bias that comes into play once a decision has been made. Respondents will actually change their mind about information to provide a supporting¬†rationalization for the¬†decision.¬† They fit the decision to the facts generating more support for the decision. This bias holds true even for people that did not originally support the decision.¬†
  • The illusion of learning occurs when we believe that our judgment must be getting better with experience and time. This is a common bias seen in Agile teams as they estimate stories and accept work into sprints if there is no feedback loop comparing estimates to actuals.

As we have seen in past essays, these represent only a few of the biases that we exhibit or experience in our day-to-day lives. Biases affect how we interpret data, which data we use in decision making, and even how we support a decision once it is made. The saving grace is that we can account for biases in decision making if we are aware of them and use structured methods.

Structured decision models are generally better than unaided expert judgment; however, experts often fall prey to techniques that increase their confidence without improving their predictions. For example, feedback (test) loops can contribute to the illusion of learning, which makes it possible to increase an expert’s confidence without generating better outcomes. Sorting collected data in Excel is a simple organizing technique that might be useful and interesting, but it rarely improves the decision model.

Simple statistical techniques can increase the efficacy of decision and prediction models over unaided human performance. For example, a simple weighted average model can be an improvement on uncalibrated expert judgment. These models outperform human judgment because they smooth out the variability that bias introduces. A simple linear estimation model that multiplies each component’s size by a productivity rate and then sums the results is an example of a weighted average model that often outperforms expert judgment. Hubbard suggests that simple techniques can have an effect because people are starting from such a bad place.
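The linear estimation model described above takes only a few lines to sketch. The component names, sizes, and productivity rates below are invented for illustration and are not from the book:

```python
# A minimal sketch of a linear estimation model: multiply each component's
# size by a productivity rate, then sum the results.
# All names and numbers are hypothetical.

components = {
    # component -> (size in function points, hours per function point)
    "user interface": (30, 2.5),
    "services":       (45, 3.0),
    "database":       (20, 1.5),
}

estimate_hours = sum(size * rate for size, rate in components.values())
print(estimate_hours)  # 240.0
```

Because the weights (the productivity rates) are fixed, the model applies the same judgment to every component, smoothing out the inconsistency an unaided expert would introduce.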

Techniques that Hubbard suggests for improving expert performance include:

  • For simple weighted average comparison models (deciding which option is better than another – for example, comparing five houses based on size and value), Robyn Dawes recommends converting each weighted value to a z-score ((value − mean) / standard deviation). The conversion smooths out inadvertent weighting.
  • Rasch models are useful for analyzing categorized data (e.g., questionnaire responses). Most satisfaction questionnaires collect categorized data (fixed possible values). In the Rasch model, the probability of a specific response is a function of both the person responding (person parameter) and the question being answered (item parameter). Rasch models reflect the premise that if one measurement instrument “A” returns a value greater than “B,” then another measurement instrument should give the same ordering. Rasch models allow us to judge the outputs of different expert measurement models to determine whether the outcome is biased. A manager of a corporate PMO recently presented a model in which three panels of senior project managers interviewed and rated (using a standard questionnaire) several hundred project managers. A Rasch analysis in this circumstance would make sense to calibrate the responses.
  • The Lens Model, developed by Egon Brunswik, is another method to remove human inconsistencies. The process for developing a Lens Model begins by asking experts to make decisions, mapping those decisions to results (estimates compared to actuals), and then creating a regression model from the data. Brunswik found that the model performed better than any of the experts. The model removes the error of expert or judge inconsistency. Lens Models are useful for developing internal estimation models that synthesize the perspectives of multiple human experts. The Lens Model is also useful for avoiding the illusion of learning bias. (Note: Hubbard provides a great seven-step process for creating a Lens Model on page 322.)
  • Professionally, I have reviewed many estimation programs. In nearly every scenario, model-based estimates outperformed expert judgment when a program had more than one estimate. Research by Robyn Dawes cited in HTMA supports this observation.
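Dawes’s z-score conversion is simple to apply. The five weighted scores below are invented for illustration:

```python
from statistics import mean, pstdev

# Hypothetical weighted scores for five options (e.g., five houses).
scores = [180.0, 220.0, 195.0, 250.0, 205.0]

m, s = mean(scores), pstdev(scores)
z_scores = [(v - m) / s for v in scores]

# The z-scores are centered at 0 with unit spread, so no attribute
# inadvertently dominates because of its raw scale.
print([round(z, 2) for z in z_scores])
```

After the conversion, every attribute contributes on the same scale, which is exactly the inadvertent-weighting problem the conversion is meant to remove.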
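The simplest (dichotomous) form of the Rasch model can be written down directly; the parameter values below are illustrative, not from the book:

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability of a positive response for a person with ability theta
    on an item with difficulty b (dichotomous Rasch model)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A respondent whose ability exactly matches the item's difficulty
# has a 50% chance of a positive response.
print(rasch_probability(1.0, 1.0))  # 0.5
# Higher ability on the same item yields a higher probability.
print(rasch_probability(2.0, 1.0) > rasch_probability(0.5, 1.0))  # True
```

Fitting the person and item parameters to observed responses (for example, the PMO rating panels mentioned above) is what lets a Rasch analysis detect a panel that rates systematically harder or easier than the others.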
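A Lens Model in its simplest form is just a regression of past expert judgments against actual outcomes. The history below is invented, and the sketch uses the standard library’s `statistics.linear_regression` (Python 3.10+):

```python
from statistics import linear_regression

# Hypothetical history: an expert's past estimates and the actual results.
expert_estimates = [10.0, 20.0, 30.0, 40.0, 50.0]
actual_results   = [14.0, 22.0, 33.0, 41.0, 55.0]

# Fit a line mapping the expert's judgment to reality.
slope, intercept = linear_regression(expert_estimates, actual_results)

def lens_prediction(new_estimate: float) -> float:
    # Apply the fitted line instead of the raw judgment; the regression
    # smooths out the expert's inconsistency from decision to decision.
    return slope * new_estimate + intercept

print(round(lens_prediction(25.0), 2))  # 27.95
```

This one-predictor version only corrects the expert’s systematic bias; Hubbard’s seven-step process regresses outcomes against the underlying cues (inputs) the experts use, which is what removes judge inconsistency as well.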

Hubbard wraps up Chapter 12 by introducing “The Big Measurement Don’t – Above all else, don’t use a method that adds more error to the initial estimate.” If a measure does not reduce uncertainty, the measure does not have value. Defining measurement as a tool that reduces uncertainty makes measuring many different scenarios feasible (which might be why Hubbard chose the title). The definition might feel so squishy that anything could be a measure; however, requiring a measure to reduce uncertainty imposes hard constraints. Humans can be a valuable measurement tool; however, that value requires using techniques to correct for the errors common in unaided human judgment.

Past installments of the Re-read Saturday of How To Measure Anything, Third Edition:

Introduction

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?

Chapter 6: Quantifying Risk Through Modeling

Chapter 7: Quantifying The Value of Information

Chapter 8: The Transition: From What to Measure to How to Measure

Chapter 9: Sampling Reality: How Observing Some Things Tells Us about All Things

Chapter 10: Bayes: Adding To What You Know Now

Chapter 11: Preferences and Attitudes: The Softer Side of Measurement


Categories: Process Management

Beginning Agile: Types of Listening


Listening is a core competency for success in any walk of life. Listening is the combination of hearing and interpreting; failure in either part is a failure in listening. If you spend any time observing software development in all its variations, you will see examples of listening and examples of failures to listen. There are many different types of listening; each type is useful in different scenarios and is often practiced intuitively. Knowing that there are different types of listening, and how each can be applied, is useful for becoming a better listener. Many significant failures in software development are failures to use the right type of listening. Requirements defects (misinterpretation of a requirement or a failure to capture a user story) are almost always listening problems. We will tackle many of these when we consider listening anti-patterns. Examples of types of listening in IT include:

Basic types of listening:

Discriminative listening is when the listener interprets and assigns meaning to sounds rather than to words. In discriminative listening, the listener interprets the differences and nuances of sounds and body language, and is sensitive to attributes including rate, volume, pitch, and emphasis in speaking. This is the most basic form of listening, and we learn it early in life. Recognizing and interpreting accents is an example of discriminative listening.

Comprehensive listening is the interpretation of words and ideas. Comprehensive listening involves understanding the thoughts, ideas, and message. This type of listening requires that the listener understand the language and vocabulary. Comprehensive listening builds on discriminative listening; if you can’t understand the sounds, you will not be able to interpret the language. Mismatches in vocabulary can disrupt comprehension.

More specific types of listening that build on the basics:

Informational listening is a type of goal-based listening that requires the listener to interpret verbal and non-verbal cues in order to learn. Students in a lecture hall are often in informational listening mode (alternate modes might include critical thinking or sleeping). The listener is typically a less active participant in the listening process. One non-verbal signal that someone is in informational listening mode is that they are taking notes. In this form of listening, the listener focuses on understanding the speaker’s message, postponing critical thinking and processing until later. In the corporate environment, this type of listening is often used when listening to reports, briefings, and speeches. In a recent story development session in which an Agile team was interacting with a group of experts, I observed one person leading the questioning and probing while several other team members listened and took notes. The note takers were using informational listening.

Critical listening focuses on evaluating and analyzing information. This is a more active form of listening that includes evaluating and making judgments; the listener interacts with the information in order to make a judgment. When someone is trying to persuade a listener to adopt a technique, the listener is typically using critical listening. Almost all sales scenarios use critical listening. In the story development scenario alluded to earlier, the questioner was using critical listening to evaluate the answers and to plan the next question.

Therapeutic listening is a technique often used by Scrum Masters to help facilitate the team. Therapeutic listening is a form of active listening in which the listener helps the speaker draw out and understand their feelings and emotions. The goal is to help the speaker evaluate and resolve their own problems. I am not suggesting training Scrum Masters as therapists, but leaders often use therapeutic listening to facilitate the resolution of people problems rather than relying on more authoritarian techniques.

Once upon a time, I sat in a lecture hall for an ECON 101 class twice a week at 8 AM. The instructor had an accent that I had heard on my radio, but others had never heard. I was able to immediately meet the basic listening requirements and listen to learn. Others in the class struggled with comprehension (comprehensive listening) and, therefore, could not get into informational or critical listening modes. Many dropped the class (or slept through it). As communicators, we need to understand which mode of listening people are using so that we can deliver our message.


Categories: Process Management