Software Development Blogs: Programming, Software Testing, Agile, Project Management

2016 Reader Survey

Mike Cohn's Blog - Thu, 01/14/2016 - 16:00

I want to make sure I’m doing the best job I can addressing your needs and writing about the topics you’re interested in. And that means I need to know more about you. And so I’ve created a short survey. 

I’d really appreciate it if you’d take a few minutes to fill out the survey. By doing so, you’ll help me provide you with the best content I can.

There are only 10 questions, so you can finish in just a couple of minutes, and the results are completely anonymous.

Thanks in advance for your help.

Mapping biases to testing, Part 1: Introduction

Xebia Blog - Wed, 01/13/2016 - 12:40

We humans are weird. We think we can produce bug free software. We think we can plan projects. We think we can say “I’ve tested everything”. But how is that possible when we are governed by biases in our thinking? We simply cannot think about everything in advance, although we like to convince ourselves that we can (confirmation bias). 

In his book “Thinking, Fast and Slow”, Daniel Kahneman explains the most common thinking biases and fallacies. I loved the book so much I’ve read it twice, and I’ll tell anyone who wants to listen to read it too. For me it is the best book I have ever read on testing. That’s right: a book that by itself has nothing to do with testing taught me the most about it. Before I read the book I wasn’t aware of all the biases and fallacies that are out there. Sure, I noticed that projects always finished late and wondered why people were so big on planning when it never happened that way, but I didn’t know why people kept believing in their Excel sheets. In that sense, “Thinking, Fast and Slow” was a huge eye opener for me. There are lots of questions in the book that I answered incorrectly, proving that I’m just as gullible as the next person.


That scared me, because it is my job to ‘test all the things’, right? If my thinking is flawed, how can I possibly claim to be a good tester? I want to try to weed out as many of my thinking fallacies as I can. This is a journey that will never end, but I want to take you with me on it. The goal is, as always, to improve as a tester. Enjoy the learning process and explore. I feel the need to put a disclaimer here: this is not a scientific type of blog series. I will provide sources where I think they’re necessary, but the point of this series is to take you on a journey that is for the most part personal. I hope it will benefit you as well! My goal is mainly to inspire you to take a look inwards, at your own biases.

Before we continue I need to explain a few basic concepts: fast and slow thinking, heuristics, biases and fallacies. I will conclude this first post with a list of the biases and fallacies that I will cover in this series. This list can grow of course, based on the feedback I hopefully will receive.

Fast and slow thinking

This is a concept taken from Kahneman’s book. Fast thinking, called “System 1 thinking” in the book, is the thinking you do on autopilot. When you drive your car and you see something happening, you react in the blink of an eye. It’s also the thinking you do when you meet someone new: in a split second you have judged this person based on stereotypes. It just happens! It’s fast, automatic, instinctive, emotional. System 1 thinking is the reason we are thriving today as a species (it helped us escape from dangerous situations, for example).

On the other hand, there’s “System 2 thinking”. This is the type of thinking that takes effort; it’s slow. It’s deliberate. For example, you use system 2 when you have to calculate (in your head) the answer to 234 x 33 (as opposed to 2 x 3, which you do with System 1).

There is one huge problem: we make all kinds of mistakes when it comes to using these systems. Sometimes we use system 1 to analyse a problem when system 2 would be more appropriate. In the context of testing: when someone comes up to you and asks “is testing finished yet?”, you might be tempted to answer “yes” or “no”, even though this type of question cannot really be answered with a yes or no. If you want to be obnoxious you can say testing is never finished, but a more realistic conversation about this topic would, in my opinion, be based around risk.

Often, when people ask a seemingly simple or short question, such as "is testing finished yet?", they mean something different entirely. In my context, if the Product Owner were to ask me “is testing finished yet?”, it translates to: “do you think the quality of our product is good enough to be released? Did we build the thing right? Did we build the right thing? I value your advice in this matter, because I’m uncertain of it myself”. But if I happen to be in a foul mood, I might just say "yes", and that would be my system 1 answering.

Putting in the mental effort to understand that a simple question can actually be about something else, asking questions to find out what the other person truly means and crafting your answer to really help them, is hard work. Therefore, you have to spend your energy wisely.

Develop your system 1 and system 2 senses: when do you use which system? And then there’s the matter of choice. It would be silly to think you can always choose which system you use.

That brings us to heuristics.

Heuristics

Definition on Wikipedia: “A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.”

Heuristics are powerful, but you need to spend time reevaluating and/or adapting them once in a while, and for that you need system 2. Why do you need to reevaluate your heuristics? Because you are prone to fall for biases and fallacies.



We need to use heuristics, but they are based on system 1. When you are an experienced tester, you have a huge toolbox of heuristics that help you during testing. That’s a good thing, but it comes with a risk. You might start to trust your heuristic judgement a little too much, but you can't use a hammer for everything, right?

Bias and Fallacy, definition and meaning

A bias “is an inclination or outlook to present or hold a partial perspective, often accompanied by a refusal to consider the possible merits of alternative points of view.”

A fallacy “is the use of invalid or otherwise faulty reasoning, or "wrong moves" in the construction of an argument.”

Most of the thinking errors I will cover in this series are biases, but it is good to know the difference between a fallacy and a bias. A bias involves a mindset: you see something in a pre-conceived way, and it influences how you experience things. Stereotyping is a common example of a bias. A fallacy, on the other hand, is an untruth: a statement or belief built on faulty reasoning.

There is more than one type of bias, but in this blog series I will talk about cognitive biases, for which the definition is “[...] a repeating or basic misstep in thinking, assessing, recollecting, or other cognitive processes.”

Since testing is a profession that relies heavily on cognition and mental judgement, and we are only human, it’s no wonder that we make mistakes. You cannot get rid of all your biases; that would defy human nature. But in the context of testing it’s a great idea to challenge yourself: which biases and fallacies are actually hurting my testing activities?

However, you have to realise that biases can work to your advantage as well! Since it is part of our human nature to be biased, we should use that fact. With regards to testing that could mean: get more people to do testing. Every person brings his or her unique perspective (with biases) to the table and this will result in more information about the application under test.

What’s next

In this blog series I hope to shed some light on a number of biases and fallacies and what harm or good they can do in testing. I will cover the following biases, fallacies and effects:

If you have more input for biases or fallacies that you want to see covered, please leave a comment or leave me a tweet @Maaikees. In the meantime, you know which book you have to read!

Peer Review: An Overview

Clone review?

Quality is important.  If we embrace that ideal, it will influence many aspects of how software-centric products are developed, enhanced and maintained. Quality is an attribute of the product delivered to a customer and an output of the process used to deliver the product. Quality affects both customers and the users of a product. Quality can yield high customer satisfaction when it meets or exceeds expectations, or negatively shade a customer’s perception of the teams or organization that created the product when quality is below expectations. Quality can also impact the ability of any development organization to deliver quickly and efficiently. Capers Jones, in Applied Software Measurement, Third Edition, states, “An enterprise that succeeds in quality control will succeed in optimizing productivity, schedules, and customer satisfaction.” The Scaled Agile Framework (SAFe) has included “built-in quality” as one of their four core values, because, without built-in quality, teams will not be able to deliver software with the fastest sustainable lead-time.

Peer reviews are a critical tool to build quality into software before we have to try to test quality in or ask our customers to debug the software for us. Unfortunately, the concept of a peer review is often misunderstood or actively conflated with other forms of reviews and inspections in order to save time. A definition is needed. 

The TMMi defines peer review as a methodical examination of work products by peers to identify defects and areas where changes are needed. (TMMi Framework Release 1.0)

The CMMI defines peer review as the review of work products performed by peers during the development of work product to identify defects for removal. (CMMI for Development, Third Edition) 

Arguably it would be possible to find any number of other similar definitions; however, the core concepts of a composite definition would be:

Examination/review <of> work products <by> peers/colleagues <to remove> defects.

Talmon Ben-Cnaan, Quality Manager at AMDOCS, suggested an example that meets all criteria. “Code written by developer A and is reviewed by developer B from the same team. Or: A test book prepared by tester A and is presented to the entire team testers.” 

Peer reviews are an integral part of many different development frameworks and methods. They can be powerful tools to remove defects before they can impact production and to share knowledge with the team. As with all types of reviews and inspections, peer reviews are part of a class of verification and validation techniques called static techniques. These techniques are considered static because the system or application being built is not executed during the review. In peer reviews, people review the work product to find defects, and the “people” involved have the same or similar organizational status so that the goal does not shift from finding defects to hiding defects.

Next: 

  1. How are peer reviews different from gate reviews or sign-off reviews?
  2. Peer Reviews: Confusions and Conflations

Categories: Process Management

Profiling zsh shell scripts

Xebia Blog - Tue, 01/12/2016 - 09:27

With today's blazingly fast hardware, our capacity to "make things slow" continues to amaze me. For example, on my system, there is a noticeable delay between the moment a terminal window is opened, and the moment the command prompt actually shows up.

This post explores how we can quickly quantify the problem and pinpoint the main causes of the delay.

Quantifying the problem

Let's first see where the problem might be. A likely candidate is of course my ~/.zshrc, so I added 2 log statements: one at the top, one at the bottom:

date "+%s.%N"

This indeed showed my ~/.zshrc took about 300ms, enough to cause a noticeable delay.

Quick insight: zprof

The quickest way to get an idea of the culprit is the zprof module that comes with zsh. You simply add zmodload zsh/zprof to the top of your ~/.zshrc, and the zprof built-in command will show a gprof-like summary of the profiling data.

A notable difference between gprof and zprof is that gprof measures CPU time, whereas zprof measures wall-clock time.

This is fortunate: CPU time is the time a program was actually consuming CPU cycles, and excludes any time the program was for example waiting for I/O. It would be fairly useless to profile zsh in this way, because it probably spends most of its time waiting for invoked commands to return.

zprof provides fairly rich output, including information about the call hierarchy between functions. Unfortunately, it measures performance per function, so if your functions are long you’re still left wondering which line took so long.

Digging deeper: xtrace

An approach to profiling zsh scripts that will give per-line metrics is using xtrace. With xtrace, each command that zsh executes is also printed to standard error using a special prompt, which can be customized through the PS4 environment variable to include things like the current timestamp.

We can collect these detailed statistics by adding to the top of our ~/.zshrc:

PS4=$'\\\011%D{%s%6.}\011%x\011%I\011%N\011%e\011'
exec 3>&2 2>/tmp/zshstart.$$.log
setopt xtrace prompt_subst

And to the bottom:

unsetopt xtrace
exec 2>&3 3>&-

There are 2 problems with this detailed trace:

  • This approach will provide confusing output when there is any parallelism going on: trace messages from different threads of execution will simply get interleaved
  • It is an overwhelming amount of data that is hard to digest

When you're dealing with parallelism, perhaps you can first use zprof and then only xtrace the function you know is a bottleneck.

When you're overwhelmed by the amount of data, read on...

Visualizing: kcachegrind

If we assume there's no parallelism going on, we can visualize our zsh script profile using kcachegrind. This tool is intended to visualize the call graphs produced by valgrind's callgrind tool, but since the file format used is fairly simple we can write a small tool to convert our xtrace output.

Summary

zprof is the easiest and most reliable way to profile a zsh script.

With some custom glue, several existing tools (zsh's xtrace and kcachegrind) can be combined to achieve in-depth insight.

Applying this to shell startup time is of course a rather silly exercise - though I'm quite happy with the result: I went from ~300ms to ~70ms (now mostly spent on autocomplete features, which are worth it).

The main lesson: combining tools that were not originally intended to be used together can produce powerful results.


Social Side of Code, Database CI and REST API Testing in Methods & Tools Winter 2015 issue

From the Editor of Methods & Tools - Mon, 01/11/2016 - 14:29
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Winter 2015 issue that discusses the social side of code, database continuous integration and testing REST APIs. Methods & Tools Winter 2015 issue content: * Meet the Social Side of Your Codebase * Database Continuous Integration * […]


SPaMCAST 376 – Women In Tech, Microservices, Capabilities and More

Software Process and Measurement Cast - Sun, 01/10/2016 - 23:00

This week we are doing something special. Right after the New Year holiday, all of the regulars from the Software Process and Measurement Cast gathered virtually to discuss the topics we felt would be important in 2016.  The panel for the discussion was comprised of Jeremy Berriault (The QA Corner), Steve Tendon (The TameFlow Approach), Kim Pries (The Software Sensei), Gene Hughson (Form Follows Function) and myself. We had a lively discussion that included the topics of women in tech, microservices, capabilities, business/IT integration and a lot more. 

Help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player and then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News

We continue the re-read of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter Four, we focused on two questions. The first is getting the reader to answer what is the decision that measurement is supposed to support. The second is, what is the definition of the thing being measured in terms of observable consequences?

Upcoming Events

I am facilitating the CMMI Capability Challenge.  This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea which will be presented at the CMMI Institute’s Capability Counts 2016 conference.  The next CMMI Capability Challenge session will be held on January 12 at 1 PM EST. 

http://cmmiinstitute.com/conferences#thecapabilitychallenge

The Challenge will continue on February 17th at 11 AM.

In other events, I will give a webinar titled Discover The Quality of Your Testing Process on January 19, 2016, at 11:00 a.m. EST.

Organizations that seek to understand and improve their current testing capabilities can use the Test Maturity Model integration (TMMi) as a guide for best practices. The TMMi is the industry standard model of testing capabilities. Comparing your testing organization's performance to the model provides a gap analysis and outlines a path towards greater capabilities and efficiency. This webinar will walk attendees through a testing assessment that delivers a baseline of performance and a set of prioritized process improvements.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our essay on empathy. Coaching is a key tool to help individuals and teams reach peak performance. One of the key attributes of a good coach is empathy. Critical to understanding the role that empathy plays in coaching is understanding the definition of empathy. As a coach, if you can’t connect with those you are coaching, you will not succeed.

We will also have new columns from Kim Pries, The Software Sensei, and Gene Hughson, Form Follows Function.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

How To Measure Anything, Third Edition, Chapter 4: Clarifying the Measurement Problem


How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

Chapter 4 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition, is titled Clarifying the Measurement Problem. In this chapter, Hubbard focuses on two questions. The first gets the reader to answer: what is the decision that measurement is supposed to support? The second: what is the definition of the thing being measured in terms of observable consequences? These questions sound very basic; however, I found myself asking variations of them more than once recently when reviewing a relatively mature measurement program. Answering these questions is often at the heart of defining the real mission of any measure or measurement program and deciding whether a measure will have value.

Chapter 4 begins the second section of the book titled Before you Measure, in which Hubbard begins a deeper dive into his measurement approach initially identified in Chapter One.  The framework is:

  1. Define the decision. This step includes defining not only the dilemma we are attempting to solve with measurement, but all of the relevant variables in unambiguous terms. Chapter 4 focuses on this step.
  2. Determine what you know. This step is about determining the amount of uncertainty in the decision and measures defined in step 1.
  3. Compute the value of additional information. If the value of additional information is low then you are done (go to step 5).
  4. Measure where the information value is high. After collecting new data, repeat steps 2 and 3 until further measurement does not provide enough additional value.
  5. Make a decision; act upon it.

All measurement begins by defining the decision that will be made. The question you need to ask and answer is: what is the problem or dilemma for which a decision needs to be made? In order to truly answer the question of what decision will be made, you need to clearly articulate the specific action the measurement will inform in a clear and unambiguous fashion. Failure to correctly identify the purpose will lead to debates later when the ambiguities are exposed. For example, I recently listened to a debate on whether an organization’s productivity (defined as delivered functionality per staff month of effort) had increased. The debate had broken down into a fierce argument over what delivered functionality meant and whose effort would be included in the definition of a staff month. All of these ambiguities stemmed from a lack of finality on what decision the organization was trying to make with the measure, and therefore what needed to be measured.

Part of the definitional problem is a failure to understand the requirements needed to make a decision. Hubbard provides a set of criteria that need to be met in a decision-making scenario:

  • A decision requires two or more realistic alternatives.
  • A decision has uncertainty.
  • A decision has risk. Risk is the potential negative consequence if the wrong alternative is chosen.
  • A decision has a decision maker.

Failure to meet any one of these criteria means you are not facing a decision-making scenario.  For example, if you were deciding whether to go out for lunch and there were no restaurants, food trucks or hot dog carts, you would not really have a decision to make.  If there were only one restaurant, there again would be no uncertainty, and therefore no decision to be made.

Modeling a decision is a mechanism to lay bare any remaining ambiguity.  Any decision can be modeled.  Weighted shortest job first (WSJF), a prioritization technique (prioritization being a form of decision making), is often used to model which piece of work should be done first.  Models can be simple, such as a cost-benefit analysis, or complex, such as a computer simulation or Monte Carlo analysis. Hubbard suggests that every decision is modeled, even if the model is only expressed through intuition.
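As a small illustration of modeling a prioritization decision, WSJF can be sketched in a few lines. The job names and numbers below are invented for the example; the core of the model is simply the ratio of cost of delay to job duration:

```python
def wsjf(cost_of_delay, duration):
    """Weighted Shortest Job First score: higher means do it sooner."""
    return cost_of_delay / duration

# Hypothetical backlog: (name, cost of delay, duration), in arbitrary units.
backlog = [
    ("rewrite login page", 8, 8),
    ("fix checkout bug", 9, 3),
    ("new report page", 5, 5),
]

# Sort by descending WSJF score to get a prioritized order.
prioritized = sorted(backlog, key=lambda job: wsjf(job[1], job[2]), reverse=True)
```

Here "fix checkout bug" lands first: it has both a high cost of delay and a short duration, which is exactly the trade-off the model makes explicit.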

Decisions require risk and uncertainty. As with many other measurement concepts, understanding what risk and uncertainty means is critical to being able to measure anything. The chapter concludes with a discussion of the definition of uncertainty and risk.

Uncertainty is the lack of complete certainty about the outcome of a decision; it reflects that there exists more than one possible outcome.

Measurement of uncertainty is a set of probabilities assigned to the possible outcomes.  For example, there are two possibilities for the weather tomorrow: precipitation or no precipitation.  The measurement of uncertainty might be expressed as a 60% chance of rain (from which a 40% chance of no rain can be inferred).

Risk is the uncertainty that a loss or some other bad thing will occur.

Measurement of risk is a quantification of the set of possibilities that combines the probability of occurrence with the quantified impact of an outcome. For example, the risk of deciding to spend thirty dollars on perishable food for a picnic could be expressed as a 60% chance of rain tomorrow with a potential loss of $30 for the picnic lunch.
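The picnic example reduces to a one-line expected-value calculation. Using the numbers from the text:

```python
# Picnic example: 60% chance of rain, $30 of perishable food at risk.
p_rain = 0.60
loss_if_rain = 30.00

# The complementary probability is inferred rather than stated:
p_no_rain = 1 - p_rain  # 0.40, the "chance of no rain"

# Risk combines the probability of occurrence with the quantified impact.
expected_loss = p_rain * loss_if_rain  # $18.00
```

An expected loss of $18 gives the decision maker a number to weigh against the value of the picnic itself.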

Clarifying the measurement problem requires defining what we mean.  Definition begins with unambiguously defining the decision to be made.  Once we know the decision that needs to be made, we can define and measure uncertainty and risk for each of the possible outcomes.

Previous Installments in Re-read Saturday, How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

How To Measure Anything, Third Edition, Introduction

HTMA, Chapter 1

HTMA, Chapter 2

HTMA, Chapter 3


Categories: Process Management

Top Ten Blog Entries 2015


We continue showcasing our 2015 efforts with the ten most accessed blog articles written in 2015.  As with the recap of the ten most downloaded episodes of the Software Process and Measurement Cast, the blog entries that caught our readers’ eyes were widely varied.  I am thrilled that two entries from our Re-Read Saturday feature made the top ten list, because the Software Process and Measurement Blog tends to get its most traffic on weekdays, which often means that Saturday and Sunday posts are seen less. The inference is that the blog is being accessed primarily from work (we work very hard to keep the blog and the podcast safe for work for exactly that reason).

1. Budgeting, Estimation, Planning, #NoEstimates and the Agile Planning Onion

2. The Difference Between a Persona and an Actor

3. Re-read Saturday: Consolidating Gains and Producing More Change, John P. Kotter Chapter 9

4. The House of Lean, or Is That The House of Quality?

5. Prioritization: Weighted Shortest Job First

6. You Are Not Agile If . . .

7. Project Management Is Dead (Refined)

8. You Are Not Agile (If You Only Do Scrum)

9. Re-read Saturday: Anchoring New Approaches in The Culture, John P. Kotter Chapter 10

10. DevOps Primer: Definition

Almost every article was part of a two-to-four-article thread that allowed us to explore a topic in depth. I must note that we build a mind map for each thread, adding topics and ideas as we research and write the articles. In almost every case, as we wrap up a thread, we can conclude with the phrase, “to be continued.”

For those that like numbers, traffic on the blog was up by 16.7% between 2014 and 2015 even though we went from daily publishing to four times a week. We may experiment with publishing days later this year but the frequency feels just about right.

We are working on a two month rolling content calendar for 2016 and would be happy to have your input on each of the topics and on future topics. Planned threads for the remainder of January and February:

  1. Contrasting Peer Reviews and Gate Reviews
  2. Agile Acceptance Testing – Revisited and Refined
  3. Minimum Viable Product
  4. Leadership is Not a Task

Current Re-Read Saturday Book:

·How to Measure Anything – Douglas W. Hubbard

I will be polling for the next book in the series sometime in February. However, feel free to suggest books now (maybe even something from the list that our 2015 podcast interviewees suggested).

Thank you for your eyes, thoughts, ideas, comments, likes, tweets, and reblogs. We will work hard to continue to bring great content to you in 2016!


Categories: Process Management

Quote of the Month January 2016

From the Editor of Methods & Tools - Wed, 01/06/2016 - 16:19
Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don’t improve it. Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale […]

Software Process and Measurement Cast Year in Review: Top Podcasts of 2015!

http://www.spamcast.net

Many blogs and podcasts annually showcase their best columns or shows. We at the Software Process and Measurement Cast are happy to share the ten most downloaded episodes from 2015-ish (December 2014 through November 2015). Were these the ten best episodes? It is really hard to tell; I think all 52 podcasts provided great content, and there is not one of them I would not do again in a heartbeat.

The 10 most downloaded podcasts of 2015 were:

SPaMCAST 332 – Shirly Ronen-Harel, The Coaching Booster

SPaMCAST 323 – Five Factors Leading to Failing With Agile, Gene Hughson, Jo Ann Sweeney

SPaMCAST 338 – Stephen Parry, Adaptive Organizations, Lean and Agile Thinking

SPaMCAST 321 – 11 Reasons For Agile Success, Communication, and Cloud Development

SPaMCAST 357 – Mind Mapping, Who Needs Architects, TameFlow Chapter 4

SPaMCAST 334 – Mario Lucero, All About Agile Coaching

SPaMCAST 324 – Software Non-Functional Assessment Process, SNAP

SPaMCAST 354 – Allan Kelly, #NoProjects

SPaMCAST 351 – Distributed Agile, Illusion of Control, QA Corner

SPaMCAST 319 – Requirements, Communications, Fixing IT

We are nearing the end of the 9th year of podcasting essays, interviews, and columns. Every year I am amazed by what I learn from interviewees, columnists, and listeners. I hope you get as much from the podcast as I do. Based on my observations and conversations with a wide range of practitioners, I believe the use of Agile, Lean, and Kanban is still expanding and evolving. Older frameworks continue to either adapt and incorporate newer frameworks or fade in relevance, which means that in order to stay relevant all of us must continue to learn and adapt. I hope the podcast and the blog can help all of us stay on the front lines of leading change in the world of software development.

The Software Process and Measurement Cast crew will continue to scan the edges of accepted development methods to try to identify the next wave of change, and I can guarantee that there will be a next wave. However, more eyes are better than fewer: we need you to let us know what new trends you are seeing and which topics you would like us to explore, so that everyone who reads the blog or listens to the podcast can benefit.

One interesting avenue I thought to mine for new trends was the reading habits of the 20+ interviewees from this year.  I asked each of my interviewees, “What was the most important book you read in 2015?” I got a wide variety of answers, including:

  • Lisa Crispin and Janet Gregory’s More Agile Testing 
  • Jeff Patton’s User Story Mapping
  • Jason Little’s Lean Change Management
  • Boehm and Turner’s Balancing Agility and Discipline
  • Fred Brooks’ Mythical Man Month
  • Douglas W. Hubbard’s How to Measure Anything
  • The Bible
  • Jimmy Janlen’s Toolbox for the Agile Coach – Visualization Examples
  • McChrystal’s Team of Teams: New Rules of Engagement for a Complex World
  • Nassim Nicholas Taleb’s Antifragile
  • Ben Linders and Luis Gonçalves’s Getting Value out of Agile Retrospectives
  • Craig Larman’s Agile and Iterative Development – A manager’s guide
  • Sendhil Mullainathan’s Scarcity: The New Science of Having Less and How It Defines Our Lives
  • Walter Isaacson’s Steve Jobs
  • David McCullough’s The Wright Brothers
  • Tim Kelley’s True Purpose
  • Fredric Laloux’s Reinventing Organizations

While not everyone responded, it is safe to say that the list was varied and eclectic. A few of the authors on the list have been on the podcast (some more than once), and several of the books either are currently being read on Re-Read Saturday or have recently been completed. Quite a few are now on my to-read list, along with a mixture of science fiction.

Please reach out and say thank you to all of the interviewees and columnists who participated in the Software Process and Measurement Cast. They shared their knowledge, wisdom, and humor, making 2015 a wonderful learning event.


Categories: Process Management

Can a Traditional SRS Be Converted into User Stories?

Mike Cohn's Blog - Tue, 01/05/2016 - 16:00

A lot of traditionally managed projects begin with a Software Requirements Specification (SRS). Then sometime during the project, a decision is made to adopt an agile approach. And so a natural question is whether the SRS can serve as the newly agile project's product backlog. Some teams contemplate going so far as rewriting the SRS into a product backlog with user stories. Let's consider whether that's necessary.

But before taking up this question, I want to clarify what I mean by a Software Requirements Specification or SRS. I find this type of document to vary tremendously from company to company. In general, though, what I'm referring to is the typical document full of "The system shall ..." type statements.

But you can imagine any sort of traditional requirements document and my advice should still apply. This is especially the case for any document with numbered and nested requirements statements, regardless of whether each statement is really written as "the system shall ..."

Some Drawbacks to Using the SRS as a Product Backlog

On an agile product, the product backlog serves two purposes:

  • It serves as a repository for the work to be done
  • It facilitates prioritization of work

That is, the product backlog tells a team what needs to be done and can be used as a planning tool for sequencing the work. In contrast, a traditional SRS addresses only the issue of what is to be done on the project.

There is no attempt with an SRS to write requirements that can be implemented within a sprint or that are prioritized. Writing independent requirements is a minor goal at best, as shown by the hierarchical organization of most SRS documents, with their enumerated requirements such as 1.0, 1.1, 1.1.1, and so on.

These are not problems when an SRS is evaluated purely as a requirements document. But when the items within an SRS will also be used as product backlog items and prioritized, it creates problems.

With an SRS, it is often impossible for a team to develop requirement 1.1.1 without concurrently developing 1.1.2 and 1.1.5. These dependencies mean it is not as easy as picking one story at a time from a well-crafted product backlog.

Prioritizing sub-items on an SRS is also difficult, as is identifying a subset of features that creates a minimum viable product.
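The dependency problem above can be made concrete with a short sketch. The requirement IDs and the dependency map below are hypothetical, as is the helper function; the point is that selecting one SRS item for a sprint transitively drags its dependencies along:

```python
def pull_with_dependencies(req_id, depends_on):
    """Return every requirement that must ship alongside req_id,
    following the dependency map transitively."""
    needed, stack = set(), [req_id]
    while stack:
        current = stack.pop()
        if current not in needed:
            needed.add(current)
            # Anything this requirement depends on must come along too.
            stack.extend(depends_on.get(current, []))
    return needed

# Hypothetical SRS fragment: 1.1.1 cannot be built without 1.1.2,
# and 1.1.2 in turn needs 1.1.5.
depends_on = {"1.1.1": ["1.1.2"], "1.1.2": ["1.1.5"]}
print(sorted(pull_with_dependencies("1.1.1", depends_on)))
# → ['1.1.1', '1.1.2', '1.1.5']
```

A well-crafted product backlog, by contrast, aims for items whose dependency set is just the item itself.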

A Software Requirements Specification is good at being a requirements specification. It’s good at saying what a system or product is to do. (Of course, it misses out on all the agile aspects of emergent requirements, collaboration, discovery, and so on. I’m assuming these will still happen.)

But an SRS is not good for planning, prioritizing, and scheduling work. A product backlog serves both purposes on an agile project.

My Advice

In general, I do not recommend taking the time to rewrite a perfectly fine SRS. Rewriting the SRS into user stories and a nice product backlog could address the problems I’ve outlined. But, it is not usually worth the time required to rewrite an SRS into user stories.

Someone would have to spend time doing this, and usually that person could spend their time more profitably. I would be especially reluctant to rewrite an SRS if other teammates would be stalled in starting their own work while waiting for the rewritten product backlog.

A ScrumMaster or someone on the team will have to find ways of tracking progress against the SRS and making sure requirements within it don’t fall through the cracks. Usually enlisting help from QA in validating that everything in an SRS is done or listed in the product backlog is a smart move.

One additional important strategy would be educating those involved in creating SRS documents to consider doing so in a more agile-friendly manner for future projects. If you can do this, you’ll help your next project avoid the challenges created by a mismatch between agile and starting with an SRS document.

What Do You Think?

Please join the discussion and share your thoughts and experiences below. Have you worked on an agile project that started with a traditional SRS? How did it go? Would it have been different if the SRS had been rewritten as a more agile product backlog?

SPaMCAST 375 – Quality Essay, Estimating Testing, Discovery Driven Planning

Software Process and Measurement Cast - Sun, 01/03/2016 - 23:00

This week’s Software Process and Measurement Cast opens with our essay on quality and measuring quality. Software quality is a simple phrase that is sometimes difficult to define. In SPaMCAST 374, Jerry Weinberg defined software quality as value. In our essay, we see how others have tackled the subject and add our perspective.

Jeremy Berriault brings the QA Corner to the first SPaMCAST of 2016, discussing the sticky topic of estimating testing. Estimating has always been a hot button issue that only gets hotter when you add in testing.  Jeremy provides a number of pragmatic observations that can help reduce the heat the topic generates.

Wrapping up the cast, Steve Tendon discusses the topic of discovery driven planning from his book, Tame The Flow. Discovery driven planning is a set of ideas that recognizes that most decisions are made in situations that are full of uncertainty and complexity. We need new tools and mechanisms to avoid disaster.

Help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player and then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

 

Re-Read Saturday News

We continue the re-read of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter Three, Hubbard explores three misconceptions of measurement that lead people to believe they can’t measure something, three reasons why people think something shouldn’t be measured and four useful measurement assumptions.

Upcoming Events

I am facilitating the CMMI Capability Challenge.  This new competition showcases thought leaders who are building organizational capability and improving performance. The next CMMI Capability Challenge will be held on January 12 at 1 PM EST. 

http://cmmiinstitute.com/conferences#thecapabilitychallenge

The Challenge will continue on February 17th at 11 AM.

In other events, I will give a webinar titled Discover The Quality of Your Testing Process on January 19, 2016, at 11:00 am EST.


Organizations that seek to understand and improve their current testing capabilities can use the Test Maturity Model integration (TMMi) as a guide for best practices. The TMMi is the industry standard model of testing capabilities. Comparing your testing organization's performance to the model provides a gap analysis and outlines a path towards greater capabilities and efficiency. This webinar will walk attendees through a testing assessment that delivers a baseline of performance and a set of prioritized process improvements.   

Next week even more!  

Next SPaMCAST

The next Software Process and Measurement Cast is a panel discussion featuring all of the regulars from the Software Process and Measurement Cast, including Jeremy Berriault, Steve Tendon, Kim Pries, Gene Hughson and myself.  We prognosticated a bit on the topics that will motivate software development and process improvement in 2016.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


How To Measure Anything, Third Edition, Chapter 3, The Illusion of Intangibles: Why Immeasurables Aren’t

How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

Chapter 3 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition, is titled The Illusion of Intangibles: Why Immeasurables Aren’t.  In this chapter, Hubbard explores three misconceptions of measurement that lead people to believe they can’t measure something, three reasons why people think something should not be measured, and four useful measurement assumptions.

Hubbard begins the chapter with a discussion of the reasons people most commonly give for why something can’t be measured. The misconceptions fall into three categories:

  1. The concept of measurement. – The first misconception stems from not understanding the definition of measurement. Hubbard defines measurement as an observation that quantifiably reduces uncertainty. Most business decisions are based on imperfect information and are therefore made under uncertainty. Measurement is an activity that adds information that improves on prior knowledge. Like the bias that causes people to avoid tackling risks that can’t be reduced to zero, some people will avoid measurement if it can’t reduce uncertainty to zero. Measurement rarely, if ever, eliminates all uncertainty; however, a measurement that reduces enough uncertainty is often valuable. The need for measurement data that reduce uncertainty can be seen in portfolio management, where decisions about which projects will be funded are made even before all of the requirements are known.

    All types of attributes can be measured, regardless of whether they are qualitative (for example, team capabilities) or quantitative (for example, project cost). Another consideration when understanding measurement is that there are numerous measurement scales, including nominal, ordinal, interval, and ratio scales.  Each scale allows different statistical operations and can present different conceptual challenges. It is imperative to understand how each scale can be used and the mathematical operations that can be leveraged for each (we will explore these on the blog in the near future).

    Hubbard concludes this section with a discussion of two of his basic assumptions. The first is that there is a prior state of uncertainty that can be quantified or estimated. The second is that uncertainty is a feature of the observer, not necessarily of the thing being observed.  This is a basic argument of Bayesian statistics, in which both the initial uncertainty and the change in uncertainty are quantified.

    Bottom line: it is imperative to understand the definition of measurement, measurement scales, and Bayesian statistics so that you can apply the concepts of measurement to reducing uncertainty.

  2. The object of measurement. – The second misconception stems from the use of sloppy language or a failure to define what is being measured. In order to measure something, we must unambiguously state what the object of measurement means. For example, many organizations wish to understand the productivity of development and maintenance teams but construct neither a precise definition of the concept nor a statement of why they care, i.e. why the measure is valuable.

    Bottom line: decomposing what is going to be measured from vague to explicit should always be the first step in the measurement process.

  3. Methods of measurement. – The third misconception reflects not understanding what constitutes measurement. Processes and procedures are often constrained to direct measurement, such as counting the daily receipts at a retail store. Most of the difficult measurements in business (or a variety of other) environments must be made using indirect measurement. For example, in Chapter 2 Hubbard used the example of Eratosthenes’ measurement of the circumference of the earth. Eratosthenes used observations of shadows and the angle of the sun to indirectly determine the circumference. A direct measure would have required a really long tape measure (pretty close to impossible).

    A second topic related to this misconception is the thought that valuable measurement requires either measuring the whole population or a large sample. Studying whole populations is often impractical.  Hubbard shares the rule of five (a proper random sample of five observations) and the single sample majority rule, both of which can dramatically narrow the range of uncertainty.

    Bottom line: don’t rely on your intuition about sample size.  The natural tendency is to believe a large sample is needed to reduce uncertainty; therefore, many managers will decide that measurement is not possible because they are uncomfortable with sampling.
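The rule of five is easy to check with a short simulation. The sketch below (the function name and the skewed example population are mine, purely for illustration) estimates how often the true population median falls between the smallest and largest of five random observations; probability theory predicts 1 − 2 × 0.5⁵ = 93.75% for any distribution:

```python
import random

def rule_of_five_hit_rate(population, trials=10_000, seed=1):
    """Share of trials in which the population median falls between the
    min and max of a random sample of five observations."""
    rng = random.Random(seed)
    ordered = sorted(population)
    median = ordered[len(ordered) // 2]
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, 5)
        if min(sample) <= median <= max(sample):
            hits += 1
    return hits / trials

# A deliberately skewed, made-up population (e.g. task durations in minutes),
# to show the rule does not depend on the distribution being well behaved.
gen = random.Random(0)
population = [gen.expovariate(1 / 30) for _ in range(1001)]
print(f"hit rate: {rule_of_five_hit_rate(population):.3f}")  # close to 0.9375
```

Five observations already pin the median down with better than 93% confidence, which is why intuition about needing large samples is so often wrong.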

Even when it is possible to measure, the argument often turns to why you shouldn’t. Hubbard summarizes the “shouldn’ts” into three categories.

  1. Costs too much / economic objections. – Hubbard suggests that most variables that are measured yield little or no informational value; they do not reduce uncertainty in decisions. The value delivered by a measurement must outweigh the cost of collection and analysis (this is one of the reasons you should ask why you want any measure).

    Bottom line: calculate the value of information based on the uncertainty it reduces. Only variables with enough informational value justify deliberate measurement. Hubbard suggests (and I concur) that when someone says something is too expensive or too hard to measure, the question in return should be “compared to what?”

  2. Measures lack usefulness or meaningfulness. – It is often said that you can prove anything with statistics, as a reason to argue that measurement is not very meaningful. Hubbard points out that the statement “you can prove anything” is itself patently unprovable. What is really meant is that numbers can be used to confuse people, especially those without skills in statistics.

    Bottom line: investing in statistical knowledge is important for anyone who needs to make decisions and wants to outperform expert judgment.

  3. Ethical objections. – Measurement can be construed as dehumanizing. Reducing everything to numbers can be thought of as taking all human factors out of a decision; however, measurement does not suggest there are only black and white answers. Measurement increases information while reducing uncertainty.  Hubbard provides a great quote: “the preference for ignorance over even marginal reductions in ignorance is never moral.”

    Bottom line: information and the reduction of uncertainty are neither moral nor immoral.
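The economic objection is best answered with the value-of-information idea Hubbard builds on: the chance of making the wrong decision, times the cost of being wrong, bounds what any measurement of that variable is worth. A minimal sketch, with hypothetical probabilities and losses:

```python
def expected_opportunity_loss(p_wrong: float, cost_if_wrong: float) -> float:
    """Expected cost of deciding with current uncertainty: the chance the
    decision is wrong times the loss incurred if it is. A measurement that
    eliminated the uncertainty would be worth at most this amount."""
    return p_wrong * cost_if_wrong

# Hypothetical project approval: a 20% chance the "go" decision is wrong,
# and a $500,000 loss if it is.
budget_cap = expected_opportunity_loss(0.20, 500_000)
print(f"Spend at most ${budget_cap:,.0f} measuring this variable.")
```

If the proposed measurement costs more than that cap, "too expensive to measure" is the right answer; otherwise the comparison, not the cost alone, settles the question.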

The chapter is capped by four useful measurement assumptions that summarize its argument:

  1. Everything has been measured before. – Do your research before you start!
  2. You have more data than you think. – Consider both direct and indirect data.
  3. You need far fewer data points than you think. – Remember the rule of five.
  4. New observations are more accessible than you think. – Sample and use simple measures.

More than occasionally I have been told that measuring is meaningless because the project or piece of work being measured is unique and the past does not predict the future.  Interestingly, these same people yell the loudest when I suggest that if the past does not count, then team members can be considered fungible assets and traded in and out of a project.  Measurement and knowledge of the past almost always reduce uncertainty.

Previous Installments in Re-read Saturday, How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

How To Measure Anything, Third Edition, Introduction

HTMA, Chapter 1

HTMA, Chapter 2


Categories: Process Management

Setting Goals: A Simple Checklist

Your goals can be your roadmap.


We set goals to help channel our energy and filter out the less relevant demands on our time and attention.  Goals should reflect what is important to you now and be sticky enough to help keep you on track when things get tough. However, context and circumstances do change, and when they do, goals can change.  For example, one of my neighbors had set a goal of canoeing in the Canadian wilderness in all four seasons in 2015. The goal had to be replaced when his wife was diagnosed with cancer. He replaced the original goal with a new, more pertinent one. Given the importance of goals in directing our behavior, allocating our effort, and defending our attention, a simple checklist might be useful when creating resolutions or setting annual goals.

  • Spend time reflecting on what you want from the next year.
    Introspection is an important first step in setting goals that are not quickly overtaken by circumstances.
  • Consider at least one big hairy audacious goal (BHAG).
    Having a big hairy audacious goal is a great way to challenge your inhibitions and the status quo.
  • Make sure the BHAG is beyond the boundary of certain success and courts failure.
    A BHAG is typically at the edge of possible performance and should push the envelope of what is possible.
  • Write all tactical (non-BHAG) goals using the SMART framework.
    Using SMART (or a similar framework) will help you craft goals that are not only attainable but also provide feedback on how you are progressing.
  • Construct goals so that they motivate you to higher performance.
    Goals are a tool to prompt and guide you to an achievement or level of performance.
  • Develop goals that, if achieved, will feed your sense of accomplishment and self-confidence.
    Goals that feed your sense of accomplishment will act as a positive feedback loop for motivation.
  • Write goals that you have control over achieving.
    Writing individual goals that leave you at the mercy of others to achieve tends to reduce motivation and significantly lower the probability of achievement.
  • Prioritize your goals.
    All goals are not created equal. Understand which goal (or goals) are the most important and which can be deprioritized if absolutely necessary.
  • State your goals in a positive fashion.
    Goals stated positively provide more motivation by helping to frame feedback positively (most true for novice and intermediate personnel).
  • Set your goals with a clear and level head. (New Year’s Eve and Day special)
    Impaired thinking will yield promises that are very quickly shed.
  • Plan specific points in time for retrospectives and planning.
    Retrospectives provide time to reflect on progress and to re-plan how you will meet the goals you have set for the year.

Goal setting is an important method of providing guidance.  In some contexts goal setting can be a formal process with rules, frameworks, and forms; in others it can be slapdash.  This simple checklist is offered as a tool to help you maximize the value you get from the process of setting goals.

Happy New Year!

Categories: Process Management

The Business Support Team Pattern

Xebia Blog - Thu, 12/31/2015 - 01:22

Lately I've encountered several teams that organize their work using Agile methods, and they all exhibit a similar pattern. I call teams with such work patterns Business Support Teams. This type of team is usually responsible for operating the application, supporting the business in using the application, and developing new features on top of the (usually third-party) application.

The nature of the work may be plannable or highly ad hoc, e.g. production incidents and/or urgent requests from the business. In practice I notice that the more ad hoc work a team has to deal with, the more it struggles with approaches based on a single backlog of work.

In this post I'll show a set-up using boards and agreements that works very nicely for this type of team.

Kick-Off

In practice, teams that start with Agile often default to using Scrum. It initially provides a structure for teams to start off and sets a cadence for frequent delivery of work and feedback loops. Such teams often start with a 'typical' Scrum board consisting of 3 lanes: to do, in progress, and done.

Here, the team has a backlog consisting of features to support the business, a visual board with three lanes, Definition of Ready, Definition of Done, and a cadence of 2 or sometimes 3 week sprints.

Note: the Definition of Ready is not part of Scrum but is a good practice often used by Scrum teams. See also this blog.

Business Support Team

What makes the Business Support Team different from other teams is that, besides the list of features (application enhancements), it has other types of work. Typically the work includes:

  • Requests for information, e.g. reports,
  • New features to accelerate the business,
  • Long term improvements,
  • Keeping the application and platform operational
    • Daily and routine operational jobs
    • Handling production incidents

This follows the pattern described by Serge Beaumont in this presentation with the addition of Business Requests ('Requests for information').

Commonly Encountered Dissatisfactions

From a business point of view the customers may experience any of the following:

  • Unpredictable service as to when new features become available,
  • Limited support for questions and such,
  • Long waiting times for information and/or reports.

On the other hand, the team itself may experience that:

  • Work needed to keep the application operational severely limits the capacity to work on new features,
  • Interruptions due to incoming requests and/or incidents that require immediate attention cause longer lead times,
  • Too much work in progress,
  • Pressure to deliver for certain groups of business users comes at the cost of other stakeholders.

Business Expectations

The expectations from the customer with regard to the work items typically are (but may vary):

  • Requests for information, e.g. reports
    Nature: Ad hoc, and may vary;
    Expectation: typically ranges from 1 week to 1 month
  • New features to accelerate the business
    Nature: Continuously entering the backlog;
    Expectation: predictability
  • Keeping the application and platform operational
    • Daily and routine operational jobs
      Expectation: Just to have a running platform 😉
    • Handling production incidents
      Expectation: As fast as possible

From the team's perspective:

  • Long term improvements
    Expectation: be able to spend time on them regularly
  • Keeping the application and platform operational
    • Handling production incidents
      Expectation: disrupt the team as little as possible

The challenge for the team is to be predictable enough regarding lead times for New Features, while at the same time being able to take up work that requires immediate attention and maintaining acceptable lead times on business requests.

Board & Policies

A board and set of policies that work very well is shown to the right.

The columns are kept the same as what the team already has. The trick to balancing the work is to treat the different types of work differently. The board set-up above has four lanes:

Expedite: Reserved for work that needs to be dealt with immediately, e.g. production incidents. Sometimes also called 'Fast Lane'. Maximum of 1 work item at all times.

(Regular) Changes: Holds regular type of work which needs to be 'predictable enough'.

Urgent: Work that needs to be delivered within a certain service level, e.g. within 5 working days. Mainly business requests for information and support; may also include problems with priority levels lower than 1 😉

Operational: Meant for all tasks that are needed to keep the application up & running. Typically daily routine tasks.

Note: Essential to making this work is agreeing upon WiP (work in progress) limits and setting criteria for when work is allowed to enter the lanes. This is described in the section below.

Note: As mentioned above, this basically follows the pattern of [Beaumont2014], with the row for 'Urgent' work added.
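
The lane policies above can be sketched in code. The following is a minimal, hypothetical illustration: the item kinds, lane names beyond those described, and all WiP limits except the Expedite maximum of 1 are made-up assumptions, not part of any tool or the original presentation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class WorkItem:
    name: str
    kind: str  # 'incident', 'change', 'request', or 'operational' (assumed labels)

@dataclass
class Lane:
    name: str
    wip_limit: int
    accepts: Callable[[WorkItem], bool]  # admission criteria for this lane
    items: List[WorkItem] = field(default_factory=list)

    def admit(self, item: WorkItem) -> bool:
        # Enforce both the admission criteria and the WiP limit.
        if self.accepts(item) and len(self.items) < self.wip_limit:
            self.items.append(item)
            return True
        return False

# Expedite has a hard WiP limit of 1, as stated above; the other limits are illustrative.
lanes = [
    Lane("Expedite", 1, lambda i: i.kind == "incident"),
    Lane("Changes", 3, lambda i: i.kind == "change"),
    Lane("Urgent", 2, lambda i: i.kind == "request"),
    Lane("Operational", 2, lambda i: i.kind == "operational"),
]

def schedule(item: WorkItem) -> str:
    for lane in lanes:
        if lane.admit(item):
            return lane.name
    return "blocked"

print(schedule(WorkItem("DB down", "incident")))    # Expedite
print(schedule(WorkItem("Second outage", "incident")))  # blocked: Expedite WiP limit is 1
```

The point of the sketch is that balancing happens through the admission rules and WiP limits, not through the columns, which stay the same as on the team's original board.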

Policy Example

As explained in [Beaumont2014], the Definition of Ready and Definition of Done guard the quality of work that enters and leaves the team, respectively.

Definition of Run

Specifies the state of the application: what does it mean that it is 'running'? Work items that bring the system back into this state go into the Expedite lane. Example:

  • Application is up & running,
  • Application's response is within ... seconds,
  • Critical part of application is functioning,
  • ...

Note: There is one other type of work that is usually allowed in this lane: items that have a deadline and that are late for delivery... and have a policy in place!

Definition of Change

Regular requests for new features and enhancements. Predictability is most important; a certain variation in lead time is considered acceptable.

Service level is ... weeks with 85% on time delivery (1 standard deviation service level).

Definition of Request

Business requests. Typical use includes requests for support and creation of reports (e.g. national banks may require reports as part of an audit), plus other types of requests that are not critical enough to go into the Expedite lane, are more than a couple of hours of work, and for which considerably shorter lead times are expected than for changes.

Example criteria:

  • Business requests, requiring less than 1 week of work, or
  • A problem report with severity of 2, 3, or 4

Service level is .... working days with 95% on time delivery.
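
A service-level policy like the ones above can be checked against historical data. The sketch below is a made-up illustration: the lead times and the 5-working-day target are invented numbers standing in for the unspecified values in the policies, not real measurements.

```python
# Given historical lead times (in working days), compute what fraction of
# items met a target, i.e. the observed on-time delivery rate for a policy
# such as "... working days with 95% on time delivery".
def on_time_rate(lead_times, target_days):
    met = sum(1 for t in lead_times if t <= target_days)
    return met / len(lead_times)

# Illustrative lead times for 'Request' items, in working days (assumed data).
request_lead_times = [2, 3, 1, 4, 5, 2, 6, 3, 2, 4]

rate = on_time_rate(request_lead_times, target_days=5)
print(f"{rate:.0%} delivered within 5 working days")  # 90% delivered within 5 working days
```

Comparing the observed rate against the promised percentage tells the team whether the policy is realistic or whether the target or the WiP limits need adjusting.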

Definition of Operational

Describes the routine tasks that need to be done regularly to keep the system running as stated in the Way of Working.

Example criteria:

  • Less than 2 hours of work, and
  • Routine task as described in the Way of Working, and
  • Can be started and completed on the same day, and
  • Maximum of ... hours per day by the team.

It is important to limit the amount of time spent on these items by the team so the team can maintain the expected service levels on the other types of work.
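
The example criteria above amount to a simple admission check. The sketch below is hypothetical: the 4-hour daily budget is an assumption standing in for the unspecified "maximum of ... hours per day", and the function name is invented for illustration.

```python
# Hypothetical admission check for the 'Operational' lane, based on the
# example criteria above. The 4-hour daily budget is an assumed value.
def fits_operational(hours_of_work, is_routine, can_finish_same_day,
                     hours_spent_today, daily_budget_hours=4):
    return (hours_of_work < 2                 # less than 2 hours of work
            and is_routine                     # routine task per the Way of Working
            and can_finish_same_day            # started and completed the same day
            and hours_spent_today + hours_of_work <= daily_budget_hours)

print(fits_operational(1.5, True, True, 2.0))  # True: within all criteria
print(fits_operational(3.0, True, True, 0.0))  # False: more than 2 hours of work
```

The last clause is what protects the service levels on the other lanes: once the daily budget is spent, further operational tasks wait for the next day.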

Cadences

Of all the aforementioned work item types, only the 'Change' item is plannable; 'Run' items are very ad hoc by nature, as is the case with 'Operational' tasks (some are routine and some just pop up during the day). Requests from the business tend to come in on short notice with the expectation of short delivery times.

Because of the ad hoc nature of most work item types, planning and scheduling of work needs to be done more often than once every sprint. Replenishment of the 'To do' column is done continuously for the rows whenever new work arrives. The team can agree on a policy for how often and how they want to do this.

The sequence of scheduling work between the rows is done either by the product owner or self-regulated by a policy that includes setting WiP limits over the rows. This effectively divides the team's available capacity between the work item types.

Summary

Business Support Teams follow a pattern very similar to that described in [Beaumont2014]. In addition to the 'Run', 'Change', and 'Operational' types of work, the type 'Request' is identified. The work is not described by a single backlog of similar work items but rather as a backlog of types of work. These types are treated differently because they have a different risk profile (Class of Service).

This enables the team to be predictable enough on 'Changes' with an acceptable (not so small) variation in lead time, to offer a higher service level on ad hoc requests from the business ('Request'), to plan their daily routine work, and at the same time to deliver 'Run' items as fast as possible.

Allowing a larger variation on the 'Change' items allows for a higher service level on the 'Request' items.

References

[Beaumont2014] The 24 Man DevOps Team, Xebicon 2015, Serge Beaumont, https://xebicon.nl/slides/serge-beaumont.pdf

Systematic Problems With Goals and Goal Setting

Watch your step when setting goals!

Setting goals is important for deciding what you want to achieve in a specific period. Goal setting provides value by forcing a degree of introspection, acting as a filter to separate the important from the irrelevant, and serving as a guide to channel behavior. Like many things in life, the journey is often as important as the destination; however, setting goals is complex. There are many processes and frameworks to provide structure for setting goals; even with a framework, though, goal setting sometimes gets off track. Several systematic problems observed when setting goals include:

Too Many Goals: Even with a framework, it is possible to set too many goals. When you set too many goals it becomes difficult to get them done. Like any other work environment, having too many tasks or projects in progress at any one time tends to complicate and slow progress. Solution: Implement a work-in-process (WIP) limit for your goals so that resource contention is minimized.

Too Narrow of a Focus: This is a corollary to the “Too Many Goals” problem. Focusing on a single aspect of your life (your hobby or career, for example) can lead to a lack of balance that might cause you to sacrifice attention on other important aspects of your life. Solution: Seek a balance between work, family, and health.

Poorly Estimated Goals: SMART goals are by definition supposed to be attainable and time-boxed. Any task that has a time box needs to be the right size to fit in the time box. Determining the right size requires estimating what can be accomplished in the amount of time in the time box. All estimation exercises require a solid definition of done and an understanding of the level of commitment to the task or goal. Solution: Use portfolio management techniques to prioritize your goals. Techniques can include value ranking or weighted shortest job first. Once prioritized, the goals can be broken down into subgoals and tasks and then planned using Agile planning techniques (backlogs, Kanban, and planning meetings similar to sprint planning).
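
Weighted shortest job first ranks items by cost of delay divided by job size, with the highest score done first. The sketch below is purely illustrative: the goal names and scores are invented, not a prescribed scale.

```python
# Weighted Shortest Job First (WSJF): prioritize by cost of delay divided by
# estimated job size. Names and scores below are made-up examples.
goals = [
    {"name": "Learn a new language", "cost_of_delay": 3, "size": 8},
    {"name": "Finish certification", "cost_of_delay": 8, "size": 4},
    {"name": "Write a blog post",    "cost_of_delay": 1, "size": 1},
]

for g in goals:
    g["wsjf"] = g["cost_of_delay"] / g["size"]

# Highest WSJF score first: small, high-value goals rise to the top.
ranked = sorted(goals, key=lambda g: g["wsjf"], reverse=True)
for g in ranked:
    print(f'{g["name"]}: WSJF = {g["wsjf"]:.2f}')
# Finish certification: WSJF = 2.00
# Write a blog post: WSJF = 1.00
# Learn a new language: WSJF = 0.38
```

Any relative scale works for the two scores; what matters is that the ranking favors goals that deliver value quickly relative to the effort they demand.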

Viewing Goals as Static: The world is a dynamic place. As time moves forward, life happens, and the context may shift; that means you may need to change your goals. Similarly, as time goes by, some goals may have been accomplished without a next step or follow-on goal. I have adopted a weight loss goal nearly every year of my adult life, which I almost always meet; once accomplished, it is quickly celebrated and then not maintained. Solution: Consider adopting a broader BHAG-type goal that requires a more systemic approach and can’t be achieved or failed in a single step, instead of a single-step SMART goal. For example, improve your health rather than the simpler goal of losing 20 pounds. The sub-goals that support the BHAG can use the SMART framework. Secondly, perform periodic retrospectives to review progress and re-plan your goals based on context.

Inflicted Goals: Don’t set goals for others, and avoid having goals set for you. Goals that are inflicted on you are not owned, which will lead to problems with motivation. Once upon a time in my role as a manager, my boss provided me with my annual goals which I then turned around and apportioned to my employees. I was not very motivated even though I received monthly tongue lashings to ensure I was progressing toward the goal (I did not pass the tongue lashings down). Solution: Push back on goals that are set for you (within limits of decorum). When setting a goal, the person who is going to be held accountable for achieving the goal MUST be involved in setting the goal.

Weaponized Goals: When I first got out of school, I worked for a now-defunct clothing manufacturer. On my first or second day, my boss was explaining how sales quotas were set. One of the quotas was significantly higher than the equation called for. When I asked why, I was told that the organization wanted the person to quit and that the goals were being used as a tool to send a message. A goal being used as a weapon will almost always demotivate everyone involved. Solution: Goals are not weapons; don’t use them that way.

Goals are used to guide and motivate. However, getting goals wrong can demotivate and lead to poor outcomes. There are a number of classic issues that lead people to set or adopt poor goals. Begin by reviewing how you set your goals over the last few years and identify whether you suffered from any of these systemic problems. Knowing the problems that tend to impact how you set your goals before you start the process will allow you to modify your process so that you don’t make the same mistakes again!


Categories: Process Management
