Software Development Blogs: Programming, Software Testing, Agile Project Management


Process Management

Cypress - Dealing with flaky tests

Xebia Blog - Fri, 12/02/2016 - 10:32
Test automation is all about feedback. Feedback that gives you quality updates about the features your team has built. A continuous green build is always the goal because this should give you the confidence you need to go to production. Unfortunately, I’m more used to a “traffic light build”, a build which passes and fails

Post Agile Age: The Movement Is Dead

Don’t get stuck.

In February 2001 the Agile Manifesto was signed by 17 people. The Manifesto comprises four values and 12 principles. It acted as a lightning rod for what became the Agile movement, providing a new framework for thinking about how work should, or could, be approached. That framework challenged the standard thinking about how software should be developed, enhanced, and maintained. 2001 was a year of transition. Even though most organizations were successful, the US economy was on the verge of a recession (the NBER tracked the 2001 recession from March 2001 to November 2001), many IT jobs were being outsourced, and the oft-quoted Chaos Reports suggested that software (and by extension hardware and systems) projects were late, over budget and did not deliver what the business needed. Anyone connected to the broad software development industry had numerous war stories about projects that were death marches or abject failures. That said, all was not a wasteland. Most organizations were successful and most practitioners had their own success stories. If everything had been doom and gloom, most of us would have left software development, because constant failure is debilitating. Needless to say, change was in the air in the late ’90s and early ’00s. The Agile movement caught fire.

As the Agile movement was born, early adopters showed great passion for embracing the concepts of experimentation to find better ways of working, breaking work into small parts, and self-organization and self-management so teams could work together better. The success of the early adopters influenced whole organizations to embrace Agile, which espoused the idea that there was not just one perfect way to develop, enhance and maintain software; rather, the complexity of team and organizational capabilities, business and technical factors, and macro-environmental concerns (such as competition and regulation) required teams to have the flexibility to address them. In some cases, where organizations frowned on Agile, stories circulated of development teams revolting. It was said that teams changed how they operated while hiding the fact from their process overlords (that, at least, was the perception). I have always assumed that these stories were apocryphal; however, they made great stories to salt conference presentations with, and I know I sat in rapt silence as I heard them recounted.

Over the past 15 or so years, Agile has taken off. If we use the output of the Google Books Ngram Viewer as a proxy for the popularity of Agile, we see an extraordinary growth curve.

ngram

That said, the movement driven by values and principles has faded, replaced by a focus on frameworks and techniques. A little over a year ago, I witnessed an early advocate for Agile in an organization (someone I had a lot of admiration for) stand up in front of a team he was part of and say, “I do not want to hear or talk about principles anymore, let’s just do BDD (behavior-driven development).” He was tired of fighting to keep teams together over time, he was tired of being pressed for up-front requirements documents and, in reality, he was tired of fighting to try new ways of working in his department. (Side note: the person in question now sells real estate.) This level of frustration is rife in many quarters of the software development community.

Today many organizations have accepted what Ken Schwaber, co-creator of Scrum, called “ScrumButt” and others now call blended/hybrid Agile (or worse). Acceptance is the pragmatic approach and possibly the natural evolution of Agile. However, there are complications that, while they will ensure Agile techniques remain the go-to tools for a long time, spell the end of Agile as a movement.

Planned essays in the Post Agile Age arc include:

  1. Prescriptive Agile
  2. Tool Centric Agile
  3. The Return of People Versus System Management
  4. Method Lemmings
  5. The Age of Aquarius (Something Better is Beginning)

PS – I use Agile techniques in my writing, evaluating on a daily basis and re-planning weekly. This is the long way of saying that your comments and input are important to shaping this and all of the work I post on the Software Process and Measurement blog.


Categories: Process Management

Are Vanity Metrics Statistically Valid?

How far is it?

Just because you can measure it, should you?

Over the last two weeks, we published three articles on vanity metrics:

Vanity Metrics in Software Organizations

Vanity Metrics? Maybe Not!

Vanity Metrics Have a Downside!

One of those articles elicited an interesting discussion on Twitter. The discussion was predominantly between @EtharUK (Ethar Alali) and @GregerWikstrand. The discussion started by using the blog entry 4 Big Myths of Profile Pictures from OKCupid to illustrate vanity metrics, and then shifted to a discussion of whether vanity metrics can be recognized based on statistical validity. The nice thing about Twitter is that responses are forced to be succinct. See the whole conversation here.

twitter-conversation

The observation that vanity metrics are not necessarily unsound is important to remember. Just because a metric is statistically sound does not make it useful. The four criteria that help recognize vanity metrics (gameability, linked to business outcomes, provides process knowledge, and actionable) specifically did not include a statistical test because statistical validity is not discriminative for determining whether a metric is valuable or not. I have a shelf in my library that includes several statistical texts that I use in my day-to-day business (and a large box of stats and quantitative methods books in the attic). When it comes to stats, I have game. I can comfortably say that a measure or metric can only have value IF the data is correct AND the measures and metrics derived from that data are statistically valid.

Quantitative management requires good data. This statement seems so overtly obvious that it can almost go unstated. Almost being the critical word. Most data collection errors in an IT organization are not due to conscious cheating; the problem is typically loose or multiple definitions. This means it is difficult to get a good handle on what any piece of information means unless you do the due diligence to construct a definition that everyone understands and will use. For example, the “simple” measure of when a project ends often differs from team to team. Does a project end at implementation, after some support period or at the end of the year? These are three common definitions I have observed in the same department within a single company. Garbage in, garbage out (or worse)!

Statistical validity is a question of whether the conclusions drawn from any specific statistical test are accurate and reliable. In order to draw correct conclusions, there are numerous statistical concepts that need to be understood, including standard error, variability, and sample size. The type of test applied to a measure or metric determines how statistical validity is established for the conclusions drawn from the data. While statistics are one of the most popular water cooler discussions (albeit people think they are talking about sports or politics), almost no one acknowledges the depth of statistical knowledge needed to drive important decisions based on quantitative data. Note: I say this with the full knowledge that almost everyone who has been through college or university has had some formal exposure to statistics. All “good” measures or metrics must have statistical validity, but just because a measure or metric is statistically valid does not mean the conclusions drawn from it are useful.
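As a minimal illustration of the statistical side (my sketch, not from the original article), the hypothetical Python snippet below uses standard error and sample size to ask whether a month-over-month shift in a delivery metric is larger than sampling variability alone would explain; the cycle-time numbers are invented for the example.

    import math
    import statistics

    def standard_error(sample):
        """Standard error of the mean: sample standard deviation / sqrt(n)."""
        return statistics.stdev(sample) / math.sqrt(len(sample))

    def shift_exceeds_noise(last_month, this_month, z=1.96):
        """Rough two-sample check: is the difference in means bigger than
        roughly 95% of what sampling variability alone would produce?"""
        diff = statistics.mean(this_month) - statistics.mean(last_month)
        combined_se = math.sqrt(standard_error(last_month) ** 2 +
                                standard_error(this_month) ** 2)
        return abs(diff) > z * combined_se

    # Hypothetical story cycle times (in days) for two consecutive months.
    october = [4.0, 6.5, 5.2, 7.1, 4.8, 6.0, 5.5, 6.8]
    november = [5.6, 7.9, 6.7, 8.5, 6.4, 7.7, 7.1, 8.3]

    if shift_exceeds_noise(october, november):
        print("The change looks like signal; investigate the process.")
    else:
        print("The change is within sampling noise; don't react to it yet.")

Even when a check like this passes, the metric still has to clear the four criteria above before it is worth acting on.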

Good data and statistical validity are just the table stakes that get us to the point of determining whether a measure or metric can be classified as a vanity metric. Without both good data and statistical validity, any measure or metric is not valuable.


Categories: Process Management

Software Development Conferences Forecast November 2016

From the Editor of Methods & Tools - Tue, 11/29/2016 - 16:00
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

The Core Principles of Agile

Mike Cohn's Blog - Tue, 11/29/2016 - 16:00

I recently watched a video titled, "Why Is Modern Art So Bad?"

One of the points of the video was that for centuries, art improved because artists demanded of themselves that they meet the highest standards of excellence.

The video claimed that this aspiration in art was eventually pushed out by the claim that "beauty is in the eye of the beholder." All art then became personal expression and essentially anything could be art.

And we've probably all been to museums and seen--or at least heard of--exhibits that left us wondering, "How is that art?"

Without standards of excellence in art, anything can be art.

Agile without Standards

And the same applies to agile. Without standards of excellence for agile, anyone can call anything agile.

I encounter this today in talking to companies that are agile because the boss has declared them agile. They don't deliver frequently. They don't iterate towards solutions. They don't seek continuous improvement. Teams are not empowered and self-organizing. But they must be agile because someone has slapped that label on their process.

Worse, we all see this today in heavyweight methodologies that don't resemble the agile described in the Agile Manifesto. But they must be agile because it's right there in the name of the process.

Many of us, as experienced agilists, can recognize what is truly agile when we see it. Yet agile is hard to define. It's more than just the four value statements or 12 principles of the Agile Manifesto. Or is it less than those?

What Do You Think?

What do you think are the core principles or elements of agility? Please share your thoughts in the comments below.

Quote of the Month November 2016

From the Editor of Methods & Tools - Mon, 11/28/2016 - 13:32
There is surely no team sport in which every player on the field is not accurately aware of the score at any and every moment of play. Yet in software development it is not uncommon to find team members who do not know the next deadline, or what their colleagues are doing. Nor is it […]

SPaMCAST 419 – Notes on Distributed Stand-ups, QA Corner, Configuration Management, Software Sensei

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 419 features our essay on eight quick hints on dealing with stand-up meetings on distributed teams. Distributed Agile teams require a different level of care and feeding than a co-located team in order to ensure that they are as effective as possible. Remember an update on the old adage: distributed teams, you can’t live with them and you can’t live without them.  

We also have a column from the Software Sensei, Kim Pries.  In this installment, Kim talks about the  Fullan Change Model. In the Fullan Change Model, all change stems from a moral purpose.  Reach out to Kim on LinkedIn.

Jon M Quigley brings the next installment of his Alpha and Omega of Product Development to the podcast.  In this installment, Jon begins a 3 part series on configuration management.  Configuration management might not be glamorous but it is hugely important to getting work done with quality.  One of the places you can find Jon is at Value Transformation LLC.

Anchoring the cast this week is Jeremy Berriault and his QA Corner.  Jeremy explored exploratory testing in this installment of the QA Corner.  Also, Jeremy has a new blog!  Check out the QA Corner!

Re-Read Saturday News

The read/re-read of The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass) continues on the Blog. Lencioni’s model of team dysfunctions is illustrated through a set of crises that highlight the common problems that make teams into dysfunctional collections of individuals. The current entry features the sections titled Leaks through Plowing On.

Takeaways from this week include:

  • Partial information leads to misinterpretations.
  • Executives need to be ultimately loyal to the executive team rather than their siloed organizations.
  • Productive conflict requires facilitation to learn.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 420 will feature our interview with John Hunter. John returns to the podcast to discuss building capability in the organization and understanding the impact of variation. We also talked Deming and why people tack the word improvement onto almost anything!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management


Five Dysfunctions of a Team, Patrick Lencioni: Re-Read Week 9

 

The Five Dysfunctions of a Team Cover

The “Book” during unboxing!

In this week’s re-read of The Five Dysfunctions of a Team by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing), we continue with the third section of the book. This section begins the second off-site with the team wrestling with defining who their first team is. The concept of the first team can be summarized as the team to which a member owes their ultimate work loyalty. Any scenario that includes a hierarchy of teams faces this issue.

(Remember to buy a copy from the link above and read along.)  We are well over halfway through this book and I will begin soliciting ideas for the next book soon.

Leaks
Kathryn discovers that others in the organization seem to know bits and pieces of what was going on in the offsite meetings. Opinions are being formed based on incomplete knowledge. Opinions based on partial knowledge tend to be interpreted through the filter of individual and team boundaries. Misperceptions can lead to confusion and infighting amongst teams.

Offsite Number Two
The second offsite began with Kathryn bringing up the topic of what the executive team had told their subordinates about the first offsite meeting. The issue was not that they had communicated, but rather how they had handled confidential conversations among the team and which team they owed their ultimate loyalty to.

The question became who the executive team members were loyal to: their subordinates or the executive team. Several of the executive team were closer to their subordinates than they were to the management team. Members of a team that has different ultimate loyalties than the executive team (and by extension the organization) will tend to put the needs of their other team ahead of the organization. This can lead to a type of local optimization in which one team’s performance is optimized at the expense of the performance of the organization as a whole. The discussion of shifting loyalties to the executive team caused consternation. Team members equated shifting loyalties with bad management and abandoning carefully crafted relationships that had been built and nurtured over time.

Plowing on
Kathryn moved the team on by asking how they were doing. What she got was a tactical discussion of the impact of JR quitting and Nick taking over the sales function. What she really wanted was the team’s perception of how they were doing as a team. The answer was that the team still had not learned to discuss the hard problems, such as resource allocation.

Carlos suggested that the organization was engineering-heavy, which caused Martin to go into defense mode, refusing to accept or discuss the possibility. Kathryn defused the negative conflict by facilitating it into more productive conflict that broke down walls between the individuals, ensuring that everyone developed an understanding that they all wanted the best for the company. In the end, Martin went to the whiteboard and educated the team on what everyone in engineering was doing. Martin’s executive team peers were surprised by everything going on within engineering. The rest of the team engaged, going to the whiteboard to add their input to the discussion of resources. The conversation ended with a decision to make changes so that some engineers moved into a sales support role because that was what was best for the organization, not just for Martin and the engineering group. The section closed with a lunch break and Kathryn’s announcement that they would talk about dealing with interpersonal discomfort and holding each other accountable next.

Three quick takeaways:

  • Partial information leads to misinterpretations.
  • Executives need to be ultimately loyal to the executive team rather than their siloed organizations.
  • Productive conflict requires facilitation to learn.

Previous Installments in the re-read of  The Five Dysfunctions of a Team by Patrick Lencioni:

Week 1 – Introduction through Observations

Week 2 – The Staff through the End Run

Week 3 – Drawing the Line through Pushing Back

Week 4 – Entering Danger through Rebound

Week 5 – Awareness through Goals

Week 6 – Deep Tissue through Exhibition

Week 7 – Film Noir through Application

Week 8 – On-site through Fireworks


Categories: Process Management

Vanity Metrics Have a Downside!

Danger!

Vanity metrics are not merely an inconvenience; they can be harmful to teams and organizations. Vanity metrics can elicit three major categories of poor behavior.

  1. Distraction. You get what you measure. Vanity metrics can lead teams or organizations into putting time and effort into practices or work products that don’t improve the delivery of value.
  2. Trick teams or organizations into believing they have answers when they don’t. A close cousin to distraction is the belief that the numbers are providing an insight into how to improve value delivery when what is being measured isn’t connected to the flow of value. For example, an organization that measures the raw number of stories delivered across the department should not draw many inferences from the month-to-month velocity of stories delivered.
  3. Make teams or organizations feel good without providing guidance. Another kissing cousin to distraction is metrics that don’t provide guidance. Metrics that don’t provide guidance steal time from work that could provide real value because they require time to collect, analyze and discuss. On Twitter, Greger Wikstrand recently pointed out:

@TCagley actually, injuries and sick days are very good inverse indicators of general management ability

While I agree with Greger’s statement, his assessment is premised on someone using the metric to affect how work is done.  All too often, metrics such as injuries and sick days are used to communicate with the outside world rather than to provide guidance on how work is delivered.

Vanity metrics can distract teams and organizations by sapping time and energy from delivering value. Teams and organizations should invest their energy in collecting metrics that help them make decisions. A simple test for every measure or metric is to ask: Based on the number or trend, do you know what you need to do? If the answer is ‘no’, you have the wrong metric.


Categories: Process Management


Nomad 0.5 configuration templates: consul-template is dead! long live consul-template!

Xebia Blog - Thu, 11/24/2016 - 09:20
Or... has Nomad made the Consul-template tool obsolete? If you employ Consul or Vault to provide service discovery or secrets management to your applications you will love the freshly released 0.5 version of the Nomad workload scheduler: it includes a new 'template' feature to dynamically generate configuration files from Consul and Vault data for the jobs it

Vanity Metrics? Maybe Not!

A beard without gray might be a reflection of vanity at this point in my life!

Unlike vanity license plates, calling a measure or metric a ‘vanity metric’ is not meant as a compliment. The real answer is never as cut and dried as when someone jumps up in the middle of a presentation and yells, “That is a vanity metric; you are suggesting we go back to the Middle Ages.” Before you brand a metric with the pejorative “vanity metric,” consider:

  1. Not all vanity metrics are useless.
  2. Your perception might not be the same as someone else’s.
  3. Just because you call something a vanity metric does not make it true.

I recently toured several organizations that had posted metrics. Several charts caught my eye. Three examples included:

  1. Number of workdays injury-free;
  2. Number of function points billed in the current quarter, and
  3. A daily total of user calls.

Using our four criteria (gameability, linked to business outcomes, provides process knowledge, and actionable), I could classify each of the metrics above as a vanity metric, but that might just be my perception based on the part of the process I understand.

The number of workdays injury-free is a simple metric I have seen at construction job sites, manufacturing plants and warehouses since I entered the workplace. The number tends to increment over time until it suddenly shifts to zero. By all definitions of a vanity metric, the number shown has little to do with the output of the process, nor is it really actionable. That said, the metric provides workers with some assurance that management is “interested” in the well-being of its employees, or at least wants to avoid the fines for not posting the chart. Clearly some vanity metrics are useful.

IFPUG function points are a measure of software functionality delivered. Function points were introduced in the late 1980s and have evolved over the years. Function points are sometimes perceived as a vanity metric when not used as a system metric. While this might be true in some scenarios, if we consider the common purchasing practice of paying per function point used in several countries (including the US, Brazil, Australia, Korea and Japan, to name a few), the metric clearly is linked to a business outcome and is actionable, and therefore is not a vanity metric. The perspective of whoever is using the metric clearly impacts how it is classified.

I have worked for call centers, help desks and voice credit card authorization organizations during my career. One of the most ubiquitous metrics collected and displayed is the total number of calls answered daily (versions of this chart are limitless and include calls per hour and calls during peak hours). During a recent tour of a warehouse call center, one of the people on the tour suggested the metric was purely for the vanity of the organization and for showing visitors. The tour leader pointed out that the metric was used for staffing the call center properly so employees would not burn out and so that customers got their questions answered in a timely manner. I made sure I was not standing next to him for the rest of the tour in case retribution for the snide question was required. Calling something a vanity metric can be a matter of perception; however, in some cases it is a knee-jerk negative reaction to any form of measurement. Clearly, just because someone calls a measure or metric a vanity metric does not mean the epithet is true.

The concept of measurement and metrics in software development is always an interesting discussion. Metrics and measures are useful tools to support empirical processes, such as Scrum, used in software development. Measures and metrics provide transparency so that we can inspect and adapt. That does not mean that vanity metrics don’t exist. For example, on my tours I saw a chart that represented the number of user stories completed this year by month . . . for the whole department. I have no clue how the metric could be used, and neither did my hosts when I asked what decisions were driven by the data shown. Clearly a vanity metric based on any criteria you might propose.


Categories: Process Management

Software Development Linkopedia November 2016

From the Editor of Methods & Tools - Mon, 11/21/2016 - 15:08
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about introvert project manager, scaling Agile, Test-Driven Development, UX vs UI, philosophy and programming, retrospectives, BDD in Java and Agile metrics. Blog: How Introvert Can Survive as Project Manager Blog: #AgileAfrica […]

Five Dysfunctions of a Team, Patrick Lencioni: Re-Read Week 8

The Five Dysfunctions of a Team Cover

The “Book” during unboxing!

I am back from Øredev in Malmö, Sweden. It was a wonderful conference. Check out my short review.

In this week’s re-read of The Five Dysfunctions of a Team  by Patrick Lencioni (Jossey-Bass, Copyright 2002, 33rd printing), the team returns to the office and quickly begins the transformation process.

(Remember to buy a copy from the link above and read along.)

Part Three – Heavy Lifting

On-site

Kathryn and the team return to the day-to-day grind of the office. Significant progress building teams can be made when the day-to-day pressures of the office are removed, but Kathryn immediately observes that the progress the team made offsite deteriorates. I have observed that much of the progress made when away from the office is transitory without reinforcement. Behavior tends to revert when confronted by the same triggers. All progress goes out the window when Nick calls a meeting to propose acquiring another firm and includes only a subset of the team. When called on not including Mikey, Nick slams her competence. Despite a rocky start, the team holds a fairly good discussion of the plusses and minuses until Nick blurts out that while Kathryn might be great at teamwork, she doesn’t know anything about the business and isn’t qualified to participate in the discussion. Kathryn doesn’t let the slight slide and gives Nick the choice of having it out right there in public or behind closed doors.

Nick is frustrated that he is underutilized. He feels that he could be leading the organization. The acquisition proposal is a reflection of his frustration. He implies that perhaps he should quit. Kathryn points out that he is underutilized because he is only interested in advancing his own career rather than advancing the goals of the organization. Earlier in my career as a quality manager, I reported to the general manager of the organization. One of my co-workers was a Nick. All that was important to him was the next rung on the ladder. He never did anything that did not directly benefit this goal. He was not much of a team player and often caused conflict amongst the team. Everyone was happy when he was promoted to another site (I heard he flamed out). Kathryn leaves Nick to sort things out in advance of her first staff meeting late in the day.

Fireworks

The staff meeting starts at two with everyone present except Nick and JR. Nick arrives at the last second and interrupts Kathryn as she begins, delivering an apology for his outburst during the meeting earlier in the day. He publicly admits to the team (showing trust) that he feels underutilized and that his underutilization will reflect poorly on his career. Even though he is frustrated, he doesn’t want to leave yet and needs everyone’s help to find something he can hang his hat on. Lencioni uses the reversal in his behavior to provide an example of how team members should be able to safely ask for help. After Nick bares his soul, Kathryn drops the bombshell that JR had quit the night before. With JR gone, someone needs to step up and take the sales role. Carlos volunteers (Carlos tries to please, as we have seen before). In the end, Nick decides to take the sales role (he is underutilized), even though when he came to DecisionTech he felt that sales pigeonholed him, despite being “damn good at it.” Carlos was relieved not to have been called on to deliver on his suggestion. Remember to be careful what you ask for…you just might get it, and in Carlos’s case, he was not underutilized.

In the end, Nick’s underutilization poisoned his attitude in the same way over-utilization can poison attitudes. Team members need to be able to trust each other enough to ask for and provide help.

Three quick takeaways:

  • Never tell your boss they are unqualified unless you are willing to suffer the consequences.
  • Not everyone can fit into every team (team members are not easily replaceable parts).
  • Trust can be learned in theory at off-site meetings, but trust is really learned on-site.

Previous Installments in the re-read of  The Five Dysfunctions of a Team by Patrick Lencioni:

Week 1 – Introduction through Observations

Week 2 – The Staff through the End Run

Week 3 – Drawing the Line through Pushing Back

Week 4 – Entering Danger through Rebound

Week 5 – Awareness through Goals

Week 6 – Deep Tissue through Exhibition

Week 7 – Film Noir through Application


Categories: Process Management

Vanity Metrics in Software Organizations

twitter

Measurement and metrics are lightning rods for discussion and argument in software development. One of the epithets used to disparage measures and metrics is the term ‘vanity metric’. Eric Ries, author of The Lean Startup, is often credited with coining the term to describe metrics that make people feel good but are less useful for making decisions about the business. For example, I could measure Twitter followers, or I could measure the number of blog reads or podcast listens that come from Twitter. The count of raw Twitter followers is a classic vanity metric.

In order to shortcut the discussion (and reduce the potential vitriol) of whether a measure or metric can be classified as actionable or vanity, I ask four questions:

  1. Are there mechanisms in place to ensure the measure isn’t game-able? Does the metric reflect how work is being done, or are guidelines in place so it can’t easily be manipulated without changing the outcome of the process? For example, I can buy 10,000 Twitter followers, but adding these users will not translate into blog readers or podcast listeners, which is the important output.
  2. Do changes in the measure or metric correlate to changes in the business outcome? For example, the measure of automated code coverage is typically positively correlated with product quality and with the amount of value delivered. If a metric is not correlated, there is a strong possibility that the metric is a vanity metric.
  3. Does the metric provide an understanding of what is happening within the process being measured without confusion or ambiguity? For example, if a team measures the number of test cases run and that number increased (or decreased), what would the change mean?
  4. When the measure shows a change, can and will you be able to take action? If criterion 3 is true and you understand the signal being sent, can and will your organization do something about it? For example, an organization I recently met with measured overtime amongst developers. Coders and testers had chronically put in 8 hours of overtime each week for over a year; the organization either could not use the data to make a change or chose not to use it. This was a vanity metric.

If you can answer the four questions with a yes, the metric will be actionable.  A no to any of the four questions generally indicates a vanity metric. Cutting out vanity metrics provides a better focus on the measures that provide value.  
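To make the checklist concrete, here is a minimal sketch (my illustration, not part of the original post) that encodes the four questions as a simple yes/no assessment; the field names and the Twitter-follower example values are assumptions, not something from the article.

    from dataclasses import dataclass

    @dataclass
    class MetricAssessment:
        """Answers to the four questions for a candidate measure or metric."""
        hard_to_game: bool             # 1. Safeguards against manipulation?
        tracks_business_outcome: bool  # 2. Changes correlate with a business outcome?
        explains_process: bool         # 3. Unambiguous signal about the process?
        drives_action: bool            # 4. Can and will the organization act on it?

    def is_actionable(a: MetricAssessment) -> bool:
        """Four yeses means actionable; any no marks a likely vanity metric."""
        return all((a.hard_to_game, a.tracks_business_outcome,
                    a.explains_process, a.drives_action))

    # Hypothetical example: the raw count of Twitter followers.
    followers = MetricAssessment(hard_to_game=False,  # followers can be bought
                                 tracks_business_outcome=False,
                                 explains_process=False,
                                 drives_action=False)
    print("actionable" if is_actionable(followers) else "vanity metric")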

Next – one person’s vanity metric is another’s actionable metric


Categories: Process Management

Is Your Organization Killing Your Software?

From the Editor of Methods & Tools - Thu, 11/17/2016 - 16:19
When asked “What is your architecture?” most people immediately respond with how their software is laid out and what their plans are for improving parts of it. Rarely does anybody really think through their team and organizational architecture, and even more rarely do people understand how that may fundamentally impact how the software gets written […]

Better guesswork for Product Owners

Xebia Blog - Thu, 11/17/2016 - 09:22
Estimation, if there is one concept hard to grasp in product development it will be when things are done. With done I don’t mean the releasable increment from the iteration, but rather what will be in it? or in Product Management speak: “what problem does it solve for our customer?”. I increasingly am practicing randori

Øredev 2016 – A Quick Review

Øredev 2016
November 9 – 11
Malmö, Sweden

Øredev was AWESOME! The short review reads: good sessions, good food, good people and great conversations. Here’s the longer version . . .

I attend and speak at several conferences every year. Those conferences include process, test, measurement, quality and development conferences. This year I was asked to speak at Øredev. The conference portion of Øredev, described as a developer conference, spanned three days, run by a master of ceremonies who managed six excellent keynote speakers who challenged the audience to harness the power of computing in the broadest sense. Two of the keynotes stood out above the rest. The most memorable was Christian Heilmann’s talk on the fourth industrial revolution. Mr Heilmann got me excited about the computing power and the interfaces between humans and machines just over the horizon. There are exciting times to come. The second was Jurgen Appelo’s keynote titled “Managing for Happiness,” which discussed Management 3.0 and the fact that happy workers are more productive workers.

In addition to the keynotes, there were over 180 track sessions. Each day was made up of seven tracks, with sessions on coding, testing, project management, risk, tools, and even mindfulness. As a somewhat jaded conference goer, I found at least one session in each time slot that was useful and challenging. For example, Jose Lorenzo Rodriguez’s session titled “Fixing mind anti-patterns with mindfulness” has stayed with me since I attended it (expect to hear Jose on the podcast).

One of the more interesting features of this conference was the food (breakfast, lunch, snacks, and dinner) and the evening events at the conference location. Every effort was made by the conference organizers to facilitate conversations between the attendees, which, in the end, is what makes a conference more than a series of lectures. The cacophony of voices engaged in earnest and passionate conversations was invigorating. I was originally a bit anxious about attending a conference where, even though the sessions were in English, the attendees were more comfortable in languages that I have never studied. I think I will remedy that in the future.

One of the exciting parts of attending Øredev was meeting several people that I have interviewed on the Software Process and Measurement Cast (some multiple times). Interviewees who were also Øredev speakers included:

Jim Benson – SPaMCAST 400 – Jim Benson Personal Kanban and More

Allan Kelly – SPaMCAST 353 – Allan Kelly, #NoProjects

Mike Burrows – SPaMCAST 396 – Mike Burrows, Agendashift, SPaMCAST 310 – Mike Burrows, Kanban from the Inside

Marcus Hammarberg – SPaMCAST 414 – Marcus Hammarberg, Agile In the Real World

Listen to their interviews and then watch the videos of their talks on the Øredev site.

I did talks on Agile Risks and on using Storytelling to Define the Big Picture In Agile (the links are to the videos). A funny story: near the end of the talk on storytelling, for some reason the projection system froze. It was still a fun presentation, and I was happy it was not my computer that was the problem.

The long and short of Øredev? This is a developer conference. There is lots of live coding going on. But Øredev is more than a conference about development. There are topics for everyone involved in the delivery of functional software.


Categories: Process Management

SPaMCAST 418 – Larry Cooper, The Agility Series

SPaMCAST Logo

http://www.spamcast.net

Listen Now
Subscribe on iTunes
Check out the podcast on Google Play Music

The Software Process and Measurement Cast 418 features our interview with Larry Cooper. Larry and I talked about his project, The Agility Series. The series is providing the community with an understanding of how Agile is applied and how practitioners are interpreting practices and principles.

Reminder: Schedule Change for Vacation, Travel and Holiday

Last week I was in Sweden for the Øredev conference, with a day of sightseeing thrown in. New listeners joining from the conference: WELCOME. The trip was great, and the conference was awesome and mind expanding. I will publish a review soon. Brazil and “Métricas 2016” is next, followed immediately by the Thanksgiving holiday in the United States. This is the long way of saying that I will be publishing on an every-other-week basis through November 27th. We will be back to weekly posting in December.

Larry Cooper’s Bio
Larry Cooper is a Project Executive in the public and private sectors in Canada and the USA and holds over 20 industry certifications in Agile, Project Management, and ITIL. His books include “Agile Value Delivery: Beyond the Numbers” (which was endorsed by a co-author of the Agile Manifesto) as well as “The Agility Series,” to be published over the next year or two. He was also the Mentor for “PRINCE2 Agile,” published by AXELOS.

Larry has  been an invited speaker at numerous conferences and symposia for the PMI, BAWorld, and the itSMF. He has presented global webinars with BrightTalk and ProjectManagement.com and authored more than  30 courses including an Agile-oriented curriculum that is sold directly to training companies in Canada and the USA.

The first two books in the Agility Series, on Organizational Agility and Leadership Agility, are available for free download at www.mplaza.ca, as is The Adaptive Strategy Framework Guide.

You can join the adventure with the rest of the Wisdom Council for the Agility through their LinkedIn group https://www.linkedin.com/groups/8539263

Re-Read Saturday News

The read/re-read of The Five Dysfunctions of a Team by Patrick Lencioni (published by Jossey-Bass) continues on the Blog. Lencioni’s model of team dysfunctions (we get through most of it this week) is illustrated through a set of crises that highlight the common problems that make teams into dysfunctional collections of individuals. The current entry features the sections titled Film Noir and Application.

Visit the Software Process and Measurement Cast blog to participate in this and previous re-reads.

Next SPaMCAST

The Software Process and Measurement Cast 419 will feature four essays: pieces from Kim Pries, Jon Quigley, and Gene Hughson, plus one from the SPaMCAST.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management