Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Process Management

Training Strategies for Transformations and Change: Synthesis

Different training tools are sometimes needed!

Organizational transformations, like an Agile transformation, require the acquisition of new skills and capabilities. Gaining new skills and capabilities in an effective manner requires a training strategy. The best transformations borrow liberally from all categories of training strategies to best meet the needs of the transformation program and the culture of the organization. The four major training strategies typically used in Agile (and other IT) transformations have their own strengths and weaknesses. Those attributes make some strategies better suited than others for particular types of knowledge and skill distribution.

Training strategies by use.

Lectures and presentations are the ubiquitous feature of the 21st century corporate meeting. These techniques are useful for spreading awareness and, to a lesser extent, for introducing concepts. The lecture is less efficient for introducing concepts because trainers are often not trained educators, conference/training rooms are not as well appointed as college lecture halls, and learners tend to pay only partial attention whenever possible. The partial attention problem is a reflection of the email and text messages generated by their day jobs. Further difficulties occur when distributed meetings are not supported with proper telecommunications.

Active learning and experiential learning are both excellent strategies for building and supporting skills and capabilities. Each method can include games, activities, discussions and lecture components. The combination of methods for generating and conveying knowledge keeps the learners focused and involved. Involvement helps defeat the common problem of partial attention by keeping the learners busy. The scalability of the two techniques differs, which can lead to a decision to favor one technique over the other. Many transformation programs blend both strategies. For example, I recently observed a program that combined group learning sessions (active learning) with assignments completed outside of class as part of the learner’s day-to-day activities and then debriefed in community of practice sessions (experiential learning).

Mentoring is a specialized form of experience-based learning. Because mentoring is generally a one-on-one technique, it is generally not scalable for large-scale change programs; however, it is a good tool to transfer knowledge from one person to another and an excellent tool to support and maintain capabilities. Mentoring is most often used for specialized skills rather than general skills that need to be broadly distributed.

Transformation programs generally need to use more than one training strategy. Each strategy makes sense for specific scenarios. Crafting an overall strategy requires understanding which skills, capabilities and knowledge need to be fostered or built within the organization, then the distribution of the learners, the tools available and finally the organization’s culture. Once you understand the requirements, the training strategy can be crafted using a mixture of the training techniques.


Categories: Process Management

Swift self reference in inner closure

Xebia Blog - Fri, 12/19/2014 - 13:06

We all pretty much know how to safely use self within a Swift closure. But have you ever tried to use self inside a closure that's inside another closure? There is a big chance that the Swift compiler crashed (Xcode 6.1.1) without giving you an error in the editor and without any error message. So how can you solve this problem?

The basic working closure

Before we dive into the problem and solution, let's first have a look at a working code sample that only uses a single closure. We can create a simple Swift Playground to run it and validate that it works.

class Runner {
    // The Runner keeps a strong reference to every completion closure it stores.
    var closures: [() -> ()] = []

    func doSomethingAsync(completion: () -> ()) {
        // A real implementation would start asynchronous work (e.g. a network call)
        // and call completion when it finishes; here we store the closure and call it directly.
        closures = [completion]
        completion()
    }
}

class Playground {

    let runner = Runner()

    func works() {
        runner.doSomethingAsync { [weak self] in
            self?.printMessage("This works") ?? ()
        }
    }

    func printMessage(message: String) {
        println(message)
    }

    deinit {
        println("Deinit")
    }

}

struct Tester {
    var playground: Playground? = Playground()
}

var tester: Tester? = Tester()
tester?.playground?.works()
tester?.playground = nil

The doSomethingAsync method takes a closure without arguments and has return type Void. This method doesn't really do anything, but imagine it would load data from a server and then call the completion closure once it's done loading. It does, however, create a strong reference to the closure by adding it to the closures array. That means we are only allowed to use a weak reference to self within our closure. Otherwise the Runner would keep a strong reference to the Playground instance and neither would ever be deallocated.

Luckily all is fine and the "This works" message is printed in our playground output. A "Deinit" message is printed as well. The Tester construct is used to make sure that the Playground instance actually gets deallocated.
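For contrast, here is a minimal sketch (not part of the original post) of a hypothetical leaks() method on the same Playground class, showing what the strong-capture variant would look like. The Runner holds the closure, the closure holds self, and self holds the runner, so "Deinit" would never be printed.

func leaks() {
    runner.doSomethingAsync {
        // Strong capture of self: runner -> closure -> self -> runner is a retain cycle,
        // so neither the Playground nor the Runner would ever be deallocated.
        self.printMessage("This creates a retain cycle")
    }
}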

The failing situation

Let's make things slightly more complex. When our first async call is finished and calls our completion closure, we want to load something more and therefore need to create another closure within the outer closure. We add the method below to our Playground class. Keep in mind that the first closure doesn't have [weak self] since we only reference self in the inner closure.

func doesntWork() {
    weak var weakRunner = runner
    runner.doSomethingAsync {

        // do some stuff for which we don't need self

        weakRunner?.doSomethingAsync { [weak self] in
            self?.printMessage("This doesn't work") ?? ()
        } ?? ()
    }
}

Just adding it already makes the compiler crash, without giving us an error in the editor. We don't even need to run it.

(Screenshot: Xcode playground crash alert, 2014-12-19)

It gives us the following message:

Communication with the playground service was interrupted unexpectedly.
The playground service "com.apple.dt.Xcode.Playground" may have generated a crash log.

And when you have such code in your normal project, the editor also doesn't give an error, but the build will fail with a Swift Compiler Error without a clear message of what's wrong:
Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swiftc failed with exit code 1

The solution

So how can we work around this problem? Quite simply, actually: we need to move the [weak self] to the outermost closure.

func doesWork() {
    weak var weakRunner = runner
    runner.doSomethingAsync { [weak self] in

        weakRunner?.doSomethingAsync {
            self?.printMessage("This now works again") ?? ()
        } ?? ()
    }
}

This does mean that it's possible that within the outer closure, self is not nil and in the inner closure it is nil. So don't write code like this:

    runner.doSomethingAsync { [weak self] in

        if self != nil {
            self!.printMessage("This is fine, self is not nil")

            weakRunner?.doSomethingAsync {
                self!.printMessage("This is not good, self could be nil now")
            } ?? ()
        }
    }

There is one more scenario you should be aware of. If you use an if let construction to safely unwrap self, you could create a strong reference again to self. The following sample illustrates this and will create a reference cycle since our runner will create a strong reference to the Playground instance because of the inner closure.

    runner.doSomethingAsync { [weak self] in

        if let this = self {

            weakRunner?.doSomethingAsync {
                this.printMessage("Captures a strong reference to self")
            } ?? ()
        }
    }

This, too, is easily solved by having the inner closure capture a weak reference to the instance, now called this.

runner.doSomethingAsync { [weak self] in

    if let this = self {

        weakRunner?.doSomethingAsync { [weak this] in
            this?.printMessage("We're good again") ?? ()
        } ?? ()
    }
}

Conclusion

Most people working with Swift know that there are still quite a few bugs in it. In this case, Xcode should give us an error in the editor. If your editor doesn't complain but your Swift compiler fails, look for closures like these and correct them. Always be safe and use [weak self] references within closures.

Training Strategies for Transformations and Change: Experiential Learning

You learn to play an instrument by practicing.

Experiential learning, often thought of as learning by doing, can play an important role in any transformation program. In this strategy learners gather knowledge from the combination of doing something, reflecting on what was done and finally generalizing learnings into broader knowledge. The theory holds that knowledge is internalized more effectively and quickly through concrete engagement than through rote learning techniques. The basic steps of experiential learning are:

Experience – The learner is directly involved in experiences that are tied to a real world scenario. The teacher facilitates the experience. Writing your first computer program in a computer lab is an example of a concrete learning experience.

Reflection – The learner reflects on what happened and what they learned during the experience. Reflection typically includes determining what was important about the experience for future experiences. When used in a classroom, the reflection step generally includes sharing reflections and observations with the classmates (a form of a feedback loop). Demonstrating the program you wrote, reviewing the code snippets and sharing lessons learned with the class would be an example of this step.

Generalization – The learner incorporates the experience and what was learned into their broader view of how their job should be performed. The lessons learned from writing the program add to the learner’s base of coding and problem-solving knowledge.

The flow of work through a team using Scrum can be mapped to the experiential learning model. Small slices of work are accepted into a sprint, the team solves the business problem, reflects on what was learned and then uses what was learned to determine what work will be done next. The process follows the experience, reflection, generalization flow.

There are several versions of the three-stage experiential learning model. Conceptually they are all similar; the differences tend to be in how the stages are broken down. For example, Northern Illinois University breaks the reflection step into reflection and “what’s important” steps.

There are several pluses and minuses I have observed in applying experiential learning in transformation programs.

Pluses

  1. Builds on and connects theory to the real world – Theory is often a dirty word in organizations. Experiential learning allows learners to experience a concept that can then be tied back to higher-level concepts such as theory.
  2. Experiences can be manufactured – Meaningful real-life examples can be designed to generate or focus on specific concepts. When I learned to code my first assembler program in the LSU computer lab, I was assigned a specific project by my TA.  This was an example of experiential learning.
  3. Can be coupled with other learning techniques – Experiential learning techniques can be combined with other learning strategies to meet logistical and cultural needs. For example classic lecture methods can be combined with experiential learning. My assembler class at LSU included lecture (theory) and lab (experiential) features.
  4. Individuals can apply experiential learning outside of the classroom – Motivated learners often apply the concept of experiential learning to add skills in a non-classroom environment when the skill may not be generally applicable to the team or organization. For example, I had an employee learn to write SQL when I got frustrated waiting for the support team to write queries for him.  He learned by writing simple queries and debugging the results (he also used the internet for reference).

Minuses

  1. Not perfectly scalable – Experiential learning in the classroom or organization tends to require facilitation. Facilitating large groups requires either multiple facilitators or breaking the group up into smaller groups, which extends the time it takes to deliver the training. Without good facilitation experiential learning is less effective (just ask my wife about my skills facilitating her experience learning to drive a stick shift).
  2. Requires careful design – Experience, if not designed or facilitated well, can lead to learning the wrong lesson or to failures that impact the learner’s motivation.
  3. Reflection and generalization steps are often overlooked – The steps after the experience are occasionally not given the focus needed to draw out the concepts that were learned and then allow them to be incorporated into the broader process of how work is performed.

Can anyone learn to ride a bicycle from a book or from a lecture? No, but you can learn to ride a bicycle using experiential learning (the reality is that it might be the only way). Experiential learning lets the learner try to ride the bike, fall and skin their knees, reflect on how to improve and then try again.


Categories: Process Management

Training Strategies for Transformations and Change: Active Learning

Group discussion is an active learning technique.

Change is often premised on people learning new methods, frameworks or techniques.  For any change to be implemented effectively, change agents need to understand the most effective way of helping learners learn.  Active learning is a theory of teaching based on the belief that learners should take responsibility for their own learning.  Techniques that support this type of teaching exist on a continuum that begins with merely fostering active listening, moves to interactive lectures and ends with investigative inquiry techniques. Learning using games is a form of active learning (see www.tastycupcake.org for examples of Agile games).  Using active learning requires understanding the four basic elements of active learning, participants’ responsibilities and the keys to success.

There are four basic elements of active learning that need to be worked into content delivery.

  1. Talking and listening – The act of talking about a topic helps learners organize, synthesize and reinforce what they have learned.
  2. Writing – Writing provides a mechanism for students to process information (similar to talking and listening). Writing can be used when groups are too large for group- or team-level interaction or are geographically distributed.
  3. Reading – Reading provides the entry point for new ideas and concepts. Coupling reading with other techniques such as writing (e.g. generating notes and summaries) improves learner’s ability to synthesize and incorporate new concepts.
  4. Reflecting – Reflection provides learners with time to synthesize what they have learned. For example providing learners with time to reflect on how they would teach or answer questions on the knowledge gained in a game or exercise helps increase retention.

Both learners and teachers have responsibilities when using active learning methods. Learners have the responsibility to:

  1. Be motivated – The learner needs to have a goal for learning and be willing to expend the effort to reach that goal.
  2. Participate in the community – The learner needs to engage with other learners in games, exercises and discussions.
  3. Be able to accept, shape and manage change – Learning is change; the learner must be able to incorporate what they have learned into how they work.

While, by definition, active learning shifts the responsibility for learning to the learner, not all of the responsibility rests on the learner. Teachers and organizations have the responsibility to:

  1. Set goals – The teacher or organization needs to define or identify the desired result of the training.
  2. Design curriculum – The trainer (or curriculum designer) needs to ensure they have designed the courseware needed to guide the learner’s transformations.
  3. Provide facilitation – The trainer needs to provide encouragement and help make the learning process easier.

As a trainer in an organization pursuing a transformation, there are several keys to successfully using active learning.

  1. Use creative events (games or other exercises) that generate engagement.
  2. Incorporate active learning in a planned manner.
  3. Make sure the class understands the process being used and how it will benefit them.
  4. In longer classes, vary pair, team or group membership to help expose learners to as diverse a set of points-of-view as possible.
  5. All exercises should conclude with a readout/presentation of results to the class.  Varying the approach taken during the readout (have different people present, ask different questions) helps focus learner attention.
  6. Negotiate a signal for students to stop talking. (Best method: The hand raise, where when the teacher raises his or her hand everyone else raises their hand and stops talking.)

While randomly adding a discussion exercise at the end of a lecture module uses an active learning technique, it is not a reflection of an effective approach to active learning.  When building a class or curriculum that intends to use active learning, the games and exercises that are selected need to be carefully chosen to elicit the desired learning impact.


Categories: Process Management

Training Strategies for Transformations and Change

Presentations are just one learning strategy.

How many times have you sat in a room, crowded around tables, perhaps taking notes on your laptop or maybe doing email while someone drones away about the newest process change? All significant organizational transformations require the learning and adoption of new techniques, concepts and methods. Agile transformations are no different. For example, a transformation from waterfall to Scrum will require developing an understanding of Agile concepts and Scrum techniques. Four types of high-level training strategies are often used to support process improvement, such as Agile transformations. They are:

  1. Classic Lecture/Presentation – A presenter stands in front of the class and presents information to the learners. In most organizations the classic classroom format is used in conjunction with a PowerPoint deck, which provides counterpoint and support for the presenter. The learner’s role is to take notes from the lecture, interactions between the class and presenter and the presentation material and then to synthesize what they have heard. Nearly everyone in an IT department is familiar with this type of training from attending college or university. An example in my past was Psychology 101 at the University of Iowa with 500+ of my closest friends. I remember the class because it was at 8 AM and because of the people sleeping in the back row. While I do not remember anything about the material many years later, this technique is often very useful in broadly introducing concepts. This method is often hybridized with other strategies to more deeply implement techniques and methods.
  2. Active Learning – A strategy based on the belief that learners should take responsibility for their own learning. Learners are provided with a set of activities that keep them busy doing what is to be learned while analyzing, synthesizing and evaluating. Learners must do more than just listen and take notes. The teacher acts as a facilitator to help ensure the learning activities include techniques such as Agile games, working in pairs/groups, role-playing with discussion of the material and other co-operative learning techniques. Active learning and lecture techniques are often combined.
  3. Experiential Learning – Experiential learning is learning from experience. The learner performs tasks, reflects on performance and possibly suffers the positive or negative consequences of making mistakes or being successful. The theory behind experiential learning is that for learning to be internalized, the experience needs to be engaged concretely and actively rather than in a more theoretical or purely reflective manner. Process improvement based on real-time experimentation in the Toyota Production System is a type of experiential learning.
  4. Mentoring – Mentoring is a process that uses one-on-one coaching and leadership to help a learner do concrete tasks with some form of support. Mentoring is a form of experience-based learning, most attributable to the experiential learning camp. Because mentoring is generally a one-on-one technique, it is generally not scalable for large-scale change programs.

The majority of change agents have not been educated in adult learning techniques and leverage classic presentation/lecture techniques spiced with exercises. However, each of these high-level strategies has value and can be leveraged to help build the capacity and capabilities for change.


Categories: Process Management

5 Reasons Product Owners Should Let Teams Work Out of Order

Mike Cohn's Blog - Tue, 12/16/2014 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

A product owner hands 10 story cards to the team. The team reads them and hands the fifth and sixth cards back to the product owner. By the end of the sprint, the team delivers the functionality described on cards 1, 2, 3, 4, and 7. But the team has not touched the work of cards 5 and 6.

And I say this is OK.

Standard agile advice is that a team should work on product backlog items in the order prescribed by the product owner. And although this is somewhat reasonable advice, I want to argue that good agile teams violate that guideline all the time.

There are many good reasons for a team to work out of order. Let’s consider just a few, and consider it sufficient evidence that teams should be allowed to work out of order.

1. Synergies

There are often synergies between items near the top of a product backlog—while a team is working on item No. 3, they should be allowed to work on No. 6. If two items are in the same part of the system and can be done faster together than separately, this is usually a good tradeoff for a product owner to allow.

2. Dependencies

A team may agree that No. 4 is more important than No. 5 and No. 6 on the product backlog. Unfortunately, No. 4 can’t be done until No. 7 has been implemented. Finding a dependency like this is usually enough to justify a team working a bit out of the product owner’s prioritized sequence.

3. Skillset Availability

A team might love to work on the product owner’s fourth top priority, but the right person for the job is unavailable. Sure, this can be a sign that additional cross-training is needed on that team to address this problem – but, that is often more of a long-term solution. And, in the short term, the right thing to do might simply be to work a bit out of order on something that doesn’t require the in-demand skillset.

4. It’s More Exciting

OK, this one may stir up some controversy. I’m not saying the team can do 1, 2, 3, 4, and then number 600. But, in my example of 1, 2, 3, 4 and 7, choosing to work on something because it’s more exciting to the team is OK.

On some projects, teams occasionally hit a streak of product backlog items that are, shall we say, less than exciting. Letting the team slide slightly ahead sometimes just to have some variety in what they’re doing can be good for the morale of the team. And that will be good for product owners.

Bonus Reason 4a: It’s More Exciting to Stakeholders

While on the subject of things being more exciting, I’m going to say it is also acceptable for a team to work out of order if the item will be more exciting to stakeholders.

It can sometimes be a challenge to get the right people to attend sprint reviews. It gets especially tough after a team runs a few boring ones in a row. Sometimes this happens because of the nature of the high-priority work—it’s stuff that isn’t really visible or is esoteric perhaps to stakeholders.

In those cases, it can be wise to add a bit of sex appeal to the next sprint review by making sure the team works on something stakeholders will find interesting and worth attending the meeting for.

5. Size

In case the first four (plus) items haven’t convinced you, I’ve saved for last the item that proves every team occasionally works out of product owner order: A team may skip item 5 on the product backlog because it’s too big. So they grab the next one that fits.

If a team were not to do this, they would grab items 1, 2, 3, and 4 and stop, perhaps leaving a significant portion of the sprint unfilled. Of course, the team could look for a way to perhaps do a portion of item 5 before jumping down to grab number 7. But sometimes that won’t be practical, which means the team will at least occasionally work out of order.
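To make the "grab the next one that fits" idea concrete, here is a small illustrative sketch (written in Swift to match the earlier code samples in this roundup; the story-point estimates and capacity are invented, not from the original post) of a team greedily filling a sprint in backlog order:

// Hypothetical backlog: story-point estimates in the product owner's priority order.
let backlogEstimates = [5, 3, 5, 3, 13, 8, 3]
var remainingCapacity = 20          // the team's forecast capacity for the sprint
var selectedItems: [Int] = []

for index in 0..<backlogEstimates.count {
    let points = backlogEstimates[index]
    if points <= remainingCapacity {
        selectedItems.append(index + 1)     // record the 1-based backlog position
        remainingCapacity -= points
    }
}

println("Items pulled into the sprint: \(selectedItems)")   // [1, 2, 3, 4, 7]

With these invented numbers the team pulls in items 1, 2, 3, 4 and 7 and skips 5 and 6 because they are too big for the remaining capacity, which is exactly the scenario described above.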

Product Owners Aren’t Perfect

A perfect product owner would know everything above. The perfect product owner would know that the team’s DBA will be full from tasks from the first four product backlog items, and so wouldn’t put a database-intensive task fifth. The perfect product owner would know that items 2 and 5 both affect the same Java class, and would naturally prioritize them together.

But most product owners have a hard time being perfect. A better solution is for them to put the product backlog in a pretty good prioritized order, and then leave room for fine tuning from the team.

What do you do when inertia wins?

Step 3 is to take smaller bites!

Changing how any organization works is not easy.  Many different moving parts have to come together for a change to take root and build up enough inertia to pass the tipping point. Unfortunately, because of misalignment, misunderstanding or poor execution, change programs don’t always win the day.  This is not news to most of us in the business.  What should happen after a process improvement program fails?  What happens when the wrong kind of inertia wins?

Step One:  All failures must be understood.

First, perform a critical review of the failed program that focuses on why and how it failed.  The word critical is important.  Nothing should be sugar coated or “spun” to protect people’s feelings.  A critical review must also have a good dose of independence from those directly involved in the implementation.  Independence is required so that the biases and decisions that led to the original program can be scrutinized.  The goal is not to pillory those involved, but rather to make sure the same mistakes are not repeated.  These reviews are known by many names: postmortems, retrospectives or troubled project reviews, to name a few.

Step two:  Determine which way the organization is moving.

Inertia describes why an object in motion tends to stay in motion or those at rest tend to stay at rest.  Energy is required to change the state of any object or organization; understanding the direction of the organization is critical to planning any change. In process improvement programs we call the application of energy change management.  A change management program might include awareness building, training, mentoring or a myriad of other events all designed to inject energy into the system. The goal of that energy is either to amplify or change the performance of some group within an organization.  When not enough or too much energy is applied, the process change will fail.

Just because a change has failed does not mean all is lost.  There are two possible outcomes to a failure. The first is that the original position is reinforced, making change even more difficult.  The second is that the target group has been pushed into moving, maybe not all the way to where they should be or even in the right direction, but the original inertia has been broken.

Frankly, both outcomes happen.  If the failure is such that no good comes of it, then your organization will be mired in the muck of living off past performance.  This is similar to what happens when a car gets stuck in snow or sand and digs itself in.  The second scenario is more positive: while the goal was not attained, the organization has begun to move, making further change easier.  To return to the car stuck in the snow, a technique taught to many of us who live in snowy climates is “rocking,” shifting the car back and forth to get it moving.  Movement increases the odds that you will be able to break free and get going in the right direction.

Step Three:  Take smaller bites!

The lean startup movement provides a number of useful concepts that can be used when changing any organization.  In Software Process and Measurement Cast 196, Jeff Anderson talked in detail about leveraging the concepts of lean start-ups within change programs (Link to SPaMCAST 196).  A lean start-up delivers the minimum amount of functionality needed to generate feedback and to further populate a backlog of manageable changes. The backlog should be groomed and prioritized by a product owner (or owners) from the area being impacted by the change.  This will increase ownership and involvement and generate buy-in.  Once you have a prioritized backlog, make the changes in a short, time-boxed manner while involving those being impacted in measuring the value delivered.  Stop doing things if they are not delivering value and go on to the next change.

Being a change agent is not easy, and no one succeeds all the time unless they are not taking any risks.  Learn from your mistakes and successes.  Understand the direction the organization is moving and use that movement as an asset to magnify the energy you apply. Involve those you are asking to change in building a backlog of prioritized minimum viable changes (mix the concept of a backlog with concepts from the lean start-up movement).  Make changes based on how those who are impacted prioritize the backlog, then stand back to observe and measure.  Finally, pivot if necessary.  Always remember that the goal is not really the change itself, but rather demonstrable business value. Keep pushing until the organization is going in the right direction.  What do you do when inertia wins?  My mother would have said to just get back up, dust yourself off and get back in the game.


Categories: Process Management

SPaMCAST 320 – Alfonso Bucero – Today is a Good Day

www.spamcast.net


Listen to the Software Process and Measurement Cast 320

SPaMCAST 320 features our interview with Alfonso Bucero. We discussed his book, Today Is A Good Day. Attitude is an important tool for a project manager, team member or executive.  In his book Alfonso provides a plan for honing your attitude.

Alfonso Bucero, MSc, PMP, PMI-RMP, PMI Fellow, is the founder and Managing Partner of BUCERO PM Consulting.  He managed IIL Spain for almost two years, and he was a Senior Project Manager at Hewlett-Packard Spain (Madrid Office) for thirteen years.

Since 1994, he has been a frequent speaker at International Project Management (PM) Congresses and Symposiums. Alfonso has delivered PM training and consulting services in Spain, Mexico, UK, Belgium, Germany, France, Denmark, Costa Rica, Brazil, USA, and Singapore. As a believer in Project Management, he teaches that Passion, Persistence and Patience are keys to project success.

Alfonso co-authored the book Project Sponsorship with Randall L. Englund, published by Jossey-Bass in 2006. He authored the book Today is a Good Day – Attitudes for Achieving Project Success, published by Multimedia Publishing in Canada in 2010. He has also contributed to professional magazines in Russia (SOVNET), India (ICFAI), Argentina and Spain. Alfonso co-authored The Complete Project Manager and The Complete Project Manager Toolkit with Randall L. Englund, published by Management Concepts in March 2012. Alfonso published The Influential Project Manager in 2014 with CRC Press in the US.

Alfonso has also published several articles in national and international Project Management magazines. He is a Contributing editor of PM Network (Crossing Borders), published by the “Project Management Institute”.

Contact Alfonso: alfonso.bucero@abucero.com
Twitter:
@abucero
Website: http://www.abucero.com/

Call to action!

We are in the middle of a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog.  Are you participating in the re-read? Please feel free to jump in and add your thoughts and comments!

After we finish the current re-read, we will need to decide which book will be next.  We are building a list of the books that have had the most influence on readers of the blog and listeners to the podcast.  Can you answer the question?

What are the two books that have most influenced your career (business, technical or philosophical)?  Send the titles to spamcastinfo@gmail.com.

First, we will compile a list and publish it on the blog.  Second, we will use the list to drive future “Re-read” Saturdays. Re-read Saturday is an exciting new feature that began on the Software Process and Measurement blog on November 8th.  Feel free to choose your platform; send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

In the next Software Process and Measurement Cast we will feature our essay on the requirements for success with Agile.  Senior management, engagement, culture and coaches are components, but not the whole story.

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important
Date: December 18th, 2014
Time: 11:30am EST
Register Now

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Agile: how hard can it be?!

Xebia Blog - Sun, 12/14/2014 - 13:48

Yesterday my colleagues and I ran an awesome workshop at the MIT conference in which we built a Rube Goldberg machine using Scrum and Extreme Engineering techniques. As agile coaches, one would think that being an Agile team should come naturally to us, but I'd like to share our pitfalls and insights with you, since "we learned a lot" about being an agile team and what an incredibly powerful model a Rube Goldberg machine is for scaled agile product development.

If you're not the reading type, check out the video.

Rube ... what?

Goldberg. According to Wikipedia: a Rube Goldberg machine is a contraption, invention, device or apparatus that is deliberately over-engineered or overdone to perform a very simple task in a very complicated fashion, usually including a chain reaction. The expression is named after American cartoonist and inventor Rube Goldberg (1883–1970).

In our case we set out on a 6 by 4 meter stage divided into 5 sections. Each section had a theme like rolling, propulsion, swinging, lifting, etc. In a fashion it resembled a large software product that has to respond to some event in a (for outsiders) incredibly complex manner by triggering a chain of sub-systems that ends in some kind of end result.

The workspace, scrum boards and build stuff

Extreme Scrum

During the day 5 teams worked in a total of 10 sprints to create the most incredible machine, experiencing everything one can find during "normal" product development. We had inexperienced team members, little to no documentation, legacy systems whose engineering principles were shrouded in mystery, teams that forgot to hold retrospectives, and interfaces that were ignored because the problem "lies with the other team". The huge time pressure of the relatively short sprints and the complexity of what we were trying to achieve created a pressure cooker that brought these problems to the surface faster than anything else, and with Scrum we were forced to face and fix these problems.

Team scrumboard

Build, fail, improve, build

“Most people do not listen with the intent to understand; they listen with the intent to reply.” - Stephen R. Covey

Having 2 minutes to do your planning makes it very difficult to listen, especially when your head is buzzing with ideas, yet sometimes you have to slow down to speed up. Effective building requires you to really understand what your teammate is going to do; pairing proved a very effective way to slow down your own brain and benefit from both rubber ducking and the insight of your teammate. Once our teams reached 4 members we could pair and drastically improve the outcome.

Deadweight with pneumatic fuse

Once the machine had reached a critical size, integration tests started to fail. The teams responded by testing multiple times during the sprint and fixing the broken build rather than adding new features. Especially in mechanical engineering that is not as simple as it sounds. Sometimes a part of the machine would be "refactored", and we had not designed for a simple end-to-end test that could be applied continuously. It took a couple of sprints to get that right.

A MVP that made it to the final product

"Keep your code clean" we teach teams every day. "Don't accept technical or functional debt; you know it will slow you down in the end." Yet it is so tempting. Despite a Scrum Master and an "Über Scrum Master" we still had a hard time keeping our workspace clean, refactoring broken stuff out, optimising and simplifying...

Have an awesome goal

"A true big hairy audacious goal is clear and compelling, serves as unifying focal point of effort, and acts as a clear catalyst for team spirit. It has a clear finish line, so the organization can know when it has achieved the goal; people like to shoot for finish lines." - Collins and Porras, Built to Last: Successful Habits of Visionary Companies

Truth is: we got lucky with the venue. Building a machine like this is awesome and inspiring in itself, and learning how Extreme Scrum can help teams build machines better, faster, more innovatively and with a whole lot more fun is a fantastic goal in itself, but parallel to our build space was a true magnet, something that really focused the teams and made them go that extra mile.

The ultimate goal of the machine

Biggest take away

Building things is hard, and building under pressure is even harder. Even teams that are aware of the theory will be tempted to throw everything overboard and just start somewhere. Applying Extreme Engineering techniques can truly help you; it's a simple set of rules, but it requires an unparalleled level of discipline. Having a Scrum coach can make all the difference between a successful and a failed project.

Re-read Saturday: Developing a Vision and Strategy, Leading Change, John P. Kotter Chapter Five

A vision provides a goal and direction to travel.

John P. Kotter’s book, Leading Change, established why change in organizations can fail and the forces that shape the changes when they are successful. The two sets of opposing forces he identifies in the first two chapters are used to define and illuminate his famous eight-stage model for change. The first stage of the model is establishing a sense of urgency. A sense of urgency provides the energy and rationale for any large, long-term change program. Once a sense of urgency has been established, the second stage in the eight-stage model for change is the establishment of a guiding coalition. If a sense of urgency provides energy to drive change, a guiding coalition provides the power for making change happen. Once we identify or establish a sense of urgency and the power to make change happen, we then have to wrestle with establishing a vision and strategy. A vision represents a picture of a state of being at some point in the future. A vision acts as an anchor that establishes the goal of the transformation. A strategy defines the high-level path to that future.

Kotter begins the chapter by reviewing changes driven by different leadership styles: authoritarian, micromanagement and visionary. Change driven by authoritarian decree (do it because I said so) and micromanagement (I will tell you step-by-step how to get from point A to point B and validate compliance with my instructions) often fails to break through the status quo. In fact, demanding change tends to generate resistance and passive-aggressive behavior due to the lack of buy-in from those involved in the change. Couple the lack of buy-in with the incredible level of effort needed to force people to change and then to monitor that change, and scalability problems will surface. Neither authoritarian- nor micromanagement-driven techniques are efficient for responding to dynamic, large-scale changes. Change driven by vision overcomes these issues by providing the direction and the rationale for why the organization should strive together toward the future defined by the vision.

Effective visions are not easy to craft. Visions are important for three reasons. First, an effective vision provides clarity of direction. A clear direction gives everyone making or guiding the change a clearer set of parameters for making decisions. When lean and Agile teams crisply define the goals of a sprint or Agile release train (SAFe), they are using the same technique to break through the clutter and focus the decision-making process on achieving their goal. Second, visions are important because they provide hope by describing a feasible outcome. A vision of what is perceived as a feasible outcome provides a belief that the pain of change can be overcome. Finally, a vision provides alignment. Alignment keeps people moving in a common direction.

Kotter defines six characteristics of an effective vision.

  1. Imaginable – The people who consume the vision must be able to paint a rational picture in their mind of what the world will be like if the vision is attained.
  2. Desirable – The vision must appeal to the long-term interests of those being asked to change.
  3. Feasible – The vision has to be attainable.
  4. Focused – The vision must provide enough clarity and alignment to guide organizational decisions.
  5. Flexible – The vision must provide enough direction to guide, but not so much detail that it restricts individual initiative.
  6. Communicable – The vision must be consumable and understandable to everyone involved in the change process. Kotter further suggests that if a vision can’t be explained in five minutes it has failed the test of communicable.

In the third stage of the eight-stage model for change, Kotter drills deeply into the rationale for and the definition of an effective vision.  Kotter defines strategy as the logic for how the vision will be attained.  An effectively developed vision makes the process of defining the path (strategy) for attaining the vision far less contentious. The attributes of an effective vision, including being imaginable, feasible and focused, provide enough of a set of constraints to begin the process of defining how the vision can be achieved.


Categories: Process Management

Agile Metrics: The Relationship Between Measurement Framework Quadrants

“I never knew anybody . . . who found life simple. I think a life or a time looks simple when you leave out the details.” – Ursula K. Le Guin, The Birthday of the World and Other Stories

The act of measurement reflects how work was done, how it is being done and what is possible in the future. A measurement framework that supports all of these goals is going to have to reflect some of the details and complexity that are found in the development (broad sense) environment. The simple Agile measurement framework uses the relationships between the areas of productivity, quality, predictability and value to account for and reflect real-world complexity and to help generate some balance. Each quadrant of the model interacts with the others to a greater or lesser extent. The following matrix maps the nuances between the quadrants.

Impact Matrix

The labor productivity quadrant most directly influences the value quadrant. Lower productivity (output per unit of effort) equates to higher costs and less value that can be delivered. Pressure to increase productivity and lower cost can cause higher levels of technical debt, and therefore lower quality. Erratic levels of productivity can translate into time-to-market variability.

Predictability, typically expressed as velocity or time-to-market, most directly interacts with quality at two levels. The first is in terms of customer satisfaction. Delivering functionality at a rate or date that is at odds with what is anticipated will typically have a negative impact on customer satisfaction (quality). Crashing the schedule to meet a date (and be perceived as predictable) will generally cause the team to cut corners, which yields technical debt and higher levels of defects. Lower quality is generally thought to reduce the perceived value of the functionality delivered.

Quality, measured as technical debt or delivered defects, has direct links to predictability (noted earlier) and value. The linkage from quality to value is direct. Software (or any other deliverable) that has lower quality than anticipated will be held in lower regard and be perceived as being less useful. We have noted a moderate relationship between labor productivity and quality through technical debt. This relationship can also be seen through the mechanism of fixing defects. Every hour spent fixing defects is an hour that would normally be spent developing or enhancing functionality.

Value, measured as business value or return on investment, is very strongly related to productivity and quality (as noted earlier).

Based on these relationships we can see that a focus on a single area of the model could cause a negative impact on performance in a different quadrant. For example, a single-minded focus on efficiency can lead to reduced quality and, even more strongly, to less value delivered to stakeholders. The model would suggest the need to measure and set performance-level agreements for value if labor productivity is going to be stressed.

The simple Agile measurement framework provides a means to understand the relationships between the four macro categories of measurement that have been organized into quadrants. Knowledge of those relationships can help an organization or team structure how they measure to ensure the approach taken is balanced.


Categories: Process Management

Extreme Engineering - Building a Rube Goldberg machine with scrum

Xebia Blog - Fri, 12/12/2014 - 15:16

Is agile usable for things other than software development? Well, we knew that already; yes!
But creating a machine in 1 day with 5 teams and continuously changing members using Scrum might be exciting!

See our report below (it's in Dutch for now)

 

Extreme engineering video

 

 

Agile Metrics: Filling The Agile Measurement Framework Quadrants

Agile Metrics Framework

What gets measured depends on the team’s and the organization’s reporting needs and the measurement goals. For instance, an organization that needs to quantitatively prove its transformation will need to consider measures (and metrics) that can be generated consistently across project teams. Organizations whose teams are standalone and do not anticipate the need for baselining or benchmarking can easily leverage team-based relative measures, such as story points. The simple metrics framework suggested here, with potential metrics, is shown below:

  1. Labor Productivity Quadrant
    1. Labor productivity is generally expressed as a functional measure of size per person month, for example: function points per person month or use case points per person month. Labor productivity is typically the measure of choice when an overall transformation program needs to prove efficiency results. These measures are easily comparable between teams and have industry benchmarks available for comparison. The drawbacks are the perceived level of effort to generate the measure and the invasiveness of the process used to generate the size component of the metric.
    2. Story completion (variants include a measure of percentage story completion) is a relatively easy metric for teams to collect and leverage. The simplest form of this measure is a count of the number of stories completed in a sprint (or period of time for Kanban). Adding a time component creates a rate of completion (a metric), which can be used as a variant of velocity.
  2. Quality Quadrant
    1. Customer satisfaction is a measure of how satisfied the customers (or stakeholders) of a project are with the team’s performance or the functionality delivered. Customer satisfaction can be measured using techniques as simple as asking specific stakeholders how they feel about the project, or using very formal techniques such as surveys and calculations like Net Promoter Score. The higher the formality, the more effort will be required to collect and analyze the metric and the more comparable the metric will be between teams and across the life of long-running projects.
    2. Delivered defects are a count of the number of defects found after the code (or other deliverable) is marked as done. This measure is generally considered one of the more important reflections of quality, because all code or other deliverables are potentially implementable after they have been marked as done, which means any defects found, regardless of by whom, could have been found in production. Defects found in production can negatively impact customer satisfaction and the overall business.
    3. Usability is typically a measure of compliance against a set of industry and/or organizational standards. Most teams build usability compliance into the definition of done, therefore what is delivered as done is compliant. The metric is used as a mechanism to reflect progress while functionality is being built.
  3. Predictability Quadrant
    1. Velocity is the average number of units of work delivered in a sprint (or any other repeating unit of time). Typically velocity is expressed as the average number of story points a team delivers per sprint. While similar to productivity, velocity is typically used to represent team predictability. Predictability metrics can be used to generate release plans (when will some group of functionality be ready for production) and in sprint planning (see the calculation sketch after this list).
    2. Time-to-market is very similar to velocity, reflecting how fast functionality is developed and delivered. Time-to-market is generally used for plan-based (non-Agile) projects or in organizations using functional metrics (e.g. function points). An example of a time-to-market metric would be the number of function points delivered per calendar month. Note: like velocity, time-to-market metrics are generally averages and are used in planning exercises or in benchmarking.
    3. Effort burn-down is a measure of a team’s estimate of the number of hours of effort remaining in a sprint to deliver the functionality that was committed to during planning. It is almost always used as a mechanism for teams to anticipate whether they will complete the committed work by the end of the sprint. There are numerous variations on the effort burn-down chart, including story point, task and feature burn-down charts. In every case some measure of work is counted (e.g. hours, tasks, story points), and the number remaining is decremented as work is completed or used up and incremented as new work is discovered.
  4. Value Quadrant
    1. Business value is an estimate of the net value being delivered in a unit of work (e.g. story, epic or feature). While business value is the Holy Grail in this category, it is generally very difficult to estimate at a story or feature level; therefore, tracking value tends to happen at a higher level, such as a release.
    2. Features delivered is a proxy for business value. This measure is typically a count of the number of features delivered in a sprint. Variants of this measure include counting stories or epics (larger user stories). Features and stories reflect requirements; therefore, as the number of features delivered increases, so does the value that users and the product owner perceive.
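
To make the quadrant metrics above more concrete, here is a minimal Python sketch that derives a few of them from per-sprint data most teams already track. The sprint records, field names and the 120-point backlog are hypothetical assumptions of the sketch; the calculations simply follow the definitions above (velocity as average points delivered per sprint, a story completion rate, delivered defects per story, and a simple release forecast).

```python
from statistics import mean

# Hypothetical per-sprint records; most teams already capture equivalents
# of these fields in their tracking tool.
sprints = [
    {"points_committed": 30, "points_done": 26, "stories_done": 9, "delivered_defects": 2},
    {"points_committed": 28, "points_done": 27, "stories_done": 10, "delivered_defects": 1},
    {"points_committed": 32, "points_done": 25, "stories_done": 8, "delivered_defects": 3},
]

# Predictability quadrant: velocity as average story points delivered per sprint.
velocity = mean(s["points_done"] for s in sprints)

# Labor productivity quadrant: story completion expressed as a rate.
completion_rate = mean(s["points_done"] / s["points_committed"] for s in sprints)

# Quality quadrant: delivered defects per completed story.
defects_per_story = sum(s["delivered_defects"] for s in sprints) / sum(s["stories_done"] for s in sprints)

def sprints_remaining(backlog_points: float) -> float:
    """Release planning: sprints a remaining backlog would take at the observed velocity."""
    return backlog_points / velocity

print(f"Velocity: {velocity:.1f} points/sprint")
print(f"Story completion rate: {completion_rate:.0%}")
print(f"Delivered defects per story: {defects_per_story:.2f}")
print(f"Sprints to deliver a 120-point backlog: {sprints_remaining(120):.1f}")
```

A team could just as easily keep the same records in a spreadsheet; the point is only that each quadrant can be served by data that is already a byproduct of normal sprint tracking.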

The measures and metrics noted above barely scratch the surface of what can be measured. What should be measured is dependent on the needs and goals of the team and the organization. In ALL cases the measures and metrics must be vetted to ensure they meet the Agile measurement philosophies.


Categories: Process Management

Agile Metrics: Differences Based On Organizational Levels

Metrics Are About Prediction

There are three reasons to measure. The first is to guide specific behaviors. The second is to provide information on the status of a process. And the third is to help predict the future. At a team level it is easy to take a very narrow view of metrics and measurement; however, the organization is another significant stakeholder in the collection and consumption of metrics information. Teams and other organizational stakeholders have different metrics needs for each of the three basic reasons for measuring. Part of maturing as an Agile organization is the development of a common understanding of metrics needs that includes the differences between groups. Reaching a common understanding is a step toward developing the mechanisms to accommodate all of the relevant metrics needs within the organization.

 

Reason to Measure: Guide Behaviors

  • Agile Team Perspective: The goal of metrics and tools at a team level is to support tactical behaviors focused on delivering the functionality the team has committed to delivering. Metrics can be delivered with tools such as card walls (the simple metric of a card moving across the board), burn-down charts or story completion charts. These tools (also known as information radiators) provide information that teams generally find useful for guiding behaviors such as swarming, collaboration and continuous re-planning.
  • Organizational Perspective: The goal of measurement that guides behavior at the organizational level is to reinforce desired overall Agile behavior. The metrics needed to support and reinforce Agile behavior will evolve as an organization completes its Agile transformation. Examples of organizational metrics that guide behavior include skills/capabilities tracking (gamification, a mechanism that leverages the competitive attributes of the target audience). As the transformation matures, measurement against Agile Maturity Models can be leveraged to guide behavior.

Reason to Measure: Provide Status

  • Agile Team Perspective: The team shares status on a daily basis during the stand-up/Scrum meeting while leveraging tools like the card wall and burn-down charts as metrics and information radiators.
  • Organizational Perspective: The burn-down chart provides team-level status information that can be shared across multiple layers of the organizational hierarchy; however, team-level data tends to be seen as too granular as projects morph into programs and status is passed up the hierarchy. Program-level burn-up charts and story maps provide quantifiable measurement feedback that is accessible to senior leaders.

Reason to Measure: Predict Future

  • Agile Team Perspective: Scrum and Scrumban teams need to be able to see the work in front of them to understand how to plan on a short-, medium- and long-term basis. Tools like burn-down charts (short term), burn-up charts (program-level view), story maps and product roadmaps (both long term) provide a quantified view of progress (see the sketch below).
  • Organizational Perspective: Organizations need to develop tactical and strategic plans that are supported by software functionality. Portfolio metrics and information radiators (story maps and product roadmaps) leverage naturally occurring data from project performance.
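
To make the "provide status" and "predict future" rows concrete, the sketch below shows the data behind two of the radiators mentioned above: a sprint burn-down (hours remaining per day, compared here against an illustrative straight-line "ideal" burn, which is an assumption of this sketch rather than part of the post) and a program-level burn-up (cumulative points delivered against total scope). All numbers and field names are invented for illustration.

```python
# Sprint burn-down: estimated effort-hours remaining at the start of a
# hypothetical ten-day sprint (day 0) and at the end of each day, compared
# against a straight-line ideal burn.
remaining_hours = [120, 110, 98, 86, 80, 62, 50, 36, 24, 10, 0]
ideal_burn_per_day = remaining_hours[0] / (len(remaining_hours) - 1)

for day, remaining in enumerate(remaining_hours):
    ideal = remaining_hours[0] - ideal_burn_per_day * day
    flag = "  <- behind the ideal line" if remaining > ideal else ""
    print(f"Day {day:2}: {remaining:3} h remaining{flag}")

# Program-level burn-up: cumulative story points delivered per sprint against
# total scope, the view that tends to work better for senior stakeholders.
delivered_per_sprint = [24, 26, 25, 28]
total_scope_points = 200
delivered_so_far = 0
for sprint, points in enumerate(delivered_per_sprint, start=1):
    delivered_so_far += points
    print(f"Sprint {sprint}: {delivered_so_far}/{total_scope_points} points delivered")
```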

Different stakeholders have different measurement needs and perspectives. Occasionally there is a suggestion that the only measurement data that Agile should generate is what the team needs. While teams and other organizational stakeholders, such as product, IT and executive management, can (and should) use similar tools, organizational data needs extend to being able to monitor and guide the Agile transformation and other process improvement efforts. Those needs will require everyone involved to collect a wider range of data and generate different metrics.


Categories: Process Management

Software Development Linkopedia December 2014

From the Editor of Methods & Tools - Wed, 12/10/2014 - 15:37
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about slow programming, technical career paths, Agile QA, Scrum backlog refinement meetings, being a better test manager, java BDD, mixing Waterfall and Agile, the TDD cycle and dealing with bad Java code. Blog: The Case for Slow Programming Blog: Coding, Fast and Slow: Developers and the Psychology of Overconfidence Blog: Climbing off the CTO ladder Blog: What Does QA Do on the First Day of a Sprint? Blog: Stop ...

Agile Metrics: A Simple Framework


When driving to work or watching a baseball game, we can observe Newton’s Third Law of Motion, “for every action there is an equal and opposite reaction.” The engine burns gasoline, and the engine and transmission make the wheels turn, generating motion. A baseball player swings his bat and the ball flies from the stadium. Measurement has a similar set of effects. In Agile Metrics: Philosophy we discussed the impact of the effort required for measurement on teams (every hour spent measuring is one less hour building functionality – equal and opposite). Equally important is the impact of what is being measured on behavior. What is measured signals what is important to the project teams and the organization, therefore influencing both what work is done and how the work is done. All measurement programs must have balance so as not to let one measure create runaway behavior that generates too much of a good thing and overwhelms other desired behaviors. To paraphrase the Third Law of Motion: for every measure there must be an opposite measure. A simple four-quadrant metrics framework can be used to shape a balanced approach to measurement. The framework includes four measurement quadrants that push and pull against each other to minimize runaway behavior.

  1. Labor Productivity – Labor productivity measures the efficiency of the transformation of labor into something of higher value. For example, the productivity of a software project would be measured as the amount of software produced per hour (or, more typically, per person-month). The metric is generally represented as an average value for a team over a period of time. Productivity is very similar to the common Agile metric velocity, which measures how many stories or story points a team averages per sprint. A singular focus on productivity or velocity can push teams toward working more efficiently, or it can cause teams to cut corners and undervalue quality and value (a sketch of one way to organize metrics across the quadrants follows this list).
  2. Quality – Quality measures the compliance of the functionality delivered against a standard. This is typically assessed by some form of comparison. Testing, for example, is a comparison of functionality against the standard of predicted behavior. Standards can include technical coding standards, security standards, measures of technical debt (corners cut), customer satisfaction or delivered defects.
  3. Predictability – Predictability measures a team’s (or organization’s) ability to deliver on its commitments. Teams commit to the number of stories (or story points) they will deliver in a sprint, and similarly IT departments and even organizations commit to delivering levels of value and functionality to their stakeholders. Measures such as time-to-market (of which there are many variants), the number of incomplete stories or program-level burn-up charts are often used to measure and highlight predictability.
  4. Value – Value measures the worth or usefulness of the functionality (or other deliverable) delivered. Value can be a difficult concept to quantify, especially when the functionality (or any deliverable) does not interface with the ultimate customer. However, most projects have gone through a qualification process with some form of cost/benefit analysis. While this process can be very informal in some organizations, someone has gone through a process to rationalize the benefits that are expected from the project or feature they are requesting. A typical measure for value is return on investment (ROI).
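
One lightweight way to keep the four quadrants visibly in tension is to record each metric against the quadrant it pushes on and flag any quadrant left uncovered. The quadrant names follow the framework above; the specific metrics and the "every quadrant needs at least one metric" rule are illustrative assumptions only, not a prescription from the post.

```python
from collections import defaultdict

QUADRANTS = ("Labor Productivity", "Quality", "Predictability", "Value")

# Hypothetical metric catalogue for one team; each metric is tagged with the
# quadrant it pushes on, so an uncovered quadrant is immediately visible.
metrics = [
    {"name": "velocity (story points per sprint)", "quadrant": "Predictability"},
    {"name": "delivered defects", "quadrant": "Quality"},
    {"name": "stories completed per sprint", "quadrant": "Labor Productivity"},
    {"name": "estimated ROI per release", "quadrant": "Value"},
]

by_quadrant = defaultdict(list)
for metric in metrics:
    by_quadrant[metric["quadrant"]].append(metric["name"])

for quadrant in QUADRANTS:
    chosen = by_quadrant[quadrant]
    status = ", ".join(chosen) if chosen else "NONE: metric set is unbalanced"
    print(f"{quadrant:18}: {status}")
```

The same tagging idea works equally well in a spreadsheet or a metrics dashboard; the code only makes the balance check explicit.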

A balanced metrics model will help ensure that one measure does not generate unanticipated negative results. For example, an over-focus on quality can cause organizations to spend more time and effort on testing than needed, causing costs to rise and value to diminish. There is no single formula for a balanced set of metrics; each organization will need to generate a balance based on its business context.


Categories: Process Management

Quote of the Month December 2014

From the Editor of Methods & Tools - Tue, 12/09/2014 - 18:23
Stories shouldn’t be small because they need to fit into an iteration, but because the world shouldn’t end just because a story turns out to be wrong. Stories are based on assumptions about business value, and those assumptions might turn out to be right or wrong. The key questions for story sizing shouldn’t be about the iteration length, but about how much business stakeholders want to invest in learning whether a proposed change will actually give them what they assumed. Source: Fifty Quick Ideas to Improve your User Stories, Gojko Adzic and ...

Who Picks the Sprint Length on a Scrum Team?

Mike Cohn's Blog - Tue, 12/09/2014 - 15:00

An important consideration for every Scrum team is how long its sprints should be. Choose a length that’s too long, and it will be hard to keep change out of the sprint. Choose a length that’s too short, and a team may struggle with completing significant work within the sprint or weaken their definition of done to do so.

But who is it that gets to select a team’s sprint length?

Of course, the answer is the whole team – that collective of ScrumMaster plus product owner plus team members such as programmers, testers, designers, DBAs, analysts and so on.

But what if that broad set of individuals cannot agree? Do they argue endlessly, perhaps sticking with their waterfall or ad hoc process until consensus finally emerges?

No. The ScrumMaster is ultimately the one who gets to choose a team’s sprint length.

A good ScrumMaster will do everything possible to arrive at a consensus. But, when the ScrumMaster exhausts his or her collaborative, facilitative skills without arriving at a consensus, the good ScrumMaster makes the decision.

This should not happen often. I hope most ScrumMasters never need to say, “I’ve listened to everyone, but here’s what we’re doing.” But since the ScrumMaster can be considered a team’s process owner, the ultimate decision does belong to the ScrumMaster.

Let’s consider one example from my past.

In this case, I was consulting to a team doing four-week sprints. They were struggling to pull the right amount of work into their sprints. For the six months before I’d met them, team members were consistently dropping about a third of the work each sprint.

They were a good team doing high-quality work. They simply didn’t know how much of it they could do in four weeks. Their optimism was getting the better of them, and they were consistently overcommitting.

I asked the team to think about how they’d like to solve the problem and tell me their suggestions the next day. I was thrilled the next day when they announced that they should clearly change the length of their sprints. “Yes,” I told them, “Definitely.”

They were relieved that I agreed with them and said so: “Wow! We didn’t think you’d let us go to six-week sprints!”

I had to inform them that while I agreed with changing sprint length, the better solution would be to go to shorter rather than longer sprints.

We ended with me—as a consulting ScrumMaster—setting a two-week length.

Why did I do this?

The team was already pulling too much work into a four-week sprint. They were, in fact, probably pulling six weeks of work into each four-week sprint. But, if they had gone to a six-week sprint, they probably would have pulled eight or nine weeks of work into those!

This team needed more chances to learn how much work fit into a sprint (of any length). As their ScrumMaster—especially coming to the team as a consultant—I could see that more easily than they could.

I want to end by repeating my caution that this is not something that should happen very often. I can count on one hand the number of times when I’ve flat-out chosen a length for the team after failing to gain consensus.

I’ll stand by the value of having done so in each case. But I only did so after significant effort to gain consensus. Overriding team members on something as important as sprint length should be done with great caution. But, it’s a process issue and, therefore, in the domain of the ScrumMaster.

What about you? Were there times when your team couldn’t agree on a sprint length? How did you resolve it?

Agile Metrics: Philosophy

This is measurement that encourages behavior.

Discussions of software development, maintenance and enhancement measurement are generally tense, even at the best of times. Measurement has a significant amount of baggage, ranging from using measures to grade individuals for team activities to measuring without sharing the results with those being measured. Add the philosophies of Agile into the mix and the discussion becomes even more difficult, because, to be optimally efficient, Agile measurement requires an organization to adopt a different set of measurement philosophies that are more in line with the principles found in the Agile Manifesto.

  • Reinforce desired Agile behavior – Measurement has a very strong behavioral component. You get what you measure; therefore, the measures selected need to support and promote the behavior you want in the organization. For example, organizations often try to measure individual productivity or velocity, which reinforces individualism, whereas the more Agile behavior would be team collaboration, asking for help and swarming to problems. Measure team productivity and velocity rather than focusing on individuals.
  • Focus on results – Agile techniques are geared to delivering potentially implementable functionality early and often. Potentially implementable functionality equates to value for the organization. Agile measurement should focus on the value that is delivered. Examples of value would include functionality delivered, stories completed or estimated business value.
  • Measure trends – The direction a measure is trending is typically more important than an individual observation (common cause versus special cause variation). Given the short cycles (sprints) most Agile teams use, teams can accumulate enough observations of their processes and the functionality being delivered to understand whether they are improving or not (see the trend sketch after this list).
  • Easy to collect – All measurement requires effort, typically from both the people doing the measurement and those being measured. The overhead of measurement is time that is not being used to deliver value, therefore it should be minimized. The measurement information should be easy to collect or, in a perfect world, be a natural byproduct of the tools and techniques being used on the project.
  • Includes context – Knowing the bare numbers alone does not allow a nuanced interpretation of performance. Collecting and storing the context that helped generate a specific result (write the story down and save it as text attached to the measurement data) will help teams and managers understand and use the data.
  • Creates real conversation – Measurement needs to generate a dialog in which all stakeholders of IT and projects can understand the value that is being delivered, discuss performance and find ways to improve the delivery of value.
  • Measure only what is absolutely needed – Define and collect measures and metrics that will be actively used to guide the organization or make decisions. Collecting more information than you will need to answer questions and make decisions will generate more overhead for teams to overcome.
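
As a small illustration of the "measure trends" principle, the sketch below contrasts individual velocity observations with a rolling three-sprint average: a single low sprint may be common-cause noise, while a falling rolling average suggests a real change worth investigating. The velocity figures and the three-sprint window are invented for illustration.

```python
def rolling_average(values, window=3):
    """Average of the last `window` observations at each point in time."""
    averages = []
    for i in range(len(values)):
        recent = values[max(0, i - window + 1): i + 1]
        averages.append(sum(recent) / len(recent))
    return averages

# Hypothetical velocity observations across eight sprints; sprint 4 dips,
# but the trend only tells a story when several observations move together.
velocity = [22, 25, 24, 18, 26, 25, 27, 26]

for sprint, (observed, trend) in enumerate(zip(velocity, rolling_average(velocity)), start=1):
    print(f"Sprint {sprint}: observed {observed:2}, 3-sprint trend {trend:.1f}")
```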

All organizations and teams need the feedback that measurement generates. Organizations typically measure for two reasons: the first is to generate specific behaviors, and the second is to predict the future. Organizational goals provide the rationale for what to measure, and the type of measure determines whether it drives behavior or provides direction. In organizations that leverage Agile techniques, measurement will be more effective if it embraces Agile measurement philosophies.


Categories: Process Management