
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Process Management

Re-Read Saturday: The Goal: A Process of Ongoing Improvement, Part 1


Today we begin the re-read of The Goal. If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version 

Eliyahu M. Goldratt and Jeff Cox wrote The Goal: A Process of Ongoing Improvement (published in 1984).  The Goal was framed as a business novel. In general, a novel presents a story through a set of actions and events to facilitate a plot. A business novel uses the plot, interactions between characters and events to develop and illustrate concepts or processes that are important to the author.  Bottom line, a business novel uses metaphors rather than drawn-out scholarly exposition to make its point.  The Goal uses the story of Alex Rogo, plant manager, to illustrate the theory of constraints and how the wrong measurement focus can harm an organization.

I am using the 30th anniversary edition of The Goal for this re-read.  This version of the book includes two forewords and 40 chapters.

The two forewords to The Goal expose Goldratt's philosophical approach.  For example, in the forewords, science is defined as a method to develop or expose a "minimum set of assumptions that can explain, through straightforward derivation, the existence of many phenomena of nature." Science provides an approach to develop an understanding of why something is occurring and then to be able to test against that understanding. We deduce based on observation and measurement to develop a hypothesis, and then continue to compare what we see against the hypothesis. Good science is the foundation of effective process improvement. Good process improvement is simply a requirement for survival in today's dynamic business environment.

The characters introduced in chapters 1 – 4:

  • Alex Rogo – the protagonist, manufacturing plant manager
  • Bill Peach – command-and-control division vice-president
  • Fran – Alex's secretary
  • Bob Donovan – Production Manager
  • Julie Rogo – Alex Rogo's wife

Chapter 1:

In the first chapter we are immediately introduced to Alex Rogo, plant manager, and his boss, Bill Peach.  Our protagonist is immediately thrown into a crisis revolving around a late shipment that his boss has arrived, unannounced, at the plant to expedite. Bill Peach begins by interfering with plant operations, which leads to a critical mechanic quitting and to a broken and potentially sabotaged machine.  Remember back to Kotter's eight-stage model for significant change in his seminal book, Leading Change (the last book featured in our Re-Read Saturday feature).  The first step in the model was to establish a sense of urgency. Goldratt and Cox use chapter one to establish the proverbial burning platform.  The plant is losing money, orders are shipping late and Peach has delivered an ultimatum that unless the plant is turned around, it will be closed.

Chapter 2:

The immediate crisis is surmounted: the order is completed and shipped. The plant focused on getting a single order done and shipped. Bob Donovan noted that everyone pulled together, behavior that the Agile community would call "swarming."  A thread running through the chapter is that the plant has aggressively pursued cost savings and increased efficiency. This thread foreshadows a recognition that measuring the cost savings or efficiency improvement in any individual step might not provide the results the organization expects. Rogo reflects at one point that he has the best people and the best technology, therefore he must be a poor manager.

Chapter 3:

This chapter develops the corporate culture by exposing the fixation on efficiency and cost control as the basis for measurement and comparison.  The whole division is on the chopping block and an endemic atmosphere of fear has taken hold.  For example, Rogo's and Peach's relationship, which in the past had been marked by camaraderie, now reflects the fear and animosity that has been generated. Fear hinders the collaboration and innovation that will be needed to save both the plant and the division.  W. Edwards Deming, in his famous 14 points, explicitly stated "drive out fear, so that everyone may work effectively for the company." My interpretation of chapter 3 is that fear, and the tools that generate fear, will need to be addressed for the division to survive.

Chapters 1 through 3 actively present the reader with a burning platform.  The plant and division are failing.  Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions, yet performance is falling even further behind and fear has become a central feature of the corporate culture.

Next week we begin the path toward redemption!

What are your thoughts on the forewords and first 3 chapters?


Categories: Process Management

Exploring container platforms: StackEngine

Xebia Blog - Fri, 02/20/2015 - 15:51

Docker has been around for more than a year already, and there are a lot of container platforms popping up. In this series of blogposts I will explore these platforms and share some insights. This blogpost is about StackEngine.

TL;DR: StackEngine is (for now) just a nice frontend to the Docker binary. Nothing...

Product versus Project: Exposing Behavior Differences

Organizations with a product perspective generally have an understanding that a project or release will follow the current project, reducing the need to take as large a bite of the apple as possible (having tried this as a child, I can tell you the choking risk is increased).

The concepts of product and project are common perspectives in software development organizations. A simple definition for each is that a product is the thing that is delivered – software, an app or an interface – while a project reflects the activities needed to develop the product or a feature of the product. Products often have roadmaps that define the path they will follow as they evolve. I was recently shown a roadmap for an appraisal tool a colleague markets that showed a number of new features planned for later this year and areas that would be addressed in the next few years. The map became less precise the further the time horizon was pushed out. Projects, releases and sprints are typically significantly more granular, with specific plans for work that is currently being developed. The different perspectives generate several different behaviors.

  1. Roadmap versus plan: The time-boxed nature of a project or a sprint (both have a stated beginning and end) tends to generate a focus on planning and executing specific activities and tasks. For example, in Scrum sprint planning, teams accept and commit to the user stories they will deliver. There is often a many-to-one relationship between stories and features that would be recognized by end-users or customers. Product planning tends to focus on the features and architectures that meet the needs of the user community. Projects foster a short-term rather than long-term focus. Short-term focus can lead to architectural trade-offs or technical shortcuts to meet specific dates that will have negative implications in the future. The product owner is often the bridge between the project and product perspectives, acting as an arbiter. The product owner helps the team make decisions that could have long-term implications and provides the whole team with an understanding of the roadmap. Teams without (or with limited) access to a product owner and product roadmap can only focus on the time horizon they know.
  2. Needs versus constraints: Projects are often described as the interaction between the triple constraints of time, budget and scope. Sprints are no different: cadence – time, fixed team size – budget, and committed stories – scope. There is always a natural tension between the business/product owner and the development team. In organizations with a project perspective, product owners and other business stakeholders typically have a rational economic incentive to pressure teams to commit to more than can reasonably be accomplished in any specific project. Who knows when the next project will be funded? This behavior is often illustrated when the business indicates that ALL the requirements it has identified are critical, or when concepts like a minimum viable product are met with hostility. Other examples of this behavior can be seen in organizations that adopt pseudo-Agile, in which backlogs are created and an overall due date generated for all the stories before a team even understands its capacity to deliver. Shortcuts, technical debt and lower customer satisfaction are often the results of this type of perspective. Organizations with a product perspective generally have an understanding that a project or release will follow the current project, reducing the need to take as large a bite of the apple as possible (having tried this as a child, I can tell you the choking risk is increased).
  3. Measuring efficiency/cost versus revenue: Organizations with a product perspective tend to take a wider view of what needs to be measured. Books such as The Goal (by Goldratt and Cox) make a passionate argument for the measurement of overall revenue. The thought is that any process change or any system enhancement needs to be focused on optimizing the big picture rather than over-optimizing steps that don't translate to the goals of the organization. Focusing on delivering projects more efficiently, which is the classic IT measurement, does not make sense if what is being done does not translate to delivering value. Measuring the impact of a product roadmap (e.g. revenue, sales, ROI) leads organizations to a product view of work, which lays stories and features out as a portfolio of work.

These dichotomies represent how differences in project and product perspectives generate different behaviors. Both perspectives are important, depending on the role a person is playing in an organization. For example, a sprint team must have a project perspective so they can commit to work within a time box. That same team needs to have a product view when they are making the day-to-day trade-offs that all teams face, or technical debt may overtake their ability to deliver. Product owners are often the bridge between the project and product perspectives; however, the best teams understand and leverage both.


Categories: Process Management

Try, Option or Either?

Xebia Blog - Wed, 02/18/2015 - 09:45

Scala has a lot of different options for handling and reporting errors, which can make it hard to decide which one is best suited for your situation. In Scala and functional programming languages it is common to make the errors that can occur explicit in the function's signature (i.e. return type), in contrast with the common practice in other programming languages where either special values are used (-1 for a failed lookup, anyone?) or an exception is thrown.

Let's go through the main options you have as a Scala developer and see when to use what!

Option
A special type of error that can occur is the absence of some value. For example when looking up a value in a database or a List you can use the find method. When implementing this in Java the common solution (at least until Java 7) would be to return null when a value cannot be found or to throw some version of the NotFound exception. In Scala you will typically use the Option[T] type, returning Some(value) when the value is found and None when the value is absent.

So instead of having to look at the Javadoc or Scaladoc you only need to look at the type of the function to know how a missing value is represented. Moreover you don't need to litter your code with null checks or try/catch blocks.

Another use case is parsing input data: user input, JSON, XML, etc. Instead of throwing an exception for invalid input, you simply return None to indicate that parsing failed. The disadvantage of using Option in this situation is that you hide the type of error from the user of your function, which, depending on the use case, may or may not be a problem. If that information is important, keep reading the next sections.

An example that ensures that a name is non-empty:

def validateName(name: String): Option[String] = {
  if (name.isEmpty) None
  else Some(name)
}

You can use the validateName method in several ways in your code:

// Use a default value

 validateName(inputName).getOrElse("Default name")

// Apply some other function to the result
 validateName(inputName).map(_.toUpperCase)

// Combine with other validations, short-circuiting on the first error
// returning a new Option[Person]
 for {
   name <- validateName(inputName)
   age <- validateAge(inputAge)
 } yield Person(name, age)
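The for-comprehension above also uses a validateAge method and a Person case class that the post never defines. A minimal sketch of what they might look like (the digits-only check is an assumption, not from the original):

```scala
case class Person(name: String, age: Int)

// Hypothetical companion validator assumed by the for-comprehension above:
// None when the input is empty or not a number
def validateAge(age: String): Option[Int] =
  if (age.nonEmpty && age.forall(_.isDigit)) Some(age.toInt)
  else None
```

With these in scope, the for-comprehension yields Some(Person(...)) only when both validations succeed, and None as soon as either returns None.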

Either
Option is nice for indicating failure, but if you need to provide more information about the failure, Option is not powerful enough. In that case Either[L,R] can be used. It has 2 implementations, Left and Right. Both can wrap a custom type, respectively type L and type R. By convention Right is right, so it contains the successful result and Left contains the error. Rewriting the validateName method to return an error message would give:

def validateName(name: String): Either[String, String] = {
  if (name.isEmpty) Left("Name cannot be empty")
  else Right(name)
}

Similar to Option, Either can be used in several ways. It differs from Option in that you always have to specify the so-called projection you want to work with via the left or right method:

// Apply some function to the successful result
validateName(inputName).right.map(_.toUpperCase)

// Combine with other validations, short-circuiting on the first error
// returning a new Either[String, Person]
for {
 name <- validateName(inputName).right
 age <- validateAge(inputAge).right
} yield Person(name, age)
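As in the Option section, the for-comprehension assumes an Either-based validateAge that the post does not show; a minimal sketch (the digits-only check and error message are assumptions):

```scala
// Hypothetical Either-based companion validator assumed above:
// Left carries an error message, Right the parsed age
def validateAge(age: String): Either[String, Int] =
  if (age.nonEmpty && age.forall(_.isDigit)) Right(age.toInt)
  else Left("Age must be a non-negative number")
```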

// Handle both the Left and Right case
validateName(inputName).fold(
  error => s"Validation failed: $error",
  result => s"Validation succeeded: $result"
)

// And of course pattern matching also works
validateName(inputName) match {
  case Left(error) => s"Validation failed: $error"
  case Right(result) => s"Validation succeeded: $result"
}

// Convert to an option:
validateName(inputName).right.toOption

This projection is kind of clumsy and can lead to convoluted compiler error messages in for expressions. See for example the excellent and detailed discussion of the Either type in The Neophyte's Guide to Scala Part 7. Due to these issues, several alternative implementations of Either have been created; the best known are the \/ type in Scalaz and the Or type in Scalactic. Both avoid the projection issues of the Scala Either and, at the same time, add additional functionality for aggregating multiple validation errors into a single result type.

Try

Try[T] is similar to Either. It also has 2 cases: Success[T] for the successful case, which wraps the result value of type T, and Failure for the failure case, which can only contain a Throwable. You can use it instead of a try/catch block to postpone exception handling. Another way to look at it is as Scala's version of checked exceptions.

Compare these 2 methods that parse an integer:

// Throws a NumberFormatException when the integer cannot be parsed
def parseIntException(value: String): Int = value.toInt

// Catches the NumberFormatException and returns a Failure containing that exception
// OR returns a Success with the parsed integer value
def parseInt(value: String): Try[Int] = Try(value.toInt)

The first function needs documentation describing that an exception can be thrown. The second describes in its signature what can be expected and requires the user of the function to take the failure case into account. Try is typically used when exceptions need to be propagated; if the exception itself is not needed, prefer any of the other options discussed.

Try offers similar combinators as Option[T] and Either[L,R]:

// Apply some function to the successful result
parseInt(input).map(_ * 2)

// Combine with other validations, short-circuiting on the first Failure
// returning a new Try[Stats]
for {
  age <- parseInt(inputAge)
  height <- parseDouble(inputHeight)
} yield Stats(age, height)

// Use a default value
parseAge(inputAge).getOrElse(0)

// Convert to an option
parseAge(inputAge).toOption

// And of course pattern matching also works
parseAge(inputAge) match {
  case Failure(exception) => s"Validation failed: ${exception.getMessage}"
  case Success(result) => s"Validation succeeded: $result"
}

Note that Try is not needed when working with Futures! Futures combine asynchronous processing with the Exception handling capabilities of Try! See also Try is free in the Future.
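To illustrate the point (a minimal sketch using only the standard library): an exception thrown inside a Future fails the Future directly, and recover plays the role Try would otherwise play.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// The NumberFormatException thrown by toInt fails the Future
// instead of escaping to the caller; no explicit Try is needed
val parsed: Future[Int] = Future("not a number".toInt)

// recover handles the failure, here by falling back to a default
val withDefault: Future[Int] = parsed.recover { case _: NumberFormatException => 0 }
```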

Exceptions
Since Scala runs on the JVM, all low-level error handling is still based on exceptions. In Scala you rarely see usage of exceptions; they are typically only used as a last resort. More common is to convert them to any of the types mentioned above. Also note that, contrary to Java, all exceptions in Scala are unchecked. Throwing an exception will break your functional composition and probably result in unexpected behaviour for the caller of your function. So it should be reserved as a method of last resort, for when the other options don’t make sense.
If you are on the receiving end of the exceptions you need to catch them. In Scala syntax:

try {
  dangerousCode()
} catch {
  case e: Exception => println("Oops")
} finally {
  cleanup
}

What is often done wrong in Scala is that all Throwables are caught, including the Java system errors. You should never catch Errors because they indicate a critical system error like the OutOfMemoryError. So never do this:

try {
  dangerousCode()
} catch {
  case _ => println("Oops. Also caught OutOfMemoryError here!")
}

But instead do this:

import scala.util.control.NonFatal

try {
  dangerousCode()
} catch {
  case NonFatal(_) => println("Ooops. Much better, only the non fatal exceptions end up here.")
}

To convert exceptions to Option or Either types you can use the methods provided in scala.util.control.Exception (scaladoc):

import scala.util.control.Exception._

val i = 0
val resultOption: Option[Int] = catching(classOf[ArithmeticException]) opt { 1 / i }
val resultEither: Either[Throwable, Int] = catching(classOf[ArithmeticException]) either { 1 / i }

Finally remember you can always convert an exception into a Try as discussed in the previous section.

TL;DR

  • Option[T], use it when a value can be absent or some validation can fail and you don't care about the exact cause. Typically in data retrieval and validation logic.
  • Either[L,R], similar use case as Option but when you do need to provide some information about the error.
  • Try[T], use when something Exceptional can happen that you cannot handle in the function. This, in general, excludes validation logic and data retrieval failures but can be used to report unexpected failures.
  • Exceptions, use only as a last resort. When catching exceptions use the facility methods Scala provides and never use catch { case _ => }; instead use catch { case NonFatal(_) => }

One final advice is to read through the Scaladoc for all the types discussed here. There are plenty of useful combinators available that are worth using.

Product versus Project: Related but Different

Learning to pour sidra (cider) – product or project?

A product is something that is constructed for sale or for trade for value. In the software world that product is often software code or a service to interface users to software. Typically a project or set of projects is required to build and maintain an IT product. If we simplify and combine the two concepts we could define a product as what is delivered and a project as the vehicle to deliver the product. The idea of a product and a project are related, but different concepts. There are several differences in common attributes:

[Table comparing common attributes of products and projects]

Agile pushes organizations to take more of a product than a project perspective; however, arguably parts of both can be found as the product evolves. A sprint (or even a release) will include a subset of the features that are included on a product backlog.  The sprint or release is a representation of the project perspective.  As time progresses, the product backlog evolves as customer or user needs change (the product perspective). In the long run the product perspective drives the direction of the organization.  For example, a friend who owns a small firm that delivers software services maintains a single product backlog. In classic fashion, the items near the top of the backlog have a higher priority and are more granular. The backlog includes some ideas, new services and features that won’t be addressed for one or more years. The owner acts as the product owner and, at a high level, sets the priorities with input from her staff once a quarter, based on progress, budget and market forces. The Scrum master and team are focused on delivering value during every sprint, while the product owner in this case is focused on building greater business capabilities.

IT in general, and software development specifically, have historically viewed work as a series of projects, sometimes interlocked into larger programs. A project has a beginning and an end and delivers some agreed-upon scope. When a project is complete, a team moves on to the next job. A simple and rational behavior for a product owner who might not know when the next project impacting his product might occur would be to ask for the moon and to pitch a fit when it isn’t delivered. Because the product owner and the team are taking a project perspective, it is impossible to count on work continuing, forcing an all-or-nothing attitude. That attitude puts pressure on a team to accept more requirements than they can deliver, leading to an increased possibility of disappointment, lower quality and failure. Having either a product or project perspective will drive how everyone involved in delivering functionality interacts and behaves.


Categories: Process Management

Software Development Linkopedia February 2015

From the Editor of Methods & Tools - Tue, 02/17/2015 - 16:09
Here is our monthly selection of interesting knowledge material on programming, software testing and project management. This month you will find some interesting information and opinions about software and system modeling, programmer psychology, managing priorities, improving software architecture, technical user stories, free tools for Scrum, coding culture and integrating UX in Agile approaches. Web site: Fundamental Modeling Concepts The Fundamental Modeling Concepts (FMC) primarily provide a framework for the comprehensive description of software-intensive systems. Blog: The Ten Commandments of Egoless Programming Blog: WIP and Priorities: How to Get Fast and Focused! Blog: Sacrificial Architecture Blog: ...

Multiple Levels of Done

Mike Cohn's Blog - Tue, 02/17/2015 - 16:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

Having a “definition of done” has become a near-standard thing for Scrum teams. The definition of done (often called a “DoD”) establishes what must be true of each product backlog item for that item to be done.

A typical DoD would be something similar to:

  • The code is well written. (That is, we’re happy with it and don’t feel like it immediately needs to be rewritten.)
  • The code is checked in. (Kind of an “of course” statement, but still worth calling out.)
  • The code was either pair programmed or peer reviewed.
  • The code comes with tests at all appropriate levels. (That is, unit, service and user interface.)
  • The feature the code implements has been documented in any end-user documentation such as manuals or help systems.

Many teams will improve their Definition of Done over time. For example, a team using the example above might not be able to do so much automated testing when first starting out. But, hopefully, they would add that to their definition of done over time.

All this is sufficient for the vast majority of teams. But I’ve worked on a few projects whose teams benefitted from having multiple definitions of done. A team takes a product backlog item to definition of done Level 1 in a first sprint, to definition of done Level 2 in a subsequent sprint, and so on.

I am most definitely not saying they code something in a first sprint and test it in a second sprint. “Done” still means tested, but it may mean tested to different, but appropriate, levels. Let’s look at an example.

An Example from a Game Studio

One thing I’ve really enjoyed in working with game studios is that they understand that not all work will make it into the finished game. Sometimes, for example, a game team experiments with a new character trying to make the character fun. If they can’t, the character isn’t added to the game.

So it would be extremely wasteful for a game team to have a definition of done requiring all art to be perfect, all audio be recorded, and refresh rates be high when they are merely trying to decide if a new character is fun. The team should do just enough to answer that question.

In a number of game studios, this has led to a four-level definition of done:

Done, Level 1 (D1) means the new feature works and decisions can be made. For animation, this was often “the character is animated in a white room.” It’s “shippable” to friendly users (often internal) who can comment on whether the new functionality meets its objective.

D2: The thing is integrated into the game and users can play it / interact with it.

D3: The feature is truly shippable. It’s good enough to include in a major public release. The team may not want to release it yet—they may first want to improve the frame rate, add some polygons, brighten colors, and so on. But the product could be shipped with the feature in this state if necessary.

D4: The feature is tuned, polished, and everyone loves it. There’s nothing the team would change. A typical public release will include a mix of D4 and D3 items. There will always be areas the team wants to go back to and further improve. But, time intrudes and they ship the product. So D3 is totally shippable. You’re not embarrassed by D3 and only your hardest core users will notice the ways it could be better. D4 rocks.

Are Multiple Definitions of Done Right for You?

Very likely not. Most teams do quite well with a single definition of done. But the ideas above extend beyond just game development. I’ve used the same approach in a variety of other application domains, notably hardware development. In that case, the teams involved were developing dozens of new gadgets for an integrated suite of home automation products.

They used these definitions:

D1: The new hardware works on a test bench in the office.

D2: The new hardware is integrated with the other products in the suite.

D3: The new hardware is installed and running in at least one model house used for this type of beta testing.

D4: The product is fully ready for sale (e.g., it meets all requirements for UL approval).

Within this company, there were dozens of components in development at all times, and some components could be found at each level of doneness. For example, a product to raise and lower window shades could be in testing at the model home, while a newer component to open and close doors had just been started and was only working on a test bench of one developer.

Most projects will never need this. If you do think it’s appropriate for you, before trying it, really be sure you’re not using the technique as an excuse to skip things like testing.

Each level should exist as a way of making decisions about the product. A good test of that is to see if some features are dropped at each level. It is a good sign, for example, that sometimes a feature reaches a certain doneness level, and the product owner decides the feature is no longer wanted due to perhaps its cost or delivery time.

Cancelling $http requests for fun and profit

Xebia Blog - Tue, 02/17/2015 - 09:11

At my current client, we have a large AngularJS application that is configured to show a full-page error whenever one of the $http requests ends up in error. This is implemented with an error interceptor as you would expect it to be. However, we’re also using some calculation-intense resources that happen to timeout once in a while. This combination is tricky: a user triggers a resource request when navigating to a certain page, navigates to a second page and suddenly ends up with an error message, as the request from the first page triggered a timeout error. This is a particularly unpleasant side effect that I’m going to address in a generic way in this post.

There are of course multiple solutions to this problem. We could create a more resilient implementation in the backend that will not time out, but accepts retries. We could change the full-page error in something less ‘in your face’ (but you still would get some out-of-place error notification). For this post I’m going to fix it using a different approach: cancel any running requests when a user switches to a different location (the route part of the URL). This makes sense; your browser does the same when navigating from one page to another, so why not mimic this behaviour in your Angular app?

I’ve created a pretty verbose implementation to explain how to do this. At the end of this post, you’ll find a link to the code as a packaged bower component that can be dropped in any Angular 1.2+ app.

To cancel a running request, Angular does not offer that many options. Under the hood, there are some places where you can hook into, but that won’t be necessary. If we look at the $http usage documentation, the timeout property is mentioned and it accepts a promise to abort the underlying call. Perfect! If we set a promise on all created requests, and abort these at once when the user navigates to another page, we’re (probably) all set.

Let’s write an interceptor to plug in the promise in each request:

angular.module('angularCancelOnNavigateModule')
  .factory('HttpRequestTimeoutInterceptor', function ($q, HttpPendingRequestsService) {
    return {
      request: function (config) {
        config = config || {};
        if (config.timeout === undefined && !config.noCancelOnRouteChange) {
          config.timeout = HttpPendingRequestsService.newTimeout();
        }
        return config;
      }
    };
  });

The interceptor will not overwrite the timeout property when it is explicitly set. Also, if the noCancelOnRouteChange option is set to true, the request won’t be cancelled. For better separation of concerns, I’ve created a new service (the HttpPendingRequestsService) that hands out new timeout promises and stores references to them.

Let’s have a look at that pending requests service:

angular.module('angularCancelOnNavigateModule')
  .service('HttpPendingRequestsService', function ($q) {
    var cancelPromises = [];

    function newTimeout() {
      var cancelPromise = $q.defer();
      cancelPromises.push(cancelPromise);
      return cancelPromise.promise;
    }

    function cancelAll() {
      angular.forEach(cancelPromises, function (cancelPromise) {
        cancelPromise.promise.isGloballyCancelled = true;
        cancelPromise.resolve();
      });
      cancelPromises.length = 0;
    }

    return {
      newTimeout: newTimeout,
      cancelAll: cancelAll
    };
  });

So, this service creates new timeout promises that are stored in an array. When the cancelAll function is called, all timeout promises are resolved (thus aborting all requests that were configured with the promise) and the array is cleared. By setting the isGloballyCancelled property on the promise object, a response promise method can check whether it was cancelled or another exception has occurred. I’ll come back to that one in a minute.
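
The abort-all mechanics are easier to see outside Angular. Here is a minimal sketch of the same registry pattern using native promises (plain JavaScript, no Angular; `defer` is a hand-rolled stand-in for `$q.defer()`):

```javascript
// Minimal stand-in for $q.defer() built on a native Promise.
function defer() {
  var d = {};
  d.promise = new Promise(function (resolve) { d.resolve = resolve; });
  return d;
}

var cancelPromises = [];

// Hand out a promise that a request can use as its timeout.
function newTimeout() {
  var d = defer();
  cancelPromises.push(d);
  return d.promise;
}

// Resolve every stored deferred at once and empty the registry.
function cancelAll() {
  cancelPromises.forEach(function (d) {
    d.promise.isGloballyCancelled = true; // lets error handlers recognize cancellation
    d.resolve();                          // resolving is what aborts the request
  });
  cancelPromises.length = 0;              // clear the array in place
}
```

In the Angular version, $http keeps a reference to each handed-out promise through the request's timeout property, so resolving them all aborts every in-flight request in one go.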

Now we hook up the interceptor and call the cancelAll function at a sensible moment. Several events triggered on the root scope are good hook candidates. Eventually I settled on $locationChangeSuccess: it is only fired when the location change succeeds (hence the name) and is not cancelled by any other event listener.

angular
  .module('angularCancelOnNavigateModule', [])
  .config(function($httpProvider) {
    $httpProvider.interceptors.push('HttpRequestTimeoutInterceptor');
  })
  .run(function ($rootScope, HttpPendingRequestsService) {
    $rootScope.$on('$locationChangeSuccess', function (event, newUrl, oldUrl) {
      if (newUrl !== oldUrl) {
        HttpPendingRequestsService.cancelAll();
      }
    });
  });

When writing tests for this setup, I found that the $locationChangeSuccess event is triggered at the start of each test, even though the location did not change yet. To circumvent this situation, the function does a simple difference check.

Another problem popped up during testing. When the request is cancelled, Angular creates an empty error response, which in our case still triggers the full-page error. We need to catch and handle those error responses. We can simply add a responseError function in our existing interceptor. And remember the special isGloballyCancelled property we set on the promise? That’s the way to distinguish between cancelled and other responses.

We add the following function to the interceptor:

      responseError: function (response) {
        if (response.config.timeout.isGloballyCancelled) {
          return $q.defer().promise;
        }
        return $q.reject(response);
      }

A responseError handler normally returns a rejected promise to re-throw the response. However, that's not what we want here: neither a success nor a failure callback should be called. So for all cancelled requests we simply return a promise that never resolves, which gives us the behaviour we want.
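
The never-resolving promise trick is easy to demonstrate with native promises as well; neither the success nor the failure callback attached to it will ever run:

```javascript
// A promise that never settles: its executor never calls resolve or reject.
const never = new Promise(function () {});

let called = false;
never.then(
  function () { called = true; },  // success handler: never runs
  function () { called = true; }   // failure handler: never runs
);

// Give the microtask queue a chance to drain, then check.
setTimeout(function () {
  console.log('handler called:', called); // handler called: false
}, 10);
```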

That’s all there is to it! To make it easy to reuse this functionality in your Angular application, I’ve packaged this module as a bower component that is fully tested. You can check the module out on this GitHub repo.

When development resembles the ageing of wine

Xebia Blog - Mon, 02/16/2015 - 20:29

Once upon a time I was asked to help out a software product company. The management briefing went something like this: "We need you to increase productivity; the guys in development seem unable to ship anything! And if they do ship something, it's only a fraction of what we expected."

And so the story begins. There are many ways we can improve a team's outcome and its output (the first matters more), but it always starts with observing what they do today and trying to figure out why.

It turned out that requests from the business were treated like a good wine and allowed to "age" in the oak barrel called Jira. Not so much to add flavour in the form of details, requirements, designs, non-functional requirements or acceptance criteria, but mainly to see if the priority of the request would remain stable over a period of time.

In the days that followed I participated in the "Change Control Board" and saw the problem first-hand. Management would change priorities on the fly and make swift decisions on requirements that would take weeks to implement. To stay with the winemaking metaphor, wine was poured in and out of the barrels at such a rate that the process bore more resemblance to a blender than to the art of winemaking.

Though management was happy to learn I had unearthed the root cause of their problem, they were less pleased to learn that they themselves were responsible. The Agile world created the Product Owner role for this, and it turns out this is a hat that can only be worn by a single person.

Once we funnelled all the requests through a single person, responsible both for the success of the product and for the development, we saw a big change. Not only did the business get a reliable sparring partner, but the development team had a single voice when it came to setting priorities. Once the team started finishing what it started, we began shipping at regular intervals, with features that we had all committed to.

Of course it did not take away the dynamics of the business, but it allowed us to deliver, and become reliable in how and when we responded to change. Perhaps not the most aged wine, but enough to delight our customers and learn what we should put in our barrel for the next round.

 

SPaMCAST 329 – Commitment, Message and Themes, HALT Testing

www.spamcast.net

http://www.spamcast.net

Listen Now

Subscribe on iTunes

This week’s Software Process and Measurement Cast is our magazine with three features.  We begin with Jo Ann Sweeney’s Explaining Change column.  In this column Jo Ann tackles the concepts of messages and themes.  I consider this the core of communication.  Visit Jo Ann’s website at http://www.sweeneycomms.com and let her know what you think of her column.

The middle segment is our essay on commitment.  The making and keeping of commitments are core components of both professional behavior and Agile. The simple definition of a commitment is a promise to perform. Whether Agile or Waterfall, commitments are used to manage software projects. Commitments drive the behavior of individuals, teams and organizations.  Commitments are powerful!

We wrap this week’s podcast up with a new column from the Software Sensei, Kim Pries. In this installment Kim discusses software HALT testing.  HALT stands for highly accelerated life test.  The goal is to find defects, faults and things that go bump in the night in hours or days rather than waiting for weeks, months or years.  Whether you are testing software, hardware or some combination this is a concept you need to have in your portfolio.

Call to action!

Can you tell a friend about the podcast?  Even better, show them how you listen to the Software Process and Measurement Cast and subscribe them!  Send me the name of the person you subscribed and I will give both you and the horde you have converted to listeners a call-out on the show.

Re-Read Saturday News

The next book in our Re-Read Saturday feature will be Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement. Originally published in 1984, it has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. On February 21st we will begin the re-read on the Software Process and Measurement Blog.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

Next SPaMCAST

In the next Software Process and Measurement Cast we will feature our interview Anthony Mersino, author of Emotional Intelligence for Project Managers and the newly published Agile Project Management.  Anthony and I talked about Agile, coaching and organizational change.  A wide ranging interview that will help any leader raise the bar!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese


Categories: Process Management


Re-Read Saturday . . . And The Readers Have Spoken

The next book in our Re-Read Saturday feature will be Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement. Originally published in 1984, it has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. On February 21st we will begin re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version 

For the record, the top five books in the overall voting were:

  1. The Goal: A Process of Ongoing Improvement – Eliyahu M. Goldratt and Jeff Cox, 71%
  2. Checklist Manifesto: How to Get Things Done Right – Atul Gawande, 43%
  3. Three tied at 8.57%:
    The Principles of Product Development Flow – Donald G. Reinertsen
    The Art of Software Testing – Glenford J. Myers, Corey Sandler and Tom Badgett
    The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses – Eric Ries

I was asked on LinkedIn for a list of the other books that we have featured in the Re-Read Saturday series. Here they are:

7 Habits of Highly Effective People – Stephen Covey

Dr. Covey lays out seven behaviors of successful people (hence the title).  The book is based on observation, interviews and research; therefore the habits presented in the book not only make common sense, but also have a solid evidentiary basis. One of the reasons the book works is the integration of character and ethics into the principles.  I have written and podcasted on the importance and value of character and ethics in the IT environment many times.

Note: If you don’t have a copy of the book, buy one (I would loan you mine, but I suspect I will read it again).  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version Kindle Version

The re-read blog entries:

The audio podcast can be listened to HERE

Leading Change – John P. Kotter

Leading Change by John P. Kotter, originally published in 1996, has become a classic reference that most process improvement specialists either have or should have on their bookshelf. The core of the book lays out an eight-step model for effective change that anyone involved in change will find useful. However there is more to the book than just the model.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version 

Entries in the Re-Read are:

I have not compiled the entries into a single essay and podcast as of February 2015.


Categories: Process Management

Who can be Agile?

Transformation is possible if you really want it.


I am often asked which projects or organizations should use Agile. Underlying the question is a feeling that there is some formula indicating which types of projects or organizations should use Agile and which should not. I feel the answer is very simple. Since Agile is at heart a philosophy, any project or organization that wants to embrace Agile can use Agile. The most important word in that statement is "wants". We often say we want something, but that statement doesn't translate to action, for many reasons. For example, I "want" to learn Ruby. I have even gone as far as to put a card into the backlog of my personal Kanban board. However, I have not found the time to begin, because finding the time would require changing my current behavior (getting up earlier) or abandoning another project. Really wanting to become Agile means making changes to how organizations and teams work and interact. As we all know, organizational change is typically not easy. The recent re-read of Kotter's Leading Change recounted and discussed a framework for generating major organizational change. All changes require focus, effort and constancy of purpose. A change to embrace Agile requires adopting the philosophy of Agile, abandoning much of the trappings of command and control, and thinking more in terms of products than projects.

Agile is a Philosophy – the Agile Manifesto comprises four values and twelve principles that create a philosophy for structuring and managing work. The focus is on the human side of delivering value. There are innumerable frameworks, methods and techniques for implementing those values and principles. For example, while Scrum is the most popular framework used on Agile projects, it is not the only one. Crystal, DSDM and XP are a few of the other common frameworks. How any of these frameworks is implemented makes it more or less Agile.

Abandoning Command and Control – Effective Agile teams are self-organizing and self-managing. Self-managing teams plan and manage their day-to-day activities with little overt direction and supervision. Command and control management techniques, which hit their heyday in the 1950s and '60s and are still common, assume that managers will assign and direct individuals (read that as: tell them what to do). Command and control management strips away much of the team's ability to make decisions quickly and react to change at a tactical level, which slows progress and reduces productivity.

Taking a Product Perspective – Agile embraces short iterative cycles of development that deliver value continuously or in the form of frequent releases. Techniques such as gathering needs into a backlog and involving product owners in planning and prioritizing needs over several sprints and releases reflect product thinking. This type of thinking fosters trust between the business and IT, so the onus is not on the business to think of everything they need at once. Projects, as opposed to products, have a discernible beginning and end and a known scope or budget, which forces users to try to maximize the functions they ask for from the project team. The incentive is for the business to ask for everything and to make everything the top priority.

Who Can be Agile? Unfortunately, any team or organization can say they are Agile. Any team can begin the day with a stand-up meeting or do a demo or retrospective. However, just using a technique or framework does not make anyone Agile. Anyone can be Agile, but only if they want it enough to make changes in how they think and how they organize their work. Being Agile is the only way to get all of the benefits in customer satisfaction, quality and productivity that organizations are seeking when they say they "want" to be Agile.


Categories: Process Management

Ready to Develop

Ready to develop ensures that the team is ready to work as soon as they sit down.


The definition of done is an important tool to help guide programs and teams. It defines the requirements that the software must meet to be considered complete. For example, a simple definition of done is:

All stories must be unit tested, a code review performed, integrated into the main build, integration tested, and release documentation completed.

The definition of done is generally agreed upon by the entire core team at the beginning of a project or program and stays roughly the same over the life of the project. It provides all team members with an outline of the macro requirements that all stories must meet, and therefore helps in estimating by suggesting many of the tasks that will be required. Another perspective on the definition of done is that it represents the requirements generated by an organization's policies, processes and methods. For example, the organization may have a policy that requires code to be scanned for security holes. Bookending the concept of done is the concept of ready to develop (or just ready). Ready to develop is just as powerful a tool as the definition of done. Ready acts as a filter to determine whether a user story can be passed to the team and taken into a sprint. Ready keeps teams from spinning their wheels on stories that are unknown, malformed or not understood. The basic definition of ready I typically use is:

  1. The story is well-formed. The story follows the format of persona, goal, value/benefit. The classic user story format ensures the team knows who the story is being done to support, the functionality the story is planned to deliver AND the business value the story is expected to deliver.
  2. The story fulfills the criteria encompassed by acronym INVEST.
    1. Independent – Development of the story is not dependent on other incomplete or undeveloped stories. This does not mean that any individual story does not build on a previous story.
    2. Negotiable – The story generates conversation and collaboration with the product owner and subject matter experts. Stories are not a contract; rather, they are a tool to ensure that business ideas are explored as the stories evolve from idea into code.
    3. Valuable – The story will deliver a demonstrable benefit when it is complete.
    4. Estimable – The team can accurately (enough) estimate the size of the story. The ability to estimate requires that the team has a decent understanding of what the story will require.
    5. Small – The story can be completed in a single sprint.
    6. Testable – The story can be easily unit and acceptance tested.
  3. A story must have acceptance criteria. Acceptance criteria are critical components of the definition of the story’s requirements. All acceptance criteria must be satisfied for the story to be complete (not necessarily done).
  4. Each story should have any external (not on the team) subject matter experts (SMEs) identified with contact details. External SMEs generally participate in the conversation and collaboration needed to deliver a story (N in INVEST).
  5. There are no external dependencies that will prevent the story from being completed. In projects in which the work is completed by a single team, this criterion is generally subsumed by the independent criterion in INVEST. However, in larger projects, interactions with other teams, applications or factors can generate dependencies. Those dependencies need to be identified and cleared before a story enters the development process.

The definition of ready is a hurdle in much the same manner as the definition of done. The definition of done ensures that a team delivers functionality that meets not only the requirements of the product owner and business, but also the requirements of the organization. The definition of ready makes sure that the stories a team is asked to plan and develop are ready, so the team doesn't waste time and can deliver the maximum amount of value possible. Getting stories ready for development does not happen by magic; rather, the process to prepare stories is typically part of planning and grooming . . . but that's up next.


Categories: Process Management

Using Scrum to Plan Your Wedding

Mike Cohn's Blog - Tue, 02/10/2015 - 16:00

In my ScrumMaster classes I always make the point that Scrum is a general purpose framework that can be applied to projects of all sorts. I’ve seen it applied to building construction, marketing, legal cases, families, restaurant renovations, and, of course, all sorts of product development. One of my favorite examples is using Scrum to plan a wedding.

Think about how perfect that is, though. Scrum excels at projects (yes, planning a wedding is a project) that are complex and novel. And planning a wedding is both.

I came across a great website last week called ScrumYourWedding.com, which is a free resource for using Scrum to plan your wedding. Built by Hannah Kane and Julia Smith, the site is a great example of helping move Scrum outside its normal domain. Even if you’re already hitched, check it out for great insights on things like a Minimum Viable Wedding (a great example of that concept!) and sample Wedding Backlogs.

If you are getting married soon, congratulations, and check out their contest to win five hours of free coaching on using Scrum to plan your wedding.

Quote of the Month February 2015

From the Editor of Methods & Tools - Tue, 02/10/2015 - 13:47
The worrying thing about writing tests is that there are numerous accounts of people introducing tests into their development process and... ending up with even more junk code to support than they had to begin with! Why? I guess, mainly because they lacked the knowledge and understanding needed to write good tests: tests, that is, that will be an asset, not a burden.

Reference: Bad Tests, Good Tests, Tomek Kaczanowski, http://practicalunittesting.com/btgt.php

SPaMCAST 328 – Alex Papadimoulis, Release, The Game, DevOps

www.spamcast.net

http://www.spamcast.net

Listen Now

Subscribe on iTunes

This week’s Software Process and Measurement Cast features our interview with Alex Papadimoulis.  Alex is returning to the Software Process and Measurement Cast to discuss Release, a card game about making software inspired by development strategies like Lean, Agile, and DevOps, and by classic trick-taking card games. We also circled back to talk about continuous delivery and DevOps; a bit of lagniappe to add to a great interview.

Alex’s Bio:

Alex is a speaker and writer who is passionate about looking beyond the code to build great software. In addition to founding Inedo – the makers of BuildMaster, the popular continuous delivery platform – Alex also started The Daily WTF, a fun site dedicated to building software the wrong way.

Contact Information:

Email:  apapadimoulis@inedo.com
Twitter: @apapadimoulis
Web: http://inedo.com/
Other Web: http://thedailywtf.com/

Call to action!

We have just completed a re-read of John Kotter’s classic Leading Change on the Software Process and Measurement Blog (www.tcagley.wordpress.com) and are in the process of choosing the next book for Re-Read Saturday.  Please go to the poll and cast your vote by February 15. Vote now at the Software Process and Measurement Blog!

Next SPaMCAST

In the next Software Process and Measurement Cast we will feature our essay on commitment.  What is the power of making a commitment? The making and keeping of commitments are core components of professional behavior. The simple definition of a commitment is a promise to perform. Whether Agile or Waterfall, commitments are used to manage software projects. Commitments drive the behavior of individuals, teams and organizations.  Commitments are powerful!

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Efficiency and Doing the Right Thing

 

Knowing what should not be done is rarely this straightforward.

 

A quick reminder: I am running a poll to choose the next book for Re-Read Saturday. The poll remains open for another week. Currently Goldratt’s The Goal: A Process of Ongoing Improvement is topping the list, but just a few votes could change the book at the top very quickly. The poll is republished at the bottom of this post.

Management guru Peter Drucker said, "There is nothing so useless as doing efficiently that which should not be done at all." Two powerful types of techniques to identify work that should not be done are process mapping, and baselining and benchmarking.

Process Mapping – A process map captures and documents the sequence of tasks and activities that comprise a process. A process map is generally constrained to a specific set of activities within a broader organization. Process mapping is useful at a tactical level, while other mapping techniques, like value chain mapping, are often more useful when taking an organizational view. Developing a process map (of any type) allows an analyst to review each step in the process to determine whether it adds value. Steps that do not add value should be evaluated for removal.

Baselining and Benchmarking – There are two typical approaches to benchmarking. The first is through measurement of the process to generate a baseline. Once a baseline is established, it can then be compared to another baseline to generate a benchmark. This type of benchmark is often called a quantitative benchmark. The second type of benchmark compares the steps and activities required in a process to those of a process that yields a similar product. Comparisons to frameworks such as the TMMi, CMMI or Scrum are a form of process benchmarking.

The use of analytical techniques such as process mapping or benchmarking is important to ensure that opinions and organizational politics don’t outweigh processes or work steps that generate real value. Without analysis, it is easy to sit down with an individual or a team, ask them what should not be done, and get the wrong answer. Everyone has an opinion informed by his or her own experiences and biases. Unfortunately, just asking may identify a process or task that one person or team feels is not useful but that has value to the larger organization. For example, a number of years ago an organization I was working with had instituted a productivity and customer satisfaction measurement program. The software teams involved saw the effort needed to measure their work as overhead. The unstated goal of the program was to gather the information needed to resist outsourcing the development jobs in the organization. The goal was not shared for fear of increasing turnover and of angering the CFO, who was pushing for outsourcing.

It would be difficult to argue that doing work that should not be done makes sense. However, determining "that which should not be done" is generally harder than walking up to a team and pointing to specific tasks. There is nothing wrong with asking individuals and teams involved in a process for their input, but the core of all process change needs to be gathering data to validate or negate opinions.

Categories: Process Management

IT Value and Customer Satisfaction

How do you calculate value?

IT value is an outcome that can be expressed as a transaction: a summation of debits and credits resulting in a number. Unfortunately, even if we can create a specific formula, the interpretation of the number is problematic. Measures of the economy of inputs, the efficiency of the transformations, and the effectiveness of specific outputs are components of a value equation, but they only go so far. I would like to suggest that customer satisfaction makes interpretation of value possible.

Those who receive the service determine its value; therefore value is specific to the project or service. For value to be predictable, you must assume that there is a relationship between how the product or service is created and the value perceived. When we assess the value delivered by an IT department that is part of a larger organization, we are rarely allowed the luxury of declaring the group a black box. Because we can't pretend not to care about what happens inside the box or process, we have to find a way to create transparency so that we can understand what is happening. For example, one method is to define the output of the organization or process. The output or product can be viewed as a synthesis of inputs, raw materials and process. Measuring the efficiency of the processes used in the transformation is a typical measure of value add. The product or output is only valuable if it meets the user's need and is fit for use. Knowing the details of the transformation process provides us with the knowledge needed to make changes. While this sounds complex, every small business has had to surmount this complexity to stay in business. A simple value example from the point of view of a restaurant owner follows.

  • A customer enters the restaurant and orders a medium-rare New York Strip steak (price $32.00).
  • The kitchen retrieves the steak from the cooler and cooks it so that it is done at the proper time and temperature. (The inputs include the requirement for the steak and the effort of the waiter and kitchen staff.)
  • The customer receives a medium-rare New York Strip steak.

From the restaurant owner's point of view, the value equation begins as the price of the steak minus the cost of the steak, preparation, and overhead. If the cost of the steak and of serving the customer was more than the price charged, an accounting loss would result, and if the costs were less . . . an accounting profit. The simple calculation of profit and loss provides an important marker in understanding value, but it is not sufficient. For example, let's say the customer was the restaurant reviewer for a large local newspaper, the owner comped the meal, AND the reviewer was happy with the meal. The owner would derive value from the transaction regardless of the accounting loss on that single transaction. As I noted earlier, customer satisfaction is a filter that allows us to interpret the transaction. Using our example, if the customer was unhappy with his or her steak, the value derived by the restaurant will be less than a totaling of the accounting debits and credits would predict. While a large IT department has many inputs and outputs, I believe the example presents a path for addressing value without getting lost in the complexity of technology.
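The restaurant arithmetic above can be made concrete. All of the cost figures and the satisfaction weighting below are illustrative assumptions, not a formula from the text; the point is only that satisfaction reshapes how the raw accounting number should be read.

```python
# Hedged sketch of the restaurant example: accounting profit first, then
# the same transaction interpreted through customer satisfaction. All
# costs and the satisfaction score are hypothetical.
price = 32.00          # medium-rare New York Strip
cost_of_goods = 11.00  # hypothetical cost of the steak
preparation = 6.00     # hypothetical labor for waiter and kitchen
overhead = 5.00        # hypothetical share of rent, utilities, etc.

accounting_profit = price - (cost_of_goods + preparation + overhead)
print(accounting_profit)  # 10.0

# Satisfaction acts as a filter on the raw number: an unhappy customer
# (score near 0) erodes the value actually derived, while a delighted
# reviewer (score near 1) can make even a comped meal worthwhile.
satisfaction = 0.25  # hypothetical: the diner was unhappy with the steak
perceived_value = accounting_profit * satisfaction
print(perceived_value)  # 2.5
```

The multiplication is deliberately crude; any monotonic weighting would make the same point, namely that the transaction's accounting result and its value to the business can diverge.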

In a perfect world, IT value would be established in a perfect marketplace. Customers would weigh the economy, efficiency, effectiveness and customer satisfaction they perceive they would garner from a specific development team to decide who should do their work. If team A down the hall could do the work for less money and higher quality, or deliver it sooner, they would get the work. Unfortunately, perfect marketplaces seldom exist, and participants could easily leverage pricing strategies that internal organizations would not be able to match. Still, the idea of a project-level marketplace has merit, and benchmarking projects is a means of injecting the external pressure that helps focus teams on customer satisfaction.

Measuring IT value, whether at a macro or project level, needs to be approached as more than a simple assessment of the processes that convert inputs into the products or services that the business requires. Measure the inputs and raw materials, measure and appraise the processes used in transformation, and then determine the user's perception of the output (where the customer and user are different, you need to understand both points of view). Knowing the value of all of these components, while keeping your thumb on the filter of customer satisfaction, will put you in a position not only to determine the value you are delivering (at least over the short term), but to predict how your customers will perceive the value you are delivering. Remember: forewarned is forearmed.


Categories: Process Management