Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Xebia Blog

Organisational Inertia - A Predictor for Success of Agile Transformations? (Part 2)

Tue, 03/25/2014 - 01:30

In part 1 (Organisational Inertia - Part 1) I focussed on the question 'Organisational Inertia - What is it?'. This post addresses the question 'How do we measure it?'.

I'll start from the definition of Organisational Inertia given in part 1, connect it to existing models of Organisational Inertia and their relation to Agile teams, and show how the analogy with physics is used to find a measure for the 'acceleration'. Then I'll combine these elements to provide a way of measuring inertia. Finally, I'll provide basic examples.

Definition of Organisational Inertia

Following the analogy with physics I defined Organisational Inertia in Organisational Inertia - Part 1 as:


Organisational Inertia is an organisation’s capability to change after applying some force to it. Specifically, I define it as the ratio of the force applied and the speed of organisational change (acceleration) in reaction to some applied force, or in formula "Inertia (M) = Force / Acceleration".

We just shifted the problem of defining Organisational Inertia to defining ‘some applied force’ and ‘speed of change’. The next sections will provide an interpretation of these in the context of Agile teams and a model to describe 'some applied forces'.

Origin of the Forces Driving Organisational Change: Change Restraining and Change Facilitating Forces

Associated with Organisational Inertia are two competing forces as described in the Inertia model of Connor & Lake [Con88] (see also [Kin98]). This model describes the Change-restraining forces and the Change-facilitating forces.

The origin of the forces driving Organisational Inertia [Lar96]

The model recognises that there are forces that hinder change and that there are forces that facilitate change.

One of the Agile principles is to regularly reflect on how you are doing and improve. By doing so Agile teams not only improve over time but they change too. An Agile team that changes will force the environment to change with it.
For more detailed information on this, check out the ‘fitness landscape’ [App11] chapter 15 “How to improve everything”.

Another common way Agile teams force the environment to change is by pulling in more work than the environment is capable of delivering. Or, by pushing more working software into production, thereby literally pushing the limits of what the environment is capable of processing. The latter mechanism drives organisations towards DevOps [Deb11]. The combination of applying a force to both the up- and downstream teams is what drives the need for BusDevOps (Agile Delivery Model), see Agile Delivery.

Examples I often see in practice of how Agile teams force the organisation (or environment) to change include:

  • a faster delivery rate forces the environment to cope with more releases and simultaneously forces the business to keep up by supplying user stories at a higher rate,
  • raising impediments,
  • the business wants more features than IT can deliver (this often drives the need to transition from a traditional way of working to an agile way of working).

For a systemic view on cause and effect related to Organisational Inertia see e.g. [Lar96].

How (Agile) Teams Act As Change-Facilitating Forces

Agile teams continuously improve. I divide the ways they improve into three types, and with each type I associate a 'force'. The next section goes into the details of quantifying them.

In Management 3.0 [App11] chapter 15 “How to improve everything” the concept of the ‘fitness landscape’ is explained. In this metaphor the team is part of the landscape (the environment of the team within the organisation) in which the team optimizes a certain variable, for instance the business value it creates. The optimum is symbolised by the highest mountain peak. Through improvements the team finds its way to the highest peak. Besides team intrinsic changes the team can also change the environment and thereby change the landscape by moving mountains towards the team instead of moving towards the mountain.

Following this metaphor there are three ways for a team to reach the top of the highest peak:
I) gradually improve (changes intrinsic to the team),
II) large improvements (kaikaku) and jump to other (nearby or remote) mountains, or
III) change the environment, i.e. moving the mountains to the team.

Gradual Improvements (Type I Changes)
Agile teams regularly evaluate how they are doing. A common practice is to perform a retrospective every two weeks. In the retrospective the team decides what to change and executes these improvements in the next sprint (or time period). With these improvements the team walks the 'fitness landscape' to the nearest mountain peak. The improvements may be gradual and internal to the team. By this I mean that they increase the velocity without forcing the environment of the team, i.e. the organisation, to change. The team does not exert a force on the organisation.

Large Improvements (Type II Changes)
It may also be the case that improvements within the team either dramatically increase the velocity, or that many gradual improvements push the velocity over a certain threshold. When this happens - in my experience it is not a question of whether this will happen but rather when - it forces the direct environment of the team to change with it.

For example, when teams start to deliver user stories faster the business needs to keep up by having enough user stories ready.

Another example is that operations needs to keep up by putting more features into production.

This way the team applies a force to the surrounding organisation that needs to change as well. The team is not just walking the ‘fitness landscape’ but forcing the landscape to change as well [App11].

Moving Mountains (Type III Changes)
The third way for teams to apply a force is to raise impediments and have the organisation help them by addressing the most important ones. By structurally solving impediments the organisation is changing the 'fitness landscape' [App11] by moving the mountain peaks to the team.

With 'structurally solving impediments' I mean that solving the impediments requires company policies to be changed.

An Organisation’s Speed of Change

The final part is to have a way of measuring the speed of change while the Agile team exerts a force upon the organisation. Before we can measure this, we need to identify how to observe that an organisation is changing. Possible variables are:

  • the trend of the (average) resolution time of impediments,
  • change in the rate of bringing features to production,
  • change in the rate of taking work to the team(s),
  • company policies being changed.

Potentially this could become a long list. But are all these changes relevant? Ultimately it's the bottom line that counts: it's all about the outcome. So let's come up with variables that are of value to the business, which is close enough to the bottom line. Variables that come to mind include:

  • change in the rate of business value per unit of time,
  • change in rate of an organisation’s (agile) maturity rating,
  • change in rate of product incidents per unit time.

These are all variables that can be measured and are the (complex) outcome of the changes made.

The choice depends on the goal you want to achieve with the (agile) transition. As long as this goal is communicated clearly to the organisation, and progress is measured using these variables, you are measuring the rate of success and there will be enough energy in the organisation to continue the transition and avoid Organisational Gravity kicking in.

Metrics for Organisational Inertia

From the aforementioned examples we recognise the following metrics for the 'forces' (Change Facilitating):

  • [F1] the number of raised impediments per time period, e.g. per sprint or per month,
    Force = <Number of Raised Impediments per Period>,
  • [F2] the difference in rate (throughput) between the pre-sprint, sprint and post-sprint parts of a so-called cumulative flow diagram,
    Force [F2a] = <Business Value Delivered in Production per Period> - <Work Ready According to DoR Worth of Business Value per Period>, or
    Force [F2b] = <Work Completed According to DoD Worth of Business Value per Period> - <Business Value Delivered in Production per Period>

Force [F2a] measures the 'strength' with which the (Scrum) team is pulling work from the Product Owner. A positive value means that the PO cannot keep up with the team.
Force [F2b] is similar for the IT-Ops - (Scrum) team interaction. Positive values indicate the (Scrum) team is pushing work for production at a faster rate than the organisation is capable of taking into production.

The associated metrics for measuring the 'acceleration' (speed of change) are:

  • [A1] the trend of the average number of resolved impediments per time period; count only resolved impediments that required changes in company policies,
    Acceleration = Δ(<Number of Resolved Impediments per Period>)/<Period>,
  • [A2] the trend or change in the rate of business value (outcome) delivered per time period,
    Acceleration = Δ(<Business Value Delivered in Production per Period>)/<Period>.

The metrics just given are readily available to Scrum teams and can easily be applied to any team, kanban type of system, or to any part of the total value stream.
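To make this concrete, here is a small sketch in plain JavaScript, with made-up per-sprint figures, that computes force [F2b] and acceleration [A2] and derives the inertia as force divided by acceleration. The numbers and variable names are illustrative assumptions, not data from a real team:

```javascript
// Hypothetical per-sprint figures: business value completed (DoD)
// and business value actually delivered into production.
var sprints = [
  { completed: 120, delivered: 100 },
  { completed: 126, delivered: 105 },
  { completed: 132, delivered: 110 }
];

// Force [F2b]: value completed per DoD minus value taken into production.
function force(sprint) {
  return sprint.completed - sprint.delivered;
}

// Acceleration [A2]: change in delivered value between two sprints,
// per sprint (the period is 1 sprint here).
function acceleration(prev, curr) {
  return curr.delivered - prev.delivered;
}

// Inertia: M_org = Force / Acceleration (unit of measure: sprints).
var f = force(sprints[2]);                    // 132 - 110 = 22
var a = acceleration(sprints[1], sprints[2]); // 110 - 105 = 5
var inertia = f / a;                          // 4.4 sprints

console.log('force:', f, 'acceleration:', a, 'inertia:', inertia);
```

A positive force with a small resulting inertia means the organisation responds quickly to the pressure the team applies.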

Putting It All Together

Using the results of the previous section I quantify Organisational Inertia in terms of measurable variables available to (Agile) teams.

Gradual internal improvements that the team makes (Type I changes) often lead to Type II changes after a certain period of time. Therefore I only explicitly address Type II and Type III changes (as explained earlier).

Are We Moving the Mountains? (Type II Changes)
Taking the change-facilitating forces as a basis, the Organisational Inertia is derived as follows.

First, consider the metric for inertia ( M_\mathrm{org} ) based on business value, i.e. [F2b] and [A2] from the previous section. Combining these gives:

\langle\text{Value completed (DoD) per period}\rangle - \langle\text{Value delivered per period}\rangle = M_\mathrm{org}\times\Delta(\langle\text{Value delivered per period}\rangle)/\langle\text{Period}\rangle

What does this formula tell us?

  • if the team’s delivery rate balances the organisation's rate of putting features into production no force is exerted and the organisation will not be forced to change; then also no change (Δ) in the delivery rate,
  • if the team delivers more functionality than the organisation can take into production, the team exerts a certain force upon the organisation which needs to undergo structural improvements causing a positive change (Δ) in the delivery rate,
  • for small values of the inertia ( M_\mathrm{org} ) the effect on the change in delivery rate is larger.

Note: A unit of measure for M_\mathrm{org} is 'time', e.g. 'weeks' or 'sprints' for Scrum teams.

Example 1. Suppose that the team delivers 5% more business value each sprint. Suppose further that they deliver 20% more value per sprint than can be taken to production. Then the inertia is '4 sprints' (20% divided by 5% per sprint).

Example 2. Suppose that no change in the rate of delivered business value is measured. Then the Organisational Inertia is infinitely large; under a change-facilitating force the organisation does not change.

The effect of Organisational Inertia on delivered business value

Example 3. Suppose an inertia of '4 sprints' (Example 1). Further suppose that the team pushes 20% worth of features per sprint more than can be taken into production. Then it will take M_\mathrm{org}/20% = 20 sprints to double the production rate.

Example 4. As in the previous example, the graph to the right shows the same team but in an environment in which the inertia is twice the inertia of the other environment. The team in the environment with the least inertia delivers value at a rate that increases twice as fast as compared to the other team. This results in twice as much business value being generated.
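The arithmetic in Examples 1 and 3 can be sketched in plain JavaScript (the figures are taken from the examples above):

```javascript
// Example 1: the team pushes 20% per sprint more value than can be
// taken into production, while the delivery rate grows 5% per sprint.
var forcePerSprint = 20; // % of value pushed beyond production capacity
var accelPerSprint = 5;  // % change in the delivery rate per sprint

var inertia = forcePerSprint / accelPerSprint; // 4 sprints

// Example 3: sprints needed to double (i.e. grow by 100%) the
// production rate under the same constant force.
var sprintsToDouble = 100 / (forcePerSprint / inertia); // 20 sprints

console.log('inertia:', inertia, 'sprints to double:', sprintsToDouble);
```

Note how doubling the inertia while keeping the force constant would halve the acceleration and double the time needed, which is exactly the comparison Example 4 makes.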

Are the Mountains Moving? (Type III Changes)
Another possibility is to take impediments as a basis to define and measure the Organisational Inertia ( M_\mathrm{org} ):

M_\mathrm{org} = \frac{F_1}{A_1} = \frac{\langle\text{Number of Raised Impediments per Period}\rangle}{\Delta(\langle\text{Number of Resolved Impediments per Period}\rangle)/\langle\text{Period}\rangle}

Note: Only impediments that actually result in changes in the organisation should be taken into account.

Note: This can also be translated in terms of business value by estimating how much more business value the team would be able to put into production if the impediment is resolved.

Note: The unit of measure is 'time', e.g. 'weeks' or 'sprints'.


Organisational Inertia is supplementary to the concept of Organisational Gravity and an indicator of how well the team's environment is facilitating the team's growth. The two counterbalancing forces driving the speed of organisational change are the Change-restraining and Change-facilitating forces.

An indicator of the Change-restraining forces available to agile teams is the average number of solved impediments per time period. Change-facilitating forces are the forces that teams exert on their environment; indicators available to agile teams are the amount of pull from the business for more ready features and the push of completed work into production.

Using the interpretation of the formula F = MA, well known in physics, in terms of the Change-restraining and Change-facilitating forces as defined in the model for Organisational Inertia from Connor and Lake [Con88], and identifying M with M_\mathrm{org}, expressions for Organisational Inertia are derived.

Once the organisation's inertia is known, it can serve as a prediction of how long it will take to increase the organisation's delivery rate by a certain amount, and it relates to the business case of Agile Transformations.

References
[App11] Management 3.0, Jurgen Appelo, 2011, Pearson Education
[Deb11] Devops: A Software Revolution in the Making, Patrick Debois, August 2011, Cutter IT Journal, Vol. 24, No. 8
[Con88] Managing Organization Change, P.E. Connor & L.K. Lake, 1988, New York: Praeger
[Kin98] The Development of an Instrument for Measuring Organisational Inertia, C. Kinnear, G. Roodt, Journal of Industrial Psychology, 1998, 24(2), 44-54
[Lar96] The Dynamics of Organisational Inertia, Survival and Change, Erik R. Larsen, Alessandro Lomi, 1996, System Dynamics Conference

AngularJS e2e testing using ngMockE2E

Sat, 03/08/2014 - 10:10

For our project we needed a mock http backend that we could instruct to return predefined responses, so that we were able to e2e test our full application without having a 'real' backend. This way we do not have to account for any state in the backend and can run our tests in isolation. We had a few additional wishes:

  • Reuse the mockdata object created for our unit tests
  • Reuse the config object in which we define the backend endpoints
  • Have little or no impact on the production code

The solution

The first thing required is a second module that depends on your application's ng-app module and on ngMockE2E. So, say your ng-app module is called 'myApp', you start by defining a module myAppE2E:

angular.module('myAppE2E', ['myApp', 'ngMockE2E']);

This wrapper module will be responsible for instructing the ngMockE2E version of $httpBackend what to respond to which request. For this instruction we use the run method of the angular.Module type:

angular.module('myAppE2E', ['myApp', 'ngMockE2E']).run(function ($httpBackend, $cookies) {
  // let all views through (the actual html views from the views folder should be loaded)
  $httpBackend.whenGET(new RegExp('views\/.*')).passThrough();
  // Mock out the call to '/service/hello'
  $httpBackend.whenGET('/service/hello').respond(200, {message: 'world'});
  // Respond with 404 for all other service calls
  $httpBackend.whenGET(new RegExp('service\/.*')).respond(404);
});

As you see we created two testable scenarios here: one successful call and one failure scenario. Note that the failure scenario should be defined after the success scenario, as the failure scenario has a more generic selector and would otherwise handle the success scenario as well. This is also the point where you can inject any objects used in your 'myApp' module, such as a config object or a mockData object.
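The ordering rule - specific matchers before generic ones - follows from first-match-wins dispatch. Here is a plain-JavaScript sketch of that behaviour (an illustration of the principle, not ngMockE2E's actual implementation):

```javascript
// A tiny first-match-wins dispatcher, analogous to how ordering
// matters when priming $httpBackend with whenGET definitions.
var handlers = [];

function whenGET(matcher, response) {
  handlers.push({ matcher: matcher, response: response });
}

function dispatch(url) {
  for (var i = 0; i < handlers.length; i++) {
    var m = handlers[i].matcher;
    var hit = (m instanceof RegExp) ? m.test(url) : m === url;
    if (hit) return handlers[i].response; // first match wins
  }
  return undefined;
}

// Specific rule first, generic catch-all last:
whenGET('/service/hello', { status: 200, data: { message: 'world' } });
whenGET(new RegExp('service\\/.*'), { status: 404 });

console.log(dispatch('/service/hello').status); // 200
console.log(dispatch('/service/other').status); // 404
```

Had the catch-all been registered first, every service call, including '/service/hello', would have received a 404.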

Now we have our module we need some non-intrusive way of injecting it into our index.html. For this we chose processHtml. We added the following section to the bottom of our index.html:

<!-- build:include:e2e e2e.html -->
<!-- /build -->

This section is replaced by the contents of the e2e.html file when processHtml is run in e2e mode, and left as-is in all other modes. The contents of the e2e.html file are as follows:

<!-- We need angular-mocks as it contains ngMockE2E -->
<script src="bower_components/angular-mocks/angular-mocks.js"></script>
<!-- This file contains the myAppE2E module responsible for priming the $httpBackend -->
<script src="test/e2e/util/myAppE2E.js"></script>
<!-- Replace the original ng-app attribute with the myAppE2E module name so that one is run -->
<script type="text/javascript">
    $('body').attr('ng-app', 'myAppE2E');
</script>

Now all that is left is instructing grunt to run processHtml in the e2e test mode. We first need to add the config to the initConfig section. Here we tell it to use index.html as input and create index_e2e.html as output:

    processhtml: {
      e2e: {
        files: {
          '<%= %>/index_e2e.html': ['<%= %>/index.html']
        }
      }
    },
    // snipped: all sorts of other config

Next we simply enhance the (yeoman generated) e2e task to run processHtml:e2e before running the server:

 grunt.registerTask('e2e', [
    // Simply add the task:
    'processhtml:e2e',
    // ...followed by the rest of the original e2e task steps
  ]);
There you have it. Now when you start grunt e2e and go to index_e2e.html, you will have full control over the http responses and can write e2e tests that do not require you to take any backend state into account.

E2E mode - but no tests

Every now and then it is useful to be able to test the complete application in isolation without running the e2e tests. We created a special grunt task for this that behaves like the yeoman default 'server' task, but with the ngMockE2E module running. This way, whenever you change a resource in your project it is processed immediately and you can see the results by refreshing, instead of restarting the e2e task.

  grunt.registerTask('e2enotest', [
    'processhtml:e2e',
    // ...followed by the same steps as the default 'server' task
  ]);

End of an era in my garage

Wed, 03/05/2014 - 19:27

This weekend I was forced to throw away a cardboard box. So what, I hear you think and I agree, but it being Sunday and me being in a hazy reflective Sunday afternoon state of mind (nono, no alcohol yet) and the box being the specific cardboard box it is (or rather was), I started thinking of the box’s significance for the future.

This is a picture of the cardboard box in question in its final state, just before it got thrown into the recycler.
End of an era in my garage
As you can see it’s got a large sticker saying ‘Oracle’ above the word ‘KABELS’ written in my clunky handwriting. The word kabels doesn’t really matter, it just indicates the box was used to hold various lengths of electrical wire. Next to the Oracle sticker and the kabels text you’ll see a partially torn label from ‘Donelly Documentation Services’ in Ireland. It is in fact the box that was used to ship my very first set of Oracle manuals (for release 6 of the Database and probably 3.0 of Forms and more contemporary tools) in April 1992. Since that time the box used to hold manuals for consecutive releases of Oracle products stored in the trunk of my car. I used to take them with me to customers on various assignments.
Later I left Oracle and the box was used in two removals (it was made of remarkably sturdy material) and ended up in my garage holding bits and pieces of electrical wire.
Last Sunday I dragged it off a shelf and finally its corners tore, spilling cables all over me and the garage floor. Exit box.

This is getting to be a long-winded intro, I know, but there is an interesting similarity between the box and the changes in the tools I use in my professional life. After starting as an Oracle specialist I worked on Java software, which of course became Oracle software later. Last Sunday I realized I was writing Java-ish software again for the first time in about a year. Android, but still.
I haven’t been using any of the software I grew up with for quite a while now. Relational databases are being replaced by key-value stores and even flat files. JEE servers changed into Spray and Akka. Scala and functional and reactive programming constructs rather than object oriented Java and its bolted-on lambda extensions. Angular-JS instead of, hey wait a minute, that looks like a fat client all over again (wondering what’s going to happen to it). Server software I work with is easy to install, unpacking a zip file instead of 500+ pages of manual guaranteed only on an outdated Linux version. Modern software runs on simple Linux boxes instead of high end hardware with 7 digit price tags. New and innovative solutions replace software with slow release cycles. Open source is definitely leading the industry.

I wonder, does the end of the box forebode the end of a whole class of software? And like the cardboard box, will some bits and pieces come back in other products? I’m just hoping that it won’t take 22 years to get rid of the last generation of software.

On quality, beauty and elegance. Parting words of a fellow Xebian

Wed, 02/26/2014 - 17:27

This week we said goodbye to a long-time colleague, Luuk Dijkhuis, senior consultant with a history in multiple Xebia units. He agreed to let me share his parting speech with you, here goes:


My dear Xebians, thank you for your kind words. Now, I could say, “thank you, it’s been six nice years, bye bye”, but you deserve more than that. I want to talk a bit about one of our key values. I will keep it short, don’t worry.

So. Quality without compromise. What kind of nonsense is that? Quality is always a compromise, or you will never get anything live, will you. So what ARE we on about?

Like the poet John Keats said, “beauty is truth, truth beauty, that’s all you know on earth and all you need to know”. He said it specifically about the concentrated and timeless austerity of a Grecian urn, but in general there is something to that combination of beauty and truth that resonates.

As most of you know, I have started out as a musician, and I have always had a close relationship to beauty and aesthetics in general. If you want to produce beautiful things or sounds then you must try to see beauty everywhere, that is to say, you must open your eyes to absorb all kinds of it, in order to be able later to produce it yourself. And indeed when you do, it does seem like there is a relationship between truth and beauty. Although the well versed cynic will always be ready to point out some counter examples.

The famous scientist Paul Dirac had a special thing for beauty; he was convinced that mathematics could only be correct when it is beautiful. He said: “What makes the theory of relativity so acceptable to physicists is its great mathematical beauty. This is a quality which cannot be defined, any more than beauty in art can be defined, but which people who study mathematics usually have no difficulty in appreciating.”

And it’s the same here in our trade, be it software architecture, or process, or actual code: this notion of beauty, of elegance let’s say, plays an important part in how we do things right. A well known quote of Edsger Dijkstra is “Elegance is not a dispensable luxury but a quality that decides between success and failure”. There you are. You guys all know that, and not only do you know it, you breathe it, you live it. It’s not the actual “Quality” itself that is without compromise, it’s all about the relentless pursuit of it. In our branch, actual quality is obviously contextual, it is in the end all about “fit for purpose”, but elegance in its creation is what makes it stand out and shine.

It was an absolutely exhilarating experience in 2007 to suddenly be plunged into a community that had that kind of attitude towards the things they were doing, and that first sense of “wow, this is great stuff” has never left me. I am truly proud to have been a part of you, of this, of what I have come to see as my extended family of Xebians, with all their crazy quirks and oddities. Hey, I never said that beauty is about being normal :-)

But now it is time for me to leave you all, not because I have had enough of you, but because there are other things in my life that need to be tended to now. So I say: “goodbye, see you around”, not “farewell”, and I know you will keep that spark going, that search for elegance. It’s only as a community that you can pull this off, so please, despite all that splitting off of Business Units stuff, PLEASE keep doing things right together. I will miss you. Thank you.


David Farley about Continuous Delivery

Mon, 02/24/2014 - 11:01

In most complex organisations, Continuous Delivery has become an essential process. Therefore Xebia wants to share its knowledge and experience in a book on Continuous Delivery. The book, written by Andrew Phillips, Michiel Sens, Adriaan de Jonge and Mark van Holsteijn, is targeted at the (IT) manager who has the challenging job of speeding up the software development process within an organisation. On Thursday the 13th there was a book launch with inspiring guest speakers from Kadaster and Rabobank, and David Farley was the amazing final keynote speaker of the evening. Click here for the entire live presentation of David Farley.

Duration: 53:31

Promises and design patterns in AngularJS

Sun, 02/23/2014 - 14:23

The traditional way to deal with asynchronous tasks in Javascript is callbacks: call a method and give it a function reference to execute once that method is done.

$.get('api/gizmo/42', function(gizmo) {
  console.log(gizmo); // or whatever
});

This is pretty neat, but it has some drawbacks; for one, combining or chaining multiple asynchronous processes is tricky: it either leads to a lot of boilerplate code, or to what's known as callback hell (callbacks and calls nested in each other):

$.get('api/gizmo/42', function(gizmo) {
  $.get('api/foobars/' + gizmo, function(foobar) {
    $.get('api/barbaz/' + foobar, function(bazbar) {
      doSomethingWith(gizmo, foobar, bazbar);
    }, errorCallback);
  }, errorCallback);
}, errorCallback);

You get the idea. In Javascript, however, there is an alternative for dealing with asynchronous code: Futures, although in Javascript they're usually referred to as Promises. The CommonJS standards committee has released a spec, called Promises/A, that defines this API.

The concept behind promises is pretty simple, and has two components:

  • Deferreds, representing units of work, and
  • Promises, representing data from those Deferreds.


Basically, you use a Deferred as a communications object to signal the start, progress, and completion of work.

A Promise in turn is an object output by a Deferred that represents data; it has a certain State (pending, fulfilled or rejected), and Handlers, or callback methods that should be called once a promise resolves, rejects, or gives a progress update.

An important thing that differentiates promises from callbacks is that you can attach a handler after the promise state goes to resolved. This allows you to pass data that may or may not be there yet around in your application, cache it, etc, so that its consumers can perform operations on the data either immediately or as soon as it arrives.
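This late-attachment property is easy to see in a small sketch. Standard ES Promises are used here rather than $q, and the one-slot cache is a hypothetical example, but the behaviour - handing the same promise to consumers before and after resolution - is the same:

```javascript
// A hypothetical one-slot cache: the first call starts the work,
// later calls get the very same promise, even after it has resolved.
function makeCache(fetchFn) {
  var promise = null;
  return {
    get: function () {
      if (!promise) promise = fetchFn();
      return promise;
    }
  };
}

var cache = makeCache(function () {
  return Promise.resolve({ id: 42 });
});

// Both consumers attach handlers independently; the second one may
// well attach after the promise has already resolved.
cache.get().then(function (data) { console.log('first consumer:', data.id); });
cache.get().then(function (data) { console.log('second consumer:', data.id); });
```

Each consumer's handler fires with the same data, regardless of whether the data arrived before or after the handler was attached.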

For the remainder of this article we'll talk about promises and such in the context of AngularJS. AngularJS relies heavily on promises throughout its codebase, both the framework and the application code you write in it. AngularJS uses its own implementation of the Promises spec, the $q service, which in turn is a lightweight implementation of the Q library.

$q implements all of the Deferred / Promise methods described above, plus a few in $q itself: $q.defer(), which creates a new Deferred object; $q.all(), which allows you to wait for multiple promises to resolve, and the methods $q.when() and $q.reject(), utility methods we'll go into later on.

$q.defer() returns a Deferred object, which has the methods resolve(), reject(), and notify(). Deferred has a property promise, which is the promise object that can be passed around the application.

The promise object in turn has three methods of its own. The first, .then(), is the only method required by the Promises spec; it takes three callbacks as arguments: one for success, one for failure, and one for notifications.

$q adds two methods on top of the Promise spec though: catch(), which can be used as a centralized function called when any promise in a promise chain fails, and finally(), a method that will always be called regardless of the success or failure of the promises. Note that these are not to be confused or used in combination with Javascript's own try/catch exception handling: a rejected promise is not a thrown exception, and a synchronous try/catch block around asynchronous code will not catch it.

Simple promise example

Here's a basic example of using $q, Deferred, and Promise in one. As a disclaimer, none of the code examples in this post have been tested; they also lack the appropriate angular service and dependency definitions, etcetera. But they should provide a good enough example to start fiddling with them yourself.

First, we create a new unit of work by creating a Deferred object, using $q.defer():

var deferred = $q.defer();

Next, we'll grab the promise from the Deferred and attach some behavior to it.

var promise = deferred.promise;

promise.then(function success(data) {
  console.log('Success!', data);
}, function error(msg) {
  console.error('Failure!', msg);
});
Finally, we perform some fake work and indicate we're done by telling the deferred:

deferred.resolve('all done!');

Of course, that's not really asynchronous, so we can fake that using Angular's $timeout service (or Javascript's setTimeout, but prefer $timeout in Angular applications so you can mock and test it):

$timeout(function() {
  deferred.resolve('All done... eventually');
}, 1000);

And the fun part: we can attach multiple then()s to a single promise, as well as attach then()s after the promise has resolved:

var deferred = $q.defer();
var promise = deferred.promise;

// assign behavior before resolving
promise.then(function (data) {
  console.log('before:', data);
});

deferred.resolve('Oh look we\'re done already.');

// assign behavior after resolving
promise.then(function (data) {
  console.log('after:', data);
});
Now, what if some error occurred? We'll use deferred.reject(), which will cause the second argument of then() to be called. Just like callbacks.

var deferred = $q.defer();
var promise = deferred.promise;

promise.then(function success(data) {
  console.log('Success!', data);
}, function error(msg) {
  console.error('Failure!', msg);
});

deferred.reject('We failed :(');

As an alternative to passing a second argument to then(), you can chain it with a catch(), which will be called if anything goes wrong in the promise chain (more on chaining later):

promise
  .then(function success(data) {
    console.log('Success!', data);
  })
  .catch(function error(msg) {
    console.error('Failure!', msg);
  });

As an aside, for longer-term processes (like uploads, long calculations, batch operations, etc), you can use deferred.notify() and the third argument of then() to give the promise's listeners a status update:

var deferred = $q.defer();
var promise = deferred.promise;

promise
  .then(function success(data) {
    console.log(data);
  }, function error(error) {
    console.error(error);
  }, function notification(notification) {
    console.log(notification);
  });

var progress = 0;
var interval = $interval(function() {
  if (progress >= 100) {
    deferred.resolve('All done!');
    $interval.cancel(interval);
    return;
  }
  progress += 10;
  deferred.notify(progress + '%...');
}, 100);

Chaining promises

We've seen earlier that you can attach multiple handlers (then()) to a single promise. The nice part about the promise API is that it allows chaining of handlers:
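The original example snippet appears to have gone missing here; a minimal chain along these lines (sketched with a plain resolved Promise standing in for deferred.promise, and made-up handler names) would be:

```javascript
const results = [];

// small, single-purpose handlers, each doing one thing
function step1(data) { results.push('step1: ' + data); return data; }
function step2(data) { results.push('step2: ' + data); }
function handleError(err) { results.push('error: ' + err); }

// deferred.promise in the article; a plain resolved Promise stands in here
Promise.resolve('some data')
  .then(step1)    // each then() returns a new promise...
  .then(step2)    // ...so handlers can simply be chained
  .catch(handleError);
```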


For a simple example, this allows you to neatly separate your function calls into pure, single-purpose functions, instead of one-thing-does-all; double bonus if you can re-use those functions for multiple promise-like tasks, just like how you would chain functional methods (on lists and the like).

It becomes more powerful when you use the result of a previous asynchronous operation to trigger the next one. By default, a chain like the one above will pass the value returned by each callback to the next then(). Example:

var deferred = $q.defer();
var promise = deferred.promise;

promise
  .then(function(val) {
    console.log(val);
    return 'B';
  })
  .then(function(val) {
    console.log(val);
    return 'C';
  })
  .then(function(val) {
    console.log(val);
  });

deferred.resolve('A');

This will output the following to the console:

A
B
C

This is a simple example though. It becomes really powerful if your then() callback returns another promise. In that case, the next then() will only be executed once that promise resolves. This pattern can be used for serial HTTP requests, for example (where a request depends on the result of a previous one):

var deferred = $q.defer();
var promise = deferred.promise;

// resolve it after a second
$timeout(function() {
  deferred.resolve('one');
}, 1000);

  .then(function(one) {
    console.log('Promise one resolved with ', one);

    var anotherDeferred = $q.defer();

    // resolve after another second
    $timeout(function() {
      anotherDeferred.resolve('two');
    }, 1000);

    return anotherDeferred.promise;
  })
  .then(function(two) {
    console.log('Promise two resolved with ', two);
  });

In summary:

  • Promise chains will call the next 'then' in the chain with the return value of the previous 'then' callback (or undefined if none)
  • If a 'then' callback returns a promise object, the next 'then' will only execute if/when that promise resolves
  • A final 'catch' at the end of the chain will provide a single error handling point for the entire chain
  • A 'finally' at the end of the chain will always be executed regardless of promise resolving or rejection, for cleanup purposes.
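All four rules can be seen in one small chain (again using standard Promises, which behave the same way as $q here; the values are arbitrary):

```javascript
const trace = [];

Promise.resolve(1)
  .then(function (n) { return n + 1; })                   // plain return value: passed to the next then()
  .then(function (n) { return Promise.resolve(n * 10); }) // returned promise: next then() waits for it
  .then(function (n) { trace.push('result: ' + n); })     // receives 20
  .catch(function (err) { trace.push('error: ' + err); }) // single error handling point (not hit here)
  .finally(function () { trace.push('cleanup'); });       // always runs
```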
Parallel promises and 'promise-ifying' plain values

One method I mentioned briefly was $q.all(), which allows you to wait for multiple promises to resolve in parallel, with a single callback executed once all of them have resolved. In Angular, this method can be called in two ways: with an Array or with an Object. The Array variant takes an array of promises and calls the .then() callback with a single array of results, where the result of each promise corresponds with its index in the input array:

$q.all([promiseOne, promiseTwo, promiseThree])
  .then(function(results) {
    console.log(results[0], results[1], results[2]);
  });

The second variant accepts an Object of promises, allowing you to give names to those promises in your callback method (making them more descriptive):

$q.all({ first: promiseOne, second: promiseTwo, third: promiseThree })
  .then(function(results) {
    console.log(results.first, results.second, results.third);
  });

I would only recommend using the array notation if you can batch-process the result, i.e. if you treat the results equally. The object notation is more suitable for self-documenting code.

Another utility method is $q.when(), which is useful if you just want to create a promise out of a plain variable, or if you're simply not sure if you're dealing with a promise object.

$q.when('foo')
  .then(function(bar) {
    return 'bar';
  })
  .then(function(baz) {
    return 'baz';
  })
  .then(function(boz) {
    // well you get the idea.
  });

$q.when() is also useful for things like caching in services:

angular.module('myApp').service('MyService', function($q, MyResource) {

  var cachedSomething;

  this.getSomething = function() {
    if (cachedSomething) {
      return $q.when(cachedSomething);
    }

    // on first call, return the result of MyResource.get()
    // note that 'then()' is chainable / returns a promise,
    // so we can return that instead of a separate promise object
    return MyResource.get().$promise
      .then(function(something) {
        cachedSomething = something;
        return something;
      });
  };
});

And then call it like this:

MyService.getSomething()
  .then(function(something) {
    // do something with something
  });

Practical applications in AngularJS

Most I/O in Angular returns promises or promise-compatible ('then-able') objects, though often with a twist. $http's documentation indicates it returns an HttpPromise object, which is a Promise with two extra utility methods, probably so as not to scare off jQuery users too much. It defines the methods success() and error(), which correspond to the first and second argument of a then() callback, respectively.
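As an illustration only (this is not the actual $http source), such utility methods can be layered on top of a plain promise with a hypothetical decorate() helper:

```javascript
// Hypothetical sketch of how success()/error() can sit on top of then();
// NOT the real $http implementation, just the idea behind it.
function decorate(promise) {
  promise.success = function (fn) { promise.then(fn); return promise; };       // first then() argument
  promise.error = function (fn) { promise.then(null, fn); return promise; };   // second then() argument
  return promise;
}

const out = [];
decorate(Promise.resolve({ answer: 42 }))
  .success(function (data) { out.push('got ' + data.answer); })
  .error(function (msg) { out.push('failed: ' + msg); });
```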

Angular's $resource service, a wrapper around $http for REST endpoints, is also a bit odd; the generated methods (get(), save() and so forth) accept a second and third argument as success and error callbacks, while they also return an object that gets populated with the requested data once the request resolves. It does not return a promise object directly; instead, the object returned by a resource's get() method has a property $promise, which exposes the backing promise object.

On the one hand it's inconsistent with $http and with the idea that everything in Angular is or should be a promise, but on the other hand it allows a developer to simply assign the result of $resource.get() to the $scope. Previously, a developer could assign any promise to the $scope, but since Angular 1.2 that has been deprecated: see this commit where it was deprecated.

Personally, I like to have a consistent API, so I wrap pretty much all I/O in a Service that always returns a promise object; partly for that consistency, and partly because calling a $resource directly is often a bit rough around the edges. Here's a random example:

angular.module('myApp')
  .service('BarResource', function ($resource) {
    return $resource('api/bar/:id');
  })
  .service('BarService', function (BarResource) {

    this.getBar = function (id) {
      return BarResource.get({
        id: id
      }).$promise;
    };
  });


This example is a bit obscure because passing the id argument to BarResource looks a bit duplicated, but it makes sense if you've got a complex object and need to call a service with just an ID property from it. The advantage of the above is that in your controller, you know that anything you get from a Service will always be a promise object; you don't have to wonder whether it's a promise, a resource result or an HttpPromise, which in turn makes your code more consistent and predictable. And since Javascript is weakly typed and, as far as I know, there's no IDE out there yet that can tell you what type a method returns without developer-added annotations, that's pretty important.

Practical chaining example

One part of the codebase we are currently working on has calls that rely on the results of a previous call. Promises are ideal for that, and allow for an easy-to-read code style as long as you keep your code clean. Consider the following example:

angular.module('myApp')
  .controller('CheckoutCtrl', function($scope, $log, CustomerService, CartService, CheckoutService) {

    function calculateTotals(cart) { = cart.products.reduce(function(total, current) {
        return total + current.price;
      }, 0);
      return cart;
    }

    CustomerService.getCustomer()
      .then(CartService.getCart) // getCart() needs a customer object, returns a cart
      .then(calculateTotals)
      .then(CheckoutService.createCheckout) // createCheckout() needs a cart object, returns a checkout object
      .then(function(checkout) {
        $scope.checkout = checkout;
      })
      .catch($log.error);
  });


This combines getting data asynchronously (customers, carts, creating a checkout) with processing data synchronously (calculateTotals); the implementation, however, doesn't know or need to know whether those various services are async or not: it just waits for the methods to complete, async or not. In this case, getCart() could fetch data from local storage, createCheckout() could perform an HTTP request to make sure the products are all in stock, etcetera. But from the consumer's point of view (the one making the calls), it doesn't matter; it Just Works. And it clearly states what it's doing, just as long as you remember that the result of the previous then() is passed to the next.

And of course it's self-documenting and concise.

Testing promise-based code

Testing promises is easy enough. You could go the hard way and have your test create mock objects that expose a then() method which is called directly. To keep things simple, however, I just use $q to create promises: it's a very fast library and you're guaranteed not to be missing any promise implementation subtleties. The following spec tries to demonstrate how to mock out the various services used above. Note that it is rather verbose and long, but I haven't found a way around that yet, outside of making utility methods for promise creation (pointers to making it shorter / more concise are welcome).

describe('The Checkout controller', function() {

  beforeEach(module('myApp'));

  it('should do something with promises', inject(function($controller, $q, $rootScope) {

    // create mocks; in this case I use jasmine, which has been good enough for me so far as a mocking library.
    var CustomerService = jasmine.createSpyObj('CustomerService', ['getCustomer']);
    var CartService = jasmine.createSpyObj('CartService', ['getCart']);
    var CheckoutService = jasmine.createSpyObj('CheckoutService', ['createCheckout']);

    var $scope = $rootScope.$new();
    var $log = jasmine.createSpyObj('$log', ['error']);

    // Create deferreds for each of the (promise-based) services
    var customerServiceDeferred = $q.defer();
    var cartServiceDeferred = $q.defer();
    var checkoutServiceDeferred = $q.defer();

    // Have the mocks return their respective deferred's promises (jasmine 2.x spy syntax)
    CustomerService.getCustomer.and.returnValue(customerServiceDeferred.promise);
    CartService.getCart.and.returnValue(cartServiceDeferred.promise);
    CheckoutService.createCheckout.and.returnValue(checkoutServiceDeferred.promise);

    // Create the controller; this will trigger the first call (getCustomer) to be executed,
    // and it will hold until we start resolving promises.
    $controller("CheckoutCtrl", {
      $scope: $scope,
      $log: $log,
      CustomerService: CustomerService,
      CartService: CartService,
      CheckoutService: CheckoutService
    });

    // Resolve the first customer.
    var firstCustomer = {id: "customer 1"};
    customerServiceDeferred.resolve(firstCustomer);

    // ... However: this *will not* trigger the 'then()' callback to be called yet;
    // we need to tell Angular to go and run a cycle first:
    $rootScope.$digest();

    expect(CartService.getCart).toHaveBeenCalledWith(firstCustomer);

    // setup the next promise resolution
    var cart = {products: [ { price: 1 }, { price: 2 } ]};
    cartServiceDeferred.resolve(cart);

    // apply the next 'then'
    $rootScope.$digest();

    var expectedCart = angular.copy(cart); = 3;
    expect(CheckoutService.createCheckout).toHaveBeenCalledWith(expectedCart);

    // Resolve the checkout service
    var checkout = {total: 3}; // doesn't really matter
    checkoutServiceDeferred.resolve(checkout);

    // apply the next 'then'
    $rootScope.$digest();

    expect($scope.checkout).toBe(checkout);
  }));
});

As you can see, testing promise code is about ten times as long as the code itself; I don't know if / how to have the same power in less code, but, maybe there's a library out there I haven't found (or made) yet.

To get full test coverage, one would have to write tests in which each of the three services rejects, one after the other, to make sure the error is logged. While not clearly visible in the code, the process actually has a lot of branches; every promise can, after all, resolve or reject. But that level of testing granularity is up to you in the end.
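Exercising a rejection branch looks much like the success path; here's a condensed sketch using plain Promises and a stand-in for the $log.error spy (not a jasmine spec):

```javascript
const logged = [];
function logError(err) { logged.push(err); } // stands in for the $log.error mock

// the same kind of chain as the controller, with the first "service" rejecting
Promise.reject('customer service down')
  .then(function () { logged.push('should not happen'); }) // skipped: chain is rejected
  .catch(logError);                                        // the error ends up in the log
```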

I hope this article gives people some insight into promises and how they can be used in any Angular application. I think I've only scratched the surface on the possibilities, both in this article and in the AngularJS projects I've done so far; for such a simple API and such simple concepts, the power and impact promises have on most Javascript applications is baffling. Combined with high-level functional utilities and a clean codebase, promises allow you to write your application in a clean, maintainable and easily altered fashion; add a handler, move them around, change the implementation, all these things are easy to do and comprehend if you've got promises under control.

With that in mind, it's rather odd that promises were scrapped from NodeJS early on in its development in favor of the current callback nature; I haven't dived into it completely yet, but it seems it had performance issues that weren't compatible with Node's own goals. I do think it makes sense though, if you consider NodeJS to be a low-level library; there are plenty of libraries out there that add the higher-level promises API to Node (like the aforementioned Q).

Another note: I wrote this post with AngularJS in mind, but promises and promise-like programming have been possible for a couple of years now in jQuery, the grandfather of Javascript libraries; Deferreds were added in jQuery 1.5 (January 2011). Not all plugins may be using them consistently though.

Similarly, Backbone.js' Model API also exposes promises in its methods (save() etc.), though from what I understand it doesn't really work well alongside its model events. I might be wrong though, it's been a while.

I would definitely recommend aiming for a promise-based front-end application whenever developing a new webapp, it makes the code so much cleaner, especially combined with functional programming paradigms. More functional programming patterns can be found in Reginald Braithwaite's Javascript Allongé book, free to read on LeanPub; some of those should also be useful in writing promise-based code.

PhantomJS data mining & BASH data analysis

Mon, 02/10/2014 - 23:42


As a moderately large company we rent mail boxes for our employees at a hosting provider; a lot of mailboxes. These come in varying sizes, and naturally the larger you go the more expensive they become.

The other day I received an email requesting several new accounts and set upon creating these when I came across what seemed to be a rather inefficient allocation. The user had a mid-size tier, costing about €150 per year, while he could seemingly make do with the very smallest tier of about €50 annually.

This, of course, made me curious about our other allocations and I went looking for an overview of all our mail accounts' usage. No such luck. The only way to see how much of the rented space was actually being used was by navigating the - non-rest and stateful - web interface of our hosting provider and looking up the statistics for each user individually.

Challenge accepted!

I've gotten tired of using Selenium lately, and have been meaning to look into some of the PhantomJS-based alternatives. My eye had fallen on CasperJS and I decided to give it a spin.

Using brew I downloaded the latest development release:

$ brew update
$ brew install casperjs --devel

The script in its entirety can be found in this gist, but walking through it section by section:

var casper = require('casper').create({ verbose: true, logLevel: 'info' });
var credentials = JSON.parse(require('fs').read('./credentials.json'));
var url = 'private';

casper.start(url + '/user/login', function() {
  this.fill('form#login_form', credentials, true);
});

The first line initialises CasperJS, with some logging enabled. The second line reads in a simple json file containing the form fields and values of the login page, while the third line contains the base url of our hosting provider.

In the next section casper is told to navigate to said url's login page, fill in the specified form with the credentials and submit. It's that easy!

casper.thenOpen(url + '/subscription/config/', function() {
  this.getElementsInfo('tr td  a').forEach(function (node) {
    if (node.attributes.nicetitle === "View") {

All logged in, it's time to navigate to the exchange's overview page. Here every user's account details are linked to in a node with the attribute nicetitle="View". Naturally, we want to iterate over these. This is where a small hitch in the plan was encountered: the HTML is completely unstructured. Simply a table of varying dimensions with label, value pairs. I decide to postpone the problem and for now simply fetch the entire element:

      casper.thenOpen(url + node.attributes.href, function() {
        require('fs').write('output', JSON.stringify(this.getElementInfo('div.contentleft').html), 'a');
      });
    }
  });
});

Ending it all with a:;

It's time to dive in to the console and give it a spin:

$ casperjs fetch.js

Excellent! Casper is spinning along, discovering and fetching the data, and I can see a tail of the generated output file streaming in. Unreadable, but the data is all there, bringing us nicely to the second topic of this post.

BASH data analysis:

To begin with, let's put this malformed HTML through tidy. Since we're not interested in the many warnings tidy will give us, we'll redirect stderr to /dev/null, yielding:

tidy <fetched 2>/dev/null
<div class="\&quot;clear\&quot;"></div>
<div>\n\t\t\t<label>Current mailbox size</label>\n\t\t\t1921
<div>\n\t\t\t<label>Warning quota</label>\n\t\t\t2250
<div>\n\t\t\t<label>Block send quota</label>\n\t\t\t2375

Since, as mentioned before, all interesting fields contain a 'label' tag, let's grep for those:

tidy <fetched 2>/dev/null | grep label
<div>\n\t\t\t<label>Email aliases</label>\n\t\t\t
<div>\n\t\t\t<label>Current mailbox size</label>\n\t\t\t1921
<div>\n\t\t\t<label>Warning quota</label>\n\t\t\t2250
<div>\n\t\t\t<label>Block send quota</label>\n\t\t\t2375
<div>\n\t\t\t<label>Block send and receive
quota</label>\n\t\t\t2500 MB\n\t\t</div>
<div>\n\t\t\t<label>Pop enabled</label>\n\t\t\t<img alt="" src="<br" />

Okay, this is starting to look like something. Let's trim away everything before the closing label tags, and remove the \n and \t characters.

tidy <fetched 2>/dev/null | grep label | sed 's/^.*\/label>//' | 
sed 's/\\[nt]//g'
2500 MB</label></div>
<img alt="" src="<br" /><img alt="" src="<br" />...

Let's filter out some of the uninteresting lines to get:

tidy <fetched 2>/dev/null | grep label | sed 's/^.*\/label>//' | 
sed 's/\\[nt]//g' | grep -v 'label\|img\|<br>\|HOSTED'
Sunil Prakash
Sunil Prakash
2500 MB

Removing the trailing div and the MB suffix, we finally have sanitised data:

tidy <fetched 2>/dev/null | grep label | sed 's/^.*\/label>//' | 
sed 's/\\[nt]//g' | grep -v 'label\|img\|<br>\|HOSTED' | sed 's/<.*//' | sed 's/ MB$//'
Sunil Prakash
Sunil Prakash

This we'll put in a file called 'data'. Pasting these lines together in sets of eight, separated by commas and with the empty fields padded with a period, we get:

cat data | paste -d , - - - - - - - - | sed 's/,,/,.,/' | head -n 2

Piping this through awk, setting the field delimiter to a comma and calculating the fifth field (the current mailbox size) as a percentage of the eighth (the quota) results in:

cat data | paste -d , - - - - - - - - | sed 's/,,/,.,/' | 
awk -F, '{print $0 "," $5/$8*100"%" }' | head -n 2

Finally it's time to sort by the last comma separated field, the utilisation percentage, in reverse numerical order:

cat data | paste -d , - - - - - - - - | sed 's/,,/,.,/' | awk -F, '{print $0 "," $5/$8*100 }' |
sort -t, -k +9 -n -r | tail -n 2

And we already find two empty mailboxes, which at the very least could be downgraded to the cheapest package! To make things more readable, let's lay them out in a nice column

cat data | paste -d , - - - - - - - - | sed 's/,,/,.,/' | awk -F, '{print $0 "," $5/$8*100 }' |
sort -t, -k +9 -n -r | column -t -s , | tail -n 5
REDACTED REDACTED REDACTED REDACTED 1      2250  2375  2500   0.04 

And there we have it, an overview of the usage of all our mailboxes. Now finally let's use awk to filter by those mailboxes using a package larger than the minimum (250MB), and utilising less than ten percent, as these can definitely be downgraded:

cat data | paste -d , - - - - - - - - | sed 's/,,/,.,/' | 
awk -F, '{print $0 "," $5/$8*100 }' | sort -t, -k +9 -n -r | 
awk -F, '{ if ($8 > 250 && $9 < 10) print $3 "," $9"%" }' | 
column -t -s,

And there we go, a whole list of accounts that can easily be saved on. Let's finish off with a quick calculation of how much we just saved:

cat data | paste -d , - - - - - - - - | sed 's/,,/,.,/' | 
awk -F, '{ if ($8 > 250 && ($5/$8)*100 < 10) print $0}' | 
wc -l | xargs echo "100 *" | bc

There we go: looks like two hours of playing with CasperJS, tidy, sed, awk and grep just saved us €6100, and probably a factor two more once I inspect the data a bit closer. Not a bad result for 18 lines of javascript and a few lines of BASH!

ngClass expressions in AngularJS

Thu, 01/30/2014 - 23:52

The ngClass directive allows you to dynamically set CSS classes by data-binding an expression. The documentation is pretty clear on how to write the different kinds of expressions, but not everything is documented. I will briefly explain the documented expressions and also the one that is not documented.

String syntax

The string syntax is straightforward: the value of the input element is directly added as a CSS class on the legend element.

<!DOCTYPE html>
<html ng-app>
  <head>
    <link data-require="bootstrap-css@3.0.3" data-semver="3.0.3" rel="stylesheet" href="//" />
    <script data-require="angular.js@1.2.10" data-semver="1.2.10" src=""></script>
  </head>
  <body>
    <div class="container">
      <div class="row">
        <form role="form">
          <legend ng-class="text">String syntax</legend>
          <div class="form-group">
            <input class="form-control" ng-model="text" placeholder="Type: text-info, text-warning or text-danger"><br>
          </div>
        </form>
      </div>
    </div>
  </body>
</html>


Array syntax

The array syntax behaves the same as the string syntax, except that it allows you to add more than one CSS class to a single HTML element.

<!DOCTYPE html>
<html ng-app>
  <head>
    <link data-require="bootstrap-css@3.0.3" data-semver="3.0.3" rel="stylesheet" href="//" />
    <script data-require="angular.js@1.2.10" data-semver="1.2.10" src=""></script>
  </head>
  <body>
    <div class="container">
      <div class="row">
        <form role="form">
          <legend ng-class="[label, text]">Array syntax</legend>
          <div class="form-group">
            <input class="form-control" ng-model="label" placeholder="Type: label-info, label-warning or label-danger"><br>
            <input class="form-control" ng-model="text" placeholder="Type: text-muted or text-success"><br>
          </div>
        </form>
      </div>
    </div>
  </body>
</html>


Map syntax

The map syntax allows you to set CSS classes based on comma-separated key-value pairs. In the following example the CSS class label-info is added when the value of info is true. If the values of info and muted are both true, then both CSS classes are added. So the class is added for every key whose expression evaluates to true.

<!DOCTYPE html>
<html ng-app>
  <head>
    <link data-require="bootstrap-css@3.0.3" data-semver="3.0.3" rel="stylesheet" href="//" />
    <script data-require="angular.js@1.2.10" data-semver="1.2.10" src=""></script>
  </head>
  <body>
    <div class="container">
      <div class="row">
        <legend ng-class="{'label-info': info, 'text-muted': muted}">Map syntax</legend>
        <div class="form-group">
          <input type="checkbox" ng-model="info"> info (apply "label-info" class)<br>
          <input type="checkbox" ng-model="muted"> muted (apply "text-muted" class)
        </div>
      </div>
    </div>
  </body>
</html>


Undocumented expression syntax

The previous examples are well documented and speak for themselves. But what if you want to mark a required input element of a form as invalid after the form has been submitted? Though it is not documented, it is possible.

First we have to keep track of whether the form has been submitted. When the controller is initialized, the variable submitted is set to false. After the form is submitted it is set to true, and we check whether the form is invalid. If it is, we return to the page; otherwise we can do something with the form data, like posting it to a backend.

'use strict';

angular.module('myApp', []).
  controller('MyAppCtrl', function() {
    this.submitted = false;
    var self = this;
    this.submit = function(form) {
      self.submitted = true;
      if (form.$invalid) {
      } else {
        // Do something with the form like posting it to the backend

So how can we write an expression for the ngClass directive that checks if the scope variable submitted is true and if the value of the input element is invalid? The HTML provides the solution.

<!doctype html>
<html ng-app="myApp">
  <head>
    <link href="//" rel="stylesheet"/>
    <script src=""></script>
  </head>
  <body ng-controller="MyAppCtrl as ctrl">
    <div class="container">
      <div class="row">
        <form class="form-horizontal" name="myForm" novalidate>
          <div class="form-group" ng-class="{true: 'has-error'}[ctrl.submitted && myForm.myField.$error.required]">
            <label for="myField" class="control-label">My Field</label>
            <input type="text" name="myField" class="form-control" id="myField" ng-model="myField" required/>
          </div>
          <div class="form-group">
            <button type="submit" class="btn btn-primary" ng-click="ctrl.submit(myForm)">Save</button>
          </div>
        </form>
      </div>
    </div>
    <script src="script.js"></script>
  </body>
</html>

How does it work? The expression between the square brackets is evaluated; if the result of that expression equals true, the lookup in the {true: 'has-error'} map yields 'has-error' and that CSS class is added. That is it. To see the code in action, go to this Plunker.
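Stripped of Angular, the trick is plain Javascript object indexing: the bracket expression is evaluated and coerced to a string key (the classFor() helper here is just for illustration):

```javascript
// The ngClass expression is plain object indexing: the bracket expression is
// evaluated and used as a key into the {true: 'has-error'} object literal.
// The boolean result is coerced to the string key 'true' or 'false'.
const classFor = function (submitted, required) {
  return {true: 'has-error'}[submitted && required];
};

console.log(classFor(true, true));  // → 'has-error'
console.log(classFor(true, false)); // → undefined: no class is added
```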

Speedy FitNesse roundtrips during development

Mon, 01/06/2014 - 10:11

FitNesse is an acceptance testing framework. It allows business users, testers and developers to collaborate on executable specifications (for example in BDD style and/or implementing Specification by Example), and allows for testing both the back-end and the front-end. Aside from partly automating acceptance testing and as a tool to help build a common understanding between developers and business users, a selection of the tests from a FitNesse test suite often doubles as a regression test suite.

In contrast to unit tests, FitNesse tests should usually be focused, but still test a feature in an 'end-to-end' way. It is not uncommon for a FitNesse test to, for example, start mocked versions of external systems, start a Spring context, and connect to a real test database rather than an in-memory one.

Running FitNesse during development

The downside of end-to-end testing is that setting up all this context makes running a single test locally relatively slow. This is part of the reason you should keep in mind the testing pyramid while writing tests, and write tests at the lowest possible level (though not lower).

Still, when used correctly, FitNesse tests can provide enormous value. Luckily, versions of FitNesse since 06-01-2014 make it relatively easy to significantly reduce this round-trip time.

A bit of background

Most modern FitNesse tests are written using the SLIM test system. When executing a test, a separate 'service' process is spun up to actually execute the code under test ('fixtures' and code-under-test). This has a couple of advantages: the classpath of the service process can be kept relatively clean; in fact, you can even use a service process written in an entirely different language, such as .Net or Ruby, as long as it implements the SLIM protocol.

In the common case of using the Java SLIM service, however, this means spinning up a JVM, loading your classes into the classloader, and possibly additional tasks such as initializing part of your backend and mocking services. This can take a while, and slows down your development roundtrip, making FitNesse less pleasant to work with.

How to speed up your FitNesse round-trip times

One way to tremendously speed up test round-trip times is to, instead of initializing the complete context every time you run a test, start the SlimService manually and keep it running. When done from your IDE this allows you to take advantage of selective reloading of updated classes and easily setting breakpoints.

To locally use FitNesse in this way, put the FitNesse non-standalone jar on your classpath, and start the main method of fitnesse.slim.SlimService with parameters like '-d 8090': '-d' is to prevent the SlimService from shutting down after the first test disconnects, '8090' specifies the port number on which to listen.

Example: java -cp *yourclasspath* fitnesse.slim.SlimService -d 8090

Now, when starting the FitNesse web UI, use the 'slim.port' property to specify the port to connect to and set 'slim.pool.size' to '1', and FitNesse will connect to the already-running SLIM service instead of spinning up a new process each time.

Example: java -Dslim.port=8090 -Dslim.pool.size=1 -jar fitnesse-standalone.jar -p 8000 -e 0 

We've seen improvements in the time it takes to re-run one of our tests from a typical ~15 seconds to about 2-3 seconds. Not only a productivity improvement, but more importantly this makes it much more pleasant to use FitNesse tests where they make sense.

Phoney Deadlines are Deadly for Achieving Results

Sat, 12/28/2013 - 17:55

Ever had deadlines that had to be met, causing short-term decisions to be made? Ever worked overtime with your team to meet an important deadline, after which the delivered product wasn't used for a couple of weeks?

I believe we all know these examples where deadlines are imposed on the team for questionable reasons.

Yet, deadlines are part of reality and we have to deal with them. Certainly, there is business value in meeting them but they also have costs.

The Never Ending Story of Shifting Deadlines…..

Some time ago I was involved in a project for delivering personalised advertisements on mobile phones. At that time this was quite revolutionary and we didn’t know how the market would react. Therefore, a team of skilled engineers and marketeers was assembled and we set out to deliver a prototype in a couple of months and test it in real life, i.e. with real phones and in the mobile network. This was a success and we got the assignment to make it into a commercial product version 1.0.
At this time there was no paying customer for the product yet and we built it targeted at multiple potential customers.

For the investment to make sense the deadline for delivering version 1.0 was set to 8 months.

The prototype worked fine but how to achieve a robust product when the product is scaled to millions of subscribers and thousands of advertisements per second? What architecture to use? Should we build upon the prototype or throw it away and start all over with the acquired knowledge?

A new architecture required us to use new technology, which would require training and time to become acquainted with it. Time we did not have, as the deadline was only 8 months away. We double-checked whether the deadline could be moved to a later date. Of course this wasn't possible, as it would invalidate the business case. We decided not to throw away the prototype but to enhance it further.

As the deadline approached, it became clear that we were not going to deliver a 1.0 product. The causes were multiple: the prototype's architecture did not quite fit the 1.0 needs, the scope changed along the way as marketing got new insights from the changing market, the team grew in size, and the integration with other network components took time as well.
So, the deadline was extended with another 6 months.

The deadline got shifted 2 more times.

This felt really bad. It felt like we had let down both the company and the product owner by not delivering on time. We had the best people on the team and already had a working prototype. How come we weren't able to deliver? What happened? What could we do to prevent this from happening a third time?

Then the new product was going to be announced at a large telecom conference. This was exactly what the product (and the team) needed: we still had a deadline, but this time there was a clear goal associated with it, namely a demo version for attracting the first customers! Moreover, there was only a small time window for delivering the product; missing the deadline would mean an important opportunity was lost, with severe impact on the business case. This time we made the deadline.

The conference was a success and we got our first customers. Of course new deadlines followed, but this time with clear goals related to the needs of specific customers.

The Effect Deadlines Have

Looking back, which is always easy, we could have finished the product much earlier if the initial deadline had been set to a much later date. Certainly, there was value in being able to deliver a product very fast, i.e. in having short-term deadlines. On the other hand there were also costs associated with these short-term deadlines, including:

  • struggling with new technologies, because we did not take the time for the necessary training and for gaining experience,
  • working with an architecture that did not quite fit, causing ever more workarounds,
  • relatively simple tasks becoming more complex over time.

In this case the short-term deadline put pressure on the team to deliver quickly, causing the team to take shortcuts along the way, which in turn caused delays and refactoring later on. Over time fewer results are delivered.
What makes this pattern hard to fix is that imposing a deadline does deliver short-term results, and therefore seems a good way to get results from the team.

This pattern is known as ‘Shifting the Burden’ [Wik] and is depicted below. In the example above the root cause is not trusting the team to get the job done. The distrust is addressed by imposing a deadline as a way to ‘force’ the team to deliver.


The balancing (B) loop on top is the short-term solution to the problem of getting results from the team. The ‘fear’ of lacking focus, and therefore results, leads to imposing a deadline, thereby increasing the focus (and results) of the team. The problem symptom is reduced, but it will reappear, causing an ‘addiction’ to the loop of imposing deadlines.

The fundamental solution of trusting the team, prioritising, and giving it goals is often overlooked. This fundamental solution is also less obvious, and it costs the organisation energy and effort to implement. Moreover, the short-term solution has unwanted side effects that in the long run (the slashed arrow) have a negative impact on the team’s results.
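As a rough illustration of these dynamics, the two loops can be sketched in a toy simulation. All rates and numbers below are invented purely for illustration, not measured from any real team: deadline pressure boosts output immediately but accumulates ‘debt’ from shortcuts, while the fundamental solution slowly builds the team’s capability.

```python
def simulate(strategy, periods):
    """Toy model of 'Shifting the Burden' (illustrative parameters only)."""
    capability = 1.0   # sustainable delivery rate of the team
    debt = 0.0         # accumulated shortcuts and workarounds
    delivered = 0.0
    for _ in range(periods):
        if strategy == "deadline":                   # symptomatic solution
            output = capability * 1.3 - 0.1 * debt   # pressure boosts output now...
            debt += 0.3                              # ...but shortcuts pile up as debt
        else:                                        # fundamental solution: vision + goals
            output = capability - 0.1 * debt
            capability += 0.05                       # team slowly gets better at delivering
        delivered += max(output, 0.0)
    return delivered

# Over a short horizon the deadline 'wins'; over a longer horizon it loses.
print(simulate("deadline", 3), simulate("goals", 3))
print(simulate("deadline", 24), simulate("goals", 24))
```

With these made-up parameters the deadline strategy delivers more in the first few periods, but over two years the goal-driven strategy delivers clearly more, which is exactly the shape of the archetype: short-term relief, long-term erosion.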

In the example above the fundamental solution consisted of setting and prioritising towards the goal of a successful demo at the conference. This worked because it was a short-term and realistic goal. Furthermore the urgency was evident to the team: there was not going to be a second chance in case this deadline was missed.

Another example
In practice I also encounter the situation in which deadlines are imposed on a team that seems to lack focus. The underlying problem is the lack of a (business) vision and goals. The symptom as experienced by the organisation is a lack of concrete results. In fact the team does work hard, but does so by working on multiple goals at the same time. Here, clear goals and prioritising which work should be done first will help.


Also in this example, imposing a deadline to ‘solve’ the problem has the unintended side effect of not addressing the underlying problem. This will make the problem of the team not delivering results reappear.

Goals & Deadlines

I call the deadlines in the examples of the previous section ‘phoney’ deadlines. When in practice a deadline is imposed, it usually also implies a fixed scope and a fixed date.

Deadlines should be related to the business case and induce large costs if they are not met. For the deadlines in the above examples this was not the case.

Examples of deadlines that have associated large costs if not met, are:

  • deadlines associated with a small market window for introducing a product (or set of features); the cost of missing the window is very high,
  • deadlines associated with implementation dates of laws; again, missing these deadlines severely harms the business,
  • hardware that reaches end of life,
  • support contracts that end.
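As a back-of-the-envelope sketch (the function and all numbers are hypothetical, purely to illustrate the idea), the difference between such real deadlines and phoney ones is the shape of the cost curve: a real deadline carries a large step cost when, say, a market window closes, while a phoney deadline attaches no real cost to the date itself.

```python
def delay_cost(weeks_late, weekly_value=0, window_weeks=None, opportunity_value=0):
    """Illustrative cost of shipping `weeks_late` weeks after the deadline."""
    cost = weekly_value * weeks_late              # linear loss while we are late
    if window_weeks is not None and weeks_late > window_weeks:
        cost += opportunity_value                 # market window missed entirely
    return cost

# Phoney deadline: nothing actually happens when the date slips.
print(delay_cost(6))                              # 0
# Real deadline: a market window that closes 4 weeks after the date.
print(delay_cost(6, weekly_value=10_000, window_weeks=4,
                 opportunity_value=1_000_000))    # 1060000
```

If missing the date costs (almost) nothing, as in the first call, the deadline is phoney and a goal would serve the team better.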

In the story above the ‘real’ deadline actually was 2 years instead of 8 months. The 8-month deadline was probably used as a management tool, with all good intentions, to get the team focussed on producing results. In fact, however, it caused the team to take shortcuts in order to meet the deadline.

Focus in teams is created by giving the team a goal: a series of subgoals leading to the end goal [Bur13]. Establish a vision and derive a series of (sub)goals to realise that vision. Relate these goals to the business case. Story mapping is one of the tools available for defining such a series of goals [Lit].


Avoid setting deadlines as a means to get results from a team. In the short term this will give results, but in the long run it negatively impacts the results that the organisation wants to achieve.

Reserve deadlines for events that have a very high Cost of Delay, i.e. events for which the cost of missing the deadline is very large.

Instead, set a vision (both for the organisation and the product) that is consistent with the fundamental solution. In addition, derive a series of goals and prioritise them to help the team focus on achieving results. Several techniques can be used to derive such a series of goals, like Story Mapping and Goal-Impact Mapping.

References
[Bur13] Daniel Burm, 2013, The Secret to 3-Digit Productivity Growth
[Wik] Shifting the Burden, Wikipedia, Systems Thinking
[Lit] Lithespeed, Story Mapping

Composite User Stories

Wed, 12/18/2013 - 11:21

User stories

User stories must represent business value. That's why we use the well-known one-line description 'As an <actor> I want an <action>, so I can reach a <goal>'. It is both simple and powerful because it provides the team with a concrete, customer-related context for identifying the tasks relevant to reaching the required goal.

The stories pulled into the sprint by the team have a constraint on size: they should at least be small enough to fit into a sprint. This constraint can in some cases require a story to be broken down into smaller stories. There are some useful patterns for doing this, such as splitting by workflow steps, business rules, or data variations.

Complex systems

When dealing with large and complex systems consisting of many interacting components, the process of breaking down stories can pose problems even when following the standard guidelines. This is especially true when breaking down a story leads to stories related to components deep within the system, without a direct connection to the end user or the business goal. Those stories are usually inherently technical and far removed from the business perspective.

Let's say the team encounters a story like ‘As a user I want something really complex that doesn’t fit in a story, so I can do A’. The story requires the interaction of multiple components, so the team breaks it down into smaller stories like ‘As a user I want component X to expose operation Y, so I can do A’. There is a user and a business goal, but the action has no direct relation to either of them. It provides no meaningful context for this particular story, and it just doesn't feel right.

Constrained by time, and with no apparent solution provided by the known patterns, the team is likely to define a story like ‘Implement operation Y in component X’, which is basically a compound task description and provides no context at all.

Components as actors

Breaking the rules a bit, it is possible to use the principle of user story definition to provide meaningful context in these cases as well. The trick is to zoom into the system and define the sub-stories on another level, using a composite relation and making the components actors themselves, with their own goals: ‘As component Z I want to call operation Y on component X, so I can do B’ and ‘As component X I want to implement operation Y, so I can do C’.

There is no direct customer or business value in such a sub-story, but because it is linked by composition it is quite easy to trace back to the business value. Each of the subgoals contributes to the goal stated in the composite story: goal A is reached by reaching both goal B and goal C (A = B + C).

Linking the stories

There are several ways to link the stories to their composite story. You can number the stories like 1, 1a, 1b, ..., or indent the sticky notes with the sub user stories on the scrum board to visualise the relationship. To make the relation more explicit you can also extend the story description: As an <Actor> I want <Action>, so I can reach <Goal> and contribute to <Composite Goal>.
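To make the composition concrete, here is a minimal sketch of such linked stories. The `Story` class, its fields, and the `split`/`describe` methods are hypothetical, invented for this illustration and not part of any standard tool; the sketch only shows how a sub-story can render the extended template by tracing back to its composite goal.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Story:
    sid: str                      # e.g. "1", "1a", "1b"
    actor: str
    action: str
    goal: str
    parent: Optional["Story"] = None
    subs: List["Story"] = field(default_factory=list)

    def split(self, sid, actor, action, goal):
        """Create a sub-story linked by composition to this story."""
        sub = Story(sid, actor, action, goal, parent=self)
        self.subs.append(sub)
        return sub

    def describe(self):
        text = f"As {self.actor} I want {self.action}, so I can reach {self.goal}"
        if self.parent is not None:
            text += f" and contribute to {self.parent.goal}"
        return text

epic = Story("1", "a user", "something really complex", "goal A")
sub_b = epic.split("1a", "component Z", "to call operation Y on component X", "goal B")
sub_c = epic.split("1b", "component X", "to implement operation Y", "goal C")
print(sub_b.describe())
# As component Z I want to call operation Y on component X, so I can reach goal B and contribute to goal A
```

Because each sub-story keeps a link to its composite story, the rendered description always ends with the business goal it ultimately contributes to, which is exactly the traceability the composite pattern is after.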

Composite Stories


The emphasis of this approach is to maintain meaningful context while splitting (technical) user stories for complex systems with many interacting components. By viewing components as actors with their own goals, you can create meaningful user stories with relevant contexts. The composite structure creates logical relations between the stories in the composition and connects them to the business value. This way the team can maintain a consistent way of expressing functionality using user stories.


This method should only be applied when splitting user stories using the standard patterns is not possible. For instance, it breaks the rule that each story should deliver value to the end user, and it is likely that more than one sprint is needed to deliver a composite story. You should also ask yourself why there is such complexity in the system and whether it could be avoided. But for teams facing this complexity, and the challenge of splitting their stories today, this method can be the lesser of two evils.