Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Xebia Blog

Swift self reference in inner closure

12 hours 47 min ago

We all pretty much know how to safely use self within a Swift closure. But have you ever tried to use self inside a closure that's inside another closure? There is a big chance that the Swift compiler (Xcode 6.1.1) simply crashes, without giving you an error in the editor and without any error message. So how can you solve this problem?

The basic working closure

Before we dive into the problem and solution, let's first have a look at a working code sample that only uses a single closure. We can create a simple Swift Playground to run it and validate that it works.

class Runner {
    var closures: [() -> ()] = []

    func doSomethingAsync(completion: () -> ()) {
        closures = [completion]
        completion()
    }
}

class Playground {

    let runner = Runner()

    func works() {
        runner.doSomethingAsync { [weak self] in
            self?.printMessage("This works") ?? ()
        }
    }

    func printMessage(message: String) {
        println(message)
    }

    deinit {
        println("Deinit")
    }

}

struct Tester {
    var playground: Playground? = Playground()
}

var tester: Tester? = Tester()
tester?.playground?.works()
tester?.playground = nil

The doSomethingAsync method takes a closure that has no arguments and returns Void. This method doesn't really do anything, but imagine it would load data from a server and then call the completion closure once it's done loading. It does however create a strong reference to the closure by adding it to the closures array. That means we are only allowed to use a weak reference to self within our closure. Otherwise the Runner would keep a strong reference to the Playground instance and neither would ever be deallocated.

Luckily all is fine, and the "This works" message is printed in our playground output. A "Deinit" message is printed as well. The Tester construct is used to make sure that the Playground instance will actually be deallocated.

The failing situation

Let's make things slightly more complex. When our first async call is finished and calls our completion closure, we want to load something more and therefore need to create another closure within the outer closure. We add the method below to our Playground class. Keep in mind that the first closure doesn't have [weak self] since we only reference self in the inner closure.

func doesntWork() {
    weak var weakRunner = runner
    runner.doSomethingAsync {

        // do some stuff for which we don't need self

        weakRunner?.doSomethingAsync { [weak self] in
            self?.printMessage("This doesn't work") ?? ()
        } ?? ()
    }
}

Just adding it already makes the compiler crash, without giving us an error in the editor. We don't even need to run it.

(Screenshot: the Xcode playground crash)

It gives us the following message:

Communication with the playground service was interrupted unexpectedly.
The playground service "com.apple.dt.Xcode.Playground" may have generated a crash log.

And when you have such code in your normal project, the editor also doesn't give an error, but the build will fail with a Swift Compiler Error without a clear message of what's wrong:
Command /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swiftc failed with exit code 1

The solution

So how can we work around this problem? Quite simply, actually: we just need to move the [weak self] to the outermost closure.

func doesWork() {
    weak var weakRunner = runner
    runner.doSomethingAsync { [weak self] in

        weakRunner?.doSomethingAsync {
            self?.printMessage("This now works again") ?? ()
        } ?? ()
    }
}

This does mean that self can be non-nil in the outer closure while being nil in the inner closure. So don't write code like this:

    runner.doSomethingAsync { [weak self] in

        if self != nil {
            self!.printMessage("This is fine, self is not nil")

            weakRunner?.doSomethingAsync {
                self!.printMessage("This is not good, self could be nil now")
            } ?? ()
        }
    }

There is one more scenario you should be aware of. If you use an if let construction to safely unwrap self, you could again create a strong reference to self. The following sample illustrates this and will create a reference cycle, since our runner creates a strong reference to the Playground instance because of the inner closure.

    runner.doSomethingAsync { [weak self] in

        if let this = self {

            weakRunner?.doSomethingAsync {
                this.printMessage("Captures a strong reference to self")
            } ?? ()
        }
    }

This, too, is easily solved by using a weak reference to the unwrapped instance, now called this, in the inner closure.

runner.doSomethingAsync { [weak self] in

    if let this = self {

        weakRunner?.doSomethingAsync { [weak this] in
            this?.printMessage("We're good again") ?? ()
        } ?? ()
    }
}

Conclusion

Most people working with Swift know that it still contains quite a few bugs. In this case, Xcode should give us an error in the editor. If your editor doesn't complain but your Swift compiler fails, look for closures like this and correct them. Always play it safe and use [weak self] references within closures.

Agile: how hard can it be?!

Sun, 12/14/2014 - 13:48

Yesterday my colleagues and I ran an awesome workshop at the MIT conference in which we built a Rube Goldberg machine using Scrum and Extreme Engineering techniques. As agile coaches one would think that being an agile team should come naturally to us, but I'd like to share our pitfalls and insights with you, since we learned a lot about being an agile team and about what an incredibly powerful model a Rube Goldberg machine is for scaled agile product development.

If you're not the reading type, check out the video.

Rube ... what?

Goldberg. According to Wikipedia: "A Rube Goldberg machine is a contraption, invention, device or apparatus that is deliberately over-engineered or overdone to perform a very simple task in a very complicated fashion, usually including a chain reaction." The expression is named after American cartoonist and inventor Rube Goldberg (1883–1970).

In our case we set out on a 6 by 4 meter stage divided into 5 sections. Each section had a theme, like rolling, propulsion, swinging, lifting, etc. In a way it resembled a large software product that has to respond to some event in a (for outsiders) incredibly complex manner, by triggering a chain of sub-systems that ends in some kind of end result.

The workspace, scrum boards and build stuff

Extreme Scrum

During the day 5 teams worked in a total of 10 sprints to create the most incredible machine, experiencing everything one can find during "normal" product development. We had inexperienced team members, little to no documentation, legacy systems whose engineering principles were shrouded in mystery, teams that forgot to hold retrospectives, and interfaces that were ignored because the problem "lies with the other team". The huge time pressure of the relatively short sprints and the complexity of what we were trying to achieve created a pressure cooker that brought these problems to the surface faster than anything else, and with Scrum we were forced to face and fix them.

Team scrumboard

Build, fail, improve, build

"Most people do not listen with the intent to understand; they listen with the intent to reply." - Stephen R. Covey

Having 2 minutes to do your planning makes it very difficult to listen, especially when your head is buzzing with ideas, yet sometimes you have to slow down to speed up. Effective building requires you to really understand what your team mate is going to do. Pairing proved a very effective way to slow down your own brain and benefit from both rubber ducking and the insight of your team mate. Once our teams reached 4 members we could pair and drastically improve the outcome.

Deadweight with pneumatic fuse

Once the machine had reached a critical size, integration tests started to fail. The teams responded by testing multiple times during the sprint and fixing the broken build rather than adding new features. Especially in mechanical engineering that is not as simple as it sounds: sometimes a part of the machine would be "refactored", and we had not designed for a simple end-to-end test that could be applied continuously. It took a couple of sprints to get that right.

An MVP that made it to the final product

"Keep your code clean" we teach teams every day. "Don't accept technical or functional debt, you know it will slow you down in the end". Yet it is so tempting. Despite a Scrum Master and an "√úber Scrum Master" we still had a hard time keeping our workspace clean, refactor broken stuff out, optimise and simplify...

Have an awesome goal

"A true big hairy audacious goal is clear and compelling, serves as unifying focal point of effort, and acts as a clear catalyst for team spirit. It has a clear finish line, so the organization can know when it has achieved the goal; people like to shoot for finish lines." - Collins and Porras, Built to Last: Successful Habits of Visionary Companies

Truth is: we got lucky with the venue. Building a machine like this is awesome and inspiring in itself, and learning how Extreme Scrum can help teams build machines better, faster, more innovatively and with a whole lot more fun is a fantastic goal in its own right, but parallel to our build space was a true magnet, something that really focused the teams and made them go that extra mile.

The ultimate goal of the machine

Biggest take away

Building things is hard, and building under pressure is even harder. Even teams that are aware of the theory will be tempted to throw everything overboard and just start somewhere. Applying Extreme Engineering techniques can truly help you: it's a simple set of rules, but it requires an unparalleled level of discipline. Having a Scrum coach can make all the difference between a successful and a failed project.

Extreme Engineering - Building a Rube Goldberg machine with scrum

Fri, 12/12/2014 - 15:16

Is agile usable for things other than software development? Well, we knew that already: yes!
But creating a machine in 1 day, with 5 teams and continuously changing members, using Scrum, might be exciting!

See our report below (it's in Dutch for now).

Extreme engineering video

The End of Common-off-the-Shelf Software

Mon, 12/08/2014 - 08:12

Large Common-off-the-Shelf Software (COTS for short) packages are difficult to implement and integrate. Buying a large software package is not a good idea. Below I will explain how Agile methods and services on lightweight containers help implement minimal, focused solutions.

Given the standard [hardware | OS | app server | business logic | user interface] software stack, COTS packages include some of the app server, all of the business logic and the full user interface. Examples are packages for sales support, financial management or marketing. Large and unwieldy beasts that thrash around on your IT infrastructure, needing herds of specialists to keep them going and insisting that you install Java 1.6 and Oracle 10 on Redhat 4.2, IE 8.0 and the biggest, meanest server money can buy.

It probably all started with honorable intentions: buy over reuse over build appears to make perfect sense if you don’t look too closely. I even agree, though we might disagree on one important aspect, and that would be scale.
In the old waterfall days we were used to writing an architecture and making an inventory of business needs. Because people quickly learned that they rarely get more than one opportunity to ask for what they need, they tended to ask for a lot, cramming in as many features as they could think of. At some point in the decision process everyone realized they might as well buy something really versatile: a large software package that matches all requirements, now and in the future.

All is well.

Until the next business need pops up and the same reasoning (fix all specs up front, one shot to get it right, might as well ask a little extra, it won't hurt) leads to another package that has some overlap with the first, but not too much, so that's OK. Then the need arises to synchronize data (because of the slight overlap between the packages) and an ESB is implemented (because you might as well buy a software package, right?).

Now there are two stovepipes in your landscape glued together with a SPOF, and things are not well any more. Changing stuff means coordinating the effort of multiple teams. Testing and integrating becomes the task of a large team; no team has 'works in production' in their definition of done. "Works on my machine" is the best you may hope for, and somebody else will fix all integration problems. Oh, and the people who use this software switch between badly designed screens running in a bunch of yesteryear's browsers.

How can modern software development wisdom and architecture help?

Two trends allow us to avoid stovepipes connected by super glue: micro-services hosted on lightweight containers, and Agile methods.

Micro services on lightweight containers like Docker, or maybe Dropwizard or Spring Boot, are the end of the application server that served us so well last decade. If you can scale your application by starting a new process on a fresh VM, you don't need complex software to share resources. That means you don't really need a lot of infrastructure. You can deploy small components with negligible overhead. Key-value data stores allow you to relax constraints on data that were imposed by relational databases. A service might support two versions of an interface at the same time. Combined with REST, DNS and a load balancer this is the end of ESBs.
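
To make the "two versions of an interface" point concrete, here is a minimal sketch (hypothetical service and field names, plain Node.js, no framework) of a small service that serves both versions side by side while its clients migrate:

// versions.js - hypothetical micro service exposing two interface versions at once
var http = require('http');

var customer = { id: 42, name: 'ACME', city: 'Amsterdam' };

http.createServer(function (req, res) {
  if (req.url === '/v1/customer/42') {
    // old clients keep getting the flat v1 shape
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ id: customer.id, name: customer.name }));
  } else if (req.url === '/v2/customer/42') {
    // new clients get the richer v2 shape
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ id: customer.id, name: customer.name, address: { city: customer.city } }));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);

Once all clients have moved to /v2, the /v1 branch can simply be deleted and the process redeployed; no shared application server or ESB is involved.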

Agile promotes stable teams and budgets that are allocated to a team instead of a project. This means we don't really have to do budget calculations anymore. Because we can change direction every sprint, there is no need to ask for the world like we did in the waterfall days. That implies that we should create the smallest thing that could possibly solve the problem, instead of buying the biggest beast that will solve all our problems and some others we don't even have.

This doesn't mean we shouldn't buy software anymore. What I would love to see happening is vendors focusing on small, specialized components: a highly specialized service using state-of-the-art algorithms to assess credit risk, or a component that knows all about registering and monitoring customer service calls. That would be awesome. But no user interface, thanks; we'll be happy to create that ourselves, grabbing the data with an HTTP call and presenting it exactly like it's needed.

Testing Feature Branches Remotely with Grunt

Tue, 12/02/2014 - 17:48

At my current job we are working on multiple features simultaneously, using git feature branches. We have a Jenkins build server which we use for integration testing of the master branch, which runs about 20 jobs simultaneously for Protractor and Fitnesse tests. An individual job typically takes around 10 minutes to complete.

Our policy is to keep the master branch production ready at all times. Therefore we have a review process in place that should ensure that feature branches are only pushed to master when they can't break the application.
This all works very well as long as the feature you are working on requires only one or two integration test suites to test its functionality. But every once in a while you're working on something that could have effects all over the application, and you would like to run a larger number of integration test suites - of course before merging your feature branch to master.
Running all the integration suites on your local machine would take way too much time. And Jenkins is configured to run all its suites against the master branch. So what to do?

The Solution

In this article I'm going to show a solution that we developed for this problem, which lets us start multiple remote Jenkins jobs on the branch that we are working on. This way we can continue working on our local machine while Jenkins is running integration tests on the build server.
Most of our integration suites run against our frontend modules, and for those modules we use grunt as our build tool.
Therefore the most practical step was to extend grunt with a task for starting the integration tests on Jenkins: we'd like to type 'grunt jenkins' and then grunt should figure out which branch we have checked out, send that information to Jenkins, and start all the integration suites.
To accomplish that we need to take the following steps:

  • Have some Jenkins integration test suites which can take a git branch as a parameter
  • Create a custom grunt task called 'jenkins'
  • Let the grunt jenkins task figure out which branch we have checked out
  • Let the grunt jenkins task start a bunch of remote Jenkins jobs with the branch as a parameter

The Parameterized Jenkins job

Jenkins offers the feature to configure your build with a parameter. Here is how we do it:
In the configuration of a Jenkins job there's a little checkbox saying 'the build is parameterized'. Upon checking it, you can enter a parameter name, which will be available in your Jenkins build script.
We'll add a parameter called BRANCH, like in the screenshot below:

Jenkins job Parameter

Then in our Jenkins build script, we can check if the parameter is set, and if this is the case, check out the branch. It will look something like this:

git fetch
if [[ -n "$BRANCH" ]]; then
  git checkout -f $BRANCH
  git pull
else
  git checkout -f ${PROMOTED_GIT_COMMIT-"origin/master"}
fi

What's nice about our parameterized build job is that we can invoke it via a REST call and include our parameter as a query parameter. I'll show that later on.

Our custom 'jenkins' Grunt task

In our grunt.js configuration file, we can load custom tasks. The following snippet loads all files in the conf/grunt/tasks folder.

 grunt.loadTasks('conf/grunt/tasks');

In our tasks folder we create a jenkins.js file, containing our custom jenkins task.
The next thing to do is to retrieve the name of the branch which we have checked out on our machine. There's a grunt plugin called 'gitinfo' which will help us with that.
When the gitinfo plugin is invoked it will add a section to the grunt configuration which contains, amongst others, the name of our current local branch:

module.exports = function (grunt) {
  grunt.registerTask('jenkins', ['gitinfo', 'build-branch']);
  
  grunt.registerTask('build-branch', function () {
    var git = grunt.config().gitinfo;
    grunt.log.ok('Building branch: ' + git.local.branch.current.name);


And now we can start our parameterized job with the correct value for the branch parameter, like this:

    var request = require('request');

    var jenkinsUser = 'your username';
    var jenkinsPassword = 'your password';
    var jenkinsHost = 'your jenkins host';
    var job = 'my-parameterized-integration-suite'; 

    var url = 'http://' + jenkinsUser + ':' + jenkinsPassword + '@' + jenkinsHost + ':8080/job/' + job + '/buildWithParameters?BRANCH=' + git.local.branch.current.name + '&delay=0sec';

      request({
        url: url,
        method: 'POST'
      },
      jobFinished(job));
    });

First we acquire a reference to the 'request' package. This is a simple Node package that lets you perform HTTP requests.
We then build the REST URL; to connect to Jenkins we need to supply our Jenkins username and password.
And finally we post a request to the REST endpoint of Jenkins, which will start our job. We supply a callback called 'jobFinished'.

Putting it all together: starting multiple jobs

With these steps in place, we have a new grunt task which we can invoke with 'grunt jenkins' from the command line, and which will start a Jenkins job on the feature branch that we have checked out locally.
But this will only be useful if our grunt jenkins task is able to start not just one job, but a bunch of them.
Here is the full source code of the jenkins.js file. It has a (hardcoded) array of jobs, starts all of them and keeps track of how many have finished:

module.exports = function (grunt) {

  grunt.registerTask('jenkins', ['gitinfo', 'build-branch']);

  grunt.registerTask('build-branch', function () {
    var request = require('request');

    var jenkinsUser = 'your username';
    var jenkinsPassword = 'your password';
    var jenkinsHost = 'your jenkins host';
  
    var jobs = [
      'my-parameterized-integration-suite-1',
      'my-parameterized-integration-suite-2',
      'my-parameterized-integration-suite-3',
      'my-parameterized-integration-suite-4',
      'my-parameterized-integration-suite-5'
    ];
    var git = grunt.config().gitinfo;
    var done = this.async();
    var jobCounter = 0;

    grunt.log.writeln();
    grunt.log.ok('Building branch: ' + git.local.branch.current.name);
    grunt.log.writeln();

    function jobFinished (job) {
      return function (error, response, body) {
        jobCounter++;
        grunt.log.ok('[' + jobCounter + '/' + jobs.length + '] Started: ' + job);

        if (error) {
          grunt.log.error('Error: ' + error + (response ? ', status: ' + response.statusCode : ''));
        } else if (response.statusCode === 301) {
          grunt.log.writeln('See: ' + response.headers.location);
        }

        if (body) {
          grunt.log.writeln(body);
        }

        if (jobCounter === jobs.length) {
          grunt.log.ok();
          done();
        }
      };
    }

    jobs.forEach(function (job, i) {
      var url = 'http://' + jenkinsUser + ':' + jenkinsPassword + '@' + jenkinsHost + ':8080/job/' + job + '/buildWithParameters?BRANCH=' + git.local.branch.current.name + '&delay=0sec';
      grunt.log.ok('[' + (i + 1) + '/' + jobs.length + '] Starting: ' + job);

      request({
        url: url,
        method: 'POST'
      },
      jobFinished(job));
    });

    grunt.log.ok();

  });
};

And here's the console output:

$ grunt jenkins
Running "gitinfo" task

Running "build-branch" task

>> Building branch: my-feature-branch

>> [1/5] Starting: my-parameterized-integration-suite-1
>> [2/5] Starting: my-parameterized-integration-suite-2
>> [3/5] Starting: my-parameterized-integration-suite-3
>> [4/5] Starting: my-parameterized-integration-suite-4
>> [5/5] Starting: my-parameterized-integration-suite-5
OK
>> [1/5] Started: my-parameterized-integration-suite-1
>> [2/5] Started: my-parameterized-integration-suite-2
>> [3/5] Started: my-parameterized-integration-suite-3
>> [4/5] Started: my-parameterized-integration-suite-4
>> [5/5] Started: my-parameterized-integration-suite-5
OK

Done, without errors.

About snowmen and mathematical proof why agile works

Mon, 12/01/2014 - 16:05

Last week I had an interesting course by Roger Sessions on Snowman Architecture. The perishable nature of snowmen under any serious form of pressure fortunately does not apply to his architecture principles, but being an agile fundamentalist I noticed some interesting patterns in the math underlying the Snowman Architecture that are well rooted in agile practices. Understanding these principles may give facts to feed your gut feeling about these philosophies and provide mathematical proof as to why Agile works.

Complexity

"What has philosophy got to do with measuring anything? It's the mathematicians you have to trust, and they measure the skies like we measure a field." - Galileo Galilei, Concerning the New Star (1606).

In his book "Facts and Fallacies of Software Engineering", Robert Glass implied that when the functionality of a system increases by 25%, its complexity effectively doubles. So in formula form:

complexity ≈ functionality ^ 3.11

(The exponent follows from 1.25^x = 2, i.e. x = log 2 / log 1.25 ≈ 3.11.)

This hypothesis is supported by empirical evidence, and it also explains why planning poker that focuses on the complexity of the implementation, rather than on the functionality delivered, is a more accurate estimator of what a team can deliver in a sprint.

Basically the smaller you can make the functionality the better, and that is better to the power 3 for you! Once you start making functionality smaller, you will find that your awesome small functionality needs to talk to other functionalities in order to be useful for an end user. These dependencies are penalized by Roger’s model.

"An outside dependency contributes as much complexity as any other function, but does so independently of the functions."

In other words, splitting a functionality of say 4 points (74 complexity points) into two equal separate functions reduces the overall complexity to 17 complexity points. This benefit however vanishes when each module has more than 3 connections.
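
For illustration, the arithmetic behind those numbers, using the exponent of roughly 3.11 derived above:

4 ^ 3.11 ≈ 74 complexity points for a single 4-point function
2 × (2 ^ 3.11) ≈ 2 × 8.6 ≈ 17 complexity points for two 2-point functions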

An interesting observation that one can derive from this is a mathematical model that helps you to find which functions "belong" together. It stands to reason that when those functions suffer from technical interfacing, they will equally suffer from human interfaces. But how do we find which functions "belong" together, and does it matter if we get it approximately right?

Endless possibilities

"Almost right doesn't count" - Dr. Taylor, on landing a spacecraft, after a journey of 300 million miles, 50 meters from a spot with adequate sunlight for the solar panels.

Partitioning math is incredibly complex, and the main problem with the separation of functions and interfaces is that it has massive implications if you get it only "just about right". This is neatly covered by "the Bell number" (http://en.wikipedia.org/wiki/Bell_number).

These numbers grow quite quickly: a set of 2 functions can be split 2 ways, but a set of 3 already has 5 options, a set of 6 has 203, and if your application covers a mere 16 business functions, there are already more than 10 billion ways to create sets, and only a handful will give that desired low complexity number.
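
To make these counts easy to verify, here is a small illustrative snippet (plain JavaScript, using the Bell triangle; not part of Sessions' material):

// Computes the n-th Bell number using the Bell triangle.
function bell(n) {
  var row = [1];
  for (var i = 1; i <= n; i++) {
    var next = [row[row.length - 1]];   // each row starts with the last entry of the previous row
    for (var j = 0; j < row.length; j++) {
      next.push(next[j] + row[j]);      // every entry is the sum of its left and upper-left neighbours
    }
    row = next;
  }
  return row[0];
}

console.log(bell(2), bell(3), bell(6), bell(16));  // 2 5 203 10480142147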

So how can math help us find the optimal set division, the one with the lowest complexity factor?

Equivalence Relations

In order to find business functions that belong together, or at least have so much in common that the number of interfaces will outweigh the functional complexity, we can resort to the set equivalence relation (http://en.wikipedia.org/wiki/Equivalence_relation). It is both the strong and the weak point of the Snowman Architecture. It provides a genius algorithm for separating a set into the most optimal subsets (and doing so in O(n + k log k) time). The equivalence relation that Sessions proposes is as follows:

            Two business functions {a, b} have synergy if, and only if, from a business perspective {a} is not useful without {b} and vice versa.

The weak point is the subjective measurement in the equation. When applied at too high a level, everything will be required; at too low a level, it will not return any valuable business results.

In my last project we split a large eCommerce platform into the customer-facing part and the order-handling part. This worked so well that the teams started complaining that the separation had lowered their knowledge of each other's codebase, since very little functionality required coding on both subsystems.

We had effectively reduced complexity considerably, but could have taken it one step further. The order handling system was talking to a lot of other systems in order to get the order fulfilled. From a business perspective we could have separated further, reducing complexity even more. In fact, armed with Glass's Law, we'll refactor the application to make it even better than it is today.

Why bother?

Well, polynomially growing problems can't be solved with linear solutions.

Polynomial problems vs linear solutions plotted against time

As long as the complexity is below the solution curve, things will be fine. Then there is a point in time where the complexity surpasses our ability to solve it. Sure, we can add a team or a new technology, but unless we change the nature of our problem, we are only postponing the inevitable.

This is the root cause of why your user stories should not exceed the sprint boundaries. Scrum forces you to chop the functionality into smaller pieces that move the team into a phase where linear development power supersedes the complexity of the problem. In practice, in almost every case where we saw a team breaking this rule, they would end up at the "uh-oh moment" at some point in the future, at the stage where there are no neat solutions any more.

So believe in the math and divide your complexity curve into smaller chunks, where your solution capacity exceeds the problem's complexity. (As a bonus you get a happy and thriving team.)

(Edu) Scrum at XP Days Benelux: beware of the next generation

Sat, 11/29/2014 - 09:21

XP Days Benelux 2014 is over, and it was excellent.
Good sessions, an interesting mix of topics and presenters, and a wonderful atmosphere of knowledge sharing, respect and passion for Agile.

After 12 years, XP Days Benelux continues to be inspiring and surprising.

The greatest surprise for me was the participation of 12 high school students from the Valuas College in Venlo, who arrived on the second day. These youngsters did not only attend the conference, but they actually hosted a 120-minute session on Scrum at school, called EduScrum.

EduScrum

EduScrum uses the ceremonies, roles and artifacts of Scrum to help young people learn in a better way. Students work together in small teams, and thus take ownership of their own learning process. At the Valuas College, two enthusiastic Chemistry teachers introduced EduScrum in their department two years ago, and have made the switch to teaching Chemistry in this new way.

In an interactive session, we, the adults, learned from the youngsters how they work and what EduScrum brought them. They showed their (foldable!) Scrum boards, explained how their teams are formed, and what the impact was on their study results. Forcing themselves to speak English, they were open, honest, courageous and admirable.

Learnings

Doing Scrum in school has many similarities with doing Scrum at work. However, there is also a lot we can learn from the youngsters. These are my main takeaways:

- Transition is hard
It took the students some time to get used to working in the new way. At first they thought it was awkward. The transition took about… 4 lessons. That means that these youngsters were up and running with Scrum in 2 weeks (!).

- Inform your stakeholders
When the teachers introduced Scrum, they did not inform their main stakeholders, the parents. Some parents, therefore, were quite worried about this strange thing happening at school. However, after some explanation, the parents recognised that EduScrum actually helps to prepare their children for today's society, and they were happy with the process.

- Results count
In schools more than anywhere else, your results (grades) count. EduScrum students are graded as a team as well as individually. When they transitioned to Scrum, the students experienced a drop in their grades at first, maybe due to the greater freedom and responsibility they had to get used to. Soon after, their grades got better.

- Compliance is important
Schools and teachers have to comply with many rules and regulations. The knowledge that needs to get acquired each year is quite fixed. However, with EduScrum the students decide how they will acquire that knowledge.

- Scrum teaches you to cooperate
Not surprisingly, all students said that, next to Chemistry, they now learned to cooperate and communicate better. Because of this teamwork, most students like to work this way. However, this is also the reason a few classmates would like to return to the old, individual, style of learning. Teamwork does not suit everyone.

- Having fun helps you to work better
School (and work) should not be boring, and we work better together when we have some fun too. Therefore, next to a Definition of Done, the student teams also have a Definition of Fun.  :-)

Next generation Scrum

At the conference, the youngsters were surprised to see that so many companies that they know personally (like Bol.com) are actually doing Scrum. 'I thought this was just something I learned to do in school,' one girl said. 'But now I see that it is being used in so many companies, and I will actually be able to use it after school, too.'

Beware of these youngsters. When this generation enters the work force, they will embrace Scrum as the natural way of working. In fact, this generation is going to take Scrum to the next level.

How to implement validation callbacks in AngularJS 1.3

Wed, 11/26/2014 - 09:21

In my current project we've recently switched from AngularJS 1.2 to 1.3. Except for a few breaking changes the upgrade was quite trivial. However, after diving into the changelog we noticed that the way AngularJS handles form validation changed drastically. Since we're working on a greenfield application we decided it was worth the effort to rewrite the validation logic. The main argument for this was that the validation we had could be drastically simplified by using the new validation pipeline.

This article is aimed at AngularJS developers interested in the new validation pipeline offered by AngularJS 1.3. Apart from a small introduction, this article will not cover all the different aspects of form validation. I will showcase 2 different cases for which we had to come up with custom solutions:

  • Displaying additional information after successful validation
  • Validating equality of multiple password fields

What has changed?

Whereas in AngularJS 1.2 we could use $parsers and $formatters for form validation, AngularJS 1.3 introduces the concept of $validators and $asyncValidators. As we can deduce from the names, the latter is for server-side validations using HTTP calls and the former is for validations on the client-side.

All validators are directives that are registered on a specific ngModel by adding them to either ngModel.$validators or ngModel.$asyncValidators. When validating, all $validators run before the $asyncValidators are executed.
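
As a minimal illustration of this registration pattern (the directive and rule here are hypothetical, not part of our project), a synchronous validator simply returns a boolean:

'use strict';

// Illustrative only: flags the field as invalid when the value contains whitespace.
angular.module('angularValidators')
  .directive('noWhitespace', function () {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        ngModel.$validators.noWhitespace = function (modelValue, viewValue) {
          var value = modelValue || viewValue;
          // An empty value is considered valid here; that check is left to the 'required' validator.
          return !value || !/\s/.test(value);
        };
      }
    };
  });

A failing check ends up on the ngModel as $error.noWhitespace, which is exactly what ngMessages picks up, as shown further on.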

AngularJS 1.3 also utilises the HTML5 validation API wherever possible. For each of the HTML5 validation attributes, AngularJS offers a directive. For example, for minlength, add ng-minlength to your input field to incorporate the minimum length check.

When it comes down to showing error messages we can rely on the $error property on a ngModel. Whenever one of the validators fail it will add the name of the validator to the $error property. Using the brand new ngMessages module we can then easily display specific error messaging depending on the type of validator.

Displaying additional information after successful validation

Implementing the new validation pipeline came with a few challenges. The biggest one was that we had quite a few use cases in which, after successfully validating a field, we wanted to display some data returned by the web service. Below I will discuss how we've solved this.

The directive itself is very simple and merely does the following:

  1. Clear the data displayed next to the field. If the user has already entered text and the validation succeeded, the data from the validation call will be shown next to the input field. If the user were then to change the input's value and it would not validate correctly, the data displayed next to the field would be stale. To prevent this we first clear the data displayed next to the field at the start of the validation.
  2. Validate the content against the web service using the HelloResource. Besides returning the promise the resource gives us, we invoke the callback() method when the promise is successfully resolved.
  3. Display the data returned by the HTTP call using a callback method.
'use strict';

angular.module('angularValidators')
  .directive('validatorWithCallback', function (HelloResource) {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        function callback(response) {}

        ngModel.$asyncValidators.validateWithCallback = function (modelValue, viewValue) {
          callback('');

          var value = modelValue || viewValue;

          return HelloResource.get({name: value}).$promise.then(function (response) {
            callback(response);
          });
        };
      }
    };
});

We can add the validator to our input by adding the validator-with-callback attribute to the input which we would like to validate.

<form name="form">
    <input type="text" name="name" ng-model="name" required validator-with-callback />
</form>

Implementing the clear and callback

Because this directive should be independent from any specific ngModel we have to find a way to pass the ngModel to the directive. To accomplish this we add a value to the validator-with-callback attribute. We also change the ng-model attribute value to name.value. Why this is required will be explained later on. To finish we also add a div that will only display when the form element is valid and we will set it to display the value of name.detail.

<form name="form">
    <input type="text" name="name" ng-model="name.value" required validator-with-callback="name" />
    <div ng-if="form.name.$valid">{{name.detail}}</div>
</form>

The $eval method on the scope can be used to resolve the object from the attribute's value. Displaying the data won't work if we simply supply and overwrite an arbitrary scoped object (e.g. $scope.data). We have to add a scoped object name which contains 2 properties: value and detail. Note: the naming is not important.

We will add a controller to our HTML file which will be responsible for setting the default value for our scoped object name. As shown in our HTML view above, the value property will be used for storing the value of the field. The detail property will be used to store the response from the web service call and display it to the user.

'use strict';

angular.module('angularValidators')
  .controller('ValidationController', function ($scope) {
    $scope.name = {
      value: undefined,
      detail: ''
    };
});

The last thing is changing the directive implementation to retrieve the target object and implement the clear and callback methods. We do this by retrieving the value from the validator-with-callback attribute by calling scope.$eval(attrs.validatorWithCallback). When we have the target object we can implement the callback method.

'use strict';

angular.module('angularValidators')
  .directive('validatorWithCallback', function (HelloResource) {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        var target = scope.$eval(attrs.validatorWithCallback);

        function callback(response) {
          if (_.isUndefined(target)) {
            return;
          }

          target.detail = response.msg;
        }
        ngModel.$asyncValidators.validateWithCallback = function (modelValue, viewValue) {
          ...omitted...
        };
      }
    };
});

This is all that's needed to create a directive with a callback method. This callback method uses the data returned by the web service to populate the detail property, but it can of course be adjusted to do anything we desire.

Validating equality of multiple password fields

The second example we would like to show is a synchronous validator that validates the values of 2 different fields. The use case we had for this was 2 password fields that were required to be equal.

Requirements
  • (In)validate both fields when the user changes the value of either one of them and they are (not) equal
  • The validator successfully validates the field if the second input has not been touched or the value of the second input is empty.
  • Only display 1 error message, and only when no other validators (required & min-length) are invalid

Implementation

We start off by creating the 2 different form elements, both with a required and an ng-minlength validator. We also add a button to the form to show how enabling/disabling the button depending on the validity of the form works.

Both password fields also have the validate-must-equal-to="other_field_name" attribute. This indicates that we wish to validate the value of this field against the field defined by the attribute. We also add a form-name="form" attribute to pass the name of the form to our directive. This is needed for invalidating the second input on our form without hardcoding the form name inside the directive, thus making the directive fully independent of form and field names.

To conclude, we also conditionally show or hide the containers that display the errors. For the errors related to the first input field we also specify that they should not be displayed if the notEqualTo error has been set by our directive. This ensures that no empty div will be displayed if our validator invalidates the first field.

<form name="form">
    <input type="password" name="password" ng-model="password" required ng-minlength="8" validate-must-equal-to="password2" form-name="form" />
    <div ng-messages="form.password.$error" ng-if="form.password.$touched && form.password.$invalid && !form.password.$error.notEqualTo">
        <div ng-message="required">This field is required</div>
        <div ng-message="minlength">Your password must be at least 8 characters long</div>
    </div>
    <br />
    <input type="password" name="password2" ng-model="password2" required ng-minlength="8" validate-must-equal-to="password" form-name="form" />
    <div ng-messages="form.password2.$error" ng-if="form.password2.$touched && form.password2.$invalid">
        <div ng-message="required">This field is required</div>
        <div ng-message="minlength">Your password must be at least 8 characters long</div>
    </div>

    <p>The submit button will only be enabled when the entire form is valid</p>
    <button ng-disabled="form.$invalid">Submit</button>
</form>

The validator itself is again very compact. Basically all we want is to retrieve the value from the input and pass it to an isEqualToOther method which returns a boolean. At the beginning of the link method we also check whether the form-name attribute is provided. If not, we throw an error. We do this to communicate to any developer reusing this directive that it requires the form name to function correctly. Unfortunately, at this moment there is no other way to communicate the additional mandatory attribute.

'use strict';

angular.module('angularValidators')
  .directive('validateMustEqualTo', function () {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        if (_.isUndefined(attrs.formName)) {
          throw 'For this directive to function correctly you need to supply the form-name attribute';
        }

        function isEqualToOther(value) {
          ...omitted...
        }

        ngModel.$validators.notEqualTo = function (modelValue, viewValue) {
          var value = modelValue || viewValue;

          return isEqualToOther(value);
        };
      }
    };
  });

The isEqualToOther method itself does the following:

  • Retrieve the other input form element
  • Throw an error if it cannot be found which again means this directive won't function as intended
  • Retrieve the value from the other input and validate the field if the input has not been touched or the value is empty
  • Compare both values
  • Set the validity of the other field depending on the comparison
  • Return the comparison to (in)validate the field this directive is linked to
'use strict';

angular.module('angularValidators')
  .directive('validateMustEqualTo', function () {
    return {
      require: 'ngModel',
      link: function (scope, element, attrs, ngModel) {
        ...omitted...

        function isEqualToOther(value) {
          var otherInput = scope[attrs.formName][attrs.validateMustEqualTo];
          if (_.isUndefined(otherInput)) {
            throw 'Cannot retrieve the second field to compare with from the scope';
          }

          var otherValue = otherInput.$modelValue || otherInput.$viewValue;
          if (otherInput.$untouched || _.isEmpty(otherValue)) {
            return true;
          }

          var isEqual = (value === otherValue);

          otherInput.$setValidity('notEqualTo', isEqual);

          return isEqual;
        }

        ngModel.$validators.notEqualTo = function (modelValue, viewValue) {
          ...omitted...
        };
      }
    };
  });

Alternative solution

An alternative solution to the validate-must-equal-to directive could be to implement a directive that encapsulates both password fields and has a scoped function that handles validation, using ng-blur on the fields or a $watch on both properties. However, this approach does not use the out-of-the-box validation pipeline, and we would thus have to extend the logic in the form button's ng-disabled to allow the user to submit the form.
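
For completeness, a rough sketch of that alternative (hypothetical controller name, using $watchGroup, which is available since AngularJS 1.3); note that it bypasses the validation pipeline entirely:

'use strict';

// Hypothetical alternative: no custom validator, just a watch on both password models.
angular.module('angularValidators')
  .controller('PasswordFormController', function ($scope) {
    $scope.password = '';
    $scope.password2 = '';
    $scope.passwordsMatch = true;

    // Fires whenever either password changes.
    $scope.$watchGroup(['password', 'password2'], function (values) {
      var first = values[0];
      var second = values[1];
      // An empty second field is treated as "nothing to compare yet".
      $scope.passwordsMatch = !second || first === second;
    });
  });

The submit button would then need something like ng-disabled="form.$invalid || !passwordsMatch", which is exactly the extra logic referred to above.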

Conclusion

AngularJS 1.3 introduces a new validation pipeline which is incredibly easy to use. However, when faced with more advanced validation rules it becomes clear that certain features (like a callback mechanism) are lacking, and we had to find custom solutions for those. In this article we've shown you 2 different validation cases which extend the standard pipeline.

Demo application

I've set up a stand-alone demo application which can be cloned from GitHub. The demo includes both validators and Karma tests that cover all the different scenarios. Please feel free to use and modify this code as you feel appropriate.

CITCON Europe 2014 wrap-up

Mon, 11/24/2014 - 19:33

On the 19th and 20th of September CITCON (pronounced "kit-con") took place in Zagreb, Croatia. CITCON is dedicated to continuous integration and testing. It brings together some of the most interesting people of the European testing and continuous integration community. These people also determine the topics of the conference.

They can do this because CITCON is an Open Space conference. If you're not familiar with the concept of Open Space, check out Wikipedia. On Friday evening, attendees can pitch their proposals. Through dot voting and (constant) shuffling of the schedule, the attendees create their conference program.

In this post we'll wrap up a few topics that were discussed.

Polytesting

Peter Zsoldos (@zsepi) went into his most recent brain-spin: polytesting. If I have a set of requirements, is it feasible to apply those requirements at different levels of my application; say, component, integration and UI level? This sounds very appealing because you can perform ATDD at different levels.
This approach is particularly interesting because it has the potential to keep you focused on the required functionality all the way. You'll need good, concrete requirements for this to work.

Microservices

Microservices are a hot topic nowadays. The promise of small, isolated units with clear interfaces is tempting. There are generally two types of architectures that can be applied. The most common one is where there is no central entity, and services communicate to each other directly.

Douglas Squirrel (@douglasquirrel) explained an alternative architecture by using a central pub-sub "database" to which each service is allowed to publish "facts". Douglas deliberately used the term facts to describe single items that are considered true at a specific point in time ("events" is too generic a term).

The second model comes closer to mechanisms such as event sourcing (or even ESBs if you take it to the extreme). One of the advantages of this approach is that, because facts are stored, it's possible to construct new functionality based on existing facts. For example, you could use this functionality in a game to create leaderboards and, at a later stage, create leaderboards per continent, country, or whatever seems fitting.

Unit testing

Arjan Molenaar introduced a flaming hot topic this year: "unit testing is a waste". Inspired by recent discussions of DHH, Martin Fowler, and Kent Beck, Arjan tried to find out the opinions of the CITCON crowd. Most of the people contributing to the discussion must have been working in consultancy, because the main conclusion was "It depends".

Whether unit testing is worth the effort mainly depends on the goals that people try to achieve when writing their unit tests. Some people write them from a TDD perspective. They use tests to guide themselves through development cycles, making sure they haven't made little errors. If this helps you, then please keep doing it! If it does not really help, well ...

Other people write unit tests from a regression perspective, or at least maintain them for regression testing. This part caused the most discussion. How useful are unit tests for regression testing purposes? Are you really catching regression if you isolate a single unit?

The growing interest in microservices also sheds new light on this discussion. In the future, when microservices are widely adopted, we will be working with much smaller codebases. They might be so small and clear that unit tests are no longer required to guide us through the development process.

CI scaling

Another trending topic was scaling CI systems. It was good to see that the ideas we have at Xebia were consistent with the ideas we heard at CITCON. First off, the solution for everything (and world peace, it seems) is microservices. Unfortunately, some of us, for the time being, must deal with monolithic codebases. Luckily there are still options for your growing CI system, even though for now it remains one big chunk of code.

The staged pipeline: you test the things most likely to break first. Basically, you break your test suite up into multiple test suites and run them at separate stages in the pipeline.

But how do you determine what is most likely to break and what to test where? Tests that are most likely to break are those that are closely linked to the code changes, so run them first. Also, determine high-risk areas for your application. These areas can be identified based on trends (in failing tests) or simply through human analysis. To determine where to run the different test suites is mainly a matter of speed versus confidence. You want fast feedback so you don't want to push all your tests to the end of the pipeline. But you also don't want to wait forever before you know you can move on to the next thing.

Beer brewing for process refinement

Who isn't interested in beer brewing? Tom Denley (@scarytom) proposed a session on home brewing and its analogy with software delivery. Because Arjan is a homebrewer himself, this seemed like an obvious session for him.

In addition to Tom explaining the process of brewing, we discussed how we got into brewing. In both cases, the first brew was made from a can of hopped malt syrup, with yeast added, and a beer produced from there. For your second beer, you replace the can of syrup with malt extract powder and dark malt (for flavour). At a later stage, you can replace the malt extract with ground malt.

What we basically do is start with the end in mind. If you're starting with continuous delivery, it is considered good practice to do the same: get your application deployed in production as soon as possible and optimise your process from your deployed app back to source code.

Again, it was a good conference with some nice take-aways. Next year's episode will most likely take place in Finland. The year after that... The Netherlands?

Testing cheatsheet

Mon, 11/24/2014 - 19:00

Sometimes it is not clear to everybody how unit tests relate to e2e tests. This cheatsheet I created describes on one page:

  1. The different definitions
  2. Different structures of the tests
  3. The importance of unit tests
  4. The importance of e2e tests
  5. External versus internal quality
  6. E2E and unit tests living next to each other

Feel free to download and use it in your project if you feel there is a confusion of tongues between unit and e2e tests.

Download: TestingCheatSheet


Ready, Test, Go!

Tue, 11/18/2014 - 10:42

The full potential of many an agile organization is hardly ever reached. Many teams find themselves redefining user stories although they have already been committed to as part of the sprint. The 'ready phase', meant to get user stories clear and sufficiently detailed so they can be implemented, is missed. How will each user story result in high quality features that deliver business value? The 'Definition of Ready' is lacking one important entry: "Automated tests are available." Ensuring that you have testable, and hence automated, acceptance criteria before committing to user stories in a sprint allows you to retain focus during the sprint. We define this as: Ready, Test, Go!

Ready

Behaviour-Driven Development has proven to be a fine technique to write automated acceptance criteria. Using the Gherkin format (given, when, then), examples can be specified that need to be supported by the system once the user story is completed. When a sufficiently detailed list of examples is available, all Scrum stakeholders agree on the specification. A common understanding is achieved that when the story is implemented, we are one step closer to building the right thing.

Test

The specification itself becomes executable: at any moment in time, the gap between the desired and implemented functionality becomes visible. In other words, this automated acceptance test should be run continuously. The first time, it happily fails. Next, implementation can start. This, following Test-Driven Development principles, starts with writing (also failing) unit tests. Then, development of the production code starts. When the unit tests and the acceptance tests for a story are passing, the next user stories can be picked up: stories whose tests still happily fail. Tests thus act as a safeguard to continuously verify that the team is building the thing right. Later, the automated tests (acceptance tests and unit tests) serve as a safety net for regression testing during subsequent sprints.

Go!

That's simple: release your software to production. Ensure that other testing activities (performance tests, chain tests, etc.) are automated as much as possible and performed as part of the sprint.

The (Agile) Test Automation Engineer

In order to facilitate or boost this way of working, the role of the test automation engineer is key. The test automation engineer defines the test architecture and facilitates the infrastructure needed to run tests often and fast. He interacts with developers to co-develop fixtures, to understand how the production code is built, and to decide upon the structure and granularity of the test code.

Apart from their valuable and unique analytical skills, relevant testers grow their technical skills. If it cannot be automated, it cannot be checked, so one might question whether the user story is ready to be committed to in a sprint. The test automation engineer helps the Scrum teams to identify when they are 'ready to test' and urges the product owner and business to specify requirements, at least for the next sprint ahead.

So: ready, test, go!!

The neverending waveform of the full-stack developer

Tue, 11/11/2014 - 16:50

There was an article on TechCrunch a couple of days ago, titled The Rise And Fall Of The Full Stack Developer, which was linked on our internal mailing list. I read it, and I just couldn't figure out why the title is about "the fall" of the full-stack developer (and I said as much on the mailing list, after which I was encouraged to write this blog post). In this post I'll try to explain why it's not the end, but just a stage in a recurring cycle.

What I read was a waveform - the article describes that first you had low-level assembly, which is specialised but still pretty straightforward since it's just one platform and language. Then things started to get more complicated, with the larger web applications involving lots of experts in various fields (Java, database management, server and JVM management, to name a few from the article).

In or around 2005, there was a bit of a revolution going on. Internet access (and high-speed internet for that matter) became accessible to all, and with it, cheap webhosting - most notably, there was a boom in PHP development. What the article highlights is that that was an era where single or small teams of developers could build a web application from scratch, without needing to lug around all of that expert knowledge - effectively, a full-stack developer (storage, webhosting, a programming language (PHP, Python, Ruby, etc), HTML, CSS, Javascript, the whole stack).

But, as the article states, that isn't the full stack of today - added to that are things like machine learning, mobile development, more advanced and less traditional programming languages, frameworks and persistence solutions, etcetera. The conclusion of the author is that it's too much for one full-stack developer to take, that there's no way a full-stack developer can be an expert in all of those fields - and he basically renames the person that knows a bit of all of those technologies a full-stack integrator, and declares the full-stack developer dead.

But I didn't see that. What I saw was just history repeating - from simple and manageable, to complex and too much for one person to handle and know all about. From assembly and PHP, to Java enterprise software and Docker-contained AWS instances running a MongoDB and Scala REST interface to power your Android, iOS and single-page AngularJS-powered webapps (to name but a few buzzwords).

I'm sure the next 'simple' wave is already around the corner - maybe it's already here, lurking somewhere until some more influential guy goes "Hey guys, let's go back to the simpler, good old times where you wrote code and it Just Worked™!", and a wave of developers will follow - most likely a combination of younger people, new to the software development world, and older people that have been stuck in an overly complicated software development rut for far too long.

As for not being able to keep up with it all, this is why it's probably better to assemble teams out of T-shaped developers - full-stack developers with a broad knowledge set (and more importantly, broad interests and curiosity), but with at least one specialisation. This was extended within Xebia to pi-shaped developers, then taken to the extreme with comb-shaped developers, but the latter is just a generalist like the one mentioned in the article - a jack of all trades, master of none. And it's important to realise, as a developer, that it's OK to not know everything, to miss some update, to not learn that fancy new programming language built by people disgruntled with Java's slow development - there are simply too many updates today, and trying to keep up with all of them in the most extreme forms can lead to serious problems. But the rapid development is a good thing, I might add; I don't think the software development world has ever moved as fast as it does today, and it's not nearly the end yet as long as more than half of the world's population doesn't have access to the internet and its boundless resources.

I think increasing complexity is just part of software development. I mean, look at Facebook and Twitter - they were started using those simple, accessible tools like PHP and Ruby on Rails, which allowed them to get a flying start and adapt quickly to their huge growth (with the former desperately clinging to that decision, despite lots of people telling them they should use a different technology), but despite that they still turned into some of the most complicated pieces of software ever. The important bit is being able to manage all that, not so much the decision between a simple or complicated toolset - or having all of your developers know everything. Similarly, the current trend is microservices - again back to simple, one-purpose miniature applications that do one thing and do it right. But those will just end up being the next generation of huge, complicated software systems, given enough time. As a colleague stated, "Microservices are just hipster SOA".

As the saying goes, the more things change, the more they stay the same. The full-stack developer isn't dead and won't be going away any time soon, he'll just go by a different name depending on the times - J2EE Certified Software Engineer, full-stack developer, multidisciplinary expert, T-shaped developer, Xebian, chief of technology evangelisation, or whatever else the HR or marketing departments of the near future will come up with.

Mutation Testing: How Good are your Unit Tests?

Thu, 11/06/2014 - 20:56

You write unit tests for every piece of code you deliver. Your test coverage is close to 100%. So when the time comes to make some small changes to the existing code, you feel safe and confident that your test suite will protect you against possible mistakes.
You make your changes, and all your tests still pass. You should be fairly confident now that you can commit your new code without breaking anything, right?

Well, maybe not. Maybe your unit tests were fooling you. Sure they covered every line of your code, but they could have performed the wrong assertions.
In this post I will introduce mutation testing. Mutation testing can help you find omissions in your unit tests.

Let's begin with a small example:

package com.xebia;

public class NameParser {
  public Person findPersonWithLastName(String[] names, String lastNameToFind) {
    Person result = null;
    for(int i=0; i <= names.length; i++) { // bug 1
      String[] parts = names[i].split(" ");
      String firstName = parts[0];
      String lastName = parts[1];
      if(lastName.equals(lastNameToFind)) {
        result = new Person(firstName, lastName);
        break;
      }
    }
    return result;
  }
}

NameParser takes a list of strings which consist of a first name and a last name, searches for the entry with a given last name, instantiates a Person object out of it and returns it.
Here is the Person class:

package com.xebia;

public class Person {
  private final String firstName;
  private final String lastName;

  public Person(String firstName, String lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }

  public String getFirstName() {
    return firstName;
  }

  public String getLastName() {
    return firstName; // bug 2
  }
}

You can see that there are two bugs in the code. The first one is in the loop in NameParser, which loops past the end of the names array. The second one is in Person, which mistakenly returns firstName in its getLastName method.

NameParser has a unit test:

package com.xebia;

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class NameParserTest {
  private NameParser nameParser;
  private String[] names;

  @Before
  public void setUp() {
    nameParser = new NameParser();
    names = new String[]{"Mike Jones", "John Doe"};
  }

  @Test
  public void shouldFindPersonByLastName() {
    Person person = nameParser.findPersonWithLastName(names, "Doe");
    String firstName = person.getFirstName();
    String lastName = person.getLastName();
    assertEquals("John", firstName);
  }
}

The unit test covers the Person and NameParser code 100% and succeeds!
It doesn't find the bug in Person.getLastName because it simply forgets to do an assertion on it. And it doesn't find the bug in the loop in NameParser because it doesn't test the case where the names list does not contain the given last name; so the loop always terminates before it has a chance to throw an IndexOutOfBoundsException.
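
For illustration, two extra tests along these lines, added to NameParserTest, would have exposed both bugs (the test names and the "Smith" value are mine, not part of the original test class):

  @Test
  public void shouldReturnLastName() {
    Person person = nameParser.findPersonWithLastName(names, "Doe");
    // fails because of bug 2: getLastName returns the first name
    assertEquals("Doe", person.getLastName());
  }

  @Test
  public void shouldReturnNullWhenLastNameIsNotFound() {
    // bug 1 makes the loop run past the end of the array, so this test
    // fails with an ArrayIndexOutOfBoundsException
    // (requires: import static org.junit.Assert.assertNull;)
    assertNull(nameParser.findPersonWithLastName(names, "Smith"));
  }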

Especially the last case is easy to overlook, so it would be nice if there were a tool that could detect these kinds of problems.
And there is one: actually, there are a couple. For this post I have chosen PIT; at the end you'll find links to some alternatives.

But first: what is mutation testing?

A mutation test will make a small change to your code and then run the unit test(s). Such a change is called a 'mutant'. If a change can be made and the unit tests still succeed, it will generate a warning saying that the mutant 'survived'.
The test framework will try to apply a number of predefined mutants at every point in your code where they are applicable. The higher the percentage of the mutants that get killed by your unit tests, the better the quality of your test suite.
Examples of mutants are: negating a condition in an If statement, changing a conditional boundary in a For loop, or throwing an exception at the end of a method.
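
To make that concrete, here is an illustrative snippet (not taken from the article's example) showing what a couple of typical mutants could look like; the tool applies each change separately and re-runs the tests against the mutated code:

public class Discount {
  // original production code
  public double priceWithDiscount(double price, boolean isMember) {
    if (isMember) {          // mutant A: negate the condition -> if (!isMember)
      return price * 0.9;    // mutant B: replace the return value -> return 0.0;
    }
    return price;
  }
}

A test suite that never asserts the member price would kill mutant A (the non-member case suddenly gets a discount) but let mutant B survive, revealing that the discount calculation is never actually verified.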

Putting NameParser's testcase to the test with PIT

PIT stands for Parallel Isolated Test, which is what the project was originally meant for. But mutation testing turned out to be a much more interesting goal, and it required much of the same plumbing.

PIT integrates with JUnit or TestNG and can be configured with Maven, Gradle and others. Or it can be used directly as a plugin in Eclipse or IntelliJ.
I'm choosing the last option: the IntelliJ plugin. The setup is easy: just install PITest from the plugin manager and you are ready to go. Once you're done, you'll find a new launch configuration option in the 'edit configurations' menu called PIT.

PIT launch configuration

You have to specify the classes where PIT will make its mutations under 'Target classes'.
When we run the mutation test, PIT creates an HTML report with the results for every class.
Here are the results for the NameParser class:

NameParser mutation testing results

As you can read under 'Mutations', PIT has been able to apply five code mutations to the NameParser class. Four of them resulted in a failing NameParserTest, which is exactly what we'd like to see.
But one of them did not: when the condition boundary in line 6, the loop constraint, was changed, NameParserTest still succeeded!
PIT changes loop constraints with a predefined algorithm; in this case, when the loop constraint was i <= names.length, it changed the '<=' into a '<'. Actually this accidentally corrected the bug in NameParser, and of course that didn't break the unit test.
So PIT found an omission in our unit test here, and it turned out that this omission even left a bug undetected!

Note that this last point doesn't always have to be the case. It could be that, for the correct behavior of your class, there is some room for certain conditions to change.
In the case of NameParser for instance, it could have been a requirement that the names list always contains an entry with the last name that is to be found. In that case the behavior for a missing last name would be unspecified, and an IndexOutOfBoundsException would have been as good a result as anything else.
So PIT can only find strong indications of omissions in your unit tests; they don't necessarily have to be actual omissions.

And here are the results for the Person class:

Person mutation test results

PIT was able to do two mutations in the Person class; one in every getter method. Both times it replaced the return value with null. And as expected, the mutation in the getLastName method went undetected by our unit test.
So PIT found the second omission as well.

Conclusion

In this case, mutation testing would have helped us a lot. But there can still be cases where possible bugs go unnoticed. In our code for example, there is no test in NameParserTest that verifies the behavior when an entry in the names list does not contain both a first name and a last name. But PIT didn't find this omission.
Still it might make good sense to integrate mutation testing in your build process. PIT can be configured to break your Maven build if too many warnings are found.
And there's a lot more that can be configured as well, but for that I recommend a visit to the PIT website at www.pitest.org.

Alternatives

PIT is not the only mutation testing framework for Java, but it is the most popular and the one most actively maintained. Others are µJava and Jumble.
Although most mutation testing frameworks are written for Java, probably because it's so easy to dynamically change its bytecode, mutation testing is possible in some other languages as well: notable are grunt-mutation-testing for JavaScript and Mutator, a commercial framework which is available for a couple of languages.

Swift Function Currying

Thu, 11/06/2014 - 12:55

One of the lesser known features of Swift is Function Currying. When you read the Swift Language Guide you won't find anything about curried functions. Apple only describes it in their Swift Language Reference. And that's a pity, since it's a very powerful and useful feature that deserves more attention. This post will cover the basics and some scenarios in which it might be useful to use curried functions.

I assume you're already somewhat familiar with function currying since it exists in many other languages. If not, there are many articles on the Internet that explain what it is and how it works. In short: you have a function that receives one or more parameters. You then apply one or more known parameters to that function without executing it yet. After that you get a reference to a new function that will call the original function with the applied parameters.

One situation in which I find it useful to use curried functions is with completion handlers. Imagine you have a function that makes a http request and looks something like this:

func doGET(url: String, completionHandler: ([String]?, NSError?) -> ()) {
    // do a GET HTTP request and call the completion handler when receiving the response
}

This is a pretty common pattern that you see with most networking libraries as well. We can call it with some url and do a bunch of things in the completion handler:

doGET("http://someurl.com/items?all=true", completionHandler: { results, error in
    self.results = results
    self.resultLabel.text = "Got all items"
    self.tableView.reloadData()
})

The completion handler can become a lot more complex and you might want to reuse it in different places. Therefore you can extract that logic into a separate function. Luckily with Swift, functions are just closures so we can immediately pass a completion handler function to the doGET function:

func completionHandler(results: [String]?, error: NSError?) {
    self.results = results
    self.resultLabel.text = "Got all items"
    self.tableView.reloadData()
}

func getAll() {
    doGET("http://someurl.com/items?all=true", completionHandler)
}

func search(search: String) {
    doGET("http://someurl.com/items?q=" + search, completionHandler)
}

This works well, as long as the completion handler should always do exactly the same thing. But in reality, that's usually not the case. In the example above, the resultLabel will always display "Got all items". Let's change that into "Got searched items" for the search request:

func search(search: String) {
    doGET("http://someurl.com/items?q=" + search, {results, error in
        self.completionHandler(results, error: error)
        self.resultLabel.text = "Got searched items"
    })
}

This will work, but it doesn't look very nice. What we actually want is to have this dynamic behaviour in the completionHandler function. We can change the completionHandler in such a way that it accepts the text for the resultLabel as a parameter and then returns the actual completion handler as a closure.

func completionHandler(text: String) -> ([String]?, NSError?) -> () {
    return {results, error in
        self.results = results
        self.resultLabel.text = text
        self.tableView.reloadData()
    }
}

func getAll() {
    doGET("http://someurl.com/items?all=true", completionHandler("Got all items"))
}

func search(search: String) {
    doGET("http://someurl.com/items?q=" + search, completionHandler("Got searched items"))
}

And as it turns out, this is exactly what we can also do using currying. We just need to add the parameters of the actual completion handler as a second parameter group to our function:

func completionHandler(text: String)(results: [String]?, error: NSError?) {
    self.results = results
    self.resultLabel.text = text
    self.tableView.reloadData()
}

Calling this with the first text parameter will not yet execute the function. Instead it returns a new function that takes [String]? and NSError? as parameters. Once that function is called, the completionHandler function is finally executed.

You can create as many levels of this currying as you want. And you can also leave the last parameter group empty just to get a reference to the fully applied function. Let's look at another example. We have a simple function that sets the text of the resultLabel:

func setResultLabelText(text: String) {
    resultLabel.text = text
}

And for some reason, we need to call this method asynchronously. We can do that using the Grand Central Dispatch functions:

dispatch_async(dispatch_get_main_queue(), {
    self.setResultLabelText("Some text")
})

Since the dispatch_async function only accepts a closure without any parameters, we need to create an inner closure here. If the setResultLabelText was a curried function, we could fully apply it with the parameter and get a reference to a function without parameters:

func setResultLabelText(text: String)() { // now curried
    resultLabel.text = text
}

dispatch_async(dispatch_get_main_queue(), setResultLabelText("Some text"))

But you might not always have control over such functions, for example when you're using third party libraries. In that case you cannot change the original function into a curried function. Or you might not want to change it since you're already using it in many other places and you don't want to break anything. In that case we can achieve something similar by creating a function that creates the curried function for us:

// defined in global scope
func curry<T>(f: (T) -> (), arg: T)() {
    f(arg)
}

We can now use it as follows:

func setResultLabelText(text: String) {
    resultLabel.text = text
}

dispatch_async(dispatch_get_main_queue(), curry(setResultLabelText, "Some text"))

Probably in this example it would be just as easy to go with the inner closure, but being able to pass around partially applied functions is very powerful and already used in many programming languages.

Unfortunately the last example also shows a big drawback of the way currying is implemented in Swift: you cannot simply curry normal functions. It would be great to be able to curry any function that takes multiple parameters instead of having to explicitly create curried functions. Another drawback is that you can only curry in the defined order of parameters. That doesn't allow you to do reverse currying (e.g. apply only the last parameter) or apply just any parameter you want, regardless of its position. Hopefully the Swift language will evolve in this area and get more powerful currying features.

iOS localization tricks for Storyboard and NIB files

Fri, 10/31/2014 - 23:36

Localization in iOS of UI designed in Interface Builder has never been without problems. The right way of doing localization is by having multiple Strings files. Duplicating Nib or Storyboard files and then changing the language is not an acceptable method. Luckily Xcode 5 has improved this for Storyboards by introducing Base Localization, but I've personally come across several situations where this didn't work at all or seemed buggy. Also, Nib (Xib) files without a ViewController don't support it.

In this post I'll show a couple of tricks that can help with the Localization of Storyboard and Nib files.

Localized subclasses

When you use this method, you create specialized subclasses of view classes that handle the localization in the awakeFromNib() method. This method is called for each view that is loaded from a Storyboard or Nib, and by then all properties that you've set in Interface Builder will already have been set.

For UILabels, this means getting the text property, localizing it and setting the text property again.

Using Swift, you can create a single file (e.g. LocalizationView.swift) in your project and put all your subclasses there. Then add the following code for the UILabel subclass:

class LocalizedLabel : UILabel {
    override func awakeFromNib() {
        if let text = text {
            self.text = NSLocalizedString(text, comment: "")
        }
    }
}

Now you can drag a label onto your Storyboard and fill in the text in your base language as you would normally. Then change the Class to LocalizedLabel and it will get the actual label from your Localizable.strings file.

Screen Shot 2014-10-31 at 22.45.46

Screen Shot 2014-10-31 at 22.46.56

No need to make any outlets or write any code to change it!

You can do something similar for UIButtons, even though they don't have a single property for the text on a button.

class LocalizedButton : UIButton {
    override func awakeFromNib() {
        for state in [UIControlState.Normal, UIControlState.Highlighted, UIControlState.Selected, UIControlState.Disabled] {
            if let title = titleForState(state) {
                setTitle(NSLocalizedString(title, comment: ""), forState: state)
            }
        }
    }
}

This will even allow you to set different labels for the different states like Normal and Highlighted.

User Defined Runtime Attributes

Another way is to use the User Defined Runtime Attributes. This method requires slightly more work, but has two small advantages:

  1. You don't need to use subclasses. This is nice when you already use another custom subclass for your labels, buttons and other view classes.
  2. Your keys in the Strings file and texts that show up in the Storyboard don't need to be the same. This works well when you use localization keys such as myCoolTableViewController.header.subtitle. It doesn't look very nice to see those everywhere in your Interface Builder labels and buttons.

So how does this work? Instead of creating a subclass, you add a computed property to an existing view class. For UILabels you use the following code:

extension UILabel {

    var localizedText: String {
        set (key) {
            text = NSLocalizedString(key, comment: "")
        }
        get {
            return text!
        }
    }

}

Now you can add a User Defined Runtime Attribute with the key localizedText to your UILabel and have the Localization key as its value.

Screen Shot 2014-10-31 at 23.05.18

Screen Shot 2014-10-31 at 23.06.16

Here too, if you want to make this work for buttons, it becomes slightly more complicated. You will have to add a property for each state that needs a label.

extension UIButton {
    var localizedTitleForNormal: String {
        set (key) {
            setTitle(NSLocalizedString(key, comment: ""), forState: .Normal)
        }
        get {
            return titleForState(.Normal)!
        }
    }

    var localizedTitleForHighlighted: String {
        set (key) {
            setTitle(NSLocalizedString(key, comment: ""), forState: .Highlighted)
        }
        get {
            return titleForState(.Highlighted)!
        }
    }
}

Conclusion

Always try to pick the best solution for your problem. Use Storyboard Base Localization if that works well for you. If it doesn't, use the approach with subclasses if you don't need another custom subclass and if you don't mind using your base language strings as localization keys. Otherwise, use the last approach with User Defined Runtime Attributes.

How to Dockerize your Dropwizard Application

Wed, 10/29/2014 - 10:47

If you want to deploy your Dropwizard Application on a Docker server, you can Dockerize your Dropwizard Application. Since a Dropwizard Application is already packaged as an executable Java ARchive file, creating a Docker image for such an application should be easy.


In this blog, you will learn how to Dockerize a Dropwizard Application using 4 easy steps.

Before you start

  • You are going to use the Dropwizard-example application, which can be found at the Dropwizard GitHub repository.
  • Additionally you need Docker. I used Boot2Docker to run the Dockerized Dropwizard Application on my laptop. If you use boot2Docker, you may need this Boot2Docker workaround to access your Dockerized Dropwizard application.
  • This blog does not describe how to create Dropwizard applications. The Dropwizard getting started guide provides an excellent starting point if you like to know more about building your own Dropwizard applications.


Step 1: create a Dockerfile

You can start with creating a Dockerfile. Docker can automatically build images by reading the instructions described in this file. Your Dockerfile could look like this:

FROM dockerfile/java:openjdk-7-jdk

ADD dropwizard-example-1.0.0.jar /data/dropwizard-example-1.0.0.jar

ADD example.keystore /data/example.keystore

ADD example.yml /data/example.yml

RUN java -jar dropwizard-example-1.0.0.jar db migrate /data/example.yml

CMD java -jar dropwizard-example-1.0.0.jar server /data/example.yml

EXPOSE 8080


The Dropwizard Application needs a Java Runtime, so you can start from a base image already available at Docker Hub, for example: dockerfile/java:openjdk-7-jdk.

You must add the Dropwizard Application files to the image, using the ADD instruction in your Dockerfile.

Next, simply specify the commands of your Dropwizard Application which you want to execute during image build and container runtime. In the example above, the db migrate command is executed when the Docker image is built and the server command is executed when you issue a docker run command to create a running container.

Finally, the EXPOSE instruction tells Docker that your container will listen on the specified port(s) at runtime.


Step 2: build the Docker image

Place the Dockerfile and your application files in a directory and execute the docker build command to build a Docker image.

docker@boot2docker:~$ docker build -t dropwizard/dropwizard-example ~/dropwizard/


In the console output you should be able to see that the Dropwizard Application db migrate command is executed. If everything is ok, the last line reported informs you that the image was successfully built.

Successfully built dd547483b57b


Step 3: run the Docker image

Use the docker run command to create a container based on the image you have created. If you need to find your image id, use the docker images command to list your images. It should take around 3 seconds to start the Dockerized Dropwizard example application.

docker run -p 8080:8080 dd547483b57b

Notice that I included the -p option to add a network port binding, which maps port 8080 inside the container to port 8080 on the Docker host. You can verify whether your container is running using the docker ps command.

docker@boot2docker:~$ docker ps

CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS                    NAMES

3b6fb75adad6        dropwizard/dropwizard-example:latest   "/bin/sh -c 'java -j   3 minutes ago       Up 3 minutes        0.0.0.0:8080->8080/tcp   high_turing


Step 4: test the application

Now the application is ready for use. You can access the application using your Docker host IP address and the forwarded port 8080. For example, use the Google Advanced Rest Client App to register "John Doe".

GoogleRestClient

How to create Java microservices with Dropwizard

Mon, 10/27/2014 - 15:10

On Tuesday October 14th the Amsterdam Middleware Meetup experimented with Dropwizard. The idea was to find out what this technology is about, where it could be useful and what the alternatives are. So below I’ll give you an overview of Dropwizard and compare it to Spring Boot.
The Dropwizard website claims:

Dropwizard pulls together stable, mature libraries from the Java ecosystem into a simple, light-weight package that lets you focus on getting things done.

I’ll discuss each of these claims below.

Stable and mature
Dropwizard uses Jetty, Jersey, Jackson and Metrics as its most important frameworks, but also a host of other stuff like Guava, Liquibase and Joda Time. The latest Dropwizard release is version 0.7.1, released on June 20th 2014. It depends on these versions of some core libraries:
  • Jetty - 9.2.3.v20140905 - May 2014
  • Jackson - 2.4.1 - June 2014
  • Jersey - 2.11 - July 2014

The list above shows that stable != out-of-date, which is fine of course. The versions of the core libraries used are recent though. I guess 'stable' means libraries with a long history.

Simple
The components of a Dropwizard application are shown below (taken from the tutorial at http://dropwizard.io/getting-started.html):
Dropwizard components overview

  1. Application (HelloWorldApplication.java): the application's main method, responsible for startup.
  2. Configuration (HelloWorldConfiguration.java): sets the configuration for an environment; this is where you may set hostnames for systems the application depends on, or usernames.
  3. Data object (Saying.java).
  4. Resource (HelloWorldResource.java): the service implementation entry point (see the sketch after this list).
  5. Health Check (TemplateHealthCheck.java): runtime tests that show if the application still works.
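
To give an idea of what such a resource looks like, here is a minimal sketch in the spirit of the tutorial's HelloWorldResource (simplified by me: it returns a plain string instead of the Saying data object), using the standard JAX-RS annotations that Dropwizard builds on:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello-world")
@Produces(MediaType.TEXT_PLAIN)
public class HelloWorldResource {

  private final String template;

  public HelloWorldResource(String template) {
    this.template = template;
  }

  @GET
  public String sayHello() {
    // the template comes from the Configuration class, e.g. "Hello, %s!"
    return String.format(template, "stranger");
  }
}

The Application class wires this up during startup by registering the resource with the Jersey environment.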

Light weight
We did some experiments trying to answer the question whether Dropwizard applications are light weight. The table below summarizes some of the sizes of deployments and tools.
  • Tomcat: 14 MB
  • Tomcat lib folder: 7 MB
  • Jetty: 14.6 MB
  • Jetty in Dropwizard jar: 5.4 MB
  • Dropwizard tutorial example: 10 MB
  • Dropwizard extended example: 20 MB
  • Dropwizard Hibernate classes in package: 5 MB

A Tomcat or Jetty installation takes about 14 MB, but if you count only the lib folder the size goes down to about 7 MB. The Jetty folder in Dropwizard however is only 5.5 MB. Apparently Dropwizard managed to strip away some code you don’t really need (or it is packaged somewhere else; we didn’t look into that).
Building the tutorial results in a 10 MB jar, so if you would run a webapp in its own Tomcat container, switching to Dropwizard saves quite a bit. On the other hand, deployment size isn’t all that important if we’re still talking < 50 MB.
Compared to your default Weblogic install (513 MB, Weblogic-only on OSX) however, savings are humongous (but this is also true when you compare Weblogic to Tomcat or Jetty).

Productivity
We tried to run the build for the tutorial application (dropwizard-example in the dropwizard project on Github). This works fine and takes about 8 seconds using mocks for external connections. One option to explore would be to run tests against a deployed application. What we’re used to is that deploying an application for test takes lots of time and resources, but starting a Dropwizard app is quite cheap. Therefore it would be possible to run an integration test of services at the end of a build. This would be quite hard to do with e.g. Weblogic or Websphere.

Spring boot
Spring boot is interesting, as well as the discussion around the differences between Spring boot and Dropwizard. See https://groups.google.com/forum/#!topic/dropwizard-user/vH1h2PgC8bU

The official Spring boot website says: Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.
It’s good to see a platform change according to new insights, but still, I remember Rod Johnson saying some ten years ago that J2EE was bloated and complex and Spring was the answer. Now it seems we need Spring boot to make Spring simple? Or is it just that we don’t need application servers anymore to divide resources among processes?
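
To make the comparison a bit more concrete, a minimal Spring Boot application in the spirit of its own getting-started sample looks roughly like this (illustrative sketch, not something we built during the meetup):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// A single class acts as configuration, controller and entry point.
@RestController
@EnableAutoConfiguration
public class SampleApplication {

  @RequestMapping("/")
  String home() {
    return "Hello World!";
  }

  public static void main(String[] args) {
    SpringApplication.run(SampleApplication.class, args);
  }
}

Like a Dropwizard application, this runs as a plain executable jar with an embedded server, which is exactly why the comparison between the two keeps coming up.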

Dropwizard and Docker
Finally we experimented with running Dropwizard in a Docker container. This can be done with limited effort because Dropwizard applications have such a small number of dependencies. Thomas Kruitbosch will report on this later.

References
Spring boot: http://projects.spring.io/spring-boot/
Dropwizard: http://dropwizard.io/

How to deploy a Docker application into production on Amazon AWS

Fri, 10/17/2014 - 17:00

Docker reached production status a few months ago. But having the container technology alone is not enough. You need a complete platform infrastructure before you can deploy your Docker application in production. Amazon AWS offers exactly that: a production quality platform that offers capacity provisioning, load balancing, scaling, and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.

For demonstration purposes, you are going to use the node.js application that was built for CloudFoundry and used to demonstrate Deis in a previous post: a truly useful app of which the sources are available on GitHub.

1. Create a Dockerfile

The first thing you need to do is create a Dockerfile from which an image can be built. This is quite simple: you install the node.js and npm packages, copy the source files and install the JavaScript modules.

# DOCKER-VERSION 1.0
FROM    ubuntu:latest
#
# Install nodejs npm
#
RUN apt-get update
RUN apt-get install -y nodejs npm
#
# add application sources
#
COPY . /app
RUN cd /app; npm install
#
# Expose the default port
#
EXPOSE  5000
#
# Start command
#
CMD ["nodejs", "/app/web.js"]

2. Test your Docker application

Now you can create the Docker image and test it.

$ docker build -t sample-nodejs-cf .
$ docker run -d -p 5000:5000 sample-nodejs-cf

Point your browser at http://localhost:5000, click the 'start' button and Presto!

3. Zip the sources

Now that you know the instance works, you zip the source files. The image will be built on Amazon AWS based on your Dockerfile.

$ zip -r /tmp/sample-nodejs-cf-srcs.zip .

4. Deploy Docker application to Amazon AWS

Now you install and configure the Amazon AWS command line interface (CLI) and deploy the Docker source files to Elastic Beanstalk. You can do this all manually, but here you use the deploy-to-aws.sh script that I created.

$ deploy-to-aws.sh \
         sample-nodejs-cf \
         /tmp/sample-nodejs-cf-srcs.zip \
         demo-env

After about 8-10 minutes your application is running. The output should look like this:

INFO: creating application sample-nodejs-cf
INFO: Creating environment demo-env for sample-nodejs-cf
INFO: Uploading sample-nodejs-cf-srcs.zip for sample-nodejs-cf, version 1412948762.
upload: ./sample-nodejs-cf-srcs.zip to s3://elasticbeanstalk-us-east-1-233211978703/1412948762-sample-nodejs-cf-srcs.zip
INFO: Creating version 1412948762 of application sample-nodejs-cf
INFO: demo-env in status Launching, waiting to get to Ready..
...
INFO: demo-env in status Launching, waiting to get to Ready..
INFO: Updating environment demo-env with version 1412948762 of sample-nodejs-cf
INFO: demo-env in status Updating, waiting to get to Ready..
...
INFO: demo-env in status Updating, waiting to get to Ready..
INFO: Version 1412948762 of sample-nodejs-cf deployed in environment
INFO: current status is Ready, goto http://demo-env-vm2tqi3qk4.elasticbeanstalk.com

5. Test your Docker application on the internet!

Your application is now available on the Internet. Browse to the designated URL and click on start. When you increase the number of instances at Amazon, they will appear in the application. When you deploy a new version of the application, you can observe how the new version appears without any errors in the client application.

For more information, go to Amazon Elastic Beanstalk adds Docker support and Dockerizing a Node.js Web App.