Software Development Blogs: Programming, Software Testing, Agile Project Management


Xebia Blog

Scheduling containers and more with Nomad

Thu, 11/26/2015 - 11:18

Specifically for the Dutch Docker Day on the 20th of November, HashiCorp released version 0.2.0 of Nomad, which has some awesome features such as service discovery through integration with Consul, the system scheduler and restart policies. HashiCorp worked hard to release version 0.2.0 on the 18th of November, and we pushed ourselves to release a self-paced, hands-on workshop. If you would like to explore and play with these latest features of Nomad, go check out the workshop over at http://workshops.nauts.io.

In this blog post (or as I experienced it: roller coaster ride), you will catch a glimpse of the work that went into creating the workshop.

Last Friday, November the 20th, was the first edition of the Dutch Docker Day, where I helped prepare a workshop about "scheduling containers and more with Nomad". It was a great experience where attendees got to play with the new features included in 0.2.0, which nearly didn't make it into the workshop.


When HashiCorp released Nomad during their HashiConf event at the end of September, I was really excited, as they always produce high-quality tools with a great user experience. As soon as the binary was available I downloaded it and tried to set up a cluster to see how it compared to some of its competitors. The first release already had a lot of potential but also a lot of problems. For instance: when a container failed, Nomad would report it dead but take no action; restart policies were still but a dream.

There were a lot of awesome features in store for the future of Nomad: integration with Consul, system jobs, batch jobs, restart policies, etc. Imagine all the possible integrations with other HashiCorp tools! I was sold. So when I was asked to prepare a workshop for the Dutch Docker Day I jumped at the opportunity to get better acquainted with Nomad. The idea was that the attendees of the workshop, since it was a pretty new product with some quirks, would go on an explorative journey into the far reaches of the scheduler and together find its treasures and dark secrets.

Time went by and the workshop was taking shape nicely. We had a nice setup with a cluster of machines that automatically bootstrap the Nomad cluster and set up its basic configuration. We were told that there would be a new version released before the Dutch Docker Day, but nothing appeared until the day before the event. I was both excited and terrified! The HashiCorp team worked long hours to get the new release of Nomad done in time for the Dutch Docker Day so Armon Dadgar, the CTO of HashiCorp and original creator of Nomad, could present the new features during his talk. This of course is a great thing, except for the fact that the workshop was entirely aimed at 0.1.2 and we had none of these new features incorporated into our Vagrant box. Were we going to throw all our work overboard and just start over, the night before the event?

“Immediately following the initial release of Nomad 0.1, we knew we wanted to get Nomad 0.2 and all its enhancements into the hands of our users by Dutch Docker Day. The team put in a huge effort over the course of a month and a half to get all the new features done and to a polish people expect of HashiCorp products. Leading up to the release we ran into one crazy bug after another (surprisingly all related to serialization). After intense debugging we got it to the fit and polish we wanted the night before at 3 AM! Once the website docs were updated and the blog post written, we released Nomad 0.2. The experience was very exciting but also exhausting! We are very happy with how it turned out and hope the users are too!”

- Alex Dadgar, HashiCorp Engineer working on Nomad

It took until late in the evening to get an updated Vagrant box with a bootstrapped Consul cluster and the new Nomad version, in order to showcase the auto-discovery feature and Consul integration that 0.2.0 added. However, the slides for the workshop were still referencing the problems we encountered when trying out the 0.1.0 and 0.1.2 releases, so all the slides and statements we had made about things not working or being released in the future had to be aligned with the fixes and improvements that came with the new release. After some hours of hectic editing during the morning of the event, the slides were finally updated and showcased all the glorious new features!

Nomad simplifies operations by supporting blue/green deployments, automatically handling machine failures, and providing a single workflow to deploy applications.

The number of new features, fixes and improvements they added in this release is staggering. In order to discover services there is no longer a need for extra tools such as Registrator: your services are now automatically registered and deregistered as soon as they are started and stopped (which I first thought was a bug, because I wasn't used to Nomad actually restarting my dead containers). The system scheduler is another feature I've been missing in other schedulers for a while, as it makes it possible to easily schedule services (such as Consul or Sysdig) on all of the eligible nodes in the cluster.

Feature                    Description                                                              0.1.2   0.2.0
Service scheduler          Schedule a long lived job.                                               Y       Y
Batch scheduler            Schedule batch workloads.                                                Y       Y
System scheduler           Schedule a long lived job on every eligible node.                        N       Y
Service discovery          Discover launched services in Consul.                                    N       Y
Restart policies           If and how to restart a service when it fails.                           N       Y
Distinct host constraint   Ensure that Task Groups are running on distinct clients.                 N       Y
Raw exec driver            Run exec jobs without jailing them.                                      N       Y
Rkt driver                 A driver for running containers with Rkt.                                N       Y
External artifacts         Download external artifacts to execute for Exec and Raw exec drivers.    N       Y

And numerous fixes/improvements were added to 0.2.0.
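To give an impression of how these features fit together, below is a rough, hypothetical sketch of a Nomad job file that combines the Docker driver, Consul service registration and a restart policy. The job name, values and exact attribute names are illustrative only and were not part of the workshop; check the Nomad documentation for the precise 0.2.0 syntax.

job "redis-example" {
    datacenters = ["dc1"]
    type = "service"                  # "system" would place the task on every eligible node

    group "cache" {
        count = 1

        # Restart policy (new in 0.2.0); attribute names are indicative
        restart {
            attempts = 10
            interval = "5m"
            delay = "25s"
        }

        task "redis" {
            driver = "docker"

            config {
                image = "redis:latest"
            }

            # Consul integration: Nomad registers and deregisters this service automatically
            service {
                name = "redis-example"
                port = "db"

                check {
                    type = "tcp"
                    interval = "10s"
                    timeout = "2s"
                }
            }

            resources {
                cpu = 500
                memory = 256

                network {
                    mbits = 10
                    port "db" {}
                }
            }
        }
    }
}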

If you would like to follow the self-paced workshop by yourself, you can find the slides, machines and scripts for the workshop at http://workshops.nauts.io together with the other workshops of the event. Please let me know your experiences, so the workshop can be improved over time!

I would like to thank the HashiCorp team for their amazing work on the 0.2.0 release; the speed at which they have added so many great new features and improved the stability is incredible.

It was a lot of fun preparing the workshop for the Dutch Docker Day. Working with bleeding edge technology is always a great way to really get to know its inner workings and quirks, and I would recommend it to anyone; just be prepared to do some last-minute work ;)

Example Mapping - Steering the conversation

Mon, 11/23/2015 - 19:14

People who are familiar with BDD and ATDD already know how useful the three amigos (product owner, tester and developer) session is for talking about what the system under development is supposed to do. But somehow these refinement sessions seem to drain the group's energy. One of the problems I see is not having a clear structure for conversations.

Example Mapping is a simple technique that can steer the conversation towards breaking down any product backlog item within 30 minutes.

The Three Amigos

Example Mapping is best used in so-called Three Amigos Sessions. The purpose of such a session is to create a common understanding of the requirements and a shared vocabulary between the product owner and the rest of the team. During the session the product owner presents each user story by explaining the need for change in the product. It is essential that the conversation includes multiple points of view: testers and developers identify missing requirements or edge cases, which are addressed by describing the accepted behaviour before a feature is considered ready for development.

In order to help you steer the conversations, here is a list of guidelines for Three Amigos Sessions:

  • Empathy: Make sure the team has the capability to help each other understand the requirements. Without empathy and the room for it, you are lost.
  • Common understanding of the domain: Make sure that the team uses the same vocabulary (digital or physical) and speaks the same domain language.
  • Think big, but act small: Make sure all user stories are small and ready enough to make an impact.
  • Rules and examples: Make sure every user story explains the specification with rules and scenarios / examples.
Example mapping

Basic ingredients for Example Mapping are curiosity and a pack of post-it notes containing the following colours:

  • Yellow for defining user story
  • Red for defining questions
  • Blue for defining rules
  • Green for defining examples

Using the following steps can help you steer the conversations towards accepted behaviour of the system under development:

  1. Understanding the problem
    Let the product owner start by writing down the user story on a yellow post-it note and have him explain the need for change in the product. The product owner should help the team understand the problem.
  2. Challenge the problem by asking questions
    Once the team has understood the problem, the team challenges the problem by asking questions. Collect all the questions by writing them down starting with "What if ... " on red post-it notes. Place them on the right side of the user story (yellow) post-it note. We will treat this post-it note as a ticket for a specific and focussed discussion.
  3. Identifying rules
    The key here is to identify rules for every given answer (steered from the red question post-it notes). Extract rules from the answers and write them down on a blue post-it note. Place them below the user story (yellow) post-it note. This basically describes the acceptance criteria of a user story. Make sure that every rule can be discussed separately. The single responsibility principle and separation of concerns should be applied.
  4. Describing situations with examples
    Once you have collected all the important rules of the user story, you collect all interesting situations / examples by writing them down on a green post-it note. Place them below the rule (blue) post-it note. Make sure that the team talks about examples focussed on one single rule. Steer the discussion by asking questions like: Have we reached the boundaries of the rule? What happens when the rule fails?
An example


In the example given above, the product owner requires a free shipping process. She wrote it down on a yellow post-it note. After collecting and answering questions, two rules were discussed and refined on blue post-it notes: the shopping cart limit and the appearance of the free shipping banner on product pages. All further discussions were steered towards the appropriate rule. Two examples were defined for the shopping cart limit and one for the free shipping banner, each on a green post-it note. Besides steering the team into rule-based discussions, the team also gets a clear overview of the scope for the first iteration of the requirement.

Getting everyone on the same page is the key to success here. Try it a couple of times and let me know how it went.


Add ifPresent to Swift Optionals

Fri, 11/20/2015 - 22:43

In my previous post I wrote a lot about how you can use the map and flatMap functions of Swift Optionals. In this one, I'll add a custom function to Optionals through an extension, the ifPresent function.

extension Optional {

    public func ifPresent(@noescape f: (Wrapped) throws -> Void) rethrows {
        switch self {
        case .Some(let value): try f(value)
        case .None: ()
        }
    }
}

What this does is simply execute the closure f if the Optional is not nil. A small example:

var str: String? = "Hello, playground"
str.ifPresent { print($0) }

This works pretty much the same as the ifPresent method of Java optionals.

Why do we need it?

Well, we don't really need it. Swift has the built-in language features if let and guard, which deal pretty well with these kinds of situations (something Java cannot do).

We could simply write the above example as follows:

var str: String? = "Hello, playground"
if let str = str {
    print(str)
}

For this example it doesn't matter much whether you use ifPresent or if let. And because everyone is familiar with if let, you'd probably want to stick with that.

When to use it

Sometimes, when you want to call a function that has exactly one parameter with the same type as your Optional, you might benefit a bit more from this syntax. Let's have a look at that:

var someOptionalView: UIView? = ...
var parentView: UIView = ...

someOptionalView.ifPresent(parentView.addSubview)


Since addSubview has one parameter of type UIView, we can immediately pass in that function reference to the ifPresent function.

Otherwise we would have to write the following code instead:

var someOptionalView: UIView? = ...
var parentView: UIView = ...

if let someOptionalView = someOptionalView {
    parentView.addSubview(someOptionalView)
}
When you can't use it

Unfortunately, it's not always possible to use this in the way we'd like to. If we look back at the very first example with the print function we would ideally write it without closure:

var str: String? = "Hello, playground"
str.ifPresent(print) // ideal, but this does not compile

Even though print can be called with just a String, its function signature takes variable arguments and default parameters. Whenever that's the case it's not possible to use it as a function reference. This becomes increasingly frustrating when you add default parameters to existing methods, after which the code that refers to them doesn't compile anymore.

It would also be useful if it was possible to set class variables through a method reference. Instead of:

if let value = someOptionalValue {
  self.value = value
}

We would write something like this:


But that doesn't compile, so we need to write it like this:

someOptionalValue.ifPresent { self.value = $0 }

Which isn't really much better than the if let variant.

(I had a look at a post about If-Let Assignment Operator but unfortunately that crashed my Swift compiler while building, which is probably due to a compiler bug)


Is the ifPresent function a big game changer for Swift? Definitely not.
Is it necessary? No.
Can it be useful? Yes.

The Sunk Cost Fallacy Fallacy

Thu, 11/19/2015 - 15:41

Imagine two football fans planning to attend a match 60 miles away. One of them paid for a ticket in advance; the other was just about to buy a ticket when he got one from a friend for free. The night of the game, a blizzard hits. Which fan do you think is more likely to drive through a blizzard to see the game?

You probably (correctly) guessed that the fan who paid for his ticket is more likely to drive through the blizzard. What you may not have realized, though, is that this is an irrational decision, at least economically speaking.

The football fan story is a classic example of the Sunk Cost Fallacy, adapted from Richard Thaler's "Toward a Positive Theory of Consumer Choice" (1980) in Daniel Kahneman's excellent book "Thinking, Fast and Slow" (2011). Many thanks to my colleagues Joshua Appelman, Viktor Clerc and Bulat Yaminov for the recommendations.

The Sunk Cost Fallacy

The Sunk Cost Fallacy is a faulty pattern of behavior in which past investments cloud our judgment on how to move forward. When past investments are irrecoverable (we call them 'sunk' costs), they should have no effect on our choices for the future. In practice, however, we find it difficult to cut our losses, even when it's the rational thing to do.

We see the Sunk Cost Fallacy effect in action every day when evaluating technical and business decisions. For instance, you may recognize a tendency to become attached to an "elegant" abstraction or invariant, even when evidence is mounting that it does the overall complexity more harm than good. Perhaps you've seen a Product Owner who remains too attached to a particular feature, even after its proven failure to achieve the desired effect. Or the team that sticks to an in-house graphing library even after better ones become available for free, because they are too emotional about throwing out their own code.

This is the Sunk Cost Fallacy in action. It's healthy to take a step back and see if it's time to cut your losses.

Abuse of the Sunk Cost Fallacy

However, the Sunk Cost Fallacy can be abused when it's used as an excuse to freely backtrack on choices with little regard for past costs. I call this the Sunk Cost Fallacy Fallacy.

Should you move from framework A to framework B? If B will help you be more effective in the future, even when you've invested in A, the Sunk Cost Fallacy says you should move to B. However, don't forget to factor in the 'cost of switching': the past investments in framework A may be sunk costs, but switching could introduce a technical debt of code that needs to now be ported. Make sure to compare the expected gain against this cost, and make a rational decision.

You might feel bad about having picked framework A in the first place. The Sunk Cost Fallacy teaches you not to let this emotion cloud your judgment while evaluating framework B. However, it is still a useful emotion that can trigger valuable questions: Could you have seen this coming? Is there something you could have done in the past to make it cheaper to move from framework A to framework B now? Can you learn from this experience and make a better initial choice next time?


An awareness of the Sunk Cost Fallacy can help you make better decisions: cut your losses when it is the rational thing to do. Be careful not to use the Sunk Cost Fallacy as an excuse, and take into account the cost of switching. Most importantly, look for opportunities to learn from your mistakes.

Docker to the on-premise rescue

Wed, 11/18/2015 - 10:18

During the second day at DockerCon EU 2015 in Barcelona, Docker introduced the missing glue, which they call the "Containers as a Service Platform". With a focus on both the public cloud and on-premise, this is a great addition to the ecosystem. In this blog post I would like to focus on the Run part of Docker's "Build-Ship-Run" vision, with an emphasis on on-premise. To realize this, Docker launched the Docker Universal Control Plane, the project formerly known as Orca.

I got to play with version 0.4.0 of the software during a hands-on lab and I will try to summarize what I've learned.

Easy installation

Of course the installation is done by launching Docker containers on one or more hosts, so you will need to provision your hosts with the Docker Engine. After that you can launch an `orca-bootstrap` container to install, uninstall, or add an Orca controller. The orca-bootstrap script will generate a Swarm Root CA and an Orca Root CA and deploy the necessary Orca containers (I will talk more about this in the next section), after which you can log in to the Docker Universal Control Plane. Adding a second Orca controller is as simple as running orca-bootstrap with a join parameter and specifying the existing Orca controller.


Let's talk a bit about the technical parts, keeping in mind that I'm not the creator of this product. There are 7 containers running after you have successfully run the orca-bootstrap installer. You have the Orca controller itself, listening on port 443, which is your main entry point to Docker UCP. There are 2 cfssl containers, one for the Orca CA and one for the Swarm CA. Then you have the Swarm containers (Manager and Agent) and the key-value store, for which Docker chose etcd. Finally, there is an orca-proxy container, whose port 12376 redirects to the Swarm Manager. I'm not sure why this is yet; maybe we will find out in the beta.

From the frontend (which we will discuss next) you can download a 'bundle', which is a zip file containing the TLS parts and a  sourceable environment file containing:

export DOCKER_CERT_PATH=$(pwd)
export DOCKER_HOST=tcp://orca_controller_ip:443
# Run this command from within this directory to configure your shell:
# eval $(env.sh)
# This admin cert will also work directly against Swarm and the individual
# engine proxies for troubleshooting.  After sourcing this env file, use
# "docker info" to discover the location of Swarm managers and engines.
# and use the --host option to override $DOCKER_HOST

As you can see, it also works directly against Swarm manager and Engine to troubleshoot. Running `docker version` with this environment returns:

Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 17:43:42 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      orca/0.4.0
 API version:  1.21
 Go version:   go1.5
 Git commit:   56afff6
 OS/Arch:      linux/amd64


Okay, so when I opened up the frontend it looked pretty familiar, and I was trying to remember where I had seen it before. After a look at the source, I found an ng-app parameter in the html tag named shipyard. The GUI is based on the Shipyard project, which is cool because this was an already well-functioning management tool built upon Docker Swarm and the Docker API. People familiar with Shipyard already know the functionality, so let me quickly sum up what it can do and what it looks like in Docker UCP.

Dashboard overview

Application expanded: quickly start/stop/restart/destroy/inspect a running container

Application overview: graphs of resource usage; container IDs can be included in or excluded from the graph

Containers overview: multi-select containers and execute actions

Ability to quickly inspect logs

Ability to exec into the container to debug/troubleshoot, etc.

Secrets Management & Authentication/Authorization

So, in this hands-on lab there were a few things that were not ready yet. Eventually it will be possible to hook up Docker UCP to an existing LDAP directory, but I was not able to test this yet. Once fully implemented, you can hook it up to your existing RBAC system and give teams the authorization they need.

There was also a demo showing off a secret management tool, which was not yet available either. I guess this is what the key-value store is used for as well. Basically you can store a secret at a path such as secret/prod/redis and then access it by running a container with a label like:

docker run -ti --rm --label com.docker.secret.scope=secret/prod

Now you can access the secret within the container in the file /secret/prod/redis.

Now what?

A lot of new things are being added to the ecosystem, which is certainly going to help some customers adopt Docker and bring it into production. I like that Docker thought of the on-premise customers and gives them an experience on par with that of cloud users. As this is an early version they need feedback from users, so if you are able to test it, please do so in order to make it a better product. They said they are already working on multi-tenancy, for instance, but no timelines were given.

If you would like to sign up for the beta of Docker Universal Control Plane, you can sign up at this page: https://www.docker.com/try-ducp



Are your node modules secure?

Fri, 11/13/2015 - 13:05

With over 200k packages, npm is the world's largest registry of open source packages. It serves several million downloads each month. The popularity of npm is a direct result of the popularity of JavaScript. Originally npm was the package manager for Node.js, the server-side JavaScript runtime. Since Node.js developers mostly follow the Unix philosophy, the npm registry contains many very small libraries tailored to a specific purpose. Since the introduction of Browserify, many of these libraries suddenly became suitable for use in the web browser. It has made npm not only the package manager for Node.js, but for the entire JavaScript ecosystem. This is why npm is not an abbreviation of Node Package Manager, but a recursive bacronymic abbreviation for "npm is not an acronym". Wow.

If you do any serious JavaScript development, you cannot go without libraries, so npm is an indispensable resource. Any project of meaningful size is quickly going to rely on several dozen libraries. Considering that these libraries often have a handful of dependencies of their own, your application indirectly depends on hundreds of packages. Most of the time this works out quite well, but sometimes things aren't that great. It turns out that keeping all of these dependencies up to date can be quite a challenge. Even if you frequently check your dependencies for updates, there's no guarantee that your dependencies' authors will do the same. With the pace at which new JavaScript packages are being released, it's close to impossible to keep everything up to date at all times.

Most of the time it's not a problem to rely on an older version of a package. If your package works fine with an outdated dependency, there's no compelling reason to upgrade. Why fix something that isn't broken? Unfortunately, it's not so easy to tell if it is. Your package may have been broken without your knowledge. The problem is in the definition of "broken". You could consider it to mean your application doesn't work in some way, but what about the non-functionals? Did you consider the fact that you may be relying on packages that introduce security vulnerabilities into your system?

Like any software, Node.js and JavaScript aren't immune to security issues. You could even consider JavaScript inherently less secure because of its dynamic nature. The Node Security Project exists to address this issue. It keeps a database of known security vulnerabilities in the Node ecosystem and allows anyone to report them. Although NSP provides a command line tool to check your dependencies for vulnerabilities, a new company called Snyk has recently released a tool to do the same and more. Snyk, short for "so now you know", finds security vulnerabilities in your entire dependency tree based on the NSP database and other sources. Its CLI tool is incredibly simple to install and use. Just `npm install snyk` and off you go. You can run it against your own project, or against any npm package:

> snyk test azure

✗ Vulnerability found on validator@3.1.0
Info: https://app.snyk.io/vuln/npm:validator:20130705
From: azure@0.10.6 > azure-arm-website@0.10.0 > azure-common@0.9.12 > validator@~3.1.0
No direct dependency upgrade can address this issue.
Run `snyk protect -i` to patch this vulnerability
Alternatively, manually upgrade deep dependency validator@~3.1.0 to validator@3.2.0


Tested azure for known vulnerabilities, found 32 vulnerabilities.

It turns out the Node.js library for Azure isn't quite secure. Snyk can automatically patch the vulnerability for you, but the real solution is to update the azure-common package to use the newer version of validator. As you see, most of the security issues reported by Snyk have already been fixed by the authors of the affected library. That's the real reason to keep your dependencies up to date.

I think of Snyk as just another type of code quality check. Just like your unit tests, your build should fail if you've accidentally added an insecure dependency. A really simple way to enforce it is to use a pre-commit hook in your package.json:

"scripts": {
 "lint": "eslint src test",
 "snyk": "snyk test",
 "test": "mocha test/spec",
"pre-commit": ["lint", "test", "snyk"]

The pre-commit hook will automatically be executed when you try to commit to your Git repository. It will run the specified npm scripts and if any of them fail, abort the commit. It must be noted that, by default, Snyk will only test your production dependencies. If you want it to also test your devDependencies you can run it with the `--dev` flag.
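For example, assuming the same project setup as above, checking the devDependencies as well would then simply be:

> snyk test --dev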

Robot Framework and the keyword-driven approach to test automation - Part 1 of 3

Wed, 11/04/2015 - 21:55

Hans Buwalda is generally credited with the introduction of the keyword-driven paradigm of functional test automation, initially calling it the 'action word' approach.

This approach tackled certain fundamental problems pertaining to the efficiency of the process of creating test code (mainly the lack of reuse) and the maintainability, readability and robustness of that code. Problems surrounding these aspects frequently led to failed automation efforts. The keyword-driven framework therefore was (and is) a quantum leap forward, providing a solution to these problems by facilitating the application of modularity, abstraction and other design patterns to the automation code.

Robot Framework (RF) can be regarded as the epitome of this type of automation framework. Our first post on the RF concentrated on the high-level design of the platform. In this second of our three-part series of introductory-level posts, we will take a closer look at what the keyword-driven approach to test automation is all about.

This second post will itself be divided into three parts. In part 1, we will look at the position of the keyword-driven approach within the history of test automation frameworks. In part 2 we will delve into the concept of a 'keyword'. Finally, in part 3, we will look at how the keyword-driven approach is implemented by the RF.

A short history of test automation frameworks

In order to get a first, overall impression of the nature and advantages of the keyword-driven approach to test automation, let's have a look at the different framework generations that make up the history of test automation platforms. In doing so, we'll come across some of the differences, similarities and interdependencies between these various types of frameworks.

Please note that this history is partially a recap of similar (and sometimes more elaborate) genealogies in existing books, articles and blog posts. Nevertheless, I want to present it here as part of my comprehensive RF introduction. Moreover, I intend to give my own spin to the categorization of frameworks, by arguing that hybrid, MBT (Model-based testing) and Agile frameworks have no place in this lineage and that Scriptless Frameworks are not really scriptless.

Having said that, let's take a closer look at the various types of automation frameworks. The methodological and technological evolution of automated functional testing platforms is often divided into the following stages.

Linear frameworks (such as record & playback frameworks)

The code that is written or generated is procedural in nature, lacking both control structures (for run-time decision making and the reuse of code sequences) and calling structures (for the  reuse of code modules). To wit, code consists of long sequences of statements that are executed one after the other.

Test cases are thus implemented as typically large, monolithic blocks of static code, mixing what Gojko Adzic calls the technical activity level, user interface workflow level and business rule level. That is, the procedural test case consists of a series of lowest-level statements that implements and 'represents' all three levels. Up until the advance of the keyword-driven approach, this mixing was a staple of test automation.

It will be clear that such code lacks reusability, maintainability, readability and several other critical code qualities. The linear framework therefore always injected major drawbacks and limitations into an automation solution.

Amongst many other examples (of such drawbacks and limitations) is that adding even the slightest variation on an existing test case was labour-intensive and led to more code to maintain. Due to the lack of control structures, any alternative functional flow had to be implemented through an additional test case, since the test code had no way of deciding dynamically (i.e. at run-time) which alternative set of test actions to execute. This was worsened by the fact that test cases could not be made generic due to the lack of data-driven capabilities. Consequently, each test case had to be implemented through a dedicated, separate script, even in those cases where the functional flow and (thus) the required test actions were completely identical and only the data input had to be varied.

The lack of control structures also meant less robustness, stability and reliability of the code, since custom error detection and recovery logic could not be implemented and, therefore, run-time error handling was completely lacking.

It was also hard to understand the purpose of test cases. Business stakeholders especially were dependent on the documentation provided with each test case, and this documentation needed to be maintained as well.

With the advent of each subsequent framework generation, this situation would gradually be improved upon.

Structured/scripted frameworks

The automation code now featured control/flow structures (logic), such as for-loops and conditional  statements, making the code maybe even harder to read while not improving much upon maintainability and reusability.

Of course, this approach did provide flexibility and power to the automation engineer. It also prevented code duplication to some extent, on the one hand due to the reuse of blocks of statements through looping constructs and on the other hand because alternative functional flows could now be handled by the same test case through decisions/branching. Additionally, robustness and stability was greatly improved upon because through the control structures routines for error detection and handling could be implemented.

Data-driven frameworks

Data is now extracted from the code, tremendously improving upon automation efficiency by increased levels of both reusability and maintainability.

The code is made generic by having the data passed into the code by way of argument(s). The data itself persists either within the framework (e.g. in lists or tables) or outside the framework (e.g. in databases, spread sheets or text files).

The automation platform is thereby capable of having the same script iterate through sets of data items, allowing for a truly data-driven test design. Variations on test cases can now be defined by simply specifying the various sets of parameters that a certain script is to be executed with, e.g. a login routine that is repeatedly executed with different sets of credentials to test all relevant login scenarios.
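As a minimal, framework-agnostic sketch of this data-driven idea (the login function below is just a stand-in for the real system under test, and the credentials are made up), one generic test iterating over a table of data sets could look like this in Python:

import unittest

LOGIN_DATA = [
    # (username, password, expected result)
    ("alice", "correct-password", True),
    ("alice", "wrong-password", False),
    ("", "", False),
]

def login(username, password):
    # Stand-in for the real login routine of the system under test
    return username == "alice" and password == "correct-password"

class LoginTests(unittest.TestCase):
    def test_login_scenarios(self):
        # One generic script, executed with different sets of data
        for username, password, expected in LOGIN_DATA:
            with self.subTest(username=username, password=password):
                self.assertEqual(login(username, password), expected)

if __name__ == "__main__":
    unittest.main()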

Through this approach, the 1-1 relation between test case and test script could be abandoned for the first time. Multiple test cases, that require identical sequences of test actions but different input conditions, could be implemented through one test script. This increased the level of data coverage immensely, while simultaneously reducing the number of scripts to maintain.

Of course, the data-driven approach to test design comes with a trade-off between efficiency and readability. Nevertheless, it is now possible to very quickly and efficiently extend test coverage. For instance through applying boundary-value analysis and equivalence partitioning to the test designs.

Where the structured frameworks added efficiency to the testing of variations in functional flows, the data-driven framework added efficiency to the testing of variations in data-flows.

Keyword-driven frameworks (sometimes called modularity frameworks)

Reusable blocks of code statements are now extracted into one or more layers of lower-level test functions. These functions are called 'keywords'. Consequently, there are now at least two layers in the automation code: the higher-level test cases and the lower-level, reusable test functions (keywords).

Test cases now can call keywords, thereby reusing the involved keyword logic and abstracting from the technical complexities of the automation solution.

The keywords can live in code (e.g. Python, Java, .Net) or can be created through a (proprietary) scripting language. A combination of coding and scripting is also possible. In that case the scripted functions typically reuse (and are implemented through) the lower-level, coded functions.

By facilitating modularization and abstraction, the keyword-driven framework dramatically improves the reusability and maintainability of the test code as well as the readability of both test code and test cases.
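As a small, hypothetical illustration of this two-layer structure, a keyword-driven test in Robot Framework syntax could look like the sketch below. The keyword and locator names are invented; the low-level keywords (Input Text, Click Button) would come from a library such as Selenium2Library, and the remaining high-level keywords (Open Login Page, Dashboard Should Be Visible) would be defined in the same way as Log In As.

*** Test Cases ***
Valid Login
    Open Login Page
    Log In As    alice    correct-password
    Dashboard Should Be Visible

*** Keywords ***
Log In As
    [Arguments]    ${username}    ${password}
    Input Text      id=username    ${username}
    Input Text      id=password    ${password}
    Click Button    id=login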

More on the nature and advantages of the keyword-driven framework, in part 2 of this second post.

Scriptless frameworks

There is a substantial level of controversy surrounding this type of framework. That is, there is a lot of polarized discussion on the validity of the claims that tool vendors make with regard to the capabilities and benefits of their tools.

The principal claim seems to be that these frameworks automatically generate the reusable, structured code modules as featured in the keyword-driven approach. No scripting required.

I must admit to not having a lot of knowledge of (let alone hands-on experience with) these frameworks. But based on what I have seen and read (and heard from people who actually use them), it appears to me that the 'scriptless' amounts to nothing more than the automated generation of an (admittedly advanced) object/UI map through some sort of code scanning process. For example, the controls (i.e. GUI elements such as edit fields, buttons, etc.) of some part of an application's GUI may be inventoried (in terms of all kinds of properties) and then be made available for the application of some sort of operation (e.g. click or input) or assertion (e.g. isEnabled, hasValue). The available types of operations and assertions per GUI element are thus part of the object map. But all of this hardly constitutes a truly scriptless approach, since the logic that these actions must be embedded in (to model the workflow of the SUT), still needs to be scripted and/or coded.

Hybrid frameworks

Sometimes hybrid frameworks are mentioned as the follow-up generation to the keyword-driven approach, but, since keyword-driven frameworks are always data-driven as well, this is a superfluous category. In general, each generation subsumes the essential features of its predecessors and adds new features. So, for example, keyword-driven means structured and data-driven as well.

Note though that certain modern, keyword-driven frameworks are not inherently structured, since they do not feature a (proprietary) scripting language in which to create (mid-level) structured functions, but only support coding such functions at the lowest level in e.g. Java.

For instance, the 'scenario' tables of FitNesse, although often used as a middle layer between test cases (specifications) and the fixtures implemented in code, cannot hold structured functions, since FitNesse does not provide control structures of any form. Therefore, the scenario table contains linear 'code'. For a long time, these tables weren't even able to provide return values.

As a side note: of all the keyword-driven frameworks, RF features the most mature, complete and powerful scripting engine. We will go into details of the RF scripting facilities in a later post. Of course, to what extent such scripting should be applied to test code development is subject to discussion. This discussion centers mainly around the impact of scripting on the risks that apply to the involved automation efforts. Again, a later RF post will focus on that discussion.

Model-based testing frameworks

Similarly, Model-based testing (MBT) frameworks are sometimes mentioned as preceding the scriptless framework generation. But the specific capabilities of this type of tool pertain to the automated generation of highly formalized (logical) test designs with a specific form of coverage. Therefore, in my opinion, model-based frameworks do not belong in this genealogy at all.

With MBT frameworks, test cases are derived from a model, such as a finite state machine, that represents (the behavior of) the SUT. The model, which serves as input to the framework, is sometimes created dynamically as well. However, the model-based approach to test design is, in itself, agnostic towards the question of how to execute the generated set of cases. It might be automated or it might be manual. Accordingly, there are three 'deployment models' for the execution of a model-based test design.

Only seldom, in the case of the so-called 'online testing' deployment scheme, can the generated tests be executed automatically. The logical test cases are first dynamically made concrete (physical) and are then executed. In that case, the execution is typically performed by an execution engine that is part of the same framework (as is the case with e.g. Tricentis Tosca). However, depending on the product technology and interfaces, customizations (extensions) to the execution engine could be required.

At times the created test cases are candidates for automated execution. This is the case with the so-called 'offline generation of executable tests' deployment scheme, which generates the test cases as machine-readable code modules, e.g. a set of Java or Python classes. These can then be integrated into the test automation code created on platforms such as Cucumber or RF and subsequently be executed by them.

Most often though the generated test designs adhere to the so-called 'offline generation of manually deployable tests' deployment scheme. In that case, the framework output is a set of human-readable test cases that are to be executed manually.

The fact that occasionally model-based test designs can be executed directly by an MBT framework component (namely in one of the three deployment schemes), is the reason that these frameworks are sometimes (and erroneously) listed in the genealogy of test automation frameworks.

Agile/ATDD/BDD frameworks

Finally, sometimes Agile, ATDD (Acceptance Test Driven Development) or BDD (Behavior Driven Development) frameworks are mentioned as well. However, this is not so much a category unto itself as, at most, an added feature to (or on top of) modern, keyword-driven frameworks. Frameworks such as the Robot Framework or FitNesse allow for high-level, business-readable, BDD-style test designs or specifications, for instance a test design applying the Gherkin syntax of Given-When-Then.

These BDD-specific features thus add to the frameworks the capability of facilitating specification and collaboration. That is, they enable collaborative specification or specification by example. Some frameworks, such as Cucumber or JBehave actually define, understand and promote themselves in terms of this specific usage. In other words, although they can be used in a 'purely' keyword- and data-driven manner as well, they stress the fact that the main use case for these frameworks is 'specification by example'. They position themselves as advocates of the BDD philosophy and, as such, want to stimulate and evangelize BDD. From the JBehave web site: "JBehave is a framework for Behaviour-Driven Development (BDD). BDD is an evolution of test-driven development (TDD) and acceptance-test driven design. It shifts the vocabulary from being test-based to behaviour-based, and positions itself as a design philosophy." [italics MH]

It is precisely through their ability to create modularized and data-driven test automation code that the keyword-driven frameworks facilitate and enable a layered and generic test design and thus the ATDD/BDD practices of creating executable specifications and of specifying by example. These specifications or examples need to be business-readable and, consequently, require all of the implementation details and complexities to be hidden in lower layers of the automation solution. They must also be able to allow for variations on test cases (i.e. examples), for instance in terms of boundary cases or invalid data. The keyword-driven mechanisms, mainly those of code extraction and data extraction, thus form the technical preconditions to the methodological and conceptual shift from the 'test-based vocabulary' to the 'behavior-based vocabulary'. ATDD/BDD practices, methods and techniques such as collaborative design, Gherkin, living documentation, etc. can be applied through putting those keyword-driven features that form the key advantages of this type of framework to their best use. So, to create, for instance, high-level scenarios in Gherkin is nothing more than a (somewhat) new form of usage of a keyword-driven framework and a (slightly) different spin on the keyword-driven approach. Although originally not devised and designed with these usages in mind, the keyword-driven frameworks proved to be just the perfect match for such practices.

In conclusion, although Agile, ATDD and BDD as methodological paradigms or conceptual frameworks are new, the phrase 'Agile/ATDD/BDD test automation frameworks' merely refers to a variation on the application of the keyword-driven approach and on the usage of the involved frameworks. This usage can, however, be regarded as applying the keyword-driven approach to the fullest extent and as bringing it to its logical conclusion.

At the end of the day, applying ATDD/BDD-style test designs comes down to layering these designs along the lines of the (already mentioned) distinction made by Gojko Adzic. This layering is made possible by the keyword-driven nature of the utilized frameworks and, consequently, always implies a keyword-driven approach.


We now have a better understanding of the position that the keyword-driven approach holds in relation to other types of test automation frameworks and of some of its unique advantages.

As a side-effect, we have also gained some knowledge concerning the evolution of test automation frameworks as well as concerning the problem space that they were developed in.

And finally, we have seen that certain framework 'types' do not belong in the pantheon of functional test automation frameworks.

Building upon this context, we will now give a brief, condensed analysis of the notion of a 'keyword' in part 2 of this three-part post.

Agile, but still really not Agile? What Pipeline Automation can do for you. Part 3.

Tue, 11/03/2015 - 13:07

Organizations are adopting Agile, and teams are delivering on a feature-by-feature basis, producing business value at the end of every sprint. Quite possibly this is also the case in your organization. But do these features actually reach your customer at the same pace and generate business value straight away? And while we are at it: are you able to actually use feedback from your customer and apply it in the very next sprint?

Possibly your answer is "No", which I see very often. Many companies have adopted the Agile way of working in their lines of business, but for some reason 'old problems' just do not seem to go away...

Hence the question:

"Do you fully capitalize on the benefits provided by working in an Agile manner?"

Straightforward Software Delivery Pipeline Automation might help you with that.

In this post I hope to inspire you to think about how Software Development Pipeline automation can help your company to move forward and take the next steps towards becoming a truly Agile company. Not just a company adopting Agile principles, but a company that is really positioned to respond to the ever changing environment that is our marketplace today. To explain this, I take the Agile Manifesto as a starting point and work from there.

In my previous posts (post 1, post 2), I addressed Agile Principles 1 to 4 and 5 to 8. Please read below where I'll explain about how automation can help you for Agile Principles 9 to 12.


Agile Principle 9: Continuous attention to technical excellence and good design enhances agility.

In Agile teams, technical excellence is achieved by complete openness of design, transparency on design implications, reacting to new realities and using feedback loops to continuously enhance the product. However, many Agile teams still seem to operate in the blind when it comes to feedback on build, deployment, runtime and customer experience.

Automation enhances the opportunity to implement feedback loops in the Software Delivery Process. Whatever is automated can be monitored and immediately provides insight into the actual state of your pipeline. Think of things like trend information, test results, statistics, and current health status at the press of a button.

Accumulating actual measurement data is an important step; pulling this data up to an abstract level that the complete team can understand in the blink of an eye is another. Go that extra mile and use dashboarding techniques to make the data visible. Not only is it fun to do, it is very helpful in making project status come alive.


Agile Principle 10: Simplicity--the art of maximizing the amount of work not done--is essential.

Many of us may know the quote: "Everything should be made as simple as possible, but no simpler". For "Simplicity--the art of maximizing the amount of work not done", wastes like 'over-processing' and 'over-production' are minimized by showing the product to the Product Owner as soon as possible and at frequent intervals, preventing gold plating and the build-up of features in the pipeline.

Of course, the Product Owner is important, but the most important stakeholder is the customer. To get feedback from the customer, you need to get new features not only to your Demo environment, but all the way to production. Automating the Build, Deploy, Test and provisioning processes are topics that help organizations achieving that goal.

Full automation of your software delivery pipeline provides a great mechanism for minimizing waste and maximizing throughput all the way into production. It will help you to determine when you start gold plating and position you to start doing things that really matter to your customer.

Did you know that, according to a Standish report, more than 50% of functionality in software is rarely or never used? These aren't just marginally valued features; many are no-value features. Imagine what can be achieved when we actually know what is used and what is not.


Agile Principle 11: The best architectures, requirements, and designs emerge from self-organizing teams.

Traditionally, engineering projects were populated with specialists. Populating a team with specialists was based on the concept of dividing labor, pushing specialists to focus on their own field of expertise. The interaction designer designs the UI, the architect creates the required architectural model, the database administrator sets up the database, the integrator works on integration logic and so forth. Everyone was working on an assigned activity, but as soon as the components were put together, nothing seemed to work.

In Agile teams it is not about a person performing a specific task, it is about the team delivering fully functional slices of the product. When a slice fails, the team fails. Working together when designing components will help find an optimal overall solution instead of many optimized sub-solutions that need to be glued together at a later stage. For this to work you need an environment that emits immediate feedback on the complete solution, and this is where build, test and deployment automation come into play. Whenever a developer checks in code, the code is added to the complete codebase (not just the developer's laptop) and is tested, deployed and verified in the actual runtime environment as well. Working with full slices gives a team the opportunity to experiment as a whole and start doing the right things.


Agile Principle 12: At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

To improve is to change; to be perfect is to change often. Self-learning teams and adjusting to new realities are key for Agile teams. However, in many organizations teams remain shielded from important transmitters of feedback like customer (usage), runtime (operational), test results (quality) and so on.

The concept of Continuous Delivery is largely based upon receiving reliable feedback and using that feedback to improve. It is all about enabling the team to do the right things and do things right. An important aspect here is that information should not become biased. In order to steer correctly, actual measurement data needs to be accumulated and represented in an understandable way to the team. Automation of the Software Delivery process allows the organization to gather real data, perform real measurements on real events and act accordingly. This is how one can start to act on reality instead of hypothesis.


Maybe this article starts to sound a bit like Steve Ballmer's mantra for developers (oh my goshhhh), but then for "Automation"; just give it five minutes... You work Agile, but are your customers really seeing the difference? Do they now have a better product, delivered faster? If the answer is no, what could help you with this? Instantly?


Michiel Sens.

Bringing Agile to the Next Level

Mon, 11/02/2015 - 22:22

I finished my last post with the statement that Agile will be applied on a much wider scale in the near future: within governmental organizations, industry, startups, on a personal level, you name it. But how? In my next posts I will dive deep into this exciting story lying in front of us, in five steps:

Blogpost/Step I: Creating Awareness & Distributing Agile Knowledge
Change is a chance, not a threat. Understanding and applying the Agile mindset and toolsets will help everyone ride the wave of change with more pleasure and success. This is the main reason why I've joined initiatives like Nederland Kantelt, EduScrum, Wikispeed and Delft University's D.R.E.A.M. Hall.

Blogpost/Step II: Fit Agile for Purpose
The Agile Manifesto was originally written for software. Lots of variants of the manifesto have emerged over the last couple of years, serving different sectors and products. This is a good thing as long as the core values of the agile manifesto are respected.

However, agile is not applicable to everything. For example, Boeing will never apply Scrum directly for producing critical systems. They're applying Scrum for less critical parts and R&D processes. For determining the right approach they use the Cynefin framework. In this post I will explain this framework, making it a lot easier to see where you could apply Agile and where you should be careful.

Blogpost/Step III: Creating a Credible Purpose or "Why"
You can implement a new framework or organization, hire the brightest minds and have loads of capital; in the end it all boils down to real passion and belief. Every purpose should be spot on in hitting the center of the Golden Circle. But how do you create this?

Blogpost/Step IV: Breaking the Status Quo and Igniting Entrepreneurship
Many corporate organizations are busy implementing or have implemented existing frameworks like SAFe or successful Agile models from companies like Netflix and Spotify. But the culture change that goes with it is the most important step. How to spark a startup mentality in your organization? How to create real autonomy?

Blogpost/Step V: Creating Organic Organizations
Many Agile implementations do not transform organizations into being intrinsically Agile. To enable this, organizations should evolve organically, like Holacracy. They will become stronger and stronger through setbacks and uncertain circumstances. Organic organizations will be more resilient and anti-fragile. In fact, it’s exactly how nature works. But how can you work towards this ideal situation?

Dancing with GetKanban (Using POLCA)

Mon, 11/02/2015 - 12:59

Very recently POLCA got some attention on Twitter. I explained the potential and application of POLCA to knowledge work in my blog 'Squeeze more out of kanban with POLCA!' [Rij11] from 4 years ago.

In this blog the GetKanban [GetKanban] game is played by following the initial 'standard' rules for handling Work in Progress (WiP) limits, and then by changing the rules of the game inspired by POLCA (see [POLCA]).

The results show an equal throughput between POLCA and non-overlapping WiP limits, with a smaller inventory size when using the POLCA way of approaching WiP limits.

First a short introduction to the GetKanban game is given and a description of the set-up together with the basic results.

Second a brief introduction to POLCA is given and the change of rules in the game is explained. Thirdly, the set-up of the game using POLCA and the results are discussed.

Finally, a few words are spent on the team's utilization.

Simulation: GetKanban Game

The set-up with standard WiP limits is shown below. The focus is on a basic simulation of the complete 24 project days of only the regular work items. The expedite, fixed delivery date, and intangible items are left out. In addition, the events described on the event cards are ignored. The reason is to get a 'clean' simulation showing the effect of applying the WiP limits in a different manner.


Other policies taken from the game: a billing cycle of 3 days, replenishment of the 'Ready' column is allowed only at the end of project days 9, 12, 15, 18, and 21.

The result of running the game at the end of day 24 is shown below.

The picture shows the state of the board, lead time distribution diagram, control chart, and cumulative flow diagram.

From these it can be inferred that (a) 10 items are in progress, (b) throughput is 25 items in 24 days, (c) median of 9 days for the lead time of items from Ready to Deployed.


Interestingly, the control chart (middle chart) shows the average lead time dropping to 5-6 days in the last three days of the simulation. Since the game starts at day 9, this shows that it takes 12 days before the system settles in a new stable state with an average lead time of 5-6 days, compared to the 9 days at the beginning.

POLCA: New Rules

In POLCA (see [POLCA]) the essence is to make the WiP limit overlap. The 'O' of POLCA stands for 'Overlapping':

POLCA - Paired Overlapping Loops of Cards with Authorization

One of the characteristics differentiating POLCA from e.g. kanban based systems is that it is a combination of push & pull: 'Push when you know you can pull'.

Setting WiP limits to support pushing work when you know it can subsequently be pulled. The set-up in the game for POLCA is as follows:


For clarity the loops are indicated in the 'expedite' row.

How do the limits work? Two additional rules are introduced:

Rule 1)
In the columns associated with each loop a limit on the number of work items is set. E.g. the columns 'Development - Done' and 'Test' can only accommodate a maximum of 3 cards. Likewise, the columns underneath the blue and red loops have a limit of 4 each.

Rule 2)
Work can only be pulled if a) the loop has capacity, i.e. has fewer cards than the limit, and b) the next, overlapping loop also has capacity.

These are illustrated with a number of examples that typically occur during the game:

Example 1: Four items in the 'Development - Done' column
No items are allowed to be pulled in 'Ready' because there is no capacity available in the blue loop (Rule 2)

Example 2: Two items in 'Test' & two items in 'Analysis - Done'
One item can be pulled into 'Development - In Progress' (Rule 1 and Rule 2).
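For readers who like to see the two rules as code, below is a minimal sketch in Swift (purely illustrative; the loop limits and card counts are assumptions based on the board described above) of the pull-authorization check behind Rule 1 and Rule 2:

struct Loop {
    let limit: Int   // Rule 1: maximum number of cards in the columns under this loop
    var cards: Int   // cards currently in those columns

    var hasCapacity: Bool { return cards < limit }
}

// Rule 2: pulling into a loop is only authorized when that loop has capacity
// and the next, overlapping loop also has capacity.
func canPull(into loop: Loop, overlappingWith next: Loop?) -> Bool {
    return loop.hasCapacity && (next?.hasCapacity ?? true)
}

// If the overlapping loop is already full, nothing may be pulled into this loop,
// even though the loop itself still has room.
let fullLoop = Loop(limit: 3, cards: 3)
let thisLoop = Loop(limit: 4, cards: 2)
canPull(into: thisLoop, overlappingWith: fullLoop)                  // false
canPull(into: thisLoop, overlappingWith: Loop(limit: 3, cards: 1))  // true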

Results for POLCA

The main results of running the game for 24 days with the above rules are shown in the charts below.

The Control Chart shows that it takes roughly 6 days for all existing items to flow out of the system. The effect of the new rules (POLCA style WiP limits) is seen starting from day 15.

Lead Time
On average the charts show a lead time of 3 days, starting from day 18. This is also clearly visible on the lead time distribution chart and on the narrow cumulative flow diagram.

The number of items produced by the project is 24 items in 24 days. This is roughly equal to the throughput measured using the standard set of rules.

Work In Progress
The total number of items in progress is only 3 to 4 items. This is less than half of the items seen in the simulation run using the standard set of rules.


Note: The cumulative flow diagram clearly shows the 3-day billing cycle and replenishment (step-wise increments of the black and purple lines).


As described in my blog 'One change at a time' [Rij15], getting feedback on improvements/changes that affect the flow of the system takes some time, because the system first needs to settle into a new stable state.

With POLCA it is expected that this learning cycle can be shortened. In running the game the control charts show that it takes approximately 12 days before the system reaches the stable state, whereas with the POLCA set of rules this is reached in half the time.

Results for POLCA - Continuous Replenishment

As described above, until now we have adhered to the billing cycle of 3 days, which also allows for replenishment every 3 days.

What happens if replenishment is allowed whenever possible? The results are shown in the charts below.


The cumulative flow diagram shows the same throughput, namely 24 items over a period of 24 days. Work in progress is larger because it is pulled in earlier instead of at the end of every third day.

What is interesting is that the Control Chart shows a large variation in lead time: from 3 to 6 days. What I noted while playing the game is that at regular times 3 to 4 items are allowed to be pulled into 'Ready'. These would sit for some time in 'Ready' and then suddenly be completed all the way to 'Ready for Deployment'. Then another batch of 3 to 4 items is pulled into 'Ready'.
This behavior is corroborated by the Control Chart (staircase pattern). The larger variation is shown in the Lead Time Distribution Chart.

What is the reason for this? My guess is that the limit of 4 on the red loop is too large. When replenishment was only allowed at days 9, 12, 15, ... this basically meant a lower limit for the red loop.
Tuning the limits is important for establishing a certain cadence. Luckily this behavior can be seen in the Control Chart.


In the GetKanban game specialists in the team are represented by colored dice: green for testers, blue for developers, and red for analysts. Effort spent is simulated by throwing the dice. Besides spending the available effort in their own speciality, it can also be spent in other specialities, in which case the effort to spend is reduced.

During the game it may happen that utilization is less than 100%:

  1. Not spending effort in the speciality, e.g. assigning developers to do test work.
  2. No work item to spend the effort on because of WiP limits (not allowed to pull work).

The picture below depicts the utilization as happened during the game: on average a utilization of 80%.



In this blog I have shown how the POLCA style of setting WiP limits works, how overlapping loops of limits help in pulling work fast through the system, and the positive effect on the team's learning cycle.

In summary, POLCA allows for

  • Shorter lead times
  • Lower work in progress, enabling a faster learning cycle

Tuning of the loop limits seems to be important for establishing a regular cadence. A 'staircase' pattern in the Control Chart is a strong indication that loop limits are not optimal.


[GetKanban] GetKanban Game: http://getkanban.com

[Rij11] Blog: Squeeze more out of kanban with POLCA!

[Rij15] Blog: One change at a time

[POLCA] POLCA: http://www.business-improvement.eu/qrm/polca_eng.php


The power of map and flatMap of Swift optionals

Sun, 11/01/2015 - 01:22

Until recently, I always felt like I was missing something in Swift. Something that makes working with optionals a lot easier. And just a short while ago I found out that the thing I was missing does already exist. I'm talking about the map and flatMap functions of Swift optionals (not the Array map function). Perhaps it's because they're not mentioned in the optionals sections of the Swift guide and because I haven't seen it in any other samples or tutorials. And after asking around I found out some of my fellow Swift programmers also didn't know about it. Since I find it an amazing Swift feature that makes your Swift code often a lot more elegant I'd like to share my experiences with it.

If you didn't know about the map and flatMap functions either, you should keep on reading. If you did already know about them, I hope to show some good, real and useful samples of their usage that perhaps you didn't think about yet.

What do map and flatMap do?

Let me first give you a brief example of what the functions do. If you're already familiar with this, feel free to skip ahead to the examples.

The map function transforms an optional into another type in case it's not nil, and otherwise it just returns nil. It does this by taking a closure as parameter. Here is a very basic example that you can try in a Swift Playground:

var value: Int? = 2
var newValue = value.map { $0 * 2 }
// newValue is now 4

value = nil
newValue = value.map { $0 * 2 }
// newValue is now nil

At first, this might look odd because we're calling a function on an optional. And don't we always have to unwrap it first? In this case not. That's because the map function is a function of the Optional type and not of the type that is wrapped by the Optional.
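To get a feel for why this works, here is a rough sketch of how such a map could be written on Optional yourself (the real implementation lives in the standard library; myMap is just a hypothetical name to avoid clashing with it):

extension Optional {
    // Transform the wrapped value if there is one, otherwise propagate nil.
    func myMap<U>(f: Wrapped -> U) -> U? {
        switch self {
        case .Some(let wrapped):
            return f(wrapped)
        case .None:
            return nil
        }
    }
}

let doubled = Optional(2).myMap { $0 * 2 } // Optional(4)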

The flatMap function is pretty much the same as map, except that the closure passed to map is not allowed to return nil, while the closure passed to flatMap can return nil. Let's see another basic example:

var value: Double? = 10
var newValue: Double? = value.flatMap { v in
    if v < 5.0 {
        return nil
    }
    return v / 5.0
}
// newValue is now 2

newValue = newValue.flatMap { v in
    if v < 5.0 {
        return nil
    }
    return v / 5.0
}
// now it's nil

If we would try to use map instead of flatMap in this case, it would not compile.

When to use it?

In many cases where you use a ternary operator to check if an optional is not nil, and then return some value if it's not nil and otherwise return nil, it's probably better to use one of the map functions. If you recognise the following pattern, you might want to go through your code and make some changes:

var value: Int? = 10
var newValue = value != nil ? value! + 10 : nil 
// or the other way around:
var otherValue = value == nil ? nil : value! + 10

The force unwrapping should already indicate that something is not quite right. So instead use the map function shown previously.

To avoid the force unwrapping, you might have used a simple if let or guard statement instead:

func addTen(value: Int?) -> Int? {
  if let value = value {
    return value + 10
  }
  return nil
}

func addTwenty(value: Int?) -> Int? {
  guard let value = value else {
    return nil
  }
  return value + 20
}

This still does exactly the same as the ternary operator and thus is better written with a map function.
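For instance, the addTen function above can be reduced to a single expression (a sketch of the same logic using map):

func addTen(value: Int?) -> Int? {
  return value.map { $0 + 10 }
}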

Useful real examples of using the map functions

Now let's see some real examples of when you can use the map functions in a smart way that you might not immediately think of. You get the most out of it when you can immediately pass in an existing function that takes the type wrapped by the optional as its only parameter. In all of the examples below I will first show it without a map function and then again rewritten with a map function.

Date formatting

Without map:

var date: NSDate? = ...
var formatted: String? = date == nil ? nil : NSDateFormatter().stringFromDate(date!)

With map:

var date: NSDate? = ...
var formatted = date.map(NSDateFormatter().stringFromDate)
Segue from cell in UITableView

Without map:

func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if let cell = sender as? UITableViewCell, let indexPath = tableView.indexPathForCell(cell) {
    (segue.destinationViewController as! MyViewController).item = items[indexPath.row]
  }
}

With map:

func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if let indexPath = (sender as? UITableViewCell).flatMap(tableView.indexPathForCell) {
    (segue.destinationViewController as! MyViewController).item = items[indexPath.row]
  }
}
Values in String literals

Without map:

func ageToString(age: Int?) -> String {
    return age == nil ? "Unknown age" : "She is \(age!) years old"
}

With map:

func ageToString(age: Int?) -> String {
    return age.map { "She is \($0) years old" } ?? "Unknown age"
}


Localized Strings

Without map:

let label = UILabel()
func updateLabel(value: String?) {
  if let value = value {
    label.text = String.localizedStringWithFormat(
      NSLocalizedString("value %@", comment: ""), value)
  } else {
    label.text = nil
  }
}

With map:

let label = UILabel()
func updateLabel(value: String?) {
  label.text = value.map {
    String.localizedStringWithFormat(NSLocalizedString("value %@", comment: ""), $0)
  }
}
Enum with rawValue from optional with default

Without map:

enum State: String {
    case Default = ""
    case Cancelled = "CANCELLED"

    static func parseState(state: String?) -> State {
        guard let state = state else {
            return .Default
        }
        return State(rawValue: state) ?? .Default
    }
}

With map:

enum State: String {
    case Default = ""
    case Cancelled = "CANCELLED"

    static func parseState(state: String?) -> State {
        return state.flatMap(State.init) ?? .Default
    }
}
Find item in Array

With Item like:

struct Item {
    let identifier: String
    let value: String
}

let items: [Item]

Without map:

func find(identifier: String) -> Item? {
    if let index = items.indexOf({$0.identifier == identifier}) {
        return items[index]
    }
    return nil
}

With map:

func find(identifier: String) -> Item? {
    return items.indexOf({$0.identifier == identifier}).map({items[$0]})
}
Constructing objects with json like dictionaries

With a struct (or class) like:

struct Person {
    let firstName: String
    let lastName: String

    init?(json: [String: AnyObject]) {
        if let firstName = json["firstName"] as? String, let lastName = json["lastName"] as? String {
            self.firstName = firstName
            self.lastName = lastName
        } else {
            return nil
        }
    }
}

Without map:

func createPerson(json: [String: AnyObject]) -> Person? {
    if let personJson = json["person"] as? [String: AnyObject] {
        return Person(json: personJson)
    }
    return nil
}

With map:

func createPerson(json: [String: AnyObject]) -> Person? {
    return (json["person"] as? [String: AnyObject]).flatMap(Person.init)
}

The map and flatMap functions can be incredibly powerful and make your code more elegant. Hopefully these examples help you spot the situations where using them will really benefit your code.

Please let me know in the comments if you have similar smart examples of map and flatMap usages and I will add them to the list.

Android Resource Configuration override and Large Text Mode

Sat, 10/31/2015 - 13:15

In Android, the resource Configuration dictates what resources and assets are selected. The system populates a default configuration to match your device and settings (screen size, device orientation, language). Sometimes, you need to deviate from these defaults. Since API 17 you can use applyOverrideConfiguration(Configuration) to specify an alternative resource config. The normal place to do so is in the attachBaseContext(Context) method of your Activity.

public class MainActivity extends Activity {

    @Override
    protected void attachBaseContext(Context newBase) {
        super.attachBaseContext(newBase);

        final Configuration override = new Configuration();
        override.locale = new Locale("nl", "NL");
        applyOverrideConfiguration(override);
    }
}



Here's what that looks like:


Unfortunately, there's a catch.

Android has a "Large Text" setting in its accessibility options (and in some cases a different text size setting in the display options). If you use the overrideConfiguration method to set your own resource configurtation, you will wipe out the Large Text preference, hurting your accessibilty support. This problem is easily overlooked, and luckily, easily fixed.


The large fonts setting works by changing the Configuration.fontScale attribute, which is a public float. This works together with the scaled density-independent pixels (sp) that you use to define textSize attributes. All sp dimensions have this fontScale multiplier applied. My Nexus 5 has two font size settings, normal at 1.0 and large at 1.3. The Nexus 5 emulator image has four, and many Samsung devices have seven different font sizes you can choose from.

When you set the override configuration, the new Configuration object has its fontScale set to 1.0f, thereby breaking the large fonts mode. To fix this problem, you simply have to copy the current fontScale value from the base context. This is best done using the copy constructor, which will also account for any other properties that come with the same issue.

public class MainActivity extends Activity {

    @Override
    protected void attachBaseContext(Context newBase) {
        super.attachBaseContext(newBase);

        // Copy the original configuration so its fontScale isn't lost.
        final Configuration override = new Configuration(
                newBase.getResources().getConfiguration());
        override.locale = new Locale("nl", "NL");
        applyOverrideConfiguration(override);

        // BTW: You can also access the fontScale value using Settings.System:
        // Settings.System.getFloat(getContentResolver(), Settings.System.FONT_SCALE, 1.0f);
    }
}



The app now works as intended, with accessibility support intact.


Long story short: when you use applyOverrideConfiguration, always test your app with the Large Fonts accessibility setting enabled. Be sure to copy the original Configuration in your new Configuration constructor, or use the Settings.System.FONT_SCALE property to retrieve the font scale separately.

Angular reverse URL construction

Thu, 10/29/2015 - 09:02

Let's discuss a problem you probably aren't aware you are having in your web app:

Hardcoded URLs

These hardcoded, static URLs are simple at first, but the code is error prone and increases the app complexity. Instead what you should be doing is creating dynamic URLs by means of reverse URL construction. I have grown up using the Django web framework which extensively uses reverse routing and I love it. Let us look at how this simple system improves your coding and I'm sure you'll love it too.

We'll look at URL construction using an AngularJS example. Even if you don't use Angular, the general idea behind reverse URLs should become apparent.

The stage

Say we have simple web application with a contact page, a list of users and user pages:

 /contact -> contact.html
 /users -> users.html
 /user/foo -> user.html for user "foo"

We will reference these routes directly in our page using the `<a>` tag and occasionally we might even require a URL inside JavaScript. A link to the contact page is easily created using: `<a href="/contact">Contact us</a>`. There are however a few drawbacks to this approach.

The first issue is that the URL is now defined in two places, the route configuration and the anchor tag. If you ever decide to change the URL to `contact-us`, all references should be updated as well. As the app grows so do all the duplicate references and you'll end up changing a lot of them. Chances are that you will miss one and cause a bad link.

More problems arise as soon as the URL contains a dynamic element. Let's say the page behind `/users` lists all users in our application as follows:

<ul ng-repeat="user in users">
  <li><a ng-href="/user/{{user.name}}"></a></li>
</ul>

Because the URL is constructed using simple interpolation we can't check whether the resulting URL is valid. Let's say the app router matches the URL on the pattern `^/user/([a-z]+)$`. This would mean the user's name may only consist of letters, for example "foo". If the name contains an integer, say "foo2015", the URL won't be recognized by our router. So we allow bad URLs to be constructed.

So hardcoded URLs aren't great, what is?

Reverse URL resolution

The solution has many names: URL construction, reverse URL resolution and reverse routing. The essence of this method is that every route is identifiable by a unique element. Reversing the URL is done by referencing the route and supplying parameters. The route itself is stored in a single location, the router. This route is used both for routing a request and constructing a URL.

We will add reverse routing to the standard Angular "$routeProvider". A small project angular-reverse-url adds the reverse resolution capability. We can use a filter to build URLs by referencing either the route name or its controller name. In this example we will identify the route using the route name. But bear in mind that the controller approach would work just as well (given a route is uniquely defined by a controller).

Now let's see how reverse URL construction is done. We will use the example setup as follows:

$routeProvider
    .when('/contact', {
      template: 'contact.html',
      name: 'contact'
    })
    .when('/users', {
      template: '/users.html',
      name: 'user-list'
    })
    .when('/user/:id/:name', {
      template: '/users.html',
      name: 'user-detail'
    });
Constructing static URLs

Our web app needs a navigation bar. Inside our navigation we can link to the contact page using:

    <li><a ng-href="{{ 'contact' | reverseUrl }}">Contact us</a></li>

The reverseUrl filter simply uses the 'contact' name to construct the URL from the route defined in the $routeProvider.

Parameter based URL generation

Creating plain URLs without any parameters is hardly interesting. The benefits of reverse URL resolution become apparent when we start introducing route arguments. We can list the user pages in our application using:

<li ng-repeat="user in users">
  <a href="{{ 'user-detail' | reverseUrl:{id: user.id, name: user.name} }}">{{user.name}}</a>
</li>

Clean and simple!

There is no duplicate route definition and parameters are conveniently mapped by name. We explicitly map the parameters in this example, but we can also pass an object to the filter directly, greatly simplifying the statement. If you prefer, there is also the option to provide positional arguments:

<a href="{{ 'user-detail' | reverseUrl:[user.id, user.name] }}">{{user.name}}</a>
In code URL resolution

Occasionally the need arises to compose a URL directly in code. As `reverseUrl` is an Angular filter, we can simply call it directly inside our JavaScript:

$filter('reverseUrl')('user-detail', { id: 1, name: 1 })

This works just fine, though I would suggest wrapping the filter in a service if you often construct URLs in code.

Other solutions

The library presented here offers a very simple yet elegant way to do reverse URL resolution. It should be relatively easy to extend with extra functionality such as parameter type checking. There are also quite a lot of other libraries that offer reverse URL construction. We'll look at the new Angular router and the popular ui-router.

Angular >= 1.5

The new Angular 1.5 router offers reverse URL resolution out of the box. If you've made the leap to the new component based routing, reverse resolution should be a breeze. There are some minor syntactical differences:


AppController.$routeConfig = [
  { path: '/user/:id/:name', component: 'user-detail' }
];

Template usage

<a ng-link="user-detail({ id: 1, name:'foo' })">Foo's page</a>
UI router

The wildly popular Angular ui-router also offers reverse routing. Chances are if you are using ui-router, you are using this reverse resolution of URLs already. If you are using ui-router and are still hardcoding URLs, switch today! Setup is easy:


$stateProvider
  .state('user-detail', {
    url: "/user/:id/:name",
    controller: "UserDetailCtrl",
    templateUrl: "views/user.html"
  });

Template usage

<a ui-sref="user-detail({ id: 1, name: 'foo' })">Foo's page</a>
Stop hard coding URLs

With so many options, there really is no good reason to hard code URLs in your application. From now on, all URLs you create in any sizable project should be using a reverse URL resolution system.

Reverse URL construction is really simple, it keeps your routes in one place and standardizes URL generation in your app.

If you want to take a closer look at the Angular example code, see the full example: reverse-routing-Angular-example

Learning about test automation with Lego

Mon, 10/26/2015 - 15:44

“Hold on, did you say that I can learn about test automation by playing with Lego? Shut up and take my money!” Yes, I am indeed saying that you can. It will cost you a couple hundred euros, because Lego isn’t cheap, especially the Mindstorm EV3 Lego. It turns out that Lego robots eat a lot of AA batteries, so buy a couple of packs of these as well. On the software side you need to have a computer with a Java development environment and an IDE of your choice (the free edition of IntelliJ IDEA will do).

“Okay, hold on a second. Why do you need Java? I thought Lego had its own programming language?” Yes, that’s true. Originally, Lego provides you with their own visual programming language. I mean, the audience for the EV3 is actually kids, but it will be our little secret. Because Lego is awesome, even for adults. Some hero made a Java library that can communicate with the EV3 hardware, LeJos, so you can do more awesome stuff with it. Another hero dedicated a whole website to his Mindstorm projects, including instructions on how to build them.

Starting the project

So, on a sunny innovation day in August at Xebia, Erik Zeedijk and I started our own Lego project. The goal was to make something cool and relevant for Testworks Conf. We decided to go for The Ultimate Machine, also known as The Most Useless Machine.  It took us about three hours to assemble the Lego. If you’re not familiar with the Useless Machine, check this video below. 

Somehow, we had to combine Lego with test automation. We decided to use the Cucumber framework and write acceptance tests in it. That way, we could also use that to figure out what options we wanted to give the machine (sort of a requirements phase...what did I just say!?). The Ultimate Machine can do more than just turn off the switch, as you could tell if you watched the above video. It can detect when a hand is hovering above the switch and that can trigger all kinds of actions: driving away to trick the human, hiding the switch to trick the human, etc. With Acceptance Test Driven Development, we could write out all these actions in tests and use those tests to drive our coding. In that sense, we were also using Test Driven Development. In the picture below is an example of a Cucumber feature file that we used.


The idea sounded really simple, but executing it was a bit harder. We made a conceptual mistake at first. To run our tests, we first coded them in a way that still required a human (someone who turned the switch on). Also, the tests were testing the Lego hardware too (the sensors) and not our own code. The Lego hardware has quite some bugs in it, we noticed. Some of the sensors aren’t really accurate in the values they return. After some frustration and thinking we found a way to solve our problem. In the end, the solution is pretty elegant and in retrospect I face-palm because of my own inability to see it earlier.

We had to mock the Lego hardware (the infrared sensor and the motors), because it was unreliable and we wanted to test our own code. We also had to mock the human out of the tests. This meant that we didn’t even need the Lego robot anymore to run our tests. We decided to use Mockito for our mock setup. In the end, the setup looked like this. 

robot setup

The LeJos Java library uses a couple of concepts that are important to grasp. An arbitrator decides which behavior should run. All the behaviors are put in a behaviorList. Inside each behavior is a boolean wantControl that becomes 'true' when certain conditions arise. See the picture below for an example 'wantControl' in the DriveBehavior class.


Then the behavior starts to run and when it is finished it returns 'idle = true'. The arbitrator then picks a new behavior to run. Because some behaviors had the same conditions for 'wantControl' we had to think of a way to prevent the same behavior from triggering all the time. In each behavior we put a boolean chanceOfBehaviorHappening and we assigned a chance to it. After a bit of tweaking we had the robot running the way we liked it.

The tests were reliable after this refactoring and super fast. The test code was neatly separated from the code that implemented the robot’s behaviour. In addition, you could start the real Lego robot and play with it. This is a picture of our finished robot. 

Lego Ultimate Machine

We deliberately didn’t implement all the behaviors we identified, because our goal was to get attendees of TestWorks Conf to code for our robot. This little project has taught both Erik and me more about writing good Cucumber feature files, TDD and programming. We are both not really Java experts, so this project was a great way of learning for both of us. I certainly improved my understanding of object-oriented programming. But even if you are a seasoned programmer, this project could be nice to increase your understanding of Cucumber, TDD or ATDD. So, convince your boss to shell out a couple hundred to buy this robot for you, start learning and have fun.

FYI: I will take the robot with me to the Agile Testing Days, so if you are reading this and going there too, look me up and have a go at coding.

Model updates in Presentation Controls

Mon, 10/26/2015 - 11:15

In this post I'll explain how to deal with updates when you're using Presentation Controls in iOS. It's a continuation of my previous post in which I described how you can use Presentation Controls instead of MVVM or in combination with MVVM.

The previous post didn't deal with any updates. But most often the things displayed on screen can change. This can happen because new data is fetched from a server, through user interaction or maybe automatically over time. To make that work, we need to inform our Presentation Controls of any updates of our model objects.

Let's use the Trip from the previous post again:

struct Trip {

    let departure: NSDate
    let arrival: NSDate
    let duration: NSTimeInterval

    var actualDeparture: NSDate
    var delay: NSTimeInterval {
        return self.actualDeparture.timeIntervalSinceDate(self.departure)
    }
    var delayed: Bool {
        return delay > 0
    }

    init(departure: NSDate, arrival: NSDate, actualDeparture: NSDate? = nil) {
        self.departure = departure
        self.arrival = arrival
        self.actualDeparture = actualDeparture ?? departure

        // calculations
        duration = self.arrival.timeIntervalSinceDate(self.departure)
    }
}
Instead of calculating and setting the delay and delayed properties in the init we changed them into computed properties. That's because we'll change the value of the actualDeparture property in the next examples and want to display the new value of the delay property as well.

So how do we get notified of changes within Trip? A nice approach to do that is through binding. You could use ReactiveCocoa to do that but to keep things simple in this post I'll use a class Dynamic that was introduced in a post about Bindings, Generics, Swift and MVVM by Srdan Rasic (many things in my post are inspired by the things he writes so make sure to read his great post). The Dynamic looks as follows:

class Dynamic<T> {
  typealias Listener = T -> Void
  var listener: Listener?

  func bind(listener: Listener?) {
    self.listener = listener
  }

  func bindAndFire(listener: Listener?) {
    self.listener = listener
    listener?(value)
  }

  var value: T {
    didSet {
      listener?(value)
    }
  }

  init(_ v: T) {
    value = v
  }
}

This allows us to register a listener which is informed of any change of the value. A quick example of its usage:

let delay = Dynamic("+5 minutes")
delay.bindAndFire {
    print("Delay: \($0)")
}

delay.value = "+6 minutes" // will print 'Delay: +6 minutes'

Our Presentation Control was using a TripViewViewModel class to get all the values that it had to display in our view. These properties were all simple constants with types such as String and Bool that would never change. We can replace the properties that can change with a Dynamic property.

In reality we would probably make all properties dynamic and fetch a new Trip from our server and use that to set all the values of all Dynamic properties, but in our example we'll only change the actualDeparture of the Trip and create dynamic properties for the delay and delayed properties. This will allow you to see exactly what is happening later on.

Our new TripViewViewModel now looks like this:

class TripViewViewModel {

    let date: String
    let departure: String
    let arrival: String
    let duration: String

    private static let durationShortFormatter: NSDateComponentsFormatter = {
        let durationFormatter = NSDateComponentsFormatter()
        durationFormatter.allowedUnits = [.Hour, .Minute]
        durationFormatter.unitsStyle = .Short
        return durationFormatter
    }()

    private static let durationFullFormatter: NSDateComponentsFormatter = {
        let durationFormatter = NSDateComponentsFormatter()
        durationFormatter.allowedUnits = [.Hour, .Minute]
        durationFormatter.unitsStyle = .Full
        return durationFormatter
    }()

    let delay: Dynamic<String?>
    let delayed: Dynamic<Bool>

    var trip: Trip

    init(_ trip: Trip) {
        self.trip = trip

        date = NSDateFormatter.localizedStringFromDate(trip.departure, dateStyle: .ShortStyle, timeStyle: .NoStyle)
        departure = NSDateFormatter.localizedStringFromDate(trip.departure, dateStyle: .NoStyle, timeStyle: .ShortStyle)
        arrival = NSDateFormatter.localizedStringFromDate(trip.arrival, dateStyle: .NoStyle, timeStyle: .ShortStyle)

        duration = TripViewViewModel.durationShortFormatter.stringFromTimeInterval(trip.duration)!

        delay = Dynamic(trip.delayString)
        delayed = Dynamic(trip.delayed)
    }

    func changeActualDeparture(delta: NSTimeInterval) {
        trip.actualDeparture = NSDate(timeInterval: delta, sinceDate: trip.actualDeparture)

        self.delay.value = trip.delayString
        self.delayed.value = trip.delayed
    }
}

extension Trip {

    private var delayString: String? {
        return delayed ? String.localizedStringWithFormat(NSLocalizedString("%@ delay", comment: "Show the delay"), TripViewViewModel.durationFullFormatter.stringFromTimeInterval(delay)!) : nil
    }
}

Using the changeActualDeparture method we can increase or decrease the time of trip.actualDeparture. Since the delay and delayed properties on trip are now computed properties their returned values will be updated as well. We use them to set new values on the Dynamic delay and delayed properties of our TripViewViewModel. Also the logic to format the delay String has moved into an extension on Trip to avoid duplication of code.
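To see these dynamic properties in action, here is a small sketch using the types defined above (the dates are arbitrary and purely illustrative):

let trip = Trip(departure: NSDate(), arrival: NSDate(timeIntervalSinceNow: 3600))
let model = TripViewViewModel(trip)

model.delay.bindAndFire { delay in
    print("Delay: \(delay ?? "none")") // fires immediately: "Delay: none"
}

model.changeActualDeparture(60) // the listener fires again, now with a one minute delay string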

All we have to do now to get this working again is to create bindings in the TripPresentationControl:

class TripPresentationControl: NSObject {

    @IBOutlet weak var dateLabel: UILabel!
    @IBOutlet weak var departureTimeLabel: UILabel!
    @IBOutlet weak var arrivalTimeLabel: UILabel!
    @IBOutlet weak var durationLabel: UILabel!
    @IBOutlet weak var delayLabel: UILabel!

    var tripModel: TripViewViewModel! {
        didSet {
            dateLabel.text = tripModel.date
            departureTimeLabel.text = tripModel.departure
            arrivalTimeLabel.text  = tripModel.arrival
            durationLabel.text = tripModel.duration

            tripModel.delay.bindAndFire { [unowned self] in
                self.delayLabel.text = $0
            }

            tripModel.delayed.bindAndFire { [unowned self] delayed in
                self.delayLabel.hidden = !delayed
                self.departureTimeLabel.textColor = delayed ? .redColor() : UIColor(red: 0, green: 0, blue: 0.4, alpha: 1.0)
            }
        }
    }
}

Even though everything compiles again, we're not done yet. We still need a way to change the delay. We'll do that through some simple user interaction and add two buttons to our view: one to increase the delay by one minute and one to decrease it. Handling of the button taps goes into the normal view controller since we don't want to make our Presentation Control responsible for user interaction. Our final view controller now looks as follows:

class ViewController: UIViewController {

    @IBOutlet var tripPresentationControl: TripPresentationControl!

    let tripModel = TripViewViewModel(Trip(departure: NSDate(timeIntervalSince1970: 1444396193), arrival: NSDate(timeIntervalSince1970: 1444397193), actualDeparture: NSDate(timeIntervalSince1970: 1444396493)))

    override func viewDidLoad() {
        super.viewDidLoad()

        tripPresentationControl.tripModel = tripModel
    }

    @IBAction func increaseDelay(sender: AnyObject) {
        tripModel.changeActualDeparture(60) // one minute more delay
    }

    @IBAction func decreaseDelay(sender: AnyObject) {
        tripModel.changeActualDeparture(-60) // one minute less delay
    }
}

We now have an elegant way of updating the view when we tap a button. Our view controller communicates a logical change of the model to the TripViewViewModel, which in turn notifies the TripPresentationControl about a change of data, which in turn updates the UI. This way the Presentation Control doesn't need to know anything about user interaction, and our view controller doesn't need to know which UI components it needs to change after user interaction.

And the result:

Hopefully this post will give you a better understanding about how to use Presentation Controls and MVVM. As I mentioned in my previous post, I recommend you to read Introduction to MVVM by Ash Furrow and From MVC to MVVM in Swift by Srdan Rasic as well as his follow up post mentioned at the beginning of this post.

And of course make sure to join the do {iOS} Conference in Amsterdam the 9th of November, 2015. Here Natasha "the Robot" Murashev will be giving a talk about Protocol-oriented MVVM.

Keeping an eye on your Amazon EC2 firewall rules

Sun, 10/25/2015 - 14:56

Amazon AWS makes it really easy for anybody to create and update firewall rules that provide access to the virtual machines inside AWS. Within seconds you can add your own IP address so you can work from home or the office. However, it is also very easy to forget to remove them once you are finished. The utility aws-sg-revoker will help you maintain your firewall rules.

aws-sg-revoker inspects all your inbound access permissions and compares them with the public IP addresses of the machines in your AWS account. For grants to IP addresses not found in your account, it will generate an aws CLI revoke command. But do not be afraid: it only generates the commands, it does not execute them directly. You may want to investigate before removal. Follow these 4 steps to safeguard your account!

step 1. Investigate

First run the following command to generate a list of all the IP address ranges that are referenced but not in your account.

aws-sg-revoker -l

x.y.z.
hostname.com.
a.b.c.

You may find that you have to install jq and the aws CLI :-)

step 2. Exclude known addresses

Exclude the IP addresses that are OK. These addresses are added as regular expressions.

aws-sg-revoker -l -w 1\.2\.3\.4 -w 8\.9\.10\.11/16
step 3. Generate revoke commands

Once you are happy, you can generate the revoke commands:

aws-sg-revoker -w 1\.2\.3\.4 -w 4\.5\.6\.7 -w 8\.9\.10\.11/16

aws ec2 revoke-security-group-ingress --group-id sg-aaaaaaaa --port 22-22 --protocol tcp --cidr # revoke from sg blablbsdf
aws ec2 revoke-security-group-ingress --group-id sg-aaaaaaaa --port 9200-9200 --protocol tcp --cidr # revoke from sg blablbsdf
aws ec2 revoke-security-group-ingress --group-id sg-aaaaaaaa --port 9080-9080 --protocol tcp --cidr # revoke from sg blablbsdf
aws ec2 revoke-security-group-ingress --group-id sg-bbbbbbbb --protocol -1 --cidr # revoke from sg sg-1
aws ec2 revoke-security-group-ingress --group-id sg-bbbbbbbb --protocol -1 --cidr # revoke from sg sg-3
step 4. Execute!

If the revokes look ok, you can execute them by piping them to a shell:

aws-sg-revoker -w 1\.2\.3\.4 -w 8\.9\.10\.11/16 | tee revoked.log | bash

This utility makes it easy for you to regularly inspect and maintain your firewall rules and keep your AWS resources safe!

Agile, but still really not Agile? What Pipeline Automation can do for you. Part 2.

Thu, 10/22/2015 - 13:51

Organizations are adopting Agile, and teams are delivering on a feature-by-feature basis, producing business value at the end of every sprint. Quite possibly this is also the case in your organization. But do these features actually reach your customer at the same pace and generate business value straight away? And while we are at it: are you able to actually use feedback from your customer and apply it in the very next sprint?

Possibly your answer is “No”, which I see very often. Many companies have adopted the Agile way of working in their lines of business, but for some reason ‘old problems’ just do not seem to go away...

Hence the question:

“Do you fully capitalize on the benefits provided by working in an Agile manner?”

Straightforward Software Delivery Pipeline automation might help you with that.

In this post I hope to inspire you to think about how Software Development Pipeline automation can help your company to move forward and take the next steps towards becoming a truly Agile company. Not just a company adopting Agile principles, but a company that is really positioned to respond to the ever changing environment that is our marketplace today. To explain this, I take the Agile Manifesto as a starting point and work from there.

In my previous post, I addressed Agile Principles 1 to 4, please read below where I'll explain about how automation can help you for Agile Principles 5 to 8.


Agile Principle 5: Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

This is an important aspect of Agile. People get motivated by acknowledged empowerment, responsibility, ownership and trusted support when performing a job. This is one of the reasons Agile teams often feel so vibrant and dynamic. Still, in many organizations development teams work Agile but “subsequent teams” do not, resulting in mini-waterfalls that slow down your delivery cycle as a whole.

“Environment and the support needed” means that the Agile team should work in a creative and innovative environment where team members can quickly test new features. Where the team can experiment, systems “just work” and “waiting” is not required. The team should be enabled, so to speak, in terms of automation and in terms of innovation. This means that a build should not take hours, a deployment should not take days and the delivery of new infrastructure should not take weeks.

Applying rigorous automation will help you to achieve the fifth objective of the Agile manifesto. There is a bit of a chicken and egg situation here, but I feel it is safe to say that a sloppy, broken, quirky development environment will not help in raising the bar in terms of motivating individuals. Hence "give them the environment and support they need, and trust them to get the job done".


Agile Principle 6: The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

When working Agile, individuals and interactions are valued over the use of processes and tools. When starting a new project, teams should not be hindered by ticketing systems, extensive documentation to explain themselves and long service times. These types of “services” often exist on the boundaries of business units bringing different ‘disciplines’ to the solution.

Although working Agile, many companies still have these boundaries in place. An important aspect of Continuous Delivery is executing work in Product teams dedicated to delivery and/or maintenance of an end-product. These product teams have all required disciplines working together in one and the same team. Operating in this manner alleviates the need for slow tooling & ticketing systems and inspires people to work together and get the job done.

Organizing people as a team working on a product instead of individuals performing a task, which in itself has no meaning, will help you to achieve the sixth objective of the Agile Manifesto. There is not a lot automation can do for you here.


Agile Principle 7: Working software is the primary measure of progress.

Agile aims towards delivering working software at the end of each sprint. For the customer that is basically what counts: working software, which can actually be used. Working software means software without defects. There is no point in delivering broken software at the end of every sprint.

When sending a continuous flow of new functions to the customer, each function should adhere to the required quality level straight away. In terms of quality, new functions might need to be ‘reliable’, ‘secure’, ‘maintainable’, ‘fast’, etc., which all are fully testable properties. Testing these type of properties should be integral part of team activities. One of the principles related to Continuous Delivery addresses this topic through Test automation. Without it, it is not possible to deliver working production-ready software by the end of each sprint.

Proper implementation of test disciplines, fostering a culture of delivering high quality software, testing every single feature, adhering to testing disciplines and applying matching & automated test tooling addresses topics related to the seventh objective of the Agile Manifesto. Supply a test for every function you add to the product and automate this test.


Agile Principle 8: Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

As software complexity grows exponentially, it will become more difficult over time to maintain a constant pace of delivering new features when assembling, deploying, testing or provisioning manually. Humans are simply not made to perform multiple tasks fast, repetitively and consistently over a longer period of time; that is what machines are for!

The eighth Agile principle typically comes down to a concept called ‘flow’. You might have an Agile team in place for creating new software, but what about the flow in the rest of your organization? Should the team wait for requirements to drip through, should it wait for the testers to manually test software, or is it the Operations team that needs to free resources in order to deploy software? To address this, from idea to product, handover moments should be minimized as much as possible and where possible principles of automation should be applied. This brings us back to build automation, test automation, deployment automation and the automation of infrastructure.


Stay tuned for the next post, where I'll address the final four Agile principles of the bunch.


Michiel Sens.

Preparing hands-on conferences: to USB or not to USB

Tue, 10/20/2015 - 10:00

On Friday, October 2nd, Xebia organized the inaugural edition of TestWorks Conf. The conference was born out of the apparent need for a hands-on test automation conference in the Netherlands. Early on, we decided that having a high level of engagement from the participants was key in achieving this. Thus, the idea of making everything hands-on was born. Not only did we have workshops throughout the day, people should also be able to code along with the speakers during talks. This however posed a challenge on the logistical side of things: how to make sure that everyone has the right tooling and code available on their laptops?

Constraints of a possible solution

Just getting all code on people's machines is not sufficient. As we already learned during our open kitchen events, there is always some edge case causing problems on a particular machine. In order to let participants jump straight into the essentials of the workshop, it would need to meet the following requirements:

  • Take at most 10 minutes to be up and running for everyone
  • Require no internet connection
  • Be identical on every machine
  • Not be intrusive on people's machines
Virtual machines vs local installations?

The first decision we had to make was opting for local installations or a virtual machine. Based on the requirements, a local installation would require participants to at least install all the software beforehand, as the list of required software is quite large (this could easily take 60+ minutes). If we were to go this route, we would have to build a custom installer to make sure everyone has all the contents. Having built deployment packages for a Windows domain in the past, I know this takes a lot of time to get right, especially if we would need to support multiple platforms. Going down this route, it's questionable if we could satisfy the final requirement. What happens if software we install overrides specific custom settings a user has made? Will the uninstallers revert this properly? This convinced us that using a VM was the way to go.

Provisioning the virtual machine

In order to have all the contents and configuration of the VM under version control, we decided to provision it using Vagrant. This way we could easily synchronize changes in the VM between the speakers while preparing for the workshops and talks. It also posed a nice dilemma: how far will you go in automating the provisioning? Should you provision application specific settings? Or just set these by hand before exporting the VM? In the end, we decided to have a small list of manual actions:

  • Importing all projects in IntelliJ, so all the required indexing is done
  • Putting the relevant shortcuts on the Ubuntu launcher
  • Installing required IntelliJ plugins
  • Setting the desired Atom theme
Distributing the virtual machine

So, now we have a VM, but how do we get it into everybody's hands? We could ask everyone to provision their own beforehand using Vagrant. However, this would require additional work on the Vagrant scripts (so they're robust enough to be sent out into the wild), and we would need to automate all the manual steps. Secondly, it would require everybody to actually do these preparations. What if 30 people didn't and started downloading roughly 5 GB simultaneously at the start of the conference? This would probably grind the Internet to a halt at the venue.

Because of this, we decided to make an export of the VM image and copy this to a USB thumbdrive, together with installers for VirtualBox for multiple platforms. Every participant would be asked as preparation to install VirtualBox, and would receive the thumbdrive when registering at the conference. The only step left would be to copy all the contents to 180 thumbdrives. No problem right?

Flashing large batches of USB drives

The theory of flashing the USB drives was easy. Get some USB hubs with a lot of ports, plug all the USB drives and flash an image to all the drives. However, practice has proved different.

First of all, what filesystem should we use? Since we're striving for maximal compatibility, FAT32 would be preferred. This however was not feasible, since FAT32 has a file size limit of 4GB, and our VM grew to well over 5GB. This leaves two options: ExFAT or NTFS. ExFAT works by default on OSX and Windows, but requires an additional package to be installed under Linux. NTFS works by default under Windows and Linux, but is readonly under OSX. Since users would not have to write to the drives, NTFS seemed the best choice.

Having to format the initial drive as NTFS, we opted for using a Windows desktop. After creating the first drive, we created an image from this drive which was to be copied to all the remaining drives.

This got us to plug in 26 drives in total (13 per hub), all ready to start copying data. Only to find out that the drive letter assignment that Windows does is a bit outdated :)


When you run out of available drive letters, you have to use an NTFS volume mount point. The software we used for cloning the USB drives (imageUSB) would not recognize these mount points as drives however, so this put a limit on the number of drives to flash at once. When we actually flashed the drives, both hubs turned out to be faulty, disconnecting the drives at random and causing the writes to fail. This led us to spread the effort of flashing the drives over multiple people (thanks Kishen & Viktor!), as we could do fewer per machine.

Just copying the data is not sufficient however. We have to verify that the data which was written can be read back as such. During this verification, several USB drives turned out to be faulty. After plugging in and out a lot of drives, this was the result:


What did we learn?

During the conference, it turned out that using the USB drives and virtual machine image (mostly) worked out as planned. There were issues, but they were manageable and usually easy to resolve. To sum up the most important points:

  • Some vendors disable the CPU virtualization extensions by default in the BIOS/UEFI. These need to be enabled for the VM to work
  • The USB drives were fairly slow. Using faster drives would've smoothed things out more
  • A 64-bit VM does not work on a 32-bit host :)
  • Some machines ran out of memory. The machine was configured to have 2GB, this was probably slightly too low.
  • Setting up the machines was generally pain-free, but it would still be good to reserve some time at the beginning of the conference for this.
  • A combination of using USB drives together with providing the entire image beforehand is probably the sweet spot

Next year we will host TestWorks Conf again. What we learned here will help us to deliver an even better hands-on experience for all participants.


Security is dead, long live security

Mon, 10/12/2015 - 19:30

Last week the 7th edition of BruCON was held. For those unfamiliar with it, BruCON is a security conference where everybody with an interest in security can share their views and findings. As always it was a great mixture of technology, philosophy, personal opinions and hands-on workshops.

This year however I noticed a certain pattern in some of the talks. Chris Nickerson gave a presentation about "how to make a pentester's life hell" based on experience, Shyma Rose shared her views on risk management, Mark Hillick showed us how the security was improved at Riot Games and David Kennedy provided his opinion on the state of the information security industry nowadays. All four of them basically told pieces of the same tale from a different perspective and I will try to provide my viewpoint on the matter in this blog.

The security bubble

Both Shyma and Dave said the term 'Risk' is inflated and is nowadays used as a buzz word that no longer has a connection with actual threats. I couldn't agree more on this. Nowadays it is almost normal, when someone identifies a new vulnerability, to launch a complete marketing campaign including fancy names and logos, dedicated websites and huge social media presence. Risk is no longer used as an indicator of 'badness', but instead used as a catalyst for pushing 'money making silver bullets' to customers. And, since most clients don't know any better, they get away with it. And, as Chris showed, even customers who do know better, still enable them to push their crappy services by providing them ideal conditions to prove their effectiveness.

Hackers != unstoppable evil geniuses

Hackers are looked upon as extremely smart guys with elite skills, whereas the reality is that most breaches happen due to stupid mistakes and decade-old problems. The infosec industry's answer is products and services that are no longer suited to the fast-changing world we now live in. Most services rely on stopping or identifying known attacks. In a world that is changing almost every heartbeat, and especially in a world of mobile devices and cloud solutions, the 'castle and archers' approach no longer works. Facts show that in many hacks exploits weren't even necessary, thanks to the possibilities of modern platforms. If an attacker has the possibility to access some maintenance or configuration part of your system, it's game over. If an attacker can access some scripting environment, it's game over. If an attacker can lure one of your employees into visiting a website or installing something, it's game over.

Ivory Towers

Another problem is the huge gap between security operations inside a company and the business and development departments. Many companies have internal security guidelines that are hardly aligned with the rest of the organization and are therefore bypassed or ignored. The natural response to this is that the security departments push the guidelines ever harder, only causing the gap to widen further. Based on experience, Mark stated that security departments should get out of their ivory tower and start to understand what is really important. It's more effective to achieve 80% security with 100% alignment than to try to reach 100% security with 0% alignment.


Both the infosec industry and its clients nowadays have enough money and attention to change things, so we should get rid of the technology-driven approach and start focusing on talent and smartness. When you look at the root causes of many hacks, it's not the technology that is to blame, but instead ego, culture, miscommunication and the working environment. As long as security is considered something you can bolt on or outsource to external expertise, it will fail. We, both suppliers and clients, should treat security as a standard quality attribute that everybody is responsible for.

Telling instead of training

In most companies the ratio between security-minded and non-security-minded people is way off. Security teams should therefore start acting as supporters and trainers. By becoming more visible in the organization and aligning with it, security awareness will rise across the board. Every single person in the company should get a basic understanding of what security is about, and it isn't that hard to achieve. Developers should know secure coding, testers should learn to use security tooling, and operations should know how hacking tools work, how they can be identified, and the basics of forensic research. People also need to be trained in how to act when an issue occurs: build a good incident response program with flowcharts that everybody can use and apply. It's not rocket science; you can achieve a lot with good old common sense.


Another key item is visibility. Often incidents, breaches and other security-related issues are 'kept under the radar' and only 'the chosen few' know the details. By being open and transparent about them towards the whole organization, people will start to understand their importance and challenge each other to prevent them in the future. By creating internal security challenges and promoting good ideas, a community will form by itself. Use leaderboards and reward people with goodies to stimulate them to improve themselves and get accustomed to the matter. Make sure successes are acknowledged. To quote Mark (who also quoted someone else): "If Tetris has taught me anything, it’s that errors pile up and accomplishments disappear."

Hackers don't only knock on the front door

Lastly, start to implement defense by default and assess every situation as if a breach had already occurred. Assume bad stuff will happen at some point and see how you can minimize the damage at each point. Do this on all levels: disable local admin accounts, use application protection like EMET and AppLocker, implement strict password policies, apply network segmentation between BYOD, office automation and backends, use coding frameworks, patch all the time, test everything, monitor everything, and start analyzing your external and internal network traffic. The ultimate goal is to make it as difficult as possible for pentesters (and therefore hackers). Pentesters should cry and require weeks, months or even years to get somewhere.

There is no I in team!

We, as an infosec industry, are facing a future where change is the constant factor, and we have to find a way to deal with that. In order to be successful, we have to understand and acknowledge that we can no longer do it on our own. Unless we start to behave as members of the team, we will fail horribly and become sitting ducks.


Refactoring a monolith to Microservices

Mon, 10/12/2015 - 16:37

For a training on Microservices that is currently under development at Xebia, we've created implementations of a web shop in both a monolithic and a Microservices architecture. We then used these examples in a couple of workshops to explain a number of Microservices concepts (see here and here). In this post we will describe the process we followed to move from a monolith to services, and what we learned along the way.

First we built ourselves a monolithic web shop. It's a simple application that offers a set of HTTP interfaces that accept and produce JSON documents. You can find the result here on Github. The class com.xebia.msa.rest.ScenarioTest shows how the REST calls can be used to support a user interface (we didn't actually build a user interface and hope you can imagine one based on the ScenarioTest code...).
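To make this concrete, here is a minimal sketch of the kind of REST call such a UI (or ScenarioTest) would make against the monolith; the URL and path are assumptions for this sketch, not the actual endpoints in the repository.

```java
import org.springframework.web.client.RestTemplate;

// Calls one of the monolith's JSON endpoints the way a UI (or ScenarioTest) would.
// The URL and path are assumptions for this sketch.
public class CatalogCall {

    public static void main(String[] args) {
        RestTemplate rest = new RestTemplate();
        String productsJson = rest.getForObject("http://localhost:8080/catalog/products", String.class);
        System.out.println(productsJson);
    }
}
```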

So this was our starting point. How should we split this monolith into smaller pieces? To answer this question we started out defining our services in a Post It-saturated design session. We used event storming to find out what the boundaries of our services should be. A good place to start reading on event storming is Ziobrando's blog. The result is shown in the picture below.

Results of event storming session

We came up with seven aggregates that were translated into software in four services: Catalog, Shop, Payment and Fulfillment. We decided those four services would be enough to start with. One service calls another using REST, exchanging JSON documents over HTTP. The code can be found here. Our process to move from a monolith to services was to copy the monolith code four times, followed by stripping out unnecessary code from each service. This resulted in some initial duplication of classes, but gradually the domain models of the services started drifting apart. An Order in the context of Payment is really very simple compared to an Order in the context of Shop, so lots of detail can be removed. Spring's JSON support helps a great deal because it allows you to simply ignore JSON that doesn't fit the target class model: the complex Order from Shop can be parsed by the bare-bones Order in Payment.
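As an illustration of that last point, here is a minimal sketch of what the bare-bones Order in Payment could look like; the class and field names are assumptions for this sketch, not the actual code from the repository. With Jackson, unknown properties in the incoming JSON are simply ignored, which lets the two domain models drift apart without breaking the contract.

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.math.BigDecimal;

// Bare-bones Order as the Payment service could define it: only the fields Payment cares about.
@JsonIgnoreProperties(ignoreUnknown = true)  // silently drop everything else Shop sends along
public class Order {

    private String orderId;
    private BigDecimal totalAmount;

    public String getOrderId() { return orderId; }
    public void setOrderId(String orderId) { this.orderId = orderId; }
    public BigDecimal getTotalAmount() { return totalAmount; }
    public void setTotalAmount(BigDecimal totalAmount) { this.totalAmount = totalAmount; }

    public static void main(String[] args) throws Exception {
        // JSON as Shop might produce it, with details Payment does not care about.
        String shopOrder = "{\"orderId\":\"42\",\"totalAmount\":19.95,"
                + "\"lines\":[{\"product\":\"book\",\"quantity\":1}],\"customer\":{\"name\":\"Jan\"}}";
        Order order = new ObjectMapper().readValue(shopOrder, Order.class);
        System.out.println("Pay " + order.getTotalAmount() + " for order " + order.getOrderId());
    }
}
```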

Though the new solution will probably work well, we weren't quite satisfied. What would happen, for instance, if one of the services became unavailable? In our implementation this would mean that the site would be down; no Payment -> no Order. This is a pity, because in practice a shop may want to accept an Order and send an invoice to be paid later. For inspiration on integration patterns, refer to Gero's blog. Our solution was still tightly coupled: not at design time, but at runtime.

To fix this problem we decided to place queues between services: one service may not call another service but only send out messages on queues. The result can be found here. This solution looks more like the diagram below.
Shop services interacting with events
Events in the diagram correspond to events in the code: a service informs the world that something interesting has happened, like a new Order being completed or a Payment being received. Other services register their interest in certain types of events and pick up processing when needed, corresponding to pattern #3 in Gero's blog.
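To give an idea of what this looks like in code, here is a minimal sketch of a publisher and a listener using Spring AMQP with RabbitMQ as the broker; the broker choice, exchange, queue and class names are assumptions for this sketch and may differ from the actual example code.

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

// Shop publishes an event instead of calling Payment directly.
@Component
public class OrderEventPublisher {

    private final RabbitTemplate rabbitTemplate;

    public OrderEventPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    public void orderCompleted(String orderId) {
        // Exchange and routing key are assumptions for this sketch.
        rabbitTemplate.convertAndSend("shop-events", "order.completed", orderId);
    }
}

// Payment registers its interest in that event type and picks up processing when needed.
@Component
class PaymentEventListener {

    @RabbitListener(queues = "payment.order-completed")  // assumed queue, bound to the shop-events exchange
    public void onOrderCompleted(String orderId) {
        System.out.println("Start payment flow for order " + orderId);
    }
}
```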

This architecture is more robust in the sense that it can handle delays and unavailability of parts of the infrastructure. This comes at a price though:

  1. A user interface becomes more complex. Customers will expect a complete view of their order, including payment information, but this view is composed of UI parts under the control of different services. The question now becomes how to create a UI that looks consistent to end users while it is still robust and respects service boundaries.
  2. What happens to data about orders that is stored by Shop if something goes wrong later in the process? Imagine that a customer completes an order, is presented with a payment interface, and then never actually pays. This means Shop could be left with stale Orders, so we may need some reporting on those to allow a sales rep to follow up with the customer, or a batch job to simply archive old orders (a sketch of such a job follows after this list).
  3. We often got lost while refactoring. The picture showing the main events really helped us stay on track. While this was hard enough in a baby system like our example, it seems really complex in real-life software. Having a monolith makes it easier to see what happens, because you can use your IDE to follow the path through the code. How to stay on track in larger systems is still an open question.
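As for the stale Orders mentioned in point 2, a nightly batch job could take care of archiving them. Below is a minimal sketch using Spring's scheduling support; the repository interface and method names are assumptions for this sketch, and scheduling would still have to be enabled in the application configuration.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Nightly job that archives Shop orders for which no payment event ever arrived.
@Component
public class StaleOrderArchiver {

    // Assumed persistence interface; in the real service this could be a Spring Data repository.
    public interface StaleOrders {
        List<String> findUnpaidOrderIdsOlderThan(Instant cutoff);
        void archive(String orderId);
    }

    private final StaleOrders staleOrders;

    public StaleOrderArchiver(StaleOrders staleOrders) {
        this.staleOrders = staleOrders;
    }

    @Scheduled(cron = "0 0 3 * * *")  // every night at 03:00
    public void archiveStaleOrders() {
        Instant cutoff = Instant.now().minus(Duration.ofDays(14));
        for (String orderId : staleOrders.findUnpaidOrderIdsOlderThan(cutoff)) {
            staleOrders.archive(orderId);
        }
    }
}
```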

We plan to explore both issues later and hope to report our findings.