Software Development Blogs: Programming, Software Testing, Agile, Project Management


Xebia Blog

Robot Framework and the keyword-driven approach to test automation - Part 2 of 3

Wed, 02/03/2016 - 18:03

In part 1 of our three-part post on the keyword-driven approach, we looked at the position of this approach within the history of test automation frameworks. We elaborated on the differences, similarities and interdependencies between the various types of test automation frameworks. This provided a first impression of the nature and advantages of the keyword-driven approach to test automation.

In this post, we will zoom in on the concept of a 'keyword'.

What are keywords? What is their purpose? What are the advantages of utilizing keywords in your test automation projects? And are there any disadvantages or risks involved?

As stated in an earlier post, the purpose of this first series of introductory-level posts is to avoid having to interrupt later posts with lengthy conceptual expositions. Those later posts will be of a much more practical, hands-on nature and should be concerned solely with technical solutions, details and instructions. However, for those who are taking their first steps in the field of functional test automation and/or are inexperienced in the area of keyword-driven test automation frameworks, we would like to provide some conceptual and methodological context, so that they may grasp the follow-up posts more easily.

Keywords in a nutshell

A keyword is a reusable test function

The term ‘keyword’ refers to a callable, reusable, lower-level test function that performs a specific, delimited and recognizable task. For example: ‘Open browser’, ‘Go to url’, ‘Input text’, ‘Click button’, ‘Log in’, 'Search product', ‘Get search results’, ‘Register new customer’.

Most, if not all, of these are recognizable not only for developers and testers, but also for non-technical business stakeholders.

Keywords implement automation layers with varying levels of abstraction

As can be gathered from the examples given above, some keywords are more atomic and specific (or 'specialistic') than others. For instance, ‘Input text’ will merely enter a string into an edit field, while ‘Search product’ will be comprised of a chain (sequence) of such atomic actions (steps), involving multiple operations on various types of controls (assuming GUI-level automation).

Elementary keywords, such as 'Click button' and 'Input text', represent the lowest level of reusable test functions: the technical workflow level. These often do not have to be created, but are provided by existing, external keyword libraries (such as Selenium WebDriver) that can be made available to a framework. A situation that could require creating such atomic, lowest-level keywords yourself would be automating at the API level.

The atomic keywords are then reused within the framework to implement composite, functionally richer keywords, such as 'Register new customer', 'Add customer to loyalty program', 'Search product', 'Add product to cart', 'Send gift certificate' or 'Create invoice'. Such keywords represent the domain-specific workflow activity level. They may in turn be reused to form other workflow activity level keywords that automate broader chains of workflow steps; such keywords then form an extra layer of wrappers within the workflow activity level. For instance, 'Place an order' may be comprised of 'Log customer in', 'Search product', 'Add product to cart', 'Confirm order', etc. The modularization granularity applied to the automation of such broader workflow chains is determined by trading off factors such as the desired levels of readability (of the test design), maintainability/reusability, and coverage of possible alternative functional flows through the business process involved. The eventual set of workflow activity level keywords forms the 'core' DSL (Domain Specific Language) vocabulary in which the highest-level specifications/examples/scenarios/test designs are to be written.
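
The layering described above can be sketched in plain Python, with ordinary functions standing in for keywords. All names and locators below are hypothetical; in a real project the atomic keywords would come from an existing library:

```python
# Sketch of keyword layering: atomic (technical workflow level) keywords are
# reused by composite (workflow activity level) keywords, which are in turn
# reused by broader workflow keywords.

def input_text(locator, text):
    """Atomic, technical workflow level keyword (stub)."""
    return f"typed '{text}' into {locator}"

def click_button(locator):
    """Atomic, technical workflow level keyword (stub)."""
    return f"clicked {locator}"

def log_customer_in(username, password):
    """Composite, workflow activity level keyword: a chain of atomic steps."""
    return [
        input_text("id=username", username),
        input_text("id=password", password),
        click_button("id=login"),
    ]

def place_an_order(username, password, product):
    """Broader workflow keyword, wrapping other workflow-level keywords."""
    steps = log_customer_in(username, password)
    steps += [
        input_text("id=search", product),
        click_button("id=add-to-cart"),
        click_button("id=confirm-order"),
    ]
    return steps
```

Note how 'Place an order' never touches a locator directly: it is composed entirely of lower-level keywords, which is exactly the wrapping the text describes.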

The latter (i.e. scenarios/etc.) represent the business rule level. For example, a high-level scenario might be: 'Given a customer has joined a loyalty program, when the customer places an order of $75 or higher, then a $5 digital gift certificate will be sent to the customer's email address'. Such rules may of course be comprised of multiple 'given', 'when' and/or 'then' clauses, e.g. multiple 'then' clauses conjoined through an 'and' or 'or'. Each of these clauses within a test case (scenario/example/etc.) is a call to a composite keyword at the workflow activity level. The workflow-level keywords, in turn, call elementary, technical workflow level keywords that implement the lowest-level, technical steps of the business scenario. The technical workflow level keywords will not appear directly in the high-level test design or specifications, but will only be called by keywords at the workflow activity level; they are not part of the DSL.
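
Purely to illustrate the clause-to-keyword mapping, here is a minimal Python sketch in which each 'given', 'when' and 'then' clause of the gift-certificate rule corresponds to one workflow activity level keyword; the keyword bodies are trivial stand-ins, not real framework code:

```python
# Hypothetical workflow activity level keywords; in a real framework each
# clause of the business-rule-level scenario would call one of these.

def join_loyalty_program():
    """'Given' clause: a customer has joined a loyalty program."""
    return {"loyalty_member": True, "orders": [], "certificates": []}

def place_order(customer, amount):
    """'When' clause: the customer places an order."""
    customer["orders"].append(amount)
    if customer["loyalty_member"] and amount >= 75:
        customer["certificates"].append(5)  # $5 digital gift certificate

def certificate_received(customer, value):
    """'Then' clause: verify the expected certificate was sent."""
    return value in customer["certificates"]

def scenario_loyalty_gift():
    # Given a customer has joined a loyalty program
    customer = join_loyalty_program()
    # When the customer places an order of $75 or higher
    place_order(customer, 80)
    # Then a $5 digital gift certificate is sent
    return certificate_received(customer, 5)
```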

Keywords thus live in layers with varying levels of abstraction, where, typically, each layer reuses (and is implemented through) the more specialistic, concrete keywords from the lower levels. Lower-level keywords are the building blocks of higher-level keywords, and at the highest level your test cases themselves consist of keyword calls.

Your automation solution will typically contain other types of abstraction layers as well, for instance a so-called 'object map' (or 'GUI map'), which maps technical identifiers (such as an XPath expression) onto logical names, thereby enhancing the maintainability and readability of your locators. The latter example, once again, assumes GUI-level automation.
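
A minimal sketch of such an object map, with hypothetical XPath locators: the technical identifiers live in one place, so a UI change requires only a single update while keywords keep using stable logical names.

```python
# Object map ('GUI map') abstraction layer: logical control names mapped onto
# technical locators. The locators below are made up for the example.

OBJECT_MAP = {
    "login.username_field": "//input[@id='username']",
    "login.password_field": "//input[@id='password']",
    "login.submit_button":  "//button[@type='submit']",
}

def locator(logical_name):
    """Resolve a logical control name to its technical locator."""
    return OBJECT_MAP[logical_name]
```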

Keywords are wrappers

Each keyword is a function that automates a simple or (more) composite/complex test action or step. As such, keywords are the 'building blocks' for your automated test designs. When having to add a customer as part of your test cases, you will not write out (hard code) the technical steps (such as entering the first name, entering the surname, etc.), but you will have one statement that calls the generic 'Add a customer' function which contains or 'wraps' these steps. This wrapped code, as a whole, thereby offers a dedicated piece of functionality to the testers.

Consequently, a keyword may encapsulate sizeable and/or complex logic, hiding it and rendering it reusable and maintainable. This mechanism of keyword-wrapping entails modularization, abstraction and, thus, optimal reusability and maintainability. In other words, code duplication is prevented, which dramatically reduces the effort involved in creating and maintaining automation code.

Additionally, the readability of the test design improves, since the clutter of technical steps is replaced by a human-readable, parameterized call to the function, e.g.: | Log customer in | Bob Plissken | Welcome123 |. Using so-called embedded or interposed arguments, readability may be enhanced even further. For instance, declaring the login function as 'Log ${userName} in with password ${password}' allows a test scenario to call the function like this: 'Log Bob Plissken in with password Welcome123'.

Keywords are structured

As mentioned in the previous section, keywords may hide rather complex and sizeable logic. This is because the wrapped keyword sequences may be embedded in control/flow logic and may feature other programmatic constructs. For instance, a keyword may contain:

  • FOR loops
  • Conditionals (‘if / else if / else’ branching constructs)
  • Variable assignments
  • Regular expressions
  • Etc.
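
A single keyword hiding all of these constructs might look like this in Python; the result data and price format are made up for the example:

```python
# Sketch of a keyword that encapsulates control-flow logic: it loops over raw
# search-result lines, branches on availability and extracts prices with a
# regular expression.
import re

def get_prices(search_results):
    """Extract numeric prices from raw result lines; skip unavailable items."""
    prices = []
    for line in search_results:                      # FOR loop
        if "unavailable" in line:                    # conditional
            continue
        match = re.search(r"\$(\d+\.\d{2})", line)   # regular expression
        if match:
            prices.append(float(match.group(1)))     # variable assignment
    return prices
```

A test case calling 'Get prices' never sees this complexity; it simply receives a clean list of numbers.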

Of course, keywords will feature such constructs more often than not, since encapsulating the involved complexity is one of the main purposes for a keyword. In the second and third generation of automation frameworks, this complexity was an integral part of the test cases, leading to automation solutions that were inefficient to create, hard to read & understand and even harder to maintain.

Being a reusable, structured function, a keyword can also be made generic, by taking arguments (as briefly touched upon in the previous section). For example, ‘Log in’ takes arguments: ${user}, ${pwd} and perhaps ${language}. This adds to the already high levels of reusability of a keyword, since multiple input conditions can be tested through the same function. As a matter of fact, it is precisely this aspect of a keyword that enables so-called data-driven test designs.

Finally, a keyword may also have return values, e.g. ‘Get search results’ returns ${nrOfItems}. The return value can be used for a myriad of purposes, for instance to perform assertions, as input for decision-making, or to pass into another function as an argument. Some keywords will return nothing, but only perform an action (e.g. change the application state, insert a database record or create a customer).
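
Arguments and return values can be sketched as follows; the catalog and keyword names are illustrative only, but the pattern is exactly what enables data-driven test designs:

```python
# A keyword taking an argument and returning a value, plus a data-driven
# runner that feeds multiple input conditions through the same keyword.

CATALOG = {"book": 3, "dvd": 0}   # made-up test data

def search_product(term):
    """Workflow keyword with an argument; returns ${nrOfItems}."""
    return CATALOG.get(term, 0)

def data_driven_check(cases):
    """Run the same keyword for multiple (input, expected) pairs."""
    for term, expected in cases:
        assert search_product(term) == expected   # assertion on return value
```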

Risks involved

With great power comes great responsibility

The benefits of using keywords have been explicated above. Amongst other advantages, such as enhanced readability and maintainability, the keyword-driven approach provides a lot of power and flexibility to the test automation engineer. Quasi-paradoxically, it is in harnessing this power and flexibility that the primary risk of the keyword-driven approach is introduced. To establish why this risk is of topical interest, let us digress briefly into the subject of 'the new testing'.

In many agile teams, both 'coders' and 'non-coders' are expected to contribute to the automation code base. The boundaries between these (and other) roles are blurring. Despite the current (and sometimes rather bitter) polemic surrounding this topic, it seems inevitable that the traditional developer role will have to move towards testing (code) and the traditional tester role will have to move towards coding (tests). Both will use testing frameworks and tools, whether unit testing frameworks (such as JUnit), keyword-driven functional test automation frameworks (such as the Robot Framework or Cucumber) or non-functional testing frameworks (such as Gatling or Zed Attack Proxy).

To this end, the traditional developer will have to become knowledgeable and gain experience in the field of testing strategies. Test automation that is not based on a sound testing strategy (and attuned to the relevant business and technical risks), will only result in a faster and more frequent execution of ineffective test designs and will thus provide nothing but a false sense of security. The traditional developer must therefore make the transition from the typical tool-centric approach to a strategy-centric approach. Of course, since everyone needs to break out of the silo mentality, both developer and tester should also collaborate on making these tests meaningful, relevant and effective.

The challenge for the traditional tester may prove to be even greater, and it is there that the aforementioned risks are introduced. As stated, the tester will have to contribute test automation code: not only at the level of the highest-level test designs or specifications, but also at the lowest, keyword (fixture/step) level, where most of the intelligence, power and, hence, complexity resides. Just as the developer needs to ascend to the 'higher plane' of test strategy and design, the tester needs to descend into the implementation details of turning a test strategy and design into something executable. More and more testers with a background in 'traditional', non-automated testing are therefore entering the process of acquiring enough coding skills to be able to make this contribution.

However, by having (hitherto) inexperienced people authoring code, severe stability and maintainability risks are introduced. Although all current (i.e. keyword-driven) frameworks facilitate creating automation code that is reusable, maintainable, robust, reliable, stable and readable, code authors still have to actively realize these qualities, by designing for them and building them into their automation solutions. Non-coders, in my experience, initially have quite some trouble understanding and (even more dangerously) appreciating the critical importance of applying design patterns and other best practices to their code.

That is, most traditional testers seem to be able to learn how to code (at a sufficiently basic level) rather quickly, partially because writing automation code is generally less complex than writing product code. They also get a taste for it: they soon become passionate and ambitious, eager to apply their newly acquired skills and to create lots of code. Caught in this rush, they often forget to refactor their code, downplay the importance of doing so (and the dangers involved) or simply opt to postpone it until it becomes too large a task. Because of this, even testers who have been properly trained in applying design patterns may still deliver code that is monolithic, unstable/brittle, non-generic and hard to maintain. Depending on the level at which the contribution is to be made (lowest level in code or mid-level in scripting), these risks apply to a greater or lesser extent.

Moreover, this risky behaviour may be incited by uneducated stakeholders, as a consequence of them holding unrealistic goals, maintaining a short-term view and (to put it bluntly) being ignorant of the pitfalls, limitations, disadvantages and risks that are inherent to all test automation projects.

Then take responsibility ... and get some help in doing so

Clearly then, the described risks are not so much inherent to the frameworks or to the approach to test automation, but rather flow from inexperience with these frameworks and, in particular, with this approach. That is, to (optimally) benefit from the specific advantages of this approach, applying design patterns is imperative. This is a critical factor for the long-term success of any keyword-driven test automation effort. Without applying patterns to the test code, solutions will not be cost-efficient, maintainable or transferable, amongst other disadvantages; the costs will simply outweigh the benefits in the long run. What's more, essentially the whole purpose and added value of using keyword-driven frameworks is lost, since these frameworks were devised precisely to this end: to counter the severe maintainability/reusability problems of the earlier generations of frameworks. Therefore, of all the approaches to test automation, the keyword-driven approach depends to the greatest extent on the disciplined and rigorous application of standard software development practices, such as modularization, abstraction and genericity of code.

This might seem a truism. However, since typically the traditional testers (and thus novice coders) are nowadays directed by their management towards using keyword-driven frameworks for automating their functional, black-box tests (at the service/API- or GUI-level), automation anti-patterns appear and thus the described risks emerge. To make matters worse, developers remain mostly uninvolved, since a lot of these testers are still working within siloed/compartmented organizational structures.

In our experience, a combination of a comprehensive set of explicit best practices, training and on-the-job coaching, and a disciplined review and testing regime (applied to the test code) is an effective way of mitigating these risks. Additionally, silos need to be broken down, so as to foster collaboration (and create synergy) on all testing efforts as well as to be able to coordinate and orchestrate all of these testing efforts through a single, central, comprehensive and shared overall testing strategy.

Of course, the framework selected to implement a keyword-driven test automation solution is an important enabler as well. As will become apparent from this series of blog posts, the Robot Framework is the platform par excellence to facilitate, support and even stimulate these counter-measures and, consequently, to very swiftly enable and empower seasoned coders and beginning coders alike to contribute code that is efficient, robust, stable, reusable, generic, maintainable as well as readable and transferable. That is not to say that it is the platform to use in any given situation, just that it has been designed with the intent of implementing the keyword-driven approach to its fullest extent. As mentioned in a previous post, the RF can be considered as the epitome of the keyword-driven approach, bringing that approach to its logical conclusion. As such it optimally facilitates all of the mentioned preconditions for long-term success. Put differently, using the RF, it will be hard to fall into the pitfalls inherent to keyword-driven test automation.

Some examples of such enabling features (that we will also encounter in later posts):

  • A straightforward, fully keyword-oriented scripting syntax, that is both very powerful and yet very simple, to create low- and/or mid-level test functions.
  • The availability of dozens of keyword libraries out-of-the-box, holding both convenience functions (for instance to manipulate and perform assertions on xml) and specialized keywords for directly driving various interface types. Interfaces such as REST, SOAP or JDBC can thus be interacted with without having to write a single line of integration code.
  • Very easy, almost intuitive means to apply a broad range of design patterns, such as creating various types of abstraction layers.
  • And lots and lots of other great and unique features.

We now have an understanding of the characteristics and purpose of keywords and of the advantages of structuring our test automation solution into (various layers of) keywords. At the same time, we have looked at the primary risk involved in applying such a keyword-driven approach and at ways to deal with it.

Keyword-driven test automation is aimed at solving the problems that were instrumental in the failure of prior automation paradigms. However, for a large part it merely facilitates the involved solutions. That is, to actually reap the benefits that a keyword-driven framework has to offer, we need to use it in an informed, professional and disciplined manner, by actively designing our code for reusability, maintainability and all of the other qualities that make or break long-term success. The specific design as well as the unique richness of powerful features of the Robot Framework will give automators a head start when it comes to creating such code.

Of course, this 'adage' of intelligent and adept usage is true for any kind of framework that may be used or applied in the course of a software product's life cycle.

Part 3 of this second post, will go into the specific implementation of the keyword-driven approach by the Robot Framework.

FitNesse in your IDE

Wed, 02/03/2016 - 17:10

FitNesse has been around for a while; the tool was created by Uncle Bob back in 2001. It’s centered around the idea of collaboration: collaboration within a (software) engineering team and with your non-programmer stakeholders. FitNesse tries to achieve that by making it easy for non-programmers to participate in writing specifications, examples and acceptance criteria. It can be launched as a wiki web server, which makes it accessible to basically everyone with a web browser.

The key feature of FitNesse is that it allows you to verify the specs against the actual application: the System Under Test (SUT). This means that you have to make the documentation executable. FitNesse considers tables to be executable. When you read ordinary documentation you’ll often find requirements and examples outlined in tables, hence this makes for a natural fit.

There is no such thing as magic, so the link between the documentation and the SUT has to be created. That’s where things become tricky. The documentation lives in our wiki server, but the code (which we need to connect documentation and SUT) lives on the file system, in an IDE. What to do? Read a wiki page, remember the class and method names, switch to the IDE, create classes and methods, compile, switch back to the browser, test, and repeat? Well, so much for fast feedback! When you talk to programmers, you’ll find this to be the biggest problem with FitNesse.
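
To illustrate that table-to-fixture link: real FitNesse fixtures are typically Java classes, so the following Python-flavoured sketch with made-up names only mirrors the general Slim convention that input columns map onto setter methods and columns ending in '?' map onto output methods.

```python
# Illustrative only: how one row of a decision table maps onto fixture calls.
# Input column 'order total' becomes a setter; output column 'discount?'
# becomes a method whose return value is compared against the table cell.

class Discount:
    def set_order_total(self, total):   # input column: 'order total'
        self.total = total

    def discount(self):                 # output column: 'discount?'
        return 5 if self.total >= 75 else 0

def run_row(fixture_cls, row):
    """Execute one decision-table row: set the inputs, then read the output."""
    fixture = fixture_cls()
    fixture.set_order_total(row["order total"])
    return fixture.discount()
```

Keeping the wiki page and this fixture code in sync by hand is exactly the friction the plugin described below removes.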

Imagine, as a programmer, you're about to implement an acceptance test defined in FitNesse. With a single click, a fixture class is created and adding fixture methods is just as easy. You can easily jump back and forth between the FitNesse page and the fixture code. Running the test page is as simple as hitting a key combination (Ctrl-Shift-R comes to mind). You can set breakpoints, step through code with ease. And all of this from within the comfort of your IDE.

Acceptance test and BDD tools, such as Cucumber and Concordion, have IDE plugins to cater for that, but for FitNesse this support was lacking. Was lacking! Such a plugin is finally available for IntelliJ.


Over the last couple of months, a lot of effort has been put in building this plugin. It’s available from the Jetbrains plugin repository, simply named FitNesse. The plugin is tailored for Slim test suites, but also works fine with Fit tables. All table types are supported. References between script, decision tables and scenarios work seamlessly. Running FitNesse test pages is as simple as running a unit test. The plugin automatically finds FitNesseRoot based on the default Run configuration.

The current version (1.4.3) even has (limited) refactoring support: renaming Java fixture classes and methods will automatically update the wiki pages.

Feel free to explore the new IntelliJ plugin for FitNesse and let me know what you think!

(GitHub: https://github.com/gshakhn/idea-fitnesse)

Nine Product Management lessons from the Dojo

Tue, 02/02/2016 - 23:00
Are you kidding? a chance to add the Matrix to a blogpost?

As I am gearing up for the belt exams next Saturday, I couldn’t help noticing the similarities between what we learn in the dojo (the place where martial arts are taught) and how we should behave as Product Managers. Here are nine lessons, straight from the dojo, ready for your day job:

1.) Some things are worth fighting for

In Judo we practice randori, which here means ground wrestling. You will find that some grips are worth fighting for, but some you should let go of in search of a better path to victory.

In Product Management, we are the heat shield of the product, constantly caught between engineering striving for perfection, sales wanting something else, marketing pushing the launch date and management hammering on the P&L.

You need to pick your battles: some you deflect, some you disarm, and some you accept, because you are maneuvering yourself so you can make the move that counts.

Good product managers are not those who win the most battles, but those who know which ones to win.

2.) Preserve your partners

It’s fun to send people flying through the air, but the best way to improve yourself is to improve your partner. You are on this journey together, just as in Product Management. Ask yourself the following question today: “whom do I need to train as my successor?” and start doing so.

"I was delayed to the airport because of the taxi strike, but saved by the strike of the air traffic controllers"

3.) There is no such thing as fair

It’s a natural reaction when someone changes the rules of the game: we protest, we go on strike, we say it’s not fair. But in a market-driven environment, what is fair? Disruption, changing the rules of the game, has become the standard (24% of companies experience it already, 58% expect it, 42% are still in denial). We can go on strike, or we can adapt.

The difference between kata and free sparring is that your opponents will not follow a prescribed path. Get over it.

4.) Behavior leads to outcome

I’m heavily debating the semantics with my colleague from South Africa (you know who you are), so it’s probably a matter of wording, but the gist of it is: if you want more of something, you should start doing it. Positive brand experiences will drive people to your products; hence one bad product affects all the other products of your brand.

It’s not easy to change your behavior, whether in sport, health, customer interaction or product philosophy, but a different outcome starts with different behavior.

Where did my product go?

5.) If it’s not working try something different

Part of Saturday’s exams will be what in Jujitsu is called “indirect combinations”. This means that you will be judged on your ability to move from one technique to another when the first one fails. Brute force is also an option, but not one that is likely to succeed, even if you are very strong.

Remember Microsoft pouring over a billion marketing dollars into Windows Phone? Brute-forcing its position by buying Nokia? Blackberry doing something similar with QNX and only now switching to Android? Indirect combinations are not a lack of perseverance, but adaptability: achieving results without brute force and with a higher chance of success.

This is where you tap out

6.) Failure is always an option

Tap out! Half of the stuff in Jujitsu was originally designed to break your bones, so tap out if your opponent has got a solid grip. It’s not the end, it’s the beginning. Nobody gets better without failing.

Two thirds of all product innovations fail, and the remaining third takes about five iterations to get right. Test your idea thoroughly, but don’t be afraid to try something else too.

7.) Ask for help

There is no way you can know it all. Trust your team, peers and colleagues to help you out. Everyone has something to offer; they may not always have the solution for you, but in explaining your problem you will often find the solution yourself.

8.) The only way to get better is to show up

I’m a thinker. I like to get the big picture before I act. This means that I can also overthink something that you just need to do. Though it is okay to study and listen, don’t forget to go out there and start doing. Short feedback loops are key to building the right product, even if the product is not yet built right. So talk to customers and show them what you are working on, even at an early stage. You will not get better at martial arts or product management if you wait too long to show up.

9.) Be in the moment

Don’t worry about what just happened, or what might happen. Worry about what is right in front of you. The technique you are forcing is probably not the one you want.


This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.


Which Agile Organizational Model or Framework to use? Use them all!

Sat, 01/30/2016 - 22:11

Many organizations are reinventing themselves as we speak.  One of the most difficult questions to answer is: which agile organizational model or framework do we use? SAFe? Holacracy? LeSS? Spotify?

Based on my experience with all these models, my answer is: just use as many agile models and frameworks as you can get your hands on. Not by choosing one of them specifically, but by experimenting with elements of all these models the agile way: inspect, learn and adapt continuously.

For example, you could use Spotify’s tribe structure, Holacracy’s consent and role principles, and SAFe’s Release Trains in your new agile organization. But most importantly: experiment towards your own “custom-made” agile organizational building blocks. And remember: taking on the agile mindset is 80% of the job; only 20% is implementing the agile “organization”.

Probably the worst thing you can do is simply copy-paste an existing model. You will inherit the same rigid situation you wanted to prevent by implementing a scaled, agile organizational model in the first place.

Finally, the main ingredient of this agile recipe is trust. You have to trust your colleagues and this newborn agile organization to be anti-fragile and self-correcting right from the start. These are the same principles that the successful agile organizations you probably admire depend on.

Hoe om te gaan met Start-ups

Fri, 01/29/2016 - 16:15

Dit is een vraag die regelmatig door mijn hoofd speelt. In ieder geval moeten we stoppen met het continu romantiseren van deze initiatieven en als corporate Nederland nou eens echt mee gaan spelen.

Maar hoe?

Grofweg zijn er twee strategieën als corporate: opkopen of zelf beter doen! Klinkt simpel, maar is toch best wel complex. Waarschijnlijk is de beste strategie om een mix te kiezen van beide, waarbij je maximaal je eigen corporate kracht gebruikt (ja, die heb je), en tegelijkertijd volledig de kracht van start-up innovatie kunt gebruiken.

Deze post verkent de mogelijkheden en je moet vooral verder lezen, als ook jij wilt weten hoe jij de digitalisering van de 21ste eeuw wilt overleven.

Waarom moet ik hier iets mee?

Actually, I shouldn't need to write this paragraph anymore, right? The average age of companies is declining.
[Figure: average age of Fortune 500 companies]
This is partly because the digitalization of products and services keeps lowering the entry barriers in many markets. There is more competition, so you have to try harder to stay relevant.
Secondly, start-ups are hot! Everyone wants to work for a start-up, so that is where the talent goes. Talent from an already (too) tight pool. You therefore have to innovate more than before, or you will lose the "war on talent".
Finally, there is of course a lot to gain with digital innovation. The speed at which companies can become profitable nowadays through digital products and services is incredible, so if you do it well, you are in the game.

What are my options?

There are really only two ways to deal with start-ups. The first is simply to take a stake in a start-up or to acquire a promising one. The other is to start innovating yourself, from your organization's own strengths.

The advantage of stakes and acquisitions is of course the quick win. Start-ups rarely sell themselves cheaply, but then you get something in return. It is even better when the start-up is active in a segment or market that your own brand cannot or will not enter (incumbent inertia). The new acquisition is then complementary to the existing business. For example, a large bank acquiring a start-up focused on selling short-term credit to small and medium-sized businesses.

The downside is of course that it is hard to transfer your existing assets. Moreover, the new acquisition almost never truly becomes part of the standing organization, and perhaps you don't even want it to. There is a real risk that the acquired start-up is influenced too much by the parent company and gets dragged down by the gravity of bureaucracy and counterproductive existing corporate behavioral patterns.

On top of that, a successful start-up automatically becomes the target of attacks by yet other start-ups. The "cannibal mindset" has to stay in place! Facebook has therefore always said: if we don't disrupt our own model, someone else will. Perhaps Intel CEO Andy Grove was right when he once said: "only the paranoid survive".

Becoming more innovative yourself is of course also an option, but it is quite complex. Usually, innovation within a corporate is still isolated in a lab setting. Not that this is wrong, but start-ups obviously don't do anything of the kind. The start-up itself is the lab!

It is funny that start-ups always seem to target new markets with new products, and that we usually label this as "real" innovation. In a corporate setting, all product-market combinations are placed in a portfolio (for example a BCG matrix), and it is all about the balance between current and new business and the right cash-flow ratios.
The fun part is that start-ups couldn't care less about your portfolio and therefore compete in every quadrant, whether it is business that for you is in the mature phase, the growth phase, or the lab phase. Start-ups are simply in the majority and operate independently of each other on different fronts. This means that effectively everyone in the corporate setting is under fire from start-ups, and therefore that everyone, regardless of their role in the portfolio, has to learn to innovate. An example of how valuable it is to adopt this mindset is the story of a young man from 1973 who worked for Kodak.

Changing your entire company is of course a huge undertaking. As an alternative, you could try to create the same effect as with an acquisition: deliberately launching your own start-ups, set up sufficiently apart from the parent organization to move fast. These own start-ups should become direct competitors of the current business and become so successful that at least part of the existing own and competing business flows towards them. In this way, large corporates can gradually transform themselves into a network of start-up nodes, with the parent company supporting them and acquiring strategically complementary nodes where needed. What such a node looks like and how you organize this is material for a next post.

If you can't wait and want to know sooner, you can of course always call for a chat.

Backlog ordering done right!

Wed, 01/27/2016 - 11:00

Various methods exist to help product owners decide which backlog item to start first. That it pays off to do this (more or less) right has been shown in blog posts by Maurits Rijk and Jeff Sutherland.

These approaches to ordering backlog items all assume that items, once picked up by the team, are finished according to the motto 'Stop starting, start finishing'. An example of a well-known ordering algorithm is Weighted Shortest Job First (WSJF).

For items that may be interrupted, however, this does not produce the best possible schedule. Items that are typically interrupted by other items include story map slices, (large) epics, themes, Marketable Features, and more.

In this blog I'll show which schedule is more optimal and how it works.


Weighted Shortest Job First (WSJF)

In WSJF, the scheduling of work, i.e. product backlog items, is based on both the effort and the (business) value of each item. The effort may be stated in duration, story points, or hours of work. The business value may be calculated using Cost of Delay, or as prescribed by SAFe.

When effort and value are known for the backlog items, each item can be represented by a dot; see the picture to the right.
The proper schedule is obtained by sweeping the dashed line from the bottom right to the upper left (like a windshield wiper).


In practice, both the value and the effort are not precisely known but estimated. This means that product owners will treat dots that are 'close' to each other the same. The picture to the left shows this process. All green sectors have the same ROI (business value divided by effort) and roughly the same WSJF value.

Product owners will probably schedule items as follows: green cells from left to right, then the next 'row' of cells from left to right.
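As an illustration (the item names and numbers below are invented, not from the original), WSJF ordering amounts to a descending sort on value divided by effort:

```python
# Each backlog item: (name, business value, effort in story points).
backlog = [
    ("pay-by-invoice", 13, 8),
    ("one-click-checkout", 8, 2),
    ("dark-mode", 3, 3),
]

def wsjf(item):
    """WSJF score: business value divided by effort (an ROI-like ratio)."""
    _, value, effort = item
    return value / effort

# Highest ratio first: this is the order in which to start the items.
ordered = sorted(backlog, key=wsjf, reverse=True)
# one-click-checkout (4.0), then pay-by-invoice (1.625), then dark-mode (1.0)
```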


Other Scheduling Rules

It has been known at least since the 1950s (and probably earlier) that WSJF is the optimal scheduling mechanism when both value and size are known. The additional condition is that preemption, i.e. interruption of the work, is not allowed.

If any of these three conditions (known value, known size, no preemption) does not hold, WSJF is not the best mechanism and other scheduling rules are more optimal. Other mechanisms are (for a more comprehensive overview and background, see e.g. Table 3.1, page 146 in [Kle1976]):

No preemption allowed

  1. no value, no effort: FIFO
  2. only effort: SJF / SEPT
  3. only value: on value
  4. effort & value: WSJF / SEPT/C
  5. Story map slices: WSJF (no preemption)

FIFO = First in, First out
SEPT = Shortest Expected Processing Time
SJF = Shortest Job First
C = Cost

Examples: (a) user stories on the sprint backlog: WSJF; (b) production incidents: FIFO or SJF; (c) story map slices that represent a minimal marketable feature (or Feature for short). Leaving out a single user story from a Feature creates no business value (that's why it is a minimal marketable feature), and starting such a slice also means completing it before starting anything else; these are scheduled using WSJF. (d) User stories that are part of a Feature: they represent no value by themselves, but all are necessary to complete the Feature they belong to; schedule these according to SJF.

Preemption allowed

  1. no value: SIRPT (SIJF)
  2. effort & value: SIRPT/C or WSIJF (preemption)

SIRPT = Shortest Imminent Remaining Processing Time
SIRPT/C = Shortest Imminent Remaining Processing Time, weighted by Cost
SIJF = Shortest Imminent Job First
WSIJF = Weighted Shortest Imminent Job First

The 'official' naming for WSIJF is SIRPT/C. In this blog I'll use Weighted Shortest Imminent Job First, or WSIJF.

Examples: (a) story map slices that contain more than one Feature (minimal marketable feature); we call these Feature Sets, and they are scheduled using WSIJF. (b) (Large) epics that consist of more than one Feature Set, or epics located at the top right of the windshield-wiper diagram; the latter are usually split into smaller ones containing most of the value for less effort. Use WSIJF.


In summary:

  • User Story (e.g. on sprint backlog and not part of a Feature): WSJF
  • User Story (part of a Feature): SJF
  • Feature: WSJF
  • Feature Set: WSIJF
  • Epics, Story Maps: WSIJF

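The summary above can be sketched as a simple rule lookup (an illustrative mapping, not an official API):

```python
# Ordering rule per backlog item type, following the summary above.
SCHEDULING_RULES = {
    "user story (standalone)": "WSJF",   # real business value, no preemption
    "user story (in Feature)": "SJF",    # no value of its own
    "feature": "WSJF",                   # minimal marketable feature
    "feature set": "WSIJF",              # preemption allowed
    "epic": "WSIJF",
    "story map": "WSIJF",
}

def ordering_rule(item_type):
    """Return the scheduling rule to use for a given backlog item type."""
    return SCHEDULING_RULES[item_type]
```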
Weighted Shortest Imminent Job First (WSIJF)

Mathematically, WSIJF is not as simple to calculate as WSJF. Perhaps in another blog I'll explain the formula too; here I'll just describe in words what WSIJF does and show how it affects the diagram with colored sections.

WSIJF: work that is very likely to finish within the next periods gets high priority

What does this mean?

Remember that WSIJF only applies to work that is allowed to be preempted in favour of other work. Preemption happens at certain points in time; familiar examples are Sprints, Releases (go-live events), or Program Increments as used in the SAFe framework.

The priority calculation takes into account:

  • the probability (or chance) that the work is completed in the next periods,
  • if completed in the next periods, the expected duration, and
  • the amount of time already spent.

Example: consider a Scrum team with a cadence of 2-week sprints and 3 sprints remaining until the next release. For every item on the backlog, determine the chance of completing it in the next sprint and, if completed, divide by the expected duration. Do the same for completing the item within the next 2 and the next 3 sprints. For each item you'll get 3 numbers. The value multiplied by the maximum of these is the priority of the backlog item.
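A minimal sketch of one reading of that calculation (the probabilities and durations below are invented): per horizon of 1, 2 and 3 sprints we have an estimated completion probability and, given completion, an expected duration; the item's priority is its value times the best probability-to-duration ratio.

```python
def wsijf_priority(value, horizon_estimates):
    """horizon_estimates: per horizon (e.g. the next 1, 2 and 3 sprints) a pair
    (probability of completing within the horizon,
     expected duration in sprints given completion)."""
    best_ratio = max(p / e for p, e in horizon_estimates)
    return value * best_ratio

# Illustrative item: 60% chance to finish within 1 sprint (expected 0.9 sprints),
# 85% within 2 sprints (expected 1.4), 95% within 3 sprints (expected 1.7).
priority = wsijf_priority(8, [(0.60, 0.9), (0.85, 1.4), (0.95, 1.7)])
```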

Qualitatively, the effect of WSIJF is that items with large effort get less priority and items with smaller effort get larger priority. This is depicted in the diagram to the right.

Example: Quantifying WSIJF

In the previous paragraph I described the basics of WSIJF and only qualitatively indicated its effect. To make this concrete, let's consider large epics that have been estimated using T-shirt sizes. Since WSIJF mainly affects the sizing part and to a lesser extent the value part, I'll not consider the value in this case. In a subtle manner value also plays a role, but for the purpose of this blog I'll not discuss it here.

Teams are free to define T-shirt sizes as they like. In this blog, the following 5 T-shirt sizes are used:

  • XS ~ < 1 Sprint
  • S ~ 1 – 2 Sprints
  • M ~ 3 – 4 Sprints
  • L ~ 5 – 8 Sprints
  • XL ~ > 8 Sprints

Items of size XL take more than 8 sprints, i.e. typically over 4 months. These are very large items.

Of course, estimates are just that: estimates. Items may take fewer or more sprints to complete. In fact, T-shirt sizes correspond to probability distributions: an 'M'-sized item has some probability to complete in fewer than 3 sprints and may also take longer than 4 sprints. For these distributions I'll take:

  • XS ~ < 1 Sprint (85% probability to complete within 1 Sprint)
  • S ~ 1 – 2 Sprints (85% probability to complete within 3 Sprints)
  • M ~ 3 – 4 Sprints (85% probability to complete within 6 Sprints)
  • L ~ 5 – 8 Sprints (85% probability to complete within 11 Sprints)
  • XL ~ > 8 Sprints (85% probability to complete within 16 Sprints)

As can be seen from the picture, the larger the size of the item the more uncertainty in completing it in the next period.

Note: for the probability distribution, the Wald or Inverse Gaussian distribution has been used.

Based on these distributions, we can calculate the priorities according to WSIJF. These are summarized in the following table:


Column 2 specifies the probability of completing an item in the next period, here the next 4 sprints. For an 'M'-sized item this is 50%.
Column 3 shows the expected duration, given that the item is completed in the period. For an 'M'-sized item this is 3.22 sprints.
Column 4 contains the calculated priority: 'value of column 2' divided by 'value of column 3'.
The last column shows the value as calculated using SJF.

The table shows that items of size 'S' have the same priority value in both the SIJF and SJF schemes. Items larger than 'S' actually get a much lower priority than under SJF.
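The completion probabilities behind such a table can be computed with only the standard library, since the Wald / inverse Gaussian CDF has a closed form in terms of the normal CDF. The (mean, shape) parameters per T-shirt size below are my own rough assumptions chosen to resemble the 85% bounds above, not the author's actual numbers:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def invgauss_cdf(x, mu, lam):
    """CDF of the Wald / inverse Gaussian distribution (mean mu, shape lam)."""
    if x <= 0:
        return 0.0
    a = math.sqrt(lam / x)
    return (norm_cdf(a * (x / mu - 1.0))
            + math.exp(2.0 * lam / mu) * norm_cdf(-a * (x / mu + 1.0)))

# Assumed (mean, shape) in sprints per T-shirt size.
SIZES = {"XS": (0.7, 1.4), "S": (1.5, 3.0), "M": (3.5, 7.0), "L": (6.5, 13.0)}

# Probability of completing within the next period of 4 sprints:
completion = {name: invgauss_cdf(4, mu, lam) for name, (mu, lam) in SIZES.items()}
# the larger the size, the lower the chance of finishing within the period
```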

Note: there are slight modifications to the table when considering various period lengths and taking into account the time already spent on items. This additional complexity I'll leave for a future blog.

In practice product owners only have the estimated effort and value at hand. When ordering the backlog according to the colored sections shown earlier in this blog, it is easiest to use a modified version of this picture:


Schedule the work items according to the diagram above, using the original value and effort estimates: green cells from left to right, then the next row from left to right.


Most backlog prioritization mechanisms in use are based on some variation of ROI (value divided by effort). While this is the optimal schedule for items that may not be preempted, it is not the best way to schedule items that are allowed to be preempted.

As a guideline:

  • Use WSJF (Weighted Shortest Job First) for (smaller) work items where preemption is not allowed, such as individual user stories with (real) business value on the sprint backlog and Features (minimal marketable features, e.g. slices in a story map).
  • Use SJF (Shortest Job First) for user stories within a Feature.
  • Use WSIJF (Weighted Shortest Imminent Job First) for larger epics and collections of Features (Feature Set), according to the table above, or more qualitatively using the modified sector chart.

[Kle1976] Queueing Systems, Vol. 2: Computer Applications, Version 2, Leonard Kleinrock, 1976

[Rij2011] A simulation to show the importance of backlog prioritisation, Maurits Rijk, June 2011, https://maurits.wordpress.com/2011/06/08/a-simulation-to-show-the-importance-of-backlog-prioritization/

[Sut2011] Why a Good Product Owner Will Increase Revenue at Least 20%, Jeff Sutherland, June 2011, https://www.scruminc.com/why-product-owner-will-increase-revenue/

Demonstration of the Exactness of Little's Law

Sat, 01/23/2016 - 11:00

Day 18

Little's Law is a powerful tool that relates the amount of work a team is doing to the average lead time of each work item. There are two main applications, involving either 1) the input rate of work entering the team, or 2) the throughput of work completed.

In previous posts (Applying Little's Law in agile games; Why Little's law works... always) I already explained that Little's Law is exact and makes hardly any assumptions, other than that work enters the team (or system).

This post demonstrates this by calculating Little's Law on every project day while playing GetKanban.

The video below clearly shows that Little's Law holds exactly on every project day, for both the input-rate and the throughput version. Throughput is based on the subclass of 'completed' items.

E.g. on the yellow post-it the product of lambda and W equals N on every project day.




The set-up is that we run the GetKanban game from day 9 through day 24. The video shows the board and charts on the right-hand side, whereas the left-hand side shows the so-called 'sample path' and the Little's Law calculation for both the input rate (yellow post-it) and the throughput (green post-it).

Sample Path. The horizontal axis shows the project day, running from 9 to 24. The vertical axis shows the work items: each row represents an item on the board.

The black boxes mark the days that the work is on the board. For example, item 8 was in the system on project day 9 and completed at the end of project day 12, when it was deployed.

The collection of all black boxes is called a 'Sample Path'.



Little's Law. The average number of items in the system (N) is shown on top; this is an average over the project days. W denotes the average lead time of the items; this is an average over all work items.

Input rate: on the yellow post-it, the Greek letter lambda indicates the average number of work items per day entering the system.

Throughput: the green post-it indicates the average number of work items completed per day, indicated by the Greek letter mu.

Note: the numbers on the green post-it are obtained by considering only the subclass of work that is completed (the red boxes).
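The exactness is easy to check on a small, made-up sample path (the item days below are invented, not the GetKanban data): when every counted item both enters and leaves within the window, the average number of items on the board equals the input rate times the average lead time.

```python
# Sample path: per item, (first day, last day) on the board, inclusive.
items = [(9, 12), (10, 15), (11, 11), (13, 16)]
first_day, last_day = 9, 16
T = last_day - first_day + 1              # observation window in days

lead_times = [last - first + 1 for first, last in items]
N = sum(lead_times) / T                   # average number of items on the board
lam = len(items) / T                      # average arrivals per day (lambda)
W = sum(lead_times) / len(items)          # average lead time per item

assert abs(N - lam * W) < 1e-12           # Little's Law holds exactly
```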


[Rij2014a] http://blog.xebia.com/why-littles-law-works-always/

[Rij2014b] http://blog.xebia.com/applying-littles-law-in-agile-games/

Achieve The Unthinkable using Hyper-Sprints

Wed, 01/20/2016 - 13:17

2015-06-25 AMSTERDAM - World champion sprinter Dafne Schippers poses next to the Nuna 7S of the Nuon Solar Team. In the Olympic Stadium the athlete takes on the Nuon Solar Team, world champions in solar racing. Projects such as Nuna and Forze are coached by Xebia's hardware Scrum coaches.

In my opinion, the best indicator of how "agile" teams actually are is their sprint length. The theory says 2-4 weeks. To be honest, as an agile coach, this doesn't always feel agile to me.

As I wrote in one of my previous posts, in my opinion the ultimate form of agility is nature. Nature's sprint length seems to vary from the billions of years in which the universe was created to the fraction of a second in which matter is formed.

Of course, it's nonsense to state that we could end up with sprints of just a few nanoseconds. But on the other hand, we see our society speeding up dramatically. Where bringing a service or product to market could take years only a couple of years ago, it can now be a matter of days, even hours. Think about the development of disruptive apps and technology like Uber and 3D printing.

In these disruptive examples, a sprint length of 2 weeks can be a light year. Even in Scrum we can be trapped in our patterns here. Why don't we experiment with shorter sprint lengths? All agile rituals are relative in time: during build parties and hackathons I often use sprints of only 30 or 60 minutes, for example 5 minutes for planning, 45 minutes for the sprint, 5 minutes for the review/demo, and 5 minutes for the retrospective. Combined with a fun party atmosphere and competition, this creates a hyper-productive environment.

Try some hyper sprinting next to your regular sprints. You’ll be surprised how ultra-productive and fun they are. For example, it enables your team to build a car in just an afternoon. Enjoy!

Automated deployment of Docker Universal Control Plane with Terraform and Ansible

Tue, 01/19/2016 - 20:06

You got into the Docker Universal Control Plane beta and you are ready to get going, and then you see a list of manual commands to set it up. As you don't want to do anything manually, this guide will help you set up DUCP in a few minutes using just a couple of variables. If you don't know what DUCP is, you can read the post I made earlier. The setup is based on one controller and a configurable number of replicas that automatically join the controller to form a cluster. There are a few requirements we need to address to make this work, such as setting the external (public) IP while running the installer and passing the controller's certificate fingerprint to the replicas during setup. We will use Terraform to spin up the instances, and Ansible to provision them and let them connect to each other.


Updated 2016-01-25: v0.7 has been released, and a Docker Hub account is no longer needed because the images have been moved to the public namespace. This is updated in the 'master' branch of the GitHub repository. If you still want to try v0.6, you can check out tag 'v0.6'!

Before you start cloning a repository and executing commands, let's go over the prerequisites. You will need:

  • Access to the DUCP beta (during installation you will need to login with a Docker Hub account which is added to the 'dockerorca' organization, tested with v0.5, v0.6 and v0.7. Read update notice above for more information.)
  • An active Amazon Web Services and/or Google Cloud Platform account to create resources
  • Terraform (tested with v0.6.8 and v0.6.9)
  • Ansible (tested with 1.9.4 and

Step 1: Clone the repository

CiscoCloud's terraform.py is used as an Ansible dynamic inventory to find our Terraform-provisioned instances, so --recursive is needed to also fetch the Git submodule.

git clone --recursive https://github.com/nautsio/ducp-terraform-ansible
cd ducp-terraform-ansible

Step 2.1: AWS specific instructions

These are the AWS specific instructions, if you want to use Google Cloud Platform, skip to step 2.2.

For the AWS-based setup, you will be creating an aws_security_group with rules in place for HTTPS (443) and SSH (22). With Terraform you can easily specify what you need by adding ingress and egress configurations to your security group. By specifying 'self = true', the rule is applied to all resources in the security group being created. In the single aws_instance for the ducp-controller we use the lookup function to get the right AMI from the list specified in vars.tf. Inside each aws_instance we can reference the created security group as "${aws_security_group.ducp.name}". This is really easy and keeps the file generic. To configure the number of instances for ducp-replica we use the count parameter. To identify each instance to our Ansible setup, we specify a name using the tags parameter. Because we use the count parameter, we can generate a name by combining a predefined string (ducp-replica) with the index of the count, using the concat function like so: "${concat("ducp-replica",count.index)}". The sshUser parameter in the tags block is used by Ansible to connect to the instances. The AMIs are configured inside vars.tf, and by specifying a region, the correct AMI is selected from the list.

variable "amis" {
    default = {
        ap-northeast-1 = "ami-46c4f128"
        ap-southeast-1 = "ami-e378bb80"
        ap-southeast-2 = "ami-67b8e304"
        eu-central-1   = "ami-46afb32a"
        eu-west-1      = "ami-2213b151"
        sa-east-1      = "ami-e0be398c"
        us-east-1      = "ami-4a085f20"
        us-west-1      = "ami-fdf09b9d"
        us-west-2      = "ami-244d5345"
    }
}

The list of AMIs

    ami = "${lookup(var.amis, var.region)}"

The lookup function

Let's configure the variables so Terraform can use them to create the instances. Inside the repository you will find a terraform.tfvars.example file. Copy or move this file to terraform.tfvars so that Terraform picks it up during a plan or apply.

cd aws
cp terraform.tfvars.example terraform.tfvars

Open terraform.tfvars with your favorite text editor so you can set up all variables to get the instances up and running.

  • region can be selected from available regions
  • access_key and secret_key can be obtained through the console
  • key_name is the name of the key pair to use for the instances
  • replica_count defines the amount of replicas you want

The file could look like the following:

region = "eu-central-1"
access_key = "string_obtained_from_console"
secret_key = "string_obtained_from_console"
key_name = "my_secret_key"
replica_count = "2"

By executing terraform apply you can create the instances; let's do that now. Your command should finish with:

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Step 2.2: Google Cloud Platform specific instructions

On GCP it's a bit easier to set everything up. Because there are no per-region images/AMIs, we can use a disk image with a static name. And because the google_compute_instance has a name variable, you can use the same count trick as on AWS, but this time without the metadata. By tagging the nodes with https-server, port 443 is automatically opened in the firewall. Because you can specify the user that should be created with your chosen key, setting ssh_user is needed to connect with Ansible later on.

Let's setup our Google Cloud Platform variables.

cd gce
cp terraform.tfvars.example terraform.tfvars

Open terraform.tfvars with your favorite text editor so you can set up all variables to get the instances up and running.

The file could look like the following:

project = "my_gce_project"
credentials = "/path/to/file.json"
region = "europe-west1"
zone = "europe-west1-b"
ssh_user = "myawesomeusername"
replica_count = "2"

By executing terraform apply you can create the instances; let's do that now. Your command should finish with:

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Step 3: Running Ansible

The instances should all be there; now let's install the controller and add the replicas. This setup uses terraform.py to retrieve the created instances (and IP addresses) based on the terraform.tfstate file. To make this work you need to specify the location of the tfstate file by setting TERRAFORM_STATE_ROOT to the current directory. Then you specify the script to look up the inventory (-i) and the site.yml where the roles are assigned to the hosts.

There are two roles that are applied to all hosts, called common and extip. Inside common, everything is set up to get Docker running on the hosts: it configures the apt repo, installs the docker-engine package and finally installs the docker-py package, which Ansible needs in order to use Docker. Inside extip there are two shell commands to look up external IP addresses. If you want to access DUCP on the external IP, that IP should be present in the certificate that DUCP generates. Because the external IP addresses are not found on GCP instances, and I wanted a generic approach where you only need one command to provision both AWS and GCP, I chose to look them up and register the variable extip with whichever lookup succeeded on the instance. A second reason to use the external IP is that the replicas need the external IP of the controller to register themselves. By passing the --url parameter to the join command, you specify which controller to register with.

--url https://"{{ hostvars['ducp-controller']['extip'] }}"

The extip variable used by the replicas

The same goes for the certificate's fingerprint: a replica must provide the fingerprint of the controller's certificate to register successfully. We can access that variable the same way: "{{ hostvars['ducp-controller']['ducp_controller_fingerprint'].stdout }}". It specifies .stdout to use only the stdout part of the registered command result, because the result also contains the exit code and so on.

To supply external variables, you can inject vars.yml through --extra-vars. Let's set up vars.yml by copying the example file and configuring it.

cd ../ansible
cp vars.yml.example vars.yml

As stated before, the installer will log in to Docker Hub and download images that live under the dockerorca organization; your account needs to be added to this organization for this to succeed. Fill in your Docker Hub account details in vars.yml, and choose an admin username and password for your installation. If you use ssh-agent to store your SSH private keys, you can proceed with the ansible-playbook command; otherwise, specify your private key file by adding --private-key <priv_key_location> to the list of arguments.

Let's run Ansible to set up DUCP. You need to change to the directory where the terraform.tfstate file resides, or change TERRAFORM_STATE_ROOT accordingly.

cd ../{gce,aws}
TERRAFORM_STATE_ROOT=. ansible-playbook -i ../terraform.py/terraform.py \
                       ../ansible/site.yml \
                       --extra-vars "@../ansible/vars.yml"

If all went well, you should see something like:

PLAY RECAP *********************************************************************
ducp-controller            : ok=13   changed=9    unreachable=0    failed=0   
ducp-replica0              : ok=12   changed=8    unreachable=0    failed=0   
ducp-replica1              : ok=12   changed=8    unreachable=0    failed=0

To check out our brand new DUCP installation, run the following command to extract the IP addresses from the created instances:

TERRAFORM_STATE_ROOT=. ../terraform.py/terraform.py --hostfile

Copy the IP address listed in front of ducp-controller and open up a web browser. Go to https://<ip> and log in with your chosen username and password.

ducp login

Let me emphasise that this is not a production-ready setup, but it can definitely help if you want to try out DUCP and perhaps build a production-ready version from this setup. If you want support for other platforms, please file an issue on GitHub or submit a pull request. I'll be more than happy to look into it for you.

Judo Strategy

Thu, 01/14/2016 - 23:00

In the age of disruption incumbents with decades of history get swept away by startups at an alarming rate. What allows these fledgling companies to succeed? What is their strategy and how can you defend against such an opponent?

I found that the similarities between Judo and business strategy allow us to apply the Judo principles to become a Product Samurai.

There are people who learn by reading (carry on doing so), people who learn by listening and observing (see the video registration), and people who learn by experiencing (sign up here). With that out of the way, let's get started.

Enter the dojo

The dojo is the place where judokas come to practice. One of the key things that makes Judo great is that it can be learned, and it can be taught. To illustrate just how powerful it is, and what the three core principles are, I asked my 9-year-old daughter to give a demonstration.

The three principles of Judo Strategy

Jigorō Kanō founded Judo after having studied many different styles. Upon reaching a deeper understanding of what works and what doesn't, he created his philosophy based on the rule of "maximum effect with minimal effort". Whatever contributed to this rule became part of Judo; whatever did not was discarded.

The operation of disruptors is not much different. With no past to hang on to, start-ups are free to pivot and to take technology, processes and tools from the best offerings and combine them into a new technique. If they do not operate on the principle of "maximum effect with minimal effort", they will die. Incumbents do have that luxury, yet usually fail to leverage it: they typically have the capital, equipment, skills, market knowledge, channels and people, but still choose not to use them.


The first principle in Judo is movement. Before we get in close with our opponent, we seek to grab him in a favorable position. Maybe we can grip a sleeve or an arm and catch the opponent off guard. As a disruptor, I also seek uncontested ground. A head-on attack on the core product of an established player is usually met with great resistance; as a disruptor I cannot hope to win that battle.

Variables in the Lean Canvas

Figure: Variables in the Lean Canvas

So seeking uncontested ground means launching under the radar. This goes against the advice of your marketing director, who will tell you to make "as much noise as possible". That will indeed attract more people, but it will also tell your opponent exactly what you are doing. So align your marketing with your ability to execute. Why go multilingual when you can only serve local customers? Why run a nationwide campaign when you are still piloting in a single city? Why pinpoint the shortcomings in your competitor's product when you cannot outpace them in development (yet)?

At some point contact is inevitable, but a good disruptor will have chosen a vantage point. By the time Dell got noticed, it had a well-oiled distribution network in place and was able to scale quickly, whereas the competition still relied on brick-and-mortar partners. Digital media like nu.nl came out of nowhere, and by the time traditional newspapers noticed them it was not only hard to catch up, but their opponent had also defined where the battle would take place: in the land of apps, away from the strengths of traditional media. There is still a chance for incumbents, and that is to out-innovate your opponent; to do so you have to absorb some disruptor DNA yourself.


The second principle is balance. Once we have gripped our opponent, we try to destroy their balance whilst strengthening our own. Ironically, this usually means keeping your enemies close.


Figure: Puppy ploy

Perhaps the best possible example of this is the “puppy ploy” that Apple pulled with the music industry in the early days of iTunes. By emphasizing that downloadable music was not really the labels’ business (it only represented 1 or 2%) and was highly unprofitable (due to illegal sources like Napster and Kazaa), Apple obtained the position it still holds today and became the dominant music platform. History repeats itself: a small company from Sweden did the same thing with streaming, rather than owning, the music. If you close your eyes you can hear them pitch: “it’s only a few percent of your business, and it’s like radio; does radio affect your business?”.

A little closer to home, I’ve seen first-hand how a major eCommerce platform brokered deals with numerous brands. Sure, there is a kickback involved, but it prevents your competitors from opening their own store. By now the platform has become the dominant player, and the brands have come to rely on their digital partner to run the shop for them. It’s a classic example of keeping your enemies so close that they cannot leverage their strength.


My favourite category of throwing techniques (Nage Waza) are the Sutemi, or in English, “sacrifice throws”. In these techniques you typically sacrifice your own position so you can leverage the power of your opponent.

Basically it means: go after sunk cost. Observe your opponent and learn where he has invested. Virgin Air does not fly out of major airports, thereby circumventing the enormous investment that other airlines have made. Has your opponent invested in a warehouse to support a one-day delivery service? Make delivery free! Is it a platform battle? Open-source your platform and make money from running services on top of it.

Does it hurt? Of course it does! This is why the first thing you learn in Judo is breaking your fall (ukemi waza). The question is not whether you will fall, but whether you can get back up quickly enough. This is not a plea for polka-style pivoting startup behavior: you still need a strategy and need to stick to your product vision, but be prepared to make sacrifices to get there.

I once ran a customer journey mapping workshop at Al Jazeera. Though we focused on apps, the real question was: “what is the heart of the brand?” How can we be a better news agency than ABC, BBC, CNN, etc.? By creating better articles? By providing more in-depth news? It turned out they could send photographers where the others could not. The competitors had invested in different areas, so by creating a photo-driven news experience Al Jazeera would force them to fight their own sunk cost.

If you manage to take the battle to uncontested ground and have destroyed your opponent’s balance, his strength will work against him. It took Coca-Cola 15 years to respond to the larger Pepsi bottle due to its investment in its iconic bottle. By the time they could and wanted to produce larger bottles, Pepsi had become the second-largest brand. WhatsApp reached 10 billion messages in 2 years; you don’t have Coca-Cola’s luxury anymore.


Figure: Awesome machines that you need to let go of

Why did Internet-only news agencies like nu.nl score a dominant position in the mobile space? Because the incumbents were too reluctant to cannibalize their investments in dead-tree technology.

Key takeaway

Judo can be learned and so can these innovation practices. We have labeled the collection of these practices Continuous Innovation. Adopting these practices means adopting the DNA of a disruptor.

It’s a relentless search for unmet market needs, operating under the radar until you find market fit. You can apply typical Lean Startup techniques like Wizard of Oz tests, landing pages or product bootcamps.

Following through fast requires scalable architecture principles and an organization that can respond to change. As an incumbent, watch out for disruptors that destroy your balance, typically by running a niche of your business for you that will become strategic in the future.

Finally: if you are not prepared to sacrifice your current product for one that addresses the customer’s needs better, someone else will do it for you.


This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.



Mapping biases to testing, Part 1: Introduction

Wed, 01/13/2016 - 12:40

We humans are weird. We think we can produce bug free software. We think we can plan projects. We think we can say “I’ve tested everything”. But how is that possible when we are governed by biases in our thinking? We simply cannot think about everything in advance, although we like to convince ourselves that we can (confirmation bias). 

In his book “Thinking, Fast and Slow”, Daniel Kahneman explains the most common thinking biases and fallacies. I loved the book so much I’ve read it twice and I’ll tell anyone who wants to listen to read it too. For me it is the best book I ever read on testing. That’s right: a book that by itself has nothing to do with testing taught me most about it. Before I read the book I wasn’t aware of all the biases and fallacies that are out there. Sure, I noticed that projects always finished late and wondered why people were so big on planning when it never happened that way, but I didn’t know why people kept believing in their Excel sheets. In that sense, “Thinking, Fast and Slow” was a huge eye opener for me. There are lots of examples in the book that I answered incorrectly, proving that I’m just as gullible as the next person.


But that scared me, because it is my job to ‘test all the things’, right? But if my thinking is flawed, how can I possibly claim to be a good tester? I want to try to weed out as much of my thinking fallacies as I can. This is a journey that will never end. I want to take you with me on this journey, though. The goal is as always: improve as a tester. Enjoy the learning process and explore. I feel the need to put a disclaimer here. This is not a scientific type of blog series. I will provide sources where I think they’re necessary, but the point of this series is to take you on a journey that is for the most part personal. I hope it will benefit you as well! My goal is mainly to inspire you to take a look inwards, at your own biases.

Before we continue I need to explain a few basic concepts: fast and slow thinking, heuristics, biases and fallacies. I will conclude this first post with a list of the biases and fallacies that I will cover in this series. This list can grow of course, based on the feedback I hopefully will receive.

Fast and slow thinking

This is a concept taken from Kahneman’s book. Fast thinking, called “System 1 thinking” in the book, is the thinking you do on autopilot. When you drive your car and you see something happening, you react in the blink of an eye. It’s also the thinking you do when you meet someone new. In a split second you have judged this person based on stereotypes. It just happens! It’s fast, automatic, instinctive, emotional. The system 1 thinking is the reason we are thriving today as a species (it helped us escape from dangerous situations, for example).

On the other hand, there’s “System 2 thinking”. This is the type of thinking that takes effort; it’s slow. It’s deliberate. For example, you use system 2 when you have to calculate (in your head) the answer to 234 x 33 (as opposed to 2 x 3, which you do with System 1).

There is one huge problem: we make all kinds of mistakes when it comes to using these systems. Sometimes we use system 1 to analyse a problem while system 2 would be more appropriate. In the context of testing: when someone comes up to you and says “is testing finished yet?”, you might be tempted to answer “no” or “yes”, while this is really the type of question that cannot be answered with a simple yes or no. If you want to be obnoxious you can say testing is never finished, but a more realistic conversation about this topic would be based around risk, in my opinion.

Often, when people ask a seemingly simple or short question, such as "is testing finished yet?", they mean something different entirely. In my context, if the Product Owner would ask me “is testing finished yet?”, for me it translates to: “do you think the quality of our product is good enough to be released? Did we build the thing right? Did we build the right thing? I value your advice in this matter, because I’m uncertain of it myself”. But if I happen to be in a foul mood, I have an option to just say "yes", and that would have been my system 1 answering.

Putting in the mental effort to understand that a simple question can actually be about something else, asking questions to find out what the other person truly means and crafting your answer to really help them, is hard work. Therefore, you have to spend your energy wisely.

Develop your system 1 and system 2 senses: when do you use which system? And then there’s the matter of choice. It would be silly to think you can always choose which system you use.

That brings us to heuristics.


Heuristics

Definition on Wikipedia: “A heuristic technique, often called simply a heuristic, is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.”

Heuristics are powerful, but you need to spend time reevaluating and/or adapting them once in a while, and for that you need system 2. Why do you need to reevaluate your heuristics? Because you are prone to fall for biases and fallacies.


We need to use heuristics, but they are based on system 1. When you are an experienced tester, you have a huge toolbox of heuristics that help you during testing. That’s a good thing, but it comes with a risk. You might start to trust your heuristic judgement a little too much, but you can't use a hammer for everything, right?

Bias and Fallacy, definition and meaning

A bias “is an inclination or outlook to present or hold a partial perspective, often accompanied by a refusal to consider the possible merits of alternative points of view.”

A fallacy “is the use of invalid or otherwise faulty reasoning, or "wrong moves" in the construction of an argument.”

Most of the thinking errors I will cover in this series are biases, but it is good to know the difference between fallacy and bias. A bias involves a mindset; you see something in a pre-conceived way. It influences how you experience things. Stereotyping is a common example of a bias. A fallacy, on the other hand, is an untruth. It is a statement or belief that lacks truthfulness.

There is more than one type of bias, but in this blog series I will talk about cognitive biases, for which the definition is “[...] a repeating or basic misstep in thinking, assessing, recollecting, or other cognitive processes.”

Since testing is a profession that relies heavily on cognition and mental judgement, and we are only human, it’s no wonder that we make mistakes. You cannot get rid of all your biases; that would defy human nature. But in the context of testing it’s a great idea to challenge yourself: which biases and fallacies are actually hurting my testing activities?

However, you have to realise that biases can work to your advantage as well! Since it is part of our human nature to be biased, we should use that fact. With regards to testing that could mean: get more people to do testing. Every person brings his or her unique perspective (with biases) to the table and this will result in more information about the application under test.

What’s next

In this blog series I hope to shed some light on a number of biases and fallacies and what harm or good they can do in testing. I will cover the following biases, fallacies and effects:

If you have more input for biases or fallacies that you want to see covered, please leave a comment or leave me a tweet @Maaikees. In the meantime, you know which book you have to read!

Profiling zsh shell scripts

Tue, 01/12/2016 - 09:27

With today's blazingly fast hardware, our capacity to "make things slow" continues to amaze me. For example, on my system, there is a noticeable delay between the moment a terminal window is opened, and the moment the command prompt actually shows up.

This post explores how we can quickly quantify the problem and pinpoint the main causes of the delay.

Quantifying the problem

Let's first see where the problem might be. A likely candidate is of course my ~/.zshrc, so I added 2 log statements: one at the top, one at the bottom:

date "+%s.%N"

This indeed showed my ~/.zshrc took about 300ms, enough to cause a noticeable delay.
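Spelled out, that instrumentation is just two timestamps and a subtraction. A minimal sketch (assuming GNU date, whose %N format prints nanoseconds, and using awk for the floating-point math):

```shell
# Top of ~/.zshrc: record the start time (GNU date; %N prints nanoseconds).
zshrc_start=$(date "+%s.%N")

# ... the rest of ~/.zshrc would normally run here ...

# Bottom of ~/.zshrc: take a second timestamp and print the elapsed milliseconds.
zshrc_end=$(date "+%s.%N")
awk -v a="$zshrc_start" -v b="$zshrc_end" \
    'BEGIN { printf "zshrc took %.0f ms\n", (b - a) * 1000 }'
```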

Quick insight: zprof

The quickest way to get an idea of the culprit is the zprof module that comes with zsh. You simply add zmodload zsh/zprof to the top of your ~/.zshrc, and the zprof built-in command will show a gprof-like summary of the profiling data.
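In ~/.zshrc that boils down to two lines around the existing contents:

```shell
# First line of ~/.zshrc: load the module so profiling data is collected.
zmodload zsh/zprof

# ... the rest of ~/.zshrc ...

# Last line of ~/.zshrc: print a gprof-like report of where the time went.
zprof
```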

A notable difference between gprof and zprof is that gprof measures CPU time, whereas zprof measures wall-clock time.

This is fortunate: CPU time is the time a program actually spends consuming CPU cycles, and excludes any time the program was, for example, waiting for I/O. It would be fairly useless to profile zsh that way, because it probably spends most of its time waiting for invoked commands to return.

zprof provides fairly rich output, including information about the call hierarchy between functions. Unfortunately, it measures performance per function, so if your functions are long you're still left wondering which line took so long.

Digging deeper: xtrace

An approach to profiling zsh scripts that gives per-line metrics is xtrace. With xtrace enabled, each command that zsh executes is also printed to standard error, prefixed by a special prompt that can be customized through the PS4 variable to include things like the current timestamp.

We can collect these detailed statistics by adding to the top of our ~/.zshrc:

exec 3>&2 2>/tmp/zshstart.$$.log
setopt xtrace prompt_subst

And to the bottom:

unsetopt xtrace
exec 2>&3 3>&-
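For instance, a PS4 carrying a timestamp and the source location of every traced command could look like the sketch below (an assumption, not from the original setup; %N and %i expand to the file/function name and line number):

```shell
# A possible PS4: epoch seconds with microseconds, then the file/function
# name (%N) and line number (%i) of each traced command.
# The %D{%s.%6.} escape needs a reasonably recent zsh.
PS4=$'%D{%s.%6.} %N:%i> '
```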

There are 2 problems with this detailed trace:

  • This approach will provide confusing output when there is any parallelism going on: trace messages from different threads of execution will simply get interleaved
  • It is an overwhelming amount of data that is hard to digest

When you're dealing with parallelism, perhaps you can first use zprof and then only xtrace the function you know is a bottleneck.
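One way to scope the trace like that, sketched below with a hypothetical slow_init function, is zsh's localoptions, which restores the original option state when the function returns:

```shell
# Hypothetical example: trace a single suspected-slow function instead of
# the whole ~/.zshrc. localoptions restores the option state on return,
# so xtrace is switched off automatically afterwards.
slow_init() {
    setopt localoptions xtrace
    # ... the expensive initialization to be profiled ...
}
```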

When you're overwhelmed by the amount of data, read on...

Visualizing: kcachegrind

If we assume there's no parallelism going on, we can visualize our zsh script profile using kcachegrind. This tool is intended to visualize the call graphs produced by valgrind's callgrind tool, but since the file format used is fairly simple we can write a small tool to convert our xtrace output.
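To illustrate how simple such glue can be, here is a hypothetical converter. It assumes trace lines of the form "&lt;epoch.micros&gt; &lt;file&gt;:&lt;line&gt;&gt; command" (i.e. a PS4 carrying a timestamp and source location) and strictly sequential execution; it attributes the wall-clock gap between two consecutive trace lines to the first of them, and emits the plain-text callgrind format that kcachegrind reads:

```shell
# xtrace2callgrind: convert xtrace lines of the form
#   <epoch.micros> <file>:<line>> command...
# into the callgrind text format that kcachegrind can open.
xtrace2callgrind() {
    awk '
    BEGIN { print "events: Microseconds" }
    {
        ts = $1
        file = $2; sub(/:.*/, "", file)      # everything before the colon
        line = $2; sub(/^[^:]*:/, "", line)  # everything after the colon...
        sub(/>.*/, "", line)                 # ...minus the "> " prompt tail
        # charge the time elapsed since the previous command to that command
        if (NR > 1)
            printf "fl=%s\nfn=(toplevel)\n%s %.0f\n", pfile, pline, (ts - pts) * 1000000
        pts = ts; pfile = file; pline = line
    }
    ' "$1"
}
```

Running `xtrace2callgrind /tmp/zshstart.<pid>.log > callgrind.out.zshrc` and opening the result in kcachegrind then gives a per-line cost view.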



Conclusion

zprof is the easiest and most reliable way to profile a zsh script.

Some custom glue combined several existing tools (zsh's xtrace and kcachegrind) to achieve in-depth insight.

Applying this to shell startup time is of course a rather silly exercise - though I'm quite happy with the result: I went from ~300ms to ~70ms (now mostly spent on autocomplete features, which are worth it).

The main lesson: combining tools that were not originally intended to be used together can produce powerful results.


The Business Support Team Pattern

Thu, 12/31/2015 - 01:22

Lately I've encountered several teams that organize their work using Agile methods and that all exhibit a similar pattern. Teams (or actually the work) having such a work pattern I call Business Support Teams. This type of team is usually responsible for operating the application, supporting the business in using the application, and developing new features on top of the (usually third-party) application.

The nature of the work may be plannable or highly ad hoc, e.g. production incidents and/or urgent requests from the business. In practice I notice that the more ad hoc work a team has to deal with, the more it struggles with approaches based on a single backlog of work.

In this post I'll show a set-up using boards and agreements that works very nicely for these types of teams.


In practice, teams that start with Agile often default to using Scrum. It initially provides a structure for teams to start off with and sets a cadence for frequent delivery of work and feedback loops. Such teams often start with a 'typical' Scrum board consisting of 3 lanes: to do, in progress, and done.

Here, the team has a backlog consisting of features to support the business, a visual board with three lanes, Definition of Ready, Definition of Done, and a cadence of 2 or sometimes 3 week sprints.

Note: the Definition of Ready is not part of Scrum, but it is a good practice often used by Scrum teams. See also this blog.

Business Support Team

What makes the Business Support Team different from other teams is that besides the list of features (application enhancements) they have other types of work. Typically the work includes:

  • Requests for information, e.g. reports,
  • New features to accelerate the business,
  • Long term improvements,
  • Keeping the application and platform operational
    • Daily and routine operational jobs
    • Handling production incidents

This follows the pattern described by Serge Beaumont in this presentation with the addition of Business Requests ('Requests for information').

Commonly Encountered Dissatisfactions

From a business point of view the customers may experience any of the following:

  • Unpredictable service as to when new features become available,
  • Limited support for questions and such,
  • Long waiting times for information and/or reports.

On the other hand, the team itself may experience that:

  • Work needed to keep the application operational severely limits the capacity to work on new features,
  • Interruptions due to incoming requests and/or incidents that require immediate attention cause longer lead times,
  • Too much work in progress,
  • Pressure to deliver for certain groups of business users comes at the cost of other stakeholders.
Business Expectations

The expectations from the customer with regard to the work items typically are (but may vary):

  • Requests for information, e.g. reports
    Nature: ad hoc, and may vary;
    Expectation: typically ranges from 1 week to 1 month
  • New features to accelerate the business
    Nature: continuously entering the backlog;
    Expectation: predictability
  • Keeping the application and platform operational
    • Daily and routine operational jobs
      Expectation: just to have a running platform 😉
    • Handling production incidents
      Expectation: as fast as possible

From the team's perspective:

  • Long term improvements,
    Expectation: be able to spend time on them regularly
  • Keeping the application and platform operational
    • Handling production incidents
      Expectation: as little disruption to the team as possible

The challenge for the team is to be predictable enough regarding lead times for new features, while at the same time being able to take up work that requires immediate attention and keeping acceptable lead times on business requests.

Board & Policies

A board and set of policies that work very well is shown to the right.


The columns are kept the same as what the team already has. The trick to balancing the work is to treat the different types of work differently. The board set-up above has four lanes:

Expedite: Reserved for work that needs to be dealt with immediately, e.g. production incidents. Sometimes also called 'Fast Lane'. Maximum of 1 work item at all times.

(Regular) Changes: Holds regular type of work which needs to be 'predictable enough'.

Urgent: Work that needs to be delivered within a certain service level, e.g. within 5 working days. Mainly business requests for information and support, and may include problems with priority levels lower than 1 😉

Operational: Meant for all tasks that are needed to keep the application up & running. Typically daily routine tasks.

Note: Essential to making this work is to agree upon WiP (work in progress) limits and to set criteria for when work is allowed to enter the lanes. This is described in the section below.

Note: As mentioned above this basically follows the pattern of [Beaumont2015] with the row for 'Urgent' work added.

Policy Example

As explained in [Beaumont2015], the Definition of Ready and the Definition of Done guard the quality of work that enters and leaves the team, respectively.

Definition of Run

Specifies the state of the application: what does it mean for it to be 'running'? Work items that bring the system back into this state go into the Expedite lane. Example:

  • Application is up & running,
  • Application's response is within ... seconds,
  • Critical part of application is functioning,
  • ...

Note: There is one other type of work item that is usually allowed in this lane: items that have a deadline and are late for delivery... and have a policy in place!

Definition of Change

Regular requests for new features and enhancements. Predictability is most important; a certain variation in lead time is considered acceptable.

Service level is ... weeks with 85% on time delivery (1 standard deviation service level).

Definition of Request

Business requests. Typical use includes requests for support and for the creation of reports (e.g. national banks may require reports as part of an audit), and other types of requests that are not critical enough to go into the expedite lane, are more than a couple of hours of work, and for which lead times considerably shorter than those for changes are expected.

Example criteria:

  • Business requests, requiring less than 1 week of work, or
  • A problem report with severity of 2, 3, or 4

Service level is .... working days with 95% on time delivery.

Definition of Operational

Describes the routine tasks that need to be done regularly to keep the system running as stated in the Way of Working.

Example criteria:

  • Less than 2 hours of work, and
  • Routine task as described in the Way of Working, and
  • Can be started and completed on the same day,
  • Maximum of ... hours per day by the team.

It is important to limit the amount of time spent on these items by the team so the team can maintain the expected service levels on the other types of work.


Of all the aforementioned work item types, only the 'Change' items are plannable; 'Run' items are very ad hoc by nature, as is the case with 'Operational' tasks (some are routine and some just pop up during the day). Requests from the business tend to come in on short notice with the expectation of short delivery times.

Because of the ad hoc nature of most work item types, planning and scheduling of work needs to happen more often than once every sprint. Replenishment of the 'To do' column is done continuously for the rows whenever new work arrives. The team can agree on a policy for how often and how they want to do this.

The sequence of scheduling work between the rows is done either by the product owner or self-regulated by a policy that includes setting WiP limits over the rows. This will effectively divide the available team's capacity between the work item types.


Business Support Teams follow a pattern very similar to that described in [Beaumont2015]. In addition to the 'Run', 'Change', 'Operational' types of work the type 'Request' is identified. The work is not described by a single backlog of similar work items but rather as a backlog of types of work. These types are treated differently because they have a different risk profile (Class of Service).

This enables the team to be predictable enough on 'Changes' with an acceptable (not so small) variation in lead time, have a higher service level on ad hoc requests from the business ('Request'), plan their daily routine work, and at the same time deliver 'Run' items as fast as possible.

Allowing a larger variation on the 'Change' items allows for a higher service on the 'Request' items.


[Beaumont2015] The 24 Man DevOps Team, Xebicon 2015, Serge Beaumont, https://xebicon.nl/slides/serge-beaumont.pdf

Interpreting Scala in the Browser

Thu, 12/24/2015 - 15:38

At this year's Scala Exchange keynote, Jessica Kerr got a nice round of applause when she asked us all to, please, include our import statements when sharing code snippets.

That's great advice - but can't we do better? What if we could share actual running, hackable code?

Turns out we can: there's a number of 'JSFiddle'-style projects around, and I'll take a brief look at some of them.


Scala-Js-Fiddle leverages Scala.js: once compiled on the server, your Scala application runs in the browser.

This allows you to create beautiful visuals with your running applications, and it could be an awesome teaching tool. Be sure to check out their documentation, which is generated on the fly by a Scala snippet!

The downside of using Scala.js is of course that you're restricted in which libraries you can include.


Scripster by Razvan Cojocaru (github) allows you to write Scala code in a syntax-highlighted environment, and gives some control over which expressions are evaluated.

Even though this tool is not restricted to Scala.js, only a limited number of libraries is included.

Scala Kata

Scala Kata by Guillaume Massé (github) is an attractive editor with completion features, which feels much smoother than Scripster.

While again you don't have control over which dependencies are available, a nice collection is included.

Unlike the other options, Scala Kata does not show the output of the program but the value of the expression on each line, which might be familiar if you've ever worked with the worksheet feature of Scala IDE. While sometimes constraining, this encourages a nicely functional style.

Scala Kata is currently my online Scala environment of choice, though unfortunately it doesn't provide an easy way to store and share your snippets.

Scala Notebook

Scala Notebook looks pretty clean, but unfortunately does not appear to be available in hosted form anywhere.

The project takes its inspiration from the IPython notebook, which is indeed a great example.

An interesting aspect of Scala Notebook is that, somewhat like scala-js-fiddle, it allows rich output: when a command returns HTML or an image, it will be rendered/shown.


Users of scastie will have to do without fancy features such as advanced highlighting and completion, but it gives you complete control over the sbt configuration, making it a very powerful alternative.


Round-up

Overview         intermediate values  completion  git(hub)  dependencies  scala version
Scala-Js-Fiddle  no                   no          yes       no            n/a
Scripster        no                   no          no        limited       2.9.1
Scala Kata       yes                  yes         no        limited       2.11.7
Scala Notebook   yes                  no          no        full control
scastie          no                   no          no        full control  configurable

The future

Some desirable features, like a more 'direct' REPL-like interface, are not widely supported. Also, none of the available offerings appear to be particularly easy to embed into an existing site or blog.

If these services become wildly popular, I wonder whether making sure there is sufficient back-end computing power available might become a challenge. On the other hand, computing is rapidly becoming cheaper, so this might not be an issue after all.


I was amazed by the number of projects that managed to make a functional Scala programming environment available in the browser. While there's more work to be done, there is amazing potential here.

Uncovering the mysteries of Swift property observers

Wed, 12/23/2015 - 11:31

One of the cool features of Swift is property observers, perhaps better known as willSet and didSet. Everyone programming in Swift must have used them. Some people more than others. And some people might use them a little too much, chaining many of them together (me sometimes included). But it’s not always completely obvious when they are called, especially when dealing with structs, because structs can be a bit odd. Let’s dive into some situations and see what happens.


The most obvious situation in which didSet (and willSet) gets called is by simply assigning a variable. Imagine the following struct:

struct Person {
    var name: String
    var age: Int
}

And some other code, like a view controller that is using it in a variable with a property observer.

class MyViewController: UIViewController {
    var person: Person! {
        didSet {
            print("Person got set to '\(person.name)' with age \(person.age)")
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        person = Person(name: "Bob", age: 20)
    }
}

As you would expect, the code in didSet will be executed when the view did load and the following is printed to the console:

Person got set to 'Bob' with age 20

This one is also pretty clear and you probably already know about it: property observers are not executed when the variable is assigned during initialization.

var person = Person(name: "Bob", age: 20) {
    didSet {
        print("Person got set to '\(person.name)' with age \(person.age)")
    }
}

This doesn’t print anything to the console. Also, if you assign person in the init function instead, the property observers will not get called.

Modifying structs

A thing less known is that property observers are also executed when you change the member values of structs without (re)assigning the entire struct. The following sample illustrates this.

class ViewController: UIViewController {

    var person = Person(name: "Bob", age: 20) {
        didSet {
            print("Person got set to '\(person.name)' with age \(person.age)")
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        person.name = "Mike"
        person.age = 30
    }
}


In this sample we never reassign the person in our viewDidLoad function, but by changing the name and age, the didSet still gets executed twice and we get as output:

Person got set to 'Mike' with age 20
Person got set to 'Mike' with age 30
Mutating functions

What applies to changing the values of a struct also applies to mutating struct functions. Calling such a function always results in the property observers being called exactly once. It does not matter whether you replace the entire struct (by assigning to self), change multiple member values, or don’t change anything at all.

struct Person {
    var name: String
    var age: Int

    mutating func incrementAge() {
        if age < 100 {
            age += 1
        }
    }
}

Here we added a mutating function that increments the age as long as the age is lower than 100.

class ViewController: UIViewController {

    var person = Person(name: "Bob", age: 98) {
        didSet {
            print("Person got set to '\(person.name)' with age \(person.age)")
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        person.incrementAge()
        person.incrementAge()
        person.incrementAge()
        person.incrementAge()
    }
}


Our didSet is called 4 times, even though the last two times nothing has changed.

Person got set to 'Bob' with age 99
Person got set to 'Bob' with age 100
Person got set to 'Bob' with age 100
Person got set to 'Bob' with age 100
Changes inside property observers

It is also possible to make changes to the variable inside its own property observers. You can reassign the entire variable, change its values or call mutating functions on it. When you do that from inside a property observer, the property observers do not trigger again, since that would most likely cause an endless loop. Keep in mind that changing something in willSet will have no effect, since your change will be overwritten by the value that was being set originally (Xcode gives a nice warning about this as well).

class ViewController: UIViewController {

    var person = Person(name: "Bob", age: 98) {
        didSet {
            print("Person got set to '\(person.name)' with age \(person.age)")
            if person.name != oldValue.name {
                person.age = 0
                print("Person '\(person.name)' age has been set to 0")
            }
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        person.name = "Mike"
    }

Why it matters

So why does all this matter, you might wonder. Well, you might have to rethink what kind of logic you put inside your property observers and what you keep outside of them. And all of this applies to Arrays and Dictionaries as well, because they are also structs. Let’s say you have an array of numbers that can change, and each time it changes you want to update your UI. But you also want to sort the numbers. The following code might look fine to you at first:

class ViewController: UIViewController {

    var numbers: [Int] = [] {
        didSet {
            updateUI()
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        numbers = [random() % 10, random() % 10, random() % 10, random() % 10]
        numbers.sortInPlace()
    }

    func updateUI() {
        print("UI: \(numbers)")
    }
}

Every time numbers changes, the UI will be updated. But since sortInPlace will also trigger the property observer, the UI gets updated twice:

UI: [3, 6, 7, 5]
UI: [3, 5, 6, 7]

So we should really move the sortInPlace call inside didSet, right before we call updateUI.
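Sketched out (this is my illustration, reusing the updateUI function from above), moving the sort into the property observer looks like this:

```swift
var numbers: [Int] = [] {
    didSet {
        // Mutating the array here does not re-trigger the property
        // observer, so updateUI() runs only once per assignment.
        numbers.sortInPlace()
        updateUI()
    }
}
```

This way the UI always shows sorted numbers, and it is updated exactly once per change.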

Security is maturing in the Docker ecosystem

Mon, 12/21/2015 - 21:34

Security is probably one of the biggest subjects when it comes to containers. Developers love containers, and some ops do as well. But the discussion most of the time boils down to the security aspects of containers: is it safe to use, and what if someone breaks out? The characteristics of containers that we love could also be a weak spot when it comes to security. In this blog I want to show some common methods to establish defence in depth around your containers. This is container-specific, so I won't be talking about locking down the host nodes or reducing the attack surface, e.g. by disabling unneeded Linux daemons.

Read-only containers (Docker 1.5)
First up: the possibility to run a read-only container. By specifying --read-only, the container's rootfs will be mounted read-only, so no process inside the container can write to it. This means that if you have a vulnerability inside your app that allows uploading files, it is blocked by marking the container's rootfs as read-only. This will also prevent applications from logging to the rootfs, so you may want to use a remote logging mechanism or a volume for this.

Usage (docs):
$ docker run --read-only -v /icanwrite busybox touch /icanwrite/here

User-namespaces (Experimental)

Lots of people are waiting for this one to land in stable. Currently, being root in the container means you are also root on the host. If you are able to mount /bin inside your container, you can add whatever you want in there, and possibly take over the host system. With the introduction of user namespaces, you will be able to run containers where the root user inside the container still has privileged capabilities, but outside the container the uid:gid is remapped to a non-privileged user/group. This is also known as phase 1: remapped root per daemon instance. A possible next phase could be full maps and per-container mapping, but this is still under debate.

Usage (docs):

$ docker daemon --userns-remap=default

Seccomp (Git master branch)

With namespaces we have separation, but we would also like to control what can happen inside a running container. That's where seccomp comes into play. Seccomp is short for secure computing mode. It allows you to filter syscalls: you can whitelist only the syscalls your application needs, or, as in the example below, blacklist specific ones. A quick example, given tcpsocket.json:

{
    "defaultAction": "SCMP_ACT_ALLOW",
    "syscalls": [
        {
            "name": "socket",
            "action": "SCMP_ACT_ERRNO"
        }
    ]
}

will result in the following:

# docker run -ti --rm --security-opt seccomp:tcpsocket.json ubuntu bash
root@54fd6641a219:/# nc -l 555
nc: Operation not permitted

Project Nautilus

One of the missing pieces in the ecosystem was checking image contents. There was a great buzz around this when an article was published stating that there were common vulnerabilities in over 30% of the official images on the Docker Hub. Docker got to work, and had been scanning a lot of official images in the background on the Docker Hub before they published anything about it. During DockerCon EU, they announced Project Nautilus, an image-scanning service from Docker that makes it easier to build and consume high-integrity content.

There is not a lot of official information about Nautilus yet. We know it has been running in the background, and Docker says they have secured over 74 million pulls with it. Recently, they created a survey asking questions about how it could be used, so I can only give you some assumptions. First up, what Docker says it does:

  • Image security
  • Component inventory/license management
  • Image optimization
  • Basic functional testing

Here are some pointers on things that may be coming soon:

  • Running Nautilus on-premise
  • Pricing may be per image or per deployed node


AppArmor profiles

By using AppArmor you can restrict capabilities through profiles. The profiles can be really fine-grained, but a lot of people don't want to take the time to write them. These profiles can be applied to running Docker containers. Jessie Frazelle, one of the core maintainers of Docker, created bane to make writing these profiles easier. It takes a TOML input file and will generate and install an AppArmor profile. This profile can then be used when running a Docker container, via the --security-opt flag:

docker run -d --security-opt="apparmor:name_of_profile" -p 80:80 nginx

Docker Security Profiles

These are all parts of securing your containers, and of course Docker is also working on making this as easy to use as possible. If you want to know more about where this is heading, check out the proposal on GitHub to keep yourself up to date.

Removing duplicate elements from a Swift array

Fri, 12/18/2015 - 11:47

Today I had to remove duplicate items from an Array while maintaining the original order. I knew there was no standard uniq function in Swift, so I Googled a bit and found some implementations on StackOverflow. Some of them were good, but I wasn’t completely satisfied with any of them. So of course I tried to see if I could make something myself that I would be satisfied with.
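The excerpt ends before the eventual solution, but a minimal sketch of one common approach looks like this (the name uniq and this implementation are my illustration, not necessarily what the post arrives at):

```swift
extension Array where Element: Equatable {
    // Returns a new array with duplicates removed, keeping the first
    // occurrence of each element and preserving the original order.
    func uniq() -> [Element] {
        return reduce([Element]()) { result, element in
            result.contains(element) ? result : result + [element]
        }
    }
}

[3, 1, 3, 2, 1].uniq() // [3, 1, 2]
```

Note that the contains lookup makes this O(n²); for large arrays a Set-based bookkeeping approach would be faster, at the cost of requiring Hashable elements.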

Labels with padding in iOS

Fri, 12/04/2015 - 17:01

There is no standard way of creating UILabels with padding around the text in iOS. And usually that’s not really necessary, because you should be using Auto Layout to position the label and the space around it. However, there are some occasions when you have a UILabel and just need some padding around it. One such occasion would be the creation of table view headers. Instead of manually wrapping a UILabel in a UIView to achieve this, let’s have a look at some alternatives.
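One such alternative, sketched here as an illustration (the subclass name and inset values are mine), is subclassing UILabel and overriding drawTextInRect and intrinsicContentSize:

```swift
import UIKit

// Hypothetical UILabel subclass that insets its text.
class PaddedLabel: UILabel {
    var padding = UIEdgeInsets(top: 4, left: 8, bottom: 4, right: 8)

    // Draw the text inside the padded rectangle.
    override func drawTextInRect(rect: CGRect) {
        super.drawTextInRect(UIEdgeInsetsInsetRect(rect, padding))
    }

    // Grow the intrinsic size so Auto Layout reserves room for the padding.
    override func intrinsicContentSize() -> CGSize {
        let size = super.intrinsicContentSize()
        return CGSize(width: size.width + padding.left + padding.right,
                      height: size.height + padding.top + padding.bottom)
    }
}
```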

Why Product is Dead and Experience is Killing It

Fri, 12/04/2015 - 09:42

We have arrived at a time when there are no more handoffs between design and development. Where added value trumps effort (hallelujah, man before machine!). Where user stories are no longer about business requirements, but about enabling users. A time where features are replaced by offering an experience and empathy for our customers guide the decisions we make.

Utopia, or daily reality? As my time at a tech startup in a leading UX position is coming to an end, I was recently interviewing with various companies. This experience made me realize we still have some work to do if we (and by “we” I mean the companies we work for) want to make the most out of our UX efforts.

We’ve all heard that customer experience is the new advantage. That’s a great revelation (which is actually not so new, but that’s beside the point), so now what? How do we translate this insight into actionable results? Let’s start at the base.

What is experience?

As B. Joseph Pine II & James H. Gilmore described it back in 1998 in their controversial essay in the Harvard Business Review:

An experience occurs when a company intentionally uses services as the stage, and goods as props, to engage individual customers in a way that creates a memorable event.

Customers have increasingly higher demands. Not just in regard to products and services, but in their experience when interacting with what a company has to offer. Companies have come to realize how important customer- and user experience (CX/UX) is for their site, app, store, service or product.

Empathy plays a central role when designing services and products, because there’s no point in creating something people don’t want or know how to use. Design goes far beyond aesthetics. Good design is just good business.

Let me explain why:

  • It reduces the amount of rework and bug fixes post-launch
  • It prevents misalignment — building features which your customers don’t need or want
  • And therefore aims your focus towards the things that create value for your customers, enabling you to make better choices and maximizing the effectiveness of team efforts.

Other benefits include:

  • Reduced support costs
  • Reduced requests for clarification by the development team
  • More accurate estimates for build time and cost
  • And the obvious things such as a strengthened brand and increased loyalty and advocacy among customers.

Let’s look at the numbers
  • 89% of companies expect to compete mostly on the basis of customer experience by 2016 *
  • Customer power has grown, as 73% trust recommendations from friends and family, while only 19% trust direct communication **
  • 86% of customers will pay more for a better experience ***

If you want to know more about how customer experience drives digital transformation ambitions, go read this latest study conducted by Forrester Consulting.


The mystery of the UX unicorn

You must be thinking: Let’s hire a UX unicorn to sprinkle his/her magical pixie dust and make everything better!

Ehm, no. Stop right there. Because without proper commitment throughout the organization, you won’t get the results you seek.

One of the biggest problems is that we attempt to make customer experience design fit with legacy philosophies and processes which come from a different time for a different type of customer. Putting our customer at the heart of an organization requires a shift in mindset and asks for significant changes to our business processes. Without it, we are just managing businesses the way we always have. Change will mean we go from merely managing the customer experience, to designing it.

As a wise man once said:
Doing the same thing over and over again and expecting different results, is insanity  — Albert Einstein

So now that we’re on the same page, let’s see how we can implement this experience-centered approach and identify what we need to change, in order to catalyze business success.

How to make experience work

Forget McCarthy’s 4 P’s from the ’60s. (OK, don’t completely forget them! Just keep them in the back of your mind.) Today’s consumer is a visual consumer who expects interactive communication and has about an 8-second attention span. It’s no longer about Product, Price, Promotion and Place.

A five-year study published by the Harvard Business Review, involving over 500 marketers and customers across a variety of businesses, found that the 4P’s approach to marketing leads to a real disconnect between what they believe matters and what their customers really want.

The S.A.V.E. framework**** comes closest to what my experiences as a UX Designer have taught me. Even more so for tech companies, since they mostly (but sadly) prioritize technology, and the effort they put into applying technology, over everything else.

So let’s take a look at what S.A.V.E. means:

  • S. is for Solution. Think about the solution of your customers’ problems instead of the features and functions of the product.
  • A. is for Access. It is not about the place, it is not about whether it will be online or offline, your customers need your business to be accessible and you should provide them with it. You choose how.
  • V. is for Value. Customers care about the price, it is natural, but before the price come the concerns about the value. Are the benefits of your product relevant to the price you define?
  • E. is for Education. Your audience needs and wants to be informed. The number of businesses with online presence grows daily as well as the importance of attraction-based marketing.

To obtain the transformation you seek, your organization needs to do the following 3 things:

  1. Management must encourage a solutions mindset throughout the organization.
  2. Management needs to ensure that the design of the marketing organization reflects and reinforces the customer-centric focus.
  3. Management must create collaboration between the marketing and sales organizations and with the development and delivery teams.

See, no unicorn, no pixie dust, just management driving this train towards a better future.


Experience = value

When customer experience is properly integrated into corporate processes, the organization as a whole will benefit. Everyone on the team involved in developing the product or service will be better informed and, as a result, be able to make better decisions. This will have a remarkable impact on the experience your product or service offers: it will not only maximize the ROI of your CX/UX efforts, but your entire development track will be able to create more value in a shorter amount of time. It’s not about building feature after feature after feature; it’s about creating as much value for your customers as you can and getting it out there ASAP.

I didn’t want to bore you with an article about the basic misconceptions people have about user experience and what it is. There’s more than enough on that out there. Instead, I wanted to shed some light on how to successfully incorporate UX into your organization, so that you and your team can maximize its payoff.

To create an experience, UX Designers:

  1. Understand — research and become the user;
  2. Create — think of solutions and opportunities;
  3. Do — prototype and;
  4. Learn — test.

This feeds back into the mindset of empathy: where you as a designer focus on looking outward, instead of inward. Empathy plays a central role when designing services and products, because there’s no point in creating something people don’t want or know how to use.

Assume you’re wrong, know you’ll fail and be determined to learn.

* “Customer Experience on the Rise”, Gartner Research, 2014

** “Consumer Ad-itudes Stay Strong”, Forrester, 2012

*** “Customer Experience Impact Report”, RightNow Technologies, 2011

**** “Rethinking the 4 P’s”, Harvard Business Review, 2013

Creating an Immutable MultiMap in Scala

Wed, 12/02/2015 - 09:30

This post shows a couple of neat Scala tricks by implementing an immutable MultiMap.

A MultiMap is a special kind of Map: one that allows a collection of elements to be associated with each key, not just a single value. For this example, we will start by using Lists to contain the values, hence creating a ListMultiMap.

Adding elements to a Map[A, List[B]]

Before writing any code we should ask ourselves: do we really need a ListMultiMap at all? Why wouldn't a normal Map[A, List[B]] be sufficient?

Of course a Map[A, List[B]] can hold our data just fine, but adding a value can be cumbersome: when the key does not exist yet, a new List must be created - otherwise, our value needs to be added to the existing one. Let's start by putting this code in a helper function:

def addBinding[A,B](map: Map[A, List[B]], key: A, value: B): Map[A, List[B]] = 
  map + (key -> { value :: map.getOrElse(key, Nil) })
Type aliases

If you start using the pattern Map[A, List[B]] a lot, things might become a little easier to read by defining a type alias for it:

type ListMultiMap[A,B] = Map[A,List[B]]

def addBinding[A,B](map: ListMultiMap[A, B], key: A, value: B): ListMultiMap[A, B] =
  map + (key -> { value :: map.getOrElse(key, Nil) })

A type and its alias can be used pretty much interchangeably, there is no 'hierarchy': you can use a ListMultiMap[A,B] where a Map[A, List[B]] is expected and vice-versa.

Pimp My Library

While this helper function makes adding a single value to a MultiMap less painful, it does not look very pretty when doing multiple operations:

val original = Map(1 -> List(2, 3))
val intermediate = addBinding(original, 2, 5)
val result = addBinding(intermediate, 2, 6)

To make this a bit easier to use, we can use the pimp my library pattern:

implicit class ListMultiMap[A,B](map: Map[A, List[B]]) {
  def addBinding(key: A, value: B): ListMultiMap[A, B] = 
    map + (key -> { value :: map.getOrElse(key, Nil) })
}

val result = Map(1 -> List(2, 3))
  .addBinding(2, 5)
  .addBinding(2, 6)

Because here the ListMultiMap[A,B] is defined as an implicit class, whenever you have a Map[A,List[B]] in your code and try to invoke addBinding, the Scala compiler will know how to perform the conversion from Map[A, List[B]] to ListMultiMap[A,B].


Unfortunately, this means there is an overhead associated with using addBinding: every time it is used on a Map[A,List[B]] a new ListMultiMap[A,B] wrapper object is constructed. Luckily, we can remove this overhead by extending AnyVal, turning it into a value class:

implicit class ListMultiMap[A,B](val map: Map[A, List[B]]) extends AnyVal {
  def addBinding(key: A, value: B): Map[A, List[B]] = 
    map + (key -> { value :: map.getOrElse(key, Nil) })
}

val result = Map(1 -> List(2, 3))
  .addBinding(2, 5)
  .addBinding(2, 6)

This tells the compiler that the ListMultiMap class does not have any state of its own, and it can optimize out the overhead of creating a wrapper object.

Removing elements

Of course a map is not complete without a way to remove bindings, so:

implicit class ListMultiMap[A,B](val map: Map[A, List[B]]) extends AnyVal {

  def addBinding(key: A, value: B): Map[A, List[B]] = 
    map + (key -> { value :: map.getOrElse(key, Nil) })

  def removeBinding(key: A, value: B): Map[A, List[B]] = map.get(key) match {
    case None => map
    // Backticks match the existing `value` parameter instead of shadowing it
    case Some(List(`value`)) => map - key 
    case Some(list) => map + (key -> list.diff(List(value)))
  }
}
Control when needed

Another advantage of the 'pimp my library' pattern over simply inheriting from Map is that the user can have control over the performance characteristics of the result by choosing a particular Map implementation.

This control can be extended further by allowing the user to also choose the List implementation. In fact, we can easily make our MultiMap generic over any immutable sequence:

implicit class MultiMap[A,B](val map: Map[A, Seq[B]]) extends AnyVal {
  def addBinding(key: A, value: B, emptySeq: Seq[B] = Nil): Map[A, Seq[B]] = 
    map + (key -> { value +: map.getOrElse(key, emptySeq) })
}
Being more generic

There's still a lot of opportunity for making this implementation even more generic, for example by using the collection.MapLike trait and by more precisely specifying the variance of the generic type parameters. Those, however, are topics that deserve a blogpost of their own.

In short

Using Scala features like type aliases and the 'pimp my library' pattern allows you to write supporting code that can remove distracting boilerplate from your actual program logic.