Xebia Blog

Isomorphism vs Universal JavaScript

Fri, 09/04/2015 - 07:50

Ever since Spike Brehm of Airbnb popularized the term Isomorphic JavaScript people have been confused about what exactly it means to have an Isomorphic JavaScript application and what it takes to build one. From the beginning there were people opposing the term, suggesting alternatives such as monomorphic or polymorphic, whatever that all means. Eventually Michael Jackson (the React guy) suggested the term Universal JavaScript and most people seem to prefer it and proclaim “Isomorphic” to be dead.

To reopen the discussion, JavaScript guru Dr. Axel Rauschmayer recently asked the question: Is Isomorphic JavaScript a good term? I’ve already left a comment explaining my view of things, but I’d like to explain a little more. I used to make the distinction between Functional Isomorphism and Technical Isomorphism. In my talk at XebiCon I explained the difference. Having the server render the HTML on first page load is the functional part, the thing that provides for a better user experience. The technical part is where we use the same code in both environments, which no user ever asked for, but which makes a developer’s life easier (at least in theory).

Continue reading at Medium.com

Robot Framework - The unsung hero of test automation

Fri, 09/04/2015 - 05:47

The open source Robot Framework (RF) is a generic, keyword- and data-driven test automation framework for acceptance test driven development (ATDD). As such it stands alongside similar, but more well-known frameworks, like FitNesse, Cucumber, et alia. The (relative) unfamiliarity of the testing community with the RF is undeserved, since the RF facilitates powerful and yet simple test automation against a variety of interfaces and features some distinct advantages when compared to those other frameworks.

In a series of blog posts, we would like to make a case for the Robot Framework by showing its greatness through a number of hands-on examples from my upcoming workshop. Besides demonstrating its advantages and strengths, we will also expose some of its drawbacks and limitations, as well as touch upon certain risks that flow from harnessing some of its unique features.

Our first three posts will give an introductory overview of the RF, laying the conceptual foundation for the remainder of the series. These three articles will therefore not concentrate on practical, hands-on examples or instructions, but have a more theoretical feel. Moreover, several of the fundamental concepts laid out in them apply not only to the RF, but to most (if not all) test automation frameworks. Consequently, these first three posts target those who lack a basic understanding of test automation (frameworks) in general and/or of the RF in particular. The remainder of the series will also be of interest to more seasoned automation engineers.

We will first look into the basic architecture that underlies the framework and discuss the various components it is composed of. In the second post we will discuss the nature of the keyword-driven approach that the RF entails. The third post will detail a typical test automation workflow.

For a first-hand experience of the pros and cons of the RF, you might want to join the first Robot Framework meetup in the Netherlands.

Robot Framework stack

The RF is written in Python. Consequently, it runs on Python, Jython or IronPython. The framework can thus be used with any OS that is able to run any of these interpreters, e.g. Windows, Linux or OS X. Unless you have a specific reason to do otherwise, the RF should run on Python. A typical situation that would require Jython, for example, would be automating against a Java application or implementing your own RF test library in Java (more on this in a later post). A disadvantage of running on Jython is that quite a few of the low-level test libraries within the RF ecosystem will not be available. Moreover, running on Jython will slap you with a performance penalty. Fortunately, in the mentioned situations, one could still run the stack on Python through the so-called Remote Library Interface mechanism, which can be used to connect the Python stack to an application and/or a test library running in a JVM (on the same or a remote system). We will address this subject, as well, in one of our follow-up articles.

A possible, though highly simplified, set-up of an automation framework is the following:

Generic framework high-level design

Green signifies framework components whereas grey refers to components or artefacts, such as test code and product code, that are to be created by the development organization. The numbers indicate the order in which a typical test execution run would flow (more on this in the third post). The framework components are typical of all of today's test automation frameworks. Obviously, this schema is a simplification of a real-life set-up, which would result in a more complex infrastructural model so as to account for topics such as:

  • a possible distributed setup of the test engine and/or test driver
  • parallel testing against a variety of interfaces (e.g. against REST and some UI) or against a multitude of product configurations/stacks/databases
  • integration within a continuous delivery pipeline and with the test code repository
  • etc.

Mapping these generic components onto concrete run-times within the RF stack, we get the following:

Robot Framework high-level design

The RF itself functions as the central framework engine. It is the core framework, which is augmented by various tools and libraries developed within the RF ecosystem to form the larger, broader framework. (To be precise, in the given example, Selenium Webdriver does not belong to the RF ecosystem. But most of the other available low-level test libraries do.)

Let’s elaborate somewhat on the various components of the framework stack.

Test editor

The test editor is what we use to write, maintain and structure our automation code with. Test code not only consists of test cases, but also of various layers of abstractions, such as re-usable functions (keywords), wrappers, object-maps and global variables.

In the case of the RF, the editor can be anything, ranging from the simplest of text editors to a full-blown IDE. The Robot Framework comes with various editors, such as the RF Integrated Development Environment (RIDE), and with several plug-ins for popular IDEs and text editors such as Eclipse, IntelliJ, Atom, TextMate or even Vim. But of course, you could also use a separate text editor, such as Notepad++. Which editor to use may depend on factors such as the required complexity of the test code, the layers to which one has to contribute (e.g. high-level test cases or re-usable, low-level test functions), the skill set of the involved automation engineers (who may be business stakeholders, testers or developers) or simply personal taste.

Depending on the editor used, you may additionally benefit from features such as code completion, syntax highlighting, code extraction, test case management and debugging tools.

Note that ‘official’ test runs are typically not initiated from within the editor, but through other mechanisms, such as build steps in a CI server or a cron job of some sort. Test runs are initiated from within the editor to test or debug the test code.

Test engine

The test engine, in our case the RF, is the heart of the framework. That is, it is the central component that regulates, coordinates and, as such, ties all components together. For instance, some of the tasks of the engine are:

  • Parsing the test case files, e.g. removing white spaces, resolving variables and function calls and reading external files containing test data (such as multiple username/password pairs)
  • Controlling the test driver (e.g. Selenium Webdriver)
  • Catching and handling test library return values
  • Error handling and recovery
  • Aggregating logs and reports based on the results

Test driver

A test engine, such as the RF, is a generic framework and, as such, cannot itself drive a specific type of application interface, be it UI (e.g. mobile or web) or non-UI (e.g. a SOAP service or an API). Otherwise it would not be generic. Consequently, to be able to drive the actual software under test, a separate layer is required that has the sole purpose of interacting with the SUT.

The test driver (in RF terms a 'test library' or 'low-level test library') is the instance that controls the SUT. The driver holds the actual intelligence to make calls to a specific (type of) interface. That interface may be non-UI (as would be the case with testing directly against a SOAP service, HTTP server, REST API or JDBC database) or UI (as would be the case with testing against a web UI or Windows UI).

Examples of test drivers that are available to the RF are Selenium Webdriver (web UI), AutoIt (Windows UI) or the RF test libraries: Android library, Suds library (SOAP-services), SSH library, etc.

The latter are examples of ‘native’ RF test libraries, i.e. libraries that have been developed against the RF test library interface with the express intent of extending the RF. Some of these RF test libraries in turn re-use (that is, wrap) other open source components. The http library, for instance, reuses the Python ‘requests’ http client.

The former are existing tools, developed outside of the RF ecosystem, that have been incorporated into that ecosystem, by creating thin layers of integration code that make the external functionality available to the framework. Which brings us to the integration layer.

Integration layer

The main responsibility of the integration layer is to expose the functionality, as contained within an external tool or library, to the rest of the framework, mainly the engine and editor. Consequently, the integration layer can also be a limiting factor: only the functionality it exposes is available to your test code.

Through the integration layer, the test code statements (as written in RF syntax) are ‘translated’ into parameterized instructions that adhere to the syntax of the external tool. For instance, in the case of Selenium Webdriver, the RF integration library (called ‘Selenium2Library’) consists of a set of Python (module) files that contain small functions wrapping one or more Webdriver functions. That is, these Python functions contain one or more Webdriver API-compliant calls, optionally embedded in control logic. Each of these Python functions is available within the framework, thus indirectly providing access to the functions exposed by the Webdriver API.

For example, the following function provides access to the Webdriver click() method (as available through the webelement interface):

def click_element(self, locator):
    self._info("Clicking element '%s'." % locator)
    self._element_find(locator, True, True).click()

Within your editor (e.g. RIDE), the function ‘Click element’ can be used in your test code. The editor will indicate that an argument $locator is required.

These Python functions, then, are basically small wrappers and through them the integration layer, as a whole, wraps the external test driver.

As mentioned before, an integration layer is not necessarily part of the stack. Test drivers (test libraries) that have been written directly against the RF library API do not require an integration library.
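To make this concrete, a 'native' test library can be as small as a single Python class whose public methods the RF exposes as keywords. The calculator library below is a hypothetical sketch for illustration only; it is not part of the RF distribution or of this series' workshop material.

# CalculatorLibrary.py - a hypothetical, minimal native RF test library.
# Robot Framework exposes the public methods of this class as the keywords
# "Add Number" and "Result Should Be"; no integration layer is needed.

class CalculatorLibrary:

    ROBOT_LIBRARY_SCOPE = 'TEST CASE'  # a fresh library instance per test case

    def __init__(self):
        self._result = 0

    def add_number(self, number):
        """Adds the given number to the running total."""
        self._result += int(number)

    def result_should_be(self, expected):
        """Fails the test case if the running total differs from the expected value."""
        if self._result != int(expected):
            raise AssertionError('Expected %s, but got %s' % (expected, self._result))

Importing this file in a test suite (Library    CalculatorLibrary.py) makes both keywords directly available to test cases, without any wrapper code.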

Our next post will elaborate on the keyword-driven approach to test automation that the RF follows.

Making Amazon ECS Container Service as easy to use as Docker run

Mon, 08/31/2015 - 20:52

One of the reasons Docker caught fire was that it was so easy to use. You could build and start a Docker container in a matter of seconds. With Amazon ECS this is not so. You have to learn a whole new lingo (Clusters, Task Definitions, Services and Tasks), spin up an ECS cluster, and write a nasty looking JSON file or wrestle with a not-so-user-friendly UI before you have your container running in ECS.

In this blog we will show you that Amazon ECS can be just as fast, by presenting a small utility named ecs-docker-run, which allows you to start a Docker container almost as fast as with stand-alone Docker by interpreting the docker run command line options. Together with a ready-to-run CloudFormation template, you can be up and running with Amazon ECS within minutes!

ECS Lingo

Amazon ECS uses different lingo than Docker, which causes confusion. Here is a short translation:

- Cluster - one or more Docker hosts.
- Task Definition - a JSON representation of a docker run command line.
- Task - a running Docker instance. When the instance stops, the task is finished.
- Service - a running Docker instance that is restarted when it stops.

That is basically all there is to it (cutting a few corners and skimping on a number of details).
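To make the lingo concrete, the sketch below expresses the same concepts through the AWS API, using the boto3 Python SDK; the cluster name, image and values are illustrative assumptions and this is not part of the ecs-docker-run tool.

# ecs_lingo_sketch.py - hypothetical illustration of the ECS concepts above.
import boto3

ecs = boto3.client('ecs')

# Task Definition: the JSON representation of a 'docker run' command line.
ecs.register_task_definition(
    family='paas-monitor',
    containerDefinitions=[{
        'name': 'paas-monitor',
        'image': 'mvanholsteijn/paas-monitor:latest',
        'memory': 256,
        'cpu': 100,
        'essential': True,
        'portMappings': [{'hostPort': 80, 'containerPort': 1337}],
    }])

# Task: a running Docker instance; when it stops, the task is finished.
ecs.run_task(cluster='my-cluster', taskDefinition='paas-monitor:1', count=1)

# Service: a running Docker instance that ECS restarts when it stops.
ecs.create_service(cluster='my-cluster', serviceName='paas-monitor',
                   taskDefinition='paas-monitor:1', desiredCount=1)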

Once you know this, we are ready to use ecs-docker-run.

ECS Docker Run

ecs-docker-run is a simple command line utility to run docker images on Amazon ECS. To use this utility you can simply type something familiar like:

ecs-docker-run \
        --name paas-monitor \
        --env SERVICE_NAME=paas-monitor \
        --env SERVICE_TAGS=http \
        --env "MESSAGE=Hello from ECS task" \
        --env RELEASE=v10 \
        -P  \
        mvanholsteijn/paas-monitor:latest

substituting the 'docker run' with 'ecs-docker-run'.

Under the hood, it will generate a task definition and start a container as a task on the ECS cluster. All of the following Docker run command line options are functionally supported.

- -P - publishes all ports by pulling and inspecting the image.
- --name - the family name of the task. If unspecified, the name will be derived from the image name.
- -p - adds a port publication to the task definition.
- --env - sets an environment variable.
- --memory - sets the amount of memory to allocate, defaults to 256.
- --cpu-shares - sets the cpu shares to allocate, defaults to 100.
- --entrypoint - changes the entrypoint for the container.
- --link - sets a container link.
- -v - sets the mount points for the container.
- --volumes-from - sets the volumes to mount.

All other Docker options are ignored as they refer to possibilities NOT available to ECS containers. The following options are added, specific for ECS:

- --generate-only - only generates the task definition on standard output, without starting anything.
- --run-as-service - runs the task as a service; ECS will ensure that 'desired-count' tasks keep running.
- --desired-count - specifies the number of tasks to run (default = 1).
- --cluster - the ECS cluster to run the task or service on (default = cluster).

Hands-on!

In order to proceed with the hands-on part, you need to have:

- jq installed
- aws CLI installed (version 1.7.44 or higher)
- aws connectivity configured
- docker connectivity configured (to a random Docker daemon).

checkout ecs-docker-run

Get the ecs-docker-run sources by typing the following command:

git clone git@github.com:mvanholsteijn/ecs-docker-run.git
cd ecs-docker-run/ecs-cloudformation

import your ssh key pair

To look around on the ECS Cluster instances, import your public key into Amazon EC2, using the following command:

aws ec2 import-key-pair \
          --key-name ecs-$USER-key \
          --public-key-material  "$(ssh-keygen -y -f ~/.ssh/id_rsa)"

create the ecs cluster autoscaling group

In order to create your first cluster of 6 Docker hosts, type the following command:

aws cloudformation create-stack \
        --stack-name ecs-$USER-cluster \
        --template-body "$(<ecs.json)"  \
        --capabilities CAPABILITY_IAM \
        --parameters \
                ParameterKey=KeyName,ParameterValue=ecs-$USER-key \
                ParameterKey=EcsClusterName,ParameterValue=ecs-$USER-cluster

This cluster is based upon the firstRun cloudformation definition, which is used when you follow the Amazon ECS wizard.

And wait for completion...

Wait for completion of the cluster creation, by typing the following command:

function waitOnCompletion() {
        STATUS=IN_PROGRESS
        while expr "$STATUS" : '^.*PROGRESS' > /dev/null ; do
                sleep 10
                STATUS=$(aws cloudformation describe-stacks \
                               --stack-name ecs-$USER-cluster | jq -r '.Stacks[0].StackStatus')
                echo $STATUS
        done
}

waitOnCompletion

Create the cluster

Unfortunately, CloudFormation does not yet allow you to specify the ECS cluster name, so you need to create the ECS cluster manually by typing the following command:

aws ecs create-cluster --cluster-name ecs-$USER-cluster

You can now manage your hosts and tasks from the Amazon AWS EC2 Container Services console.

Run the paas-monitor

Finally, you are ready to run any docker image on ECS! Type the following command to start the paas-monitor.

../bin/ecs-docker-run --run-as-service \
                        --number-of-instances 3 \
                        --cluster ecs-$USER-cluster \
                        --env RELEASE=v1 \
                        --env MESSAGE="Hello from ECS" \
                        -p :80:1337 \
                        mvanholsteijn/paas-monitor

Get the DNS name of the Elastic Load Balancer

To see the application in action, you need to obtain the DNS name of the Elastic Load Balancer. Type the following commands:

# Get the Name of the ELB created by CloudFormation
ELBNAME=$(aws cloudformation describe-stacks --stack-name ecs-$USER-cluster | \
                jq -r '.Stacks[0].Outputs[] | select(.OutputKey =="EcsElbName") | .OutputValue')

# Get the DNS name of that ELB
DNSNAME=$(aws elb describe-load-balancers --load-balancer-names $ELBNAME | \
                jq -r .LoadBalancerDescriptions[].DNSName)

Open the application

Finally, we can obtain access to the application.

open http://$DNSNAME

And it should look something like this:

host               release  message                                                    # of calls  avg response time  last response time
b6ee7869a5e3:1337  v1       Hello from ECS from release v1; server call count is 82   68          45                 36
4e09f76977fe:1337  v1       Hello from ECS from release v1; server call count is 68   68          41                 38
65d8edd41270:1337  v1       Hello from ECS from release v1; server call count is 82   68          40                 37

Perform a rolling upgrade

You can now perform a rolling upgrade of your application, by typing the following command while keeping your web browser open at http://$DNSNAME:

../bin/ecs-docker-run --run-as-service \
                        --number-of-instances 3 \
                        --cluster ecs-$USER-cluster \
                        --env RELEASE=v2 \
                        --env MESSAGE="Hello from Amazon EC2 Container Services" \
                        -p :80:1337 \
                        mvanholsteijn/paas-monitor

The result should look something like this:

host               release  message                                                                              # of calls  avg response time  last response time
b6ee7869a5e3:1337  v1       Hello from ECS from release v1; server call count is 124                            110         43                 37
4e09f76977fe:1337  v1       Hello from ECS from release v1; server call count is 110                            110         41                 35
65d8edd41270:1337  v1       Hello from ECS from release v1; server call count is 124                            110         40                 37
ffb915ddd9eb:1337  v2       Hello from Amazon EC2 Container Services from release v2; server call count is 43   151         9942               38
8324bd94ce1b:1337  v2       Hello from Amazon EC2 Container Services from release v2; server call count is 41   41          41                 38
7b8b08fc42d7:1337  v2       Hello from Amazon EC2 Container Services from release v2; server call count is 41   41          38                 39

Note how the rolling upgrade is a bit crude. The old instances stop receiving requests almost immediately, while all requests seem to be loaded onto the first new instance.

You do not like the ecs-docker-run script?

If you do not like the ecs-docker-run script, do not despair. Below are the equivalent Amazon ECS commands to do it without the hocus-pocus script...

Create a task definition

This is the most difficult part: manually creating a task definition file called 'manual-paas-monitor.json' with the following content:

{
  "family": "manual-paas-monitor",
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 1337
        }
      ],
      "command": [],
      "environment": [
        {
          "name": "RELEASE",
          "value": "v3"
        },
        {
          "name": "MESSAGE",
          "value": "Native ECS Command Line Deployment"
        }
      ],
      "links": [],
      "mountPoints": [],
      "essential": true,
      "memory": 256,
      "name": "paas-monitor",
      "cpu": 100,
      "image": "mvanholsteijn/paas-monitor"
    }
  ],
  "volumes": []
}

Register the task definition

Before you can start a task it has to be registered at ECS, by typing the following command:

aws ecs register-task-definition --cli-input-json "$(<manual-paas-monitor.json)"

Start a service

Now start a service based on this definition, by typing the following command:

aws ecs create-service \
     --cluster ecs-$USER-cluster \
     --service-name manual-paas-monitor \
     --task-definition manual-paas-monitor:1 \
     --desired-count 1

You should see a new row appear in your browser:

host               release  message                                                                        # of calls  avg response time  last response time
...
5ec1ac73100f:1337  v3       Native ECS Command Line Deployment from release v3; server call count is 37   37          37                 36

Conclusion

Amazon EC2 Container Service has a steeper learning curve than plain Docker. You need to get past the lingo, the creation of an ECS cluster on Amazon EC2 and, most importantly, the creation of the cumbersome task definition file. After that it is almost as easy to use as docker run.

In return you get all the goodies from Amazon like Autoscaling groups, Elastic Load Balancers and multi-availability zone deployments ready to use in your Docker applications. So, check ECS out!


Unlocking ES2015 features with Webpack and Babel

Mon, 08/31/2015 - 09:53

This post is part of a series of ES2015 posts. We'll be covering new JavaScript functionality every week for the coming two months.

After being in the working draft state for a long time, the ES2015 specification (formerly known as ECMAScript 6, or ES6 for short) reached a definitive state a while ago. For a long time now, BabelJS, a Javascript transpiler formerly known as 6to5, has been available for developers who already want to use ES2015 features in their projects.

In this blog post I will show you how you can integrate Webpack, a Javascript module builder/loader, with Babel to automate the transpiling of ES2015 code to ES5. Besides that, I'll also explain how to automatically generate source maps to ease development and debugging.

Webpack Introduction

Webpack is a Javascript module builder and module loader. With Webpack you can pack a variety of different modules (AMD, CommonJS, ES2015, ...) with their dependencies into static file bundles. Webpack provides you with loaders which essentially allow you to pre-process your source files before requiring or loading them. If you are familiar with tools like Grunt or Gulp you can think of loaders as tasks to be executed before bundling sources. To make your life even easier, Webpack also comes with a development server with file watch support and browser reloading.

Installation

In order to use Webpack all you need is npm, the Node Package Manager, available by downloading either Node or io.js. Once you've got npm up and running all you need to do to setup Webpack globally is install it using npm:

npm install -g webpack

Alternatively, you can include it just in the projects of your preference using the following command:

npm install --save-dev webpack

Babel Introduction

With Babel, a Javascript transpiler, you can write your code using ES2015 (and even some ES7 features) and convert it to ES5 so that well-known browsers will be able to interpret it. On the Babel website you can find a list of supported features and how you can use these in your project today. For the React developers among us, Babel also comes with JSX support out of the box.

Alternatively, there is the Google Traceur compiler which essentially solves the same problem as Babel. There are multiple Webpack loaders available for Traceur of which traceur-loader seems to be the most popular one.

Installation

Assuming you already have npm installed, installing Babel is as easy as running:

npm install --save-dev babel-loader

This command will add babel-loader to your project's package.json. Run the following command if you prefer installing it globally:

npm install -g babel-loader

Project structure
webpack-babel-integration-example/
  src/
    DateTime.js
    Greeting.js
    main.js
  index.html
  package.json
  webpack.config.js

Webpack's configuration can be found in the root directory of the project, named webpack.config.js. The ES6 Javascript sources that I wish to transpile to ES5 will be located under the src/ folder.

Webpack configuration

The required Webpack configuration file is very straightforward and covers a few aspects:

  • my main source entry
  • the output path and bundle name
  • the development tools that I would like to use
  • a list of module loaders that I would like to apply to my source

var path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    path: path.join(__dirname, 'build'),
    filename: 'bundle.js'
  },
  devtool: 'inline-source-map',
  module: {
    loaders: [
      {
        test: path.join(__dirname, 'src'),
        loader: 'babel-loader'
      }
    ]
  }
};

The snippet above shows you that my source entry is set to src/main.js, the output is set to create a build/bundle.js, I would like Webpack to generate inline source maps and I would like to run the babel-loader for all files located in src/.

ES6 sources

A simple ES6 class

Greeting.js contains a simple class with only the toString method implemented to return a String that will greet the user:

class Greeting {
  toString() {
    return 'Hello visitor';
  }
}

export default Greeting

Using packages in your ES2015 code

Often enough, you rely on a bunch of different packages that you include in your project using npm. In my example, I'll use the popular date time library called Moment.js. In this example, I'll use Moment.js to display the current date and time to the user.

Run the following command to install Moment.js as a local dependency in your project:

npm install --save-dev moment

I have created the DateTime.js class which again only implements the toString method to return the current date and time in the default date format.

import moment from 'moment';

class DateTime {
  toString() {
    return 'The current date time is: ' + moment().format();
  }
}

export default DateTime

After importing the package using the import statement you can use it anywhere within the source file.

Your main entry

In the Webpack configuration I specified a src/main.js file to be my source entry. In this file I simply import both classes that I created, I target different DOM elements and output the toString implementations from both classes into these DOM objects.

import Greeting from './Greeting.js';
import DateTime from './DateTime.js';

var h1 = document.querySelector('h1');
h1.textContent = new Greeting();

var h2 = document.querySelector('h2');
h2.textContent = new DateTime();

HTML

After setting up my ES2015 sources that will display the greeting in an h1 tag and the current date time in an h2 tag, it is time to set up my index.html. Being a straightforward HTML file, the only thing that is really important is that you point the script tag to the transpiled bundle file, in this example build/bundle.js.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Webpack and Babel integration example</title>
</head>
<body>

<h1></h1>

<h2></h2>

    <script src="build/bundle.js"></script>
</body>
</html>

Running the application

In this example project, running my application is as simple as opening the index.html in your favorite browser. However, before doing this you will need to instruct Webpack to actually run the loaders and thus transpile your sources into the build/bundle.js required by the index.html.

You can run Webpack in watch mode, meaning that it will monitor your source files for changes and automatically run the module loaders defined in your configuration. Execute the following command to run in watch mode:

webpack --watch

If you are using my example project from Github (link at the bottom), you can also use the following script which I've set up in the package.json:

npm run watch

Easier debugging using source maps

Debugging transpiled ES5 is a huge pain which will make you want to go back to writing ES5 without thinking. To ease development and debugging of ES2015 I can rely on source maps generated by Webpack. While running Webpack (normal or in watch mode) with the devtool property set to inline-source-map you can view the ES2015 source files and actually place breakpoints in them using your browser's development tools.

Debugging ES6 with source maps

Running the example project with a breakpoint inside the DateTime.js toString method using the Chrome developer tools.

Conclusion

As you've just seen, setting up everything you need to get started with ES2015 is extremely easy. Webpack is a great utility that will allow you to easily set up your complete front-end build pipeline and seamlessly integrates with Babel to include code transpiling into the build pipeline. With the help of source maps even debugging becomes easy again.

Sample project

The entire sample project as introduced above can be found on Github.

Xebia KnowledgeCast Episode 6: Lodewijk Bogaards on Stackstate and TypeScript

Sun, 08/30/2015 - 12:00

The Xebia KnowledgeCast is a podcast about software architecture, software development, lean/agile, continuous delivery, and big data.

In this 6th episode, we switch to a new format! So, no fun with stickies this time. It’s one interview. And we dive in deeper than ever before.

Lodewijk Bogaards is co-founder and CTO at Stackstate. Stackstate is an enterprise application that provides real time insight in the run-time status of all business processes, applications, services and infrastructure components, their dependencies and states.

It gives immediate insight into the cause and impact of any change or failure, from the hardware level to the business process level and everything in between. To build awesome apps like that, Lodewijk and his team use a full stack of technologies that they continue to improve. One such improvement is ditching plain Javascript in favor of TypeScript and Angular, and it is that topic that we'll focus on in today's discussion.

What's your opinion on TypeScript? I'm looking forward to reading your comments!


Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or Stitcher.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!


Trying out the Serenity BDD framework; a report

Fri, 08/28/2015 - 12:47

“Serenity, that feeling you know you can trust your tests.” Sounds great, but I was thinking of Firefly first when I heard the name ‘Serenity’. In this case, we are talking about a framework you can use to automate your tests.

The selling points of this framework are that it integrates your acceptance tests (BDD) with reporting and acts like living documentation. It can also integrate with JIRA and all that jazz. Hearing this, I wasn’t ‘wowed’ per se. There are many tools out there that can do that. But Serenity doesn’t support just one approach. Although it heavily favours Webdriver/Selenium, you can also use JBehave, JUnit or Cucumber. That is really nice!

Last weekend, at the Board of Agile Testers, we tried the framework with a couple of people. Our goal was to see if it’s really easy to set up, to see if the reports are useful and how easy it is to implement features. We used Serenity ‘Cucumber-style’ with Selenium/Webdriver (Java) and the Page Object Pattern.

Setup

It may go a little too far to say a totally non-technical person could set up the framework, but it was pretty easy. Using your favorite IDE, all you had to do was import a Maven archetype (we used a demo project) and all the Serenity dependencies were downloaded for you. We would recommend using at least Java 7; Java 6 gave us problems.

Using the tool

The demo project tests ran alright, but we noticed it was quite slow! The reason is probably that Serenity takes a screenshot at every step of your test. You can configure this setting, thankfully.

At the end of each test run, Serenity generates an HTML report. This report looks really good! You get a general overview and can click on each test step to see the screenshots. There is also a link to the requirements and you can see the ‘coverage’ of your tests. I’m guessing they mean functional coverage here, since we’re writing acceptance tests.

Serenity Report

Writing our own tests

After we got a little overview of the tool we started writing our own tests, using a Calculator as the System Under Test. The Serenity specific Page Object stuff comes with the Maven archetype so the IDE could help you implement the tests. We tried to do it in a little TDD cycle. Run the test, let it fail and let the output give you a hint on how to implement the step definitions. Beyond the step definition you had to use your Java skills to implement the actual tests.

Conclusion

The tool is pretty developer oriented, there’s no denying that. The IDE integration is very good in my opinion. With the Community IntelliJ edition you have all the plugins you need to speed up your workflow. The reporting is indeed the most beautiful I had seen, personally. Would we recommend changing your existing framework to Serenity? Unless your test reports are shit: no. There is in fact a small downside to using this framework; for now there are only about 15 people who actively contribute. You are of course allowed to join in, but it is a risk that there are only a small group of people actively improving it. If the support base grows, it will be a powerful framework for your automated tests and BDD cycle. For now, I'm going to play with the framework some more, because after using it for about 3 hours I think we only scratched the surface of its possibilities. 

 

HTTP/2 Server Push

Sun, 08/23/2015 - 17:11

The HTTP/2 standard was finalized in May 2015. Most major browsers support it, and Google uses it heavily.

HTTP/2 leaves the basic concepts of Requests, Responses and Headers intact. Changes are mostly at the transport level, improving the performance of parallel requests - with few changes to your application. The Go HTTP/2 'gophertiles' demo nicely demonstrates this effect.

A new concept in HTTP/2 is Server Push, which allows the server to speculatively start sending resources to the client. This can potentially speed up initial page load times: the browser doesn't have to parse the HTML page and find out which other resources to load, instead the server can start sending them immediately.

This article will demonstrate how Server Push affects the load time of the 'gophertiles'.

HTTP/2 in a nutshell

The key characteristic of HTTP/2 is that all requests for a server are sent over one TCP connection, and responses can come in parallel over that connection.

Using only one connection reduces overhead caused by TCP and TLS handshakes. Allowing responses to be sent in parallel is an improvement over HTTP/1.1 pipelining, which only allows requests to be served sequentially.

Additionally, because all requests are sent over one connection, there is a Header Compression mechanism that reduces the bandwidth needed for headers that previously would have had to be repeated for each request.

Server Push

Server Push allows the server to preemptively send a 'request promise' and an accompanying response to the client.

The most obvious use case for this technology is sending resources like images, CSS and JavaScript along with the page that includes them. Traditionally, the browser would have to first fetch the HTML, parse it, and then make subsequent requests for other resources. As the server can fairly accurately predict which resources a client will need, with Server Push it does not have to wait for those requests and can begin sending the resources immediately.

Of course sometimes you really do only want to fetch the HTML and not the accompanying resources. There are two ways to accomplish this: the client can specify that it does not want to receive any pushed resources at all, or it can cancel an individual push after receiving the push promise. In the latter case the client cannot prevent the server from initiating the push, though, so some bandwidth may already have been wasted. This makes deciding whether to use Server Push for resources that might already have been cached by the browser a subtle trade-off.

Demo

To show the effect HTTP/2 Server Push can have, I have extended the gophertiles demo to be able to test behavior with and without Server Push, available hosted on an old raspberry pi.

Both the latency of loading the HTML and the latency of loading each tile is now artificially increased.

When visiting the page without Server Push with an artificial latency of 1000ms, you will notice that loading the HTML takes at least one second, and then loading all images in parallel again takes at least one second - so rendering the complete page takes well above 2 seconds.

With server push enabled, you will see that after the DOM has loaded, the images are almost immediately there, because they have been Push'ed already.

All that glitters, however, is not gold: as you will notice when experimenting (especially at lower artificial latencies), while Server Push fairly reliably reduces the complete load time, it sometimes increases the time until the DOM-content is loaded. While this makes sense (the browser needs to process frames relating to the Server Push'ed resources), this could have an impact on the perceived performance of your page: for example, it could delay running JavaScript code embedded in your site.

HTTP/2 does give you tools to tune this, such as Stream Priorities, but this might need careful tuning and must be supported by the http2 library you choose.

Conclusions

HTTP/2 is here today, and can provide a considerable improvement in perceived performance - even with few changes in your application.

Server Push potentially allows you to improve your page loading times even further, but requires careful analysis and tuning - otherwise it might even have an adverse effect.

Release Burn Down Brought to Life

Tue, 08/18/2015 - 23:48

Inspired by Mike Cohn's blog post [Coh08] "Improving On Traditional Release Burndown Charts", I created a time-lapsed version of it. It also nicely demonstrates that forecasts of "What will be finished?" (at a certain time) get better as the project progresses.

The improved traditional release burn down chart clearly shows (a) what is finished (light green), (b) what will very likely be finished (dark green), (c) what will perhaps be finished, and perhaps not (orange), and (d) what is almost guaranteed not to be finished (red).

This knowledge supports product owners in ordering the backlog based on the current knowledge.

Simulation

The result is obtained by running a Monte Carlo simulation of a toy project, using a fixed product backlog of around 100 items of various sizes. The amount of work realized also varies per project day, based on a simple uniform probability distribution.

Forecasting is done using a 'worst' velocity and a 'best' velocity. Both are determined using only the last 3 velocities, i.e. only the last 3 sprints are considered.

The 2 grey lines represent the height of the orange part of the backlog, i.e. the backlog items that might be or not be finished. This also indicates the uncertainty over time of what actually will be delivered by the team at the given time.
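To give an impression of the mechanics, the sketch below is a single run of this kind of simulation in Python; the backlog size, velocity range and sprint count are illustrative assumptions, not the script used for the movie.

# burndown_sketch.py - hypothetical sketch of the simulation described above.
import random

TOTAL_WORK = 300   # total size of the ~100-item product backlog, in points
SPRINTS = 12       # length of the toy project
velocities = []
done = 0.0

for sprint in range(1, SPRINTS + 1):
    velocity = random.uniform(15, 35)          # work realized, drawn uniformly
    done = min(TOTAL_WORK, done + velocity)
    velocities.append(velocity)

    # Forecast with only the 'worst' and 'best' of the last three velocities.
    recent = velocities[-3:]
    worst, best = min(recent), max(recent)
    remaining_sprints = SPRINTS - sprint
    likely = min(TOTAL_WORK, done + worst * remaining_sprints)   # dark green boundary
    perhaps = min(TOTAL_WORK, done + best * remaining_sprints)   # upper grey line
    print('sprint %2d: finished %5.1f, likely %5.1f, perhaps up to %5.1f'
          % (sprint, done, likely, perhaps))

The band between 'likely' and 'perhaps' corresponds to the orange part of the chart; as the project progresses the band narrows, which is exactly the effect visible between the two grey lines in the movie.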

 

http://blog.xebia.com/wp-content/uploads/2015/07/out1.mp4

 

The Making Of...

The movie above has been created using GNU plot [GNU Plot] for drawing the charts, and ffmpeg [ffmpeg] has been used to create the time-lapsed movie from the set of charts.

Result

Over time the difference between the 2 grey lines gets smaller, a clear indication of improving predictability and reduction of risk. Also, the movie shows that the final set of backlog items done is well between the 2 grey lines from the start of the project.

This looks very similar to the 'Cone of Uncertainty'. Besides that the shape of the grey lines only remotely resembles a cone, another difference is that the above simulation merely takes statistical chances into account. The fact that the team gains more knowledge and insight over time, is not considered in the simulation, whereas it is an important factor in the 'Cone of Uncertainty'.

References

[Coh08] "Improving On Traditional Release Burndown Charts", Mike Cohn, June 2008, https://www.mountaingoatsoftware.com/blog/improving-on-traditional-release-burndown-charts

[GNU Plot] Gnu plot version 5.0, "A portable command-line driven graphing utility", http://www.gnuplot.info

[ffmpeg] "A complete, cross-platform solution to record, convert and stream audio and video", https://ffmpeg.org

Iterables, Iterators and Generator functions in ES2015

Mon, 08/17/2015 - 20:45

ES2015 adds a lot of new features to javascript that make a number of powerful constructs, present in other languages for years, available in the browser (well as soon as support for those features is rolled out of course, but in the meantime we can use these features by using a transpiler such as Babeljs or Traceur).
Some of the more complicated additions are the iterator and iterable protocols and generator functions. In this post I'll explain what they are and what you can use them for.

The Iterable and Iterator protocols

These protocols are analogous to, for example, Java interfaces and define the contract that an object must adhere to in order to be considered an iterable or an iterator. So instead of new language features they leverage existing constructs by agreeing upon a convention (as javascript does not have a concept like an interface in other typed languages). Let's have a closer look at these protocols and see how they interact with each other.

The Iterable protocol

This protocol specifies that for an object to be considered iterable (and usable in, for example, a `for ... of` loop) it has to define a function with the special key "Symbol.iterator" that returns an object that adheres to the iterator protocol. That is basically the only requirement. For example, say you have a data structure you want to iterate over; in ES2015 you would do that as follows:

class DataStructure {
  constructor(data) {
    this.data = data;
  }

  [Symbol.iterator]() {
    let current = 0
    let data = this.data.slice();
    return  {
      next: function () {
        return {
          value: data[current++],
          done: current > data.length
        };
      }
    }
  }
}
let ds = new DataStructure(['hello', 'world']);

console.log([...ds]) // ["hello","world"]

The big advantage of using the iterable protocol over another construct like `for ... in` is that you have more clearly defined iteration semantics (for example: you do not need explicit hasOwnProperty checking when iterating over an array to filter out properties on the array object but not in the array). Another advantage is that when using generator functions you can benefit from lazy evaluation (more on generator functions later).

The iterator protocol

As mentioned before, the only requirement for the iterable protocol is for the object to define a function that returns an iterator. But what defines an iterator?
In order for an object to be considered an iterator it must provide a method named `next` that returns an object with 2 properties:
* `value`: The actual value of the iterable that is being iterated. This is only valid when done is `false`
* `done`: `false` when `value` is an actual value, `true` when the iterator did not produce a new value

Note that when you provide a `value` you can leave out the `done` property, and when the `done` property is `true` you can leave out the `value` property.

The object returned by the function bound to the DataStructure's `Symbol.iterator` property in the previous example does this by returning the entry from the array as the value property and returning `done: false` while there are still entries in the data array.

So by simply implementing both these protocols you can turn any `Class` (or `object` for that matter) into an object you can iterate over.
A number of built-ins in ES2015 already implement these protocols so you can experiment with the protocol right away. You can already iterate over Strings, Arrays, TypedArrays, Maps and Sets.

Generator functions

As shown in the earlier example, implementing the iterable and iterator protocols manually can be quite a hassle and is error-prone. That is why a language feature was added to ES2015: generator functions. A generator combines both an iterable and an iterator in a single function definition. A generator function is declared by adding an asterisk (`*`) to the function name and using yield to return values. A big advantage of using this method is that your generator function will return an iterator that, when its `next()` method is invoked, will run up to the first yield statement it encounters and then suspend execution until `next()` is called again (after which it will resume and run until the next yield statement). This allows us to write an iteration that is evaluated lazily instead of all at once.

The following example re-implements the iterable and iterator using a generator function producing the same result, but with a more concise syntax.

class DataStructure {
  constructor(data) {
    this.data = data;
  }

  *[Symbol.iterator] () {
    let data = this.data.slice();
    for (let entry of data) {
      yield entry;
    }
  }
}
let ds = new DataStructure(['hello', 'world']);

console.log([...ds]) // ["hello","world"]

More complex usages of generators

As mentioned earlier, generator functions allow for lazy evaluation of (possibly) infinite iterations, allowing you to use constructs known from more functional languages, such as taking a limited subset from an infinite sequence:

function* generator() {
  let i = 0;
  while (true) {
    yield i++;
  }
}

function* take(number, gen) {
  let current = 0;
  for (let result of gen) {
    if (current++ >= number) {
      break;
    }
    yield result;
  }
}
console.log([...take(10, generator())]) // [0,1,2,3,4,5,6,7,8,9]
console.log([...take(10, [1,2,3])]) // [1,2,3]

Delegating generators

Within a generator it is possible to delegate to a second generator making it possible to create recursive iteration structures. The following example demonstrates a simple generator delegating to a sub generator and returning to the main generator.

function* generator() {
  yield 1;
  yield* subGenerator()
  yield 4;
}

function* subGenerator() {
  yield 2;
  yield 3;
}

console.log([...generator()]) // [1,2,3,4]

Persistence with Docker containers - Team 1: GlusterFS

Mon, 08/17/2015 - 10:05

This is a follow-up blog post from the KLM innovation day.

The goal of Team 1 was to have GlusterFS cluster running in Docker containers and to expose the distributed file system to a container by ‘mounting’ it through a so called data container.

Setting up GlusterFS was not that hard, the installation steps are explained here [installing-glusterfs-a-quick-start-guide].

The Dockerfiles we eventually created and used can be found here [docker-glusterfs].
Note: the Dockerfiles still contain some manual steps, because you need to tell GlusterFS about the other nodes so they can find each other. In a real environment this could be done by, for example, Consul.

Although setting up a GlusterFS cluster was not hard, mounting it on CoreOS proved much more complicated. We wanted to mount the folder through a container using the GlusterFS client, but to achieve that the container needs to run in privileged mode or with ‘SYS_ADMIN’ capabilities. This has nothing to do with GlusterFS itself; Docker doesn’t allow mounts without these options. Eventually, mounting the remote folder can be achieved, but exposing this mounted folder as a Docker volume is not possible. This is a Docker shortcoming, see the Docker issue here.

Our second - not so preferred - method was mounting the folder in CoreOS itself and then using it in a container. The problem here is that CoreOS does not have support for the GlusterFS client, but it does have NFS support. So to make this work we exposed GlusterFS through NFS; the steps to enable this can be found here [Using_NFS_with_Gluster]. After enabling NFS on GlusterFS we mounted the exposed folder in CoreOS and used it in a container, which worked fine.

Mounting GlusterFS through NFS was not what we wanted, but luckily Docker released their experimental volume plugin support. And our luck did not end there, because it turned out David Calavera had already created a volume plugin for GlusterFS. So to test this out we used the experimental Docker version 1.8 and ran the plugin with the necessary settings. This all worked fine, but this is where our luck ran out. When using the experimental Docker daemon in combination with this plugin, we can see in debug mode that the plugin connects to GlusterFS and says it is mounting the folder. But unfortunately it then receives an error, seemingly from the server, and unmounts the folder.

The volume plugin above is basically a wrapper around the GlusterFS client. We also found a Go API for GlusterFS. This could be used to create a pure Go implementation of the volume plugin, but unfortunately we ran out of time to actually try this.

Conclusion:

Using a distributed file system like GlusterFS or Ceph sounds very promising, especially combined with the Docker volume plugin, which hides the implementation and allows you to decouple the container from the storage of the host.

Update:

Between the innovation day and this blog post the world evolved: Docker 1.8 came out, and with it a Ceph Docker volume plugin.

Innovation day at KLM: Persistence with Docker containers

Sat, 08/15/2015 - 12:06

On the 3rd of July KLM and Cargonauts joined forces at KLM headquarters for an innovation day. The goal was to share knowledge and find out how to properly do “Persistence with Docker containers”.

Persistence is data that you want to have available after a reboot, and to make it more complex, in some cases you also want to share that data over multiple nodes. Examples of this are an upload folder that is shared or a database. Our innovation day case focused on a MySQL database: we want to find out how we can host MySQL data reliably and highly available.

Persistence with Docker containers poses the same dilemmas as when you don’t use them, but with more options to choose from. Eventually those options can be summarized as follows:

  1. Don’t solve the problem on the Docker platform; Solve it at the application level by using a distributed database like Cassandra.
  2. Don’t solve the problem on the Docker platform; Use a SAAS or some technology that provides a high available storage to your host.
  3. Do fix it on the Docker platform, so you are not tied to a specific solution. This allows you to deploy everywhere as long as Docker is available.

Since this was an innovation day and we are container enthusiasts, we focused on the last option. To make it more explicit, we decided to try and investigate these two possible solutions for our problem:

  1. Provide the host with a highly available distributed file system with GlusterFS. This will allow us to start a container anywhere and move it freely because the data is available anywhere on the platform.
  2. GlusterFS might not provide the best performance due to its distributed nature. So to have better performance we need to have the data available on the host. For this we investigated Flocker.

Note: We realise that these two solutions only solve part of the problem because what happens if the data gets corrupted? To solve that we still need some kind of a snapshot/backup solution.

Having decided on the approach we split the group into two teams, team 1 focused on GlusterFS and team 2 focused on Flocker. We will post their stories in the coming days. Stay tuned!

The innovation day went exactly as an innovation day should go: with a lot of enthusiasm, followed by some disappointments which led to small victories and unanswered questions, but with great new knowledge and a clear vision of where to focus your energy next.

We would like to thank KLM for hosting the innovation day!

You might not need lodash (in your ES2015 project)

Tue, 08/11/2015 - 12:44

This post is the first in a series of ES2015 posts. We'll be covering new JavaScript functionality every week for the coming two months.

ES2015 brings a lot of new functionality to the table. It might be a good idea to evaluate if your new or existing projects actually require a library such as lodash. We'll talk about several common usages of lodash functions that can be simply replaced by a native ES2015 implementation.

 

_.extend / _.merge

Let's start with _.extend and its related _.merge function. These functions are often used for combining multiple configuration properties in a single object.

const dst = { xeb: 0 };
const src1 = { foo: 1, bar: 2 };
const src2 = { foo: 3, baz: 4 };

_.extend(dst, src1, src2);

assert.deepEqual(dst, { xeb: 0, foo: 3, bar: 2, baz: 4 });

Using the new Object.assign method, the same behaviour is natively possible:

const dst2 = { xeb: 0 };

Object.assign(dst2, src1, src2);

assert.deepEqual(dst2, { xeb: 0, foo: 3, bar: 2, baz: 4 });

We're using Chai assertions to confirm the correct behaviour.

 

_.defaults / _.defaultsDeep

Sometimes when passing many parameters to a method, a config object is used. _.defaults and its related _.defaultsDeep function come in handy to define defaults in a certain structure for these config objects.

function someFuncExpectingConfig(config) {
  _.defaultsDeep(config, {
    text: 'default',
    colors: {
      bgColor: 'black',
      fgColor: 'white'
    }
  });
  return config;
}
let config = { colors: { bgColor: 'grey' } };

someFuncExpectingConfig(config);

assert.equal(config.text, 'default');
assert.equal(config.colors.bgColor, 'grey');
assert.equal(config.colors.fgColor, 'white');

With ES2015, you can now destructure these config objects into separate variables. Together with the new default param syntax we get:

function destructuringFuncExpectingConfig({
  text = 'default',
  colors: {
    bgColor: bgColor = 'black',
    fgColor: fgColor = 'white' }
  }) {
  return { text, bgColor, fgColor };
}

const config2 = destructuringFuncExpectingConfig({ colors: { bgColor: 'grey' } });

assert.equal(config2.text, 'default');
assert.equal(config2.bgColor, 'grey');
assert.equal(config2.fgColor, 'white');

 

_.find / _.findIndex

Searching in arrays using a predicate function is a clean way of separating behaviour and logic.

const arr = [{ name: 'A', id: 123 }, { name: 'B', id: 436 }, { name: 'C', id: 568 }];
function predicateB(val) {
 return val.name === 'B';
}

assert.equal(_.find(arr, predicateB).id, 436);
assert.equal(_.findIndex(arr, predicateB), 1);

In ES2015, this can be done in exactly the same way using Array.find.

assert.equal(Array.find(arr, predicateB).id, 436);
assert.equal(Array.findIndex(arr, predicateB), 1);

Note that we're not using the extended Array-syntax arr.find(predicate). This is not possible with Babel that was used to transpile this ES2015 code.

 

_.repeat, _.startsWith, _.endsWith and _.includes

Some very common but previously not natively supported string functions are _.repeat to repeat a string multiple times and _.startsWith / _.endsWith / _.includes to check if a string starts with, ends with or includes another string respectively.

assert.equal(_.repeat('ab', 3), 'ababab');
assert.isTrue(_.startsWith('ab', 'a'));
assert.isTrue(_.endsWith('ab', 'b'));
assert.isTrue(_.includes('abc', 'b'));

Strings now have a set of new builtin prototypical functions:

assert.equal('ab'.repeat(3), 'ababab');
assert.isTrue('ab'.startsWith('a'));
assert.isTrue('ab'.endsWith('b'));
assert.isTrue('abc'.includes('b'));

 

_.fill

A not-so-common function to fill an array with default values without looping explicitly is _.fill.

const filled = _.fill(new Array(3), 'a', 1);
assert.deepEqual(filled, [, 'a', 'a']);

It now has a drop-in replacement: Array.fill.

const filled2 = Array.fill(new Array(3), 'a', 1);
assert.deepEqual(filled2, [, 'a', 'a']);

 

_.isNaN, _.isFinite

Some type checks are quite tricky, and _.isNaN and _.isFinite fill in such gaps.

assert.isTrue(_.isNaN(NaN));
assert.isFalse(_.isFinite(Infinity));

Simply use the new Number builtins for these checks now:

assert.isTrue(Number.isNaN(NaN));
assert.isFalse(Number.isFinite(Infinity));

 

_.first, _.rest

Lodash comes with a set of functional programming-style functions, such as _.first (aliased as _.head) and _.rest (aliased as _.tail), which return the first element and the remaining elements of an array respectively.

const elems = [1, 2, 3];

assert.equal(_.first(elems), 1);
assert.deepEqual(_.rest(elems), [2, 3]);

The syntactical power of the rest element, combined with destructuring, replaces the need for these functions.

const [first, ...rest] = elems;

assert.equal(first, 1);
assert.deepEqual(rest, [2, 3]);

 

_.restParam

Written specifically for ES5 environments, lodash contains helper functions that mimic the behaviour of some ES2015 features. An example is the _.restParam function, which wraps a function and passes the trailing parameters as an array to the wrapped function:

function whatNames(what, names) {
 return what + ' ' + names.join(';');
}
const restWhatNames = _.restParam(whatNames);

assert.equal(restWhatNames('hi', 'a', 'b', 'c'), 'hi a;b;c');

Of course, in ES2015 you can simply use the rest parameter as intended.

function whatNamesWithRest(what, ...names) {
 return what + ' ' + names.join(';');
}

assert.equal(whatNamesWithRest('hi', 'a', 'b', 'c'), 'hi a;b;c');

 

_.spread

Another example is the _.spread function that wraps a function which takes an array and sends the array as separate parameters to the wrapped function:

function whoWhat(who, what) {
 return who + ' ' + what;
}
const spreadWhoWhat = _.spread(whoWhat);
const callArgs = ['yo', 'bro'];

assert.equal(spreadWhoWhat(callArgs), 'yo bro');

Again, in ES2015 you want to use the spread operator.

assert.equal(whoWhat(...callArgs), 'yo bro');

 

_.values, _.keys, _.pairs

A couple of functions exist to fetch all values, keys or value/key pairs of an object as an array:

const bar = { a: 1, b: 2, c: 3 };

const values = _.values(bar);
const keys = _.keys(bar);
const pairs = _.pairs(bar);

assert.deepEqual(values, [1, 2, 3]);
assert.deepEqual(keys, ['a', 'b', 'c']);
assert.deepEqual(pairs, [['a', 1], ['b', 2], ['c', 3]]);

Now you can use the Object builtins (note that Object.values and Object.entries were not yet part of ES2015 itself and may need a polyfill, such as the one Babel provides):

const values2 = Object.values(bar);
const keys2 = Object.keys(bar);
const pairs2 = Object.entries(bar);

assert.deepEqual(values2, [1, 2, 3]);
assert.deepEqual(keys2, ['a', 'b', 'c']);
assert.deepEqual(pairs2, [['a', 1], ['b', 2], ['c', 3]]);

 

_.forEach (for looping over object properties)

Looping over the properties of an object is often done using a helper function, as there are some caveats such as skipping inherited properties. _.forEach can be used for this.

const foo = { a: 1, b: 2, c: 3 };
let sum = 0;
let lastKey = undefined;

_.forEach(foo, function (value, key) {
  sum += value;
  lastKey = key;
});

assert.equal(sum, 6);
assert.equal(lastKey, 'c');

With ES2015 there's a clean way of looping over Object.entries and destructuring them:

sum = 0;
lastKey = undefined;
for (let [key, value] of Object.entries(foo)) {
  sum += value;
  lastKey = key;
}

assert.equal(sum, 6);
assert.equal(lastKey, 'c');

 

_.get

When working with nested structures, a path selector can help in selecting the right value. _.get was created for such occasions.

const obj = { a: [{}, { b: { c: 3 } }] };

const getC = _.get(obj, 'a[1].b.c');

assert.equal(getC, 3);

Although ES2015 does not have a native equivalent for path selectors, you can use destructuring as a way of 'selecting' a specific value.

let a, b, c;
({ a : [, { b: { c } }]} = obj);

assert.equal(c, 3);

 

_.range

A very Python-esque function that creates an array of integer values, with an optional step size.

const range = _.range(5, 10, 2);
assert.deepEqual(range, [5, 7, 9]);

As a nice ES2015 alternative, you can use a generator function and the spread operator to replace it:

function* rangeGen(from, to, step = 1) {
  for (let i = from; i < to; i += step) {
    yield i;
  }
}

const range2 = [...rangeGen(5, 10, 2)];

assert.deepEqual(range2, [5, 7, 9]);

A nice side-effect of a generator function is its laziness. It is possible to use the range generator without generating the entire array first, which can come in handy when memory usage should be minimal.
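
To illustrate that laziness, the sketch below pulls only the first few values out of a huge range, reusing the rangeGen generator from above. The takeFirst helper is hypothetical, written just for this example:

// Hypothetical helper that consumes only the first n values of an iterable,
// so the full range is never materialized in memory.
function takeFirst(iterable, n) {
  const result = [];
  for (const value of iterable) {
    if (result.length === n) break;
    result.push(value);
  }
  return result;
}

// Only three values are generated, even though the range is enormous.
assert.deepEqual(takeFirst(rangeGen(0, 1e9), 3), [0, 1, 2]);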

Conclusion

Just like kicking the jQuery habit, we've seen that there are alternatives to some lodash functions, and it can be preferable to use as few of these functions as possible. Keep in mind that the lodash library offers a consistent API that developers are probably familiar with. Only swap it out if the ES2015 benefits outweigh the consistency you give up (for instance, when performance is an issue).

For reference, you can find the above code snippets at this repo. You can run them yourself using webpack and Babel.

Testing UI changes in large web applications

Mon, 08/10/2015 - 13:51

When a web application starts to grow in terms of functionality, number of screens and amount of code, automated testing becomes a necessity. Not only will these tests prevent you from delivering bugs to your users but also help to maintain a high speed of development. This ensures that you'll be focusing on new and better features without having to fix bugs in the existing ones.

However, even with all kinds of unit-, integration- and end-to-end tests in place, you'll still end up with a huge blind spot: does your application still look like it's supposed to?

Can we test for this as well? (hint: we can).

Breaking the web's UI is easy

A web application's look is determined by a myriad of HTML tags and CSS rules, which are often re-used in many different combinations. And therein lies the problem: any seemingly innocuous change to markup or CSS could lead to a broken layout, unaligned elements or other unintended side effects. A change in CSS or markup for one screen could lead to problems on another.

Additionally, as browsers are frequently updated, CSS and markup bugs might be either fixed or introduced. How will you know if your application still looks good in the latest Firefox or Chrome version, or in the next big browsers of the future?

So how do we test this?

The most obvious method to prevent visual regressions in a web application is to manually click through every screen of the application using several browsers on different platforms, looking for problems. While this might work fine at first, it does not scale very well. The number of screens you'll have to look through will increase, which will steadily increase the time you'll need for testing. This in turn will slow your development speed considerably.

Clicking through every screen every time you want to release a new feature is a very tedious process. And because you'll be looking at the same screens over and over again, you (and possibly your testing colleagues) will start to overlook things.

So this manual process slows down your development, it's error prone and, most importantly, it's no fun!

Automate all the things?

As a developer, my usual response to repetitive manual processes is to automate them away with some clever scripts or tools. Sadly, this solution won't work either. Currently it's not possible to let a script determine which visual change to a page is good or bad. While we might delegate this task to some revolutionary artificial intelligence in the future, it's not a solution we can use right now.

What we can do: automate the pieces of the visual testing process where we can, while still having humans determine whether a visual change is intended.

Taking into account our quality requirements and our need for development speed, we'll be looking for a tool that:

  • minimizes the manual steps in our development workflow
  • makes it easy to create, update, debug and run the tests
  • provides a solid user- and developer/tester experience
Introducing: VisualReview

To address these issues we're developing a new tool called VisualReview. Its goal is to provide a productive and human-friendly workflow for testing and reviewing your web application's layout for any regressions. In short, VisualReview allows you to:

  • use a (scripting) environment of your choice to automate screen navigation and take screenshots of selected screens
  • compare these screenshots against previously accepted ones
  • accept or reject any differences in screenshots between runs in a user-friendly workflow

With these features (and more to come) VisualReview's primary focus is to provide a great development process and environment for development teams.

How does it work?

VisualReview acts as a server that receives screenshots through a regular HTTP upload. When a screenshot is received, it's compared against a baseline and any differences are stored. After all screenshots have been analyzed, someone from your team (a developer, tester or any other role) opens up the server's analysis page to view the differences and accepts or rejects them. Every screenshot that's been accepted will be set as a baseline for future tests.

(Diagram: how VisualReview works)
Sending screenshots to VisualReview is typically done from a test script. We already provide an API for Protractor (AngularJS's browser testing tool, basically an Angular-friendly wrapper around Selenium). However, any environment could potentially use VisualReview, as the upload is done using a simple HTTP REST call. A great example of this happened during a recent meetup where we presented VisualReview: a couple of attendees created a Node client for use in their own project, and a working version was running even before the meetup was over.

Example workflow

To illustrate how this works in practice I'll be using an example web application: a Twitter clone called 'Deep Thoughts' where users can post a single-sentence thought, similar to Reddit's shower thoughts.
(Screenshot: the 'Deep Thoughts' example application)
Deep Thoughts is an Angular application, so I'll be using Angular's browser testing tool Protractor to test for visual changes. Protractor does not support sending screenshots to VisualReview by default, so we'll be using visualreview-protractor as a dependency of the Protractor test suite. After adding some additional Protractor configuration and making sure the VisualReview server is running, we're ready to run the test script. The test script could look like this:

var vr = browser.params.visualreview;
describe('the deep thoughts app', function() {
  it('should show the homepage', function() {
    browser.get('http://localhost:8000/#/thoughts');
    vr.takeScreenshot('main');
  });
  [...]
});

With all pieces in place, we can now run the Protractor script:

protractor my-protractor-config.js

When all tests have been executed, the test script ends with the following message:

VisualReview-protractor: test finished. Your results can be viewed at: http://localhost:7000/#/1/1/2/rp

Opening the link in a browser shows the VisualReview screenshot analysis tool.

VisualReview analysis screen

For this example we've already created a baseline of images, so this screen now highlights the differences between the baseline and the new screenshot. As you can see, the left and right side of the submit button are highlighted in red: it seems that someone has changed the button's width. Using keyboard or mouse navigation, I can switch between the new screenshot and the baseline to inspect the highlighted differences.

Now I can decide whether or not I'm going to accept this change using the top menu.

Accepting or rejecting screenshots in VisualReview

If I accept this change, the screenshot will replace the one in the baseline. If I reject it, the baseline image will remain as it is while this screenshot is marked as a 'rejection'. Using the filter option, I can then point other team members to all rejected screenshots, which allows for better team cooperation.

VisualReview filter menu

Open source

VisualReview is an open source project hosted on GitHub. We recently released our first stable version and are very interested in your feedback. Try out the latest release or run it from an example project. Let us know what you think!

 

Building IntelliJ plugins from the command line

Mon, 08/03/2015 - 13:16

For a few years already, IntelliJ IDEA has been my IDE of choice. Recently I dove into the world of plugin development for IntelliJ IDEA and was unhappily surprised. Plugin development relies entirely on IDE features, and it looked hard to create a build script that does the actual plugin compilation and packaging outside the IDE. The JetBrains folks simply have not catered for that. Unless you're using TeamCity as your CI tool, you're out of luck.

For me it makes no sense writing code if:

  1. it cannot be compiled and packaged from the command line
  2. the code cannot be compiled and tested on a CI environment
  3. IDE configurations cannot be generated from the build script

Google did not help out a lot. Tomasz Dziurko put me in the right direction.

In order to build and test a plugin, the following needs to be in place:

  1. First of all you'll need IntelliJ IDEA. This is quite obvious. The Plugin DevKit plugin needs to be installed. If you want to create a language plugin you might want to install Grammar-Kit too.
  2. An IDEA SDK needs to be registered. The SDK can point to your IntelliJ installation.

The plugin module files are only slightly different from your average project.

Update: I ran into some issues with forms and language code generation and added some updates at the end of this post.

Compiling and testing the plugin

Now for the build script. My build tool of choice is Gradle. My plugin code adheres to the default Gradle project structure.

First thing to do is to get a hold of the IntelliJ IDEA libraries in an automated way. Since the IDEA libraries are not available via Maven repos, an IntelliJ IDEA Community Edition download is probably the best option to get a hold of the libraries.

The plan is as follows: download the Linux version of IntelliJ IDEA, and extract it in a predefined location. From there, we can point to the libraries and subsequently compile and test the plugin. The libraries are Java, and as such platform independent. I picked the Linux version since it has a nice, simple file structure.

The following code snippet caters for this:

apply plugin: 'java'

// Pick the Linux version, as it is a tar.gz we can simply extract
def IDEA_SDK_URL = 'http://download.jetbrains.com/idea/ideaIC-14.0.4.tar.gz'
def IDEA_SDK_NAME = 'IntelliJ IDEA Community Edition IC-139.1603.1'

configurations {
    ideaSdk
    bundle // dependencies bundled with the plugin
}

dependencies {
    ideaSdk fileTree(dir: 'lib/sdk/', include: ['*/lib/*.jar'])

    compile configurations.ideaSdk
    compile configurations.bundle
    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:1.10.19'
}

// IntelliJ IDEA can still run on a Java 6 JRE, so we need to take that into account.
sourceCompatibility = 1.6
targetCompatibility = 1.6

task downloadIdeaSdk(type: Download) {
    sourceUrl = IDEA_SDK_URL
    target = file('lib/idea-sdk.tar.gz')
}

task extractIdeaSdk(type: Copy, dependsOn: [downloadIdeaSdk]) {
    def zipFile = file('lib/idea-sdk.tar.gz')
    def outputDir = file("lib/sdk")

    from tarTree(resources.gzip(zipFile))
    into outputDir
}

compileJava.dependsOn extractIdeaSdk

class Download extends DefaultTask {
    @Input
    String sourceUrl

    @OutputFile
    File target

    @TaskAction
    void download() {
       if (!target.parentFile.exists()) {
           target.parentFile.mkdirs()
       }
       ant.get(src: sourceUrl, dest: target, skipexisting: 'true')
    }
}

If parallel test execution does not work for your plugin, you'd better turn it off as follows:

test {
    // Avoid parallel execution, since the IntelliJ boilerplate is not up to that
    maxParallelForks = 1
}
The plugin deliverable

Obviously, the whole build process should be automated. That includes the packaging of the plugin. A plugin is simply a zip file with all libraries together in a lib folder.

task dist(type: Zip, dependsOn: [jar, test]) {
    from configurations.bundle
    from jar.archivePath
    rename { f -> "lib/${f}" }
    into project.name
    baseName project.name
}

build.dependsOn dist
Handling IntelliJ project files

We also need to generate IntelliJ IDEA project and module files so the plugin can live within the IDE. Telling the IDE it's dealing with a plugin opens some nice features, mainly the ability to run the plugin from within the IDE. Anton Arhipov's blog post put me on the right track.

The Gradle idea plugin helps out in creating those files. This works out of the box for your average project, but for plugins IntelliJ expects some things to be different. The project files should mention that we're dealing with a plugin project and the module file should point to the plugin.xml file required for each plugin. Also, the SDK libraries are not to be included in the module file, so I excluded those from the configuration.

The following code snippet caters for this:

apply plugin: 'idea'

idea {
    project {
        languageLevel = '1.6'
        jdkName = IDEA_SDK_NAME

        ipr {
            withXml {
                it.node.find { node ->
                    node.@name == 'ProjectRootManager'
                }.'@project-jdk-type' = 'IDEA JDK'

                logger.warn "=" * 71
                logger.warn " Configured IDEA JDK '${jdkName}'."
                logger.warn " Make sure you have it configured IntelliJ before opening the project!"
                logger.warn "=" * 71
            }
        }
    }

    module {
        scopes.COMPILE.minus = [ configurations.ideaSdk ]

        iml {
            beforeMerged { module ->
                module.dependencies.clear()
            }
            withXml {
                it.node.@type = 'PLUGIN_MODULE'
                //  <component name="DevKit.ModuleBuildProperties" url="file://$MODULE_DIR$/src/main/resources/META-INF/plugin.xml" />
                def cmp = it.node.appendNode('component')
                cmp.@name = 'DevKit.ModuleBuildProperties'
                cmp.@url = 'file://$MODULE_DIR$/src/main/resources/META-INF/plugin.xml'
            }
        }
    }
}
Put it to work!

Combining the aforementioned code snippets will result in a build script that can be run on any environment. Have a look at my idea-clock plugin for a working example.

Update 1: Forms

For an IntelliJ plugin to use forms, it turned out some extra work has to be performed.
This difference is only obvious once you compare the plugin built by IntelliJ with the one built by Gradle:

  1. Include a bunch of helper classes
  2. Instrument the form classes

Including more files in the plugin was easy enough. Check out this commit to see what has to be added. Those classes are used as "helpers" for the form after instrumentation. For instrumentation an Ant task is available. This task can be loaded in Gradle and used as a last step of compilation.

Once I knew what to look for, this post helped me out: How to manage development life cycle of IntelliJ plugins with Maven, along with this build script.

Update 2: Language code generation

The JetBrains folks promote using JFlex to build the lexer for your custom language. In order to use this from Gradle, a custom version of JFlex needs to be used. This was used in an early version of the FitNesse plugin.

The monolithic frontend in the microservices architecture

Mon, 07/27/2015 - 16:39

When you are implementing a microservices architecture you want to keep services small. This should also apply to the frontend. If you don't, you will only reap the benefits of microservices for the backend services. An easy solution is to split your application up into separate frontends. When you have a big monolithic frontend that can’t be split up easily, you have to think about making it smaller. You can decompose the frontend into separate components independently developed by different teams.

Imagine you are working at a company that is switching from a monolithic architecture to a microservices architecture. The application you are working on is a big client-facing web application. You have recently identified a couple of self-contained features and created microservices to provide each functionality. Your former monolith has been carved down to the bare essentials for providing the user interface, which is your public facing web frontend. This microservice only has one functionality, which is providing the user interface. It can be scaled and deployed separately from the other backend services.

You are happy with the transition: individual services can fit in your head, multiple teams can work on different applications, and you are speaking at conferences about your experiences with the transition. However you're not quite there yet: the frontend is still a monolith that spans the different backends. This means that on the frontend you still have some of the same problems you had before switching to microservices. The image below shows a simplification of the current architecture.

Single frontend

With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Backend teams can't deliver business value without the frontend being updated since an API without a user interface doesn't do much. More backend teams means more new features, and therefore more pressure is put on the frontend team(s) to integrate new features. To compensate for this it is possible to make the frontend team bigger or have multiple teams working on the same project. Because the frontend still has to be deployed in one go, teams cannot work independently. Changes have to be integrated in the same project and the whole project needs to be tested since a change can break other features.
Another option is to have the backend teams integrate their new features with the frontend and submitting a pull request. This helps in dividing the work, but to do this effectively a lot of knowledge has to be shared across the teams to get the code consistent and on the same quality level. This would basically mean that the teams are not working independently. With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Besides not being able to scale, there is also the classical overhead of a separate backend and frontend team. Each time there is a breaking change in the API of one of the services, the frontend has to be updated. Especially when a feature is added to a service, the frontend has to be updated to ensure your customers can even use the feature. If you have a frontend small enough it can be maintained by a team which is also responsible for one or more services which are coupled to the frontend. This means that there is no overhead in cross team communication. But because the frontend and the backend can not be worked on independently, you are not really doing microservices. For an application which is small enough to be maintained by a single team it is probably a good idea not to do microservices.

If you do have multiple teams working on your platform and you had multiple smaller frontend applications instead, there would be no problem. Each frontend would act as the interface to one or more services. Each of these services will have its own persistence layer. This is known as vertical decomposition. See the image below.

(Diagram: a separate frontend per service)

When splitting up your application you have to make sure you are making the right split, which is the same as for the backend services. First you have to recognize bounded contexts in which your domain can be split. A bounded context is a partition of the domain model with a clear boundary. Within the bounded context there is high coupling and between different bounded contexts there is low coupling. These bounded contexts will be mapped to microservices within your application. This way the communication between services is also limited. In other words, you limit your API surface. This in turn will limit the need to make changes in the API and ensure truly separately operating teams.

Often you are unable to separate your web application into multiple entirely separate applications. A consistent look and feel has to be maintained and the application should behave as a single application. However, the application and the development team are big enough to justify a microservices architecture. Examples of such big client-facing applications can be found in online retail, news, social networks and other online platforms.

Although a total split of your application might not be possible, it might be possible to have multiple teams working on separate parts of the frontend as if they were entirely separate applications. Instead of splitting your web app entirely you are splitting it up in components, which can be maintained separately. This way you are doing a form of vertical decomposition while you still have a single consistent web application. To achieve this you have a couple of options.

Share code

You can share code to make sure that the look and feel of the different frontends is consistent. However then you risk coupling services via the common code. This could even result in not being able to deploy and release separately. It will also require some coordination regarding the shared code.

Therefore, when you are going to share code it is generally a good idea to think about the API that it's going to provide. Calling your shared library "common", for example, is generally a bad idea. The name suggests developers should put any code which can be shared by some other service in the library. Common is not a functional term, but a technical term. This means that the library doesn't focus on providing a specific functionality. This will result in an API without a specific goal, which will be subject to change often. This is especially bad for microservices when multiple teams have to migrate to the new version when the API has been broken.

Although sharing code between microservices has disadvantages, generally all microservices will share code by using open source libraries. Because this code is always used by a lot of projects, special care is given to not breaking compatibility. When you're going to share code it is a good idea to hold your shared code to the same standards. When your library is not specific to your business, you might as well release it publicly to encourage yourself to think twice before breaking the API or putting business-specific logic in the library.

Composite frontend

It is possible to compose your frontend out of different components. Each of these components could be maintained by a separate team and deployed independent of each other. Again it is important to split along bounded contexts to limit the API surface between the components. The image below shows an example of such a composite frontend.

(Diagram: a composite frontend built from separately maintained components)

Admittedly this is an idea we already saw in portlets during the SOA age. However, in a microservices architecture you want the frontend components to be able to deploy fully independently, and you want to make sure you do a clean separation which ensures there is no, or only limited, two-way communication needed between the components.

It is possible to integrate during development, deployment or at runtime. At each of these integration stages there are different tradeoffs between flexibility and consistency. If you want to have separate deployment pipelines for your components, you want to have a more flexible approach like runtime integration. If it is more likely different versions of components might break functionality, you need more consistency. You would get this at development time integration. Integration at deployment time could give you the same flexibility as runtime integration, if you are able to integrate different versions of components on different environments of your build pipeline. However this would mean creating a different deployment artifact for each environment.

Software architecture should never be a goal, but a means to an end

Combining multiple components via shared libraries into a single frontend is an example of development time integration. However, it doesn't give you much flexibility with regard to separate deployment. It is still a classical integration technique. But since software architecture should never be a goal, but a means to an end, it can be the best solution for the problem you are trying to solve.

More flexibility can be found in runtime integration. An example of this is using AJAX to load the HTML and other dependencies of a component. Then the main application only needs to know where to retrieve the component from. This is a good example of a small API surface. Of course, doing a request after page load means that users might see components loading. It also means that clients that don't execute JavaScript will not see the content at all. Examples are bots/spiders that don't execute JavaScript, real users who are blocking JavaScript, or users of a screen reader that doesn't execute JavaScript.
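
As a rough sketch of what such runtime integration could look like (the URL and element id below are made up for this example), the main application only needs to know where to fetch a component and where to place it:

// Minimal sketch of runtime integration: fetch a component's HTML fragment
// and insert it into a placeholder element. The URL and element id are
// hypothetical; error handling and loading of scripts/styles are omitted.
function loadComponent(url, targetId) {
  return fetch(url)
    .then(response => response.text())
    .then(html => {
      document.getElementById(targetId).innerHTML = html;
    });
}

loadComponent('/components/product-recommendations', 'recommendations-slot');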

When runtime integration via JavaScript is not an option, it is also possible to integrate components using a middleware layer. This layer fetches the HTML of the different components and composes them into a full page before returning the page to the client. This means that clients will always retrieve all of the HTML at once. An example of such middleware is the Edge Side Includes feature of Varnish. To get more flexibility it is also possible to manually implement a server which does this. An open source example of such a server is Compoxure.

Once you have your composite frontend up and running, you can start to think about the next step: optimization. Having separate components from different sources means that many resources have to be retrieved by the client. Since retrieving multiple resources takes longer than retrieving a single resource, you want to combine resources. Again, this can be done at development time or at runtime, depending on the integration techniques you chose when decomposing your frontend.

Conclusion

When transitioning an application to a microservices architecture you will run into issues if you keep the frontend a monolith. The goal is to achieve good vertical decomposition. What goes for the backend services goes for the frontend as well: split into bounded contexts to limit the API surface between components, and use integration techniques that avoid coupling. When you are working on a single big frontend it might be difficult to make this decomposition, but when you want to deliver faster by using multiple teams working on a microservices architecture, you cannot exclude the frontend from decomposition.

Resources

Sam Newman - From Macro to Micro: How Big Should Your Services Be?
Dan North - Microservices: software that fits in your head

Super fast unit test execution with WallabyJS

Mon, 07/27/2015 - 11:24

Our current AngularJS project has been under development for about 2.5 years, so the number of unit tests has increased enormously. We tend to have a coverage percentage near 100%, which led to 4000+ unit tests. These include service specs and view specs. You may know that AngularJS - when abused a bit - is not suited for super large applications, but since we tamed the beast and have an application with more than 16,000 lines of high performing AngularJS code, we want to stay in charge of the total development process without any performance losses.

We are using Karma Runner with Jasmine, which is fine for a small number of specs and for debugging, but running the full test suite takes up to 3 minutes on a 2.8Ghz MacBook Pro.

We are testing our code continuously, so we came up with a solution to split all the unit tests into several shards. This parallel execution of the unit tests decreased the execution time a lot. We will write about the details of this Karma parallelization on this blog later. Sharding helped us a lot when we want to run the full unit test suite, i.e. when using it in the pre-push hook, but during development you want quick feedback cycles about coverage and failing specs (red-green testing).

With such a long unit test cycle, even when running in parallel, many of our developers are fdescribe-ing the specs on which they are working, so that the feedback is instant. However, this is quite labor intensive and sometimes an fdescribe is pushed accidentally.

And then.... we discovered WallabyJS. It is just an ordinary test runner like Karma. Even the configuration file is almost a copy of our karma.conf.js.
The difference is in the details. Out of the box it runs the unit test suite in 50 secs, thanks to the extensive use of Web Workers. Then the fun starts.

Screenshot of Wallaby in action (IntelliJ), shamelessly grabbed from wallaby.com

I use Wallaby as an IntelliJ IDEA plugin, which adds colored annotations to the left margin of my code. Green squares indicate covered lines/statements, orange gives me partly covered code and grey means "please write a test for this functionality or I'll introduce hard to find bugs". Colorblind people see just kale green squares on every line, since the default colors are not chosen very well, but these colors are adjustable via the Preferences menu.

Clicking on a square pops up a box with a list of the tests that induce the coverage. When a test fails, it also tells me why.


A dialog box showing contextual information (wallaby.com)

Since the implementation and the tests are now instrumented, finding bugs and increasing your coverage goes a lot faster. Besides that, you don't need to hassle with fdescribes and fits to run individual tests during development. Thanks to the instrumentation, Wallaby is running your tests continuously and re-runs only the relevant tests for the parts that you are working on. In real time.

5 Reasons why you should test your code

Mon, 07/27/2015 - 09:37

It is just like in mathematics class: when I had to prove Thales’ theorem I wrote “Can’t you see that B has a right angle?! Q.E.D.”, but the teacher still gave me an F grade.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. Ok, let’s fix it, keep refreshing the page until everything is stable again. This is the point in time where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks when you are looking the other way.
If you had tests with 100% code coverage, a red error would have appeared in the console or – even better – a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.


100% Coverage feels so good

100% code coverage does not mean that you have tested everything.
It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during its test run. If you want to measure whether your specs do a fair amount of assertions, you have to do mutation testing.

This works as follows.

An automated task runs the test suite once. Then some parts of your code are modified: mainly conditions flipped, for loops made shorter/longer, etc. The test suite is run a second time. If there are tests failing after the modifications have been made, there is an assertion covering this case, which is good.
However, 100% coverage does feel really good if you are an OCD-person.
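
As a small illustration of the idea (the function, the mutation and the assertions below are made up for this example), suppose your code contains a simple age check. A mutation tool might flip the >= into a >, and only a test that asserts on the boundary value will fail against the mutant and thereby prove that the behaviour is actually asserted:

import { assert } from 'chai';

// Original implementation.
function isAdult(age) {
  return age >= 18;
}

// A mutation testing tool might generate this variant, flipping >= into >.
function isAdultMutated(age) {
  return age > 18;
}

// A test that only checks a non-boundary value survives the mutation:
assert.isTrue(isAdult(21));
assert.isTrue(isAdultMutated(21)); // the mutant passes too, so nothing is caught

// A boundary assertion kills the mutant: it passes for the original...
assert.isTrue(isAdult(18));
// ...but the equivalent check fails for the mutated version.
assert.isFalse(isAdultMutated(18));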

The better your test coverage and assertion density is, the higher the probability of catching regression bugs. Especially when an application grows, you may catch a lot of regression bugs during development, which is a good thing.

Suppose that a form shows a funny easter egg when the filled-in birthdate is 06-06-2006 and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may make changes to this line. Not because he is not funny, but he just does not know. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal 2 years later.

Still, every application contains bugs which you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, i.e. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces the bug, then fix the bug. This way it will never happen again unnoticed. You win.
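
For instance, a regression test for the broken-link example above could look like the sketch below (the editUserLink function is hypothetical). Written before the fix it fails and reproduces the bug; after the fix it keeps guarding against reintroducing it:

import { assert } from 'chai';

// Hypothetical link builder that produced the broken users//edit link.
function editUserLink(userId) {
  if (userId === undefined || userId === null) {
    throw new Error('Cannot build an edit link without a user id');
  }
  return 'users/' + userId + '/edit';
}

// Regression tests: document the expected behaviour and reproduce the bug.
assert.equal(editUserLink(24), 'users/24/edit');
assert.throws(() => editUserLink(undefined));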

2. Improve the implementation via new insights

“Premature optimization is the root of all evil” is something you hear a lot. This does not mean that you have to implement your solution pragmatically without code reuse.

Good software craftsmanship is not only about solving a problem effectively, but also about maintainability, durability, performance and architecture. Tests can help you with this. They force you to slow down and think.

If you start writing your tests and you have trouble with it, this may be an indication that your implementation can be improved. Furthermore, your tests let you think about input and output, corner cases and dependencies. So do you think that you understand all aspects of the super method you wrote that can handle everything? Write tests for this method and better code is guaranteed.

Test Driven Development even helps you optimizing your code before you even write it, but that is another discussion.

3. It saves time, really

Number one excuse not to write tests is that you do not have time for it or your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate code elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you probably would say yes. A total codebase refactoring is a ‘no go’ because you cannot oversee what parts of your application will fail. If you still accept the refactoring challenge, it will at least give you a lot of headaches and cost you a lot of time, which you could have used for writing the tests. But you had no time for writing tests, right? So your crappy implementation stays.

(Dilbert comic about fixing bugs)

A bug can always be introduced, even in well-refactored code. How many times did you say to yourself after a day of hard work that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code very well, 90% of the bugs introduced are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but when you get the hang of it, writing tests becomes second nature. It is important that you are writing code for the long term. As an application grows, it really pays off to have tests. It saves you time and developing becomes more fun as you are not being blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to. Not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear in some way what the code does.

  // Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, but some read the tests. What I like about the tests, for example when you are using a framework like Jasmine, is that they give a structured overview of all of a method's features. When you have a separate documentation file, it is as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation and forget to update it when a method signature changes, and eventually they stop writing docs.

Developers also do not like to write tests, but they at least serve more purposes than docs. If you are using the test suite as documentation, your documentation is always up to date with no extra effort!
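
For example, in a Jasmine suite the nested describe and it blocks read as a structured, always up-to-date feature list of the code under test. The calculateDiscount function below is made up for this sketch; describe, it and expect are provided by Jasmine itself:

// Hypothetical function under test, defined inline so the spec is runnable.
function calculateDiscount(order) {
  if (!order) { throw new Error('order is required'); }
  return order.total >= 100 ? order.total / 10 : 0;
}

describe('calculateDiscount', function () {
  it('gives no discount for orders below 100', function () {
    expect(calculateDiscount({ total: 10 })).toBe(0);
  });

  it('gives a 10% discount for orders of 100 or more', function () {
    expect(calculateDiscount({ total: 200 })).toBe(20);
  });

  it('throws when no order is passed', function () {
    expect(function () { calculateDiscount(); }).toThrow();
  });
});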

5. It is fun

Nowadays there are no separate testers and developers: the developers are the testers. People that write good tests are also the best programmers. Actually, your test is also a program. So if you like programming, you should like writing tests.
The reason why writing tests may feel non-productive is that it gives you the idea that you are not producing something new.


Is the build red? Fix it immediately!

However, with the modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp. They may run in a continuous integration pipeline via Jenkins, for example. If you are really cool, a new deploy to production is automatically done when the tests pass and everything else is ok. With tests you have more confidence that your code is production ready.

A lot of measurements can be generated as well, like coverage and mutation testing, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, it is first priority to fix it, to keep the codebase in good shape. It takes some discipline, but when you get used to it, you have more fun developing new features and making cool stuff.

Android: Custom ViewMatchers in Espresso

Fri, 07/24/2015 - 16:03

Somehow it seems that testing is still treated like an afterthought in mobile development. The introduction of the Espresso test framework in the Android Testing Support Library improved the situation a little bit, but the documentation is limited and it can be hard to debug problems. And you will run into problems, because testing is hard to learn when there are so few examples to learn from.

Anyway, I recently created my first custom ViewMatcher for Espresso and I figured I would like to share it here. I was building a simple form with some EditText views as input fields, and these fields should display an error message when the user entered an invalid input.


Android TextView with error message

In order to test this, my Espresso test enters an invalid value in one of the fields, presses "submit" and checks that the field is actually displaying an error message.

@Test
public void check() {
  Espresso
      .onView(ViewMatchers.withId((R.id.email)))
      .perform(ViewActions.typeText("foo"));
  Espresso
      .onView(ViewMatchers.withId(R.id.submit))
      .perform(ViewActions.click());
  Espresso
      .onView(ViewMatchers.withId((R.id.email)))
      .check(ViewAssertions.matches(
          ErrorTextMatchers.withErrorText(Matchers.containsString("email address is invalid"))));
}

The real magic happens inside the ErrorTextMatchers helper class:

public final class ErrorTextMatchers {

  /**
   * Returns a matcher that matches {@link TextView}s based on the value of their error text.
   *
   * @param stringMatcher {@link Matcher} of {@link String} with text to match
   */
  @NonNull
  public static Matcher<View> withErrorText(final Matcher<String> stringMatcher) {

    return new BoundedMatcher<View, TextView>(TextView.class) {

      @Override
      public void describeTo(final Description description) {
        description.appendText("with error text: ");
        stringMatcher.describeTo(description);
      }

      @Override
      public boolean matchesSafely(final TextView textView) {
        return stringMatcher.matches(textView.getError().toString());
      }
    };
  }
} 

The main details of the implementation are as follows. We make sure that the matcher will only match instances of TextView (and its subclasses) by returning a BoundedMatcher from withErrorText(). This makes it very easy to implement the matching logic itself in BoundedMatcher.matchesSafely(): simply take the error text from the TextView's getError() method and feed it to the next Matcher. Finally, we have a simple implementation of the describeTo() method, which is only used to generate debug output to the console.

In conclusion, it turns out to be pretty straightforward to create your own custom ViewMatcher. Who knew? Perhaps there is still hope for testing mobile apps...

You can find an example project with the ErrorTextMatchers on GitHub: github.com/smuldr/espresso-errortext-matcher.

Parallax image scrolling using Storyboards

Tue, 07/21/2015 - 07:37

Parallax image scrolling is a popular concept that is being adopted by many apps these days. It's the small attention to details like this that can really make an app great. Parallax scrolling gives you the illusion of depth by letting objects in the background scroll slower than objects in the foreground. It has been used in the past by many 2d games to make them feel more 3d. True parallax scrolling can become quite complex, but it's not very hard to create a simple parallax image scrolling effect on iOS yourself. This post will show you how to add it to a table view using Storyboards.

NOTE: You can find all source code used by this post on https://github.com/lammertw/ParallaxImageScrolling.

The idea here is to create a UITableView with an image header that has a parallax scrolling effect. When we scroll down the table view (i.e. swipe up), the image should scroll at half the speed of the table. And when we scroll up (i.e. swipe down), the image should become bigger so that it feels like it's stretching while we scroll. The latter is not really a parallax scrolling effect, but it is commonly used in combination with it. The following animation shows these effects:

(Animation: parallax scrolling with the stretch effect)

But what if we want a "Pull down to Refresh" effect and need to add a UIRefreshControl? Well, then we just drop the stretch effect when scrolling up:  

(Animation: parallax scrolling combined with Pull to Refresh)

And as you might expect, the variation with Pull to Refresh is actually a lot easier to accomplish than the one without.

Parallax Scrolling Libraries

While you can find several Objective-C or Swift libraries that provide parallax scrolling similar to the effects shown here, you'll find that it's not that hard to create these yourself. Doing it yourself has the benefit of customizing it exactly the way you want it, and of course it will add to your experience. Plus it might be less code than integrating with such a library. However, if you need exactly what such a library provides, then using it might work better for you.

The basics

NOTE: You can find all the code of this section at the no-parallax-scrolling branch.

Let's start with a simple example that doesn't have any parallax scrolling effects yet.

(Screenshot: the table view without any parallax scrolling effects)

Here we have a standard UITableViewController with a cell containing our image at the top and another cell below it with some text. Here is the code:

class ImageTableViewController: UITableViewController {

  override func numberOfSectionsInTableView(tableView: UITableView) -> Int {
    return 2
  }

  override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return 1
  }

  override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    var cellIdentifier = ""
    switch indexPath.section {
    case 0:
      cellIdentifier = "ImageCell"
    case 1:
      cellIdentifier = "TextCell"
    default: ()
    }

    let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath) as! UITableViewCell

    return cell
  }

  override func tableView(tableView: UITableView, heightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    switch indexPath.section {
    case 0:
      return UITableViewAutomaticDimension
    default: ()
      return 50
    }
  }

  override func tableView(tableView: UITableView, estimatedHeightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    switch indexPath.section {
    case 0:
      return 200
    default: ()
      return 50
    }
  }

}

The only thing of note here is that we're using UITableViewAutomaticDimension for automatic cell heights determined by constraints in the cell: we have a UIImageView with constraints to use the full width and height of the cell and a fixed aspect ratio of 2:1. Because of this aspect ratio, the height of the image (and therefore of the cell) is always half of the width. In landscape it looks like this:

(Screenshot: the table view in landscape)

We'll see later why this matters.

Parallax scrolling with Pull to Refresh

NOTE: You can find all the code of this section at the pull-to-refresh branch.

As mentioned before, creating the parallax scrolling effect is easiest when it doesn't need to stretch. Commonly you'll only want that if you have a Pull to Refresh. Adding the UIRefreshControl is done in the standard way so I won't go into that.

Container view
The rest is also quite simple. With the basics from above as our starting point, what we need to do first is add a UIView around our UIImageView that acts as a container. Since our image will change its position while we scroll, we cannot use it anymore to calculate the height of the cell. The container view will have exactly the constraints that our image view had: use the full width and height of the cell and have an aspect ratio of 2:1. Also make sure to enable Clip Subviews on the container view to make sure the image view is clipped by it.

Align Center Y constraint
The image view, which is now inside the container view, will keep its aspect ratio constraint and use the full width of the container view. For the y position we'll add an Align Center Y constraint to vertically center the image within the container. All that looks something like this: (screenshot of the constraints in Interface Builder)

Parallax scrolling using constraint
When we run this code now, it will still behave exactly as before. What we need to do is make the image view scroll with half the speed of the table view when scrolling down. We can do that by changing the constant of the Align Center Y constraint that we just created. First we need to connect it to an outlet of a custom UITableViewCell subclass:

class ImageCell: UITableViewCell {
  @IBOutlet weak var imageCenterYConstraint: NSLayoutConstraint!
}

When the table view scrolls down, we need to lower the Y position of the image by half the amount that we scrolled. To do that we can use scrollViewDidScroll and the content offset of the table view. Since our UITableViewController already conforms to UIScrollViewDelegate, overriding that method is enough:

override func scrollViewDidScroll(scrollView: UIScrollView) {
  imageCenterYConstraint?.constant = min(0, -scrollView.contentOffset.y / 2.0) // only when scrolling down so we never let it be higher than 0
}

We're left with one small problem. The imageCenterYConstraint is connected to the ImageCell that we created, while the scrollViewDidScroll method is in the view controller. So what's left to do is create an imageCenterYConstraint property in the view controller and assign it when the cell is created:

weak var imageCenterYConstraint: NSLayoutConstraint?

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  var cellIdentifier = ""
  switch indexPath.section {
  case 0:
    cellIdentifier = "ImageCell"
  case 1:
    cellIdentifier = "TextCell"
  default: ()
  }

  let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath) as! UITableViewCell

  // the new part of code: hand the cell's constraint to the view controller
  if let imageCell = cell as? ImageCell {
    imageCenterYConstraint = imageCell.imageCenterYConstraint
  }

  return cell
}

That's all we need to do for our first variation of the parallax image scrolling. Let's go on with something a little more complicated.

Parallax scrolling without Pull to Refresh

NOTE: You can find all the code of this section at the no-pull-to-refresh branch.

When starting from the basics, we need to add a container view again like we did in the Container view paragraph from the previous section. The image view needs some different constraints though. Add the following constraints to the image view:

  • As before, keep the 2:1 aspect ratio
  • Add a Leading Space and Trailing Space of 0 to the Superview (our container view) and set the priority to 900. We will break these constraints when stretching the image because the image will become wider than the container view. However we still need them to determine the preferred width.
  • Align Center X to the Superview. We need this one to keep the image in the center when we break the Leading and Trailing Space constraints.
  • Add a Bottom Space and Top Space of 0 to the Superview. Create two outlets to the cell class ImageCell like we did in the previous section for the center Y constraint. We'll call these bottomSpaceConstraint and topSpaceConstraint. Also assign these from the cell to the view controller like we did before so we can access them in our scrollViewDidScroll method.

The result: (screenshot of the full set of constraints). We now have all the constraints we need to do the effects for scrolling up and down.

Scrolling down
When we scroll down (swipe up) we want the same effect as in our previous section. Instead of having an 'Align Center Y' constraint that we can change, we now need to do the following:

  • Set the bottom space to minus half of the content offset so it will fall below the container view.
  • Set the top space to plus half of the content offset so it will be below the top of the container view.

With these two calculations we effectively reduce the scrolling speed of the image view to half of the table view's scrolling speed.

bottomSpaceConstraint?.constant = -scrollView.contentOffset.y / 2
topSpaceConstraint?.constant = scrollView.contentOffset.y / 2

Scrolling up
When the table view scrolls up (swipe down) the container view is going down. What we want here is that the image view sticks to the top of the screen instead of going down as well. All we need for that is to set the constant of the topSpaceConstraint to the content offset. That means the height of the image will increase. Because of our 2:1 aspect ratio, the width of the image will grow as well. This is why we had to lower the priority of the Leading and Trailing constraints: the image no longer fits inside the container and breaks those constraints.

topSpaceConstraint?.constant = scrollView.contentOffset.y

We're left with one problem now. When the image sticks to the top while the container view goes down, it means that the image falls outside the container view. And since we had to enable Clip Subviews for scrolling down, we now get something like this: (screenshot showing the top of the image being cut off by the container view)

We can't see the top of the image since it's outside the container view. So what we need is to clip when scrolling down and not clip when scrolling up. We can only do that in code so we need to connect the container view to an outlet, just as we've done with the constraints. Then the final code in scrollViewDidScroll becomes:

func scrollViewDidScroll(scrollView: UIScrollView) { 
  if scrollView.contentOffset.y >= 0 { 
    // scrolling down 
    containerView.clipsToBounds = true 
    bottomSpaceConstraint?.constant = -scrollView.contentOffset.y / 2 
    topSpaceConstraint?.constant = scrollView.contentOffset.y / 2 
  } else { 
    // scrolling up 
    topSpaceConstraint?.constant = scrollView.contentOffset.y 
    containerView.clipsToBounds = false 
  } 
}
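
For completeness, handing the outlets from the cell to the view controller in tableView(_:cellForRowAtIndexPath:) could look roughly like this sketch (the "ImageCell" reuse identifier and the containerView outlet name are assumptions; the actual code is on GitHub):

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  // "ImageCell" is an assumed reuse identifier.
  let cell = tableView.dequeueReusableCellWithIdentifier("ImageCell", forIndexPath: indexPath)

  if let imageCell = cell as? ImageCell {
    // Keep references in the view controller so scrollViewDidScroll
    // can change the constraint constants and toggle clipping.
    topSpaceConstraint = imageCell.topSpaceConstraint
    bottomSpaceConstraint = imageCell.bottomSpaceConstraint
    containerView = imageCell.containerView
  }

  return cell
}
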
Conclusion

So there you have it. Two variations of parallax scrolling without too much effort. As mentioned before, use a dedicated library if you have to, but don't be afraid that it's too complicated to do it yourself.

Additional notes

If you've seen the source code on GitHub you might have noted a few additional things. I didn't want to mention them in the main body of this post to avoid distraction, but they're important enough to cover here anyway.

  • The aspect ratio constraints need to have a priority lower than 1000. Set them to 999 or 950 (make sure they're higher than the Leading and Trailing Space constraints that we set to 900 in the last section). This is because of an issue related to cells with dynamic height (using UITableViewAutomaticDimension) and rotation. When the user rotates the device, the cell gets its new width while still having the previous height; the new height calculation is not yet done at the beginning of the rotation animation. At that moment the 2:1 aspect ratio cannot be satisfied, which is why we cannot make it required (1000). Right after the new height is calculated, the aspect ratio constraint kicks back in. The state in which the aspect ratio constraint cannot be satisfied doesn't even seem to be visible, so don't worry about your cell looking strange. Also, leaving it at 1000 only seems to generate an error message about the constraint, after which everything continues as expected.
  • Instead of assigning the outlets from the ImageCell to new variables in the view controller, you can also create a scrollViewDidScroll method in the cell, which is then called from the scrollViewDidScroll of your view controller. You can get the cell using cellForRowAtIndexPath. See the code on GitHub to see this done, or the rough sketch below.
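
A minimal sketch of that delegation approach, assuming the image cell sits in the first row of the first section and the view controller has a tableView reference (names otherwise as introduced above):

// In ImageCell: the parallax logic moves into the cell itself.
func scrollViewDidScroll(scrollView: UIScrollView) {
  if scrollView.contentOffset.y >= 0 {
    // scrolling down
    containerView.clipsToBounds = true
    bottomSpaceConstraint.constant = -scrollView.contentOffset.y / 2
    topSpaceConstraint.constant = scrollView.contentOffset.y / 2
  } else {
    // scrolling up
    topSpaceConstraint.constant = scrollView.contentOffset.y
    containerView.clipsToBounds = false
  }
}

// In the view controller: look up the cell and forward the call.
func scrollViewDidScroll(scrollView: UIScrollView) {
  let indexPath = NSIndexPath(forRow: 0, inSection: 0) // assumption: the image cell is the first row
  if let imageCell = tableView.cellForRowAtIndexPath(indexPath) as? ImageCell {
    imageCell.scrollViewDidScroll(scrollView)
  }
}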

Continuous Delivery of Docker Images

Mon, 07/13/2015 - 20:05

Our customer wanted to drastically cut down time to market for the new version of their application. Large quarterly releases should be replaced by small changes that can be rolled out to production multiple times a day. Below we will explain how to use Docker and Ansible to support this strategy, or, in our customer’s words, how to ‘develop software at the speed of thought’.

To facilitate development at the speed of thought we needed the following:

  1. A platform to deploy Docker images to
  2. Logging, monitoring and alerting
  3. Application versioning
  4. Zero downtime deployment
  5. Security

We’ll discuss each of these below.

Platform
Our Docker images run on an Ubuntu host because we needed a well-known, supported Linux distribution. In our case we install the OS from an image and run all other software in containers. Each Docker container hosts exactly one process, so it is easy to see what a container is supposed to do. Examples of containers include:

  • Java VMs to run our Scala services
  • HA Proxy
  • Syslog-ng
  • A utility to rotate log files
  • And even an Oracle database (not on acceptance and production because we expected support issues with that setup, but for development it works fine)

Most of the software running in containers is started with a bash script, but recently we started experimenting with Go so a container may need no more than a single executable.

Logging, monitoring and alerting
To save time we decided to offload the development effort for monitoring and alerting to hosted services where possible. This resulted in contracts with Loggly to store application log files, Librato to collect system metrics and OpsGenie to alert Ops based on rules defined in Loggly. Log files are shipped to Loggly using their Syslog-ng plugin. Our application already relied on statsd, so to avoid having to rewrite code we created a statsd emulator that pushes metrics to Librato. This may change in the future if we find the time, but for now it works fine. We’re using the Docker stats API to collect information at the container level.

Application versioning
In the Java world the deliverable would be a jar file published to a repository like Artifactory or Nexus. This is still possible when working with Docker, but it makes more sense to use Docker images as deliverables. The images contain everything needed to run the service, including the jar file. Like jar files, Docker images are published, in this case to a Docker registry. We started with Docker Hub online, but we wanted faster delivery and more control over who can access the images, so we introduced our own private Docker registry on premise. This works great and we are pushing around 30 to 50 images a day.
The version tag we use for a container is the date and time it was built. When the build starts we tag the sources in Git with a name based on the date and time, e.g. 20150619_1504. Components that pass their tests are assembled into a release based on a text file, a composition, that lists all components that should be part of the release. The composition is tagged with a c_ prefix and a date/time stamp and is deployed to the integration test environment. Then a new test run determines whether the assembly still works. If so, the composition is labeled with a new rc tag, rc_20150619_1504 in our example. Releases that pass the integration test are deployed to acceptance and eventually production, but not automatically: we decided to make deployment a management decision, executed by a Jenkins job.
This strategy allows us to recreate a version of the software from source, by checking out tags that make up a release and building again, or from the Docker repository by deploying all versions of components as they are listed in the composition file.
Third-party components are tagged using the version number of the supplier.

Zero downtime deployment
To achieve high availability, we chose Ansible to deploy a Docker container based on the composition file mentioned above. Ansible connects to a host and then uses the Docker command to do the following:

  1. Check if the running container version differs from the one we want to deploy
  2. If the version is different, stop the old container and start the new one
  3. If the version is the same, don’t do anything

This saves a lot of time because Ansible will only change containers that need to be changed and leave all others alone.
Using Ansible we can also implement Zero Downtime Deployment:

  1. First shut down the health container on one node
  2. This causes the load balancer to remove the node from the list of active nodes
  3. Update the first node
  4. Restart the health container
  5. Run the update script in parallel on all other nodes.

Security
The problem with the Docker API is that you are either all in or all out, with no levels in between. This means, for example, that if you mount the Docker socket into a container to look at Docker stats, you also allow it to start and stop containers. And if you allow access to the Docker executable, you also grant access to configuration information like passwords passed to the container at deployment time. To fix this problem we created a Docker wrapper. This wrapper forbids starting privileged containers and hides some of the information returned by Docker inspect.
One simple security rule is that software that is not installed or not running can’t be exploited. Applied to Docker images this means we removed everything we don’t need and made the images as small as possible. Teams extend the base Linux image only by adding the jar file for their application. Recently we started experimenting with Go to run utilities, because a Go executable needs no extra software to run. We’re also testing smaller container images.
Finally, remember not to run as root and carefully consider what file systems to share between container and host.

Summary
In summary, we found a way to package software, both standard utilities and Scala components, in containers, and to create a tagged and versioned composition that is tested and moves from one environment to the next as a unit. Using Ansible we orchestrate the deployment of new releases while always keeping at least one server running.
In the future we plan to work on reducing image size by stripping down the base OS and deploying more utilities as Go containers. We will also continue work on our security wrapper and plan to investigate Consul to replace our home-made service registry.

This blog was based on a talk by Armin Čoralić at XebiCon 2015. Watch Armin’s presentation here.