
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Xebia Blog

How to Dockerize your Dropwizard Application

Wed, 10/29/2014 - 10:47

If you want to deploy your Dropwizard Application on a Docker server, you can Dockerize it. Since a Dropwizard Application is already packaged as an executable Java ARchive (JAR) file, creating a Docker image for such an application should be easy.

 

In this blog, you will learn how to Dockerize a Dropwizard Application in four easy steps.

Before you start

  • You are going to use the Dropwizard-example application, which can be found in the Dropwizard GitHub repository.
  • Additionally, you need Docker. I used Boot2Docker to run the Dockerized Dropwizard Application on my laptop. If you use Boot2Docker, you may need this Boot2Docker workaround to access your Dockerized Dropwizard Application.
  • This blog does not describe how to create Dropwizard applications. The Dropwizard getting started guide provides an excellent starting point if you would like to know more about building your own Dropwizard applications.

 

Step 1: create a Dockerfile

You can start with creating a Dockerfile. Docker can automatically build images by reading the instructions described in this file. Your Dockerfile could look like this:

FROM dockerfile/java:openjdk-7-jdk
ADD dropwizard-example-1.0.0.jar /data/dropwizard-example-1.0.0.jar
ADD example.keystore /data/example.keystore
ADD example.yml /data/example.yml
RUN java -jar dropwizard-example-1.0.0.jar db migrate /data/example.yml
CMD java -jar dropwizard-example-1.0.0.jar server /data/example.yml
EXPOSE 8080

 

The Dropwizard Application needs a Java runtime, so you can start from a base image already available on Docker Hub, for example dockerfile/java:openjdk-7-jdk.

You must add the Dropwizard Application files to the image, using the ADD instruction in your Dockerfile.

Next, simply specify the commands of your Dropwizard Application that you want to execute during image build and container runtime. In the example above, the db migrate command is executed when the Docker image is built, and the server command is executed when you issue a docker run command to create a running container.

Finally, the EXPOSE instruction tells Docker that your container will listen on the specified port(s) at runtime.

 

Step 2: build the Docker image

Place the Dockerfile and your application files in a directory and execute the docker build command to build a Docker image.

docker@boot2docker:~$ docker build -t dropwizard/dropwizard-example ~/dropwizard/

 

In the console output you should be able to see that the Dropwizard db migrate command is executed. If everything is ok, the last line reported informs you that the image was built successfully.

Successfully built dd547483b57b

 

Step 3: run the Docker image

Use the docker run command to create a container based on the image you have created. If you need to find your image id, use the docker images command to list your images. It should take around 3 seconds to start the Dockerized Dropwizard example application.

docker run -p 8080:8080 dd547483b57b

Notice that I included the -p option to add a network port binding, which maps port 8080 inside the container to port 8080 on the Docker host. You can verify whether your container is running using the docker ps command.

docker@boot2docker:~$ docker ps

CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS                    NAMES

3b6fb75adad6        dropwizard/dropwizard-example:latest   "/bin/sh -c 'java -j   3 minutes ago       Up 3 minutes        0.0.0.0:8080->8080/tcp   high_turing

 

Step 4: test the application

Now the application is ready for use. You can access it using your Docker host IP address and the forwarded port 8080. For example, use the Google Advanced REST Client app to register "John Doe".
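If you prefer a scripted check over a REST client UI, the same request can also be sent from a few lines of Java. This is only a minimal sketch, not part of the original walkthrough: the Docker host IP and the /people endpoint with its fullName/jobTitle fields are assumptions based on the dropwizard-example application.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RegisterPerson {
    public static void main(String[] args) throws Exception {
        // Assumed boot2docker default IP; replace with your own Docker host address.
        URL url = new URL("http://192.168.59.103:8080/people");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Field names assumed from the dropwizard-example Person representation.
        String json = "{\"fullName\":\"John Doe\",\"jobTitle\":\"Chief Wizard\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}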


How to create Java microservices with Dropwizard

Mon, 10/27/2014 - 15:10

On Tuesday October 14th the Amsterdam Middleware Meetup experimented with Dropwizard. The idea was to find out what this technology is about, where it could be useful and what the alternatives are. So below I’ll give you an overview of Dropwizard and compare it to Spring Boot.
The Dropwizard website claims:

Dropwizard pulls together stable, mature libraries from the Java ecosystem into a simple, light-weight package that lets you focus on getting things done.

I’ll discuss each of these claims below.

Stable and mature
Dropwizard uses Jetty, Jersey, Jackson and Metrics as its most important frameworks, but also a host of other stuff like Guava, Liquibase and Joda Time. The latest Dropwizard release is version 0.7.1, released on June 20th 2014. It depends on these versions of some core libraries:
Jetty - 9.2.3.v20140905 - September 2014
Jackson - 2.4.1 - June 2014
Jersey - 2.11 - July 2014

This list shows that stable does not mean out-of-date: the versions of these core libraries are all recent. I guess 'stable' here means libraries with a long history.

Simple
The components of a Dropwizard application are listed below (taken from the tutorial at http://dropwizard.io/getting-started.html); a minimal sketch of the Application class follows the list.

  1. Application (HelloWorldApplication.java): the application's main method, responsible for startup.
  2. Configuration (HelloWorldConfiguration.java): sets the configuration for an environment; this is where you may set hostnames for systems the application depends on or set usernames.
  3. Data object (Saying.java).
  4. Resource (HelloWorldResource.java): the service implementation entry point.
  5. Health Check (TemplateHealthCheck.java): runtime tests that show whether the application still works.
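For reference, a minimal Application class along the lines of the getting-started guide looks roughly like this (a sketch only; the configuration getters, resource and health check classes are the tutorial's and are not shown here):

import io.dropwizard.Application;
import io.dropwizard.setup.Bootstrap;
import io.dropwizard.setup.Environment;

public class HelloWorldApplication extends Application<HelloWorldConfiguration> {

    public static void main(String[] args) throws Exception {
        new HelloWorldApplication().run(args); // e.g. "server hello-world.yml"
    }

    @Override
    public void initialize(Bootstrap<HelloWorldConfiguration> bootstrap) {
        // bundles and commands would be registered here
    }

    @Override
    public void run(HelloWorldConfiguration configuration, Environment environment) {
        // wire the tutorial's resource and health check into the environment
        environment.jersey().register(new HelloWorldResource(
                configuration.getTemplate(), configuration.getDefaultName()));
        environment.healthChecks().register("template",
                new TemplateHealthCheck(configuration.getTemplate()));
    }
}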

Light weight
We did some experiments trying to answer the question whether Dropwizard applications are light weight. The list below summarizes some of the sizes of deployments and tools:

  • Tomcat installation: 14 MB
  • Tomcat lib folder: 7 MB
  • Jetty installation: 14.6 MB
  • Jetty inside the Dropwizard jar: 5.4 MB
  • Dropwizard tutorial example: 10 MB
  • Dropwizard extended example: 20 MB
  • Dropwizard Hibernate classes in the package: 5 MB

A Tomcat or Jetty installation takes about 14 MB, but if you count only the lib folder the size goes down to about 7 MB. The Jetty folder in Dropwizard, however, is only 5.4 MB. Apparently Dropwizard managed to strip away some code you don't really need (or it is packaged somewhere else; we didn't look into that).
Building the tutorial results in a 10 MB jar, so if you would otherwise run a webapp in its own Tomcat instance, switching to Dropwizard saves quite a bit. On the other hand, deployment size isn't all that important if we're still talking less than 50 MB.
Compared to a default WebLogic install (513 MB, WebLogic only, on OS X), however, the savings are huge (but this is also true when you compare WebLogic to Tomcat or Jetty).

Productivity
We tried to run the build for the tutorial application (dropwizard-example in the dropwizard project on GitHub). This works fine and takes about 8 seconds, using mocks for external connections. One option to explore would be to run tests against a deployed application. We are used to deploying an application for test taking lots of time and resources, but starting a Dropwizard app is quite cheap, so it would be possible to run an integration test of the services at the end of a build. This would be quite hard to do with e.g. WebLogic or WebSphere.
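As an indication of what such an in-build integration test could look like, here is a hedged sketch using Dropwizard's dropwizard-testing module (DropwizardAppRule with JUnit); the class names, configuration file and /hello-world path are assumed to follow the getting-started tutorial and are not prescribed by this post.

import io.dropwizard.testing.junit.DropwizardAppRule;
import org.junit.ClassRule;
import org.junit.Test;

import java.net.HttpURLConnection;
import java.net.URL;

import static org.junit.Assert.assertEquals;

public class HelloWorldIntegrationTest {

    // Starts the real application once for this test class, using the tutorial's config file.
    @ClassRule
    public static final DropwizardAppRule<HelloWorldConfiguration> APP =
            new DropwizardAppRule<>(HelloWorldApplication.class, "hello-world.yml");

    @Test
    public void applicationRespondsOverHttp() throws Exception {
        URL url = new URL("http://localhost:" + APP.getLocalPort() + "/hello-world");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        assertEquals(200, connection.getResponseCode());
    }
}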

Spring Boot
Spring Boot is interesting, as is the discussion around the differences between Spring Boot and Dropwizard. See https://groups.google.com/forum/#!topic/dropwizard-user/vH1h2PgC8bU

The official Spring Boot website says: Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.
It's good to see a platform change according to new insights, but still, I remember Rod Johnson saying some ten years ago that J2EE was bloated and complex and Spring was the answer. Now it seems we need Spring Boot to make Spring simple? Or is it just that we don't need application servers anymore to divide resources among processes?

Dropwizard and Docker
Finally we experimented with running Dropwizard in a Docker container. This can be done with limited effort because Dropwizard applications have such a small number of dependencies. Thomas Kruitbosch will report on this later.

References
Spring Boot: http://projects.spring.io/spring-boot/
Dropwizard: http://dropwizard.io/

How to deploy a Docker application into production on Amazon AWS

Fri, 10/17/2014 - 17:00

Docker reached production status a few months ago. But having the container technology alone is not enough: you need a complete platform infrastructure before you can deploy your Docker application in production. Amazon AWS offers exactly that: a production-quality platform that offers capacity provisioning, load balancing, scaling, and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.

For demonstration purposes, you are going to use the Node.js application that was built for CloudFoundry and used to demonstrate Deis in a previous post: a truly useful app whose sources are available on GitHub.

1. Create a Dockerfile

The first thing you need to do is create a Dockerfile to build an image. This is quite simple: you install the Node.js and npm packages, copy the source files and install the JavaScript modules.

# DOCKER-VERSION 1.0
FROM    ubuntu:latest
#
# Install nodejs npm
#
RUN apt-get update
RUN apt-get install -y nodejs npm
#
# add application sources
#
COPY . /app
RUN cd /app; npm install
#
# Expose the default port
#
EXPOSE  5000
#
# Start command
#
CMD ["nodejs", "/app/web.js"]
2. Test your Docker application

Now you can create the Docker image and test it.

$ docker build -t sample-nodejs-cf .
$ docker run -d -p 5000:5000 sample-nodejs-cf

Point your browser at http://localhost:5000, click the 'start' button and Presto!

3. Zip the sources

Now that you know the instance works, you zip the source files. The image will be built on Amazon AWS based on your Dockerfile.

$ zip -r /tmp/sample-nodejs-cf-srcs.zip .
4. Deploy Docker application to Amazon AWS

Now you install and configure the Amazon AWS command line interface (CLI) and deploy the Docker source files to Elastic Beanstalk. You can do this all manually, but here you use the deploy-to-aws.sh script that I created.

$ deploy-to-aws.sh \
         sample-nodejs-cf \
         /tmp/sample-nodejs-cf-srcs.zip \
         demo-env

After about 8-10 minutes your application is running. The output should look like this:

INFO: creating application sample-nodejs-cf
INFO: Creating environment demo-env for sample-nodejs-cf
INFO: Uploading sample-nodejs-cf-srcs.zip for sample-nodejs-cf, version 1412948762.
upload: ./sample-nodejs-cf-srcs.zip to s3://elasticbeanstalk-us-east-1-233211978703/1412948762-sample-nodejs-cf-srcs.zip
INFO: Creating version 1412948762 of application sample-nodejs-cf
INFO: demo-env in status Launching, waiting to get to Ready..
...
INFO: demo-env in status Launching, waiting to get to Ready..
INFO: Updating environment demo-env with version 1412948762 of sample-nodejs-cf
INFO: demo-env in status Updating, waiting to get to Ready..
...
INFO: demo-env in status Updating, waiting to get to Ready..
INFO: Version 1412948762 of sample-nodejs-cf deployed in environment
INFO: current status is Ready, goto http://demo-env-vm2tqi3qk4.elasticbeanstalk.com
5. Test your Docker application on the internet!

Your application is now available on the Internet. Browse to the designated URL and click on start. When you increase the number of instances at Amazon, they will appear in the application. When you deploy a new version of the application, you can observe the new version appear without any errors in the client application.

For more information, go to Amazon Elastic Beanstalk adds Docker support and Dockerizing a Node.js Web App.

Then When Given

Fri, 10/17/2014 - 14:50

People who practice ATDD all know how frustrating it can be to write automated examples, especially when you get stuck overthinking the preconditions of the examples.

This post describes an alternative approach to writing acceptance tests: write them backwards!

Imagine that you are building the very first online phone book. We need to define an acceptance test for viewing the location of a florist. Using the Given-When-Then formula, you would probably describe the behaviour like this:


Given I am on the online phone book homepage
When I type “Florist” in the business type field
And I click …
...

Most of the time you will be discussing and describing details that have nothing to do with viewing the location of a florist. To avoid this, write down the Then clause of the formula first.
Make sure the Then clause contains an observable result.


Then I see the location “Floriststreet 123”

Next, we will try to answer the following question: What caused the Then clause?
Make sure the When clause contains an actor and an action.


When I click “View map” of the search result
Then I see the location “Floriststreet 123”

The last thing we will need to do is answer the following question: Why can I perform that action?
Make sure the Given clause contains a simple precondition.


Given I see a search result for florist “Floral Designs”
When I click “View map” of the search result
Then I see the location “Floriststreet 123”

You might have noticed that I left out certain parts, such as the user going to the homepage and selecting UI objects in the search area. They were not worth mentioning in the Given-When-Then formula. Too much detail makes us lose focus on what we really want to check. The essence of this acceptance test is clicking the "View map" link and exposing the location to the user.

Try it a couple of times and let me know how it went.

AngularJS Training Week

Tue, 10/14/2014 - 07:00

Just a few more weeks and it's the AngularJS Training Week at Xebia in Hilversum (The Netherlands): four days full of AngularJS content, from 17 to 20 October 2014. During these days we will cover AngularJS basics, advanced AngularJS topics, tooling & scaffolding, and testing with Jasmine, Karma and Protractor.

If you already have some experience or if you are only interested in one or two of the topics, then you can sign up for just the days that are of interest to you.

Visit www.angular-training.com for a full overview of the days and topics or sign up on the Xebia Training website using the links below.

Fast and Easy integration testing with Docker and Overcast

Mon, 10/13/2014 - 18:40
Challenges with integration testing

Suppose that you are writing a MongoDB driver for Java. To verify that all the implemented functionality works correctly, you ideally want to test it against a REAL MongoDB server. This brings a couple of challenges:

  • MongoDB is not written in Java, so we cannot easily embed it in our Java application.
  • We need to install and configure MongoDB somewhere and maintain the installation, or write scripts to set it up as part of our test run.
  • Every test we run against the Mongo server will change its state, and tests might influence each other. We want to isolate our tests as much as possible.
  • We want to test our driver against multiple versions of MongoDB.
  • We want to run the tests as fast as possible. If we want to run tests in parallel, we need multiple servers. How do we manage them?

Let's try to address these challenges.

First of all, we do not really want to implement our own MongoDB driver. Many implementations exist, and we will be reusing the MongoDB Java driver to focus on how one would write the integration test code.

Overcast and Docker

We are going to use Docker and Overcast. You probably already know Docker: it's a technology to run applications inside software containers. Overcast is the library we will use to manage Docker for us. Overcast is an open source Java library developed by XebiaLabs to help you write tests that connect to cloud hosts. Overcast has support for various cloud platforms, including EC2, VirtualBox, Vagrant and Libvirt (KVM). Recently, I added support for Docker in Overcast version 2.4.0.

Overcast helps you decouple your test code from the cloud host setup. You can define a cloud host with all its configuration separately from your tests; in your test code you only refer to a specific Overcast configuration. Overcast will take care of creating, starting and provisioning that host, and when the tests are finished it will also tear the host down. In your tests you use Overcast to get the hostname and ports of this cloud host so you can connect to it, because usually these are determined dynamically.

We will use Overcast to create Docker containers running a MongoDB server. Overcast will help us retrieve the port dynamically exposed by the Docker host. In our case Docker runs on an external Linux host, which will always be the Docker host, and Overcast uses a TCP connection to communicate with it. We map the internal ports to a port on the Docker host to make them externally available: MongoDB internally runs on port 27017, but Docker maps this port to a local port in the range 49153 to 65535 (defined by Docker).

Setting up our tests

Let's get started. First, we need a Docker image with MongoDB installed. Thanks to the Docker community, this is as easy as reusing one of the existing images from the Docker Hub. All the hard work of creating such an image has already been done for us, and thanks to containers we can run it on any host capable of running Docker containers. How do we configure Overcast to run the MongoDB container? This is the minimal configuration we put in a file called overcast.conf:

mongodb {
    dockerHost="http://localhost:2375"
    dockerImage="mongo:2.7"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

That's all! The dockerHost is configured to be localhost with the default port. This is the default value, so you can omit it. The Docker image mongo:2.7 will be pulled automatically from the central Docker registry. We set exposeAllPorts to true to tell Docker to dynamically map all ports exposed by the image, and remove to true to make sure the container is automatically removed when stopped. Notice that we override the default container startup command by passing in an extra parameter, "--smallfiles", to improve testing performance. For our setup this is all we need, but Overcast also has support for defining static port mappings, setting environment variables, etc. Have a look at the Overcast documentation for more details.

How do we use this Overcast host in our test code? Let's have a look at the test code that sets up the Overcast host and instantiates the MongoDB client that is used by every test. The code uses the TestNG @BeforeMethod and @AfterMethod annotations.

private CloudHost itestHost;
private Mongo mongoClient;

@BeforeMethod
public void before() throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost("mongodb");
    itestHost.setup();

    String host = itestHost.getHostName();
    int port = itestHost.getPort(27017);

    MongoClientOptions options = MongoClientOptions.builder()
        .connectTimeout(300 * 1000)
        .build();

    mongoClient = new MongoClient(new ServerAddress(host, port), options);
    logger.info("Mongo connection: " + mongoClient.toString());
}

@AfterMethod
public void after(){
    mongoClient.close();
    itestHost.teardown();
}

It is important to understand that the mongoClient is the object under test. As mentioned before, we borrowed this library to demonstrate how one would integration-test such a library. The itestHost is the Overcast CloudHost. In before(), we instantiate the cloud host using the CloudHostFactory. The setup() call will pull the required images from the Docker registry, create a Docker container, and start this container. We get the host and port from the itestHost and use them to build our Mongo client. Notice that we put a high connection timeout on the connection options, to make sure the MongoDB server is started in time. Especially on the first run it can take some time to pull the images; you can of course always pull them beforehand. In the @AfterMethod, we simply close the connection with MongoDB and tear down the Docker container.

Writing a test

The before and after methods are executed for every test, so we get a completely clean MongoDB server for every test, running on a different port. This completely isolates our test cases so that no tests can influence each other. You are free to choose your own testing strategy; sharing a cloud host between multiple tests is also possible. Let's have a look at one of the tests we wrote for the Mongo client:

@Test
public void shouldCountDocuments() throws DockerException, InterruptedException, UnknownHostException {

    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");
    BasicDBObject doc = new BasicDBObject("name", "MongoDB");

    for (int i = 0; i < 100; i++) {
        WriteResult writeResult = coll.insert(new BasicDBObject("i", i));
        logger.info("writing document " + writeResult);
    }

    int count = (int) coll.getCount();
    assertThat(count, equalTo(100));
}

Even without knowledge of MongoDB this test should not be hard to understand. It creates a database and a new collection and inserts 100 documents into it. Finally the test asserts that the getCount method returns the correct number of documents in the collection. Many more aspects of the MongoDB client can be tested in additional tests in this way; in our example setup we have implemented two more tests to demonstrate this, so the example project contains three tests. When you run the three example tests sequentially (assuming the mongo Docker image has been pulled), you will see that it takes only a few seconds to run them all. That is extremely fast.
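The two additional tests are not listed in this post. As an indication only, a hypothetical extra test in the same style (reusing the mongoClient fixture and the same MongoDB 2.x driver and Hamcrest APIs as above) could look like this:

@Test
public void shouldFindInsertedDocument() {
    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");

    // insert a single document and read it back by name
    coll.insert(new BasicDBObject("name", "MongoDB").append("type", "database"));
    DBObject found = coll.findOne(new BasicDBObject("name", "MongoDB"));

    assertThat((String) found.get("type"), equalTo("database"));
}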

Testing against multiple MongoDB versions

We also want to run all our integration tests against different versions of the MongoDB server to ensure there are no regressions. Overcast allows you to define multiple configurations. Let's add configurations for two more versions of MongoDB:

defaultConfig {
    dockerHost="http://localhost:2375"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

mongodb27=${defaultConfig}
mongodb27.dockerImage="mongo:2.7"

mongodb26=${defaultConfig}
mongodb26.dockerImage="mongo:2.6"

mongodb24=${defaultConfig}
mongodb24.dockerImage="mongo:2.4"

The default configuration contains the settings we have already seen. The other three configurations extend defaultConfig and each define a specific MongoDB image version. Let's also change our test code a little to make the Overcast configuration used in the test setup depend on a parameter:

@Parameters("overcastConfig")
@BeforeMethod
public void before(String overcastConfig) throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost(overcastConfig);

Here we use the parameterized tests feature of TestNG. We can now define a TestNG suite with our test cases and specify how to pass in the different Overcast configurations. Let's have a look at our TestNG suite definition:

<suite name="MongoSuite" verbose="1">
    <test name="MongoDB27tests">
        <parameter name="overcastConfig" value="mongodb27"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB26tests">
        <parameter name="overcastConfig" value="mongodb26"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB24tests">
        <parameter name="overcastConfig" value="mongodb24"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
</suite>

With this test suite definition we define three test cases that each pass a different Overcast configuration to the tests. The Overcast configuration plus the TestNG configuration enables us to configure externally which MongoDB versions we want to run our test cases against.

Parallel test execution

Until this point, all tests were executed sequentially. Due to the dynamic nature of cloud hosts and Docker, nothing prevents us from running multiple containers at once. Let's change the TestNG configuration a little to enable parallel testing:

<suite name="MongoSuite" verbose="1" parallel="tests" thread-count="3">

This configuration will cause all three test cases from our test suite definition to run in parallel (in other words, our three Overcast configurations with different MongoDB versions). Let's now run the tests from IntelliJ and see if they all pass:

(Screenshot: the test run in IntelliJ)
We see nine executed tests, because we have three tests and three configurations. All nine tests passed, and the total execution time turned out to be under nine seconds. That's pretty impressive!

During test execution we can see Docker starting up multiple containers (see the next screenshot). As expected, it shows three containers with different image versions running simultaneously. It also shows the dynamic port mappings in the "PORTS" column:

(Screenshot: docker ps listing the three running containers)

That's it!

Summary

To summarise, the advantages of using Docker with Overcast for integration testing are:

  1. Minimal setup. Only a Docker-capable host is required to run the tests.
  2. Time savings. Thanks to the Docker community, only a minimal amount of configuration and infrastructure setup is required to run the integration tests.
  3. Isolation. All tests run in their own isolated environment, so they cannot affect each other.
  4. Flexibility. Use multiple Overcast configurations and parameterized tests for testing against multiple versions.
  5. Speed. The Docker containers start up very quickly, and Overcast and TestNG even allow you to parallelize the testing by running multiple containers at once.

The example code for our integration test project is available here. You can use Boot2Docker to set up a Docker host on Mac or Windows.

Happy testing!

Paul van der Ende 

Note: due to a bug in the Gradle parallel test runner, you might run into this random failure when you run the example test code yourself. The workaround is to disable parallelism or to use a different test runner, such as IntelliJ or Maven.

 

Xebia KnowledgeCast Episode 5: Madhur Kathuria and Scrum Day Europe 2014

Mon, 10/13/2014 - 10:48

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this 5th episode, we share key insights of Madhur Kathuria, Xebia India’s Director of Agile Consulting and Transformation, as well as some impressions of our Knowledge Exchange and Scrum Day Europe 2014. And of course, Serge Beaumont will have Fun With Stickies!

First, Madhur Kathuria shares his vision on Agile and we interview Guido Schoonheim at Scrum Day Europe 2014.

In this episode's Fun With Stickies Serge Beaumont talks about wide versus deep retrospectives.

Then, we interview Martin Olesen and Patricia Kong at Scrum Day Europe 2014.

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!


New daily stand up questions

Fri, 10/10/2014 - 15:51

This post provides some alternative stand-up questions to make your stand-up forward-looking, goal-focused, and team-focused.

The questions are:

  1. What have I achieved since our last SUM?
  2. What is my goal for today?
  3. What things keep me from reaching my goal?
  4. What is our team goal for the end of our sprint day?

The daily stand up runs on a few standard questions. The traditional questions are:

  • What did I accomplish yesterday?
  • What will I be doing today?
  • What obstacles are impeding my progress?

A couple of effects I see when using the above list are:

  • A lot of emphasis is placed on past activities rather than getting the most out of the day at hand.
  • Team members tell what they will be busy with, but not what they aim to complete.
  • Impediments are not related to daily goals.
  • There is no summary for the team relating to the sprint goal.

If you are experiencing the same issues you could try the alternate questions. They worked for me, but any feedback is appreciated of course. Are you using other questions? Let me know your experience. You could use the PDF below to print out the questions for your scrum board.

STAND_EN (PDF)

 

The LGPL on Android

Fri, 10/10/2014 - 08:11

My client had me code review an Android app built for them by a third party. As part of my review, I checked the licensing terms of the open source libraries that it used. Most were using Apache 2.0 without a NOTICE file. One was using the GNU Lesser General Public License (LGPL).

My client has commercial reasons to avoid copyleft-style licenses, so I flagged the library as unusable. The supplier, understandably, was not thrilled about the rework that implied and asked for an explanation and, ideally, some way to make it work within the license. Looking into it in more detail, I'm convinced that if you share my client's concerns, there is no way to use LGPL-licensed code on Android. Here's why I believe this to be the case.

The GNU LGPL

When I first encountered the LGPL years ago, it was explained to me as “the GPL, without the requirement to publish your source code”. The actual license terms turn out to be a bit more restrictive. The LGPL is an add-on to the full GPL that weakens (only) the restrictions on how you license and distribute your work. These weaker restrictions are in section 4.

Here's how I read that section:

You may convey a Combined Work under terms of your choice that […] if you also
do each of the following:
  a) [full attribution]
  b) [include a copy of the license]
  c) [if you display any copyright notices, you must mention the licensed Library]
  d) Do one of the following:
    0) [provide means for the user to rebuild or re-link your application against
       a modified version of the Library]
    1) [use runtime linking against a copy already present on the system, and allow
       the user to replace that copy]
  e) [provide clear instructions how to rebuild or re-link your application in light
     of the previous point]

The LGPL on Android

An Android app can use two kinds of libraries: Java libraries and native libraries. Both run into the same problem with the LGPL.

The APK file format for Android apps is a single, digitally signed package. It contains native libraries directly, while Java libraries are packaged along with your own bytecode into the dex file. Android has no means of installing shared libraries into the system outside of your APK, ruling out (d)(1) as an option. That leaves (d)(0). Making the library replaceable is not the issue; it may not be the simplest thing, but I'm sure there is some way to make it work for both kinds of libraries.

That leaves the digital signature, and here's where it breaks down. Any user who replaces the LGPL licensed library in your app will have to digitally sign their modified APK file. You can't publish your code signing key, so they have to sign with a different key. This breaks signature compatibility, which breaks updates and custom permissions and makes shared preferences and expansion files inaccessible. It can therefore be argued that such an APK file is not usable in lieu of the original app, thus violating the license.

In short

The GNU Lesser General Public License ensures that a user has freedom to modify a so licensed library used by your application, even if your application is itself closed source. Android's app packaging and signature requirements are such that I believe it is impossible to comply with the license when using an LGPL licensed library in a closed source Android app.

Function references in Swift and retain cycles

Thu, 10/09/2014 - 14:49

The Swift programming language comes with some nice features. One of those features is closures, which are similar to blocks in Objective-C. As mentioned in the Apple guides, functions are special types of closures, and they too can be passed around to other functions and set as property values. In this post I will go through some sample uses and explain the dangers of the retain cycles that you can quickly run into when retaining function references.

Let's first have a look at a fairly simple Objective-C sample before we write something similar in Swift.

Objective-C

We will create a button that executes a block statement when tapped.

In the header file we define a property for the block:

@interface BlockButton : UIButton

@property (nonatomic, strong) void (^action)();

@end

Keep in mind that this is a strong reference and the block and references in the block will be retained.

And then the implementation will execute the block when tapped:

#import "BlockButton.h"

@implementation BlockButton

-(void)setAction:(void (^)())action
{
    _action = action;
    [self addTarget:self action:@selector(performAction) forControlEvents:UIControlEventTouchUpInside];
}

-(void)performAction {
    self.action();
}

@end

We can now use this button in one of our view controllers as follows:

self.button.action = ^{
    NSLog(@"Button Tapped");
};

We will now see the message "Button Tapped" logged to the console each time we tap the button. And since we don't reference self within our block, we won't get into trouble with retain cycles.

In many cases, however, it's likely that you will reference self, because you might want to call a function that you also need to call from other places. Let's look at such an example:

-(void)viewDidLoad {
    self.button.action = ^{
        [self buttonTapped];
    };
}

-(void)buttonTapped {
    NSLog(@"Button Tapped");
}

Because our view controller (or its view) retains our button, and the button retains the block, we're creating a retain cycle here: the block creates a strong reference to self. That means our view controller will never be deallocated and we'll have a memory leak.

This can easily be solved by using a weak reference to self:

__weak typeof(self) weakSelf = self;
self.button.action = ^{
    [weakSelf buttonTapped];
};

Nothing new so far, so let's continue with creating something similar in Swift.

Swift

In Swift we can create a similar Button that executes a closure instead of a block:

class ClosureButton: UIButton {

    var action: (() -> ())? {
        didSet {
            addTarget(self, action: "callClosure", forControlEvents: .TouchUpInside)
        }
    }

    func callClosure() {
        if let action = action {
            action()
        }
    }
}

It does the same as the Objective-C version (and in fact you could use it from Objective-C with the same block as before). We can assign it an action from our view controller as follows:

button.action = {
    println("Button Tapped")
}

Since this closure doesn't capture self, we won't be running into problems with retain cycles here.

As mentioned earlier, functions are just a special type of closure, which is pretty nice because it lets us reference functions directly, like this:

override func viewDidLoad() {
    button.action = buttonTapped
}

func buttonTapped() {
    println("Button Tapped")
}

Nice and easy syntax, and good for functional programming. If only it wouldn't give us problems. Without it being immediately obvious, the above sample does create a retain cycle. Why? We're not referencing self anywhere? Or are we? The problem is that the buttonTapped function is part of our view controller instance. So when button.action references that function, it creates a strong reference to the view controller as well. In this case we could fix it by making buttonTapped a class function, but since in most cases you'll want to do something with self in such a function, for example to access variables, this is not an option.

The only thing we can do to fix this is to make sure that the button won't get a strong reference to the view controller. Just like in our last Objective-C sample, we need to create a weak reference to self. Unfortunately there is no easy way to simply get a weak reference to our function, so we need a workaround here.

Workaround 1: wrapping in a closure

We can create a weak reference by wrapping the function in a closure:

button.action = { [weak self] in
    self!.buttonTapped()
}

Here we first create a weak reference to self. In Swift, weak references are always optional. That means self within this closure is now an optional and we need to unwrap it first, which is what the exclamation mark is for. Since we know this code cannot be called when self is deallocated, we can safely use ! instead of ?.

This is a lot less elegant than referencing our function directly.

In theory, using an unowned reference to self should also work, as follows:

button.action = { [unowned self] in
    self.buttonTapped()
}

Unfortunately (for reasons unknown to me) this crashes with an EXC_BAD_ACCESS upon deallocation of the ClosureButton. Probably a bug.

Workaround 2: method pointer function

Thanks to a question on StackOverflow about this same problem and an answer provided by Rob Napier, there is a way to make the code a bit more elegant again. We can define a function that does the wrapping in a closure for us:

func methodPointer<T: AnyObject>(obj: T, method: (T) -> () -> Void) -> (() -> Void) {
    return { [weak obj] in
        method(obj!)()
    }
}

Now we can get a weak reference to our function a bit more easily.

button.action = methodPointer(self, ViewController.buttonTapped)

The reason this works is that you can get a reference to any instance function by calling it as a class function with the instance (in this case self) as the argument. For example, the following all do the same thing:

// normal call
self.buttonTapped()

// get reference through class
let myFunction = MyViewController.buttonTapped(self)
myFunction()

// directly through class
MyViewController.buttonTapped(self)()

However, the downside of this is that it only works with functions that take no arguments and return Void, i.e. methods with a () -> () signature, like our buttonTapped.

For each signature we would have to create a separate function. For example for a function that takes a String parameter and returns an Int:

func methodPointer<T: AnyObject>(obj: T, method: (T) -> (String) -> Int) -> ((String) -> Int) {
    return { [weak obj] string in
        method(obj!)(string)
    }
}

We can then use it the same way:

func someFunction() {
    let myFunction = methodPointer(self, MyViewController.stringToInt)
    let myInt = myFunction("123")
}

func stringToInt(string: String) -> Int {
    return string.toInt() ?? 0 // toInt() returns an optional Int; fall back to 0
}
Retain cycles within a single class instance

Retain cycles do not only happen when strong references are made between two instances of a class. It's also possible, and probably less obvious, to create a strong reference within the same instance. Let's look at an example:

var print: ((String) -> ())?

override func viewDidLoad() {
    print = printToConsole
}

func printToConsole(message: String) {
    println(message)
}

Here we do pretty much the same as in our button examples. We define an optional closure variable and then assign a function reference to it. This creates a strong reference from the print variable to self and thus creates a retain cycle. We need to solve it using the same tricks we used earlier.

Another example is when we define a lazy variable. Since lazy variables are assigned after initialisation, they are allowed to reference self directly. That means we can set them to a function reference as follows:

lazy var print: ((String) -> ()) = self.printToConsole

Of course this also creates a retain cycle.

Conclusion

To avoid creating retain cycles in Swift you should always remember that a reference to an instance function means that you're referencing the instance as well. And thus when assigning to a variable, you're creating a strong reference. Always make sure to wrap such references in a closure with a weak reference to the instance or make sure to manually set the variables to nil once you're done with them.

Unfortunately Swift does not support weak closure variables, which is something that would solve the problem. Hopefully they will support it in the future or come up with a way to create a weak reference to a function much like we can use [weak self] now in closures.

Why 'Why' Is Everything

Mon, 10/06/2014 - 20:46

The 'Why' part is perhaps the most important aspect of a user story. It links to the sprint goal, which ultimately links to the product vision and the organisation's vision.

Lately, I was reminded of the truth of this statement. My youngest son is part of a soccer team and they train every week. Part of the training consists of exercises that use a so-called speedladder.


After the training, while driving home, I asked him what he especially liked about the training and what he wants to do differently next time. This time he answered that he didn't like the training at all. So I asked him which part he disliked: "The speedladder. It is such a stupid thing to do." Although I realised it was a poor man's answer, I told him that some parts are not that nice and he needs to accept that: practising is not always fun. I wasn't happy with the answer but couldn't think of a better one.

Some time passed until I overheard the trainers explaining to each other that the speedladder is for improving footwork, coordination, and sensory development. Then I got an idea!
I knew that his ambition is to become as good as Messi :-), so at home I explained this to my son and told him that the speedladder helps him improve his feints and quick moves. I noticed his twinkling eyes and he enthusiastically replied: "Dad, can we buy a speedladder so I can practise at home?". Of course I did buy one! Since then, the speedladder has been his favourite part of the soccer training!

Summary

The goal, the purpose, the 'Why' is the most important thing for people and teams. Communicating it clearly to the team is one of the most important things a product owner and an organisation need to do in order to get high-performing teams.

How to create a Value Stream Map

Mon, 10/06/2014 - 08:05

Value Stream Mapping (VSM) is a very useful tool for gaining insight into the workflow of a process. It can be used to identify both Value Adding and Non Value Adding activities in a process stream while providing handles for optimizing the process chain. The results of a VSM can be used on many occasions: from writing out a business case, to defining a prioritized list for optimizing processes within your organization, to pinpointing bottlenecks in your existing processes and gaining a common understanding of process-related issues.

When creating a VSM of your current software delivery process, you will quite possibly be amazed by the amount of waste, and therefore the room for improvement, you might find. I challenge you to try this out within your own organization. It will leave you with a very powerful tool to explain to your management the steps that need to change, as it will leave you with facts.

To quickly get you started, I wrote out some handles on how to write out a proper Value Stream Map.

In many organizations there is a tendency to perform only local optimizations of steps in the process (i.e. per business unit), while in reality the largest process optimizations can be gained by optimizing the areas in between the process steps, which do not add any value to the customer at all: the Non Value Adding activities. Value Stream Mapping is a great tool for optimizing the complete process chain, not just the local steps.

(Diagram: local versus complete process optimization)

The Example - Mapping a Software Delivery Process
Many example value streams found on the internet focus on selling a mortgage, packaging objects in a factory or some logistics process. The example I will be using focuses on a typical Software Delivery Process as we still see it today: the 'traditional' Software Delivery Process containing many manual steps.

You first need to map the 'as-is' process, as you need this to form the baseline. This baseline provides the insight required to identify and remove steps from the process that do not add any value to your customer and can therefore be seen as pure waste to your organization.

It is important to write out the Value Stream as a group process (a workshop), where group members represent the people that are part of the value chain as it is today*. This is the only way to spot (hidden) activities, and it will provide a common understanding of the situation today. Apart from that, failure to execute the Value Stream Mapping activity as a group process will very likely reduce the acceptance rate at the end of the day. Never write out a VSM in isolation.

Value Stream Mapping is a 'paper and pencil tool' where you should ask the participants to write out the stickies and help you form the map. You yourself will not write on stickies (okay, okay, maybe sometimes ... but be careful not to do the work for the group). Writing out a process should take about 4 to 6 hours, including discussions and the coffee breaks of course. So, now for the steps!

* Note that the example value stream is a simplified and fictional process based on the experience at several customers.

Step 0 Prepare
Make sure you have all materials available.

Here is a list:
  • two 4-meter lengths of brown paper
  • plastic tape to attach the paper to the wall
  • square stickies in multiple colors
  • rectangular stickies in multiple colors
  • small stickies in one or two colors
  • lots of sharpies (people need to be able to pick up the pens)
  • colored 'dot' stickies

What do you need? (the helpful colleague not depicted)

Step 1 & 2 Define objectives and process steps
Make sure to work on one process at a time and start off with defining the customer objectives (the Voice of Customer). A common understanding of the VoC is important because in a later stage you will determine with the team which activities really add to this VoC and which steps do not. Quite often these objectives are defined in terms of Time, Cost and Quality. For our example, let's say the customer would like to be able to deliver a new feature every hour, at a maximum cost of $1000 per feature and with zero defects.

First, write down the Voice of Customer in the top right corner. Then, together with the group, determine all the actors (organizations / persons / teams) that are part of the current process and glue these actors as orange stickies to the brown paper.

Defining Voice of Customer and Process Steps

Step 3 Define activities performed within each process step
With the group, determine per process step the activities that take place. Underneath the orange stickies, add green stickies that describe the activities performed in that step.

Defining activities performed in each step

Step 4 Define Work in Progress (WiP)
Now add pink stickies in between the steps, describing the number of features / requirements / objects / activities currently in progress between actors. This is referred to as WiP - Work in Progress. Wherever there is a high WiP level between steps, you have identified a bottleneck that causes the process 'flow' to stop.

On top of the pink WiP stickies with particularly high WiP levels, add a small sticky indicating what the group thinks is causing the high WiP. For instance, a document has to be distributed via internal mail, a wait is introduced for a bi-weekly meeting, or travel to another location is required. This information can later be used to optimize the process.

Note that in the workshop you should also take some time to find WiP within the activities themselves (this is not depicted in this example). Spend time on finding information about the causes of high WiP and add this as stickies to each activity.

Define work in process

Step 5 Identify rework
Rework is waste. Still, many times you'll see that a deliverable has to be returned to a previous step for reprocessing. Together with the group, determine where this happens and what causes this rework. A nice addition is to also write out first-time-right levels.

Identify rework

Step 6 Add additional information
Spend some time adding additional comments for the activities on the green stickies. Some activities might, for instance, not be optimized, might be hard to handle, or might be considered obsolete from a group perspective. Mark these comments with blue stickies next to the activity at hand.

Add additional information

Step 7 Add process time, wait time and lead time and determine Process Cycle Efficiency

Now that we have the process more or less complete, we can start adding information related to timing. In this step you want to determine the following information:

  • process time: the real amount of time required to perform a task without interruptions
  • lead time: the actual time it takes for the activity to be completed (also known as elapsed time)
  • wait time: time when no processing is done at all, for example when waiting for an 'event' like a bi-weekly meeting

(Not in the picture): for every activity on a green sticky, write down a small sticky with two numbers vertically aligned. The top number reflects the process time (e.g. 40 hours); the bottom number reflects the lead time (e.g. 120 hours).

(In the picture): add a block diagram underneath the process, where timing information in the upper section represents the total process time for all activities and timing information in the lower section represents the total lead time for all activities (just add up the timing information for the individual activities described in the previous paragraph). Also add noticeable wait time in between process steps. As a final step, to the right of this block diagram, add the totals.

Now that you have all the information on the paper, the following can be calculated:

  • Total Process Time - the total time required to actually work on the activities if one could focus on the activity at hand.
  • Total Lead Time - the total time this process actually needs.
  • Process Cycle Efficiency (PCE) - Total Process Time / Total Lead Time × 100%.

Add this information to the lower right corner of your brown paper. The numbers for this example are:

Total Process Time: add up all the numbers in the top sections of the stickies: 424 hours
Total Lead Time (PLT): add up all the numbers in the lower sections of the stickies plus the wait time in between steps: 1740 hours
Process Cycle Efficiency (PCE): 424 / 1740 ≈ 24%
Note that 24% is very high, which is caused by using a simplified example. Usually you'll see a PCE of about 4 - 8% for a traditional process.
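Written out as a formula with the example's own figures (a restatement of the numbers above, nothing new):

\[
\text{PCE} = \frac{\text{Total Process Time}}{\text{Total Lead Time}} \times 100\% = \frac{424\ \text{hours}}{1740\ \text{hours}} \times 100\% \approx 24\%
\]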

Add process, wait and lead times

Step 8 Identify Customer Value Add and Non Value Add activities
Now, categorize the tasks into two types: tasks that add value for the customer (Customer Value Add, CVA) and tasks that do not add value for the customer (Non Value Add, NVA). The NVA tasks you can again split into two categories: tasks that add business value (Business Value Add, BVA) and 'waste'. When optimizing a process, waste is to be eliminated completely, as it adds value neither for the customer nor for the business as a whole. But also for the activities categorized as BVA, you have to ask yourself whether they really add to the chain.

Mark CVA tasks with a green dot, BVA tasks with a blue dot and Waste with a red dot. Put the legend on the map for later reference.

When identifying CVA, NVA and BVA, force yourself to refer back to the Voice of Customer you jotted down in step 1 and think about who your customer is here. In this example, the customer is not the end user of the system, but the business. And it was the business that wanted faster, cheaper and better. Now, when you start to tag each individual task, give yourself some time to figure out which tasks actually add to these goals.

Determine Customer Value Add & Non Value Add

To give you some guidance on how you can approach tagging each task, I'll elaborate a bit on how I tagged the activities. Note again, this is just an example; within the workshop your team might tag differently.

Items I tagged as CVA: coding, testing (unit, static, acceptance), execution of tests and configuration of monitoring add value for the customer (the business). Why? Because all these items relate to a faster, better (high quality through tests and monitoring) and cheaper (fewer errors through higher code quality) delivery of code.

Items I tagged as BVA: documentation, configuration of environments, deployment of VMs and installation of middleware are required to be able to deliver to the customer when using this (typical waterfall) software delivery process. (Note: I do not necessarily concur with this process.) :)

Items I tagged as pure waste, not adding any value for the customer: items like getting approval, the process of getting funding (although probably required), discussing details and documenting results for later reference, or waiting for the quarterly release cycle. None of these items is required to deliver faster, cheaper or better, so in that respect they can be considered waste.

That's it (and step 9) - you've mapped your current process
So, that's about it! The Value Stream Map is now more or less complete and contains all the relevant information required to optimize the process in a next step. Step 9 would be: take some time to write out the items/bottlenecks that are most important or easiest to address, and discuss possible solutions with your team. Focus on items that you tagged as either BVA or pure waste and think of alternatives to eliminate these steps. Put your customer at the center, not your process! Just dropping an activity as a whole may seem somewhat radical, but sometimes good ideas just are! Note, by the way, that when you address a bottleneck, another bottleneck will pop up. There will always be a bottleneck somewhere in the process, and therefore process optimization must be seen as a continuous effort.

A final tip: to be able to facilitate a Value Stream Mapping workshop at a customer, it might be a good idea to join a more experienced colleague first, just to get a grasp of the dynamics in such a workshop. The fact that all participants are at the same table, outlining the delivery process together and talking about it, will allow you to come up with an optimized process that each person buys into. But still, it takes some effort to get the workshop going. Take your time, do not rush it.

For now, I hope you can use the steps above to identify the largest current bottlenecks within your own organization and get going. In a next blog, if there is sufficient interest, I will write about possible solutions for the bottlenecks in my example. If you have any ideas, just drop a line below so we can discuss! My aim would be to work towards a solution that caters for Continuous Delivery of software.

Michiel Sens.

Integrating Geb with FitNesse using the Groovy ConfigSlurper

Fri, 10/03/2014 - 18:01

We've been playing around with Geb for a while now, and writing tests using WebDriver and Groovy has been a delight! Geb integrates well with JUnit, TestNG, Spock, and Cucumber. All that is left to do is integrate it with FitNesse ... or not :-).

Setup Gradle and Dependencies

First we start by grabbing the gradle fitnesse classpath builder from Arjan Molenaar.
Add the following dependencies to the gradle build file:

compile 'org.codehaus.groovy:groovy-all:2.3.7'
compile 'org.gebish:geb-core:0.9.3'
compile 'org.seleniumhq.selenium:selenium-java:2.43.1'
Configure different drivers with the ConfigSlurper

Geb provides a configuration mechanism using the Groovy ConfigSlurper. It's perfect for environment-sensitive configuration. Geb uses the geb.env system property to determine the environment to use. So we use the ConfigSlurper (in GebConfig.groovy) to configure different drivers.

import org.openqa.selenium.chrome.ChromeDriver
import org.openqa.selenium.firefox.FirefoxDriver

driver = { new FirefoxDriver() }

environments {
  chrome {
    driver = { new ChromeDriver() }
  }
}
FitNesse using the ConfigSlurper

We need to tweak the gradle build script to let FitNesse play nice with the ConfigSlurper. So we pass the geb.env system property as a JVM argument. Look for the gradle task "wiki" in the gradle build script and add the following lines.

def gebEnv = (System.getProperty("geb.env")) ? (System.getProperty("geb.env")) : "firefox"
jvmArgs "-Dgeb.env=${gebEnv}"
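
For context, the resulting task could look roughly like the sketch below. This is only an illustration: the actual wiki task comes from the classpath builder mentioned above, and the main class, port and root directory shown here are assumptions.

task wiki(type: JavaExec) {
    // forward the geb.env system property (defaulting to firefox) to the FitNesse JVM
    def gebEnv = (System.getProperty("geb.env")) ? (System.getProperty("geb.env")) : "firefox"
    jvmArgs "-Dgeb.env=${gebEnv}"

    main = 'fitnesseMain.FitNesseMain'            // FitNesse entry point (assumption)
    classpath = sourceSets.test.runtimeClasspath  // includes Geb, Selenium and your fixture code
    args '-p', '9090', '-d', projectDir.absolutePath
}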

Since FitNesse spins up a separate 'service' process when you execute a test, we need to pass the geb.env system property into the COMMAND_PATTERN of FitNesse. That service needs the geb.env system property to let Geb know which environment to use. Put the following lines in the FitNesse page.

!define COMMAND_PATTERN {java -Dgeb.env=${geb.env} -cp %p %m}

Now you can control the Geb environment by specifying it on the following command line.

gradle wiki -Dgeb.env=chrome

The gradle build script will pass the geb.env system property as a JVM argument when FitNesse starts up. And the COMMAND_PATTERN will pass it on to the test runner service.
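
To complete the picture, the glue between FitNesse and Geb is just a plain fixture class on the classpath. The sketch below is a hypothetical Slim-style fixture written in Groovy; the class name, method name and the idea of returning a page title are made up for illustration, and your real fixtures will follow your own test tables.

import geb.Browser

class PageTitleFixture {

    // callable from a FitNesse test table; Browser.drive picks up GebConfig.groovy,
    // so the geb.env system property decides which driver is used
    String titleOf(String url) {
        String result = null
        Browser.drive {
            go url
            result = title
            quit()
        }
        return result
    }
}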

Want to see it in action? Sources can be found here.

Dazzle Your Audience By Doodling

Sun, 09/28/2014 - 10:29

When we were kids, we loved to doodle. Most of us did anyway. I doodled all the time, everywhere, and, to the dismay of my mother, on everything. I still love to doodle. In fact, I believe doodling is essential.

The tragedy of the doodle lies in its definition: "A doodle is an unfocused or unconscious drawing while a person's attention is otherwise occupied." That's why most of us have been taught not to doodle. Seems logical, right? Teacher sees you doodling, that is not paying attention in class, thus not learning as much as you should, so he puts a stop to it. Trouble is though, it's wrong. And it's not just a little bit wrong, it's totally and utterly wrong. Exactly how wrong was shown in a case study by Jackie Andrade. She discovered that doodlers have 29% better recall. So, if you don't doodle, you're doing yourself a disservice.

And you're not just doing yourself a disservice, you're also doing your audience a disservice. Neurologists have discovered a phenomenon dubbed "mirror neurons." When you see something, the same neurons fire as if you were doing it. So, if someone shows you a picture, let's say a slide in a presentation, it is as if you're showing that picture to yourself.

Wait, what? That doesn't sound special at all, now does it? That's why presentations using only slides can be so unintentionally relaxing.

Now, if you see someone write or draw something on a flip chart, dry-erase board or any other surface in plain sight, it is as if you're writing or drawing it yourself. And that ensures 29% better recall. Better yet, you'll remember what the presenter wants you to remember, especially if he can trigger an emotional response.

Now, why is that? At EUVIZ in Berlin last month, I attended a presentation by Barbara Siegel from Look2Listen that changed my life. Barbara talked about the latest insights from neuroscience that prove that everyone feels first and thinks later. So, if you want your audience to tune in to your talk, show some emotion! Want people to remember specific points of your talk? Trigger and capture emotion by writing and drawing in real-time. Emotion runs deep and draws firm neurological paths in the brain that help you recreate the memory. Memories are recreated, not stored and retrieved.

Another thing that helps you draw firm neurological paths is exercise. If you get your audience to stand up and move, you increase their brain activity by 7%, heightening alertness and motivation. By getting your audience to sit down again after physical exercise, you trigger a rebalancing of neurotransmitters and other neurochemicals, so they can use the newly spawned neurons in their brain to form memories of your talk. Now that got me running every other day! Well, jogging is more like it, but hey: I'm hitting my target heart rate regularly!

How does this help you become a better public speaker? Remember these two key points:

  1. At the start of your speech, get your audience to stand up and move to ensure 7% more brain activity and prime them for maximum recall.
  2. Make sure to use visuals and metaphors and create most, if not all, of them in real-time to leverage the mirror neuron effect and increase recall by 29%.

Xebia KnowledgeCast Episode 4: Scrum Day Europe 2013, OpenSpace Knowledge Exchange, and Fun With Stickies!

Mon, 09/22/2014 - 16:44

The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this fourth episode, we share some impressions of Scrum Day Europe 2013 and Xebia's OpenSpace Knowledge Exchange. And of course, Serge Beaumont will have Fun With Stickies! First, we interview Frank Bakker and Evelien Roos at Scrum Day Europe 2013. Then, Adriaan de Jonge and Jeroen Leenarts talk about continuous delivery and iOS development at the OpenSpace XKE. And in between, Serge Beaumont has Fun With Stickies!

Frank Bakker and Evelien Roos give their first impressions of the Keynotes at Scrum Day Europe 2013. Yes, that was last year, I know. New, more current interviews are coming soon. In fact, this is the last episode in which I use interviews that were recorded last year.

In this episode's Fun With Stickies, Serge Beaumont talks about hypothesis stories. Using those ensures you keep your Agile really agile. A very relevant topic, in my opinion, and it gels nicely with my missing line of the Agile Manifesto: Experimentation over implementation!

Adriaan de Jonge explains how automation in general, and test automation in particular, is useful for continuous delivery. He warns we should focus on the process and customer interaction, not the tool(s). That's right before I can't help myself and ask him which tool to use.

Jeroen Leenarts talks about iOS development. Listening to the interview, which was recorded a year ago, it's amazing to realize that, with the exception of iOS8 having come out in the mean time, all of Jeroen's comments are as relevant today as they were last year. How's that for a world class developer!

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, use the Auphonic recording app to send in a voice message as an AIFF, WAV, or FLAC file so we can put you ON the show!

Credits

Hands-on Test Automation Tools session wrap up - Part1

Sun, 09/21/2014 - 15:57

Last week we had our first Hands-on Test Automation sessions.
Developers and Testers were challenged to show and tell their experiences in Test Automation.
That resulted in lots of in-depth discussions and hands-on Test Automation Tool shoot-outs.

In this blog post we'll share the outcome of the different sessions, like the famous Cucumber vs. FitNesse debate.

Stay tuned for upcoming updates!

Test Automation Frameworks

The following Test Automation Frameworks were demoed and discussed

1. FitNesse

FitNesse is a test management and execution tool.
You'll have to write/use fixture code if you want to use Selenium / WebDriver, web services and databases in your tests.

Pros and Cons
You can drill down into test results easily.
You can make use of scenarios and scenario libraries to make test automation steps reusable.
But refactoring is hard when scenarios are used extensively, since there is no IDE support (yet).

2. Cucumber

Cucumber is a specification tool which describes how software should behave.
You'll have to write/use step definitions if you want to use Selenium / WebDriver, web services and databases in your tests.

Pros and Cons
Cucumber forces you to write specifications / tests as scenarios (behaviour in human-readable language).
You can drill down into test results, but you'll need reporting libraries like Cucumber Reporting.
We recommend using IntelliJ IDEA with the Cucumber plugin, since it supports Cucumber seamlessly.
Refactoring becomes less problematic since you're using an IDE.

3. Selenium / WebDriver IDE

Selenium / WebDriver automates human interactions with a web browser.
With the Selenium IDE you can record and play back your tests in Firefox.

Pros and Cons
It can get you started very quickly: you can record and play back your test scripts without writing any code.
Reusability of test automation code is not possible, however. You'll need to export the scripts into an IDE to introduce reusability.

Must haves in Test Automation Tools

During the parallel sessions we've collected the following must haves in test automation tools.

Testers and Developers becoming best friends

When developers do not feel comfortable with the test automation tool, testers will try to fill the gap all by themselves. Most of the time these efforts result in hard-to-maintain test automation code. At some point test automation will become a bottleneck in Continuous Delivery. When picking a test automation tool, consider each other's needs and pick a tool together. Feeling comfortable in writing and maintaining test automation code is really important to make test automation beneficial.

Separating What from How

Tools like FitNesse and Cucumber were designed to separate the What from the How in test automation. When you combine both in those tools, you'll end up getting lost in details and lose focus on what you are testing.
Use tools like FitNesse and Cucumber to describe What you are testing, and put all the details about How you are testing in test automation code (like fixture code and step definitions).
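
To make this concrete, here is a hedged sketch of that separation with Cucumber: the scenario captures the What in plain language, while a Groovy step definition (assuming the cucumber-groovy module and Selenium WebDriver on the classpath) hides the How. The URL, element names and page title below are made up for illustration.

// What: a scenario in a feature file
//   Scenario: a returning customer logs in
//     When the customer logs in with valid credentials
//     Then the account dashboard is shown

// How: a Groovy step definition that hides the browser details
import static cucumber.api.groovy.EN.*
import org.openqa.selenium.By
import org.openqa.selenium.firefox.FirefoxDriver

def driver = new FirefoxDriver()

When(~/^the customer logs in with valid credentials$/) { ->
    driver.get "https://shop.example.com/login"                     // hypothetical URL
    driver.findElement(By.name("username")).sendKeys("demo-user")   // hypothetical field names
    driver.findElement(By.name("password")).sendKeys("demo-password")
    driver.findElement(By.id("login-button")).click()
}

Then(~/^the account dashboard is shown$/) { ->
    assert driver.title.contains("Dashboard")                       // hypothetical page title
}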

Other interesting tools
  • Thucydides: Reporting tests and results (including functional coverage)
  • Vagrant: Provisioning System Under Test instances
  • Liquibase: Treating database schema changes as 'code'

Stay tuned for upcoming updates!

 

Installing Oracle on Docker (Part 1)

Fri, 09/19/2014 - 12:56

I spent Xebia’s Innovation Day last August experimenting with Docker in a team with two of my colleagues and a guest. We thought Docker sounded really cool, especially if your goal is to run software that doesn’t require lots of infrastructure and can be easily installed, e.g. because it runs from a jar file. We wondered, however, what would happen if we tried to run enterprise software, like an Oracle database: software that is notoriously difficult to install and choosy about the infrastructure it runs on. Hence our aim for the day: install an Oracle database on CoreOS and Docker.

We chose CoreOS because of its small footprint and the fact that it is easily installed in a VM using Vagrant (see https://coreos.com/docs/running-coreos/platforms/vagrant/). We used the default Vagrantfile and CoreOS files with one modification: $vb_memory = 2024 in config.rb, which allows Oracle’s pre-installer to run. The config files we used can be found here: https://github.com/jvermeir/OraDocker/

Starting with a default CoreOS install we then implemented the steps described here: http://www.tecmint.com/oracle-database-11g-release-2-installation-in-linux/.
Below is a snippet from the first version of our Dockerfile (tag: b0a7b56).
FROM centos:centos6
# Step 1: Setting Hostname
ENV HOSTNAME oracle.docker.xebia.com
# Step 2
RUN yum -y install wget
RUN wget --no-check-certificate https://public-yum.oracle.com/public-yum-ol6.repo -O /etc/yum.repos.d/public-yum-ol6.repo
RUN wget --no-check-certificate https://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
RUN yum -y install oracle-rdbms-server-11gR2-preinstall

Note that this takes a while because the pre-installer downloads a number of CentOS packages that are missing in CoreOS.
Execute this in a shell:
vagrant up
vagrant ssh core-01
cd ~/share/docker
docker build -t oradocker .

This seemed like a good time to do a commit to save our work in Docker:
docker ps # note the container id and use it in the commit command below.
docker commit -m "executed pre installer" 07f7389c811e janv/oradocker

At this point we studiously ignore some of the advice listed under ‘Step 2’ in Tecmint’s install manual, namely adding the HOSTNAME to /etc/sysconfig/network, allowing access to the xhost (why would anyone want that?) and mapping an IP address to a hostname in /etc/hosts (setting the hostname through ‘ENV HOSTNAME’ had no real effect as far as we could tell). We tried all that, but it didn’t seem to work. Denying reality and substituting our own, we just ploughed on…

Next we added commands to the Dockerfile that create the oracle user, copy the relevant installer files and unzip them. Docker starts by sending a build context to the Docker daemon. This takes quite some time because the Oracle installer files are large. There’s probably some way to avoid this, but we didn’t look into it. Unfortunately Docker copies the installer files each time you run docker build -t …, only to conclude later on that nothing changed.

The next version of our Dockerfile sort of works, in the sense that it starts up the installer. The installer then complains about missing swap space. We fixed this temporarily at the CoreOS level by running the following commands:
sudo su -
swapfile=$(losetup -f)
truncate -s 1G /swap
losetup $swapfile /swap
mkswap $swapfile
swapon $swapfile

found here: http://www.spinics.net/lists/linux-btrfs/msg28533.html
This works but it doesn’t survive a reboot.
Now the installer continues, only to conclude that networking is configured improperly (one of those IgnoreAndContinue decisions coming back to bite us):
[FATAL] PRVF-0002 : Could not retrieve local nodename

For this to work you need to change /etc/hosts, which our Docker version doesn’t allow us to do. Apparently this is fixed in a later version, but we didn’t get around to testing that. Maybe changing /etc/sysconfig/network is even enough, but we didn’t try that either.

The latest version of our work is on GitHub (tag: d87c5e0). The repository does not include the Oracle installer files. If you want to try for yourself, you can download the files here: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index-092322.html and adapt the Dockerfile if necessary.

Below is a list of ToDo’s:

  1. Avoid copying large installer files if they’re not gonna be used anyway.
  2. Find out what should happen if you call ‘ENV HOSTNAME oracle.docker.xebia.com’.
  3. Make swap file setting permanent on CoreOS.
  4. Upgrade Docker version so we can change /etc/hosts

All in all this was a useful day, even though the end result was not a running database. We hope to continue working on the ToDo’s in the future.

Full width iOS Today Extension

Thu, 09/18/2014 - 10:57

Apple decided that the new Today Extensions of iOS 8 should not be the full width of the notification center. They state in their documentation:

Because space in the Today view is limited and the expected user experience is quick and focused, you shouldn’t create a widget that's too big by default. On both platforms, a widget must fit within the width of the Today view, but it can increase in height to display more content.

Source: https://developer.apple.com/library/ios/documentation/General/Conceptual/ExtensibilityPG/NotificationCenter.html

This means that developers who create Today Extensions can only use a width of 273 points instead of the full 320 points (for iPhones prior to the iPhone 6) and have a left offset of the remaining 47 points. However, with the release of iOS 8, several apps like Dropbox and Evernote do seem to have a Today Extension that uses the full width. This raises the question whether Apple noticed this and how it got through the approval process. Does Apple not care?

Should you want to create a Today Extension with the full width yourself as well, here is how to do it (in Swift):

override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    // Stretch the superview (the Today view) to the left edge of the screen,
    // reclaiming the default 47-point offset so the widget spans the full width.
    if let superview = view.superview {
        var frame = superview.frame
        frame = CGRectMake(0, CGRectGetMinY(frame), CGRectGetWidth(frame) + CGRectGetMinX(frame), CGRectGetHeight(frame))
        superview.frame = frame
    }
}

This changes the superview (the Today view) of your Today Extension's view. It doesn't use any private APIs, but Apple might reject it for not following their rules. So think carefully before you use it.

Become high performing. By being happy.  

Thu, 09/18/2014 - 03:59

The summer holidays are over. Fall is coming. Like the start of every new year, a good moment for new inspiration.

Recently, I went twice to the Boston area for a client of Xebia. There I met (I dislike the word “assessments”...) a number of experienced Scrum teams. They had an excellent understanding of Scrum, but were not able to convert this into excellent performance. Actually, they were somewhat frustrated and their performance was slowly going down.

So, they were great teams with great team members, and their agile processes were running smoothly, but still there was not a single winning team. Which left, in my opinion, only one option: a lack of Spirit. Spirit is the fertilizer of Scrum and actually of every framework, methodology and innovation. But how do you boost the spirit?

Until a few years ago, I would “just” organize team-building sessions to boost this, in parallel with fixing/escalating the root causes. Noble, but far from effective. It’s much more about mindset and happiness, and about taking your own responsibility there. Let me explain this a little bit more.

These are definitely awkward times. Terrible wars and epidemics we can’t turn our backs on anymore, an economic system that hardly survives, and a society that accelerates and demands more and more. In all of this we have to find “time” for our friends, family, ourselves and our job or study. The last ones are essential to regain balance in an increasingly depressing world. But how?

One of the most important building blocks of the agile mindset and of life is happiness. Happiness is the fuel of acceleration and success. But what is happiness? Happiness is the ultimate moment in which you’re not thinking, but enjoying the moment and forgetting the world around you. Craftsmanship, for example, will do this to you: losing track of time while exercising the craft you love.

But too often we prevent ourselves from being happy. Why should I be happy in this crazy world? With this mentality you’re kept hostage by your worrying mind and ignore the ultimate state you were born in: pure, happy and ready to explore the world (and make mistakes!). It’s not a bad thing to be egocentric sometimes and switch off your dominant mind now and then. Every human being has the state of mind and the ability to do this. But we do it too rarely.

On the other hand, it’s also not a bad thing to be angry, frightened or sad sometimes. These emotions will help you enjoy your moments of happiness more. But often your mind will resist these emotions. They are perceived as a sign of weakness or as something negative you should avoid. A wrong assumption. The harder you try to resist these emotions, the longer they will stay in your system and prevent you from being happy.

When you are aware of the mechanisms I’ve explained above, you’ll be happier, more productive and better company for your family, friends and colleagues. Parties will no longer be a forced way of trying to create happiness, but a celebration of personal responsibility, success and happiness.

Continuous Delivery is about removing waste from the Software Delivery Pipeline

Wed, 09/17/2014 - 15:44

On October the 22nd I will be speaking at the Continuous Delivery and DevOps Conference in Copenhagen where I will share experiences on a very successful implementation of a new website serving about 20.000.000 page views a month.

Components and content for this site were developed by five(!) different vendors, and for this project the customer took the initiative to work according to DevOps principles and implement a fully automated Software Delivery Process as they went along. This was a big win for the project, as development teams could now focus on delivering new software instead of fixing issues within the delivery process itself, and I was the lucky one who got to implement this.

This blog is about visualizing the 'waste' we addressed within the project; you might find the diagrams handy when communicating Continuous Delivery principles within your own organization.

To enable yourself to work according to Continuous Delivery principles, an effective starting point is to remove waste from the Software Delivery Process. If you look at a traditional Software Delivery Process you'll find that there are probably many areas in your existing process that do not add any value for the customer at all.

These areas should be seen as pure waste, not adding any value to your customer and costing you either time or money (or both) over and over again. Each time new features are developed and pushed to production, many people perform a lot of costly manual work and run into the same issues again and again. The diagram below provides an example of common areas where you might find waste in your existing Software Delivery Pipeline. Imagine this process repeating every time a development team delivers new software. Within your conversation, you might want to use a similar diagram to explain the pain points within your current Software Delivery Process.

a traditional software delivery process

Automation of the Software Delivery Process within this project was all about eliminating known waste as much as possible. This resulted in setting up an Agile project structure and starting to work according to DevOps principles, enabling the team to deliver software on a frequent basis. Next to that, we automated the central build with Jenkins CI, which checks out code from a Git version management system, compiles the code using Maven, stores components in Apache Archiva, kicks off static, unit and functional tests covering both the JEE and PHP codebases, and creates Deployment Units for further processing down the line. Deployment automation itself was implemented by introducing XL Deploy. By doing so, every time a developer pushed new JEE or PHP code into Git, freshly baked deployment units were instantly deployed to the target landscape, which in turn was managed by Puppet. An abstract diagram of this approach and the chosen tooling is provided below.

overview of tooling for automating the software delivery process
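
As a side note: this project predates the Jenkins Pipeline plugin, but if you wanted to sketch a comparable flow today as a Jenkinsfile, it could look roughly like the snippet below. This is not the project's actual configuration; the Maven commands and the placeholder deployment script are assumptions for illustration only.

pipeline {
    agent any
    stages {
        stage('Build & test') {
            steps { sh 'mvn clean verify' }               // compile plus static, unit and functional tests
        }
        stage('Publish') {
            steps { sh 'mvn deploy -DskipTests' }         // push the built components to the artifact repository
        }
        stage('Deploy') {
            steps { sh './trigger-deployment.sh test' }   // placeholder script for the XL Deploy step
        }
    }
}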

When paving the way for Continuous Delivery, I often like to refer to this as working on the six A's: setting up Agile (product-focused) delivery teams, Automating the build, Automating tests, Automating deployments, Automating the provisioning layer, and clean, easy-to-handle software Architectures. The A for Architecture is about making sure that the software being delivered actually supports automation of the Software Delivery Process itself and puts the customer in a position to work according to Continuous Delivery principles. This A is not visible in the diagram.

After automation of the Software Delivery Process, the customer's delivery process behaved like the optimized process below, giving the team the opportunity to push out a constant and fluent flow of new features to the end user. Within your conversation, you might want to use this diagram to explain the advantages to your organization.

an optimized software delivery process

As we automated the Software Delivery Pipeline for this customer, we positioned them to go live at the press of a button. And on the go-live date, it was just that: a press of the button, and 5 minutes later the site was completely live, making this the most boring go-live event I've ever experienced. The project itself was really good fun though! :)

Needless to say, subsequent updates now move into the live state in a matter of minutes, as the whole process has become very reliable. Deploying code has become a non-event. I will happily share more details on how we made this project a complete success, how we implemented this environment, the project setting and the chosen tooling, along with technical details, at the Continuous Delivery and DevOps Conference in Copenhagen. But of course you can also contact me directly. For now, I just hope to meet you there.

Michiel Sens.