
Xebia Blog

How to create the smallest possible docker container of any image

9 hours 53 min ago

Once you start to do some serious work with Docker, you soon find that downloading images from the registry is a real bottleneck in starting applications. In this blog post we show you how you can reduce the size of any Docker image to just a few percent of the original. So if your image is too fat, try stripping your Docker image! The strip-docker-image utility demonstrated in this blog makes your containers faster and safer at the same time!


We are working quite intensively on our Highly Available Docker Container Platform using CoreOS and Consul, which consists of a number of containers (NGiNX, HAProxy, the Registrator and Consul). These containers run on each of the nodes in our CoreOS cluster, and when the cluster boots, more than 600 MB is downloaded by the 3 nodes in the cluster. This is quite time consuming.

cargonauts/consul-http-router      latest              7b9a6e858751        7 days ago          153 MB
cargonauts/progrium-consul         latest              32253bc8752d        7 weeks ago         60.75 MB
progrium/registrator               latest              6084f839101b        4 months ago        13.75 MB

The size of the images is not only detrimental to the boot time of our platform; it also increases the attack surface of the container. With 153 MB of utilities in the NGiNX-based consul-http-router, there is a lot of stuff in the container that you can use once you get inside. As we were thinking of running this router in a DMZ, we wanted to minimise the amount of tools lying around for a potential hacker.

From our colleague Adriaan de Jonge we had already learned how to create the smallest possible Docker container for a Go program. Could we repeat this by just extracting the NGiNX executable from the official distribution and copying it onto a scratch image? It turns out we can!

Finding the necessary files

Using the dpkg utility we can list all the files that are installed by the nginx package:

docker run nginx dpkg -L nginx
...
/.
/usr
/usr/sbin
/usr/sbin/nginx
/usr/share
/usr/share/doc
/usr/share/doc/nginx
...
/etc/init.d/nginx
Locating dependent shared libraries

So we have the list of files in the package, but we do not have the shared libraries that are referenced by the executable. Fortunately, these can be retrieved using the ldd utility.

docker run nginx ldd /usr/sbin/nginx
...
	linux-vdso.so.1 (0x00007fff561d6000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fd8f17cf000)
	libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fd8f1598000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fd8f1329000)
	libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007fd8f10c9000)
	libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fd8f0cce000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fd8f0ab2000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd8f0709000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fd8f19f0000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd8f0505000)
Following and including symbolic links

Now that we have the executable and the referenced shared libraries, it turns out that ldd normally reports the symbolic link rather than the actual file name of the shared library:

docker run nginx ls -l /lib/x86_64-linux-gnu/libcrypt.so.1
...
lrwxrwxrwx 1 root root 16 Apr 15 00:01 /lib/x86_64-linux-gnu/libcrypt.so.1 -> libcrypt-2.19.so

By resolving the symbolic links and including both the link and the file, we are ready to export the bare essentials from the container!
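
A symbolic link can be resolved with readlink, for example (run here against the official nginx image; the output matches the link target shown above):

docker run nginx readlink -f /lib/x86_64-linux-gnu/libcrypt.so.1
...
/lib/x86_64-linux-gnu/libcrypt-2.19.so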

getpwnam does not work

But after copying all the essential files to a scratch image, NGiNX did not start. It appeared that NGiNX tries to resolve the user 'nginx' and fails to do so:

docker run -P  --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;"
...
2015/06/29 21:29:08 [emerg] 1#1: getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2
nginx: [emerg] getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2

It turned out that the shared libraries for the name service switch, which reads /etc/passwd and /etc/group, are loaded at runtime and therefore do not show up in the ldd output. By adding these shared libraries (/lib/*/libnss*) to the container, NGiNX worked!
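
Conceptually, the result is equivalent to collecting all the extracted files in a local directory and building from scratch, roughly like the hypothetical Dockerfile below (the rootfs/ directory is illustrative; the actual packaging is handled by the strip-docker-image utility described next):

# Hypothetical sketch: rootfs/ holds the executable, shared libraries and
# configuration files extracted from the official nginx image.
FROM scratch
ADD rootfs /
EXPOSE 80
ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]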

strip-docker-image example

So now, the strip-docker-image utility is here for you to use!

    strip-docker-image  -i image-name
                        -t target-image-name
                        [-p package]
                        [-f file]
                        [-x expose-port]
                        [-v]

The options are explained below:

-i image-name           the image to strip
-t target-image-name    the name of the stripped target image
-p package              package to include from the image, multiple -p allowed
-f file                 file to include from the image, multiple -f allowed
-x port                 port to expose
-v                      verbose output

The following example creates a new nginx image named stripped-nginx, based on the official Docker image:

strip-docker-image -i nginx -t stripped-nginx  \
                           -x 80 \
                           -p nginx  \
                           -f /etc/passwd \
                           -f /etc/group \
                           -f '/lib/*/libnss*' \
                           -f /bin/ls \
                           -f /bin/cat \
                           -f /bin/sh \
                           -f /bin/mkdir \
                           -f /bin/ps \
                           -f /var/run \
                           -f /var/log/nginx \
                           -f /var/cache/nginx

Aside from the nginx package, we add the files /etc/passwd and /etc/group and the /lib/*/libnss* shared libraries. The directories /var/run, /var/log/nginx and /var/cache/nginx are required for NGiNX to operate. In addition, we added /bin/sh and a few handy utilities, just to be able to snoop around a little bit.

The stripped image has shrunk to just 7.3 MB, an incredible 5.5% of the original 132.8 MB, and it is still fully operational!

docker images | grep nginx
...
stripped-nginx                     latest              d61912afaf16        21 seconds ago      7.297 MB
nginx                              1.9.2               319d2015d149        12 days ago         132.8 MB

And it works!

ID=$(docker run -P -d --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;")
docker run --link $ID:stripped cargonauts/toolbox-networking curl -s -D - http://stripped
...
HTTP/1.1 200 OK

For HAProxy, check out the examples directory.

Conclusion

It is possible to use the official images that are maintained and distributed by Docker and strip them down to their bare essentials, ready for use! It speeds up load times and reduces the attack surface of that specific container.

Check out the GitHub repository for the script and the manual page.

Please send me your examples of incredibly shrunk Docker images!

Scala development with GitHub's Atom editor

Sat, 06/27/2015 - 14:57

GitHub recently released version 1.0 of their Atom editor. This post gives a rough overview of its Scala support.

Basic features

Basic features such as Scala syntax highlighting are provided by the language-scala plugin.

Some work on worksheets as found in e.g. Eclipse has been done in the scala-worksheet-plus plugin, but this is still missing major features and not very useful at this time.

Navigation and completion

Ctags

Atom has basic 'Go to Declaration' (ctrl-alt-down) and 'Search symbol' (cmd-shift-r) support by way of the default ctags-based symbols-view package.

While there are multiple sbt plugins for generating ctags, the easiest seems to be to have Ensime download the sources (more on that below) and invoke ctags manually: put this configuration in your home directory and run the 'ctags' command from your project root.
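
The configuration referred to above is not reproduced here, but a minimal, hypothetical ~/.ctags for Scala could look something like this (the regular expressions are illustrative and far from exhaustive):

--langdef=scala
--langmap=scala:.scala
--regex-scala=/^[ \t]*class[ \t]+([a-zA-Z0-9_]+)/\1/c,classes/
--regex-scala=/^[ \t]*object[ \t]+([a-zA-Z0-9_]+)/\1/o,objects/
--regex-scala=/^[ \t]*trait[ \t]+([a-zA-Z0-9_]+)/\1/t,traits/
--regex-scala=/^[ \t]*def[ \t]+([a-zA-Z0-9_]+)/\1/m,methods/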

This is useful for searching for symbols, but limited for finding declarations: for example, when checking the declaration for Success, ctags doesn't know whether this is scala.util.Success, akka.actor.Status.Success, spray.http.StatusCodes.Success or some other 3rd-party or local symbol with that name.

Ensime

This is where the Ensime plugin comes in.

Ensime is a service for Scala IDE support, originally written for the Scala support in Emacs. The project metadata for Ensime can be generated with 'sbt gen-ensime' from the ensime-sbt sbt plugin.
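
Adding the plugin is typically a one-liner in project/plugins.sbt (the coordinates and version below are illustrative; check the ensime-sbt documentation for the current release):

// project/plugins.sbt -- enables the 'sbt gen-ensime' task (version is illustrative)
addSbtPlugin("org.ensime" % "ensime-sbt" % "0.1.7")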

Usage

Start the Ensime server from Atom with 'cmd-shift-p' 'Ensime: start'. After a small pause the status bar proclaims 'Indexer ready!' and you should be good to go.

At this point the main features are 'jump to definition' (alt-click), hover for type info, and auto-completion:

atom.io ensime completion

There are some rough edges, but this is a promising start based on a solid foundation.

Conclusions

While Atom is already a pleasant, modern, open source, cross platform editor, it is clearly still early days.

The Scala support in Atom is not yet as polished as in IDEs such as IntelliJ IDEA, or as stable as in more mature editors such as Sublime Text, but it is already practically useful and has serious potential. Startup is not instant, but I did not notice a 'sluggish feel' as reported by earlier reviewers.

Feel free to share your experiences in the comments; I will keep this post updated as the tools - and our experience with them - evolve.

Git Subproject Compile-time Dependencies in Sbt

Fri, 06/26/2015 - 13:33

When creating an sbt project recently, I tried to include a project with no releases. This means that including it using libraryDependencies in the build.sbt does not work. An option is to clone the project and publish it locally, but this is tedious manual work that needs to be repeated every time the cloned project changes.

Most examples explain how to add a direct compile-time dependency on a git repository to sbt, but they only show how to add a single-project repository as a dependency using a RootProject. After some searching I found the solution for adding projects from a multi-project repository: instead of RootProject, a ProjectRef should be used. This allows for a second argument that specifies the subproject within the repository.

This is my current project/Build.scala file:

import sbt.{Build, Project, ProjectRef, uri}

object GotoBuild extends Build {
  lazy val root = Project("root", sbt.file(".")).dependsOn(staminaCore, staminaJson, ...)

  lazy val staminaCore = ProjectRef(uri("git://github.com/scalapenos/stamina.git#master"), "stamina-core")
  lazy val staminaJson = ProjectRef(uri("git://github.com/scalapenos/stamina.git#master"), "stamina-json")
  ...
}

These subprojects are now a compile-time dependency, and sbt will pull in and maintain the repository in ~/.sbt/0.13/staging/[sha]/stamina, so no manual checkout with a local publish is needed. This is very handy when depending on an internal, independent project/module without needing to create a new release for every change. (One side note: my IntelliJ currently does not recognize that the library is on the class/source path of the main project, so it complains that it cannot find symbols and therefore cannot do proper syntax checking and auto-completion.)

How to deploy composite Docker applications with Consul key values to CoreOS

Fri, 06/19/2015 - 16:34

Most examples of deploying Docker applications to CoreOS use a single Docker application. But as soon as you have an application that consists of more than one unit, the number of commands you have to type soon becomes annoying. At Xebia we have a best practice that says "Three strikes and you automate": the third time you do something similar, you automate it. In this blog I share the manual page of a utility called fleetappctl that allows you to perform rolling upgrades and deploy Consul key-value pairs of composite applications to CoreOS, and I show three examples of its usage.


fleetappctl is a utility that allows you to manage a set of CoreOS fleet unit files as a single application. You can start, stop and deploy the application. fleetappctl is idempotent and does rolling upgrades on template files with multiple instances running. It can substitute placeholders at deployment time and it is able to deploy Consul key-value pairs as part of your application. Using fleetappctl you have everything you need to create a self-contained deployment unit of your composite application and put it under version control.

The command line options to fleetappctl are shown below:

fleetappctl [-d deployment-descriptor-file]
            [-e placeholder-value-file]
            (generate | list | start | stop | destroy)
option -d

The deployment descriptor file describes all the fleet unit files and Consul key-value pair files that make up the application. All the files referenced in the deployment descriptor may have placeholders for deployment-time values. These placeholders are enclosed in double curly brackets {{ }}.

option -e

The file contains the values for the placeholders to be used on deployment of the application. The file has a simple format:

<name>=<value>
start

starts all units in the order in which they appear in the deployment descriptor. If you have a template unit file, you can specify the number of instances you want to start. Start is idempotent, so you may call start multiple times. Start will bring the deployment in line with your descriptor.

If the unit file has changed with respect to the deployed unit file, the corresponding instances will be stopped and restarted with the new unit file. If you have a template file, the instances of the template file will be upgraded one by one.

Any Consul key-value pairs defined by consul.KeyValuePairs entries are created in Consul. Existing values are not overwritten.

generate

generates a deployment descriptor (deployit-manifest.xml) based on all the unit files found in your directory. If a file is a fleet unit template file, the number of instances to start is set to 2, to support rolling upgrades.
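
The exact format of the descriptor is defined by fleetappctl itself, but based on the element names used in the examples further down, a generated descriptor might look roughly like this hypothetical sketch (the root element name and values are illustrative):

<!-- hypothetical sketch of a generated deployit-manifest.xml -->
<deployit-manifest>
  <fleet.UnitConfigurationFile name="app-hellodb@">
    <startUnit>true</startUnit>
  </fleet.UnitConfigurationFile>
  <fleet.UnitConfigurationFile name="mnt-data">
    <startUnit>true</startUnit>
  </fleet.UnitConfigurationFile>
  <consul.KeyValuePairs name="keys.consul"/>
</deployit-manifest>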

stop

stops all units in reverse order of their appearance in the deployment descriptor.

destroy

destroys all units in reverse order of their appearance in the deployment descriptor.

list

lists the runtime status of the units that appear in the deployment descriptor.

Install fleetappctl

To install the fleetappctl utility, type the following commands:

curl -q -L https://github.com/mvanholsteijn/fleetappctl/archive/0.25.tar.gz | tar -xzf -
cd fleetappctl-0.25
./install.sh
brew install xmlstarlet
brew install fleetctl
Start the platform

If you do not have the platform running, start it first.

cd ..
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service
git checkout 029d3dd8e54a5d0b4c085a192c0ba98e7fc2838d
cd vagrant
vagrant up
./is_platform_ready.sh
Example - Three component web application


The first example is a three-component application. It consists of a mount, a Redis database service and a web application. We generate the deployment descriptor, indicate that we do not want to start the mount, start the application and then modify the web application unit file to change the service name to 'helloworld'. We perform a rolling upgrade by issuing start again. Finally, we list, stop and destroy the application.

cd ../fleet-units/app
# generate a deployment descriptor
fleetappctl generate

# do not start mount explicitly
xml ed -u '//fleet.UnitConfigurationFile[@name="mnt-data"]/startUnit' \
       -v false deployit-manifest.xml > \
        deployit-manifest.xml.new
mv deployit-manifest.xml{.new,}

# start the app
fleetappctl start 

# Check it is working
curl hellodb.127.0.0.1.xip.io:8080
curl hellodb.127.0.0.1.xip.io:8080

# Change the service name of the application in the unit file
sed -i -e 's/SERVICE_NAME=hellodb/SERVICE_NAME=helloworld/' app-hellodb@.service

# do a rolling upgrade
fleetappctl start 

# Check it is now accessible on the new service name
curl helloworld.127.0.0.1.xip.io:8080

# Show all units of this app
fleetappctl list

# Stop all units of this app
fleetappctl stop
fleetappctl list

# Restart it again
fleetappctl start

# Destroy it
fleetappctl destroy
Example - placeholder references

This example shows the use of a placeholder reference in the unit file of the paas-monitor application. The application takes two optional environment variables, RELEASE and MESSAGE, that allow you to configure the resulting responses. The variable RELEASE is configured in the Docker run command in the fleet unit file through a placeholder. The actual value for the current deployment is taken from a placeholder value file.

cd ../fleetappctl-0.25/examples/paas-monitor
#check out the placeholder reference
grep '{{' paas-monitor@.service

...
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
 --env RELEASE={{release}} \
...
# checkout our placeholder values
cat dev.env
...
release=V2
# start the app
fleetappctl -e dev.env start

# show current release in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start is idempotent (ie. nothing happens)
fleetappctl -e dev.env start

# update the placeholder value and see a rolling upgrade in the works
echo 'release=V3' > dev.env
fleetappctl -e dev.env start
curl paas-monitor.127.0.0.1.xip.io:8080/status

fleetappctl destroy
Example - Env Consul Key Value Pair deployments


The final example shows the use of Consul key-value pairs, placeholders and envconsul to dynamically update the environment variables of a running instance. The environment variables RELEASE and MESSAGE are taken from the keys under /paas-monitor in Consul. In turn, the initial values of these keys are set on first deployment using values from the placeholder file.

cd ../fleetappctl-0.25/examples/envconsul

#check out the Consul Key Value pairs, and notice the reference to placeholder values
cat keys.consul
...
paas-monitor/release={{release}}
paas-monitor/message={{message}}

# checkout our placeholder values
cat dev.env
...
release=V4
message=Hi guys
# start the app
fleetappctl -e dev.env start

# show current release and message in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# Change the message in Consul
fleetctl ssh paas-monitor@1 \
    curl -X PUT \
    -d \'hello Consul\' \
    http://172.17.8.101:8500/v1/kv/paas-monitor/message

# checkout the changed message
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start does not change the values..
fleetappctl -e dev.env start
Conclusion

CoreOS provides all the basic functionality for a Container Platform as a Service. With the fleetappctl utility it becomes easy to start, stop and upgrade composite applications. The script builds on top of fleetctl and does not break other ways of deploying your applications to CoreOS.

The source code, manual page and documentation of fleetappctl can be found on https://github.com/mvanholsteijn/fleetappctl.

 

boot2docker on xhyve

Mon, 06/15/2015 - 09:00

xhyve is a new hypervisor in the vein of KVM on Linux and bhyve on BSD. It’s actually a port of BSD’s bhyve to OS X, and it is more similar to KVM than to VirtualBox in that it’s minimal and command-line only, which makes it a good fit for an always-running virtual machine like boot2docker on OS X.

This post documents the steps to get boot2docker running within xhyve and contains some quick benchmarks to compare xhyve’s performance with VirtualBox.

Read the full blog post on Simon van der Veldt's website

Using UIPageViewControllers with Segues

Mon, 06/15/2015 - 07:30

I've always wondered how to configure a UIPageViewController much like you configure a UITabBarController inside a Storyboard. It would be nice to create standard embed segues to the view controllers that make up the pages. Unfortunately, such a thing is not possible and currently you can't do a whole lot in a Storyboard with page view controllers.

So I thought I'd create a custom way of connecting pages through segues. This post explains how.

Without segues

First let's create an example without using segues and then later we'll try to modify it to use segues.

[Screenshot: the Storyboard with the page view controller and content view controller scenes]

In the above Storyboard we have 2 scenes. One page view controller and another for the individual pages, the content view controller. The page view controller will have 4 pages that each display their page number. It's about as simple as a page view controller can get.

Below is the code of our simple content view controller:

class MyContentViewController: UIViewController {

  @IBOutlet weak var pageNumberLabel: UILabel!

  var pageNumber: Int!

  override func viewDidLoad() {
    super.viewDidLoad()
    pageNumberLabel.text = "Page \(pageNumber)"
  }
}

That means that our page view controller just needs to instantiate an instance of our MyContentViewController for each page and set the pageNumber. And since there is no segue between the two scenes, the only way to create an instance of the MyContentViewController is programmatically with the UIStoryboard.instantiateViewControllerWithIdentifier(_:). Of course for that to work, we need to give the content view controller scene an identifier. We'll choose MyContentViewController to match the name of the class.

Our page view controller will look like this:

class MyPageViewController: UIPageViewController {

  let numberOfPages = 4

  override func viewDidLoad() {
    super.viewDidLoad()

    setViewControllers([createViewController(1)], 
      direction: .Forward, 
      animated: false, 
      completion: nil)

    dataSource = self
  }

  func createViewController(pageNumber: Int) -> UIViewController {
    let contentViewController = 
      storyboard?.instantiateViewControllerWithIdentifier("MyContentViewController") 
      as! MyContentViewController
    contentViewController.pageNumber = pageNumber
    return contentViewController
  }

}

extension MyPageViewController: UIPageViewControllerDataSource {
  func pageViewController(pageViewController: UIPageViewController, viewControllerAfterViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber, 
      numberOfPages) + 1)
  }

  func pageViewController(pageViewController: UIPageViewController, viewControllerBeforeViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber - 2, 
      numberOfPages) + 1)
  }
}

func mod(x: Int, m: Int) -> Int{
  let r = x % m
  return r < 0 ? r + m : r
}

Here we created the createViewController(_:) method that has the page number as argument and creates the instance of MyContentViewController and sets the page number. This method is called from viewDidLoad() to set the initial page and from the two UIPageViewControllerDataSource methods that we're implementing here to get to the next and previous page.

The custom mod(_:_:) function is used to have continuous page navigation, where the user can go from the last page to the first and vice versa (the built-in % operator does not do the true modulus operation that we need here).

Using segues

The above sample is pretty simple. So how can we change it to use a segue? First of all we need to create a segue between the two scenes. Since we're not doing anything standard here, it will have to be a custom segue. Now we need a way to get an instance of our content view controller through the segue, which we can use from our createViewController(_:) method. The only method we can use to do anything with the segue is UIViewController.performSegueWithIdentifier(_:sender:). We know that calling that method will create an instance of our content view controller, which is the destination of the segue, but we then need a way to get this instance back into our createViewController(_:) method. The only place we can reference the new content view controller instance is from within the custom segue. From its init method we can assign it to a variable that createViewController(_:) can also access. That looks something like the following. First we create the variable:

  var nextViewController: MyContentViewController?

Next we create a new custom segue class that assigns the destination view controller (the new MyContentViewController) to this variable.

public class PageSegue: UIStoryboardSegue {

  public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
    super.init(identifier: identifier, source: source, destination: destination)
    if let pageViewController = source as? MyPageViewController {
      pageViewController.nextViewController = 
        destinationViewController as? MyContentViewController
    }
  }

  public override func perform() {}
    
}

Since we're only interested in getting the reference to the created view controller we don't need to do anything extra in the perform() method. And the page view controller itself will handle displaying the pages so our segue implementation remains pretty simple.

Now we can change our createViewController(_:) method:

func createViewController(pageNumber: Int) -> UIViewController {
  performSegueWithIdentifier("Page", sender: self)
  nextViewController!.pageNumber = pageNumber
  return nextViewController!
}

The method looks a bit odd, since we never assign nextViewController anywhere in this view controller. We're also relying on the fact that the segue is performed synchronously by the performSegueWithIdentifier call; otherwise this wouldn't work.

Now we can create the segue in our Storyboard. We need to give it the same identifier as we used above and set the Segue Class to PageSegue.

[Screenshot: the segue's attributes in the Storyboard]

Generic class

Now we can finally create segues to visualise the relationship between page view controller and content view controller. But let's see if we can write a generic class that has most of the logic which we can reuse for each UIPageViewController.

We'll create a class called SeguePageViewController which will be the super class for our MyPageViewController. We'll move the PageSegue to the same source file and refactor some code to make it more generic:

public class SeguePageViewController: UIPageViewController {

    var pageSegueIdentifier = "Page"
    var nextViewController: UIViewController?

    public func createViewController(sender: AnyObject?) -> UIViewController {
        performSegueWithIdentifier(pageSegueIdentifier, sender: sender)
        return nextViewController!
    }

}

public class PageSegue: UIStoryboardSegue {

    public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
        super.init(identifier: identifier, source: source, destination: destination)
        if let pageViewController = source as? SeguePageViewController {
            pageViewController.nextViewController = destinationViewController as? UIViewController
        }
    }

    public override func perform() {}
    
} 

As you can see, we moved the nextViewController variable and createViewController(_:) to this class and use UIViewController instead of our concrete MyContentViewController class. We also introduced a new variable pageSegueIdentifier to be able to change the identifier of the segue.

The only thing missing now is setting the pageNumber of our MyContentViewController. Since we just made things generic we can't set it from here, so what's the best way to deal with that? You might have noticed that the createViewController(_:) method now has a sender: AnyObject? as argument, which in our case is still the page number. And we know another method that receives this sender object: prepareForSegue(_:sender:). All we need to do is implement this in our MyPageViewController class.

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if segue.identifier == pageSegueIdentifier {
    let vc = segue.destinationViewController as! MyContentViewController
    vc.pageNumber = sender as! Int
  }
}

If you're surprised to see the sender argument being used this way, you might want to read my previous post about Understanding the ‘sender’ in segues and use it to pass on data to another view controller.

Conclusion

Whether or not you'd want to use this approach of using segues for page view controllers might be a matter of personal preference. It doesn't give any huge benefits, but it does give you a visual indication of what's happening in the Storyboard. It doesn't really save you any code, since the amount of code in MyPageViewController remained about the same. We just replaced the createViewController(_:) method with the prepareForSegue(_:sender:) method.

It would be nice if Apple offered a better solution that depends less on the data source of a page view controller and lets you define the paging logic directly in the Storyboard.

3 easy ways of improving your Kanban system

Sun, 06/14/2015 - 21:10

You are working in a Kanban team and after a few months it doesn’t seem to be as easy and effective as it used to be. Here are three tips on how to get that energy back up.

1. Recheck and refresh all policies

New team members haven’t gone through the same process as the other team members. This means that they probably won’t pick up on the subtleties of the board and the policies that go with it. Over time, team agreements that were very explicit (remember the Kanban practice “Make Policies Explicit”?) aren’t anymore. For example: a blue stickie that used to have a class of service associated with it becomes a stickie for the person that works on the project. Unclear policies lead to loss of focus, confusion and lack of flow. Check whether all policies are still valid, clear and understood by everybody. If not, get rid of them or improve them!

2. Check if WIP limits are understood and adhered to

In the period right after a team starts using Kanban, Work in Progress (WIP) limits are not completely optimized for flow. They are usually set higher to minimize resistance from the team and stakeholders, as tighter WIP limits require a higher maturity of the organization. The team should then start to adjust (lower) the limits to optimize flow. A lack of team focus on improving flow can mean that WIP limits are never adjusted.

Reevaluate the current WIP limits, adjust them for bottlenecks, lower them where possible and discuss next steps. It will return the focus and help the team to “Start Finishing”.

3. Do a complete overhaul of the board

It sounds counterintuitive, but sometimes it helps to take the board and do a complete run-through of the steps to design a Kanban system. Analyze up- and downstream stakeholders and model the steps the work flows through. After that, update the legend and the policies for classes of service.

Even if the board looks almost the same, the discussion and analysis might lead to subtle changes that make a world of difference. And it’s good for team spirit!

 

I hope these tips can help you. Please share any that you have in the comments!

ATDD with Lego Mindstorms EV3

Fri, 06/12/2015 - 20:09

We have been building automated acceptance tests using web browsers, mobile devices and web services for several years now. Last week Erik Zeedijk and I came up with the (crazy) idea to implement features for a robot using ATDD. In this blog we will explain how we used ATDD while experimenting with Lego Mindstorms EV3.

What are we building?

When you practice ATDD it is really important to have a common understanding about what you need to build and test. So we had several conversations about requirements and specifications.

We wanted to build a robot that could help us clean up post-its from tables or floors after brainstorms and other sticky note related sessions. For our first iteration we decided to experiment with moving the robot and using the color scanner. The robot should search for yellow post-it notes all by itself. No manual interactions because we hate manual work. After the conversation we wrote down the following acceptance criteria to scope our development activities:

  • The robot needs to move around on a white square board
  • The robot needs to turn away from the edges of the square board
  • The robot needs to stop at a yellow post-it

We used Cucumber to write down the features and acceptance test scenarios like:


Feature: Move robot
Scenario: Robot turns when a purple line is detected
When the robot is moving and encounters a purple line
Then the robot turns left

Scenario: Robot stops moving when yellow post-it detected
When the robot is moving and encounters yellow post-it
Then the robot stops

Just enough information to start rocking with the robot! Ready? Yes! Tests? Yes! Lets go!

How did we build it?

First we played around with the Lego Mindstorms EV3 programming software (4GL tool). We could easily build the application by clicking our way through the conditions, loops and color detections.

But we couldn't find a way to hook up our acceptance tests. So we decided to look at programming languages. The first hit was leJOS. It comes with custom firmware which you install on an SD card. This firmware allows you to run Java on the Lego EV3 Brick.

Installing the firmware on the EV3 Brick

The EV3 main computer “Brick” is actually pretty powerful.

It has a 300 MHz ARM processor with 64 MB RAM and takes micro SD cards for storage.
It also has Bluetooth and Wi-Fi, making it possible to wirelessly connect to it and load programs onto it.

There are several custom firmwares you can run on the EV3 Brick. The leJOS firmware allows us to program in Java, with access to a specialized API created for the EV3 Brick. Install it by following these steps:

  1. Download the latest firmware on sourceforge: http://sourceforge.net/projects/ev3.lejos.p/files/
  2. Get a blank SD card (2GB – 32GB, SDXC not supported) and format this card to FAT32.
  3. Unzip the ‘lejosimage.zip’ file from the leJOS home directory to the root directory of the SD card.
  4. Download Java for Lego Mindstorms EV3 from the Oracle website: http://java.com/legomindstorms
  5. Copy the downloaded ‘Oracle JRE.tar.gz’ file to the root of the SD card.
  6. Safely remove the SD card from your computer, insert it into the EV3 Brick and boot the EV3. The first boot will take approximately 8 minutes while it creates the needed partitions for the EV3 Brick to work.
ATDD with Cucumber

Cucumber helped us create scenarios in a human-readable language. We created the step definitions to glue all the automation together and used remote access to the EV3 via Java RMI.

We also wrote some support code to help us make the automation code readable and maintainable.

Below you can find an example of the StepDefinition of a Given statement of a scenario.

@Given("^the robot is driving$")
public void the_robot_is_driving() throws Throwable {
    Ev3BrickIO.start();
    assertThat(Ev3BrickIO.leftMotor.isMoving(), is(true));
    assertThat(Ev3BrickIO.rightMotor.isMoving(), is(true));
}
Behavior Programming

A robot needs to perform a series of functions. In our case we want to let the robot drive around until he finds a yellow post-it, after which he stops. The obvious way of programming these functions is to make use of several if-then-else statements and loops. This works fine at first; however, as soon as the patterns become more complex, the code becomes complex as well, since adding new actions can influence other actions.

A solution to this problem is Behavior Programming. The concept of this is based around the fact that a robot can only do one action at a time depending on the state he is in. These behaviors are written as separate independent classes controlled by an overall structure so that behavior can be changed or added without touching other behaviors.

The following is important in this concept:

  1. One behavior is active at a time and then controls the robot.
  2. Each behavior is given a priority. For example; the turn behavior has priority over the drive behavior so that the robot will turn once the color sensor indicates he reaches the edge of the field.
  3. Each behavior is a self contained, independent object.
  4. The behaviors can decide if they should take control of the robot.
  5. Once a behavior becomes active, it has a higher priority than any of the other behaviors.
Behaviour API and Coding

The Behavior API is fairly easy to understand, as it consists of only one interface and one class. The structure of the individual behavior classes is defined by the overall Behavior interface, and the individual behavior classes are controlled by an Arbitrator class that decides which behavior should be the active one. A separate class should be created for each behavior you want the robot to have.
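
For reference, the Behavior interface boils down to the three methods discussed below (shown here as a sketch; see the leJOS documentation for the authoritative definition):

public interface Behavior {
    // does this behavior want to become active right now?
    boolean takeControl();
    // the work to perform while this behavior is the active one
    void action();
    // called when a higher-priority behavior takes over; action() should then return
    void suppress();
}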

leJOS Behaviour Pattern

 

When an Arbitrator object is instantiated, it is given an array of Behavior objects. Once it has these, the start() method is called and it begins arbitrating; deciding which behavior will become active.

In the code example below the Arbitrator is created: the first two lines create instances of our behaviors. The third line places them into an array, with the lowest-priority behavior taking the lowest array index. The fourth line creates the Arbitrator, and the fifth line starts the arbitration process. Once you create more behavior classes, these can simply be added to the array in order to be executed by the Arbitrator.

DriveForward driveForward = new DriveForward();
Behavior stopOnYellow = new StopOnYellow();
behaviorList = new Behavior[]{driveForward, stopOnYellow};
arbitrator = new Arbitrator(behaviorList, true);
arbitrator.start();

We will now take a detailed look at one of the behavior classes: DriveForward. Each of the standard methods is defined. First we create the action() method, which contains what we want the robot to perform when the behavior becomes active: in this case, moving the left and the right motor forwards.

public void action() {
    try {
        suppressed = false;

        Ev3BrickIO.leftMotor.forward();
        Ev3BrickIO.rightMotor.forward();
        while (!suppressed) {
            Thread.yield();
        }

        Ev3BrickIO.leftMotor.stop(true);
        Ev3BrickIO.rightMotor.stop(true);
    } catch (RemoteException e) {
        e.printStackTrace();
    }
}

The suppress() method will stop the action once this is called.

public void suppress() {
    suppressed = true;
}

The takeControl() method tells the Arbitrator which behavior should become active. In order to keep the robot moving after it has performed other actions, we decided to create a global variable to keep track of whether the robot is running. While this is the case, we simply return true from the takeControl() method.

public boolean takeControl() {
    return Ev3BrickIO.running;
}

You can find our leJOS EV3 Automation code here.

Test Automation Day

Our robot has the potential to do much more and we have not yet implemented the feature of picking up our post-its! Next week we'll continue working with the Robot during the Test Automation Day.
Come and visit our booth where you can help us to enable the Lego Mindstorms EV3 Robot to clean-up our sticky mess at the office!

Feel free to use our Test Automation Day €50 discount on the ticket price by using the following code: TAD15-XB50

We hope to see you there!

Android Design Support: NavigationView

Tue, 06/09/2015 - 09:18

When Google announced that Android apps should follow their Material Design standards, they did not give the developers a lot of tools to actually implement this new look and feel. Of course Google’s own apps were all quickly updated and looked amazing, but the rest of us were left with little more than fancy design guidelines and no real components to use in our apps.

So last week's release of the Android Design Support Library came as a relief to many. It promises to help us quickly create nice looking apps that are consistent with the rest of the platform, without having to roll everything for ourselves. Think of it as AppCompat’s UI-centric companion.

The NavigationView is the part of this library that I found the most interesting. It helps you create the slick sliding-over-everything navigation drawer that is such a recognizable part of material apps. I will demonstrate how to use this component and how to avoid some common mistakes.


Basic Setup

The basic setup is pretty straightforward: you add a DrawerLayout and NavigationView to your main layout resource:

<android.support.v4.widget.DrawerLayout
  android:id="@+id/drawer_layout"
  xmlns:android="http://schemas.android.com/apk/res/android"
  xmlns:app="http://schemas.android.com/apk/res-auto"
  android:layout_width="match_parent"
  android:layout_height="match_parent"
  android:fitsSystemWindows="true">

  <!-- The main content view -->
  <LinearLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- Toolbar instead of ActionBar so the drawer can slide on top -->
    <android.support.v7.widget.Toolbar
      android:id="@+id/toolbar"
      android:layout_width="match_parent"
      android:layout_height="@dimen/abc_action_bar_default_height_material"
      android:background="?attr/colorPrimary"
      android:minHeight="?attr/actionBarSize"
      android:theme="@style/AppTheme.Toolbar"
      app:titleTextAppearance="@style/AppTheme.Toolbar.Title"/>

    <!-- Real content goes here -->
    <FrameLayout
      android:id="@+id/content"
      android:layout_width="match_parent"
      android:layout_height="0dp"
      android:layout_weight="1"/>
  </LinearLayout>

  <!-- The navigation drawer -->
  <android.support.design.widget.NavigationView
    android:id="@+id/navigation"
    android:layout_width="wrap_content"
    android:layout_height="match_parent"
    android:layout_gravity="start"
    android:background="@color/ternary"
    app:headerLayout="@layout/drawer_header"
    app:itemIconTint="@color/drawer_item_text"
    app:itemTextColor="@color/drawer_item_text"
    app:menu="@menu/drawer"/>

</android.support.v4.widget.DrawerLayout>

And a drawer.xml menu resource for the navigation items:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
  <!-- group with single selected item so only one item is highlighted in the nav menu -->
  <group android:checkableBehavior="single">
    <item
      android:id="@+id/drawer_item_1"
      android:icon="@drawable/ic_info"
      android:title="@string/item_1"/>
    <item
      android:id="@+id/drawer_item_2"
      android:icon="@drawable/ic_help"
      android:title="@string/item_2"/>
  </group>
</menu>

Then wire it up in your Activity. Notice the nice onNavigationItemSelected(MenuItem) callback:

public class MainActivity extends AppCompatActivity implements
    NavigationView.OnNavigationItemSelectedListener {

  private static final long DRAWER_CLOSE_DELAY_MS = 250;
  private static final String NAV_ITEM_ID = "navItemId";

  private final Handler mDrawerActionHandler = new Handler();
  private DrawerLayout mDrawerLayout;
  private ActionBarDrawerToggle mDrawerToggle;
  private int mNavItemId;

  @Override
  protected void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar);

    // load saved navigation state if present
    if (null == savedInstanceState) {
      mNavItemId = R.id.drawer_item_1;
    } else {
      mNavItemId = savedInstanceState.getInt(NAV_ITEM_ID);
    }

    // listen for navigation events
    NavigationView navigationView = (NavigationView) findViewById(R.id.navigation);
    navigationView.setNavigationItemSelectedListener(this);

    // select the correct nav menu item
    navigationView.getMenu().findItem(mNavItemId).setChecked(true);

    // set up the hamburger icon to open and close the drawer
    mDrawerToggle = new ActionBarDrawerToggle(this, mDrawerLayout, toolbar, R.string.open,
        R.string.close);
    mDrawerLayout.setDrawerListener(mDrawerToggle);
    mDrawerToggle.syncState();

    navigate(mNavItemId);
  }
  
  private void navigate(final int itemId) {
    // perform the actual navigation logic, updating the main content fragment etc
  }

  @Override
  public boolean onNavigationItemSelected(final MenuItem menuItem) {
    // update highlighted item in the navigation menu
    menuItem.setChecked(true);
    mNavItemId = menuItem.getItemId();
    
    // allow some time after closing the drawer before performing real navigation
    // so the user can see what is happening
    mDrawerLayout.closeDrawer(GravityCompat.START);
    mDrawerActionHandler.postDelayed(new Runnable() {
      @Override
      public void run() {
        navigate(menuItem.getItemId());
      }
    }, DRAWER_CLOSE_DELAY_MS);
    return true;
  }

  @Override
  public void onConfigurationChanged(final Configuration newConfig) {
    super.onConfigurationChanged(newConfig);
    mDrawerToggle.onConfigurationChanged(newConfig);
  }

  @Override
  public boolean onOptionsItemSelected(final MenuItem item) {
    if (item.getItemId() == android.support.v7.appcompat.R.id.home) {
      return mDrawerToggle.onOptionsItemSelected(item);
    }
    return super.onOptionsItemSelected(item);
  }

  @Override
  public void onBackPressed() {
    if (mDrawerLayout.isDrawerOpen(GravityCompat.START)) {
      mDrawerLayout.closeDrawer(GravityCompat.START);
    } else {
      super.onBackPressed();
    }
  }

  @Override
  protected void onSaveInstanceState(final Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putInt(NAV_ITEM_ID, mNavItemId);
  }
}
Extra Style

This setup results in a nice-looking menu with some default styling. If you want to go a bit further, you can add a header view to the drawer and add some colors to the navigation menu itself:

<android.support.design.widget.NavigationView
  android:id="@+id/navigation"
  android:layout_width="wrap_content"
  android:layout_height="match_parent"
  android:layout_gravity="start"
  android:background="@color/drawer_bg"
  app:headerLayout="@layout/drawer_header"
  app:itemIconTint="@color/drawer_item"
  app:itemTextColor="@color/drawer_item"
  app:menu="@menu/drawer"/>

Here the drawer_item color is actually a ColorStateList, in which the checked state is used by the currently active navigation item:

<selector xmlns:android="http://schemas.android.com/apk/res/android">
  <item android:color="@color/drawer_item_checked" android:state_checked="true" />
  <item android:color="@color/drawer_item_default" />
</selector>
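
The layout referenced through app:headerLayout is just a regular layout resource; a minimal res/layout/drawer_header.xml could look like this (dimensions and contents are illustrative):

<?xml version="1.0" encoding="utf-8"?>
<!-- res/layout/drawer_header.xml: illustrative header for the navigation drawer -->
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="160dp"
    android:background="?attr/colorPrimary"
    android:padding="16dp">

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="bottom|start"
        android:text="@string/app_name"
        android:textColor="@android:color/white"
        android:textAppearance="?android:attr/textAppearanceLarge"/>

</FrameLayout>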
Open Issues

The current version of the library does come with its limitations. My main issue is with the system that highlights the current item in the navigation menu. The itemBackground attribute for the NavigationView does not handle the checked state of the item correctly: somehow either all items are highlighted or none of them are. This makes this attribute basically unusable for most apps. I ran into more trouble when trying to work with submenus in the navigation items. Once again the highlighting refused to work as expected: updating the selected item in a submenu makes the highlight overlay disappear altogether. In the end it seems that managing the selected item is still a chore that has to be solved manually in the app itself, which is not what I expected from what is supposed to be a drag-and-drop component aimed at taking work away from developers.

Conclusion

I think the NavigationView component missed the mark a little. My initial impression was pretty positive: I was able to quickly put together a nice looking navigation menu with very little code. The issues with the highlighting of the current item make it more difficult to use than I would expect, but let's hope that these quirks are removed in an upcoming release of the design library.

You can find the complete source code of the demo project on GitHub: github.com/smuldr/design-support-demo.

Microservices architecture principle #6: One team is responsible for full life cycle of a Micro service

Tue, 06/02/2015 - 20:08

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Today's blog is the last in a series about our Microservices principles. This blog explains why a Microservice should be the responsibility of exactly one team (but one team may be responsible for more services).

Being responsible for the full life cycle of a service means that a single team can deploy and manage a service as well as create new versions and retire obsolete ones. This means that users of the service have a single point of contact for all questions regarding the use of the service. This property makes it easier to track changes in a service. Developers can focus on a specific area of the business they are supporting so they will become specialists in that area. This in turn will lead to better quality. The need to also fix errors and problems in production systems is a strong motivator to make sure code works correctly and problems are easy to find.
Having different teams working on different services introduces a challenge that may lead to a more robust software landscape. If TeamA needs a change in TeamB’s service in order to complete its task, some form of planning has to take place. Both teams have to cater for slipping schedules and unforeseen events that cause the delivery date of a feature to change. However, depending on a commitment made by another team is tricky; there are lots of valid reasons why a change may be late (e.g. production issues or illness temporarily reduce a team’s capacity, or high-priority changes take precedence). So TeamA may never depend on TeamB finishing before the deadline. TeamA will learn to protect its weekends and evenings by changing its architecture. Not making assumptions about another team’s schedule, in a Microservice environment, will therefore lead to more robust software.


Microservices principles #5: Best technology for the job over one technology for all

Mon, 06/01/2015 - 11:39

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
In this blog we cover: “Best technology for the job over one technology for all”

A common benefit of service based (and loosely coupled) architectures is the possibility to choose a different technology for each service. Even though this concept isn’t new, it’s rarely applied. Often the reason for this seems to be that even though the services should operate independently they do share (parts of) the same stack. This is further fueled by an urge to consolidate all development under a single technology. Reasoning here usually being that developers become more interchangeable and therefore more valuable if everything runs on the same technology, which should be a good thing.

So, if this isn’t new territory, why drag it up again? Why would a Microservices architecture merit changing an existing approach? The short answer is autonomy. The long(er) answer is that a Microservices architecture does not try to centralize common (technological) functions in singleton-esque services. No, the focus of a Microservices architecture is on service autonomy, centered around business capability, and a Microservice can therefore implement its own stack. This makes a Microservice easier to deploy on its own and removes dependencies on other services as much as possible.

But autonomous deployment isn’t the most important reason to consider technology on a per-service basis. The most important reason is the simple “use the best tool for the job”. Not all technology is created equal. This isn’t limited to the choice of programming language or even the framework. It applies to the whole stack including the data layer.

Instead of spending a significant sum to buy large, bloated, multipurpose middleware, consider lightweight, single purpose containers. Pick containers that run the tech you need. You don't need Java applications with a relational database for everything. Other languages, frameworks and even datastores exist that cater to specific needs. Think of other languages like Scala or Go, frameworks like Akka or Play and database alternatives that focus on specific needs like storing (and retrieving) geographical data.

The choice of stack also relates to the choices you can make for your application landscape. If you have existing components that work for you or if you have components you want to buy off the shelf, it’s a real benefit to not be limited by an existing stack. For example, if you have opted for a Windows-only environment you are limiting your options.

Concerns about maintaining such a diverse landscape should be weighed against the fact that a lot of complexity comes from trying to maintain a single stack for everything. Smaller and simpler stacks should be easier to maintain. And having a single operations team for all those different technologies doesn't sound like a good idea? You're right! If you still have separate development and operations teams, it may also be time to revisit that strategy. The devops approach makes running the services a shared responsibility. This doesn’t happen overnight, but it is also a reason why Microservices can be such a good fit for organizations that have adopted an Agile way of working and/or apply Continuous Delivery.

Finally, giving your developers a broader toolset to play with should keep them engaged. The opportunity to work with more than one technology can be a factor in retaining and attracting talent.

Microservices architecture principle #4: Asynchronous communication over synchronous communication

Fri, 05/29/2015 - 13:37

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why we prefer asynchronous communication over synchronous communication.

In a previous post in this series we explained that we prefer autonomy of Microservices over coordination between Microservices. That does not imply a Microservices landscape in which every Microservice runs without depending on any other Microservice. There will always be dependencies, but we try to minimise their number. Once you have minimised the number of dependencies, how should these be implemented such that autonomy is maintained as much as possible? Synchronous dependencies between services imply that the calling service is blocked, waiting for a response from the called service before continuing its operation. This is tight coupling, it does not scale very well, and the calling service may be impacted by errors in the called service. In a highly available, robust Microservices landscape that is not desirable. Measures can be taken (think of things like circuit breakers), but they require extra effort.

The preferred alternative is to use asynchronous communication. In this pattern the calling service simply publishes its request (or data) and continues with other work (unrelated to this request). The service has a separate thread listening for incoming responses (or data) and processes these when they come in. It does not block and wait for a response after it has sent a request, which improves scalability. Problems in another service will not break this service. If other services are temporarily broken the calling service might not be able to complete a process entirely, but the calling service itself is not broken. Thus, using the asynchronous pattern the services are more decoupled than with the synchronous pattern, which preserves the autonomy of the service.
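
To make this concrete, here is a minimal, hypothetical sketch in Java of the calling side of this pattern. It is not tied to any particular message broker; the two in-memory queues merely stand in for a broker's request and response channels, and all names are made up for illustration.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of an asynchronously communicating service: it publishes requests and moves on,
// while a separate listener thread handles responses whenever they arrive.
public class AsyncCallerSketch {

    private final BlockingQueue<String> requests = new LinkedBlockingQueue<>();   // stand-in for an outgoing channel
    private final BlockingQueue<String> responses = new LinkedBlockingQueue<>();  // stand-in for an incoming channel

    public void start() {
        // Separate thread that listens for incoming responses and processes them.
        Thread listener = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    handleResponse(responses.take());   // blocks only this listener thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        listener.setDaemon(true);
        listener.start();
    }

    public void placeRequest(String payload) {
        requests.offer(payload);   // publish and continue; we do not wait for the other service
        // ... carry on with unrelated work here ...
    }

    private void handleResponse(String response) {
        System.out.println("Processed response: " + response);
    }
}

If the other service is down, placeRequest still succeeds and this service keeps functioning; only the work that depends on the missing responses is delayed.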

Microservices Architecture Principle #3: small bounded contexts over one comprehensive model

Wed, 05/27/2015 - 21:18

Microservices are a hot topic. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts. Today we discuss the Domain Driven Design (DDD) concept of "Bounded Context" and how it plays a major role in designing Microservices.

One of the discussion points around Microservices, since the term was coined in 2013, is how big (or rather, how small) a Microservice should be. Some people, such as Fred George, claim services should be small, maybe between 100 and 1000 lines of code (LoC). However, LoC is a poor metric for measuring software in general and even more so for determining the scope of a Microservice. Rather, when identifying the scope of our Microservices, we look at the functionality that a service needs to provide and how the service relates to other services. Our aim is to design Microservices that are autonomous, i.e. have low coupling with other services and well defined interfaces, and that implement a single business capability, i.e. have high cohesion.

A technique that can be used for this is "Context Mapping". With this technique we identify the various contexts in the IT landscape and their boundaries. The Context Map is the primary tool used to make boundaries between domains explicit. A Bounded Context encapsulates the details of a single domain, such as the domain model, data model, application services, etc., and defines the integration points with other bounded contexts/domains. This matches perfectly with our definition of a Microservice: autonomous, with well defined interfaces, implementing a business capability. This makes Context Mapping (and DDD in general) an excellent tool in the architect's toolbox for identifying and designing Microservices.

Another factor in sizing our services is that we would like to have models that can "fit in your head", so as to be able to reason about them efficiently. Most projects define a single comprehensive model encompassing the full domain, as this seems natural, and appears easier to maintain as one does not have to worry about the interaction between multiple models, or translate from one context to the other.

For small systems this may be true, but for large systems the costs will start to outweigh the benefits: maintaining a single model requires centralization. Naturally the model will tend to fragment: a domain expert from the accounting domain will think differently about 'inventory' than a logistics domain expert, for example. It takes a lot of coordinated effort to disambiguate all terms across all domains. And worse, this 'unified vocabulary' is awkward and unnatural to use, and will very likely be ignored in most cases. Here bounded contexts help again: they make clear where we can safely use the natural domain terms and where we need to bridge to other domains. With the right boundaries and sizes of our bounded contexts we can make sure our domain models "fit in your head" and that we do not have to switch between models too often.
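
A small, purely illustrative Java sketch of what this means in code (the class and field names are invented for the example): each bounded context keeps its own model of 'inventory', and a small translator at the boundary replaces the single unified model.

// Accounting bounded context: 'inventory' is a monetary value on the balance sheet.
class AccountingInventory {
    java.math.BigDecimal valuationInEuros;
}

// Logistics bounded context: 'inventory' is physical stock at a warehouse location.
class LogisticsInventory {
    String warehouseLocation;
    int unitsOnHand;
}

// Translation at the boundary between the two contexts (an anti-corruption layer in DDD terms),
// so neither context has to adopt the other's vocabulary.
class InventoryValuationTranslator {
    AccountingInventory toAccountingView(LogisticsInventory stock, java.math.BigDecimal unitPrice) {
        AccountingInventory result = new AccountingInventory();
        result.valuationInEuros = unitPrice.multiply(java.math.BigDecimal.valueOf(stock.unitsOnHand));
        return result;
    }
}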

So maybe the best answer to the question of how big a Microservice should be is: it should have a well defined bounded context that enables us to work without having to consider other contexts, or swap between them.

Quickly build an XL Deploy plugin for deploying container applications to CoreOS

Mon, 05/25/2015 - 21:57

You can use fleetctl and script files to deploy your container applications to CoreOS. However, using XL Deploy for deployment automation is a great solution when you need to deploy and track versions of many applications. What does it take to create an XL Deploy plugin to deploy these container applications to your CoreOS clusters?

XL Deploy can be extended with custom plugins which add deployment capabilities. Using XL Rules, custom plugins can be created quickly and with limited effort. In this blog you can read how such a plugin can be created in a matter of hours.

In a number of blog posts, Mark van Holsteijn explained how to create a high available Docker container platform using CoreOS and Consul. In these posts shell scripts (with fleetctl commands) are used to deploy container applications. Based on these scripts I have built an XL Deploy plugin which deploys fleet unit configuration files to a CoreOS cluster.

Deploying these container applications using XL Deploy has a number of advantages:

  • Docker containers can be deployed without creating, adjusting and maintaining scripts for individual applications.
  • XL Deploy will track and report the applications deployed to the CoreOS clusters.
  • Additional deployment scenarios can be added with limited effort.
  • Deployments will be consistent and configuration is managed across environments.
  • XL Deploy permissions can be used to control (direct) access to the CoreOS cluster(s).

Building an XL Deploy plugin is fast, since you can:

  • Reuse existing XL Deploy capabilities, like the Overthere plugin.
  • Utilize XL Deploy template processing to inject property values in rules and deploy scripts.
  • Exploit the XL Deploy unified deployment model to get the deltas which drive the required fleetctl deployment commands for any type of deployment (new, update, undeploy and rollback deployments).
  • Use XML- and script-based rules to build deployment tasks.

Getting started

  • Install XL Deploy; you can download a free edition here. If you are not familiar with XL Deploy, read the getting started documentation.
  • Next, add the plugin resources to the ext directory of your XL Deploy installation. You can find the plugin resources in this Github repository. Add the synthetic.xml and xl-rules.xml files from the repository root. In addition, add the scripts directory and its contents. Restart XL Deploy.
  • Next, set up a CoreOS cluster. This blog post explains how you can set up such a platform locally.
  • Now you can connect to XL Deploy using your browser. On the deployment tab you can import the sample application, located in the sample-app folder of the plugin resources Github repository.
  • You can now set up the target deployment container based on the Overthere.SshHost configuration item type. Verify that you can connect to your CoreOS cluster using this XL Deploy container.
  • Next, you can set up an XL Deploy environment, which contains your target deployment container.
  • Now you can use the deployment tab to deploy and undeploy your fleet configuration file applications.

Building the plugin

The plugin consists of two XML files and a number of script files. Below you find a description of the plugin implementation steps.

The CoreOS container application deployments are based on fleet unit configuration files. So, first we create an XL Deploy configuration item type definition which represents such a file. This XL Deploy deployed type is defined in the synthetic.xml file. The snippet below shows the contents of this file. I have assigned the name "fleet.DeployedUnit".

(Screenshot: the synthetic.xml definition of the fleet.DeployedUnit type)

The definition contains a container-type attribute, which references the Overthere.SshHost container. The plugin can simply use this container type to connect to the CoreOS cluster and execute fleet commands.

Furthermore, I have added two properties. The first property can be used to specify the number of instances. Note that XL Deploy dictionaries can be utilized to define the number of instances for each environment separately. The second property is a flag which controls whether instances will be started (or only submitted and loaded).

If you want to deploy a fleet configuration file using fleetctl, you can issue the following three commands: submit, load and start. In the plugin, I have created a separate script file for each of these fleetctl commands. The caption below shows the script file to load a fleet configuration file. This load script uses the file name property and numberOfInstances property of the "fleet.DeployedUnit" configuration item.

(Screenshot: the load-unit script)

Finally, the plugin can be completed with XML-based rules which create the deployment steps. The caption below shows the rule which adds steps to [1] submit the unit configuration and [2] load the unit when (a version of) the application is deployed.

(Screenshot: the xl-rules.xml deployment rules)

Using rules, you can easily define logic to add deployment steps. These steps can closely resemble the commands you perform when you are using fleetctl. For this plugin I have utilized XML-based rules only. Using script rules, you can add more intelligence to your plugin. For example, the logic of the restart script can be converted to rules and more fine-grained deployment steps.

More information

If you are interested in building your own XL Deploy plugin, the XL Deploy product documentation contains tutorials which will get you started.

If you want to know how you can create a High Available Docker Container Platform using CoreOS and Consul, the blog posts by Mark van Holsteijn mentioned earlier are a great starting point.

Microservices architecture principle #2: Autonomy over coordination

Mon, 05/25/2015 - 16:13

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why we prefer autonomy of services over coordination between services.

Our Xebia colleague Serge Beaumont posted "Autonomy over Coordination" in a tweet earlier this year, and for me it summarised one of the crucial aspects of creating an agile, scalable and robust IT system or organisational structure. Autonomy over coordination is closely related to the business capabilities described in the previous post in this series: each capability should be implemented in one microservice. Once you have defined your business capabilities correctly, the dependencies between those capabilities are minimised. Therefore minimal coordination between capabilities is required, leading to optimal autonomy. Increased autonomy for a microservice gives it freedom to evolve without impacting other services: the optimal technology can be used, it can scale without having to scale others, etc. For the team responsible for the service the advantages are similar: the autonomy enables them to make optimal choices that make their team function at its best.

The drawbacks of less autonomy and more coordination are evident, and we have all experienced them. For example: a change leads to a snowball of dependent changes that must be deployed at the same moment; making changes to a module requires approval of other teams; a compute-intensive function cannot be scaled up without scaling the whole system; ... the list is endless.

So in summary, pay attention to defining your business capabilities (microservices) in such a manner that autonomy is maximised; it will give you both organisational and technical advantages.

Microservices architecture principle #1: Each Microservice delivers a single complete business capability

Sat, 05/23/2015 - 21:13

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why a Microservice should deliver a complete business capability.

A complete business capability is a process that can be completed from start to finish without interruptions or excursions to other services. This means that a business capability should not depend on other services to complete its work.
If a process in a microservice depends on other microservices, we would end up in the dependency hell that ESBs introduced: in order to service a customer request we need many other services, and therefore if one of them fails everything stops. A more robust solution is to define a service that handles a process that makes sense to a user. An example is ordering a book in a web shop. This process would start with the selection of a book and end with creating an order. Actually fulfilling the order is a different process that lives in its own service. The fulfillment process might run right after the order process, but it doesn't have to. If the customer orders a PDF version of a book, order fulfillment may be completed right away. If the order was for the print version, all the order service can promise is to ask shipping to send the book. Separating these two processes into different services allows us to make choices about the way each process is completed, making sure that a problem or delay in one service has no impact on other services.
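
As a rough sketch of this separation (all names are hypothetical, and the in-memory structures only stand in for real storage and messaging): the order service completes its capability by recording the order and publishing the fact that it happened; fulfillment is a separate service that reacts to that fact later, so a problem in fulfillment cannot stop ordering.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.UUID;

// Ordering capability: complete on its own, no call into fulfillment.
class OrderService {
    private final List<String> orders = new ArrayList<>();
    private final Queue<String> orderPlacedEvents = new ArrayDeque<>();

    String placeOrder(String bookId) {
        String orderId = UUID.randomUUID().toString();
        orders.add(orderId + ":" + bookId);   // the order is complete from the customer's point of view
        orderPlacedEvents.add(orderId);       // publish the fact; fulfillment picks it up whenever it runs
        return orderId;
    }
}

// Fulfillment capability: a separate service that consumes the published events at its own pace.
class FulfillmentService {
    void onOrderPlaced(String orderId) {
        System.out.println("Asking shipping to send the book for order " + orderId);
    }
}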

So, building a microservice such that it does a single thing well without interruptions or waiting time is at the foundation of a robust architecture.

Agile goes beyond Epic Levels

Fri, 05/15/2015 - 17:10

A snapshot from my personal backlog last week:

  • The Agile transformation at ING was front-page news in the Netherlands. This made us realize even more how epic this transformation and assignment actually are.
  • The Agile-built hydrogen race car from the TU Delft set an official track record on the Nürburgring. We're proud of our guys in Delft!
  • Hanging out with Boeing's Agile champs at their facilities in Seattle, exchanging knowledge. Impressive and extremely fruitful!
  • Coaching the State of Washington on their groundbreaking Agile initiatives, together with my friend and fellow consultant from ScrumInc, Joe Justice.

One thing became clear to me after a week like this: something Agile is cookin'. And it's BIG!

In this blog I will be explaining why and how Agile will develop in the near future.

Introduction: what's happening?

Humankind is currently facing the biggest era change since the 19th century. Our industries, education, technologies and society simply no longer meet today's and tomorrow's needs. Some systems, like healthcare and the economy, are so broken that they should actually be reinvented. Everything has just become too vulnerable and complex. Merely "lubricating the engine", like quantitative easing of the economy, is no longer a sustainable solution. As Russell Ackoff already said, you can only fix a system as a whole, not by fixing separate parts.

This reinvention will only succeed when we are able to learn and adjust our systems very rapidly. Agile, Lean and a different way of organizing ourselves can make this a reality. Lean will provide us with the right tools to do exactly what's needed, nothing more, nothing less. But applying Lean for efficiency purposes only will not bring the innovations and creativity we need. We also need an additional catalyst and engine: Agile. It will provide us with the right mindset and tools to innovate, inspect and adapt very fast. And finally, we must reorganize ourselves around cooperation rather than directive command and control. The latter was useful in the industrial revolution, not in our modern, complex times.

Agile’s for everything

Contrary to what most people think, Agile is not only a software development tool. You can apply it to almost everything. For example, as Xebia consultants we've successfully coached Agile and Lean non-IT initiatives in Marketing, Innovation, Education, Automotive, Aviation and non-profit organizations. It simply works, and how: a productivity increase of 500% is no exception. But above all, team members and customers are much happier.

Agile's for everybody

At this moment, a lot of people are still unconsciously addicted to their patterns and unaware of the unlimited possibilities out there. It's like taking a walk in the forest: you can bring your own lunch like you always do, but there are some delicious fruits out there for free! Technologies like 3D printing offer unlimited possibilities right on your desk, whereas only a few years ago you needed a complicated, million-dollar machine for this. The same goes for Agile. It's open source and out there waiting for you. It will also help you get more out of all these awesome new developments!

The maturity of Agile explained

Until recently, most agile initiatives emerged bottom up, but stalled on a misfit between Agile and (conventional) organizations.  Loads of software was produced, but could not be brought to production, simply because the whole development chain was not Agile yet. Tools like TDD and Continuous Integration improved the situation significantly, but dependencies were still not managed properly most of the time.

In the last couple of years, some good scaled Agile frameworks like LeSS and SAFe emerged. They manage the dependencies better, but do not directly encourage the Agile mindset and the motivation of people. In parallel, departments like HR, Control and Finance were struggling with Agile. A scaled Agile framework would be implemented, but the hierarchical organization structure was not adjusted, creating a gap between fast-moving Agile teams and departments still hindered by non-Agile procedures, processes and systems.

Therefore, we see conventional organizations moving towards a more Agile, community-based model like Spotify, Google or Zappos. ING is now transforming towards a similar organization model.

Predictions for the near future

My expectation is that we will see Agile transformations continue on a much wider scale. For example, people developing their own products in an Agile fashion while using 3D printing. Governments will use Agile and Holacracy to solve issues like reshaping the economic system together with society. Or, as I observed last week, the State of Washington government using these techniques successfully to solve the challenges they're facing.

For me, it currently feels like the early nineties again, when the Internet emerged. At that time I explained to many people that the Internet would be like electricity for them in the near future. Most people laughed and stated it was just a new way of communicating. The same now applies to the Agile mindset. It's not just a hype or a corporate tool; it will reshape the world as we know it today.

How to improve Scrum team performance with Kanban

Mon, 05/11/2015 - 21:22
This blogpost has been a collaborative effort between Jeroen Willemsen and Jasper Sonnevelt

In this post we will take a look at a real life example of a Scrum team transitioning to Kanban. We will address a few pitfalls and discuss how to circumvent those. With this we provide additional insights to advance your agile way of working.

The example

At the time we joined this project, three Scrum teams worked together to develop a large software product. The teams shared a Product Owner (PO) and backlog and had just released their MVP. Now bug reports started coming in and the Scrum teams faced several problems:

  • The developers were making a lot of progress per sprint, so the PO had to put a lot of effort into the backlog to create a sufficiently sized work inventory.
  • New bugs came to light as the product started to be adopted. Filing and fixing these bugs and monitoring the progress took a lot of effort from the PO. Solving each bug only in the next sprint hurt the team's responsiveness.
  • Prioritizing the bug fixes and the newly requested features became hard, as the PO had too many stakeholders to manage.

The Product Owner saw Kanban as a possible solution to these problems. He convinced the team to implement it, both to deal with these problems and to provide a better way of working.

The practices that were implemented included:

  • No more sprint replenishment rhythm (planning & backlog refinement);
  • A Work in Progress (WIP) limit.

As a result the backlog quality started to deteriorate. Less information was written down about a story, and developers did not take the time to ask the PO the right questions. The WIP limit capped the amount of work the team could take on. This made them focus on the current, sometimes unclear and overly complex, stories. Because of this, developers would keep working on their current tasks, making assumptions along the way. These assumptions could sometimes lead to conflicting insights as developers collaborated on stories while being influenced by different stakeholders. This resulted in misalignment.

All of these actions resulted in a lower burn-down rate, less predictability and therefore nervous stakeholders. And after a few weeks, the PO decided to stop the migration to Kanban and go back to Scrum.

However, the latter might not have been necessary for the team to be successful. After all, it is about how you implement Kanban, and a few key elements were missing here. In our experience, teams that are successful (would) have also implemented:

  • Forecasting based on historical data and simulations: Scrum teams use Planning Poker and velocity to make predictions about the amount of work they will be able to do in a sprint. When the sprint cadence is let go, this becomes more difficult. Practices in the Kanban Method tell us that by using historical data in simulations, the same and probably even better predictability can be achieved; a minimal sketch of such a simulation follows after this list.
  • Policies for prioritizing, WIP monitoring and handover criteria: Kanban demands a very high amount of discipline from all participants. Policies will help here. This team would have benefited greatly from having clearly defined policies about priorities and prioritization. For instance: who can (and is allowed to) prioritize? How can the team see what has the highest priority? Are there different classes of service we can identify, and how do we treat those? The same holds for WIP limits and handover criteria. We always use a Definition of Ready and a Definition of Done in Scrum teams. Keeping these in a Kanban team is a very good practice.
  • A feedback cycle to review data and process on a regular basis: where Scrum demands a retrospective after every sprint, Kanban does not explicitly define such a thing. Kanban only dictates that you should implement feedback loops. When an organization starts implementing Kanban it is key that they review the implementation, data and process on a regular basis. This is crucial during the first weeks: all participants will have to get used to the new way of working. Policies should be reviewed as well, as they might need adjustments to facilitate the team in having an optimized workflow.
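
The sketch below, in Java, shows the idea behind such a simulation. The throughput numbers are invented for illustration; a real team would feed in its own history, and more sophisticated tooling exists for this.

import java.util.Arrays;
import java.util.Random;

// Minimal Monte Carlo forecast: resample a team's historical weekly throughput
// to estimate how many items it is likely to finish in the next few weeks.
public class ThroughputForecastSketch {
    public static void main(String[] args) {
        int[] weeklyThroughputHistory = {3, 5, 2, 6, 4, 5, 3, 7};  // illustrative historical data
        int weeksToForecast = 4;
        int runs = 10_000;
        Random random = new Random();

        int[] outcomes = new int[runs];
        for (int run = 0; run < runs; run++) {
            int total = 0;
            for (int week = 0; week < weeksToForecast; week++) {
                total += weeklyThroughputHistory[random.nextInt(weeklyThroughputHistory.length)];
            }
            outcomes[run] = total;
        }

        Arrays.sort(outcomes);
        // The 15th percentile of the outcomes is a value reached or exceeded in roughly 85% of the runs.
        System.out.println("In about 85% of the simulated runs the team finished at least "
                + outcomes[(int) (runs * 0.15)] + " items in " + weeksToForecast + " weeks.");
    }
}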

The hard thing about Kanban

What most people think of and see when talking about Kanban is introducing WIP limits on their board, adding columns (other than To Do, In Progress, Done) and no longer using sprints. And herein lies one of the problems: for organizations that are used to working with Scrum, letting go of sprints is a very unnatural thing to do. It feels like letting go of predictability and regular feedback. And instead of making the organization a bit better, people feel like they have just taken a step back.

The hard thing about Kanban is that it doesn't provide you with a clear-cut solution to your problems like Scrum does. Kanban is a method that helps you fix the problems in your organization in a way that best fits your context. For instance: it tells you to visualize a lot of things, but only provides examples of what you could visualize. Typically, teams and organizations visualize their process and their work (types).

To summarize:

  • Kanban uses the current process and doesn't enforce a process of its own. Therefore it demands a higher degree of discipline, both from the team members and from the rest of the organization. If you are not aware of this, the introduction of Kanban will only lead to chaos.
  • Kanban doesn't say anything about the replenishment frequency of the To Do column or the demo frequency.

Recommendations:

  • Implement forecasting based on historical data and simulations; policies for prioritizing, WIP limits and handover criteria; and a feedback cycle to review data and process on a regular basis.
  • Define clear policies about how to collaborate. This will help create transparency and predictability.
  • The Scrum ceremonies happen in a rhythm that is very easy for stakeholders to understand and learn. Keep that in mind when designing Kanban policies.

Understanding the 'sender' in segues and using it to pass on data to another view controller

Fri, 05/08/2015 - 22:59

One of the downsides of using segues in storyboards is that you often still need to write code to pass on data from the source view controller to the destination view controller. The prepareForSegue(_:sender:) method is the right place to do this. Sometimes you need to manually trigger a segue by calling performSegueWithIdentifier(_:sender:), and it's there you usually know what data you need to pass on. How can we avoid adding extra state variables in our source view controller just for passing on data? A simple trick is to use the sender parameter that both methods have.

The sender parameter is normally used by storyboards to indicate the UI element that triggered the segue, for example a UIButton when pressed or a UITableViewCell that triggers the segue by being selected. This allows you to determine what triggered the segue in prepareForSegue(_:sender:), and based on that (and of course the segue identifier) take some actions and configure the destination view controller, or even determine that it shouldn't perform the segue at all by returning false from shouldPerformSegueWithIdentifier(_:sender:).

When it's not possible to trigger the segue from a UI element in the Storyboard, you need to use performSegueWithIdentifier(_:sender:) instead to trigger it manually. This might happen when no direct user interaction should trigger the segue, or when the triggering control was created in code. Maybe you want to execute some additional logic when pressing a button and only after that perform the segue. Whatever the situation is, you can use the sender argument to your benefit: you can pass in whatever you may need in prepareForSegue(_:sender:) or shouldPerformSegueWithIdentifier(_:sender:).

Let's have a look at some examples.

(Screenshot: the two example view controllers)

Here we have two very simple view controllers. The first has three buttons for different colors. When tapping on any of the buttons, the name of the selected color will be put on a label and it will push the second view controller. The pushed view controller will set its background color to the color represented by the tapped button. To do that, we need to pass on a UIColor object to the target view controller.

Even though this could be handled by creating 3 distinct segues from the buttons directly to the destination view controller, we chose to handle the button tap ourselves and then trigger the segue manually.

You might come up with something like the following code to accomplish this:

class ViewController: UIViewController {

    @IBOutlet weak var label: UILabel!

    var tappedColor: UIColor?

    @IBAction func tappedRed(sender: AnyObject) {
        label.text = "Tapped Red"
        tappedColor = UIColor.redColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    @IBAction func tappedGreen(sender: AnyObject) {
        label.text = "Tapped Green"
        tappedColor = UIColor.greenColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    @IBAction func tappedBlue(sender: AnyObject) {
        label.text = "Tapped Blue"
        tappedColor = UIColor.blueColor()
        performSegueWithIdentifier("ShowColor", sender: sender)
    }

    // Read the tappedColor that was stored by the action method and pass it to the destination view controller.
    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowColor" {
            if let colorViewController = segue.destinationViewController as? ColorViewController {
                colorViewController.color = tappedColor
            }
        }
    }

}

class ColorViewController: UIViewController {

    var color: UIColor?

    override func viewDidLoad() {
        super.viewDidLoad()

        view.backgroundColor = color
    }

}

We created a state variable called tappedColor to keep track of the color that needs to be passed on. It is set in each of the action methods before calling performSegueWithIdentifier("ShowColor", sender: sender) and then read again in prepareForSegue(_:sender:) so we can pass it on to the destination view controller.

The action methods receive the tapped UIButton as the sender argument, and since that's the actual element that initiated the action, it makes sense to set it as the sender when performing the segue. So that's what we do in the code above. But since we don't actually use the sender when preparing the segue, we might as well pass on the color directly instead. Here is a new version of the ViewController that does exactly that:

class ViewController: UIViewController {

    @IBOutlet weak var label: UILabel!

    @IBAction func tappedRed(sender: AnyObject) {
        label.text = "Tapped Red"
        performSegueWithIdentifier("ShowColor", sender: UIColor.redColor())
    }

    @IBAction func tappedGreen(sender: AnyObject) {
        label.text = "Tapped Green"
        performSegueWithIdentifier("ShowColor", sender: UIColor.greenColor())
    }

    @IBAction func tappedBlue(sender: AnyObject) {
        label.text = "Tapped Blue"
        performSegueWithIdentifier("ShowColor", sender: UIColor.blueColor())
    }

    // The sender argument now carries the UIColor directly, so no extra state variable is needed.
    override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
        if segue.identifier == "ShowColor" {
            if let colorViewController = segue.destinationViewController as? ColorViewController {
                colorViewController.color = sender as? UIColor
            }
        }
    }

}

This allows us to get rid of our extra tappedColor variable.

It might seem to (and perhaps it does) abuse the sender parameter though, so use it with care and only where appropriate. Be aware of the consequences: if some other code or some element in a Storyboard triggers the same segue (i.e. with the same identifier), then the sender might just be a UI element instead of the object you expected, which will lead to unexpected results and perhaps even crashes when you force cast the sender to something it's not.

You can find the sample code in the form of an Xcode project on https://github.com/lammertw/SegueColorSample.