
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Xebia Blog

Building IntelliJ plugins from the command line


For a few years already, IntelliJ IDEA has been my IDE of choice. Recently I dove into the world of plugin development for IntelliJ IDEA and was unpleasantly surprised: plugin development relies entirely on IDE features. It looked hard to create a build script that does the actual plugin compilation and packaging outside the IDE. The JetBrains folks simply have not catered for that. Unless you're using TeamCity as your CI tool, you're out of luck.

For me it makes no sense to write code if:

  1. it cannot be compiled and packaged from the command line
  2. the code cannot be compiled and tested in a CI environment
  3. IDE configurations cannot be generated from the build script

Google did not help out a lot, but Tomasz Dziurko put me in the right direction.

In order to build and test a plugin, the following needs to be in place:

  1. First of all you'll need IntelliJ IDEA. This is quite obvious. The Plugin DevKit plugins need to be installed. If you want to create a language plugin you might want to install Grammar-Kit too.
  2. An IDEA SDK needs to be registered. The SDK can point to your IntelliJ installation.

The plugin module files are only slightly different from your average project.

Compiling and testing the plugin

Now for the build script. My build tool of choice is Gradle. My plugin code adheres to the default Gradle project structure.

The first thing to do is to get hold of the IntelliJ IDEA libraries in an automated way. Since the IDEA libraries are not available via Maven repositories, downloading an IntelliJ IDEA Community Edition is probably the best option to get hold of the libraries.

The plan is as follows: download the Linux version of IntelliJ IDEA, and extract it in a predefined location. From there, we can point to the libraries and subsequently compile and test the plugin. The libraries are Java, and as such platform independent. I picked the Linux version since it has a nice, simple file structure.

The following code snippet caters for this:

apply plugin: 'java'

// Pick the Linux version, as it is a tar.gz we can simply extract
def IDEA_SDK_URL = 'http://download.jetbrains.com/idea/ideaIC-14.0.4.tar.gz'
def IDEA_SDK_NAME = 'IntelliJ IDEA Community Edition IC-139.1603.1'

configurations {
    ideaSdk
    bundle // dependencies bundled with the plugin
}

dependencies {
    ideaSdk fileTree(dir: 'lib/sdk/', include: ['*/lib/*.jar'])

    compile configurations.ideaSdk
    compile configurations.bundle
    testCompile 'junit:junit:4.12'
    testCompile 'org.mockito:mockito-core:1.10.19'
}

// IntelliJ IDEA can still run on a Java 6 JRE, so we need to take that into account.
sourceCompatibility = 1.6
targetCompatibility = 1.6

task downloadIdeaSdk(type: Download) {
    sourceUrl = IDEA_SDK_URL
    target = file('lib/idea-sdk.tar.gz')
}

task extractIdeaSdk(type: Copy, dependsOn: [downloadIdeaSdk]) {
    def zipFile = file('lib/idea-sdk.tar.gz')
    def outputDir = file("lib/sdk")

    from tarTree(resources.gzip(zipFile))
    into outputDir
}

compileJava.dependsOn extractIdeaSdk

class Download extends DefaultTask {
    @Input
    String sourceUrl

    @OutputFile
    File target

    @TaskAction
    void download() {
       if (!target.parentFile.exists()) {
           target.parentFile.mkdirs()
       }
       ant.get(src: sourceUrl, dest: target, skipexisting: 'true')
    }
}

If parallel test execution does not work for your plugin, you'd better turn it off as follows:

test {
    // Avoid parallel execution, since the IntelliJ boilerplate is not up to that
    maxParallelForks = 1
}
The plugin deliverable

Obviously, the whole build process should be automated. That includes the packaging of the plugin. A plugin is simply a zip file with all libraries together in a lib folder.

task dist(type: Zip, dependsOn: [jar, test]) {
    from configurations.bundle
    from jar.archivePath
    rename { f -> "lib/${f}" }
    into project.name
    baseName project.name
}

build.dependsOn dist
Handling IntelliJ project files

We also need to generate IntelliJ IDEA project and module files so the plugin can live within the IDE. Telling the IDE it's dealing with a plugin opens some nice features, mainly the ability to run the plugin from within the IDE. Anton Arhipov's blog post put me on the right track.

The Gradle idea plugin helps out in creating those files. This works out of the box for your average project, but for plugins IntelliJ expects some things to be different. The project files should mention that we're dealing with a plugin project, and the module file should point to the plugin.xml file required for each plugin. Also, the SDK libraries should not be included in the module file, so I excluded those from the configuration.

The following code snippet caters for this:

apply plugin: 'idea'

idea {
    project {
        languageLevel = '1.6'
        jdkName = IDEA_SDK_NAME

        ipr {
            withXml {
                it.node.find { node ->
                    node.@name == 'ProjectRootManager'
                }.'@project-jdk-type' = 'IDEA JDK'

                logger.warn "=" * 71
                logger.warn " Configured IDEA JDK '${jdkName}'."
                logger.warn " Make sure you have it configured IntelliJ before opening the project!"
                logger.warn "=" * 71
            }
        }
    }

    module {
        scopes.COMPILE.minus = [ configurations.ideaSdk ]

        iml {
            beforeMerged { module ->
                module.dependencies.clear()
            }
            withXml {
                it.node.@type = 'PLUGIN_MODULE'
                //  <component name="DevKit.ModuleBuildProperties" url="file://$MODULE_DIR$/src/main/resources/META-INF/plugin.xml" />
                def cmp = it.node.appendNode('component')
                cmp.@name = 'DevKit.ModuleBuildProperties'
                cmp.@url = 'file://$MODULE_DIR$/src/main/resources/META-INF/plugin.xml'
            }
        }
    }
}
Put it to work!

Combining the aforementioned code snippets will result in a build script that can be run on any environment. Have a look at my idea-clock plugin for a working example.

The monolithic frontend in the microservices architecture

Mon, 07/27/2015 - 16:39

When you are implementing a microservices architecture you want to keep services small. This should also apply to the frontend. If you don't, you will only reap the benefits of microservices for the backend services. An easy solution is to split your application up into separate frontends. When you have a big monolithic frontend that can’t be split up easily, you have to think about making it smaller. You can decompose the frontend into separate components independently developed by different teams.

Imagine you are working at a company that is switching from a monolithic architecture to a microservices architecture. The application you are working on is a big client-facing web application. You have recently identified a couple of self-contained features and created microservices to provide each functionality. Your former monolith has been carved down to the bare essentials for providing the user interface, which is your public-facing web frontend. This microservice has only one functionality, which is providing the user interface. It can be scaled and deployed separately from the other backend services.

You are happy with the transition: individual services can fit in your head, multiple teams can work on different applications, and you are speaking at conferences about your experiences with the transition. However, you're not quite there yet: the frontend is still a monolith that spans the different backends. This means that on the frontend you still have some of the same problems you had before switching to microservices. The image below shows a simplification of the current architecture.

Single frontend

With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Backend teams can't deliver business value without the frontend being updated, since an API without a user interface doesn't do much. More backend teams mean more new features, and therefore more pressure is put on the frontend team(s) to integrate new features. To compensate for this it is possible to make the frontend team bigger or have multiple teams working on the same project. But because the frontend still has to be deployed in one go, teams cannot work independently: changes have to be integrated in the same project and the whole project needs to be tested, since a change can break other features.
Another option is to have the backend teams integrate their new features with the frontend and submit a pull request. This helps in dividing the work, but to do this effectively a lot of knowledge has to be shared across the teams to keep the code consistent and at the same quality level. This would basically mean that the teams are not working independently. With a monolithic frontend you never get the flexibility to scale across teams as promised by microservices.

Besides not being able to scale, there is also the classical overhead of a separate backend and frontend team. Each time there is a breaking change in the API of one of the services, the frontend has to be updated. Especially when a feature is added to a service, the frontend has to be updated to ensure your customers can even use the feature. If you have a frontend small enough, it can be maintained by a team which is also responsible for one or more of the services coupled to the frontend. This means that there is no overhead in cross-team communication. But because the frontend and the backend cannot be worked on independently, you are not really doing microservices. For an application which is small enough to be maintained by a single team, it is probably a good idea not to do microservices.

If you have multiple teams working on your platform, but each team owns a smaller frontend application, there is no problem. Each frontend acts as the interface to one or more services, and each of these services has its own persistence layer. This is known as vertical decomposition. See the image below.

frontend-per-service

When splitting up your application you have to make sure you are making the right split, just as for the backend services. First you have to recognize the bounded contexts into which your domain can be split. A bounded context is a partition of the domain model with a clear boundary. Within a bounded context there is high coupling, and between different bounded contexts there is low coupling. These bounded contexts will be mapped to microservices within your application. This way the communication between services is also limited; in other words, you limit your API surface. This in turn will limit the need to make changes in the API and ensure truly separately operating teams.

Often you are unable to separate your web application into multiple entirely separate applications. A consistent look and feel has to be maintained and the application should behave as a single application. However, the application and the development team are big enough to justify a microservices architecture. Examples of such big client-facing applications can be found in online retail, news, social networks or other online platforms.

Although a total split of your application might not be possible, it might be possible to have multiple teams working on separate parts of the frontend as if they were entirely separate applications. Instead of splitting your web app entirely, you are splitting it up into components which can be maintained separately. This way you are doing a form of vertical decomposition while you still have a single consistent web application. To achieve this you have a couple of options.

Share code

You can share code to make sure that the look and feel of the different frontends is consistent. However then you risk coupling services via the common code. This could even result in not being able to deploy and release separately. It will also require some coordination regarding the shared code.

Therefore, when you are going to share code it is generally a good idea to think about the API that it's going to provide. Calling your shared library "common", for example, is generally a bad idea. The name suggests developers should put any code which can be shared by some other service in the library. Common is not a functional term, but a technical term. This means that the library doesn't focus on providing a specific functionality. This will result in an API without a specific goal, which will be subject to change often. This is especially bad for microservices, where multiple teams have to migrate to the new version when the API has been broken.

Although sharing code between microservices has disadvantages, generally all microservices will share code by using open source libraries. Because this code is always used by a lot of projects, special care is given to not breaking compatibility. When you're going to share code it is a good idea to hold your shared code to the same standards. When your library is not specific to your business, you might as well release it publicly to encourage yourself to think twice about breaking the API or putting business-specific logic in the library.

Composite frontend

It is possible to compose your frontend out of different components. Each of these components could be maintained by a separate team and deployed independently of the others. Again it is important to split along bounded contexts to limit the API surface between the components. The image below shows an example of such a composite frontend.

composite-design

Admittedly this is an idea we already saw in portlets during the SOA age. However, in a microservices architecture you want the frontend components to be able to deploy fully independently and you want to make sure you do a clean separation which ensures there is no or only limited two way communication needed between the components.

It is possible to integrate during development, deployment or at runtime. At each of these integration stages there are different tradeoffs between flexibility and consistency. If you want to have separate deployment pipelines for your components, you want to have a more flexible approach like runtime integration. If it is more likely different versions of components might break functionality, you need more consistency. You would get this at development time integration. Integration at deployment time could give you the same flexibility as runtime integration, if you are able to integrate different versions of components on different environments of your build pipeline. However this would mean creating a different deployment artifact for each environment.

Software architecture should never be a goal, but a means to an end

Combining multiple components via shared libraries into a single frontend is an example of development-time integration. However, it doesn't give you much flexibility with regard to separate deployment. It is still a classical integration technique. But since software architecture should never be a goal, but a means to an end, it can be the best solution for the problem you are trying to solve.

More flexibility can be found in runtime integration. An example of this is using AJAX to load the HTML and other dependencies of a component. Then the main application only needs to know where to retrieve the component from. This is a good example of a small API surface. Of course, doing a request after page load means that users might see components loading. It also means that clients that don't execute JavaScript will not see the content at all: bots and spiders that don't execute JavaScript, real users who are blocking JavaScript, or users of a screen reader that doesn't execute JavaScript.
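
As a rough sketch of what this runtime integration can look like (the component URL and element id below are made up for illustration), the main application only needs to know where a component lives and where to place its HTML:

// Minimal sketch of runtime integration; the URL and element id are made up.
function loadComponent(containerId, componentUrl) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', componentUrl);
  xhr.onload = function () {
    if (xhr.status === 200) {
      // The main application only knows where to fetch the fragment,
      // not how the component is built or deployed: a small API surface.
      document.getElementById(containerId).innerHTML = xhr.responseText;
    }
  };
  xhr.send();
}

loadComponent('recommendations', '/components/recommendations');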

When runtime integration via JavaScript is not an option, it is also possible to integrate components using a middleware layer. This layer fetches the HTML of the different components and composes them into a full page before returning the page to the client. This means that clients will always retrieve all of the HTML at once. An example of such middleware is the Edge Side Includes feature of Varnish. To get more flexibility it is also possible to implement such a composition server yourself. An open source example of such a server is Compoxure.
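
A minimal sketch of such a composing layer (not Compoxure or Varnish itself, just an illustration with made-up fragment URLs) could look like this in Node.js:

// Rough sketch of server-side composition; the fragment URLs are made up.
var http = require('http');

function fetchFragment(url) {
  return new Promise(function (resolve, reject) {
    http.get(url, function (res) {
      var body = '';
      res.on('data', function (chunk) { body += chunk; });
      res.on('end', function () { resolve(body); });
    }).on('error', reject);
  });
}

http.createServer(function (req, res) {
  // Fetch the HTML fragments of the components in parallel and
  // compose them into a full page before answering the client.
  Promise.all([
    fetchFragment('http://header-component:8081/fragment'),
    fetchFragment('http://catalog-component:8082/fragment')
  ]).then(function (fragments) {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<html><body>' + fragments.join('\n') + '</body></html>');
  }).catch(function () {
    res.writeHead(502);
    res.end('could not compose the page');
  });
}).listen(8080);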

Once you have your composite frontend up and running you can start to think about the next step: optimization. Having separate components from different sources means that many resources have to be retrieved by the client. Since retrieving multiple resources takes longer than retrieving a single resource, you want to combine resources. Again, this can be done at development time or at runtime, depending on the integration techniques you chose when decomposing your frontend.

Conclusion

When transitioning an application to a microservices architecture you will run into issues if you keep the frontend a monolith. The goal is to achieve good vertical decomposition. What goes for the backend services goes for the frontend as well: split into bounded contexts to limit the API surface between components, and use integration techniques that avoid coupling. When you are working on a single big frontend it might be difficult to make this decomposition, but when you want to deliver faster by using multiple teams working on a microservices architecture, you cannot exclude the frontend from decomposition.

Resources

Sam Newman - From Macro to Micro: How Big Should Your Services Be?
Dan North - Microservices: software that fits in your head

Super fast unit test execution with WallabyJS

Mon, 07/27/2015 - 11:24

Our current AngularJS project has been under development for about 2.5 years, so the number of unit tests has increased enormously. We tend to have a coverage percentage near 100%, which led to 4000+ unit tests. These include service specs and view specs. You may know that AngularJS, when abused a bit, is not suited for super large applications, but since we tamed the beast and have an application with more than 16,000 lines of high-performing AngularJS code, we want to stay in control of the total development process without any performance losses.

We are using Karma Runner with Jasmine, which is fine for a small number of specs and for debugging, but running the full test suite takes up to 3 minutes on a 2.8 GHz MacBook Pro.

We are testing our code continuously, so we came up with a solution to split all the unit tests into several shards. This parallel execution of the unit tests decreased the execution time a lot. We will write about the details of this Karma parallelization on this blog later. Sharding helps us a lot when we want to run the full unit test suite, i.e. when using it in the pre-push hook, but during development you want quick feedback cycles about coverage and failing specs (red-green testing).

With such a long unit test cycle, even when running in parallel, many of our developers are fdescribe-ing the specs on which they are working, so that the feedback is instant. However, this is quite labor intensive and sometimes an fdescribe is pushed accidentally.

And then... we discovered WallabyJS. It is just an ordinary test runner like Karma. Even the configuration file is almost a copy of our karma.conf.js.
The difference is in the details. Out of the box it runs the unit test suite in 50 seconds, thanks to the extensive use of Web Workers. Then the fun starts.
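
For reference, a wallaby.js configuration for such a project looks roughly like the sketch below; the file patterns are made up and should mirror whatever your own karma.conf.js lists.

// Rough sketch of a wallaby.js config; the file patterns below are made up
// and should mirror your own karma.conf.js.
module.exports = function () {
  return {
    files: [
      'bower_components/angular/angular.js',
      'bower_components/angular-mocks/angular-mocks.js',
      'src/**/*.js'
    ],
    tests: [
      'test/**/*.spec.js'
    ],
    testFramework: 'jasmine'
  };
};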

Screenshot of Wallaby In action (IntelliJ). Shamelessly grabbed from wallaby.com

I use Wallaby as IntelliJ IDEA plugin, which adds colored annotations to the left margin of my code. Green squares indicate covered lines/statements, orange give me partly covered code and grey means "please write a test for this functionality or I introduce hard to find bugs". Colorblind people see just kale green squares on every line, since the default colors are not chosen very well, but these colors are adjustable via the Preferences menu.

Clicking on a square pops up a box with a list of the tests that induce the coverage. When a test fails, it also tells me why.


A dialog box showing contextual information (wallaby.com)

Since the implementation and the tests are now instrumented, finding bugs and increasing your coverage goes a lot faster. Besides that, you don't need to hassle with fdescribes and fits to run individual tests during development. Thanks to the instrumentation, Wallaby runs your tests continuously and re-runs only the relevant tests for the parts that you are working on. In real time.

5 Reasons why you should test your code

Mon, 07/27/2015 - 09:37

It is just like in mathematics class: when I had to prove Thales' theorem I wrote "Can't you see that B has a right angle?! Q.E.D.", but the teacher still gave me an F grade.

You want to make things work, right? So you start programming until your feature is implemented. When it is implemented, it works, so you do not need any tests. You want to proceed and make more cool features.

Suddenly feature 1 breaks, because you did something weird in some service that is reused all over your application. Ok, let’s fix it, keep refreshing the page until everything is stable again. This is the point in time where you regret that you (or even better, your teammate) did not write tests.

In this article I give you 5 reasons why you should write them.

1. Regression testing

The scenario described in the introduction is a typical example of a regression bug. Something works, but it breaks when you are looking the other way.
If you had tests with 100% code coverage, a red error would have appeared in the console or, even better, a siren would have gone off in the room where you are working.

Although there are some misconceptions about coverage, it at least tells others that there is a fully functional test suite. And it may give you a high grade when an audit company like SIG inspects your software.


100% Coverage feels so good

100% code coverage does not mean that you have tested everything.
It means that the test suite is implemented in such a way that it calls every line of the tested code, but it says nothing about the assertions made during the test run. If you want to measure whether your specs do a fair amount of assertions, you have to do mutation testing.

This works as follows.

An automatic task runs the test suite once. Then some parts of your code are modified (mainly conditions flipped, for loops made shorter or longer, etc.) and the test suite is run a second time. If tests fail after the modifications have been made, an assertion covers the mutated case, which is good.
That said, 100% coverage does feel really good if you are an OCD person.
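
A contrived example (made up for illustration) shows why coverage alone is not enough: a flipped condition only makes a spec fail if that spec actually asserts on the outcome.

// Contrived example: a mutation test might flip the condition in isAdult.
function isAdult(age) {
  return age >= 18; // a mutation could turn this into `age < 18`
}

describe('isAdult', function () {
  it('gives 100% coverage but survives the mutation (no assertion)', function () {
    isAdult(21); // executes the line, asserts nothing
  });

  it('catches the mutation because it asserts on the outcome', function () {
    expect(isAdult(21)).toBe(true);
    expect(isAdult(12)).toBe(false);
  });
});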

The better your test coverage and assertion density are, the higher the probability of catching regression bugs. Especially when an application grows, you may catch a lot of regression bugs during development, which is good.

Suppose that a form shows a funny easter egg when the filled-in birthdate is 06-06-2006 and the line of code responsible for this behaviour is hidden in a complex method. A fellow developer may make changes to this line. Not because he is not funny, but he just does not know. A failing test notifies him immediately that he is removing your easter egg, while without a test you would only find out about the removal 2 years later.

Still, every application contains bugs which you are unaware of. When an end user tells you about a broken page, you may find out that the link he clicked on was generated with some missing information, e.g. users//edit instead of users/24/edit.

When you find a bug, first write a (failing) test that reproduces the bug, then fix the bug. This way the bug will never come back unnoticed. You win.
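
For the broken link example above, a first failing spec could look something like this; the editUrl helper is hypothetical and only serves to illustrate the pattern.

// Hypothetical helper for the users//edit bug: first a failing spec, then the fix.
function editUrl(user) {
  if (!user || user.id == null) {
    throw new Error('cannot build an edit link without a user id');
  }
  return 'users/' + user.id + '/edit';
}

describe('editUrl', function () {
  it('builds the edit link for a user', function () {
    expect(editUrl({ id: 24 })).toBe('users/24/edit');
  });

  it('refuses to build a link when the user id is missing', function () {
    // This spec failed before the fix: the old code happily returned 'users//edit'.
    expect(function () { editUrl({}); }).toThrow();
  });
});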

2. Improve the implementation via new insights

"Premature optimization is the root of all evil" is something you hear a lot. This does not mean that you have to implement your solution pragmatically without code reuse.

Good software craftsmanship is not only about solving a problem effectively, but also about maintainability, durability, performance and architecture. Tests can help you with this. They force you to slow down and think.

If you start writing your tests and you have trouble doing so, this may be an indication that your implementation can be improved. Furthermore, your tests let you think about input and output, corner cases and dependencies. So do you think that you understand all aspects of the super method you wrote that can handle everything? Write tests for this method and better code is guaranteed.

Test Driven Development even helps you optimizing your code before you even write it, but that is another discussion.

3. It saves time, really

Number one excuse not to write tests is that you do not have time for it or your client does not want to pay for it. Writing tests can indeed cost you some time, even if you are using boilerplate code elimination frameworks like Mox.

However, if I ask you whether you would make other design choices if you had the chance (and time) to start over, you would probably say yes. A total codebase refactoring is a 'no go' because you cannot oversee which parts of your application will fail. If you still accept the refactoring challenge, it will at least give you a lot of headaches and cost you a lot of time, which you could have used for writing tests. But you had no time for writing tests, right? So your crappy implementation stays.

Dilbert bugfix

A bug can always be introduced, even with well refactored code. How many times did you say to yourself after a day of hard work that you spent 90% of your time finding and fixing a nasty bug? You want to write cool applications, not fix bugs.
When you have tested your code very well, 90% of the bugs you introduce are caught by your tests. Phew, that saved the day. You can focus on writing cool stuff. And tests.

In the beginning, writing tests can take up more than half of your time, but when you get the hang of it, writing tests becomes second nature. It is important that you are writing code for the long term. As an application grows, it really pays off to have tests. It saves you time and developing becomes more fun as you are not being blocked by hard-to-find bugs.

4. Self-updating documentation

Writing clean, self-documenting code is one of the main things we adhere to. Not only for yourself, especially when you have not seen the code for a while, but also for your fellow developers. We only write comments if a piece of code is particularly hard to understand. Whatever style you prefer, it has to be clear in some way what the code does.

  // Beware! Dragons beyond this point!

Some people like to read the comments, some read the implementation itself, but some read the tests. What I like about tests, for example when you are using a framework like Jasmine, is that they give a structured overview of all of a method's features. A separate documentation file can be as structured as you want, but the main issue with documentation is that it is never up to date. Developers do not like to write documentation and forget to update it when a method signature changes, and eventually they stop writing docs.

Developers also do not like to write tests, but they at least serve more purposes than docs. If you are using the test suite as documentation, your documentation is always up to date with no extra effort!
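
As a small, hypothetical illustration, a nested Jasmine suite reads like a structured feature list of the method under test:

// Hypothetical specs doubling as documentation for a parseDate function.
describe('parseDate', function () {
  describe('with an ISO formatted string', function () {
    it('returns the corresponding Date', function () {
      expect(parseDate('2015-07-27').getFullYear()).toBe(2015);
    });
  });

  describe('with an invalid string', function () {
    it('returns null instead of throwing', function () {
      expect(parseDate('not a date')).toBeNull();
    });
  });
});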

5. It is fun

Nowadays there are no separate testers and developers: the developers are the testers. People who write good tests are also the best programmers. Actually, your test is also a program. So if you like programming, you should like writing tests.
The reason why writing tests may feel non-productive is because it gives you the idea that you are not producing something new.


Is the build red? Fix it immediately!

However, with the modern software development approach, your tests should be an integrated part of your application. The tests can be executed automatically using build tools like Grunt and Gulp. They may run in a continuous integration pipeline via Jenkins, for example. If you are really cool, a new deploy to production is automatically done when the tests pass and everything else is ok. With tests you have more confidence that your code is production ready.
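
For example, a minimal Gulp task that runs the Karma suite once could look like the sketch below (assuming Karma 0.13+ with its public Server API and a karma.conf.js in the project root).

// Sketch of a Gulp task that runs Karma once, e.g. in a CI pipeline.
var gulp = require('gulp');
var Server = require('karma').Server;

gulp.task('test', function (done) {
  new Server({
    configFile: __dirname + '/karma.conf.js',
    singleRun: true
  }, done).start();
});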

A lot of measurements can be generated as well, like coverage and mutation testing, giving the OCD-oriented developers a big smile when everything is green and the score is 100%.

If the test suite fails, it is first priority to fix it, to keep the codebase in good shape. It takes some discipline, but when you get used to it, you have more fun developing new features and make cool stuff.

Android: Custom ViewMatchers in Espresso

Fri, 07/24/2015 - 16:03

Somehow it seems that testing is still treated like an afterthought in mobile development. The introduction of the Espresso test framework in the Android Testing Support Library improved the situation a little bit, but the documentation is limited and it can be hard to debug problems. And you will run into problems, because testing is hard to learn when there are so few examples to learn from.

Anyway, I recently created my first custom ViewMatcher for Espresso and I figured I would like to share it here. I was building a simple form with some EditText views as input fields, and these fields should display an error message when the user entered an invalid input.


Android TextView with error message

In order to test this, my Espresso test enters an invalid value in one of the fields, presses "submit" and checks that the field is actually displaying an error message.

@Test
public void check() {
  Espresso
      .onView(ViewMatchers.withId((R.id.email)))
      .perform(ViewActions.typeText("foo"));
  Espresso
      .onView(ViewMatchers.withId(R.id.submit))
      .perform(ViewActions.click());
  Espresso
      .onView(ViewMatchers.withId((R.id.email)))
      .check(ViewAssertions.matches(
          ErrorTextMatchers.withErrorText(Matchers.containsString("email address is invalid"))));
}

The real magic happens inside the ErrorTextMatchers helper class:

public final class ErrorTextMatchers {

  /**
   * Returns a matcher that matches {@link TextView}s based on the value of their error property.
   *
   * @param stringMatcher {@link Matcher} of {@link String} with text to match
   */
  @NonNull
  public static Matcher<View> withErrorText(final Matcher<String> stringMatcher) {

    return new BoundedMatcher<View, TextView>(TextView.class) {

      @Override
      public void describeTo(final Description description) {
        description.appendText("with error text: ");
        stringMatcher.describeTo(description);
      }

      @Override
      public boolean matchesSafely(final TextView textView) {
        return stringMatcher.matches(textView.getError().toString());
      }
    };
  }
} 

The main details of the implementation are as follows. We make sure that the matcher will only match instances of the TextView class by returning a BoundedMatcher from withErrorText(). This makes it very easy to implement the matching logic itself in BoundedMatcher.matchesSafely(): simply take the result of getError() from the TextView and feed it to the given string Matcher. Finally, we have a simple implementation of the describeTo() method, which is only used to generate debug output to the console.

In conclusion, it turns out to be pretty straightforward to create your own custom ViewMatcher. Who knew? Perhaps there is still hope for testing mobile apps...

You can find an example project with the ErrorTextMatchers on GitHub: github.com/smuldr/espresso-errortext-matcher.

Parallax image scrolling using Storyboards

Tue, 07/21/2015 - 07:37

Parallax image scrolling is a popular concept that is being adopted by many apps these days. It's small attention to detail like this that can really make an app great. Parallax scrolling gives you the illusion of depth by letting objects in the background scroll slower than objects in the foreground. It has been used in the past by many 2D games to make them feel more 3D. True parallax scrolling can become quite complex, but it's not very hard to create a simple parallax image scrolling effect on iOS yourself. This post will show you how to add it to a table view using Storyboards.

NOTE: You can find all source code used by this post on https://github.com/lammertw/ParallaxImageScrolling.

The idea here is to create a UITableView with an image header that has a parallax scrolling effect. When we scroll down the table view (i.e. swipe up), the image should scroll with half the speed of the table. And when we scroll up (i.e. swipe down), the image should become bigger so that it feels like it's stretching while we scroll. The latter is not really a parallax scrolling effect but commonly used in combination with it. The following animation shows these effects:


But what if we want a "Pull down to Refresh" effect and need to add a UIRefreshControl? Well, then we just drop the stretch effect when scrolling up:  


And as you might expect, the variation with Pull to Refresh is actually a lot easier to accomplish than the one without.

Parallax Scrolling Libraries

While you can find several Objective-C or Swift libraries that provide parallax scrolling similar to what is shown here, you'll find that it's not that hard to create these effects yourself. Doing it yourself has the benefit of customizing it exactly the way you want it, and of course it will add to your experience. Plus, it might be less code than integrating with such a library. However, if you need exactly what such a library provides, then using it might work better for you.

The basics

NOTE: You can find all the code of this section at the no-parallax-scrolling branch.

Let's start with a simple example that doesn't have any parallax scrolling effects yet.


Here we have a standard UITableViewController with a cell containing our image at the top and another cell below it with some text. Here is the code:

class ImageTableViewController: UITableViewController {

  override func numberOfSectionsInTableView(tableView: UITableView) -> Int {
    return 2
  }

  override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
    return 1
  }

  override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    var cellIdentifier = ""
    switch indexPath.section {
    case 0:
      cellIdentifier = "ImageCell"
    case 1:
      cellIdentifier = "TextCell"
    default: ()
    }

    let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath) as! UITableViewCell

    return cell
  }

  override func tableView(tableView: UITableView, heightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    switch indexPath.section {
    case 0:
      return UITableViewAutomaticDimension
    default:
      return 50
    }
  }

  override func tableView(tableView: UITableView, estimatedHeightForRowAtIndexPath indexPath: NSIndexPath) -> CGFloat {
    switch indexPath.section {
    case 0:
      return 200
    default:
      return 50
    }
  }

}

The only thing of note here is that we're using UITableViewAutomaticDimension for automatic cell heights determined by constraints in the cell: we have a UIImageView with constraints to use the full width and height of the cell and a fixed aspect ratio of 2:1. Because of this aspect ratio, the height of the image (and therefore of the cell) is always half of the width. In landscape it looks like this:


We'll see later why this matters.

Parallax scrolling with Pull to Refresh

NOTE: You can find all the code of this section at the pull-to-refresh branch.

As mentioned before, creating the parallax scrolling effect is easiest when it doesn't need to stretch. Commonly you'll only want that if you have a Pull to Refresh. Adding the UIRefreshControl is done in the standard way so I won't go into that.

Container view
The rest is also quite simple. With the basics from above as our starting point, what we need to do first is add a UIView around our UIImageView that acts as a container. Since our image will change its position while we scroll, we cannot use it anymore to calculate the height of the cell. The container view will have exactly the constraints that our image view had: use the full width and height of the cell and have an aspect ratio of 2:1. Also make sure to enable Clip Subviews on the container view so the image view is clipped by it.

Align Center Y constraint
The image view, which is now inside the container view, will keep its aspect ratio constraint and use the full width of the container view. For the y position we'll add an Align Center Y constraint to vertically center the image within the container. All that looks something like this:

Parallax scrolling using constraint
When we run this code now, it will still behave exactly as before. What we need to do is make the image view scroll with half the speed of the table view when scrolling down. We can do that by changing the constant of the Align Center Y constraint that we just created. First we need to connect it to an outlet of a custom UITableViewCell subclass:

class ImageCell: UITableViewCell {
  @IBOutlet weak var imageCenterYConstraint: NSLayoutConstraint!
}

When the table view scrolls down, we need to lower the Y position of the image by half the amount that we scrolled. To do that we can use scrollViewDidScroll and the content offset of the table view. Since our UITableViewController already adheres to the UIScrollViewDelegate, overriding that method is enough:

override func scrollViewDidScroll(scrollView: UIScrollView) {
  imageCenterYConstraint?.constant = min(0, -scrollView.contentOffset.y / 2.0) // only when scrolling down so we never let it be higher than 0
}

We're left with one small problem. The imageCenterYConstraint is connected to the ImageCell that we created, while the scrollViewDidScroll method is in the view controller. So what's left is to create an imageCenterYConstraint property in the view controller and assign it when the cell is created:

weak var imageCenterYConstraint: NSLayoutConstraint?

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  var cellIdentifier = ""
  switch indexPath.section {
  case 0:
    cellIdentifier = "ImageCell"
  case 1:
    cellIdentifier = "TextCell"
  default: ()
  }

  // the new part of code:
  let cell = tableView.dequeueReusableCellWithIdentifier(cellIdentifier, forIndexPath: indexPath) as! UITableViewCell
  if let imageCell = cell as? ImageCell {
    imageCenterYConstraint = imageCell.imageCenterYConstraint
  }

  return cell
}

That's all we need to do for our first variation of the parallax image scrolling. Let's go on with something a little more complicated.

Parallax scrolling without Pull to Refresh

NOTE: You can find all the code of this section at the no-pull-to-refresh branch.

When starting from the basics, we need to add a container view again like we did in the Container view paragraph from the previous section. The image view needs some different constraints though. Add the following constraints to the image view:

  • As before, keep the 2:1 aspect ratio
  • Add a Leading Space and Trailing Space of 0 to the Superview (our container view) and set the priority to 900. We will break these constraints when stretching the image because the image will become wider than the container view. However we still need them to determine the preferred width.
  • Align Center X to the Superview. We need this one to keep the image in the center when we break the Leading and Trailing Space constraints.
  • Add a Bottom Space and Top Space of 0 to the Superview. Create two outlets to the cell class ImageCell like we did in the previous section for the center Y constraint. We'll call these bottomSpaceConstraint and topSpaceConstraint. Also assign these from the cell to the view controller like we did before so we can access them in our scrollViewDidScroll method.

We now have all the constraints we need to do the effects for scrolling up and down.

Scrolling down
When we scroll down (swipe up) we want the same effect as in our previous section. Instead of having an 'Align Center Y' constraint that we can change, we now need to do the following:

  • Set the bottom space to minus half of the content offset so it will fall below the container view.
  • Set the top space to plus half of the content offset so it will be below the top of the container view.

With these two calculations we effectively reduce the scrolling speed of the image view to half the scrolling speed of the table view.

bottomSpaceConstraint?.constant = -scrollView.contentOffset.y / 2
topSpaceConstraint?.constant = scrollView.contentOffset.y / 2

Scrolling up
When the table view scrolls up (swipe down), the container view moves down. What we want here is for the image view to stick to the top of the screen instead of moving down as well. All we need for that is to set the constant of the topSpaceConstraint to the content offset. That means the height of the image will increase. Because of our 2:1 aspect ratio, the width of the image will grow as well. This is why we had to lower the priority of the Leading and Trailing constraints: the image no longer fits inside the container and breaks those constraints.

topSpaceConstraint?.constant = scrollView.contentOffset.y

We're left with one problem now. When the image sticks to the top while the container view goes down, it means that the image falls outside the container view. And since we had to enable Clip Subviews for scrolling down, we now get something like this:

We can't see the top of the image since it's outside the container view. So what we need is to clip when scrolling down and not clip when scrolling up. We can only do that in code so we need to connect the container view to an outlet, just as we've done with the constraints. Then the final code in scrollViewDidScroll becomes:

override func scrollViewDidScroll(scrollView: UIScrollView) {
  if scrollView.contentOffset.y >= 0 { 
    // scrolling down 
    containerView.clipsToBounds = true 
    bottomSpaceConstraint?.constant = -scrollView.contentOffset.y / 2 
    topSpaceConstraint?.constant = scrollView.contentOffset.y / 2 
  } else { 
    // scrolling up 
    topSpaceConstraint?.constant = scrollView.contentOffset.y 
    containerView.clipsToBounds = false 
  } 
}
Conclusion

So there you have it. Two variations of parallax scrolling without too much effort. As mentioned before, use a dedicated library if you have to, but don't be afraid that it's too complicated to do it yourself.

Additional notes

If you've seen the source code on GitHub you might have noticed a few additional things. I didn't want to mention them in the main body of this post to prevent any distractions, but it's important to mention them anyway.

  • The aspect ratio constraints need to have a priority lower than 1000. Set them to 999 or 950 or something (make sure they're higher than the Leading and Trailing Space constraints that we set to 900 in the last section). This is because of an issue related to cells with dynamic height (using UITableViewAutomaticDimension) and rotation. When the user rotates the device, the cell will get its new width while still having the previous height. The new height calculation is not yet done at the beginning of the rotation animation. At that moment, the 2:1 aspect ratio cannot be satisfied, which is why we cannot set it to 1000 (required). Right after the new height is calculated, the aspect ratio constraint will kick back in. It seems that the state in which the aspect ratio constraint cannot be satisfied is not even visible, so don't worry about your cell looking strange. Leaving it at 1000 only seems to generate an error message about the constraint, after which it continues as expected.
  • Instead of assigning the outlets from the ImageCell to new variables in the view controller, you may also create a scrollViewDidScroll method in the cell, which is then called from the scrollViewDidScroll of your view controller. You can get the cell using cellForRowAtIndexPath. See the code on GitHub to see this done.

Continuous Delivery of Docker Images

Mon, 07/13/2015 - 20:05

Our customer wanted to drastically cut down time to market for the new version of their application. Large quarterly releases should be replaced by small changes that can be rolled out to production multiple times a day. Below we will explain how to use Docker and Ansible to support this strategy, or, in our customer's words, how to 'develop software at the speed of thought'.

To facilitate development at the speed of thought we needed the following:

  1. A platform to deploy Docker images to
  2. Logging, monitoring and alerting
  3. Application versioning
  4. Zero downtime deployment
  5. Security

We’ll discuss each property below.

Platform
Our Docker images run on an Ubuntu host because we needed a supported Linux version that is well known. In our case we install the OS using an image and run all other software in a container. Each Docker container hosts exactly one process so it is easy to see what a container is supposed to do. Examples of containers include:

  • Java VMs to run our Scala services
  • HA Proxy
  • Syslog-ng
  • A utility to rotate log files
  • And even an Oracle database (not on acceptance and production because we expected support issues with that setup, but for development it works fine)

Most of the software running in containers is started with a bash script, but recently we started experimenting with Go so a container may need no more than a single executable.

Logging, monitoring and alerting
To save time we decided to offload the development effort of monitoring and alerting to hosted services where possible. This resulted in contracts with Loggly to store application log files, Librato to collect system metrics and OpsGenie to alert Ops based on rules defined in Loggly. Log files are shipped to Loggly using their Syslog-NG plugin. Our application was already relying on statsd so to avoid having to rewrite code, we decided to create a statsd emulator to push metrics to Librato. This may change in the future if we find the time, but for now it works fine. We’re using the Docker stats API to collect information at the container level.

Application versioning
In the Java world the deliverable would be a jar file published to a repository like Artifactory or Nexus. This is still possible when working with Docker, but it makes more sense to use Docker images as deliverables. The images contain everything needed to run the service, including the jar file. Like jar files, Docker images are published, in this case to the Docker registry. We started with Docker Hub online, but we wanted faster delivery and more control over who can access the images, so we introduced our private Docker registry on premise. This works great and we are pushing around 30 to 50 images a day.
The version tag we use for a container is the date and time it was built. When the build starts we tag the sources in Git with a name based on the date and time, e.g. 20150619_1504. Components that pass their test are assembled in a release based on a text file, a composition, that lists all components that should be part of a release. The composition is tagged with a c_ prefix and a date/time stamp and is deployed to the integration test environment. Then a new test is run to find out whether the assembly still works. If so, the composition is labeled with a new rc tag, rc_20150619_1504 in our example. Releases that pass the integration test are deployed to acceptance and eventually production, but not automatically. We decided to make deployment a management decision, executed by a Jenkins job.
This strategy allows us to recreate a version of the software from source, by checking out tags that make up a release and building again, or from the Docker repository by deploying all versions of components as they are listed in the composition file.
Third-party components are tagged using the version number of the supplier.

Zero downtime deployment
To achieve high availability, we chose Ansible to deploy a Docker container based on the composition file mentioned above. Ansible connects to a host and then uses the Docker command to do the following:

  1. Check if the running container version differs from the one we want to deploy
  2. If the version is different, stop the old container and start the new one
  3. If the version is the same, don’t do anything

This saves a lot of time because Ansible will only change containers that need to be changed and leave all others alone.
Using Ansible we can also implement Zero Downtime Deployment:

  1. First shut down the health container on one node
  2. This causes the load balancer to remove the node from the list of active nodes
  3. Update the first node
  4. Restart the health container
  5. Run the update script in parallel on all other nodes.

Security
The problem with the Docker API is that you are either in or out with no levels in between. This means, e.g. that if you add the Docker socket to a container to look at Docker stats you also allow starting and stopping images. Or if you allow access to the Docker executable you also grant access to configuration information like passwords passed to the container at deployment time. To fix this problem we created a Docker wrapper. This wrapper forbids starting privileged containers and hides some information returned by Docker inspect.
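
As a rough illustration only (not the actual wrapper described above), such a wrapper could be a small Node.js script that sits in front of the docker CLI, refuses privileged containers and strips sensitive fields from docker inspect output:

// Illustrative sketch of a docker CLI wrapper (not the wrapper described above).
var spawnSync = require('child_process').spawnSync;

var args = process.argv.slice(2);

// Refuse to start privileged containers.
if (args[0] === 'run' && args.indexOf('--privileged') !== -1) {
  console.error('privileged containers are not allowed');
  process.exit(1);
}

var result = spawnSync('docker', args, { encoding: 'utf8' });

if (args[0] === 'inspect' && result.status === 0) {
  // Hide configuration details such as environment variables (e.g. passwords).
  var info = JSON.parse(result.stdout);
  info.forEach(function (container) {
    if (container.Config) {
      delete container.Config.Env;
    }
  });
  process.stdout.write(JSON.stringify(info, null, 2) + '\n');
} else {
  process.stdout.write(result.stdout || '');
  process.stderr.write(result.stderr || '');
}
process.exit(result.status);
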
One simple security rule is that software that is not installed or is not running, can’t be exploited. Applied to Docker images this means we removed everything we don’t need and made the image as small as possible. Teams extend the base Linux image only by adding the jar file for their application. Recently we started experimenting with Go to run utilities because a Go executable needs no extra software to run. We’re also testing smaller container images.
Finally, remember not to run as root and carefully consider what file systems to share between container and host.

Summary
In summary, we found a way to package software in containers, both standard utilities and Scala components, and to create a tagged and versioned composition that is tested and moves from one environment to the next as a unit. Using Ansible we orchestrated the deployment of new releases while always keeping at least one server running.
In the future we plan to work on reducing image size by stripping the base OS and deploying more utilities as Go containers. We will also continue work on our security wrapper and plan to investigate Consul to replace our home made service registry.

This blog was based on a talk by Armin Čoralić at XebiCon 2015. Watch Armin's presentation here.

How to create the smallest possible docker container of any image

Tue, 06/30/2015 - 10:46

Once you start to do some serious work with Docker, you soon find that downloading images from the registry is a real bottleneck in starting applications. In this blog post we show you how you can reduce the size of any Docker image to just a few percent of the original. So, is your image too fat? Try stripping your Docker image! The strip-docker-image utility demonstrated in this blog makes your containers faster and safer at the same time!


We are working quite intensively on our Highly Available Docker Container Platform using CoreOS and Consul, which consists of a number of containers (NGiNX, HAProxy, the Registrator and Consul). These containers run on each of the nodes in our CoreOS cluster, and when the cluster boots, more than 600 MB is downloaded by the 3 nodes in the cluster. This is quite time consuming.

cargonauts/consul-http-router      latest              7b9a6e858751        7 days ago          153 MB
cargonauts/progrium-consul         latest              32253bc8752d        7 weeks ago         60.75 MB
progrium/registrator               latest              6084f839101b        4 months ago        13.75 MB

The size of the images is not only detrimental to the boot time of our platform, it also increases the attack surface of the container. With 153 MB of utilities in the NGiNX-based consul-http-router, there is a lot of stuff in the container that you can use once you get inside. As we were thinking of running this router in a DMZ, we wanted to minimise the amount of tools lying around for a potential hacker.

From our colleague Adriaan de Jonge we already learned how to create the smallest possible Docker container  for a Go program. Could we repeat this by just extracting the NGiNX executable from the official distribution and copying it onto a scratch image?  And it turns out we can!

finding the necessary files

Using the utility dpkg we can list all the files that are installed by NGiNX

docker run nginx dpkg -L nginx
...
/.
/usr
/usr/sbin
/usr/sbin/nginx
/usr/share
/usr/share/doc
/usr/share/doc/nginx
...
/etc/init.d/nginx
locating dependent shared libraries

So we have the list of files in the package, but we do not have the shared libraries that are referenced by the executable. Fortunately, these can be retrieved using the ldd utility.

docker run nginx ldd /usr/sbin/nginx
...
	linux-vdso.so.1 (0x00007fff561d6000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fd8f17cf000)
	libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fd8f1598000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fd8f1329000)
	libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007fd8f10c9000)
	libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fd8f0cce000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fd8f0ab2000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd8f0709000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fd8f19f0000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd8f0505000)
Following and including symbolic links

Now we have the executable and the referenced shared libraries, it turns out that ldd normally names the symbolic link and not the actual file name of the shared library.

docker run nginx ls -l /lib/x86_64-linux-gnu/libcrypt.so.1
...
lrwxrwxrwx 1 root root 16 Apr 15 00:01 /lib/x86_64-linux-gnu/libcrypt.so.1 -> libcrypt-2.19.so

By resolving the symbolic links and including both the link and the file, we are ready to export the bare essentials from the container!

getpwnam does not work

But after copying all essential files to a scratch image, NGiNX did not start. It appeared that NGiNX tries to resolve the user id 'nginx' and fails to do so.

docker run -P  --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;"
...
2015/06/29 21:29:08 [emerg] 1#1: getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2
nginx: [emerg] getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2

It turned out that the shared libraries for the name service switch, which reads /etc/passwd and /etc/group, are loaded at runtime and not referenced among the shared libraries. By adding these shared libraries (/lib/*/libnss*) to the container, NGiNX worked!

strip-docker-image example

So now, the strip-docker-image utility is here for you to use!

    strip-docker-image  -i image-name
                        -t target-image-name
                        [-p package]
                        [-f file]
                        [-x expose-port]
                        [-v]

The options are explained below:

-i image-name           to strip
-t target-image-name    the image name of the stripped image
-p package              package to include from image, multiple -p allowed.
-f file                 file to include from image, multiple -f allowed.
-x port                 to expose.
-v                      verbose.

The following example creates a new nginx image, named stripped-nginx, based on the official Docker image:

strip-docker-image -i nginx -t stripped-nginx  \
                           -x 80 \
                           -p nginx  \
                           -f /etc/passwd \
                           -f /etc/group \
                           -f '/lib/*/libnss*' \
                           -f /bin/ls \
                           -f /bin/cat \
                           -f /bin/sh \
                           -f /bin/mkdir \
                           -f /bin/ps \
                           -f /var/run \
                           -f /var/log/nginx \
                           -f /var/cache/nginx

Aside from the nginx package, we add the files /etc/passwd and /etc/group and the /lib/*/libnss* shared libraries. The directories /var/run, /var/log/nginx and /var/cache/nginx are required for NGiNX to operate. In addition, we add /bin/sh and a few handy utilities, just to be able to snoop around a little bit.

The stripped image has shrunk to just 7.3MB, an incredible 5.4% of the original 132.8MB, and is still fully operational!

docker images | grep nginx
...
stripped-nginx                     latest              d61912afaf16        21 seconds ago      7.297 MB
nginx                              1.9.2               319d2015d149        12 days ago         132.8 MB

And it works!

ID=$(docker run -P -d --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;")
docker run --link $ID:stripped cargonauts/toolbox-networking curl -s -D - http://stripped
...
HTTP/1.1 200 OK

For HAProxy, check out the examples directory.

Conclusion

It is possible to use the official images that are maintained and distributed by Docker and strip them down to their bare essentials, ready for use! It speeds up load times and reduces the attack surface of that specific container.

Check out the GitHub repository for the script and the manual page.

Please send me your examples of incredibly shrunk Docker images!

Scala development with GitHub's Atom editor

Sat, 06/27/2015 - 14:57

GitHub recently released version 1.0 of their Atom editor. This post gives a rough overview of its Scala support.

Basic features

Basic features such as Scala syntax highlighting are provided by the language-scala plugin.

Some work on worksheets as found in e.g. Eclipse has been done in the scala-worksheet-plus plugin, but this is still missing major features and is not very useful at this time.

Navigation and completion Ctags

Atom provides basic 'Go to Declaration' (ctrl-alt-down) and 'Search symbol' (cmd-shift-r) support by way of the default ctags-based symbols-view.

While there are multiple sbt plugins for generating ctags, the easiest seems to be to have Ensime download the sources (more on that below) and invoke ctags manually: put a ctags configuration for Scala in your home directory and run the 'ctags' command from your project root.
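
The configuration linked in the original post is not reproduced here; a typical exuberant-ctags definition for Scala in ~/.ctags looks roughly like this (an illustration, not necessarily the author's exact file):

--langdef=scala
--langmap=scala:.scala
--regex-scala=/^[ \t]*class[ \t]+([a-zA-Z0-9_]+)/\1/c,classes/
--regex-scala=/^[ \t]*object[ \t]+([a-zA-Z0-9_]+)/\1/o,objects/
--regex-scala=/^[ \t]*trait[ \t]+([a-zA-Z0-9_]+)/\1/t,traits/
--regex-scala=/^[ \t]*def[ \t]+([a-zA-Z0-9_]+)/\1/m,methods/
--regex-scala=/^[ \t]*va[lr][ \t]+([a-zA-Z0-9_]+)/\1/v,variables/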

This is useful for searching for symbols, but limited for finding declarations: for example, when checking the declaration for Success, ctags doesn't know whether this is scala.util.Success, akka.actor.Status.Success, spray.http.StatusCodes.Success or some other 3rd-party or local symbol with that name.

Ensime

This is where the Ensime plugin comes in.

Ensime is a service for Scala IDE support, originally written for the Scala support in Emacs. The project metadata for Ensime can be generated with 'sbt gen-ensime' from the ensime-sbt sbt plugin.

Usage

Start the Ensime server from Atom with 'cmd-shift-p' 'Ensime: start'. After a small pause the status bar proclaims 'Indexer ready!' and you should be good to go.

At this point the main features are 'jump to definition' (alt-click), hover for type info, and auto-completion:

atom.io ensime completion

There are some rough edges, but this is a promising start based on a solid foundation.

Conclusions

While Atom is already a pleasant, modern, open source, cross platform editor, it is clearly still early days.

The Scala support in Atom is not yet as polished as in IDEs such as IntelliJ IDEA or as stable as in more mature editors such as Sublime Text, but it is already practically useful and has serious potential. Startup is not instant, but I did not notice a 'sluggish feel' as reported by earlier reviewers.

Feel free to share your experiences in the comments, I will keep this post updated as the tools - and our experience with them - evolve.

Git Subproject Compile-time Dependencies in Sbt

Fri, 06/26/2015 - 13:33

When creating an sbt project recently, I tried to include a project with no releases. This means that including it using libraryDependencies in the build.sbt does not work. An option is to clone the project and publish it locally, but that is tedious manual work that needs to be repeated every time the cloned project changes.

Most examples explain how to add a direct compile-time dependency on a git repository to sbt, but they only show how to add a single-project repository as a dependency using a RootProject. After some searching I found the solution for adding projects from a multi-project repository: instead of RootProject, ProjectRef should be used. It takes a second argument to specify the subproject in the repository.
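
For comparison, a dependency on a single-project repository would use a plain RootProject; a minimal sketch (the repository URL is a placeholder):

lazy val someSingleProjectDependency = RootProject(uri("git://github.com/<organisation>/<repository>.git#master"))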

This is my current project/Build.scala file:

import sbt.{Build, Project, ProjectRef, uri}

object GotoBuild extends Build {
  lazy val root = Project("root", sbt.file(".")).dependsOn(staminaCore, staminaJson, ...)

  lazy val staminaCore = ProjectRef(uri("git://github.com/scalapenos/stamina.git#master"), "stamina-core")
  lazy val staminaJson = ProjectRef(uri("git://github.com/scalapenos/stamina.git#master"), "stamina-json")
  ...
}

These subprojects are now a compile-time dependency and sbt will pull in and maintain the repository in ~/.sbt/0.13/staging/[sha]/stamina. So no manual checkout with a local publish is needed. This is very handy when depending on an internal, independent project/module, without needing to create a new release for every change. (One side note: my IntelliJ currently does not recognize that the library is on the class/source path of the main project, so it complains that it cannot find symbols and therefore cannot do proper syntax checking and auto-completion.)

How to deploy composite Docker applications with Consul key values to CoreOS

Fri, 06/19/2015 - 16:34

Most examples on the deployment of Docker applications to CoreOS use a single Docker application. But as soon as you have an application that consists of more than one unit, the number of commands you have to type soon becomes annoying. At Xebia we have a best practice that says "Three strikes and you automate", mandating that the third time you do something similar, you automate. In this blog I share the manual page of a utility called fleetappctl that allows you to perform rolling upgrades and deploy Consul key-value pairs of composite applications to CoreOS, and I show three examples of its usage.


fleetappctl is a utility that allows you to manage a set of CoreOS fleet unit files as a single application. You can start, stop and deploy the application. fleetappctl is idempotent and does rolling upgrades on template files with multiple instances running. It can substitute placeholders at deployment time and it is able to deploy Consul key-value pairs as part of your application. Using fleetappctl you have everything you need to create a self-contained deployment unit of your composite application and put it under version control.

The command line options to fleetappctl are shown below:

fleetappctl [-d deployment-descriptor-file]
            [-e placeholder-value-file]
            (generate | list | start | stop | destroy)
option -d

The deployment descriptor file describes all the fleet unit files and Consul key-value pair files that make up the application. All the files referenced in the deployment descriptor may have placeholders for deployment-time values. These placeholders are enclosed in double curly brackets {{ }}.

option -e

The file contains the values for the placeholders to be used on deployment of the application. The file has a simple format:

<name>=<value>
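
For example, a value file for the examples later in this post could simply contain (values are illustrative):

release=V2
message=Hello world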
start

starts all units in the order in which they appear in the deployment descriptor. If you have a template unit file, you can specify the number of instances you want to start. Start is idempotent, so you may call start multiple times; start will bring the deployment in line with your descriptor.

If the unit file has changed with respect to the deployed unit file, the corresponding instances will be stopped and restarted with the new unit file. If you have a template file, the instances of the template file will be upgraded one by one.

Any Consul key-value pairs defined by the consul.KeyValuePairs entries are created in Consul. Existing values are not overwritten.

generate

generates a deployment descriptor (deployit-manifest.xml) based on all the unit files found in your directory. If a file is a fleet unit template file, the number of instances to start is set to 2, to support rolling upgrades.

stop

stops all units in reverse order of their appearance in the deployment descriptor.

destroy

destroys all units in reverse order of their appearance in the deployment descriptor.

list

lists the runtime status of the units that appear in the deployment descriptor.

Install fleetappctl

To install the fleetappctl utility, type the following commands:

curl -q -L https://github.com/mvanholsteijn/fleetappctl/archive/0.25.tar.gz | tar -xzf -
cd fleetappctl-0.25
./install.sh
brew install xmlstarlet
brew install fleetctl
Start the platform

If you do not have the platform running, start it first.

cd ..
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service
git checkout 029d3dd8e54a5d0b4c085a192c0ba98e7fc2838d
cd vagrant
vagrant up
./is_platform_ready.sh
Example - Three component web application


The first example is a three-component application. It consists of a mount, a Redis database service and a web application. We generate the deployment descriptor, indicate that we do not want to start the mount explicitly, start the application and then modify the web application unit file to change the service name to 'helloworld'. We perform a rolling upgrade by issuing start again. Finally we list, stop and destroy the application.

cd ../fleet-units/app
# generate a deployment descriptor
fleetappctl generate

# do not start mount explicitly
xml ed -u '//fleet.UnitConfigurationFile[@name="mnt-data"]/startUnit' \
       -v false deployit-manifest.xml > \
        deployit-manifest.xml.new
mv deployit-manifest.xml{.new,}

# start the app
fleetappctl start 

# Check it is working
curl hellodb.127.0.0.1.xip.io:8080
curl hellodb.127.0.0.1.xip.io:8080

# Change the service name of the application in the unit file
sed -i -e 's/SERVICE_NAME=hellodb/SERVICE_NAME=helloworld/' app-hellodb@.service

# do a rolling upgrade
fleetappctl start 

# Check it is now accessible on the new service name
curl helloworld.127.0.0.1.xip.io:8080

# Show all units of this app
fleetappctl list

# Stop all units of this app
fleetappctl stop
fleetappctl list

# Restart it again
fleetappctl start

# Destroy it
fleetappctl destroy
Example - placeholder references

This example shows the use of a placeholder reference in the unit file of the paas-monitor application. The application takes two optional environment variables, RELEASE and MESSAGE, that allow you to configure the resulting responses. The variable RELEASE is configured in the docker run command in the fleet unit file through a placeholder. The actual value for the current deployment is taken from a placeholder value file.

cd ../fleetappctl-0.25/examples/paas-monitor
#check out the placeholder reference
grep '{{' paas-monitor@.service

...
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
 --env RELEASE={{release}} \
...
# checkout our placeholder values
cat dev.env
...
release=V2
# start the app
fleetappctl -e dev.env start

# show current release in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start is idempotent (ie. nothing happens)
fleetappctl -e dev.env start

# update the placeholder value and see a rolling upgrade in the works
echo 'release=V3' > dev.env
fleetappctl -e dev.env start
curl paas-monitor.127.0.0.1.xip.io:8080/status

fleetappctl destroy
Example - Env Consul Key Value Pair deployments


The final example shows the use of Consul key-value pairs, placeholders and envconsul to dynamically update the environment variables of a running instance. The environment variables RELEASE and MESSAGE are taken from the keys under /paas-monitor in Consul. The initial values of these keys are set on the first deployment using values from the placeholder file.

cd ../fleetappctl-0.25/examples/envconsul

#check out the Consul Key Value pairs, and notice the reference to placeholder values
cat keys.consul
...
paas-monitor/release={{release}}
paas-monitor/message={{message}}

# checkout our placeholder values
cat dev.env
...
release=V4
message=Hi guys
# start the app
fleetappctl -e dev.env start

# show current release and message in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# Change the message in Consul
fleetctl ssh paas-monitor@1 \
    curl -X PUT \
    -d \'hello Consul\' \
    http://172.17.8.101:8500/v1/kv/paas-monitor/message

# checkout the changed message
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start does not change the values..
fleetappctl -e dev.env start
Conclusion

CoreOS provides all the basic functionality for a Container Platform as a Service. With the fleetappctl utility it becomes easy to start, stop and upgrade composite applications. The script complements fleetctl and does not break other ways of deploying your applications to CoreOS.

The source code, manual page and documentation of fleetappctl can be found on https://github.com/mvanholsteijn/fleetappctl.

 

boot2docker on xhyve

Mon, 06/15/2015 - 09:00

xhyve is a new hypervisor in the vein of KVM on Linux and bhyve on BSD. It’s actually a port of BSD’s bhyve to OS X, and more similar to KVM than to VirtualBox in that it’s minimal and command line only, which makes it a good fit for an always-running virtual machine like boot2docker on OS X.

This post documents the steps to get boot2docker running within xhyve and contains some quick benchmarks to compare xhyve’s performance with VirtualBox.

Read the full blogpost on Simon van der Veldt's website

Using UIPageViewControllers with Segues

Mon, 06/15/2015 - 07:30

I've always wondered how to configure a UIPageViewController much like you configure a UITabBarController inside a Storyboard. It would be nice to create standard embed segues to the view controllers that make up the pages. Unfortunately, such a thing is not possible and currently you can't do a whole lot in a Storyboard with page view controllers.

So I thought I'd create a custom way of connecting pages through segues. This post explains how.

Without segues

First let's create an example without using segues and then later we'll try to modify it to use segues.

Screen Shot 2015-06-14 at 10.01.10

In the above Storyboard we have two scenes: one for the page view controller and another for the individual pages, the content view controller. The page view controller will have 4 pages that each display their page number. It's about as simple as a page view controller can get.

Below is the code of our simple content view controller:

class MyContentViewController: UIViewController {

  @IBOutlet weak var pageNumberLabel: UILabel!

  var pageNumber: Int!

  override func viewDidLoad() {
    super.viewDidLoad()
    pageNumberLabel.text = "Page \(pageNumber)"
  }
}

That means that our page view controller just needs to instantiate an instance of our MyContentViewController for each page and set the pageNumber. And since there is no segue between the two scenes, the only way to create an instance of the MyContentViewController is programmatically with the UIStoryboard.instantiateViewControllerWithIdentifier(_:). Of course for that to work, we need to give the content view controller scene an identifier. We'll choose MyContentViewController to match the name of the class.

Our page view controller will look like this:

class MyPageViewController: UIPageViewController {

  let numberOfPages = 4

  override func viewDidLoad() {
    super.viewDidLoad()

    setViewControllers([createViewController(1)], 
      direction: .Forward, 
      animated: false, 
      completion: nil)

    dataSource = self
  }

  func createViewController(pageNumber: Int) -> UIViewController {
    let contentViewController = 
      storyboard?.instantiateViewControllerWithIdentifier("MyContentViewController") 
      as! MyContentViewController
    contentViewController.pageNumber = pageNumber
    return contentViewController
  }

}

extension MyPageViewController: UIPageViewControllerDataSource {
  func pageViewController(pageViewController: UIPageViewController, viewControllerAfterViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber, 
      numberOfPages) + 1)
  }

  func pageViewController(pageViewController: UIPageViewController, viewControllerBeforeViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber - 2, 
      numberOfPages) + 1)
  }
}

func mod(x: Int, m: Int) -> Int{
  let r = x % m
  return r < 0 ? r + m : r
}

Here we created the createViewController(_:) method that has the page number as argument and creates the instance of MyContentViewController and sets the page number. This method is called from viewDidLoad() to set the initial page and from the two UIPageViewControllerDataSource methods that we're implementing here to get to the next and previous page.

The custom mod(_:_:) function is used to have continuous page navigation where the user can go from the last page to the first and vice versa (the built-in % operator does not do the true modulo operation that we need here).

Using segues

The above sample is pretty simple. So how can we change it to use a segue? First of all we need to create a segue between the two scenes. Since we're not doing anything standard here, it will have to be a custom segue. Now we need a way to get an instance of our content view controller through the segue, which we can use from our createViewController(_:) method. The only method we can use to do anything with the segue is UIViewController.performSegueWithIdentifier(_:sender:). We know that calling that method will create an instance of our content view controller, which is the destination of the segue, but we then need a way to get this instance back into our createViewController(_:) method. The only place we can reference the new content view controller instance is from within the custom segue. From its init method we can assign it to a variable that createViewController(_:) can also access. That looks something like the following. First we create the variable:

  var nextViewController: MyContentViewController?

Next we create a new custom segue class that assigns the destination view controller (the new MyContentViewController) to this variable.

public class PageSegue: UIStoryboardSegue {

  public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
    super.init(identifier: identifier, source: source, destination: destination)
    if let pageViewController = source as? MyPageViewController {
      pageViewController.nextViewController = 
        destinationViewController as? MyContentViewController
    }
  }

  public override func perform() {}
    
}

Since we're only interested in getting the reference to the created view controller we don't need to do anything extra in the perform() method. And the page view controller itself will handle displaying the pages so our segue implementation remains pretty simple.

Now we can change our createViewController(_:) method:

func createViewController(pageNumber: Int) -> UIViewController {
  performSegueWithIdentifier("Page", sender: self)
  nextViewController!.pageNumber = pageNumber
  return nextViewController!
}

The method looks a bit odd, since we're never assigning nextViewController anywhere in this view controller. We're also relying on the fact that the segue is performed synchronously by the performSegueWithIdentifier call; otherwise this wouldn't work.

Now we can create the segue in our Storyboard. We need to give it the same identifier as we used above and set the Segue Class to PageSegue.

Screen Shot 2015-06-14 at 11.58.49

Generic class

Now we can finally create segues to visualise the relationship between page view controller and content view controller. But let's see if we can write a generic class that has most of the logic which we can reuse for each UIPageViewController.

We'll create a class called SeguePageViewController which will be the super class for our MyPageViewController. We'll move the PageSegue to the same source file and refactor some code to make it more generic:

public class SeguePageViewController: UIPageViewController {

    var pageSegueIdentifier = "Page"
    var nextViewController: UIViewController?

    public func createViewController(sender: AnyObject?) -> UIViewController {
        performSegueWithIdentifier(pageSegueIdentifier, sender: sender)
        return nextViewController!
    }

}

public class PageSegue: UIStoryboardSegue {

    public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
        super.init(identifier: identifier, source: source, destination: destination)
        if let pageViewController = source as? SeguePageViewController {
            pageViewController.nextViewController = destinationViewController as? UIViewController
        }
    }

    public override func perform() {}
    
} 

As you can see, we moved the nextViewController variable and createViewController(_:) to this class and use UIViewController instead of our concrete MyContentViewController class. We also introduced a new variable pageSegueIdentifier to be able to change the identifier of the segue.

The only thing missing now is setting the pageNumber of our MyContentViewController. Since we just made things generic we can't set it from here, so what's the best way to deal with that? You might have noticed that the createViewController(_:) method now has a sender: AnyObject? as argument, which in our case is still the page number. And we know another method that receives this sender object: prepareForSegue(_:sender:). All we need to do is implement this in our MyPageViewController class.

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if segue.identifier == pageSegueIdentifier {
    let vc = segue.destinationViewController as! MyContentViewController
    vc.pageNumber = sender as! Int
  }
}

If you're surprised to see the sender argument being used this way you might want to read my previous post about Understanding the 'sender' in segues and use it to pass on data to another view controller.

Conclusion

Whether or not you'd want to use this approach of using segues for page view controllers might be a matter of personal preference. It doesn't give any huge benefits, but it does give you a visual indication of what's happening in the Storyboard. It doesn't really save you any code, since the amount of code in MyPageViewController remains about the same: we just replaced createViewController(_:) with the prepareForSegue(_:sender:) method.

It would be nice if Apple offered a better solution that depends less on the data source of a page view controller and lets you define the paging logic directly in the Storyboard.

3 easy ways of improving your Kanban system

Sun, 06/14/2015 - 21:10

You are working in a Kanban team and after a few months it doesn’t seem to be as easy and effective as it used to be. Here are three tips on how to get that energy back up.

1. Recheck and refresh all policies

New team members haven't gone through the same process as the other team members. This means that they probably won't pick up on the subtleties of the board and the policies that go with it. Over time, team agreements that were very explicit (remember the Kanban practice "Make Policies Explicit"?) aren't anymore. For example: a blue stickie that used to have a class of service associated with it becomes a stickie for the person that works on the project. Unclear policies lead to loss of focus, confusion and lack of flow. Check whether all policies are still valid, clear and understood by everybody. If not, get rid of them or improve them!

2. Check if WIP limits are understood and adhered to

In the period right after a team starts using Kanban, Work in Progress (WIP) limits are not completely optimized for flow. They are usually set higher to minimize resistance from the team and stakeholders, as tighter WIP limits require a higher maturity of the organization. The team should then start to adjust (lower) the limits to optimize flow. A lack of team focus on improving flow can mean that WIP limits are never adjusted.

Reevaluate the current WIP limits, adjust them for bottlenecks, lower them and discuss next steps. It will return the focus and help the team to "Start Finishing".

3. Do a complete overhaul of the board

It sounds counterintuitive, but sometimes it helps to take the board and do a complete run-through of the steps to design a Kanban system. Analyze up- and downstream stakeholders and model the steps the work flows through. After that, update the legend and the policies for classes of service.

Even if the board looks almost the same, the discussion and analysis might lead to subtle changes that make a world of difference. And it’s good for team spirit!

 

I hope these tips can help you. Please share any that you have in the comments!

ATDD with Lego Mindstorms EV3

Fri, 06/12/2015 - 20:09

We have been building automated acceptance tests using web browsers, mobile devices and web services for several years now. Last week Erik Zeedijk and I came up with the (crazy) idea to implement features for a robot using ATDD. In this blog we will explain how we used ATDD while experimenting with Lego Mindstorms EV3.

What are we building?

When you practice ATDD it is really important to have a common understanding about what you need to build and test. So we had several conversations about requirements and specifications.

We wanted to build a robot that could help us clean up post-its from tables or floors after brainstorms and other sticky-note-related sessions. For our first iteration we decided to experiment with moving the robot and using the color scanner. The robot should search for yellow post-it notes all by itself. No manual interaction, because we hate manual work. After the conversation we wrote down the following acceptance criteria to scope our development activities:

  • The robot needs to move his way on a white square board
  • The robot needs to turn away from the edges of the square board
  • The robot needs to stop at a yellow post-it

We used Cucumber to write down the features and acceptance test scenarios like:


Feature: Move robot
Scenario: Robot turns when a purple line is detected
When the robot is moving and encounters a purple line
Then the robot turns left

Scenario: Robot stops moving when yellow post-it detected
When the robot is moving and encounters yellow post-it
Then the robot stops

Just enough information to start rocking with the robot! Ready? Yes! Tests? Yes! Lets go!

How did we build it?

First we played around with the Lego Mindstorms EV3 programming software (4GL tool). We could easily build the application by clicking our way through the conditions, loops and color detections.

But we couldn't find a way to hook up our acceptance tests, so we decided to look for programming languages. The first hit was leJOS. It comes with custom firmware which you install on an SD card. This firmware allows you to run Java on the Lego EV3 Brick.

Installing the firmware on the EV3 Brick

The EV3 main computer "Brick" is actually pretty powerful.

It has a 300MHz ARM processor with 64MB RAM and takes micro SD cards for storage.
It also has Bluetooth and Wi-Fi making it possible to wirelessly connect to it and load programs on it.

There are several custom firmware images you can run on the EV3 Brick. The leJOS firmware lets us program in Java with access to a specialized API created for the EV3 Brick. Install it by following these steps:

  1. Download the latest firmware from SourceForge: http://sourceforge.net/projects/ev3.lejos.p/files/
  2. Get a blank SD card (2GB – 32GB, SDXC not supported) and format this card to FAT32.
  3. Unzip the 'lejosimage.zip' file from the leJOS home directory to the root directory of the SD card.
  4. Download Java for Lego Mindstorms EV3 from the Oracle website: http://java.com/legomindstorms
  5. Copy the downloaded 'Oracle JRE.tar.gz' file to the root of the SD card.
  6. Safely remove the SD card from your computer, insert it into the EV3 Brick and boot the EV3. The first boot will take approximately 8 minutes while it creates the needed partitions for the EV3 Brick to work.
ATDD with Cucumber

Cucumber helped us create scenarios in a human-readable language. We created the step definitions to glue all the automation together and used remote access to the EV3 via Java RMI.

We also wrote some support code to help us make the automation code readable and maintainable.

Below you can find an example of the step definition for a Given statement of a scenario.

@Given("^the robot is driving$")
public void the_robot_is_driving() throws Throwable {
    Ev3BrickIO.start();
    assertThat(Ev3BrickIO.leftMotor.isMoving(), is(true));
    assertThat(Ev3BrickIO.rightMotor.isMoving(), is(true));
}
Behavior Programming

A robot needs to perform a series of functions. In our case we want to let the robot drive around until he finds a yellow post-it, after which he stops. The obvious way of programming these functions is to make use of several if, then, else statements and loops. This works fine at first; however, as soon as the patterns become more complex, the code becomes complex as well, since adding new actions can influence other actions.

A solution to this problem is Behavior Programming. The concept is based on the fact that a robot can only do one action at a time, depending on the state it is in. These behaviors are written as separate, independent classes controlled by an overall structure, so that behavior can be changed or added without touching other behaviors.

The following is important in this concept:

  1. One behavior is active at a time and then controls the robot.
  2. Each behavior is given a priority. For example: the turn behavior has priority over the drive behavior, so that the robot will turn once the color sensor indicates it has reached the edge of the field.
  3. Each behavior is a self contained, independent object.
  4. The behaviors can decide if they should take control of the robot.
  5. Once a behavior becomes active, it has a higher priority than any of the other behaviors.
Behaviour API and Coding

The Behavior API is fairly easy to understand, as it consists of only one interface and one class. The structure of each individual behavior class is defined by the Behavior interface, and the behavior classes are controlled by an Arbitrator class that decides which behavior should be the active one. A separate class should be created for each behavior you want the robot to have.

leJOS Behaviour Pattern

 

When an Arbitrator object is instantiated, it is given an array of Behavior objects. Once it has these, the start() method is called and it begins arbitrating; deciding which behavior will become active.

In the code example below the Arbitrator is created. The first two lines create instances of our behaviors. The third line places them into an array, with the lowest-priority behavior taking the lowest array index. The fourth line creates the Arbitrator, and the fifth line starts the arbitration process. Once you create more behavior classes, these can simply be added to the array in order to be executed by the Arbitrator.

DriveForward driveForward = new DriveForward();
Behavior stopOnYellow = new StopOnYellow();
behaviorList = new Behavior[]{driveForward, stopOnYellow};
arbitrator = new Arbitrator(behaviorList, true);
arbitrator.start();

We will now take a detailed look at one of the behavior classes: DriveForward. Each of the standard methods is defined. First we create the action() method, which contains what we want the robot to do when the behavior becomes active; in this case, moving the left and the right motor forwards.

public void action() {
    try {
        suppressed = false;

        Ev3BrickIO.leftMotor.forward();
        Ev3BrickIO.rightMotor.forward();
        while (!suppressed) {
            Thread.yield();
        }

        Ev3BrickIO.leftMotor.stop(true);
        Ev3BrickIO.rightMotor.stop(true);
    } catch (RemoteException e) {
        e.printStackTrace();
    }
}

The suppress() method will stop the action once this is called.

public void suppress() {
    suppressed = true;
}

The takeControl() method tells the Arbitrator whether this behavior wants to become active. In order to keep the robot moving after having performed other actions, we decided to create a global variable to keep track of whether the robot is running. While this is the case, we simply return 'true' from the takeControl() method.

public boolean takeControl() {
    return Ev3BrickIO.running;
}
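
For completeness, the StopOnYellow behavior can follow the same pattern. The sketch below is an illustration only: the colorSensor accessor on Ev3BrickIO and the use of getColorID() are assumptions, not necessarily the project's actual API.

import java.rmi.RemoteException;

import lejos.robotics.Color;
import lejos.robotics.subsumption.Behavior;

public class StopOnYellow implements Behavior {

    // This behavior wants control as soon as the color sensor sees yellow
    // (colorSensor is an assumed accessor on the author's Ev3BrickIO wrapper).
    public boolean takeControl() {
        return Ev3BrickIO.colorSensor.getColorID() == Color.YELLOW;
    }

    // Stop both motors and flag the robot as no longer running.
    public void action() {
        try {
            Ev3BrickIO.leftMotor.stop(true);
            Ev3BrickIO.rightMotor.stop(true);
            Ev3BrickIO.running = false;
        } catch (RemoteException e) {
            e.printStackTrace();
        }
    }

    // The action completes immediately, so there is nothing to interrupt here.
    public void suppress() {
    }
}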

You can find our leJOS EV3 Automation code here.

Test Automation Day

Our robot has the potential to do much more and we have not yet implemented the feature of picking up our post-its! Next week we'll continue working with the Robot during the Test Automation Day.
Come and visit our booth where you can help us to enable the Lego Mindstorms EV3 Robot to clean up our sticky mess at the office!

Feel free to use our Test Automation Day €50 discount on the ticket price by using the following code: TAD15-XB50

We hope to see you there!

Android Design Support: NavigationView

Tue, 06/09/2015 - 09:18

When Google announced that Android apps should follow their Material Design standards, they did not give the developers a lot of tools to actually implement this new look and feel. Of course Google’s own apps were all quickly updated and looked amazing, but the rest of us were left with little more than fancy design guidelines and no real components to use in our apps.

So last week's release of the Android Design Support Library came as a relief to many. It promises to help us quickly create nice looking apps that are consistent with the rest of the platform, without having to roll everything for ourselves. Think of it as AppCompat’s UI-centric companion.

The NavigationView is the part of this library that I found the most interesting. It helps you create the slick sliding-over-everything navigation drawer that is such a recognizable part of material apps. I will demonstrate how to use this component and how to avoid some common mistakes.

Basic Setup

The basic setup is pretty straightforward, you add a DrawerLayout and NavigationView to your main layout resource:

<android.support.v4.widget.DrawerLayout
  android:id="@+id/drawer_layout"
  xmlns:android="http://schemas.android.com/apk/res/android"
  xmlns:app="http://schemas.android.com/apk/res-auto"
  android:layout_width="match_parent"
  android:layout_height="match_parent"
  android:fitsSystemWindows="true">

  <!-- The main content view -->
  <LinearLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- Toolbar instead of ActionBar so the drawer can slide on top -->
    <android.support.v7.widget.Toolbar
      android:id="@+id/toolbar"
      android:layout_width="match_parent"
      android:layout_height="@dimen/abc_action_bar_default_height_material"
      android:background="?attr/colorPrimary"
      android:minHeight="?attr/actionBarSize"
      android:theme="@style/AppTheme.Toolbar"
      app:titleTextAppearance="@style/AppTheme.Toolbar.Title"/>

    <!-- Real content goes here -->
    <FrameLayout
      android:id="@+id/content"
      android:layout_width="match_parent"
      android:layout_height="0dp"
      android:layout_weight="1"/>
  </LinearLayout>

  <!-- The navigation drawer -->
  <android.support.design.widget.NavigationView
    android:id="@+id/navigation"
    android:layout_width="wrap_content"
    android:layout_height="match_parent"
    android:layout_gravity="start"
    android:background="@color/ternary"
    app:headerLayout="@layout/drawer_header"
    app:itemIconTint="@color/drawer_item_text"
    app:itemTextColor="@color/drawer_item_text"
    app:menu="@menu/drawer"/>

</android.support.v4.widget.DrawerLayout>

And a drawer.xml menu resource for the navigation items:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
  <!-- group with single selected item so only one item is highlighted in the nav menu -->
  <group android:checkableBehavior="single">
    <item
      android:id="@+id/drawer_item_1"
      android:icon="@drawable/ic_info"
      android:title="@string/item_1"/>
    <item
      android:id="@+id/drawer_item_2"
      android:icon="@drawable/ic_help"
      android:title="@string/item_2"/>
  </group>
</menu>

Then wire it up in your Activity. Notice the nice onNavigationItemSelected(MenuItem) callback:

public class MainActivity extends AppCompatActivity implements
    NavigationView.OnNavigationItemSelectedListener {

  private static final long DRAWER_CLOSE_DELAY_MS = 250;
  private static final String NAV_ITEM_ID = "navItemId";

  private final Handler mDrawerActionHandler = new Handler();
  private DrawerLayout mDrawerLayout;
  private ActionBarDrawerToggle mDrawerToggle;
  private int mNavItemId;

  @Override
  protected void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar);

    // load saved navigation state if present
    if (null == savedInstanceState) {
      mNavItemId = R.id.drawer_item_1;
    } else {
      mNavItemId = savedInstanceState.getInt(NAV_ITEM_ID);
    }

    // listen for navigation events
    NavigationView navigationView = (NavigationView) findViewById(R.id.navigation);
    navigationView.setNavigationItemSelectedListener(this);

    // select the correct nav menu item
    navigationView.getMenu().findItem(mNavItemId).setChecked(true);

    // set up the hamburger icon to open and close the drawer
    mDrawerToggle = new ActionBarDrawerToggle(this, mDrawerLayout, toolbar, R.string.open,
        R.string.close);
    mDrawerLayout.setDrawerListener(mDrawerToggle);
    mDrawerToggle.syncState();

    navigate(mNavItemId);
  }
  
  private void navigate(final int itemId) {
    // perform the actual navigation logic, updating the main content fragment etc
  }

  @Override
  public boolean onNavigationItemSelected(final MenuItem menuItem) {
    // update highlighted item in the navigation menu
    menuItem.setChecked(true);
    mNavItemId = menuItem.getItemId();
    
    // allow some time after closing the drawer before performing real navigation
    // so the user can see what is happening
    mDrawerLayout.closeDrawer(GravityCompat.START);
    mDrawerActionHandler.postDelayed(new Runnable() {
      @Override
      public void run() {
        navigate(menuItem.getItemId());
      }
    }, DRAWER_CLOSE_DELAY_MS);
    return true;
  }

  @Override
  public void onConfigurationChanged(final Configuration newConfig) {
    super.onConfigurationChanged(newConfig);
    mDrawerToggle.onConfigurationChanged(newConfig);
  }

  @Override
  public boolean onOptionsItemSelected(final MenuItem item) {
    if (item.getItemId() == android.support.v7.appcompat.R.id.home) {
      return mDrawerToggle.onOptionsItemSelected(item);
    }
    return super.onOptionsItemSelected(item);
  }

  @Override
  public void onBackPressed() {
    if (mDrawerLayout.isDrawerOpen(GravityCompat.START)) {
      mDrawerLayout.closeDrawer(GravityCompat.START);
    } else {
      super.onBackPressed();
    }
  }

  @Override
  protected void onSaveInstanceState(final Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putInt(NAV_ITEM_ID, mNavItemId);
  }
}
Extra Style

This setup results in a nice-looking menu with some default styling. If you want to go a bit further, you can add a header view to the drawer and add some colors to the navigation menu itself:

<android.support.design.widget.NavigationView
  android:id="@+id/navigation"
  android:layout_width="wrap_content"
  android:layout_height="match_parent"
  android:layout_gravity="start"
  android:background="@color/drawer_bg"
  app:headerLayout="@layout/drawer_header"
  app:itemIconTint="@color/drawer_item"
  app:itemTextColor="@color/drawer_item"
  app:menu="@menu/drawer"/>

Here the drawer_item color is actually a ColorStateList, in which the checked state is used by the currently active navigation item:

<selector xmlns:android="http://schemas.android.com/apk/res/android">
  <item android:color="@color/drawer_item_checked" android:state_checked="true" />
  <item android:color="@color/drawer_item_default" />
</selector>
Open Issues

The current version of the library does come with its limitations. My main issue is with the system that highlights the current item in the navigation menu. The itemBackground attribute for the NavigationView does not handle the checked state of the item correctly: somehow either all items are highlighted or none of them are, which makes this attribute basically unusable for most apps. I ran into more trouble when trying to work with submenus in the navigation items. Once again the highlighting refused to work as expected: updating the selected item in a submenu makes the highlight overlay disappear altogether. In the end it seems that managing the selected item is still a chore that has to be solved manually in the app itself, which is not what I expected from what is supposed to be a drag-and-drop component aimed at taking work away from developers.

Conclusion

I think the NavigationView component missed the mark a little. My initial impression was pretty positive: I was able to quickly put together a nice looking navigation menu with very little code. The issues with the highlighting of the current item makes it more difficult to use than I would expect, but let's hope that these quirks are removed in an upcoming release of the design library.

You can find the complete source code of the demo project on GitHub: github.com/smuldr/design-support-demo.

Microservices architecture principle #6: One team is responsible for full life cycle of a Micro service

Tue, 06/02/2015 - 20:08

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Today's blog is the last in a series about our Microservices principles. This blog explains why a Microservice should be the responsibility of exactly one team (but one team may be responsible for more services).

Being responsible for the full life cycle of a service means that a single team can deploy and manage a service as well as create new versions and retire obsolete ones. This means that users of the service have a single point of contact for all questions regarding the use of the service. This property makes it easier to track changes in a service. Developers can focus on a specific area of the business they are supporting so they will become specialists in that area. This in turn will lead to better quality. The need to also fix errors and problems in production systems is a strong motivator to make sure code works correctly and problems are easy to find.
Having different teams working on different services introduces a challenge that may lead to a more robust software landscape. If TeamA needs a change in TeamB's service in order to complete its task, some form of planning has to take place. Both teams have to cater for slipping schedules and unforeseen events that cause the delivery date of a feature to change. However, depending on a commitment made by another team is tricky; there are lots of valid reasons why a change may be late (e.g. production issues or illness temporarily reduce a team's capacity, or high-priority changes take precedence). So TeamA may never depend on TeamB to finish before the deadline. TeamA will learn to protect its weekends and evenings by changing its architecture. Not making assumptions about another team's schedule, in a Microservice environment, will therefore lead to more robust software.

Microservices principles #5: Best technology for the job over one technology for all

Mon, 06/01/2015 - 11:39

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
In this blog we cover: "Best technology for the job over one technology for all".

A common benefit of service-based (and loosely coupled) architectures is the possibility to choose a different technology for each service. Even though this concept isn’t new, it’s rarely applied. Often the reason for this seems to be that even though the services should operate independently, they do share (parts of) the same stack. This is further fueled by an urge to consolidate all development under a single technology. The reasoning here usually is that developers become more interchangeable and therefore more valuable if everything runs on the same technology, which should be a good thing.

So, if this isn't new territory, why drag it up again? Why would a Microservices architecture merit changing an existing approach? The short answer is autonomy. The long(er) answer is that a Microservices architecture does not try to centralize common (technological) functions in singleton-esque services. No, the focus of a Microservices architecture is on service autonomy, centered around business capability, and a Microservice can therefore implement its own stack. This makes a Microservice easier to deploy on its own and removes dependencies on other services as much as possible.

But autonomous deployment isn't the most important reason to consider technology on a per-service basis. The most important reason is the simple "use the best tool for the job". Not all technology is created equal. This isn't limited to the choice of programming language or even the framework. It applies to the whole stack, including the data layer.

Instead of spending a significant sum to buy large, bloated, multipurpose middleware, consider lightweight, single purpose containers. Pick containers that run the tech you need. You don't need Java applications with a relational database for everything. Other languages, frameworks and even datastores exist that cater to specific needs. Think of other languages like Scala or Go, frameworks like Akka or Play and database alternatives that focus on specific needs like storing (and retrieving) geographical data.

The choice of stack also relates to the choices you can make for your application landscape. If you have existing components that work for you, or if you have components you want to buy off the shelf, it’s a real benefit not to be limited by an existing stack. For example, if you have opted for a Windows-only environment you are limiting your options.

Anyone concerned about maintaining such a diverse landscape should consider that a lot of complexity comes from trying to maintain a single stack for everything. Smaller and simpler stacks should be easier to maintain. And having a single operations team for all those different technologies doesn't sound like a good idea? You're right! If you still have separate development and operations teams, it may also be time to revisit that strategy. The DevOps approach makes running the services a shared responsibility. This doesn't happen overnight, but it is also a reason why Microservices can be such a good fit for organizations that have adopted an Agile way of working and/or apply Continuous Delivery.

Finally, giving your developers a broader toolset to play with should keep them engaged. The opportunity to work with more than one technology can be a factor in retaining and attracting talent.

Microservices architecture principle #4: Asynchronous communication over synchronous communication

Fri, 05/29/2015 - 13:37

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Over the next couple of days we will cover each of these principles in more detail in a series of blog posts.
This blog explains why we prefer asynchronous communication over synchronous communication

In a previous post in this series we explained that we prefer autonomy of Microservices over coordination between Microservices. That does not imply a Microservices landscape with all Microservices running without being dependent on any other Microservice. There will always be dependencies, but we try to minimise their number. Once you have minimised the number of dependencies, how should these be implemented such that autonomy is maintained as much as possible? Synchronous dependencies between services imply that the calling service is blocked, waiting for a response from the called service before continuing its operation. This is tight coupling, it does not scale very well, and the calling service may be impacted by errors in the called service. In a highly available, robust Microservices landscape that is not preferable. Measures can be taken (think of things like circuit breakers), but they require extra effort.

The preferred alternative is to use asynchronous communication. In this pattern the calling service simply publishes its request (or data) and continues with other work (unrelated to this request). The service has a separate thread listening for incoming responses (or data) and processes these when they come in. It is not blocked waiting for a response after it sends a request, which improves scalability. Problems in another service will not break this service. If other services are temporarily broken, the calling service might not be able to complete a process completely, but the calling service is not broken itself. Thus, using the asynchronous pattern, the services are more decoupled than with the synchronous pattern, which preserves the autonomy of the service.
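
To make the pattern concrete, here is a deliberately simplified, self-contained Java sketch that mimics the idea with in-memory queues. In a real landscape the queues would be a message broker (for example RabbitMQ or Kafka), and all names below are illustrative only:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncCallerExample {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> requests = new LinkedBlockingQueue<>();
        BlockingQueue<String> responses = new LinkedBlockingQueue<>();

        // "Called" service: consumes requests and publishes responses in its own thread.
        Thread calledService = new Thread(() -> {
            try {
                while (true) {
                    String request = requests.take();
                    responses.put("processed: " + request);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        calledService.setDaemon(true);
        calledService.start();

        // Calling service: publishes a request and immediately continues with other work.
        requests.put("order-42");
        System.out.println("request published, continuing with unrelated work");

        // A separate listener thread processes responses whenever they arrive.
        Thread responseListener = new Thread(() -> {
            try {
                System.out.println("received " + responses.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        responseListener.start();
        responseListener.join();
    }
}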