
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



What to expect at I/O’14 - Develop

Google Code Blog - Mon, 06/23/2014 - 19:20
By Reto Meier, Google Developer Advocate

Google I/O 2014 will be live in less than 48 hours. Last Friday we shared a sneak peek of content and activities around design principles and techniques. This morning we’re excited to give a glimpse into what we have in store for develop experiences.

Google I/O at its core has always been about providing you with the inspiration and resources to develop remarkable applications using Google’s platforms, tools and technologies.

This year we’ll have a full lineup of sessions from the Android, Chrome and Cloud Platform teams, highlighting what’s new as well as showcasing cross-product integrations. Here’s a sample of some of the sessions we’ll be live streaming:
  • What’s new in Android || Wednesday 1-1:45PM (Room 8): Join us for a thrilling, guided tour of all the latest developments in Android technologies and APIs. We’ll cover everything that’s new and improved in the Android platform since…well, since the last time.
  • Making the mobile web fast, feature rich and beautiful || Thursday 10-10:45AM (Room 6): Reintroducing the mobile web! What is the mobile web good at? Why should developers build for it? And how do mobile web and native complement each other? The mobile web is often the first experience new users have with your brand and you're on the hook for delivering success to them. There's been massive investment in mobile browsers; so now we have the speed, the features, and the tools to help you make great mobile web apps.
  • Predicting the future with the Google Cloud Platform || Thursday 4-4:45PM (Room 7): Can you predict the future using Big Data? Can you divine if your users will come back to your site or where the next social conflict will arise? And most importantly, can Brazil be defeated at soccer on their own turf? In this talk, we'll go through the process of data extraction, modelling and prediction as well as generating a live dashboard to visualize the results. We’ll demonstrate how you can use Google Cloud and Open Source technologies to make predictions about the biggest soccer matches in the world. You’ll see how to use Google BigQuery for data analytics and Monte Carlo simulations, as well as how to create machine learning models in R and pandas. We predict that after this talk you’ll have the necessary tools to cast your own eye on the future.
In addition, we’ve invited notable speakers such as Ray Kurzweil, Regina Dugan, Peter Norvig, and a panel of robotics experts, hosted by Women Techmakers, and will be hosting two Solve for X workshops. These speakers are defining the future with their groundbreaking research and technology, and want to bring you along for the ride.

Finally, we want to give you ample face-to-face time with the teams behind the products, so we are hosting informal ‘Box talks for Accessibility, Android, Android NDK / Gaming Performance, Cloud, Chrome, Dart, and Go. Swing by the Develop Sandbox to connect, discuss, learn and maybe even have an app performance review.

See you at I/O!

Reto Meier manages the Scalable Developer Advocacy team as part of Google's Developer Relations organization, and wrote Professional Android 4 Application Development.

Posted by Louis Gray, Googler
Categories: Programming

Don’t Overwhelm Yourself Trying to Learn Too Much

Making the Complex Simple - John Sonmez - Mon, 06/23/2014 - 15:00

It’s a great idea to educate yourself. I fully subscribe to the idea of lifetime learning–and you should too. But, in the software development field, sometimes there are so many new technologies, so many things to learn, that we can start to feel overwhelmed and like all we ever do is learn. You can start […]

The post Don’t Overwhelm Yourself Trying to Learn Too Much appeared first on Simple Programmer.

Categories: Programming

How to verify Web Service State in a Protractor Test

Xebia Blog - Sat, 06/21/2014 - 08:24

Sometimes it can be useful to verify the state of a web service in an end-to-end test. In my case, I was testing a web application that used a third-party JavaScript plugin to log page views to a REST service. I wanted tests to verify that all our web pages included the plugin, and that it communicated with the REST service properly when a new page was opened.
Because the web pages were written with AngularJS, Protractor was our framework of choice for our end-to-end test suite. But how do you verify web service state in Protractor?

My first draft of a Protractor test looked like this:

var businessMonitoring = require('../util/businessMonitoring.js');
var wizard = require('./../pageobjects/wizard.js');

describe('Business Monitoring', function() {
  it('should log the page name of every page view in the wizard', function() {
    // We opened the first page of the wizard and we expect it to have been logged
    // (the wizard page-object method and page names here are illustrative);
    expect(businessMonitoring.getMonitoredPageName()).toEqual('introduction');

    // We have clicked the ‘next’ button so the ‘completed’ page has opened,
    // this should have been logged as well;
    expect(businessMonitoring.getMonitoredPageName()).toEqual('completed');
  });
});

The next thing I had to write was the businessMonitoring.js script, which should somehow contact the REST service to verify that the correct page name was logged.
First I needed a simple plugin to make HTTP requests. I found the 'request' npm package, which provides a simple API to make an HTTP request like this:

var request = require('request');

var executeRequest = function(method, url) {
  var defer = protractor.promise.defer();
  // method can be ‘GET’, ‘POST’ or ‘PUT’
  request({uri: url, method: method, json: true}, function(error, response, body) {
    if (error || response.statusCode >= 400) {
      defer.reject({
        error : error,
        message : response
      });
    } else {
      defer.fulfill(body);
    }
  });

  // Return a promise so the caller can wait on it for the request to complete
  return defer.promise;
};

Then I completed the businessMonitoring.js script with a method that gets the last request from the REST service, using the request plugin.
It looked like this:

var businessMonitoring = exports;

< .. The request wrapper with the executeRequest method is included here, left out here for brevity ..>

businessMonitoring.getMonitoredPageName = function () {

    var defer = protractor.promise.defer();

    executeRequest('GET', 'lastRequest')  // Calls the method which was defined above
      .then(function success(data) {
        // Resolve with the page name found in the service's response
        // (the pageName property is illustrative)
        defer.fulfill(data.pageName);
      }, function error(e) {
        defer.reject('Error when calling BusinessMonitoring web service: ' + e);
      });

    return defer.promise;
};
It just fires a GET request to the REST service to see which page was logged. Because it is an Ajax call, the result is not immediately available, so a promise is returned instead.
But when I plugged the script into my Protractor test, it didn't work.
I could see that the requests to the REST service were made, but they were made immediately, before any of my end-to-end tests were executed.
How come?

The reason is that Protractor uses the WebdriverJS framework to handle its control flow. Statements like expect(), which we use in our Protractor tests, don't execute their assertions immediately; instead they put their assertions on a queue. WebdriverJS first fills the queue with all assertions and other statements from the test, and then it executes the commands on the queue. See the WebdriverJS documentation for a more extensive explanation of the control flow.
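The queueing behaviour can be sketched without any Protractor APIs at all. Here is a toy control flow in plain Node.js; the `queue`, `schedule` and `runQueue` names are invented for illustration and are not part of WebdriverJS:

```javascript
// A toy control flow: statements are pushed onto a queue first,
// and only executed afterwards, like WebdriverJS does.
var queue = [];
var order = [];

function schedule(fn) { queue.push(fn); }

function runQueue() {
  // Chain the queued functions so they run one after another
  return queue.reduce(function (prev, fn) {
    return prev.then(fn);
  }, Promise.resolve());
}

// While the "test file" is being loaded, plain statements run immediately...
order.push('built');

// ...but scheduled statements only run once the queue is executed.
schedule(function () { order.push('executed'); });

var done = runQueue().then(function () {
  console.log(order.join(','));  // built,executed
});
```

Anything that runs outside `schedule()` fires during the build phase, which is exactly what happened to the first version of the businessMonitoring script.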

That means that all statements in Protractor tests need to return promises; otherwise they execute immediately, while Protractor is still building its test queue. And that's what happened with my first implementation of the businessMonitoring mock.
The solution is to let getMonitoredPageName return its promise within another promise, like this:

var businessMonitoring = exports;

businessMonitoring.getMonitoredPageName = function () {
  // Return a promise that will execute the rest call,
  // so that the call is only done when the controlflow queue is executed.
  var deferredExecutor = protractor.promise.defer();

  deferredExecutor.then(function() {
    var defer = protractor.promise.defer();

    executeRequest('GET', 'lastRequest')
      .then(function success(data) {
        // Resolve with the page name found in the service's response
        // (the pageName property is illustrative)
        defer.fulfill(data.pageName);
      }, function error(e) {
        defer.reject('Error when calling BusinessMonitoring mock: ' + e);
      });

    return defer.promise;
  });

  return deferredExecutor;
};
Protractor takes care of resolving all the promises, so the code in my Protractor test did not have to be changed.
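The effect of the double-promise wrapper can be demonstrated outside Protractor with plain ES6 promises. This is a minimal sketch: `fetchLastPage` and the `start`/`promise` pair are invented stand-ins for `executeRequest` and the WebdriverJS control flow, not real Protractor APIs:

```javascript
var log = [];

// Stand-in for the real REST call to the monitoring service
function fetchLastPage() {
  log.push('request fired');
  return Promise.resolve('completed-page');
}

function getMonitoredPageName() {
  var start;
  // Outer promise: no work happens until the control flow resolves it.
  var trigger = new Promise(function (resolve) { start = resolve; });
  var result = trigger.then(function () {
    // Inner promise: the actual REST call, fired only now.
    return fetchLastPage();
  });
  return { start: start, promise: result };
}

// "Building the queue": the call below does not fire the request yet.
var monitored = getMonitoredPageName();
log.push('queue built');

// "Executing the queue": now the request goes out.
monitored.start();
var done = monitored.promise.then(function (page) {
  console.log(log.join(' -> ') + ' -> ' + page);  // queue built -> request fired -> completed-page
});
```

The request only fires after `start()` is called, which is what keeps it in step with the rest of the test.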

What to expect at I/O’14 - Design

Google Code Blog - Fri, 06/20/2014 - 22:00

By Roman Nurik, Design Advocate

Google I/O 2014 is less than a week away, and we could not be more excited to show you what we have planned to help you create great experiences for your users.

Over the next few days, we’ll be sharing things to look out for both in-person and virtually that highlight design principles and techniques, the latest tools and implementations with which to develop, and developer-minded products and strategies to help distribute your app. First up, design!

As we shared in April, this year’s event will include content geared to designers and developers who are interested in design. These include:

SESSIONS: Designing for wearables || Wednesday, 4-5PM (Room 8): In this session, lead Google designers will discuss the process for Android Wear and Glass, and give tips for developers and designers working in this emerging space. By attending you’ll hear our take on new design approaches to wearable technology. This is just one of a number of sessions from Google user experience researchers and designers discussing our approach to cross-platform design, as well as opportunities to learn about testing and rapid prototyping. Filter by “Design, Sessions, Live Streaming” to find the design sessions we’ll be playing over the live stream.

WORKSHOPS: Design sprints with Google Ventures || Thursday, 11AM -1PM (Workshop 1): We’ll offer a full slate of interactive workshops ranging from designing for Glass to effectively using data in your design processes, including this workshop on design techniques that you can use to gather data about customers, get your team focused on the same challenges, quickly sketch new ideas, and turn those sketches into prototypes.

PANELS: Cross-platform design panel || Thursday, 1:15-2PM (Design Sandbox, Floor 2): This year, the sandbox will be alive with talks, panels and opportunities to meet directly with Googlers. During this panel, Google designers will share their thoughts on designing for multiple platforms, providing background information on how the design team creates consistent, intuitive and delightful experiences in a multi-platform world. There are many opportunities like this as well as for UX reviews and informal ‘Box talks in the Design Sandbox, offering you opportunities to connect with Googlers who share your passion for designing simple and engaging applications.

To see the full Google I/O schedule, visit

See you at I/O!

Roman Nurik is a Design Advocate at Google, helping designers and developers build amazing products on platforms like Android and the web.

Posted by Louis Gray, Googler
Categories: Programming

Concordion without the JUnit Code

Xebia Blog - Fri, 06/20/2014 - 20:58

Concordion is a framework to support Behaviour Driven Development. It uses JUnit to run tests and HTML enriched with a little Concordion syntax to call fixture methods and make assertions on the test outcome. I won't describe Concordion itself because it is well documented here:
Instead I'll describe a small utility class I've created to avoid code duplication. Concordion requires a JUnit class for each test. The utility described below allows you to run all Concordion tests without writing a JUnit class for each one.

In Concordion you specify test cases and expected outcomes in an HTML file. Each HTML file is accompanied by a Java class of the same name that has the @RunWith(ConcordionRunner.class) annotation. This Java class is comparable to FitNesse's fixture classes. Here you can create methods that are used in the HTML file to connect the test case to the business code.

In my particular use case the team ended up writing lots of mostly empty Java files. The system under test was processing XML message files, so all the test needed to do was call a single method to hand the XML to the business code and then validate results. Each Java class was basically the same, except for its name.

To avoid this duplication I created a class that uses the Javassist library to generate a Java class on the fly and run it as a JUnit test. You can find my code on Github:

git clone

ConcordionRunner generates a class using a template. The template can be really simple, as in my example, where FixtureTemplate extends MyFixture. MyFixture holds all the fixture code needed to connect the test to the application under test. This is where we would put the fixture code necessary to call a service using an XML message. In the example there's just the single getGreeting() method.
HelloWorldAgain.html is the actual Concordion test. It shows the call to getGreeting(), which is a method of MyFixture.
The dependencies are like this:
FixtureTemplate extends MyFixture
YourTest.html uses Concordion
Example uses ConcordionRunner uses JUnitCore
The example in shows how to use ConcordionRunner to execute a test. This could easily be extended to recursively go through a directory and execute all tests found. Note that Example writes the generated class to a file. This may help in troubleshooting but isn't really necessary.
Now it would be nice to adapt the Eclipse plugin to allow you to right click the HTML file and run it as a Concordion test without adding a unit test.

mod_spdy is now an Apache project

Google Code Blog - Thu, 06/19/2014 - 19:30
By Matthew Steele, PageSpeed Insights Team

Just over two years ago, we launched mod_spdy, a plugin for the popular Apache Web Server that adds support for the SPDY protocol. At the time, our goal was both to speed up the web and help fuel the growth and adoption of SPDY by making it easy for Apache 2.2 users to install and enable SPDY on their sites. Today, SPDY is now widely adopted, officially supported by several web servers and many popular sites, and the IETF is using it as the basis for the upcoming HTTP/2.0 protocol. The time seems right for mod_spdy to cease being a third-party add-on, and to instead become a core part of Apache httpd.

We’re pleased to announce that Google has formally donated mod_spdy’s code to the Apache Software Foundation, and it is now a part of the Apache httpd codebase. “The intent is to work on making it fully part of [Apache] 2.4 and, of course, a core part of 2.6/3.0” - Jim Jagielski, co-founder of the ASF. Being a part of Apache core will make SPDY support even more widely available for Apache httpd users, and pave the way for HTTP/2.0. It will also further improve that support over the original version of mod_spdy by better integrating SPDY and HTTP/2.0’s multiplexing features with the core part of the server.

We’re grateful for all the adoption and feedback we’ve gotten from mod_spdy users over the past two years, and we’re very excited to see the Apache Software Foundation take it from here!

Matthew Steele is a Software Engineer on the Google PageSpeed Insights Team in Cambridge, MA. He and his team focus on developing tools to help site owners make their sites faster and more usable.

Posted by Louis Gray, Googler
Categories: Programming

Web Fundamentals and Web Starter Kit: Resources for Modern Web Development

Google Code Blog - Thu, 06/19/2014 - 18:45
Cross-posted from the Chromium Blog

By Paul Kinlan, Chrome Developer Relations

The web is a rich computing platform with unparalleled reach. In recent years, mobile devices have brought the web to billions of new users and introduced many new device capabilities, screen sizes, input methods, and more. To help developers navigate this brave new world, we built Web Fundamentals, a curated source for modern best practices. Today, we’re making it even easier to build multi-device experiences with the Beta release of the Web Starter Kit.

Web Fundamentals Preview

Web Fundamentals' guidelines are intended to be fundamental to the platform: useful no matter which framework you choose or which browser your users run. We have articles about responsive layouts, forms, touch, media, performance, device capabilities, and setting up a development workflow. Articles cover both coding and design. For example, the article on layout design patterns explains both the usability tradeoffs between different layout options and how to implement them. The performance section complements PageSpeed Insights, an auditing tool that encourages instant (<1 second) mobile web sites.

Designed to help you apply Web Fundamentals’ best practices in new projects, Web Starter Kit is a lightweight boilerplate with templates and tooling. Web Starter Kit gives you responsive layout, a visual style guide, and optional workflow features like performance optimization so you can keep your pages lean and fast.

Web Starter Kit preview animation

Both Web Fundamentals and the Web Starter Kit are actively developed and curated by a team of developers from Google and several open-source contributors. Our source code is available on GitHub, and we welcome contributions and feedback. Looking ahead, we’ll be adding new content and working with the web development community to refine our advice. Please file an issue or submit a pull request to help us capture web development best practices.

We look forward to a more modern, multi-device web!

Posted by Paul Kinlan, Developer Advocate and Web Fundamentalist

Posted by Louis Gray, Googler
Categories: Programming

Time is The Great Equalizer

Time really is the great equalizer.

I was reading an article by Dr. Donald E. Wetmore, a time management specialist, and here’s what he had to say:

"Time is the great equalizer for all of us. We all have 24 hours in a day, 7 days a week, yielding 168 hours per week. Take out 56 hours for sleep (we do spend about a third of our week dead) and we are down to 112 hours to achieve all the results we desire. We cannot save time (ever have any time left over on a Sunday night that you could lop over to the next week?); it can only be spent. And there are only two ways to spend our time: we can spend it wisely, or, not so wisely."

Well put.

And what’s his recommendation to manage time better?

Work smarter, not harder.

In my experience, that’s the only approach that works.

If you find yourself struggling too much, there’s a good chance your time management strategies are off.

Don’t keep throwing time and energy at things if it’s not working.

Change your approach.

The fastest thing you can change in any situation is you.

You Might Also Like

7 Days of Agile Results: A Time Management Boot Camp for Productivity on Fire

10 Big Ideas from Getting Results the Agile Way

Productivity on Fire

Categories: Architecture, Programming

How I Explained My Job to My Grandmother

Well, she wasn’t my grandmother, but you get the idea.

I was trying to explain to somebody that’s in a very different job, what my job is all about.

Here’s what I said …

As far as my day job, I do complex, complicated things. 

I'm in the business of business transformation.

I help large Enterprises get ahead in the world through technology and innovation.

I help Enterprises change their capabilities -- their business capabilities, technology capabilities, and people capabilities. 

It’s all about capabilities.

This involves figuring out their current state, their desired future state, the gaps between, the ROI of addressing the gaps, and then a Roadmap for making it happen.  

The interesting thing I've learned, though, is how much business transformation applies to personal transformation.

It's all about figuring out your unique service and contribution to the world -- your unique value -- and then optimizing your strengths to realize your potential and do what you do best in a way that's valued -- where you can both generate value, as well as capture the value -- and lift the world to a better place.

Interestingly, she said she got it, that it made sense, and that it sounded inspiring.

What a relief.

Categories: Architecture, Programming

Introduction to Agile Presentation

I gave an Introduction to Agile talk recently:

Introduction to Agile Presentation (Slideshow)

I kept it focused on three simple things:

  1. What is Agile and the Agile Mindset (the Values and Principles)
  2. A rapid tour of the big 3 (Extreme Programming, Scrum, and Lean)
  3. A shared vocabulary and simple mental models so teams can hit the ground running and work more effectively with each other.

The big takeaway that I wanted the audience to have was that it’s a journey, but a very powerful one.

It’s a very healthy way to create an organization that embraces agility, empowers people, and ships stuff that customers care about.

In fact, the most powerful aspect of going Agile is that you create a learning organization.

The system and ecosystem you are in can quickly improve if you simply embrace change and focus on learning as a way of driving both continuous improvement and growing capability.

So many things get a lot better over time, if they get a little better every day.

This was actually my first real talk on Agile and Agile development.  I’ve done lots of talks on Getting Results the Agile Way, and lots of other topics from security to performance to application architecture to team development and the Cloud.  But this was the first time a group asked me to share what I learned from Agile development in patterns & practices.

It was actually fun.

As part of the talk, I shared some of my favorite takeaways and insights from the Agile world.

I’ll be sure to share some of these insights in future posts.

For now, if there is one thing to take away, it’s a reminder from David Anderson (Agile Management):

“Don’t do Agile.  Embrace agility.”

Way to be.

I shared my slides on SlideShare at Introduction to Agile Presentation (Slides) to help you learn the language, draw the visuals, and spread the word.

I’ll try to share more of my slides in the future, now that SlideShare seems to be a bit more robust.

You Might Also Like

Don’t Push Agile, Pull It

Extreme Programming at a Glance (Visual)

Scrum at a Glance (Visual)

Waterfall to Agile

What is Agile?

Categories: Architecture, Programming

New ways to connect your app to the Cloud using Android Studio and Google Cloud Platform

Android Developers Blog - Wed, 06/18/2014 - 19:03
Posted by Manfred Zabarauskas, Product Manager on Google Cloud Platform

Many Android developers, such as those behind Snapchat or Pulse, build and host their app backends on the Google Cloud Platform, and enjoy automatic management, with simple expansion to support millions of users.

To quickly add a Google Cloud Platform backend to your Android app, you can now use a number of built-in features in Android Studio 0.6.1+.

Google App Engine backend module templates

Google App Engine enables you to run your backend applications on Google's infrastructure, without ever requiring you to maintain any servers.

To simplify the process of adding an App Engine backend to your app, Android Studio now provides three App Engine backend module templates which you can add to your app. You can find them under "New → Module" menu:

  1. App Engine Java Servlet Module provides a simple App Engine Java backend servlet with minimal boilerplate code,
  2. App Engine Java Endpoints Module template leverages Google Cloud Endpoints for your backend, and includes automated object marshalling/unmarshalling, generation of strongly-typed Java client libraries and so on,
  3. App Engine Backend with Google Cloud Messaging includes both Google Cloud Endpoints and Google Cloud Messaging integration, which enables additional features like push notifications.

When you choose one of these template types, a new Gradle module with your specified module/package name will be added to your project containing your new App Engine backend. All of the required dependencies/permissions will be automatically set up for you.

You can then run it locally (on http://localhost:8080) by selecting the run configuration with your backend's module name, as shown in the image below.

For more information about these backend templates, including how to deploy them live to App Engine and code examples showing how to connect your Android app to these backends, see their documentation on GitHub. (Also, the code for these templates lives in the same GitHub repository, so do not hesitate to submit a pull request if you have any suggestions!)

Built-in rich editing support for Google Cloud Endpoints

Once you have added the backend module to your Android application, you can use Google Cloud Endpoints to streamline the communication between your backend and your Android app. Cloud Endpoints automatically generates strongly-typed client libraries from simple Java server-side API annotations, automates Java object marshalling to and from JSON, provides built-in OAuth 2.0 support, and so on.

As a concrete example, "App Engine Java Endpoints Module" contains a simple annotated Endpoints API at <backend-name>/src/main/java/<package-name>/ file (shown below):

import com.google.api.server.spi.config.Api;
import com.google.api.server.spi.config.ApiMethod;
import com.google.api.server.spi.config.ApiNamespace;
import javax.inject.Named;

@Api(name = "myApi",
     version = "v1",
     namespace = @ApiNamespace(ownerDomain = "<package-name>",
                               ownerName = "<package-name>",
                               packagePath = ""))
public class MyEndpoint {
    @ApiMethod(name = "sayHi")
    public MyBean sayHi(@Named("name") String name) {
        MyBean response = new MyBean();
        response.setData("Hi, " + name);
        return response;
    }
}
On deployment, this annotated Endpoints API definition class generates a RESTful API. You can explore this generated API (and even make calls to it) by navigating to Endpoints API explorer as shown in the image below:

Google APIs explorer is available at http://localhost:8080/_ah/api/explorer once you deploy your app. Color overlays in this screenshot match the colors in the API snippet above.

To simplify calling this generated API from your Android app, Android Studio automatically sets up your project to include all compile dependencies and permissions required to consume Cloud Endpoints, and will re-generate strongly-typed client libraries if your backend changes. This means that you can start calling the client libraries from your Android app immediately after defining the server-side Endpoints API:

Code completion for automatically generated, strongly-typed client libraries. Color overlays match the colors in the API snippet above.

As server-side Endpoints API definitions have to conform to a number of syntactic rules, Android Studio also includes a number of Endpoints-specific inspections and quick-fixes, which help you to avoid mistakes when writing Endpoints APIs.

For example, the "@Named" annotation is required for all non-entity-type parameters passed to server-side methods. If you forget to add this annotation when modifying the sayHi method in file, Android Studio will underline the problematic statement as you type:

Furthermore, to help you easily fix the problems with Cloud Endpoints, Android Studio will provide quick-fixes for the most common Cloud Endpoints development mistakes. To see these quick-fix suggestions, press Alt + Enter if you're running on Linux/Windows or ⌥ + Enter if you're running on Mac:

As expected, choosing the first quick-fix ("Add @Named") will automatically add the "@Named" annotation to the method's parameter, solving the problem in two key presses.

The underlying work-horses: Gradle, and Gradle plug-in for App Engine

Under the hood, Gradle is used to build both your app and your App Engine backend. In fact, when you add an App Engine backend to your Android app, an open-source App Engine plug-in for Gradle is automatically downloaded by Android Studio, and common App Engine tasks become available as Gradle targets. This allows you to use the same build system across your IDE, command-line or continuous integration environments.

For example, to deploy your backend to App Engine from Android Studio, you can launch the "appengineUpdate" task from the "Gradle tasks" tool window:

Similarly, if you want to integrate your backend's deployment into your command-line scripts, simply launch "./gradlew backend:appengineUpdate" command from your project's root directory.

Try out a codelab, or see these features live at Google I/O 2014!

If you want to give these features a spin in a more guided environment, try out our Cloud Endpoints codelab for Android. We will also be demonstrating some of these features live at Less Code, More Services, Better Android Apps session in Google I/O 2014 (as well as some of the new and even more exciting stuff), so don't forget to tune in!

We look forward to your questions or feedback, and learning about the amazing applications you have built using Android Studio and Google Cloud Platform. You can find us lurking on StackOverflow's App Engine and Cloud Endpoints forums!

Join the discussion on +Android Developers
Categories: Programming

Optimizing the Critical Rendering Path for fast mobile websites

Google Code Blog - Wed, 06/18/2014 - 18:00
By Ilya Grigorik, Google Developer Advocate

Have you ever wondered how a browser converts HTML, CSS, and JavaScript into actual pixels on your screen—a sequence of steps known as the Critical Rendering Path? And, more importantly, how you can optimize your resources so that your web pages render faster on mobile? Well, you can now learn all about that in our new online Udacity course “Website Performance Optimization: Critical Rendering Path”.
This short course provides a full overview of how the browser constructs a web page: constructing the DOM and CSSOM from your HTML and CSS markup, building the render tree, performing layout, running JavaScript, and painting the pixels. You will learn why CSS blocks rendering, why JavaScript may block DOM construction, and how to measure and optimize the performance of each of the above steps using the Chrome Developer Tools.

Once you’ve completed the course, the interactive quizzes, and the course project, you’ll know everything you need to make all of your mobile pages render in one second or less!

Ilya Grigorik is Developer Advocate at Google, where he spends his days and nights on making the web fast and driving adoption of performance best practices.

Posted by Louis Gray, Googler
Categories: Programming

Deploying a Node.js app to Docker on CoreOS using Deis

Xebia Blog - Wed, 06/18/2014 - 17:00

The world of on-premise private PaaSes is changing rapidly. A few years ago, we were building on-premise private PaaSes based upon existing infrastructure, using Puppet as an automation tool to quickly provision new application servers. We provided a self-service portal where development teams could get any type of server in any type of environment running within minutes. We created a virtual server for each application to keep things manageable, which of course is quite resource intensive.

On June 9th, Docker was declared production ready, which opens the option of provisioning lightweight containers to the teams instead of full virtual machines. This will increase the speed of provisioning even further, while reducing the cost of creating a platform and minimising resource consumption.

To illustrate how easy life is becoming, we are going to deploy an original CloudFoundry Node.js application to Docker on a CoreOS cluster. This hands-on experiment is based on MacOS, Vagrant and VirtualBox.

Step 1. Installing etcdctl and fleetctl

Before you start, you need to install etcdctl and fleetctl on your host. etcd is a nifty distributed key-value store, while fleet manages the deployment of (Docker) services to a CoreOS cluster.
$ brew install go etcdctl
$ git clone
$ cd fleet && ./build && mv bin/fleetctl /usr/local/bin


Step 2. Install the Deis Command Line Client

To control the PaaS you need to install the Deis command line client:

$ brew install python
$ sudo pip install deis

Step 3. Build the platform

Deis provides all the facilities for building, deploying, and managing applications.

$ git clone
$ cd deis
$ vagrant up

$ ssh-add ~/.vagrant.d/insecure_private_key
$ export DOCKER_HOST=tcp://
$ make pull

Step 4. Start the platform

Now all is set to start the platform:

$ make run

After the run has completed, you can use the list-units command to see that the seven components of the Deis architecture have been started: the builder, the cache, the controller, the database, the logger, the registry and the router. This architecture looks quite similar to that of CloudFoundry.

$ fleetctl list-units

UNIT                     STATE     LOAD    ACTIVE  SUB      DESC             MACHINE
deis-builder.service     launched  loaded  active  running  deis-builder     79874bde.../
deis-cache.service       launched  loaded  active  running  deis-cache       79874bde.../
deis-controller.service  launched  loaded  active  running  deis-controller  79874bde.../
deis-database.service    launched  loaded  active  running  deis-database    79874bde.../
deis-logger.service      launched  loaded  active  running  deis-logger      79874bde.../
deis-registry.service    launched  loaded  active  running  deis-registry    79874bde.../
deis-router.1.service    launched  loaded  active  running  deis-router      79874bde.../

Alternatively, you can inspect the result by looking inside the virtual machine:

$ vagrant ssh -c "docker ps"

Now that we have our platform running, we can start using it!

Step 5. Register a new user to Deis and add the public ssh key

$ deis register \
     --username=mark \
     --password=goddesses \
$ deis keys:add ~/.ssh/

Step 6. Create a Cluster

Create an application cluster under the domain ''. The --hosts option specifies all hosts in the cluster; the only host available at this moment is

$ deis clusters:create dev \
        --hosts= \

Step 7. Get the app

We created a simple but effective node.js application that shows you what happens when you scale or push a new version of the application.

$ git clone
$ deis apps:create appmon --cluster=dev
$ deis config:set RELEASE=deis-v1
$ git push deis master

Step 8. Open your application

Voila! Your application is running. Now click on start monitoring.

$ deis apps:open 

You should see something like this:


Step 9. Scaling your application

To see scaling in action, type the following command:

$ deis ps:scale web=4

It will start three new containers, which will show up in the list.



Step 10. Upgrading your application

Now make a change to the application, for instance change the message to 'Greetings from Deis release' and push your change:

$ git commit -a -m "Message changed"
$ git push deis master

After a while you will see the following on your monitor!



Step 11. Looking at CoreOS

You can use fleetctl again to look at the new services that have been added to the platform!


$ fleetctl list-units

UNIT                               STATE     LOAD    ACTIVE  SUB      DESC                       MACHINE
app-mon_v7.web.1-announce.service  launched  loaded  active  running  app-mon_v7.web.1 announce  79874bde.../
app-mon_v7.web.1-log.service       launched  loaded  active  running  app-mon_v7.web.1 log       79874bde.../
app-mon_v7.web.1.service           launched  loaded  active  running  app-mon_v7.web.1           79874bde.../
app-mon_v7.web.2-announce.service  launched  loaded  active  running  app-mon_v7.web.2 announce  79874bde.../
app-mon_v7.web.2-log.service       launched  loaded  active  running  app-mon_v7.web.2 log       79874bde.../
app-mon_v7.web.2.service           launched  loaded  active  running  app-mon_v7.web.2           79874bde.../
app-mon_v7.web.3-announce.service  launched  loaded  active  running  app-mon_v7.web.3 announce  79874bde.../
app-mon_v7.web.3-log.service       launched  loaded  active  running  app-mon_v7.web.3 log       79874bde.../
app-mon_v7.web.3.service           launched  loaded  active  running  app-mon_v7.web.3           79874bde.../
app-mon_v7.web.4-announce.service  launched  loaded  active  running  app-mon_v7.web.4 announce  79874bde.../
app-mon_v7.web.4-log.service       launched  loaded  active  running  app-mon_v7.web.4 log       79874bde.../
app-mon_v7.web.4.service           launched  loaded  active  running  app-mon_v7.web.4           79874bde.../
deis-builder.service               launched  loaded  active  running  deis-builder               79874bde.../
deis-cache.service                 launched  loaded  active  running  deis-cache                 79874bde.../
deis-controller.service            launched  loaded  active  running  deis-controller            79874bde.../
deis-database.service              launched  loaded  active  running  deis-database              79874bde.../
deis-logger.service                launched  loaded  active  running  deis-logger                79874bde.../
deis-registry.service              launched  loaded  active  running  deis-registry              79874bde.../
deis-router.1.service              launched  loaded  active  running  deis-router                79874bde.../



Deis is a very simple and easy way to create a PaaS using Docker and CoreOS. The node.js application we created could be deployed using Deis without a single modification. We will be diving into Deis and CoreOS in more detail in following posts!

FlatBuffers: A Memory-Efficient Serialization Library

Android Developers Blog - Wed, 06/18/2014 - 01:05

By Wouter van Oortmerssen, Fun Propulsion Labs at Google

Game developers: we've just released FlatBuffers as an open source project. It is a C++ serialization library that allows you to read serialized data without unpacking it or allocating additional memory.

FlatBuffers stores serialized data in buffers in a cross-platform way, supporting format evolution that is fully forwards and backwards compatible through a schema. These buffers can be stored in files or sent across the network as-is, and accessed in-place without parsing overhead.

The FlatBuffers schema compiler and runtime are written in platform-independent C++ with no library dependencies outside the STL, which makes it possible to use FlatBuffers on any platform that has a C++ compiler. We have provided methods to build the FlatBuffers library, example applications, and unit tests for Android, Linux, OSX and Windows. The schema compiler can generate code to read and write FlatBuffers binary files for C++ and Java. It can additionally parse JSON-formatted data into type-safe binaries.
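For a flavour of what such a schema looks like, here is a minimal sketch; the table and field names are invented for illustration and are not taken from the release:

```
// monster.fbs -- minimal illustrative FlatBuffers schema
table Monster {
  name: string;
  hp: short = 100;   // scalar fields can declare defaults
}

root_type Monster;
```

Running the schema compiler over a file like this generates the type-safe accessor code for the supported languages.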

Game developers can use this library to store game data with less overhead than alternative solutions (e.g. Protocol Buffers or JSON). We’re excited about the possibilities, and want to hear from you about how we can make this even better!

Download the latest release from the FlatBuffers page in GitHub and join our discussion list!

Fun Propulsion Labs is a team within Google that's dedicated to advancing gaming on Android and other platforms.

Categories: Programming

Neo4j: LOAD CSV – Handling conditionals

Mark Needham - Wed, 06/18/2014 - 00:41

While building up the Neo4j World Cup Graph I’ve been making use of the LOAD CSV function and I frequently found myself needing to do different things depending on the value in one of the columns.

For example I have one CSV file which contains the different events that can happen in a football match:

"1012","Antonin Panenka","174835",21,"penalty"
"1012","Faisal Al Dakhil","2204",57,"goal"
"102","Roger Milla","79318",106,"goal"
"102","Roger Milla","79318",108,"goal"
"102","Bernardo Redin","44555",115,"goal"
"102","Andre Kana-biyik","174649",44,"yellow"

If the type is ‘penalty’, ‘owngoal’ or ‘goal’ then I want to create a SCORED_GOAL relationship whereas if it’s ‘yellow’, ‘yellowred’ or ‘red’ then I want to create a RECEIVED_CARD relationship instead.

I learnt – from reading a cypher script written by Chris Leishman – that we can make FOREACH mimic a conditional by creating a collection with one item in it to represent ‘true’ and an empty collection to represent ‘false’.

In this case we’d end up with something like this to handle the case where a row represents a goal:

// removed for conciseness
// goals
FOREACH(n IN (CASE WHEN csvLine.type IN ["penalty", "goal", "owngoal"] THEN [1] ELSE [] END) |
  FOREACH(t IN CASE WHEN team = home THEN [home] ELSE [away] END |
    MERGE (stats)-[:SCORED_GOAL]->(penalty:Goal {time: csvLine.time, type: csvLine.type})))

And equally when we want to process a row that represents a card we’d have this:

// cards
FOREACH(n IN (CASE WHEN csvLine.type IN ["yellow", "red", "yellowred"] THEN [1] ELSE [] END) |
  FOREACH(t IN CASE WHEN team = home THEN [home] ELSE [away] END |
    MERGE (stats)-[:RECEIVED_CARD]->(card {time: csvLine.time, type: csvLine.type})))

And if we put everything together we get this:

MATCH (home)<-[:HOME_TEAM]-(match:Match {id: csvLine.match_id})-[:AWAY_TEAM]->(away)
MATCH (player:Player {id: csvLine.player_id})-[:IN_SQUAD]->(squad)<-[:NAMED_SQUAD]-(team)
MATCH (player)-[:STARTED|:SUBSTITUTE]->(stats)-[:IN_MATCH]->(match)
// goals
FOREACH(n IN (CASE WHEN csvLine.type IN ["penalty", "goal", "owngoal"] THEN [1] ELSE [] END) |
  FOREACH(t IN CASE WHEN team = home THEN [home] ELSE [away] END |
    MERGE (stats)-[:SCORED_GOAL]->(penalty:Goal {time: csvLine.time, type: csvLine.type})))
// cards
FOREACH(n IN (CASE WHEN csvLine.type IN ["yellow", "red", "yellowred"] THEN [1] ELSE [] END) |
  FOREACH(t IN CASE WHEN team = home THEN [home] ELSE [away] END |
    MERGE (stats)-[:RECEIVED_CARD]->(card {time: csvLine.time, type: csvLine.type})))
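The trick generalises beyond Cypher: iterating over a one-element or empty collection is a branch. A rough Python analogue of the two FOREACH blocks (the row dicts here are hypothetical, just to illustrate the idea):

```python
def match_events(row):
    """Mimic Cypher's FOREACH-over-CASE conditional: loop over a
    one-element list when the condition holds, an empty list otherwise."""
    events = []
    for _ in [1] if row["type"] in ("penalty", "goal", "owngoal") else []:
        events.append(("SCORED_GOAL", row["time"]))
    for _ in [1] if row["type"] in ("yellow", "red", "yellowred") else []:
        events.append(("RECEIVED_CARD", row["time"]))
    return events

print(match_events({"type": "penalty", "time": 21}))       # [('SCORED_GOAL', 21)]
print(match_events({"type": "yellow", "time": 44}))        # [('RECEIVED_CARD', 44)]
print(match_events({"type": "substitution", "time": 60}))  # []
```

The empty collection makes the loop body a no-op, which is exactly how the Cypher version skips the MERGE for rows of the wrong type.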

You can have a look at the code on github or follow the instructions to get all the World Cup graph into your own local Neo4j.

Feedback welcome as always.

Categories: Programming

One Change at a Time

Xebia Blog - Tue, 06/17/2014 - 08:22

One of the twelve Agile principles states "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly" [1]. Many Agile teams have retrospectives every two weeks that result in concrete actions executed in the next time period (sprint). There are many types of tuning and adjustments a team can do; examples are actions that improve the flow of work, automation of tasks, and team cooperation.

Is it a good habit for retrospectives to focus on the same type of improvement, or should the team vary the type of improvements it makes? In this blog I will look into the effect of multiple consecutive actions that affect the flow of work.

The simulation is inspired by the GetKanban game [2].

An Experiment

Ideally one would have a set-up for an experiment in which two exactly equivalent teams are compared. One team would perform consecutive actions to improve the flow of work; the other team would make just one flow adjustment and focus subsequent improvements on areas other than flow. Meanwhile the flow is measured, and after a certain period of time one verifies which of the two teams has achieved better results in terms of flow.

Such an experiment is in practice (very) difficult to perform. In this blog I will study the same experiment by making use of simulations.


For the purpose of simulation I consider a team consisting of three specialists: one designer, one developer, and one tester. The team uses a kanban process to achieve flow. See the picture below for the initial situation.


In the simulation, at the beginning of each working day it is determined how much work will be completed by the team. During the simulation the average cycle time is measured. The initial Work in Progress (WiP) limits are set to 3 for each column, as indicated by the red '3's in the picture.

The average amount of work done by the team and the average effort of one work item are such that on average it takes one card about 5.5 days to complete.

At the end of each work day, cards are pulled into the next columns (if allowed by the WiP limits). The policy is to always pull in as much work as allowed so the columns are maximally filled. Furthermore, the backlog is assumed to always have enough user stories ready to be pulled into the 'design' column. This very much resembles developing a new product when the backlog is filled with more than enough stories.

The system starts with a clean board and all columns empty. After letting the system run for 75 simulated work days, we trigger a policy change: the WiP limit for the 'design' column is increased from 3 to 5. After this change the system runs for another 100 work days.

From the chart showing the average cycle time we will be able to study the effect of WiP limit changing adjustments.


The simulation assumes a simple uniform distribution for the amount of work done by the team and for the effort assigned to a work item. I assume this is OK for the purpose of this blog. A consequence is that the result probably can't be scaled. For instance, the situation in which a column in the picture above is a single scrum team is not applicable, since a more complex probability distribution should be used instead of the uniform one.
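The mechanics described above can be sketched in a few lines of Python. Note that the distributions, effort values and WiP limits below are invented for illustration; they are not the parameters behind the charts in this post:

```python
import random

def simulate(days, design_wip, seed=1):
    """Toy kanban flow: three columns (design, develop, test) with WiP limits."""
    random.seed(seed)
    limits = [design_wip, 3, 3]
    columns = [[], [], []]          # each card: {'start': entry day, 'left': effort}
    cycle_times = []

    for day in range(days):
        # Each specialist puts a uniform amount of work into the oldest
        # unfinished card in their column (a blocked column sits idle).
        for col in columns:
            if col and col[0]['left'] > 0:
                col[0]['left'] -= random.uniform(0.5, 3.0)

        # End of day: pull finished cards forward, rightmost column first,
        # so that freed-up slots can be refilled upstream the same day.
        for i in reversed(range(3)):
            col = columns[i]
            if col and col[0]['left'] <= 0:
                if i == 2:                                   # done: record cycle time
                    cycle_times.append(day - col.pop(0)['start'])
                elif len(columns[i + 1]) < limits[i + 1]:    # room in next column?
                    card = col.pop(0)
                    card['left'] = random.uniform(3.0, 8.0)  # effort in next column
                    columns[i + 1].append(card)

        # The backlog always has stories ready: top up 'design' to its WiP limit.
        while len(columns[0]) < limits[0]:
            columns[0].append({'start': day, 'left': random.uniform(3.0, 8.0)})

    return cycle_times

wip3 = simulate(500, design_wip=3)
wip5 = simulate(500, design_wip=5)
print("avg cycle time, design WiP 3:", round(sum(wip3) / len(wip3), 1))
print("avg cycle time, design WiP 5:", round(sum(wip5) / len(wip5), 1))
```

Because the backlog is always full, raising the design WiP limit only adds queueing time in the design column without raising throughput, so the average cycle time goes up; this is the qualitative behaviour studied in the charts below.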


The picture below shows the result of running the experiment.




After the start it takes the system a little over 40 work days to reach the stable state of an average cycle time of about 24(*) days. This is the cycle time one would expect. Remember, the 'ready' column has a size of '3' and the other columns get work done. So, one would expect a cycle time of around 4 times 5.5, which equals 22 days, close to 24.

At day 75 the WiP limit is changed. As can be inferred from the picture, the cycle time starts to rise only at day 100 (it takes about one cycle time, 24 days, to respond). The new stable state is reached at day 145 with an average cycle time of around 30(**) days. It takes 70 days(!) to reach the new equilibrium.

The chart shows the following interesting features:

  1. It takes roughly 2 times the (new) average cycle time to reach the equilibrium state,
  2. The response time (when one begins to notice an effect of the policy change) is about the length of the average cycle time.

(*) One can calculate (using transition matrices) the theoretical average cycle time for this system to be 24 days.

(**) Similar, the theoretical average cycle time of the system after policy change is 31 days.



In this blog we have seen that when a team makes adjustments that affect the flow, the system needs time to get to its new stable state. Until this state has been reached, any new tuning of the flow is questionable. Simulations show that the time it takes to reach the new stable state is about 2 times the average cycle time.

For scrum teams that have 2-week sprints, the system may need about 2 months before new tuning of flow is effective. Meanwhile, the team can very well focus on other improvements, e.g. retrospectives that focus on the team aspect or collaboration with the team's environment.

Moreover, don't expect to see any changes in measurements of e.g. cycle time within the time period of the average cycle time after making a flow affecting change.

To summarise, after making flow affecting changes (e.g. increasing or decreasing WiP limits):

  • Let the system run for at least the duration of the average cycle time so it has time to respond to the change,
  • After it responds, notice the effect of the change,
  • If the effect is positive, let the system run for another duration of the average cycle time, to get to the new stable state,
  • If the effect is negative, do something else, e.g. go back to the old state, and remember that the system needs to respond to this as well!

[1] Agile manifesto,

[2] GetKanban,

Stop Taking Yourself So Seriously

Making the Complex Simple - John Sonmez - Mon, 06/16/2014 - 15:00

There is a pretty good chance I screwed up today. There is a pretty good chance that this blog article will flop. Maybe, you’ll come here to my blog, expecting to be enlightened, but instead find a bunch of worthless drivel that wasn’t worth the time it took you to read. But, guess what? I […]

The post Stop Taking Yourself So Seriously appeared first on Simple Programmer.

Categories: Programming

Combining Salt with Docker

Xebia Blog - Sat, 06/14/2014 - 10:59

You could use Salt to build and run Docker containers, but that is not how I use it here. This blogpost is about Docker containers that run Salt minions, which is just an experiment. The use case? Suppose you have several containers that run a particular piece of middleware, and this piece of middleware needs a security update, e.g. an OpenSSL hotfix. It is necessary to perform the update immediately.


The Dockerfile

In order to build a container you have to write down the container description in a file called Dockerfile. Here is the Dockerfile:

# Standard heading stuff

FROM centos

# Do Salt install stuff and squeeze in a master.conf snippet that tells the minion
# to contact the master specified.

RUN rpm -Uvh
RUN yum install -y salt-minion --enablerepo=epel-testing
RUN mkdir -p /etc/salt/minion.d
ADD ./master.conf /etc/salt/minion.d/master.conf

# Run the Salt Minion and do not detach from the terminal.
# This is important because the Docker container will exit whenever
# the CMD process exits.

CMD /usr/bin/salt-minion


Build the image

Time to run the Dockerfile through docker. The command is:

$ docker build --rm=true -t salt-minion .

provided that you run this command in the directory where the Dockerfile and master.conf reside. Docker creates an image with tag ‘salt-minion’ and throws away all intermediate images after a successful build.


Run a container

The command is:

$ docker run -d salt-minion

and Docker returns:


The Salt minion in the container starts and searches for a Salt master to connect to, as defined by the configuration setting “master” in file /etc/salt/minion.d/master.conf. You might want to run the Salt master in “auto_accept” mode so that minion keys are accepted automatically. Docker assigns a container id to the running container; that is the magic key that docker reports as a result of the run command.
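For reference, the master.conf snippet added in the Dockerfile is a one-line YAML file; the hostname below is a placeholder, not a value from this post:

```yaml
# /etc/salt/minion.d/master.conf -- point the minion at your Salt master
master: salt-master.example.com
```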

The following command shows the running container:

$ docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS              NAMES
273a6b77a8fa        salt-minion:latest   /bin/sh -c /etc/rc.l   3 seconds ago       Up 3 seconds        distracted_lumiere


Apply the hot fix
There you are: the Salt minion is controlled by your Salt master. Provided that you have a state module that contains the OpenSSL hot fix, you can now easily update all docker nodes to include the hotfix:

salt \* state.sls openssl-hotfix

That is all there is to it.

Simplicity is the Ultimate Enabler

“Everything should be made as simple as possible, but not simpler.” – Albert Einstein

Simplicity is among the ultimate of pursuits.  It’s one of your most efficient and effective tools in your toolbox.  I used simplicity as the basis for my personal results system, Agile Results, and it’s served me well for more than a decade.

And yet, simplicity still isn’t treated as a first-class citizen.

It’s almost always considered as an afterthought.  And, by then, it’s too little, too late.

In the book, Simple Architectures for Complex Enterprises (Developer Best Practices), Roger Sessions shares his insights on how simplicity is the ultimate enabler to solving the myriad of problems that complexity creates.

Complex Problems Do Not Require Complex Solutions

Simplicity is the only thing that actually works.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“So yes, the problems are complex.  But complex problems do not ipso facto require complex solutions.  Au contraire!  The basic premise of this book is that simple solutions are the only solutions to complex problems that work.  The complex solutions are simply too complex.”

Simplicity is the Antidote to Complexity

It sounds obvious but it’s true.  You can’t solve a problem with the same complexity that got you there in the first place.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“The antidote to complexity is simplicity.  Replace complexity with simplicity and the battle is three-quarters over.  Of course, replacing complexity with simplicity is not necessarily simple.” 

Focus on Simplicity as a Core Value

If you want to achieve simplicity, you first have to explicitly focus on it as a core value.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“The first thing you need to do to achieve simplicity is focus on simplicity as a core value.  We all discuss the importance of agility, security, performance, and reliability of IT systems as if they are the most important of all requirements.  We need to hold simplicity to as high a standard as we hold these other features.  We need to understand what makes architectures simple with as much critical reasoning as we use to understand what makes architectures secure, fast, or reliable.  In fact, I argue that simplicity is not merely the equal of these other characteristics; it is superior to all of them.  It is, in many ways, the ultimate enabler.”

A Security Example

Complex systems work against security.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“Take security for example.  Simple systems that lack security can be made secure.  Complex systems that appear to be secure usually aren't.  And complex systems that aren't secure are virtually impossible to make either simple or secure.”

An Agility Example

Complexity works against agility, and agility is the key to lasting solutions.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“Consider agility.  Simple systems, with their well-defined and minimal interactions, can be put together in new ways that were never considered when these systems were first created.  Complex systems can never be used in an agile way.  They are simply too complex.  And, of course, retrospectively making them simple is almost impossible.”

Nobody Ever Considers Simplicity as a Critical Feature

And that’s the problem.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“Yet, despite the importance of simplicity as a core system requirement, simplicity is almost never considered in architectural planning, development, or reviews.  I recently finished a number of speaking engagements.  I spoke to more than 100 enterprise architects, CIOs, and CTOs spanning many organizations and countries.  In each presentation, I asked if anybody in the audience had ever considered simplicity as a critical architectural feature for any projects on which they had participated. Not one person had. Ever.”

The Quest for Simplicity is Never Over

Simplicity is a quest, and the quest is never over.  Simplicity is an ongoing pursuit, and a dynamic one.  It’s not a one-time event, and it’s not static.

Via Simple Architectures for Complex Enterprises (Developer Best Practices):

“The quest for simplicity is never over.  Even systems that are designed from the beginning with simplicity in mind (rare systems, indeed!) will find themselves under a never-ending attack. A quick tweak for performance here, a quick tweak for interoperability there, and before you know it, a system that was beautifully simple two years ago has deteriorated into a mass of incomprehensibility.”

Simplicity is your ultimate sword for hacking your way through complexity … in work … in life … in systems … and ecosystems.

Wield it wisely.

You Might Also Like

10 Ways to Make Information More Useful

Reduce Complexity, Cost, and Time

Simple Enterprise Strategy

Categories: Architecture, Programming