
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Using UIPageViewControllers with Segues

Xebia Blog - Mon, 06/15/2015 - 07:30

I've always wondered how to configure a UIPageViewController much like you configure a UITabBarController inside a Storyboard. It would be nice to create standard embed segues to the view controllers that make up the pages. Unfortunately, such a thing is not possible and currently you can't do a whole lot in a Storyboard with page view controllers.

So I thought I'd create a custom way of connecting pages through segues. This post explains how.

Without segues

First let's create an example without using segues and then later we'll try to modify it to use segues.

[Screenshot: the Storyboard with the page view controller scene and the content view controller scene]

In the above Storyboard we have two scenes: one for the page view controller and another for the individual pages, the content view controller. The page view controller will have 4 pages that each display their page number. It's about as simple as a page view controller can get.

Below is the code of our simple content view controller:

class MyContentViewController: UIViewController {

  @IBOutlet weak var pageNumberLabel: UILabel!

  var pageNumber: Int!

  override func viewDidLoad() {
    super.viewDidLoad()
    pageNumberLabel.text = "Page \(pageNumber)"
  }
}

That means that our page view controller just needs to instantiate an instance of our MyContentViewController for each page and set the pageNumber. And since there is no segue between the two scenes, the only way to create an instance of the MyContentViewController is programmatically with the UIStoryboard.instantiateViewControllerWithIdentifier(_:). Of course for that to work, we need to give the content view controller scene an identifier. We'll choose MyContentViewController to match the name of the class.

Our page view controller will look like this:

class MyPageViewController: UIPageViewController {

  let numberOfPages = 4

  override func viewDidLoad() {
    super.viewDidLoad()

    setViewControllers([createViewController(1)],
      direction: .Forward,
      animated: false,
      completion: nil)

    dataSource = self
  }

  func createViewController(pageNumber: Int) -> UIViewController {
    let contentViewController =
      storyboard?.instantiateViewControllerWithIdentifier("MyContentViewController")
      as! MyContentViewController
    contentViewController.pageNumber = pageNumber
    return contentViewController
  }
}


extension MyPageViewController: UIPageViewControllerDataSource {

  func pageViewController(pageViewController: UIPageViewController, viewControllerAfterViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber,
        numberOfPages) + 1)
  }

  func pageViewController(pageViewController: UIPageViewController, viewControllerBeforeViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber - 2,
        numberOfPages) + 1)
  }
}

func mod(x: Int, m: Int) -> Int {
  let r = x % m
  return r < 0 ? r + m : r
}

Here we created the createViewController(_:) method, which takes the page number as argument, creates an instance of MyContentViewController and sets its page number. This method is called from viewDidLoad() to set the initial page and from the two UIPageViewControllerDataSource methods that we're implementing here to get to the next and previous page.

The custom mod(_:_:) function is used to get a continuous page navigation where the user can go from the last page to the first and vice versa (the built-in % operator does not do the true modulus operation that we need here).
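For illustration, here is the same wrap-around index arithmetic as a standalone sketch, written in Java so it can run outside of UIKit. The class and method names are mine, not from the post; only the mod-based formulas come from the Swift code above.

```java
public class PageWrap {
    static final int NUMBER_OF_PAGES = 4;

    // True modulus that always returns a non-negative result,
    // unlike the built-in % operator (which can return negatives).
    static int mod(int x, int m) {
        int r = x % m;
        return r < 0 ? r + m : r;
    }

    // Page after the given 1-based page number, wrapping last -> first.
    static int nextPage(int page) {
        return mod(page, NUMBER_OF_PAGES) + 1;
    }

    // Page before the given 1-based page number, wrapping first -> last.
    static int previousPage(int page) {
        return mod(page - 2, NUMBER_OF_PAGES) + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextPage(4));     // wraps around: 1
        System.out.println(previousPage(1)); // wraps around: 4
    }
}
```

With a plain %, previousPage(1) would compute -1 % 4 = -1 and produce page 0, which is exactly the bug the custom mod function avoids.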

Using segues

The above sample is pretty simple. So how can we change it to use a segue? First of all we need to create a segue between the two scenes. Since we're not doing anything standard here, it will have to be a custom segue. Now we need a way to get an instance of our content view controller through the segue which we can use from our createViewController(_:) method. The only method we can use to do anything with the segue is UIViewController.performSegueWithIdentifier(_:sender:). We know that calling that method will create an instance of our content view controller, which is the destination of the segue, but we then need a way to get this instance back into our createViewController(_:) method. The only place we can reference the new content view controller instance is from within the custom segue. From its init method we can assign it to a variable that createViewController(_:) can also access. That looks something like the following. First we create the variable:

  var nextViewController: MyContentViewController?

Next we create a new custom segue class that assigns the destination view controller (the new MyContentViewController) to this variable.

public class PageSegue: UIStoryboardSegue {

  public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
    super.init(identifier: identifier, source: source, destination: destination)
    if let pageViewController = source as? MyPageViewController {
      pageViewController.nextViewController =
        destinationViewController as? MyContentViewController
    }
  }

  public override func perform() {}
}

Since we're only interested in getting the reference to the created view controller we don't need to do anything extra in the perform() method. And the page view controller itself will handle displaying the pages so our segue implementation remains pretty simple.

Now we can change our createViewController(_:) method:

func createViewController(pageNumber: Int) -> UIViewController {
  performSegueWithIdentifier("Page", sender: self)
  nextViewController!.pageNumber = pageNumber
  return nextViewController!
}

The method looks a bit odd, since we're never assigning nextViewController anywhere in this view controller. We're also relying on the fact that the segue is performed synchronously by the performSegueWithIdentifier call; otherwise this wouldn't work.

Now we can create the segue in our Storyboard. We need to give it the same identifier as we used above and set the Segue Class to PageSegue.

[Screenshot: the custom segue in the Storyboard, with identifier "Page" and Segue Class set to PageSegue]

Generic class

Now we can finally create segues to visualise the relationship between page view controller and content view controller. But let's see if we can write a generic class that has most of the logic which we can reuse for each UIPageViewController.

We'll create a class called SeguePageViewController which will be the super class for our MyPageViewController. We'll move the PageSegue to the same source file and refactor some code to make it more generic:

public class SeguePageViewController: UIPageViewController {

    var pageSegueIdentifier = "Page"
    var nextViewController: UIViewController?

    public func createViewController(sender: AnyObject?) -> UIViewController {
        performSegueWithIdentifier(pageSegueIdentifier, sender: sender)
        return nextViewController!
    }
}

public class PageSegue: UIStoryboardSegue {

    public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
        super.init(identifier: identifier, source: source, destination: destination)
        if let pageViewController = source as? SeguePageViewController {
            pageViewController.nextViewController = destinationViewController as? UIViewController
        }
    }

    public override func perform() {}
}

As you can see, we moved the nextViewController variable and createViewController(_:) to this class and use UIViewController instead of our concrete MyContentViewController class. We also introduced a new variable pageSegueIdentifier to be able to change the identifier of the segue.

The only thing missing now is setting the pageNumber of our MyContentViewController. Since we just made things generic we can't set it from here, so what's the best way to deal with that? You might have noticed that the createViewController(_:) method now has a sender: AnyObject? as argument, which in our case is still the page number. And we know another method that receives this sender object: prepareForSegue(_:sender:). All we need to do is implement this in our MyPageViewController class.

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if segue.identifier == pageSegueIdentifier {
    let vc = segue.destinationViewController as! MyContentViewController
    vc.pageNumber = sender as! Int
  }
}

If you're surprised to see the sender argument being used this way, you might want to read my previous post about understanding the 'sender' in segues and using it to pass on data to another view controller.


Whether or not you'd want to use this approach of using segues for page view controllers is probably a matter of personal preference. It doesn't give any huge benefits, but it does give you a visual indication in the Storyboard of what's happening. It doesn't really save you any code either, since the amount of code in MyPageViewController remains about the same; we just replaced the createViewController(_:) method with prepareForSegue(_:sender:).

It would be nice if Apple offered a better solution that depends less on the data source of a page view controller and lets you define the paging logic directly in the Storyboard.

3 easy ways of improving your Kanban system

Xebia Blog - Sun, 06/14/2015 - 21:10

You are working in a Kanban team and after a few months it doesn’t seem to be as easy and effective as it used to be. Here are three tips on how to get that energy back up.

1. Recheck and refresh all policies

New team members haven’t gone through the same process as the other team members. This means that they probably won’t pick up on the subtleties of the board and policies that go with it. Over time team agreements that were very explicit (remember the Kanban Practice: “Make Policies Explicit”?) aren’t anymore. For example: A blue stickie that used to have a class of service associated with it becomes a stickie for the person that works on the project. Unclear policies lead to loss of focus, confusion and lack of flow. Do a check if all policies are still valid, clear and understood by everybody. If not, get rid of them or improve them!

2. Check if WIP limits are understood and adhered to

When a team first starts using Kanban, Work in Progress (WIP) limits are usually not yet optimized for flow. They tend to be set higher to minimize resistance from the team and stakeholders, as tighter WIP limits require a higher maturity of the organization. Over time the team should adjust (lower) the limits to optimize flow, but if the team loses focus on improving flow, the WIP limits never get adjusted.

Reevaluate the current WIP limits, adjust them for bottlenecks, lower them where possible and discuss next steps. It will return the focus and help the team to “Start Finishing”.

3. Do a complete overhaul of the board

It sounds counterintuitive, but sometimes it helps to take the board and do a complete run-through of the steps to design a Kanban system. Analyze up- and downstream stakeholders and model the steps the work flows through. After that, update the legend and the policies for classes of service.

Even if the board looks almost the same, the discussion and analysis might lead to subtle changes that make a world of difference. And it’s good for team spirit!


I hope these tips can help you. Please share any that you have in the comments!

ATDD with Lego Mindstorms EV3

Xebia Blog - Fri, 06/12/2015 - 20:09

We have been building automated acceptance tests using web browsers, mobile devices and web services for several years now. Last week Erik Zeedijk and I came up with the (crazy) idea to implement features for a robot using ATDD. In this blog we will explain how we used ATDD while experimenting with Lego Mindstorms EV3.

What are we building?

When you practice ATDD it is really important to have a common understanding about what you need to build and test. So we had several conversations about requirements and specifications.

We wanted to build a robot that could help us clean up post-its from tables or floors after brainstorms and other sticky note related sessions. For our first iteration we decided to experiment with moving the robot and using the color scanner. The robot should search for yellow post-it notes all by itself. No manual interactions because we hate manual work. After the conversation we wrote down the following acceptance criteria to scope our development activities:

  • The robot needs to move around on a white square board
  • The robot needs to turn away from the edges of the square board
  • The robot needs to stop at a yellow post-it

We used Cucumber to write down the features and acceptance test scenarios like:

Feature: Move robot
Scenario: Robot turns when a purple line is detected
When the robot is moving and encounters a purple line
Then the robot turns left

Scenario: Robot stops moving when yellow post-it detected
When the robot is moving and encounters a yellow post-it
Then the robot stops

Just enough information to start rocking with the robot! Ready? Yes! Tests? Yes! Lets go!

How did we build it?

First we played around with the Lego Mindstorms EV3 programming software (4GL tool). We could easily build the application by clicking our way through the conditions, loops and color detections.

But we couldn't find a way to hook up our acceptance tests, so we decided to look for programming languages. The first hit was leJOS. It comes with custom firmware which you install on an SD card. This firmware allows you to run Java on the Lego EV3 brick.

Installing the firmware on the EV3 Brick

The EV3 main computer “Brick” is actually pretty powerful.

It has a 300mhz ARM processor with 64MB RAM and takes micro SD cards for storage.
It also has Bluetooth and Wi-Fi making it possible to wirelessly connect to it and load programs on it.

There are several custom firmwares you can run on the EV3 Brick. The leJOS firmware allows us to do just that: we can program in Java with access to a specialized API created for the EV3 Brick. Install it by following these steps:

  1. Download the latest firmware on sourceforge:
  2. Get a blank SD card (2GB – 32GB, SDXC not supported) and format this card to FAT32.
  3. Unzip the ‘’ file from the leJOS home directory to the root directory of the SD card.
  4. Download Java for Lego Mindstorms EV3 from the Oracle website:
  5. Copy the downloaded ‘Oracle JRE.tar.gz’ file to the root of the SD card.
  6. Safely remove the SD card from your computer, insert it into the EV3 Brick and boot the EV3. The first boot will take approximately 8 minutes while it creates the partitions needed for the EV3 Brick to work.

ATDD with Cucumber

Cucumber helped us create scenarios in a human-readable language. We created the Step Definitions to glue the automation together, and used remote access to the EV3 via Java RMI.

We also wrote some support code to help us make the automation code readable and maintainable.

Below you can find an example of the StepDefinition of a Given statement of a scenario.

@Given("^the robot is driving$")
public void the_robot_is_driving() throws Throwable {
  assertThat(Ev3BrickIO.leftMotor.isMoving(), is(true));
  assertThat(Ev3BrickIO.rightMotor.isMoving(), is(true));
}

Behavior Programming

A robot needs to perform a series of functions. In our case we want to let the robot drive around until he finds a yellow post-it, after which he stops. The obvious way of programming these functions is to make use of several if, then, else statements and loops. This works fine at first, however, as soon as the patterns become more complex the code becomes complex as well since adding new actions can have influence on other actions.

A solution to this problem is Behavior Programming. The concept is based on the fact that a robot can only do one action at a time, depending on the state it is in. These behaviors are written as separate, independent classes controlled by an overall structure, so that behavior can be changed or added without touching other behaviors.

The following is important in this concept:

  1. One behavior is active at a time and then controls the robot.
  2. Each behavior is given a priority. For example; the turn behavior has priority over the drive behavior so that the robot will turn once the color sensor indicates he reaches the edge of the field.
  3. Each behavior is a self contained, independent object.
  4. The behaviors can decide if they should take control of the robot.
  5. Once a behavior becomes active, it has a higher priority than any of the other behaviors.
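The priority rules above can be sketched as a minimal arbitration loop. This is an illustrative stand-in, not the actual leJOS Arbitrator; all names here (ArbitratorDemo, DRIVE, STOP_ON_YELLOW, yellowDetected) are hypothetical:

```java
public class ArbitratorDemo {
    interface Behavior {
        boolean takeControl(); // does this behavior want the robot right now?
        String action();       // what the behavior does when active (a label here)
    }

    // Stand-in for a sensor reading; flipped when yellow is detected.
    static boolean yellowDetected = false;

    // Driving is the default: it is always willing to take control.
    static final Behavior DRIVE = new Behavior() {
        public boolean takeControl() { return true; }
        public String action() { return "drive"; }
    };

    // Stopping only wants control when the color sensor sees yellow.
    static final Behavior STOP_ON_YELLOW = new Behavior() {
        public boolean takeControl() { return yellowDetected; }
        public String action() { return "stop"; }
    };

    // One arbitration step: scan from the highest-priority behavior
    // (highest array index, as in leJOS) down to the lowest, and run
    // the first one that wants control.
    static String arbitrate(Behavior[] behaviors) {
        for (int i = behaviors.length - 1; i >= 0; i--) {
            if (behaviors[i].takeControl()) return behaviors[i].action();
        }
        return "idle";
    }

    public static void main(String[] args) {
        Behavior[] behaviors = { DRIVE, STOP_ON_YELLOW };
        System.out.println(arbitrate(behaviors)); // prints "drive"
        yellowDetected = true;
        System.out.println(arbitrate(behaviors)); // prints "stop"
    }
}
```

Because each behavior is self-contained, adding a new one (say, turning at an edge) only means writing another class and slotting it into the array at the right priority.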
Behaviour API and Coding

The Behavior API is fairly easy to understand, as it consists of just one interface and one class. Each individual behavior class implements the Behavior interface, and the behavior objects are controlled by an Arbitrator class that decides which behavior should be the active one. A separate class should be created for each behavior you want the robot to have.

leJOS Behaviour Pattern


When an Arbitrator object is instantiated, it is given an array of Behavior objects. Once it has these, the start() method is called and it begins arbitrating; deciding which behavior will become active.

In the code example below the Arbitrator is being created. The first two lines create instances of our behaviors, and the third line places them into an array, with the lowest-priority behavior taking the lowest array index. The fourth line creates the Arbitrator, and the fifth line starts the arbitration process. Once you create more behavior classes, these can simply be added to the array in order to be executed by the Arbitrator.

DriveForward driveForward = new DriveForward();
Behavior stopOnYellow = new StopOnYellow();
behaviorList = new Behavior[]{driveForward, stopOnYellow};
arbitrator = new Arbitrator(behaviorList, true);
arbitrator.start();

We will now take a detailed look at one of the behavior classes: DriveForward. Each of the standard methods is defined. First we create the action() method, which contains what we want the robot to perform when the behavior becomes active; in this case, moving the left and the right motor forwards.

public void action() {
  try {
    suppressed = false;
    Ev3BrickIO.leftMotor.forward();
    Ev3BrickIO.rightMotor.forward();

    while( !suppressed ) {
      Thread.yield();
    }
  } catch (RemoteException e) {
    e.printStackTrace();
  }
}

The suppress() method will stop the action once this is called.

public void suppress() {
  suppressed = true;
}

The takeControl() method tells the Arbitrator whether this behavior should become active. In order to keep the robot moving after having performed other actions, we decided to create a global variable that keeps track of whether the robot is running. While this is the case, we simply return true from the takeControl() method.

public boolean takeControl() {
  return Ev3BrickIO.running;
}

You can find our leJOS EV3 Automation code here.

Test Automation Day

Our robot has the potential to do much more and we have not yet implemented the feature of picking up our post-its! Next week we'll continue working with the Robot during the Test Automation Day.
Come and visit our booth where you can help us to enable the Lego Mindstorms EV3 Robot to clean-up our sticky mess at the office!

Feel free to use our Test Automation Day €50 discount on the ticket price by using the following code: TAD15-XB50

We hope to see you there!

Unit and integration are ambiguous names for tests

Coding the Architecture - Simon Brown - Fri, 06/12/2015 - 17:10

I've blogged about architecturally-aligned testing before, which essentially says that our automated tests should be aligned with the constructs that we use to describe a software system. I want to expand upon this topic and clarify a few things, but first...


In a nutshell, I think the terms "unit test" and "integration test" are ambiguous and open to interpretation. We tend to all have our own definition, but I've seen those definitions vary wildly, which again highlights that our industry often lacks a shared vocabulary. What is a "unit"? How big is a "unit"? And what does an "integration test" test the integration of? Are we integrating components? Or are we integrating our system with the database? The Wikipedia page for unit testing talks about individual classes or methods ... and that's good, so why don't we use a much more precise terminology instead that explicitly specifies the scope of the test?

Align the names of the tests with the things they are testing

I won't pretend to have all of the answers here, but aligning the names of the tests with the things they are testing seems to be a good starting point. And if you have a nicely defined way of describing your software system, the names for those tests fall into place really easily. Again, this comes back to creating that shared vocabulary - a ubiquitous language for describing the structures at different levels of abstraction within a software system.

Here are some example test definitions for a software system written in Java or C#, based upon my C4 model (a software system is made up of containers, containers contain components, and components are made up of classes).

Class tests
  What? Tests focused on individual classes, often by mocking out dependencies. These are typically referred to as “unit” tests.
  Test scope? A single class - one method at a time, with multiple test cases to test each path through the method if needed.
  Mocking, stubbing, etc? Yes; classes that represent service providers (time, random numbers, key generators, non-deterministic behaviour, etc) and interactions with “external” services via inter-process communication (e.g. databases, messaging systems, file systems, other infrastructure services, etc ... the adapters in a “ports and adapters” architecture).
  Lines of production code tested per test case? Tens.
  Speed? Very fast running; test cases typically execute against resources in memory.

Component tests
  What? Tests focused on components/services through their public interface. These are typically referred to as “integration” tests and include interactions with “external” services (e.g. databases, file systems, etc). See also ComponentTest on Martin Fowler's bliki.
  Test scope? A component or service - one operation at a time, with multiple test cases to test each path through the operation if needed.
  Mocking, stubbing, etc? Yes; other components.
  Lines of production code tested per test case? Hundreds.
  Speed? Slightly slower; component tests may incur network hops to span containers (e.g. JVM to database).

System tests
  What? UI, API, functional and acceptance tests (“end-to-end” tests; from the outside-in). See also BroadStackTest on Martin Fowler's bliki.
  Test scope? A single scenario, use case, user story, feature, etc across a whole software system.
  Mocking, stubbing, etc? Not usually, although perhaps other systems if necessary.
  Lines of production code tested per test case? Thousands.
  Speed? Slower still; a single test case may require an execution path through multiple containers (e.g. web browser to web server to database).


This definition of the tests doesn't say anything about TDD, when you write your tests and how many tests you should write. That's all interesting, but it's independent of the topic we're discussing here. So then, do *you* have a good, clear definition of what "unit testing" means? Do your colleagues at work agree? Can you say the same for "integration testing"? What do you think about aligning the names of the tests with the things they are testing? Thoughts?
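As a sketch of what scope-based naming might look like in Java: the class under test (PriceCalculator) and the plain-assert style are hypothetical examples of mine, chosen to keep the snippet self-contained rather than pulling in a test framework. The point is only the name: "class tests", scoped to a single class, one method at a time.

```java
// A trivial class under test (hypothetical example).
class PriceCalculator {
    int totalInCents(int unitPriceInCents, int quantity) {
        return unitPriceInCents * quantity;
    }
}

// Named "class tests" rather than "unit tests": the name states the scope
// explicitly - a single class, exercised one method at a time.
public class PriceCalculatorClassTests {
    public static void main(String[] args) {
        PriceCalculator calculator = new PriceCalculator();

        // One test case per path through the method.
        assertEquals(0, calculator.totalInCents(199, 0));
        assertEquals(597, calculator.totalInCents(199, 3));
        System.out.println("PriceCalculatorClassTests passed");
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```

A component test or system test would follow the same convention (e.g. OrderingComponentTests, WebShopSystemTests), so the scope of every test is visible from its name alone.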

Categories: Architecture

Software Engineering Radio - Episode 228

Coding the Architecture - Simon Brown - Fri, 06/12/2015 - 10:23

A quick note to say that my recent interview/chat with Sven Johann for Software Engineering Radio has now been published. As you might expect, it covers software architecture, diagramming, the use of UML, my C4 model, doing "just enough" and why the code should (but doesn't often) reflect the software architecture. You can listen to the podcast here. Enjoy!

Categories: Architecture

Diagrams for System Evolution

Coding the Architecture - Simon Brown - Thu, 06/11/2015 - 08:17

Every system has an architecture (whether deliberately designed or not) and most systems change and evolve during their existence - which includes their architecture. Sometimes this happens gradually and sometimes in large jumps. Let's have a look at a simple example.

Basic context diagram for a furniture company


The main elements are:

  • Sales Orderbook where sales staff book orders and designers occasionally view sales.
  • Product Catalogue where designers record the details of new products.
  • Warehouse Management application that takes orders and either delivers to customers from stock or forwards orders to manufacturing.

The data flow is generally one way, with each system pushing out data with little feedback, due to an old batch-style architecture. For example, the designer updates product information but all the updates for the day are pushed downstream at end-of-day. The Warehouse application only accepts a feed from the Sales Orderbook, which includes both orders and product details.

There are multiple issues with a system like this:

  1. Modifications to the products (new colours, sizes, materials etc) are not delivered to the manufacturer until after an order is placed. This means a long lead time for manufacturing and an unhappy customer.
  2. The sales orderbook receives a subset of the product information when updated. This means the information could become out-of-sync and not all of it is available.
  3. The warehouse system also relies on the sales orderbook for product information. This may be out of date or inaccurate.
  4. The designer does not have easy-to-access information about how product changes affect sales.

The users are unhappy with the system and want the processes to change as follows:

  1. The product designer wishes to see historic order information alongside the products in the catalog. Therefore the catalog should be able to retrieve data from the Sales Orderbook when required.
  2. Don't batch push products to the Orderbook and warehouse management. They should access the product catalog as a service when required.

System redesign

Therefore the system is re-architected as follows:

This is a significant modification to how the system works:

  • The Sales Orderbook and Product Catalogue have a bi-directional relationship and request information from each other.
  • Only orders are sent from the Sales Orderbook to the Product Catalogue.
  • There is now a direct connection between the Warehouse Manager and the Product Catalogue.
  • The product catalogue now provides all features the Designer requires. They do not need to access any other interfaces into the system.

However, if you place the two diagrams next to each other they look similar and the scale of the change is not obvious.


When we write code, we would use a ‘diff’ tool to show us the modifications.


I often do something similar on my architecture diagrams. For example:

System before modification:


System after modification:

When placed side-by-side the differences are much more obvious:


I’ve used similar colours to those used by popular diff tools; with red for a removed relationship and green for a new relationship (I would use the same colours if components were added or removed). I’ve used blue to indicate where a relationship has changed significantly. Note that I’ve included a Key on these diagrams.

You may be thinking; why have I coloured before and after diagrams rather than having a single, coloured diagram? A single diagram can work if you are only adding and removing elements but if they are being modified it becomes very cluttered. As an example, consider the relationship between the Sales Orderbook and the Product Catalogue. Would the description on a combined diagram be "product updates", "exchange sales and product information", or both? How would I show which was before and after? Lastly, what would I do with the arrow on the connection? Colourise it differently from the line?


I find that having separate, coloured diagrams for the current and desired system architectures allows the changes to be seen much more clearly. Having spoken to several software architects, it appears that many use similar techniques, however there appears to be no standard way to do this. Therefore we should make sure that we follow standard advice such as using keys and labelling all elements and connections.

Categories: Architecture

Sponsored Post: Datadog, Tumblr, Power Admin, Learninghouse, MongoDB, Internap, Aerospike, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • Make Tumblr fast, reliable and available for hundreds of millions of visitors and tens of millions of users.  As a Site Reliability Engineer you are a software developer with a love of highly performant, fault-tolerant, massively distributed systems. Apply here now! 

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer - AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data - AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (all levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.

Android Design Support: NavigationView

Xebia Blog - Tue, 06/09/2015 - 09:18

When Google announced that Android apps should follow their Material Design standards, they did not give the developers a lot of tools to actually implement this new look and feel. Of course Google’s own apps were all quickly updated and looked amazing, but the rest of us were left with little more than fancy design guidelines and no real components to use in our apps.

So last week's release of the Android Design Support Library came as a relief to many. It promises to help us quickly create nice-looking apps that are consistent with the rest of the platform, without having to roll everything ourselves. Think of it as AppCompat's UI-centric companion.

The NavigationView is the part of this library I found most interesting. It helps you create the slick sliding-over-everything navigation drawer that is such a recognizable part of material apps. I will demonstrate how to use this component and how to avoid some common mistakes.


Basic Setup

The basic setup is pretty straightforward: you add a DrawerLayout and NavigationView to your main layout resource:


<android.support.v4.widget.DrawerLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/drawer_layout"
    android:layout_width="match_parent" android:layout_height="match_parent">

  <!-- The main content view -->
  <LinearLayout android:layout_width="match_parent"
      android:layout_height="match_parent" android:orientation="vertical">

    <!-- Toolbar instead of ActionBar so the drawer can slide on top -->
    <android.support.v7.widget.Toolbar android:id="@+id/toolbar"
        android:layout_width="match_parent" android:layout_height="?attr/actionBarSize" />

    <!-- Real content goes here -->
  </LinearLayout>

  <!-- The navigation drawer -->
  <android.support.design.widget.NavigationView android:id="@+id/navigation"
      android:layout_width="wrap_content" android:layout_height="match_parent"
      android:layout_gravity="start" app:menu="@menu/drawer" />
</android.support.v4.widget.DrawerLayout>

And a drawer.xml menu resource for the navigation items:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
  <!-- group with single selected item so only one item is highlighted in the nav menu -->
  <group android:checkableBehavior="single">
    <!-- illustrative items; use your own ids, titles and icons -->
    <item android:id="@+id/drawer_item_1" android:title="Item 1" />
    <item android:id="@+id/drawer_item_2" android:title="Item 2" />
  </group>
</menu>

Then wire it up in your Activity. Notice the nice onNavigationItemSelected(MenuItem) callback:

public class MainActivity extends AppCompatActivity implements
    NavigationView.OnNavigationItemSelectedListener {

  private static final long DRAWER_CLOSE_DELAY_MS = 250;
  private static final String NAV_ITEM_ID = "navItemId";

  private final Handler mDrawerActionHandler = new Handler();
  private DrawerLayout mDrawerLayout;
  private ActionBarDrawerToggle mDrawerToggle;
  private int mNavItemId;

  @Override
  protected void onCreate(final Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    // the R.id / R.string names below must match your own layout and menu resources
    mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout);
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    setSupportActionBar(toolbar);

    // load saved navigation state if present
    if (null == savedInstanceState) {
      mNavItemId = R.id.drawer_item_1;
    } else {
      mNavItemId = savedInstanceState.getInt(NAV_ITEM_ID);
    }

    // listen for navigation events
    NavigationView navigationView = (NavigationView) findViewById(R.id.navigation);
    navigationView.setNavigationItemSelectedListener(this);

    // select the correct nav menu item
    navigationView.getMenu().findItem(mNavItemId).setChecked(true);

    // set up the hamburger icon to open and close the drawer
    mDrawerToggle = new ActionBarDrawerToggle(this, mDrawerLayout, toolbar,
        R.string.open_drawer, R.string.close_drawer);
    mDrawerLayout.setDrawerListener(mDrawerToggle);
    mDrawerToggle.syncState();

    navigate(mNavItemId);
  }

  private void navigate(final int itemId) {
    // perform the actual navigation logic, updating the main content fragment etc
  }

  @Override
  public boolean onNavigationItemSelected(final MenuItem menuItem) {
    // update highlighted item in the navigation menu
    menuItem.setChecked(true);
    mNavItemId = menuItem.getItemId();
    mDrawerLayout.closeDrawer(GravityCompat.START);

    // allow some time after closing the drawer before performing real navigation
    // so the user can see what is happening
    mDrawerActionHandler.postDelayed(new Runnable() {
      @Override
      public void run() {
        navigate(menuItem.getItemId());
      }
    }, DRAWER_CLOSE_DELAY_MS);
    return true;
  }

  @Override
  public void onConfigurationChanged(final Configuration newConfig) {
    super.onConfigurationChanged(newConfig);
    mDrawerToggle.onConfigurationChanged(newConfig);
  }

  @Override
  public boolean onOptionsItemSelected(final MenuItem item) {
    if (item.getItemId() == android.R.id.home) {
      return mDrawerToggle.onOptionsItemSelected(item);
    }
    return super.onOptionsItemSelected(item);
  }

  @Override
  public void onBackPressed() {
    if (mDrawerLayout.isDrawerOpen(GravityCompat.START)) {
      mDrawerLayout.closeDrawer(GravityCompat.START);
    } else {
      super.onBackPressed();
    }
  }

  @Override
  protected void onSaveInstanceState(final Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putInt(NAV_ITEM_ID, mNavItemId);
  }
}
Extra Style

This setup results in a nice-looking menu with some default styling. If you want to go a bit further, you can add a header view to the drawer and add some colors to the navigation menu itself:

<android.support.design.widget.NavigationView android:id="@+id/navigation"
    android:layout_width="wrap_content" android:layout_height="match_parent"
    android:layout_gravity="start"
    app:headerLayout="@layout/drawer_header"
    app:itemIconTint="@color/drawer_item"
    app:itemTextColor="@color/drawer_item"
    app:menu="@menu/drawer" />

The drawer_item color here is actually a ColorStateList, whose checked state is used by the currently active navigation item:

<selector xmlns:android="http://schemas.android.com/apk/res/android">
  <item android:color="@color/drawer_item_checked" android:state_checked="true" />
  <item android:color="@color/drawer_item_default" />
</selector>
Open Issues

The current version of the library does come with its limitations. My main issue is with the system that highlights the current item in the navigation menu. The itemBackground attribute for the NavigationView does not handle the checked state of the item correctly: somehow either all items are highlighted or none of them are, which makes this attribute basically unusable for most apps.

I ran into more trouble when trying to work with submenus in the navigation items. Once again the highlighting refused to work as expected: updating the selected item in a submenu makes the highlight overlay disappear altogether. In the end it seems that managing the selected item is still a chore that has to be solved manually in the app itself, which is not what I expected from a drag-and-drop component that is supposed to take work away from developers.
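Until that is fixed, the bookkeeping has to live in your own code. Stripped of Android specifics, the invariant you end up maintaining by hand is "exactly one navigation item is checked at a time". A plain-Java sketch of that logic (the MenuItem and NavMenu types here are illustrative stubs, standing in for the SDK classes):

```java
import java.util.ArrayList;
import java.util.List;

// Stub standing in for android.view.MenuItem, for illustration only.
class MenuItem {
  final int id;
  boolean checked;
  MenuItem(int id) { this.id = id; }
}

// Minimal model of a navigation menu that enforces the
// single-checked-item invariant on every selection.
class NavMenu {
  private final List<MenuItem> items = new ArrayList<>();

  MenuItem add(int id) {
    MenuItem item = new MenuItem(id);
    items.add(item);
    return item;
  }

  // Check the selected item and uncheck all the others.
  void select(int id) {
    for (MenuItem item : items) {
      item.checked = (item.id == id);
    }
  }

  int checkedCount() {
    int n = 0;
    for (MenuItem item : items) {
      if (item.checked) n++;
    }
    return n;
  }
}
```

In the real app the same loop runs over the NavigationView's Menu, calling setChecked on each item whenever onNavigationItemSelected fires.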


I think the NavigationView component missed the mark a little. My initial impression was pretty positive: I was able to quickly put together a nice-looking navigation menu with very little code. The issues with highlighting the current item make it more difficult to use than I would expect, but let's hope these quirks are fixed in an upcoming release of the design library.

You can find the complete source code of the demo project on GitHub:

Extracting software architecture from code

Coding the Architecture - Simon Brown - Mon, 06/08/2015 - 22:47

I ran my Extracting software architecture from code session at Skills Matter this evening and it was a lot of fun. The basic premise of the short workshop part was simple: "here's some code, now draw some software architecture diagrams to describe it". Some people did this individually and others worked in groups. It sounds easy, but you can see for yourself what happened.


There are certainly some common themes, but each diagram is different. Also, people's perception of what architectural information can be extracted from the code differed slightly too, but more on that topic another day. If you want to have a go yourself, the codebase I used was a slightly cut-down version* of the Spring PetClinic web application. The presentation parts of the session were videoed and I'm creating a 1-day version of this workshop that I'll be running at a conference or two in the Autumn.

This again raises some basic questions about the software development industry though. Why, in 2015, do we still not have a consistent way to do this? Why don't we have a shared vocabulary? When will we be able to genuinely call ourselves "software engineers"?

* The original version ships with three different database profile implementations, and I removed two of them for simplicity.

Categories: Architecture

Leveraging AWS to Build a Scalable Data Pipeline

While at Netflix and LinkedIn Siddharth "Sid" Anand wrote some great articles for HS. Sid is back, this time as a Data Architect at Agari. Original article is here.

Data-rich companies (e.g. LinkedIn, Facebook, Google, and Twitter) have historically built custom data pipelines over bare metal in custom-designed data centers. In order to meet strict requirements on data security, fault-tolerance, cost control, job scalability, and uptime, they need to closely manage their core technology. Like serving systems (e.g. web application servers and OLTP databases) that need to be up 24x7 to display content to users the world over, data pipelines need to be up and running in order to pick the most engaging and up-to-date content to display. In other words, updated ranking models, new content recommendations, and the like are what make data pipelines an integral part of an end user’s web experience by picking engaging, personalized content. 

Agari, a data-driven email security company, is no different in its demand for a low-latency, reliable, and scalable data pipeline.  It must process a flood of inbound email and email authentication metrics, analyze this data in a timely manner, often enriching it with 3rd party data and model-driven derived data, and publish findings. One twist is that Agari, unlike the companies listed above, operates completely in the cloud, specifically in AWS.  This has turned out to be more a boon than a disadvantage. 

Below is one such data pipeline used at Agari.

Agari Data Pipeline

Data Flow
Categories: Architecture

Gone Fishin' 2015

Well, not exactly Fishin', but I'll be on a month long vacation starting today.

I won't be posting (much) new content, so we'll all have a break. Disappointing, I know.

Please use this time for quiet contemplation and other inappropriate activities.

See you on down the road...

Categories: Architecture

Visionary Leadership: How To Be a Visionary Leader (Or at Least Look Like One)

“Remember this: Anticipation is the ultimate power. Losers react; leaders anticipate.” – Tony Robbins

Have you ever noticed how some leaders have a knack for "the art of the possible" and for making it relevant to the current landscape?

They are Visionary Leaders and they practice Visionary Leadership.

Visionary Leaders inspire us and show us how we can change the world, at least our slice of it, and create the change we want to be.

Visionary Leaders see things early and they connect the dots.

Visionary Leaders don't luck their way into the future.  They practice looking ahead for what's pertinent and what's probable.

Visionary Leaders also practice telling stories.  They tell stories of the future and how all the dots connect in a meaningful way.

And they put those stories of the future into context.  They don't tell disjointed stories, or focus on flavor-of-the-month fads.  That's what Trend Hoppers do.

Instead, Visionary Leaders focus on meaningful trends and insights that will play a role in shaping the future in a relevant way.

Visionary leaders tell us compelling stories of the future in a way that motivates us to take action and to make the most of what's coming our way.

Historians, on the other hand, tell us compelling stories of the past.

They excite us with stories about how we've "been there, and done that."

By contrast, Visionary Leaders win our hearts and minds with "the art of the possible" and inspire us to co-create the future, and to use future insights to own our destiny.

And Followers, well, they follow.

Not because they don't see some things coming.  But because they don't see things early enough, and they don't turn what they see into well-developed stories with coherence.

If you want to build your capacity for vision and develop your skills as a Visionary Leader, start to pay attention to signs of the future and connect the dots in a meaningful way.

With great practice, comes great progress, and progressing even a little in Visionary Leadership can make a world of difference for you and those around you.

You Might Also Like

7 Metaphors for Leadership Transformation

10 Free Leadership Skills for Work and Life

10 Leadership Ideas to Improve Your Influence and Impact

Emotional Intelligence is a Key Leadership Skill

Integrating Generalist

Categories: Architecture, Programming

Diagramming microservices with the C4 model

Coding the Architecture - Simon Brown - Mon, 06/08/2015 - 08:40

Here's a question I'm being asked more and more ... how do you diagram a microservices architecture with the C4 software architecture model?

It's actually quite straightforward providing that you have a defined view of what a microservice is. If a typical modular monolithic application is a container with a number of components running inside it, a microservice is simply a container with a much smaller number of components running inside it. The actual number of components will depend on your implementation strategy. It could range from the very simple (i.e. one, where a microservice is a container with a single component running inside) through to something like a mini-layered or hexagonal architecture. And by "container", I mean anything ranging from a traditional web server (e.g. Apache Tomcat, IIS, etc) through to something like a self-contained Spring Boot or Dropwizard application. In concrete terms then...

  • System context diagram: No changes ... you're still building a system with users (people) and other system dependencies.
  • Containers diagram: If each of your microservices can be deployed individually, then that should be reflected on the containers diagram. In other words, each microservice is represented by a separate container.
  • Component diagrams: Depending on the complexity of your microservices, I would question whether drawing a component diagram for every microservice is worth the effort. Of course, if each microservice is a mini-layered or hexagonal architecture then perhaps there's some value. I would certainly consider using something like Structurizr for doing this automatically from the code though.

So there you go, that's how I would approach diagramming a microservices architecture with the C4 model.

Categories: Architecture

Stuff The Internet Says On Scalability For June 5th, 2015

Hey, it's HighScalability time:

Stunning Multi-Wavelength Image Of The Solar Atmosphere.
  • 4x: amount spent by Facebook users
  • Quotable Quotes:
    • Facebook: Facebook's average data set for CF has 100 billion ratings, more than a billion users, and millions of items. In comparison, the well-known Netflix Prize recommender competition featured a large-scale industrial data set with 100 million ratings, 480,000 users, and 17,770 movies (items).
    • @BenedictEvans: The number of photos shared on social networks this year will probably be closer to 2 trillion than to 1 trillion.
    • @jeremysliew: For every 10 photos shared on @Snapchat, 5 are shared on @Facebook and 1 on @Instagtam. 8,696 photos/sec on Snapchat.
    • @RubenVerborgh: “Don’t ask for an API, ask for data access. Tim Berners-Lee called for open data, not open services.” —@pietercolpaert #SemDev2015 #ESWC2015
    • Craig Timberg: When they thought about security, they foresaw the need to protect the network against potential intruders or military threats, but they didn’t anticipate that the Internet’s own users would someday use the network to attack one another. 
    • Janet Abbate: They [ARPANET inventors] thought they were building a classroom, and it turned into a bank.
    • A.C. Hadfield: The power of accurate observation is often called cynicism by those who don’t possess it.
    • @plightbo: Every business is becoming a software business
    • @potsdamnhacker: Replaced Go service with an Erlang one. Already used hot-code reloading, fault tolerance, runtime inspectability to great effect. #hihaters
    • @alsargent: Given continuous deployment, average lifetime of a #Docker image @newrelic is 12hrs. Different ops pattern than VMs. #velocityconf
    • @abt_programming: "If you think good architecture is expensive, try bad architecture" - Brian Foote - and Joseph Yoder
    • @KlangianProverb: "I thought studying distributed systems would make me understand software better—it made me understand reality better."—Old Klangian Proverb
    • @rachelmetz: google's error rate for image recognition was 28 percent in 2008, now it's like 5 percent, quoc le says.

  • Fear or strength? Apple’s Tim Cook Delivers Blistering Speech On Encryption, Privacy. With Google Now on Tap Google is saying we've joyously crossed the freaky line and we unapologetically plan to leverage our huge lead in machine learning to the max. Apple knows they can't match this feature. Google knows this is a clear and distinct exploitable marketing idea, like a super thin MacBook Air slowly slipping out of a manila envelope.

  • How does Kubernetes compare to Mesos? cmcluck, who works at Google and was one of the founders of the project explains: we looked really closely at Apache Mesos and liked a lot of what we saw, but there were a couple of things that stopped us just jumping on it. (1) it was written in C++ and the containers world was moving to Go -- we knew we planned to make a sustained and considerable investment in this and knew first hand that Go was more productive (2) we wanted something incredibly simple to showcase the critical constructs (pods, labels, label selectors, replication controllers, etc) and to build it directly with the communities support and mesos was pretty large and somewhat monolithic (3) we needed what Joe Beda dubbed 'over-modularity' because we wanted a whole ecosystem to emerge, (4) we wanted 'cluster environment' to be lightweight and something you could easily turn up or turn down, kinda like a VM; the systems integrators i knew who worked with mesos felt that it was powerful but heavy and hard to setup (though i will note our friends at Mesosphere are helping to change this). so we figured doing something simple to create a first class cluster environment for native app management, 'but this time done right' as Tim Hockin likes to say everyday. < Also, CoreOS (YC S13) Raises $12M to Bring Kubernetes to the Enterprise.

  • If structure arises in the universe because electrons can't occupy the same space, why does structure arise in software?

  • The cost of tyranny is far lower than one might hope. How much would it cost for China to intercept connections and replace content flowing at 1.9-terabits/second? About $200K says Robert Graham in Scalability of the Great Cannon. Low? Probably. But for the price of a closet in the new San Francisco you can edit an entire people's perception of the Internet in real-time.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Paper: Heracles: Improving Resource Efficiency at Scale

Underutilization and segregation are the classic strategies for ensuring resources are available when work absolutely must get done. Keep a database on its own server so when the load spikes another VM or high priority thread can't interfere with RAM, power, disk, or CPU access. And when you really need fast and reliable networking you can't rely on QOS, you keep a dedicated line.

Google flips the script in Heracles: Improving Resource Efficiency at Scale, shooting for high resource utilization while combining different load profiles.

I'm assuming the project name Heracles was chosen not simply for his legendary strength, but because when strength failed, Heracles could always depend on his wits. Who can ever forget when Heracles tricked Atlas into taking the sky back onto his shoulders? Good times.

The problem: better utilization of compute resources while complying with service-level objectives (SLOs) for latency-critical (LC) and best-effort batch (BE) tasks:
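The general shape of the approach can be caricatured as a feedback controller: grant resources to best-effort tasks only while the latency-critical workload has SLO slack, and take them back quickly when it doesn't. A toy sketch of that idea (my own illustration, not the paper's actual controller; the class name and the 0.8 / 0.05 / 0.2 constants are invented):

```java
// Toy feedback loop: grow the best-effort (BE) resource share slowly while
// the latency-critical (LC) workload has SLO slack, and shrink it fast when
// the SLO is violated. All thresholds are illustrative.
class SlackController {
  private double beShare = 0.0; // fraction of the machine granted to BE tasks

  void tick(double lcLatencyMs, double sloMs) {
    if (lcLatencyMs > sloMs) {
      // SLO violated: back off quickly
      beShare = Math.max(0.0, beShare - 0.2);
    } else if (lcLatencyMs < 0.8 * sloMs) {
      // plenty of slack: grow slowly
      beShare = Math.min(1.0, beShare + 0.05);
    }
    // otherwise: hold steady
  }

  double beShare() { return beShare; }
}
```

The asymmetry (grow slowly, revoke fast) is what lets a scheme like this chase high utilization without betting the SLO on it.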

Categories: Architecture

What Does it Mean to Poke a Complex System?

A little bit of follow up...

In How Can We Build Better Complex Systems? Containers, Microservices, And Continuous Delivery I had a question about what Mary Poppendieck meant when she talked about poking complex systems.

InfoQ interviewed Mary & Tom Poppendieck and found out the answer:

Categories: Architecture

Microservices architecture principle #6: One team is responsible for full life cycle of a Micro service

Xebia Blog - Tue, 06/02/2015 - 20:08

Microservices are a hot topic. Because of that a lot of people are saying a lot of things. To help organizations make the best of this new architectural style Xebia has defined a set of principles that we feel should be applied when implementing a Microservice Architecture. Today's blog is the last in a series about our Microservices principles. This blog explains why a Microservice should be the responsibility of exactly one team (but one team may be responsible for more services).

Being responsible for the full life cycle of a service means that a single team can deploy and manage a service as well as create new versions and retire obsolete ones. This means that users of the service have a single point of contact for all questions regarding the use of the service. This property makes it easier to track changes in a service. Developers can focus on a specific area of the business they are supporting so they will become specialists in that area. This in turn will lead to better quality. The need to also fix errors and problems in production systems is a strong motivator to make sure code works correctly and problems are easy to find.
Having different teams working on different services introduces a challenge that may lead to a more robust software landscape. If TeamA needs a change in TeamB's service in order to complete its task, some form of planning has to take place. Both teams have to cater for slipping schedules and unforeseen events that cause the delivery date of a feature to change. However, depending on a commitment made by another team is tricky; there are lots of valid reasons why a change may be late (e.g. production issues or illness temporarily reduces a team's capacity, or high-priority changes take precedence). So TeamA should never depend on TeamB finishing before the deadline. TeamA will learn to protect its weekends and evenings by changing its architecture. Not making assumptions about another team's schedule will, in a Microservice environment, therefore lead to more robust software.


Why You Don't Want to Aim for 100% Uptime According to Google's Urs Hölzle

Wait, you don't want 100% uptime? Who said such a crazy thing? Risk taker Urs Hölzle, senior VP for technical infrastructure, in Google's Infrastructure Chief Talks SDN: Whenever you try something new, there are going to be problems with it....We were willing to take the risk to get the innovation. Our VP who runs our site reliability gave a great talk about not aiming for 100% uptime....The easiest way to make it be at 100% is to resist change, because change is when bad things happen. Looks great for your SLA, but it's bad for your business because you slow down innovation.... In the first year of running B4, [we asked] "Will we have an outage?" Realistically, yes there's a high chance because it was all new code. Are we going to be perfect? Probably not. You have to have a willingness to take a little risk.
Categories: Architecture

Extracting software architecture from code

Coding the Architecture - Simon Brown - Tue, 06/02/2015 - 09:19

I'm running a short "In The Brain" session at Skills Matter in London next Monday evening, focused on the topic of extracting the software architecture model from code.

It’s often said that the code is the true embodiment of the software architecture, yet my experience suggests that it’s difficult to actually extract this information from the code. Why isn’t the architecture in the code? Join me for this "In The Brain" session where we’ll look at a simple Java web application to see what information we can extract from the code and how to supplement it with information we can't. This session will be interactive, so bring a laptop.

This will be an interactive session so you will need to bring a laptop, or at least something that you can browse a GitHub repository with. We'll be looking at a Java web application, but the concepts are applicable to other programming languages and platforms. See you next week.

Categories: Architecture