Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Architecture

Automatically launching and configuring an EC2 instance with ansible

Agile Testing - Grig Gheorghiu - Fri, 06/26/2015 - 20:53
Ansible makes it easy to take an EC2 instance from soup to nuts: launching the instance and configuring it. Here's a complete playbook I use for this purpose:

$ cat ec2-launch-instance-api.yml
---
- name: Create a new api EC2 instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: api
    instance_type: t2.small
    security_group: api-core
    image: ami-5189a661
    region: us-west-2
    vpc_subnet: subnet-xxxxxxx
    name_tag: api01
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ vpc_subnet }}"
        assign_public_ip: yes
        instance_tags:
          Name: "{{ name_tag }}"
      register: ec2

    - name: Add Route53 DNS record for this instance (overwrite if needed)
      route53:
        command: create
        zone: mycompany.com
        record: "{{ name_tag }}.mycompany.com"
        type: A
        ttl: 3600
        value: "{{ item.private_ip }}"
        overwrite: yes
      with_items: ec2.instances

    - name: Add new instance to proper ansible group
      add_host: hostname={{ name_tag }} groupname=api-servers ansible_ssh_host={{ item.private_ip }} ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/Users/grig.gheorghiu/.ssh/api.pem
      with_items: ec2.instances

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 search_regex=OpenSSH delay=210 timeout=420 state=started
      with_items: ec2.instances

- name: Configure api EC2 instance
  hosts: api-servers
  sudo: True
  gather_facts: True
  roles:
    - base
    - tuning
    - postfix
    - monitoring
    - nginx
    - api

The first thing I do in this playbook is to launch a new EC2 instance, add or update its Route53 DNS A record, add it to an ansible group and wait for it to be accessible via ssh. Then I configure this instance by applying a handful of roles to it. That's it.
Some things to note:
1) Ansible uses boto under the covers, so you need that installed on your local host, and you also need a ~/.boto configuration file with your AWS credentials:
[Credentials]
aws_access_key_id = xxxxx
aws_secret_access_key = yyyyyyyyyy
2) When launching an EC2 instance via the ansible ec2 module, the hosts variable should point to localhost and gather_facts should be set to False.
3) The various parameters expected by the EC2 API (keypair name, instance type, VPC subnet, security group, instance name tag etc.) can be set in the vars section and then used in the tasks section in the ec2 stanza.
4) I used the ansible route53 module for managing DNS. This module has a handy property called overwrite, which when set to yes will update a DNS record in place if it exists, or will create it if it doesn't exist.
5) The add_host task is very useful in that it adds the newly created instance to a hosts group, in my case api-servers. This host group has a group_vars/api-servers configuration file already, where I set various ansible variables used in different roles (mostly secret-type variables such as API keys, user names, passwords etc). The group_vars directory is NOT checked in.
6) In the final play of the playbook, the [api-servers] group (which consists of only the newly created EC2 instance) gets the respective roles applied to it. Why does this group only consist of the newly created EC2 instance? Because when I run the playbook with ansible-playbook, I indicate an empty hosts file to make sure this group is empty:
$ ansible-playbook -i hosts/myhosts.empty ec2-launch-instance-api.yml
If instead I wanted to also apply the specified roles to my existing EC2 instances in that group, I would specify a hosts file that already has those instances defined in the [api-servers] group.
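For illustration, the two inventory files mentioned above might look like this (the file names come from the post; the host entries themselves are placeholders I made up):

# hosts/myhosts.empty -- the group exists but is empty, so only the instance added via add_host gets configured
[api-servers]

# hosts/myhosts -- existing instances listed here would also get the roles applied
[api-servers]
api01 ansible_ssh_host=api01.mycompany.com
api02 ansible_ssh_host=api02.mycompany.com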

Git Subproject Compile-time Dependencies in Sbt

Xebia Blog - Fri, 06/26/2015 - 13:33

When creating an sbt project recently, I tried to include a project with no releases. This means that including it using libraryDependencies in the build.sbt does not work. An option is to clone the project and publish it locally, but this is tedious manual work that needs to be repeated every time the cloned project changes.

Most examples explain how to add a direct compile-time dependency on a git repository to sbt, but they just show how to add a single-project repository as a dependency using a RootProject. After some searching I found the solution for adding projects from a multi-project repository. Instead of RootProject, ProjectRef should be used. This allows for a second argument to specify the subproject in the repository.

This is my current project/Build.scala file:

import sbt.{Build, Project, ProjectRef, uri}

object GotoBuild extends Build {
  lazy val root = Project("root", sbt.file(".")).dependsOn(staminaCore, staminaJson, ...)

  lazy val staminaCore = ProjectRef(uri("git://github.com/scalapenos/stamina.git#master"), "stamina-core")
  lazy val staminaJson = ProjectRef(uri("git://github.com/scalapenos/stamina.git#master"), "stamina-json")
  ...
}

These subprojects are now a compile-time dependency and sbt will pull in and maintain the repository in ~/.sbt/0.13/staging/[sha]/stamina. So no manual checkout with local publish is needed. This is very handy when depending on an internal independent project/module, without needing to create a new release for every change. (One side note is that my IntelliJ currently does not recognize that the library is on the class/source path of the main project, so it complains it cannot find symbols and therefore cannot do proper syntax checking and auto completing.)

Inspirational Quotes, Inspirational Life Quotes, and Great Leadership Quotes

I know several people looking for inspiration.

I believe the right words ignite or re-ignite us.

There is no better way to prime your mind for great things to come than filling your head and heart with the greatest inspirational quotes that the world has ever known.

Of course, the challenge is finding the best inspirational quotes to draw from.

Well, here you go …

3 Great Inspirational Quotes Collections at Your Fingertips

I revamped a few of my best inspirational quotes collections to really put the gems of insight at your fingertips:

  1. Inspirational Quotes – light a fire from the inside out, or find your North Star that pulls you forward
  2. Inspirational Life Quotes -
  3. Great Leadership Quotes – learn what great leadership really looks like and how it helps lift others up

Each of these inspirational quotes collections is hand-crafted with deep words of wisdom, insight, and action.

You'll find inspirational quotes from Charles Dickens, Confucius, Dr. Seuss, George Bernard Shaw, Henry David Thoreau, Horace, Lao Tzu,  Lewis Carroll, Mahatma Gandhi, Oprah Winfrey, Oscar Wilde, Paulo Coelho, Ralph Waldo Emerson, Stephen King, Tony Robbins, and more.

You'll even find an inspirational quote from The Wizard of Oz (and it’s not “There’s no place like home.”)

Inspirational Quotes Jump Start

Here are a few of my favorite inspirational quotes to get you started:

“Courage doesn’t always roar. Sometimes courage is the quiet voice at the end of the day saying, ‘I will try again tomorrow.’”

Mary Anne Radmacher

“Do not follow where the path may lead. Go, instead, where there is no path and leave a trail.”

Ralph Waldo Emerson

“Don’t cry because it’s over, smile because it happened.”

Dr. Seuss

“It is not length of life, but depth of life.”

Ralph Waldo Emerson

“Life is not measured by the number of breaths you take, but by every moment that takes your breath away.”

Anonymous

“You live but once; you might as well be amusing.”

Coco Chanel

“It is never too late to be who you might have been.”

George Eliot

“Smile, breathe and go slowly.”

Thich Nhat Hanh

“What lies behind us and what lies before us are tiny matters compared to what lies within us.”

Ralph Waldo Emerson

These inspirational quotes are living breathing collections.  I periodically sweep them to reflect new additions, and I re-organize or re-style the quotes if I find a better way.

I invest a lot of time on quotes because I’ve learned the following simple truth:

Quotes change lives.

The right words, at the right time, can be just that little bit you need to break through, get unstuck, or find your mojo again.

Have you had your dose of inspiration today?

Categories: Architecture, Programming

Deploying monitoring tools with ansible

Agile Testing - Grig Gheorghiu - Fri, 06/26/2015 - 01:10
At my previous job, I used Chef for configuration management. New job, new tools, so I decided to use ansible, which I had played with before. Part of that was that I got sick of tools based on Ruby. Managing all the gems dependencies and migrating from one Ruby version to another was a nightmare that I didn't want to go through again. That's one reason why at my new job we settled on Go as the main language we use for our backend API layer.

Back to ansible. Since it's written in Python, it's already got good marks in my book. Plus it doesn't need a server and it's fairly easy to wrap your head around. I've been very happy with it so far.

For external monitoring, we use Pingdom because it works and it's cheap. We also use New Relic for application performance monitoring, but it's very expensive, so I've been looking at ways to supplement it with Open Source tools.

An announcement about telegraf drew my attention the other day: here was a metrics collection tool written in Go and sending its data to InfluxDB, which is a scalable database also written in Go and designed to receive time-series data. Seemed like a perfect fit. I just needed a way to display the data from InfluxDB. Cool, it looked like Grafana supports InfluxDB! It turns out however that Grafana support for the latest InfluxDB version 0.9.0 is experimental, i.e. doesn't really work. Plus telegraf itself has some rough edges in the way it tags the data it sends to InfluxDB. Long story short, after a whole day of banging my head against the telegraf/InfluxDB/Grafana wall, I decided to abandon this toolset.

Instead, I reached again to trusty old Graphite and its loyal companion statsd. I had problems with Graphite not scaling well before, but for now we're not sending it such a huge amount of metrics, so it will do. I also settled on collectd as the OS metric collector. It's small, easy to configure, and very stable. The final piece of the puzzle was a process monitoring and alerting tool. I chose monit for this purpose. Again: simple, serverless, small footprint, widely used, stable, easy to configure.

This seems like a lot of tools, but it's not really that bad if you have a good solid configuration management system in place -- ansible in my case.

Here are some tips and tricks specific to ansible for dealing with multiple monitoring tools that need to be deployed across various systems.

Use roles extensively

This is of course recommended no matter what configuration management system you use. With ansible, it's easy to use the command 'ansible-galaxy init rolename' to create the directory structure for a new role. My approach is to create a new role for each major application or tool that I want to deploy. Here are some of the roles I created:

  • a base role that adds users, deals with ssh keys and sudoers.d files, creates directory structures common to all servers, etc.
  • a tuning role that mostly configures TCP-related parameters in sysctl.conf
  • a postfix role that installs and configures postfix to use Amazon SES
  •  a go role that installs golang from source and configures GOPATH and GOROOT
  • an nginx role that installs nginx and deploys self-signed SSL certs for development/testing purposes
  • a collectd role that installs collectd and deploys (as an ansible template) a collectd.conf configuration file common to all systems, which sends data to graphite (the system name is customized as {{inventory_hostname}} in the write_graphite plugin); a sketch of such a template follows this list
  • a monit role that installs monit and deploys (again as an ansible template) a monitrc file that monitors resource metrics such as CPU, memory, disk etc. common to all systems
  • an api role that does the heavy lifting for the installation and configuration of the packages that are required by our API layer
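As a sketch, the templated collectd.conf deployed by the collectd role could look roughly like this (the plugin selection, file path and graphite host below are my assumptions; only the write_graphite/{{inventory_hostname}} idea comes from the post):

# templates/collectd.conf.j2 (hypothetical path inside the collectd role)
# the system name is taken from the ansible inventory
Hostname "{{ inventory_hostname }}"

LoadPlugin cpu
LoadPlugin memory
LoadPlugin df
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "graphite">
    Host "graphite.mydomain.com"
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
    StoreRates true
  </Node>
</Plugin>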
Use an umbrella 'monitoring' role
At first I was configuring each ansible playbook to use both the monit role and the collectd role. I realized that it's a bit more clear and also easier to maintain if instead playbooks use a more generic monitoring role, which does nothing but list monit and collectd as dependencies in its meta/main.yml file:

dependencies:
  - { role: monit }
  - { role: collectd }
Customize monitoring-related configuration files in other roles
A nice thing about both monit and collectd, and a main reason I chose them, is that they read configuration files from a directory: /etc/monit/conf.d for monit and /etc/collectd/collectd.conf.d for collectd. This makes it easy for each role to add its own configuration files. For example, the api role adds 2 files as custom checks in /etc/monit/conf.d: check_api and check_nginx. It also adds 2 files as custom metric collectors in /etc/collectd/collectd.conf.d: nginx.conf and memcached.conf. The api role does this via a file called tasks/monitoring.yml which gets included in tasks/main.yml.
As another example, the nginx role also adds its own check_nginx configuration file to /etc/monit/conf.d via a tasks/monitoring.yml file.
The rule of thumb I arrived at is this: each low-level role such as monit and collectd installs the common configuration files needed by all other roles, whereas each higher-level role such as api installs its own custom checks and metric collectors via a monitoring.yml task file. This way, it's easy to see at a glance what each high-level role does for monitoring: just look in its monitoring.yml task file.
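As a sketch of what that looks like, the api role's tasks/monitoring.yml could contain something along these lines (the file names come from the post; the exact tasks and the notify handlers are my assumptions):

# roles/api/tasks/monitoring.yml (layout assumed)
- name: Add custom monit checks for the api and nginx processes
  copy: src={{ item }} dest=/etc/monit/conf.d/{{ item }}
  with_items:
    - check_api
    - check_nginx
  notify: restart monit

- name: Add custom collectd metric collectors
  copy: src={{ item }} dest=/etc/collectd/collectd.conf.d/{{ item }}
  with_items:
    - nginx.conf
    - memcached.conf
  notify: restart collectd

# roles/api/tasks/main.yml then just includes it:
- include: monitoring.yml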
To wrap this post up, here is an example of a playbook I use to build API servers:
$ cat api-servers.yml
---
- hosts: api-servers
  sudo: yes
  roles:
    - base
    - tuning
    - postfix
    - monitoring
    - nginx
    - api
I call this playbook with:
$ ansible-playbook -i hosts/myhosts api-servers.yml
To make this work, the hosts/myhosts file has a section similar to this:
[api-servers]
api01 ansible_ssh_host=api01.mydomain.com
api02 ansible_ssh_host=api02.mydomain.com




New Azure Billing APIs Available

ScottGu's Blog - Scott Guthrie - Thu, 06/25/2015 - 06:59

Organizations moving to the cloud can achieve significant cost savings.  But to achieve the maximum benefit you need to be able to accurately track your cloud spend in order to monitor and predict your costs. Enterprises need to be able to get detailed, granular consumption data and derive insights to effectively manage their cloud consumption.

I’m excited to announce the public preview release of two new Azure Billing APIs today: the Azure Usage API and Azure RateCard API which provide customers and partners programmatic access to their Azure consumption and pricing details:

Azure Usage API – A REST API that customers and partners can use to get their usage data for an Azure subscription. As part of this new Billing API we now correlate the usage/costs by the resource tags you can now set on your Azure resources (for example: you could assign a tag “Department abc” or “Project X” to a VM or Database in order to better track spend on a resource and charge it back to an internal group within your company). To get more details, please read the MSDN page on the Usage API. Enterprise Agreement (EA) customers can also use this API to get a more granular view into their consumption data, and to complement what they get from the EA Billing CSV.

Azure RateCard API – A REST API that customers and partners can use to get the list of the available resources they can use, along with metadata and price information about them. To get more details, please read the MSDN page on the RateCard API.
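As a rough sketch of what calling these REST APIs looks like with an Azure AD bearer token (the endpoint paths, api-version values and filter fields below are from my recollection of the preview release and should be double-checked against the MSDN pages mentioned above):

# Usage API: aggregated usage for a subscription over a reported time window (endpoint assumed)
curl -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/Microsoft.Commerce/UsageAggregates?api-version=2015-06-25-preview&reportedStartTime=2015-06-01T00:00:00Z&reportedEndTime=2015-06-15T00:00:00Z"

# RateCard API: available resources and prices, filtered by offer, currency, locale and region
# (the $filter value needs URL encoding in practice)
curl -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/Microsoft.Commerce/RateCard?api-version=2015-06-25-preview&\$filter=OfferDurableId eq 'MS-AZR-0003P' and Currency eq 'USD' and Locale eq 'en-US' and RegionInfo eq 'US'"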

You can start taking advantage of both of these APIs today.  You can write your own custom code that uses the APIs to construct your own custom reports, or alternatively you can also now take advantage of pre-built bill tracking systems provided by our partners which already integrate the APIs into their existing solutions.

Partner Solutions

Two of our Azure Billing partners (Cloudyn and Cloud Cruiser) have already integrated the new Billing APIs into their products:

Cloudyn has integrated with Azure Billing APIs to provide IT financial management insights on cost optimization. You can read more about their integration experience in Microsoft Azure Billing APIs enable Cloudyn to Provide ITFM for Customers.

Cloud Cruiser has integrated with the Azure RateCard API to provide an estimate of what it would cost the customer to run the same workloads on Azure. They are also working on integrating with the Azure Usage API to provide insights based on the Azure consumption. You can read more about their integration in Cloud Cruiser and Microsoft Azure Billing API Integration.

You can adopt one or both of the above solutions immediately and use them to better track your Azure bill without having to write a single line of code.


Cloudyn's integration enables you to view and query the breakdown of Azure usage by resource tags (e.g. “Dev/Test”, “Department abc”, “Project X”):


Cloudyn's integration showing trend of estimated charges over time:


Cloud Cruiser's integration to show estimated cost of running workload on Azure:  

Using the Billing APIs directly

You can also use the new Billing APIs directly to write your own custom reports and billing tracking logic.  To get started with the APIs, you can leverage the code samples on Github.

The Billing APIs leverage the new Azure Resource Manager, use Azure Active Directory for authentication, and follow the Azure Role-based access control policies. The code samples we’ve published show a variety of common scenarios and how to integrate this logic end to end.

Summary

The new Azure Billing APIs make it much easier to track your bill and save money.

As always, please reach out to us on the Azure Feedback forum and through the Azure MSDN forum.

Hope this helps,

Scott

Categories: Architecture, Programming

Software architecture workshops in Australia during August

Coding the Architecture - Simon Brown - Mon, 06/22/2015 - 08:02

This is a quick update on my upcoming trip to Australia ... in conjunction with the lovely folks behind the YOW! conference, we've scheduled two public software architecture sketching workshops as follows.

This workshop addresses one of the major problems I've evidenced during my career in software development; namely that people find it really hard to describe, communicate and visualise the design of their software. Sure, we have UML, but very few people use it and instead resort to an ad hoc "boxes and lines" notation. This is fine, but the resulting diagrams (as illustrated below) need to make sense and they rarely do in my experience.

Some typical software architecture sketches

My workshop explores this problem from a number of different perspectives, not least by giving you an opportunity to practice these skills yourself. My Voxxed article titled Simple Sketches for Diagramming Your Software Architecture provides an introduction to this topic and the C4 model that I use. I've been running this workshop in various formats for nearly 10 years now and the techniques we'll cover have proven invaluable for software development teams around the world in the following situations:

  • Doing (just enough) up front design on whiteboards.
  • Communicating software design retrospectively before architecture refactoring.
  • Explaining how the software works when somebody new joins the team.
  • Documenting/recording the software architecture in something like a software architecture document or software guidebook.
"Really surprising training! I expected some typical spoken training about someones revolutionary method and I found a hands-on training that used the "do-yourself-and-fail" approach to learn the lesson, and taught us a really valuable, reasonable and simple method as an approach to start the architecture of a system. Highly recommended!" (an attendee from the software architecture sketching workshop at GOTO Amsterdam 2014)

Attendees will also receive a copy of my Software Architecture for Developers ebook and a year subscription to Structurizr. Please do feel free to e-mail me with any questions. See you in Australia!

Categories: Architecture

Leadership Skills for Making Things Happen

"A leader is one who knows the way, goes the way, and shows the way." -- John C. Maxwell

How many people do you know that talk a good talk, but don’t walk the walk?

Or, how many people do you know have a bunch of ideas that you know will never see the light of day?  They can pontificate all day long, but the idea of turning those ideas into work that could be done, is foreign to them.

Or, how many people do you know can plan all day long, but their plan is nothing more than a list of things that will never happen?  Worse, maybe they turn it into a team sport, and everybody participates in the planning process of all the outcomes, ideas and work that will never happen. (And, who exactly wants to be accountable for that?)

It doesn’t need to be this way.

A lot of people have Hidden Strengths they can develop into Learned Strengths.   And one of the most important buckets of strengths is Leading Implementation.

Leading Implementation is a set of leadership skills for making things happen.

It includes the following leadership skills:

  1. Coaching and Mentoring
  2. Customer Focus
  3. Delegation
  4. Effectiveness
  5. Monitoring Performance
  6. Planning and Organizing
  7. Thoroughness

Let’s say you want to work on these leadership skills.  The first thing you need to know is that these are not elusive skills reserved exclusively for the elite.

No, these are commonly Hidden Strengths that you and others around you already have, and they just need to be developed.

If you don’t think you are good at any of these, then before you rule yourself out, and scratch them off your list, you need to ask yourself some key reflective questions:

  1. Do you know what good actually looks like?  Who are your role models?  What do they do differently than you, and is it really might and magic, or do they simply use behaviors or techniques that you could learn, too?
  2. How much have you actually practiced?   Have you really spent any sort of time working at the particular skill in question?
  3. How did you create an effective feedback loop?  So many people rapidly improve when they figure out how to create an effective learning loop and an effective feedback loop.
  4. Who did you learn from?  Are you expecting yourself to just naturally be skilled?  Really?  What if you found a good mentor or coach, one that could help you create an effective learning loop and feedback loop, so you can improve and actually chart and evaluate your progress?
  5. Do you have a realistic bar?  It’s easy to fall into the trap of “all or nothing.”   What if instead of focusing on perfection, you focused on progress?   Could a little improvement in a few of these areas, change your game in a way that helps you operate at a higher level?

I’ve seen far too many starving artists and unproductive artists, as well as mad scientists, that had brilliant ideas that they couldn’t turn into reality.  While some were lucky to pair with the right partners and bring their ideas to life, I’ve actually seen another pattern of productive artists.

They develop some of the basic leadership skills in themselves to improve their ability to execute.

Not only are they more effective on the job, but they are happier with their ability to express their ideas and turn their ideas into action.

Even better, when they partner with somebody who has strong execution, they amplify their impact even more because they have a better understanding and appreciation of what it takes to execute ideas.

Like talk, ideas are cheap.

The market rewards execution.

Categories: Architecture, Programming

How to deploy composite Docker applications with Consul key values to CoreOS

Xebia Blog - Fri, 06/19/2015 - 16:34

Most examples of deploying Docker applications to CoreOS use a single Docker application. But as soon as you have an application that consists of more than one unit, the number of commands you have to type soon becomes annoying. At Xebia we have a best practice that says "Three strikes and you automate", mandating that the third time you do something similar, you automate. In this blog I share the manual page of a utility called fleetappctl that allows you to perform rolling upgrades and deploy Consul key value pairs of composite applications to CoreOS, and show three examples of its usage.


fleetappctl is a utility that allows you to manage a set of CoreOS fleet unit files as a single application. You can start, stop and deploy the application. fleetappctl is idempotent and does rolling upgrades on template files with multiple instances running. It can substitute placeholders at deployment time and it is able to deploy Consul key value pairs as part of your application. Using fleetappctl you have everything you need to create a self-contained deployment unit of your composite application and put it under version control.

The command line options to fleetappctl are shown below:

fleetappctl [-d deployment-descriptor-file]
            [-e placeholder-value-file]
            (generate | list | start | stop | destroy)
option -d

The deployment descriptor file describes all the fleet unit files and Consul key-value pair files that make up the application. All the files referenced in the deployment descriptor may have placeholders for deployment-time values. These placeholders are enclosed in double curly brackets {{ }}.

option -e

The file contains the values for the placeholders to be used on deployment of the application. The file has a simple format:

<name>=<value>
start

starts all units in the order as they appear in the deployment descriptor. If you have a template unit file, you can specify the number of instances you want to start. Start is idempotent, so you may call start multiple times. Start will bring the deployment inline with your descriptor.

If the unit file has changed with respect to the deployed unit file, the corresponding instances will be stopped and restarted with the new unit file. If you have a template file, the instances of the template file will be upgraded one by one.

Any consul key value pairs as defined by the consul.KeyValuePairs entries are created in Consul. Existing values are not overwritten.

generate

generates a deployment descriptor (deployit-manifest.xml) based upon all the unit files found in your directory. If a file is a fleet unit template file the number of instances to start is set to 2, to support rolling upgrades.

stop

stops all units in reverse order of their appearance in the deployment descriptor.

destroy

destroys all units in reverse order of their appearance in the deployment descriptor.

list

lists the runtime status of the units that appear in the deployment descriptor.

Install fleetappctl

To install the fleetappctl utility, type the following commands:

curl -q -L https://github.com/mvanholsteijn/fleetappctl/archive/0.25.tar.gz | tar -xzf -
cd fleetappctl-0.25
./install.sh
brew install xmlstarlet
brew install fleetctl
Start the platform

If you do not have the platform running, start it first.

cd ..
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service
git checkout 029d3dd8e54a5d0b4c085a192c0ba98e7fc2838d
cd vagrant
vagrant up
./is_platform_ready.sh
Example - Three component web application


The first example is a three-component application. It consists of a mount, a Redis database service and a web application. We generate the deployment descriptor, indicate we do not want to start the mount, start the application and then modify the web application unit file to change the service name into 'helloworld'. We perform a rolling upgrade by issuing start again. Finally we list, stop and destroy the application.

cd ../fleet-units/app
# generate a deployment descriptor
fleetappctl generate

# do not start mount explicitly
xml ed -u '//fleet.UnitConfigurationFile[@name="mnt-data"]/startUnit' \
       -v false deployit-manifest.xml > \
        deployit-manifest.xml.new
mv deployit-manifest.xml{.new,}

# start the app
fleetappctl start 

# Check it is working
curl hellodb.127.0.0.1.xip.io:8080
curl hellodb.127.0.0.1.xip.io:8080

# Change the service name of the application in the unit file
sed -i -e 's/SERVICE_NAME=hellodb/SERVICE_NAME=helloworld/' app-hellodb@.service

# do a rolling upgrade
fleetappctl start 

# Check it is now accessible on the new service name
curl helloworld.127.0.0.1.xip.io:8080

# Show all units of this app
fleetappctl list

# Stop all units of this app
fleetappctl stop
fleetappctl list

# Restart it again
fleetappctl start

# Destroy it
fleetappctl destroy
Example - placeholder references

This example shows the use of a placeholder reference in the unit file of the paas-monitor application. The application takes two optional environment variables, RELEASE and MESSAGE, that allow you to configure the resulting responses. The variable RELEASE is configured in the Docker run command in the fleet unit file through a placeholder. The actual value for the current deployment is taken from a placeholder value file.

cd ../fleetappctl-0.25/examples/paas-monitor
#check out the placeholder reference
grep '{{' paas-monitor@.service

...
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
 --env RELEASE={{release}} \
...
# checkout our placeholder values
cat dev.env
...
release=V2
# start the app
fleetappctl -e dev.env start

# show current release in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start is idempotent (ie. nothing happens)
fleetappctl -e dev.env start

# update the placeholder value and see a rolling upgrade in the works
echo 'release=V3' > dev.env
fleetappctl -e dev.env start
curl paas-monitor.127.0.0.1.xip.io:8080/status

fleetappctl destroy
Example - Env Consul Key Value Pair deployments


The final example shows the use of a Consul Key Value Pair, the use of placeholders and envconsul to dynamically update the environment variables of a running instance. The environment variables RELEASE and MESSAGE are taken from the keys under /paas-monitor in Consul. In turn, the initial values of these keys are set on first deployment using values from the placeholder file.

cd ../fleetappctl-0.25/examples/envconsul

#check out the Consul Key Value pairs, and notice the reference to placeholder values
cat keys.consul
...
paas-monitor/release={{release}}
paas-monitor/message={{message}}

# checkout our placeholder values
cat dev.env
...
release=V4
message=Hi guys
# start the app
fleetappctl -e dev.env start

# show current release and message in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# Change the message in Consul
fleetctl ssh paas-monitor@1 \
    curl -X PUT \
    -d \'hello Consul\' \
    http://172.17.8.101:8500/v1/kv/paas-monitor/message

# checkout the changed message
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start does not change the values..
fleetappctl -e dev.env start
Conclusion

CoreOS provides all the basic functionality for a Container Platform as a Service. With the utility fleetappctl it becomes easy to start, stop and upgrade composite applications. The script is complementary to fleetctl and does not break other ways of deploying your applications to CoreOS.

The source code, manual page and documentation of fleetappctl can be found on https://github.com/mvanholsteijn/fleetappctl.

 

Diff'ing software architecture diagrams again

Coding the Architecture - Simon Brown - Fri, 06/19/2015 - 12:42

In Diff'ing software architecture diagrams, I showed that creating a software architecture model with a textual format provides you with the ability to version control and diff different versions of the model. As a follow-up, somebody asked me whether Structurizr provides a way to recreate what Robert Annett originally posted in Diagrams for System Evolution. In other words, can the colours of the lines be changed? As you can see from the images below, the answer is yes.

Before

To do this, you simply add some tags to the relationships and add the appropriate styles to the view configuration. structurizr.com will even auto-generate a key for you.
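As a rough illustration of what that looks like with the Structurizr for Java client library (a minimal sketch; the element names, tag and colour are mine, not from Robert's original diagrams, and the API calls reflect my reading of the library):

import com.structurizr.Workspace;
import com.structurizr.model.Model;
import com.structurizr.model.Relationship;
import com.structurizr.model.SoftwareSystem;
import com.structurizr.view.Styles;
import com.structurizr.view.SystemContextView;
import com.structurizr.view.ViewSet;

public class DiffExample {
    public static void main(String[] args) {
        Workspace workspace = new Workspace("System Evolution", "Before and after");
        Model model = workspace.getModel();

        SoftwareSystem mySystem = model.addSoftwareSystem("My System", "The system being changed");
        SoftwareSystem provider = model.addSoftwareSystem("Data Provider", "An external data feed");

        // tag the relationship that is changing between the two versions of the model
        Relationship feed = mySystem.uses(provider, "Gets data from");
        feed.addTags("Removed");

        ViewSet views = workspace.getViews();
        SystemContextView view = views.createSystemContextView(mySystem, "context", "System context diagram");
        view.addAllElements();

        // colour the tagged relationships so the change stands out (and appears in the generated key)
        Styles styles = views.getConfiguration().getStyles();
        styles.addRelationshipStyle("Removed").color("#ff0000").dashed(true);
    }
}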

Diagram key

And yes, you can do the same with elements too. As this illustrates, the choice of approach is yours.

Categories: Architecture

Startup Thinking

“Startups don't win by attacking. They win by transcending.  There are exceptions of course, but usually the way to win is to race ahead, not to stop and fight.” -- Paul Graham

A startup is the largest group of people you can convince to build a different future.

Whether you launch a startup inside a big company or launch a startup as a new entity, there are a few things that determine the strength of the startup: a sense of mission, space to think, new thinking, and the ability to do work.

The more clarity you have around Startup Thinking, the more effective you can be whether you are starting startups inside or outside of a big company.

In the book, Zero to One: Notes on Startups, or How to Build the Future, Peter Thiel shares his thoughts about Startup Thinking.

Startups are Bound Together by a Sense of Mission

It’s the mission.  A startup has an advantage when there is a sense of mission that everybody lives and breathes.  The mission shapes the attitudes and the actions that drive towards meaningful outcomes.

Via Zero to One: Notes on Startups, or How to Build the Future:

“New technology tends to come from new ventures--startups.  From the Founding Fathers in politics to the Royal Society in science to Fairchild Semiconductor's ‘traitorous eight’ in business, small groups of people bound together by a sense of mission have changed the world for the better.  The easiest explanation for this is negative: it's hard to develop new things in big organizations, and it's even harder to do it by yourself.  Bureaucratic hierarchies move slowly, and entrenched interests shy away from risk.” 

Signaling Work is Not the Same as Doing Work

One strength of a startup is the ability to actually do work.  With other people.  Rather than just talk about it, plan for it, and signal about it, a startup can actually make things happen.

Via Zero to One: Notes on Startups, or How to Build the Future:

“In the most dysfunctional organizations, signaling that work is being done becomes a better strategy for career advancement than actually doing work (if this describes your company, you should quit now).  At the other extreme, a lone genius might create a classic work of art or literature, but he could never create an entire industry.  Startups operate on the principle that you need to work with other people to get stuff done, but you also need to stay small enough so that you actually can.”

New Thinking is a Startup’s Strength

The strength of a startup is new thinking.  New thinking is even more valuable than agility.  Startups provide the space to think.

Via Zero to One: Notes on Startups, or How to Build the Future:

“Positively defined, a startup is the largest group of people you can convince of a plan to build a different future.  A new company's most important strength is new thinking: even more important than nimbleness, small size affords space to think.  This book is about the questions you must ask and answer to succeed in the business of doing new things: what follows is not a manual or a record of knowledge but an exercise in thinking.  Because that is what a startup has to do: question received ideas and rethink business from scratch.”

Do you have stinking thinking or do you have a beautiful mind?

New thinking will take you places.

You Might Also Like

How To Get Innovation to Succeed Instead of Fail

Management Innovation is at the Top of the Innovation Stack

The Innovation Revolution

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Categories: Architecture, Programming

Diff'ing software architecture diagrams

Coding the Architecture - Simon Brown - Wed, 06/17/2015 - 14:02

Robert Annett wrote a post titled Diagrams for System Evolution where he describes a simple approach to showing how to visually describe changes to a software architecture. In essence, in order to show how a system is to change, he'll draw different versions of the same diagram and use colour-coding to highlight the elements/relationships that will be added, removed or modified.

I've typically used a similar approach for describing as-is and to-be architectures in the past too. It's a technique that works well. Although you can version control diagrams, it's still tricky to diff them using a tool. One solution that addresses this problem is to not create diagrams, but instead create a textual description of your software architecture model that is subsequently rendered with some tooling. You could do this with an architecture description language (such as Darwin), although I would much rather use my regular programming language instead.

Creating a software architecture model as code

This is exactly what Structurizr is designed to do. I've recreated Robert's diagrams with Structurizr as follows.

Before

And since the diagrams were created by a model described as Java code, that description can be diff'ed using your regular toolchain.

The diff between the two model versions

Code provides opportunities

This perhaps isn't as obvious as Robert's visual approach, and I would likely still highlight the actual differences on diagrams using notation as Robert did too. Creating a textual description of a software architecture model does provide some interesting opportunities though.

Categories: Architecture

How to Configure Alerts to Prevent Too Many Server Notifications

This is a guest post by Ashish Mohindroo, who leads Product for Happy Apps, a new uptime and performance monitoring system.

It isn't uncommon for system administrators to receive a stream of alarms for situations that either can wait to be addressed, will remedy themselves, or simply weren't problems to begin with. The other side of the equation is missing the reports that indicate a problem that has to be addressed right away. Options for customizing server notifications let you decide the conditions that trigger alerts, set the level of alerts, and choose the recipients based on each alert's importance.

The only thing worse than receiving too many notifications is not receiving the one alert that would keep a small glitch from becoming a big problem. Preventing over-notification requires fine-tuning system alerts so that the right people find out about problems and potential problems at the right time. Here's a three-step approach to customizing alerts:

Categories: Architecture

boot2docker on xhyve

Xebia Blog - Mon, 06/15/2015 - 09:00

xhyve is a new hypervisor in the vein of KVM on Linux and bhyve on BSD. It's actually a port of BSD's bhyve to OS X, and more similar to KVM than to VirtualBox in that it's minimal and command-line only, which makes it a good fit for an always-running virtual machine like boot2docker on OS X.

This post documents the steps to get boot2docker running within xhyve and contains some quick benchmarks to compare xhyve’s performance with Virtualbox.

Read the full blogpost on Simon van der Veldt's website

Using UIPageViewControllers with Segues

Xebia Blog - Mon, 06/15/2015 - 07:30

I've always wondered how to configure a UIPageViewController much like you configure a UITabBarController inside a Storyboard. It would be nice to create standard embed segues to the view controllers that make up the pages. Unfortunately, such a thing is not possible and currently you can't do a whole lot in a Storyboard with page view controllers.

So I thought I'd create a custom way of connecting pages through segues. This post explains how.

Without segues

First let's create an example without using segues and then later we'll try to modify it to use segues.

(Storyboard screenshot showing the two scenes)

In the above Storyboard we have 2 scenes. One page view controller and another for the individual pages, the content view controller. The page view controller will have 4 pages that each display their page number. It's about as simple as a page view controller can get.

Below the code of our simple content view controller:

class MyContentViewController: UIViewController {

  @IBOutlet weak var pageNumberLabel: UILabel!

  var pageNumber: Int!

  override func viewDidLoad() {
    super.viewDidLoad()
    pageNumberLabel.text = "Page \(pageNumber)"
  }
}

That means that our page view controller just needs to instantiate an instance of our MyContentViewController for each page and set the pageNumber. And since there is no segue between the two scenes, the only way to create an instance of the MyContentViewController is programmatically with the UIStoryboard.instantiateViewControllerWithIdentifier(_:). Of course for that to work, we need to give the content view controller scene an identifier. We'll choose MyContentViewController to match the name of the class.

Our page view controller will look like this:

class MyPageViewController: UIPageViewController {

  let numberOfPages = 4

  override func viewDidLoad() {
    super.viewDidLoad()

    setViewControllers([createViewController(1)], 
      direction: .Forward, 
      animated: false, 
      completion: nil)

    dataSource = self
  }

  func createViewController(pageNumber: Int) -> UIViewController {
    let contentViewController = 
      storyboard?.instantiateViewControllerWithIdentifier("MyContentViewController") 
      as! MyContentViewController
    contentViewController.pageNumber = pageNumber
    return contentViewController
  }

}

extension MyPageViewController: UIPageViewControllerDataSource {
  func pageViewController(pageViewController: UIPageViewController, viewControllerAfterViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber, 
      numberOfPages) + 1)
  }

  func pageViewController(pageViewController: UIPageViewController, viewControllerBeforeViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber - 2, 
      numberOfPages) + 1)
  }
}

func mod(x: Int, m: Int) -> Int{
  let r = x % m
  return r < 0 ? r + m : r
}

Here we created the createViewController(_:) method that has the page number as argument and creates the instance of MyContentViewController and sets the page number. This method is called from viewDidLoad() to set the initial page and from the two UIPageViewControllerDataSource methods that we're implementing here to get to the next and previous page.

The custom mod(_:_:) function is used to have a continuous page navigation where the user can go from the last page to the first and vice versa (the built-in % operator does not do the true modulo operation that we need here).

Using segues

The above sample is pretty simple. So how can we change it to use a segue? First of all we need to create a segue between the two scenes. Since we're not doing anything standard here, it will have to be a custom segue. Now we need a way to get an instance of our content view controller through the segue, which we can use from our createViewController(_:). The only method we can use to do anything with the segue is UIViewController.performSegueWithIdentifier(_:sender:). We know that calling that method will create an instance of our content view controller, which is the destination of the segue, but we then need a way to get this instance back into our createViewController(_:) method. The only place we can reference the new content view controller instance is from within the custom segue. From its init method we can set it to a variable which createViewController(_:) can also access. That looks something like the following. First we create the variable:

  var nextViewController: MyContentViewController?

Next we create a new custom segue class that assigns the destination view controller (the new MyContentViewController) to this variable.

public class PageSegue: UIStoryboardSegue {

  public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
    super.init(identifier: identifier, source: source, destination: destination)
    if let pageViewController = source as? MyPageViewController {
      pageViewController.nextViewController = 
        destinationViewController as? MyContentViewController
    }
  }

  public override func perform() {}
    
}

Since we're only interested in getting the reference to the created view controller we don't need to do anything extra in the perform() method. And the page view controller itself will handle displaying the pages so our segue implementation remains pretty simple.

Now we can change our createViewController(_:) method:

func createViewController(pageNumber: Int) -> UIViewController {
  performSegueWithIdentifier("Page", sender: self)
  nextViewController!.pageNumber = pageNumber
  return nextViewController!
}

The method looks a bit odd since we're never assigning nextViewController anywhere in this view controller. We're also relying on the fact that the segue is performed synchronously by the performSegueWithIdentifier call; otherwise this wouldn't work.

Now we can create the segue in our Storyboard. We need to give it the same identifier as we used above and set the Segue Class to PageSegue

(Storyboard screenshot of the segue attributes)

Generic class

Now we can finally create segues to visualise the relationship between page view controller and content view controller. But let's see if we can write a generic class that has most of the logic which we can reuse for each UIPageViewController.

We'll create a class called SeguePageViewController which will be the super class for our MyPageViewController. We'll move the PageSegue to the same source file and refactor some code to make it more generic:

public class SeguePageViewController: UIPageViewController {

    var pageSegueIdentifier = "Page"
    var nextViewController: UIViewController?

    public func createViewController(sender: AnyObject?) -> UIViewController {
        performSegueWithIdentifier(pageSegueIdentifier, sender: sender)
        return nextViewController!
    }

}

public class PageSegue: UIStoryboardSegue {

    public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
        super.init(identifier: identifier, source: source, destination: destination)
        if let pageViewController = source as? SeguePageViewController {
            pageViewController.nextViewController = destinationViewController as? UIViewController
        }
    }

    public override func perform() {}
    
} 

As you can see, we moved the nextViewController variable and createViewController(_:) to this class and use UIViewController instead of our concrete MyContentViewController class. We also introduced a new variable pageSegueIdentifier to be able to change the identifier of the segue.

The only thing missing now is setting the pageNumber of our MyContentViewController. Since we just made things generic we can't set it from here, so what's the best way to deal with that? You might have noticed that the createViewController(_:) method now has a sender: AnyObject? as argument, which in our case is still the page number. And we know another method that receives this sender object: prepareForSegue(_:sender:). All we need to do is implement this in our MyPageViewController class.

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if segue.identifier == pageSegueIdentifier {
    let vc = segue.destinationViewController as! MyContentViewController
    vc.pageNumber = sender as! Int
  }
}

If you're surprised to see the sender argument being used this way you might want to read my previous post about Understanding the ‘sender’ in segues and use it to pass on data to another view controller

Conclusion

Whether or not you'd want to use this approach of using segues for page view controllers might be a matter of personal preference. It doesn't give any huge benefits but it does give you a visual indication of what's happening in the Storyboard. It doesn't really save you any code since the amount of code in MyPageViewController remained about the same. We just replaced the createViewController(_:) method with the prepareForSegue(_:sender:) method.

It would be nice if Apple offers a better solution that depends less on the data source of a page view controller and let you define the paging logic directly in the storyboard.

3 easy ways of improving your Kanban system

Xebia Blog - Sun, 06/14/2015 - 21:10

You are working in a Kanban team and after a few months it doesn’t seem to be as easy and effective as it used to be. Here are three tips on how to get that energy back up.

1. Recheck and refresh all policies

New team members haven’t gone through the same process as the other team members. This means that they probably won’t pick up on the subtleties of the board and the policies that go with it. Over time, team agreements that were very explicit (remember the Kanban practice “Make Policies Explicit”?) stop being explicit. For example: a blue stickie that used to have a class of service associated with it becomes a stickie for the person that works on the project. Unclear policies lead to loss of focus, confusion and lack of flow. Check whether all policies are still valid, clear and understood by everybody. If not, get rid of them or improve them!

2. Check if WIP limits are understood and adhered to

In the period right after a team starts using Kanban, Work in Progress (WIP) limits are not completely optimized for flow. They are usually set higher to minimize resistance from the team and stakeholders, as tighter WIP limits require a higher maturity of the organization. The team should then start to adjust (lower) the limits to optimize flow. A lack of team focus on improving flow can mean that the WIP limits never get adjusted.

Reevaluate the current WIP limits, adjust them for bottlenecks, lower them where possible and discuss next steps. It will return the focus to helping the team “Start Finishing”.

3. Do a complete overhaul of the board

It sounds counterintuitive, but sometimes it helps to take the board and do a complete run-through of the steps to design a Kanban system. Analyze up- and downstream stakeholders and model the steps the work flows through. After that, update the legend and the policies for classes of service.

Even if the board looks almost the same, the discussion and analysis might lead to subtle changes that make a world of difference. And it’s good for team spirit!

 

I hope these tips can help you. Please share any that you have in the comments!

ATDD with Lego Mindstorms EV3

Xebia Blog - Fri, 06/12/2015 - 20:09

We have been building automated acceptance tests using web browsers, mobile devices and web services for several years now. Last week Erik Zeedijk and I came up with the (crazy) idea to implement features for a robot using ATDD. In this blog we will explain how we used ATDD while experimenting with Lego Mindstorms EV3.

What are we building?

When you practice ATDD it is really important to have a common understanding about what you need to build and test. So we had several conversations about requirements and specifications.

We wanted to build a robot that could help us clean up post-its from tables or floors after brainstorms and other sticky note related sessions. For our first iteration we decided to experiment with moving the robot and using the color scanner. The robot should search for yellow post-it notes all by itself. No manual interactions because we hate manual work. After the conversation we wrote down the following acceptance criteria to scope our development activities:

  • The robot needs to move around on a white square board
  • The robot needs to turn away from the edges of the square board
  • The robot needs to stop at a yellow post-it

We used Cucumber to write down the features and acceptance test scenarios like:


Feature: Move robot
Scenario: Robot turns when a purple line is detected
When the robot is moving and encounters a purple line
Then the robot turns left

Scenario: Robot stops moving when yellow post-it detected
When the robot is moving and encounters yellow post-it
Then the robot stops

Just enough information to start rocking with the robot! Ready? Yes! Tests? Yes! Let's go!

How did we build it?

First we played around with the Lego Mindstorms EV3 programming software (a 4GL tool). We could easily build the application by clicking our way through the conditions, loops and color detections.

But we couldn't find a way to hook up our acceptance tests, so we decided to look for programming languages. The first hit was leJOS. It comes with custom firmware which you install on an SD card. This firmware allows you to run Java on the Lego EV3 brick.

Installing the firmware on the EV3 Brick

The EV3 main computer “Brick” is actually pretty powerful.

It has a 300 MHz ARM processor with 64MB of RAM and takes micro SD cards for storage.
It also has Bluetooth and Wi-Fi making it possible to wirelessly connect to it and load programs on it.

There are several custom firmwares you can run on the EV3 Brick. The leJOS firmware allows us to do just that: we can program in Java with access to a specialized API created for the EV3 Brick. Install it by following these steps:

  1. Download the latest firmware on sourceforge: http://sourceforge.net/projects/ev3.lejos.p/files/
  2. Get a blank SD card (2GB – 32GB, SDXC not supported) and format this card to FAT32.
  3. Unzip the ‘lejosimage.zip’ file from the leJOS home directory to the root directory of the SD card.
  4. Download Java for Lego Mindstorms EV3 from the Oracle website: http://java.com/legomindstorms
  5. Copy the downloaded ‘Oracle JRE.tar.gz’ file to the root of the SD card.
  6. Safely remove the SD card from your computer, insert it into the EV3 Brick and boot the EV3. The first boot will take approximately 8 minutes while it creates the partitions needed for the EV3 Brick to work.
ATDD with Cucumber

Cucumber helped us create scenarios in a human-readable language. We created the step definitions to glue all the automation together, and used remote access to the EV3 via Java RMI.

We also wrote some support code to help us make the automation code readable and maintainable.
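To give an impression of that support code, below is a minimal sketch of an Ev3BrickIO-style class, assuming leJOS's RemoteEV3/RMIRegulatedMotor remote API. It is not our actual implementation: the IP address and motor ports are illustrative placeholders.

import java.rmi.RemoteException;

import lejos.remote.ev3.RMIRegulatedMotor;
import lejos.remote.ev3.RemoteEV3;

// Minimal sketch of an Ev3BrickIO-style support class (not the actual implementation).
// The IP address and motor ports below are illustrative placeholders.
public class Ev3BrickIO {

    public static RMIRegulatedMotor leftMotor;
    public static RMIRegulatedMotor rightMotor;
    public static volatile boolean running;

    public static void start() {
        try {
            // Connect to the EV3 brick over Wi-Fi and obtain RMI proxies for both motors.
            RemoteEV3 ev3 = new RemoteEV3("10.0.1.1");       // placeholder address
            leftMotor = ev3.createRegulatedMotor("B", 'L');  // 'L' = large motor
            rightMotor = ev3.createRegulatedMotor("C", 'L');
            running = true;
            leftMotor.forward();
            rightMotor.forward();
        } catch (Exception e) {
            throw new IllegalStateException("Could not connect to the EV3 brick", e);
        }
    }
}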

Below you can find an example of the step definition for a Given statement of a scenario.

@Given("^the robot is driving$")
public void the_robot_is_driving() throws Throwable {
    Ev3BrickIO.start();
    assertThat(Ev3BrickIO.leftMotor.isMoving(), is(true));
    assertThat(Ev3BrickIO.rightMotor.isMoving(), is(true));
}
Behavior Programming

A robot needs to perform a series of functions. In our case we want to let the robot drive around until it finds a yellow post-it, after which it stops. The obvious way of programming these functions is to use several if/then/else statements and loops. This works fine at first; however, as soon as the patterns become more complex, the code becomes complex as well, since adding new actions can influence other actions.

A solution to this problem is behavior programming. The concept is based on the fact that a robot can only perform one action at a time, depending on the state it is in. The behaviors are written as separate, independent classes controlled by an overall structure, so that a behavior can be changed or added without touching the other behaviors.

The following is important in this concept:

  1. Only one behavior is active at a time, and the active behavior controls the robot.
  2. Each behavior is given a priority. For example, the turn behavior has priority over the drive behavior, so that the robot will turn once the color sensor indicates it has reached the edge of the field.
  3. Each behavior is a self-contained, independent object.
  4. The behaviors can decide if they should take control of the robot.
  5. Once a behavior becomes active it has a higher priority than any of the other behaviors.
Behaviour API and Coding

The Behavior API is fairly easy to understand as it consists of only one interface and one class. Each individual behavior class implements the overall Behavior interface, and the behaviors are controlled by an Arbitrator class that decides which behavior should be the active one. A separate class should be created for each behavior you want the robot to have.
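To give an idea of its shape, the Behavior interface boils down to three methods. The sketch below is a simplified reproduction; check the leJOS javadoc for the authoritative definition.

// Simplified sketch of the leJOS Behavior interface (package lejos.robotics.subsumption).
public interface Behavior {
    boolean takeControl(); // does this behavior want control of the robot right now?
    void action();         // what the robot does while this behavior is active
    void suppress();       // called when a higher-priority behavior takes over; action() should stop
}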

leJOS Behaviour Pattern

 

When an Arbitrator object is instantiated, it is given an array of Behavior objects. Once it has these, its start() method is called and it begins arbitrating: deciding which behavior will become active.

In the code example below the Arbitrator is created: the first two lines create instances of our behaviors. The third line places them into an array, with the lowest-priority behavior taking the lowest array index. The fourth line creates the Arbitrator, and the fifth line starts the arbitration process. Once you create more behavior classes, they can simply be added to the array in order to be executed by the Arbitrator.

DriveForward driveForward = new DriveForward();
Behavior stopOnYellow = new StopOnYellow();
behaviorList = new Behavior[]{driveForward, stopOnYellow};
arbitrator = new Arbitrator(behaviorList, true);
arbitrator.start();

We will now take a detailed look at one of the behavior classes: DriveForward. Each of the standard methods is defined. First we create the action() method, which contains what we want the robot to do when the behavior becomes active; in this case, moving the left and the right motor forwards.

public void action() {
    try {
        suppressed = false;

        // Drive forward until another behavior suppresses this one.
        Ev3BrickIO.leftMotor.forward();
        Ev3BrickIO.rightMotor.forward();
        while (!suppressed) {
            Thread.yield();
        }

        // stop(true) returns immediately instead of blocking until the motor has stopped.
        Ev3BrickIO.leftMotor.stop(true);
        Ev3BrickIO.rightMotor.stop(true);
    } catch (RemoteException e) {
        e.printStackTrace();
    }
}

The suppress() method will stop the action once this is called.

public void suppress() {
    suppressed = true;
}

The takeControl() method tells the Arbitrator whether this behavior should become active. In order to keep the robot moving after it has performed other actions, we decided to create a global variable to keep track of whether the robot is running. While this is the case we simply return true from the takeControl() method.

public boolean takeControl() {
    return Ev3BrickIO.running;
}
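For completeness, a StopOnYellow behavior could look roughly like the sketch below. The seesYellow() helper is a hypothetical stand-in for our actual colour-sensor code; the real implementation is in the repository linked below.

import java.rmi.RemoteException;

import lejos.robotics.subsumption.Behavior;

// Sketch only: Ev3BrickIO.seesYellow() is a hypothetical helper standing in for the real colour-sensor code.
public class StopOnYellow implements Behavior {

    public boolean takeControl() {
        // Take control as soon as the colour sensor reports yellow.
        return Ev3BrickIO.seesYellow();
    }

    public void action() {
        try {
            // Stop both motors and mark the robot as no longer running,
            // so DriveForward's takeControl() returns false from now on.
            Ev3BrickIO.leftMotor.stop(true);
            Ev3BrickIO.rightMotor.stop(true);
            Ev3BrickIO.running = false;
        } catch (RemoteException e) {
            e.printStackTrace();
        }
    }

    public void suppress() {
        // Stopping finishes immediately; there is nothing to interrupt.
    }
}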

You can find our leJOS EV3 Automation code here.

Test Automation Day

Our robot has the potential to do much more, and we have not yet implemented the feature of picking up our post-its! Next week we'll continue working with the robot during the Test Automation Day.
Come and visit our booth, where you can help us enable the Lego Mindstorms EV3 robot to clean up our sticky mess at the office!

Feel free to use our Test Automation Day €50 discount on the ticket price by using the following code: TAD15-XB50

We hope to see you there!

Unit and integration are ambiguous names for tests

Coding the Architecture - Simon Brown - Fri, 06/12/2015 - 17:10

I've blogged about architecturally-aligned testing before, which essentially says that our automated tests should be aligned with the constructs that we use to describe a software system. I want to expand upon this topic and clarify a few things, but first...

Why?

In a nutshell, I think the terms "unit test" and "integration test" are ambiguous and open to interpretation. We all tend to have our own definitions, but I've seen those definitions vary wildly, which again highlights that our industry often lacks a shared vocabulary. What is a "unit"? How big is a "unit"? And what does an "integration test" test the integration of? Are we integrating components? Or are we integrating our system with the database? The Wikipedia page for unit testing talks about individual classes or methods ... and that's good, so why don't we use much more precise terminology instead, terminology that explicitly specifies the scope of the test?

Align the names of the tests with the things they are testing

I won't pretend to have all of the answers here, but aligning the names of the tests with the things they are testing seems to be a good starting point. And if you have a nicely defined way of describing your software system, the names for those tests fall into place really easily. Again, this comes back to creating that shared vocabulary - a ubiquitous language for describing the structures at different levels of abstraction within a software system.

Here are some example test definitions for a software system written in Java or C#, based upon my C4 model (a software system is made up of containers, containers contain components, and components are made up of classes).

Class tests
  • What? Tests focused on individual classes, often by mocking out dependencies. These are typically referred to as “unit” tests.
  • Test scope? A single class - one method at a time, with multiple test cases to test each path through the method if needed.
  • Mocking, stubbing, etc? Yes; classes that represent service providers (time, random numbers, key generators, non-deterministic behaviour, etc) and interactions with “external” services via inter-process communication (e.g. databases, messaging systems, file systems, other infrastructure services, etc ... the adapters in a “ports and adapters” architecture).
  • Lines of production code tested per test case? Tens.
  • Speed? Very fast running; test cases typically execute against resources in memory.

Component tests
  • What? Tests focused on components/services through their public interface. These are typically referred to as “integration” tests and include interactions with “external” services (e.g. databases, file systems, etc). See also ComponentTest on Martin Fowler's bliki.
  • Test scope? A component or service - one operation at a time, with multiple test cases to test each path through the operation if needed.
  • Mocking, stubbing, etc? Yes; other components.
  • Lines of production code tested per test case? Hundreds.
  • Speed? Slightly slower; component tests may incur network hops to span containers (e.g. JVM to database).

System tests
  • What? UI, API, functional and acceptance tests (“end-to-end” tests; from the outside-in). See also BroadStackTest on Martin Fowler's bliki.
  • Test scope? A single scenario, use case, user story, feature, etc across a whole software system.
  • Mocking, stubbing, etc? Not usually, although perhaps other systems if necessary.
  • Lines of production code tested per test case? Thousands.
  • Speed? Slower still; a single test case may require an execution path through multiple containers (e.g. web browser to web server to database).
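As a purely illustrative sketch (hypothetical code, not taken from a real codebase), the scope can be surfaced directly in the names of the test classes themselves, for example in Java with JUnit:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical example: a trivial class and a "class test" for it.
// The test class name states the scope being tested, rather than the generic "unit test" label.
class PriceCalculator {
    double priceFor(double orderTotal, double discountRate) {
        return orderTotal * (1.0 - discountRate);
    }
}

public class PriceCalculatorClassTests {

    @Test
    public void discountIsAppliedToTheOrderTotal() {
        assertEquals(90.0, new PriceCalculator().priceFor(100.0, 0.10), 0.001);
    }
}

A component test exercising a hypothetical OrderingComponent through its public interface would follow the same pattern and be named OrderingComponentTests, and so on up to system tests.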

Thoughts?

This definition of the tests doesn't say anything about TDD, when you write your tests or how many tests you should write. That's all interesting, but it's independent of the topic we're discussing here. So then, do *you* have a good, clear definition of what "unit testing" means? Do your colleagues at work agree? Can you say the same for "integration testing"? What do you think about aligning the names of the tests with the things they are testing? Thoughts?

Categories: Architecture

Software Engineering Radio - Episode 228

Coding the Architecture - Simon Brown - Fri, 06/12/2015 - 10:23

A quick note to say that my recent interview/chat with Sven Johann for Software Engineering Radio has now been published. As you might expect, it covers software architecture, diagramming, the use of UML, my C4 model, doing "just enough" and why the code should (but doesn't often) reflect the software architecture. You can listen to the podcast here. Enjoy!

Categories: Architecture

Diagrams for System Evolution

Coding the Architecture - Simon Brown - Thu, 06/11/2015 - 08:17

Every system has an architecture (whether deliberately designed or not) and most systems change and evolve during their existence - which includes their architecture. Sometimes this happens gradually and sometimes in large jumps. Let's have a look at a simple example.

Basic context diagram for a furniture company


Before


The main elements are:

  • Sales Orderbook where sales staff book orders and designers occasionally view sales.
  • Product Catalogue where designers record the details of new products.
  • Warehouse Management application that takes orders and either delivers to customers from stock or forwards orders to manufacturing.


The data flow is generally one way, with each system pushing out data with little feedback, due to an old batch-style architecture. For example, the designer updates product information but all the updates for the day are pushed downstream at end-of-day. The Warehouse application only accepts a feed from the Sales Orderbook, which includes both orders and product details.


There are multiple issues with a system like this:

  1. Modifications to the products (new colours, sizes, materials etc) are not delivered to the manufacturer until after an order is placed. This means a long lead time for manufacturing and an unhappy customer.
  2. The sales orderbook receives only a subset of the product information, and only when it is updated. This means the information could become out-of-sync and not all of it is available.
  3. The warehouse system also relies on the sales orderbook for product information. This may be out of date or inaccurate.
  4. The designer does not have easy-to-access information about how product changes affect sales.


The users are unhappy with the system and want the processes to change as follows:

  1. The product designer wishes to see historic order information alongside the products in the catalog. Therefore the catalog should be able to retrieve data from the Sales Orderbook when required.
  2. Don't batch push products to the Orderbook and warehouse management. They should access the product catalog as a service when required.


System redesign

Therefore the system is re-architected as follows:



This is a significant modification to how the system works:

  • The Sales Orderbook and Product Catalogue have a bi-directional relationship and request information from each other.
  • Only orders are sent from the Sales Orderbook to the Product Catalogue.
  • There is now a direct connection between the Warehouse Manager and the Product Catalogue.
  • The product catalogue now provides all features the Designer requires. They do not need to access any other interfaces into the system.

However, if you place the two diagrams next to each other they look similar and the scale of the change is not obvious.

Before

When we write code, we use a ‘diff’ tool to show us the modifications.


Before


I often do something similar on my architecture diagrams. For example:


System before modification:

Before


System after modification:


When placed side-by-side the differences are much more obvious:

Before


I’ve used similar colours to those used by popular diff tools: red for a removed relationship and green for a new relationship (I would use the same colours if components were added or removed). I’ve used blue to indicate where a relationship has changed significantly. Note that I’ve included a key on these diagrams.

You may be thinking: why have I coloured before-and-after diagrams rather than producing a single, coloured diagram? A single diagram can work if you are only adding and removing elements, but if they are being modified it becomes very cluttered. As an example, consider the relationship between the Sales Orderbook and the Product Catalogue. Would the description on a combined diagram be "product updates", "exchange sales and product information", or both? How would I show which was before and which was after? Lastly, what would I do with the arrow on the connection? Colour it differently from the line?

Conclusion

I find that having separate, coloured diagrams for the current and desired system architectures allows the changes to be seen much more clearly. Having spoken to several software architects, it seems that many use similar techniques; however, there appears to be no standard way to do this. Therefore we should make sure that we follow standard advice, such as using keys and labelling all elements and connections.

Categories: Architecture