
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Architecture

Software Architecture for Developers in Chinese

Coding the Architecture - Simon Brown - Wed, 07/01/2015 - 11:14

Although it's been on sale in China for a few months, my copies of the Chinese translation of my Software Architecture for Developers book have arrived. :-)

Software Architecture for Developers

I can't read it, but seeing my C4 diagrams in Chinese is fun! Stay tuned for more translations.

Categories: Architecture

How to create the smallest possible docker container of any image

Xebia Blog - Tue, 06/30/2015 - 10:46

Once you start to do some serious work with Docker, you soon find that downloading images from the registry is a real bottleneck in starting applications. In this blog post we show you how you can reduce the size of any Docker image to just a few percent of the original. Is your image too fat? Try stripping your Docker image! The strip-docker-image utility demonstrated in this blog makes your containers faster and safer at the same time!


We are working quite intensively on our Highly Available Docker Container Platform using CoreOS and Consul, which consists of a number of containers (NGiNX, HAProxy, the Registrator and Consul). These containers run on each of the nodes in our CoreOS cluster, and when the cluster boots, more than 600 MB is downloaded by the three nodes in the cluster. This is quite time consuming.

cargonauts/consul-http-router      latest              7b9a6e858751        7 days ago          153 MB
cargonauts/progrium-consul         latest              32253bc8752d        7 weeks ago         60.75 MB
progrium/registrator               latest              6084f839101b        4 months ago        13.75 MB

The size of the images is not only detrimental to the boot time of our platform; it also increases the attack surface of the container. With 153 MB of utilities in the NGiNX-based consul-http-router, there is a lot of stuff in the container that you can use once you get inside. As we were thinking of running this router in a DMZ, we wanted to minimise the number of tools lying around for a potential hacker.

From our colleague Adriaan de Jonge we had already learned how to create the smallest possible Docker container for a Go program. Could we repeat this by just extracting the NGiNX executable from the official distribution and copying it onto a scratch image? It turns out we can!

Finding the necessary files

Using the dpkg utility, we can list all the files that are installed by the NGiNX package:

docker run nginx dpkg -L nginx
...
/.
/usr
/usr/sbin
/usr/sbin/nginx
/usr/share
/usr/share/doc
/usr/share/doc/nginx
...
/etc/init.d/nginx
Locating dependent shared libraries

So we have the list of files in the package, but we do not have the shared libraries that are referenced by the executable. Fortunately, these can be retrieved using the ldd utility:

docker run nginx ldd /usr/sbin/nginx
...
	linux-vdso.so.1 (0x00007fff561d6000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fd8f17cf000)
	libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fd8f1598000)
	libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007fd8f1329000)
	libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007fd8f10c9000)
	libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007fd8f0cce000)
	libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fd8f0ab2000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fd8f0709000)
	/lib64/ld-linux-x86-64.so.2 (0x00007fd8f19f0000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fd8f0505000)
Following and including symbolic links

Now that we have the executable and the referenced shared libraries, it turns out that ldd normally lists the symbolic link, not the actual file name of the shared library:

docker run nginx ls -l /lib/x86_64-linux-gnu/libcrypt.so.1
...
lrwxrwxrwx 1 root root 16 Apr 15 00:01 /lib/x86_64-linux-gnu/libcrypt.so.1 -> libcrypt-2.19.so

By resolving the symbolic links and including both the link and the file, we are ready to export the bare essentials from the container!
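The resolution itself is a one-liner per library. A minimal sketch (not the actual strip-docker-image code) that collects both the link and its target for a single library:

# Resolve a shared library symlink inside the nginx image, so that
# both the link and the real file can be exported.
lib=/lib/x86_64-linux-gnu/libcrypt.so.1
target=$(docker run --rm nginx readlink -f "$lib")
echo "include both: $lib -> $target"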

getpwnam does not work

But after copying all essential files to a scratch image, NGiNX did not start. It appeared that NGiNX tries to resolve the user 'nginx' and fails to do so:

docker run -P  --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;"
...
2015/06/29 21:29:08 [emerg] 1#1: getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2
nginx: [emerg] getpwnam("nginx") failed (2: No such file or directory) in /etc/nginx/nginx.conf:2

It turned out that the shared libraries for the name service switch (NSS), which reads /etc/passwd and /etc/group, are loaded at runtime and are not referenced as dependencies of the executable. By adding these shared libraries (/lib/*/libnss*) to the container, NGiNX worked!
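Because ldd does not report them, you have to look these libraries up yourself; a quick way to list them in the official image:

docker run --rm nginx sh -c 'ls /lib/*/libnss*'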

strip-docker-image example

So now, the strip-docker-image utility is here for you to use!

    strip-docker-image  -i image-name
                        -t target-image-name
                        [-p package]
                        [-f file]
                        [-x expose-port]
                        [-v]

The options are explained below:

-i image-name           the name of the image to strip
-t target-image-name    the name of the stripped image
-p package              package to include from the image; multiple -p allowed
-f file                 file to include from the image; multiple -f allowed
-x port                 port to expose
-v                      verbose output

The following example creates a new nginx image named stripped-nginx, based on the official Docker image:

strip-docker-image -i nginx -t stripped-nginx  \
                           -x 80 \
                           -p nginx  \
                           -f /etc/passwd \
                           -f /etc/group \
                           -f '/lib/*/libnss*' \
                           -f /bin/ls \
                           -f /bin/cat \
                           -f /bin/sh \
                           -f /bin/mkdir \
                           -f /bin/ps \
                           -f /var/run \
                           -f /var/log/nginx \
                           -f /var/cache/nginx

Aside from the nginx package, we add the files /etc/passwd and /etc/group and the /lib/*/libnss* shared libraries. The directories /var/run, /var/log/nginx and /var/cache/nginx are required for NGiNX to operate. In addition, we added /bin/sh and a few handy utilities, just to be able to snoop around a little bit.

The stripped image has shrunk to an incredible 5.4% of the original: from 132.8 MB down to just 7.3 MB, and it is still fully operational!

docker images | grep nginx
...
stripped-nginx                     latest              d61912afaf16        21 seconds ago      7.297 MB
nginx                              1.9.2               319d2015d149        12 days ago         132.8 MB

And it works!

ID=$(docker run -P -d --entrypoint /usr/sbin/nginx stripped-nginx  -g "daemon off;")
docker run --link $ID:stripped cargonauts/toolbox-networking curl -s -D - http://stripped
...
HTTP/1.1 200 OK

For HAProxy, check out the examples directory.

Conclusion

It is possible to use the official images that are maintained and distributed by Docker and strip them down to their bare essentials, ready for use! It speeds up load times and reduces the attack surface of that specific container.

Check out the GitHub repository for the script and the manual page.

Please send me your examples of incredibly shrunk Docker images!

Dealing with People You Can’t Stand

“If You Want To Go Fast, Go Alone. If You Want To Go Far, Go Together” – African Proverb

I blew the dust off some olds posts to rekindle some of the most important information for work and life.

It’s about dealing with people you can’t stand.

Whether you think of them as jerks, bullies, or just difficult people, the better you can deal with difficult people, the better you can get things done and make things happen.

And the more you learn how to bring out the best in people at their worst, the less often you'll find people you can't stand.

How To Bring Out the Best in People at Their Worst (Including Yourself)

Everything I needed to learn about dealing with difficult people, I learned from the book Dealing with People You Can’t Stand: How to Bring Out the Best in People at Their Worst, by Dr. Rick Brinkman and Dr. Rick Kirschner.

It’s one of the most brilliant, thoughtful books I’ve ever read on interpersonal skills and dealing with all sorts of bad behaviors.

The real key to dealing with difficult behavior is more than just recognizing bad behaviors in other people.

It’s recognizing bad behaviors in yourself, the kind that contribute to and amplify other people’s bad behaviors.

The more you know, the more you grow, and this is truly one of those transformational books.

Learn How To Deal with Difficult People (and Gain Some Mad Interpersonal Skills)

I’ve completely re-written my post that provides an overview of the big ideas in Dealing with People You Can’t Stand:

Dealing with People You Can’t Stand

Even better, I’ve re-written all of my posts that talk through the 10 Types of Difficult People, and what to do about them.

I have to warn you:  Once you learn the 10 Types of Difficult People, you’ll be using the labels to classify bad behaviors that you experience in the halls, in meetings, behind your back, etc.

With that in mind, here they are …

10 Types of Difficult People

Here are the 10 Types of Difficult People at a glance:

  1. Grenade Person – After a brief period of calm, the Grenade person explodes into unfocused ranting and raving about things that have nothing to do with the present circumstances.
  2. Know-It-Alls – Seldom in doubt, the Know-It-All person has a low tolerance for correction and contradiction. If something goes wrong, however, the Know-It-All will speak with the same authority about who’s to blame – you!
  3. Maybe Person – In a moment of decision, the Maybe Person procrastinates in the hope that a better choice will present itself.
  4. No Person – A No Person kills momentum and creates friction for you. More deadly to morale than a speeding bullet, more powerful than hope, able to defeat big ideas with a single syllable.
  5. Nothing Person – A Nothing Person doesn’t contribute to the conversation. No verbal feedback, no nonverbal feedback, Nothing. What else could you expect from … the Nothing Person.
  6. Snipers – Whether through rude comments, biting sarcasm, or a well-timed roll of the eyes, making you look foolish is the Sniper’s specialty.
  7. Tanks – The Tank is confrontational, pointed and angry, the ultimate in pushy and aggressive behavior
  8. Think-They-Know-It-Alls – Think-They-Know-It-All people can’t fool all the people all the time, but they can fool some of the people enough of the time, and enough of the people all of the time – all for the sake of getting some attention.
  9. Whiners – Whiners feel helpless and overwhelmed by an unfair world. Their standard is perfection, and no one and nothing measures up to it.
  10. Yes Person – In an effort to please people and avoid confrontation, Yes People say “yes” without thinking things through.

I warned you.  Are you already thinking about some Snipers in a few meetings that you have, or is there a Yes Person driving you nuts (or are you that Yes Person?)

Have you talked to a Think-They-Know-It-All lately, or worse, a Know-It-All?

Never fear, I’ve included actionable insights and recommendations for dealing with all the various bad behaviors you’ll encounter.

The Lens of Human Understanding

If all this talk about dealing with difficult people and silly labels seems like a gimmick, it’s not.  It’s actually deep insight rooted in a powerful but simple framework that Dr. Rick Brinkman and Dr. Rick Kirschner refer to as the Lens of Human Understanding:

The Lens of Human Understanding

Once I learned The Lens of Human Understanding, so many things fell into place.

Not only did I understand myself better, but I could instantly see what was driving other people, and how my behavior would either create more conflict or resolve it.

But when you don’t know what makes people tick, it’s very easy to get ticked off, or to tick them off.

Here’s looking at you … and other people … and their behaviors … in a brand new way.

You Might Also Like

25 Books the Most Successful Microsoft Leaders Read and Do

Interpersonal Skills Books

Personal Development Hub on Sources of Insight

Personal Development Resources at Sources of Insight

The Great Leadership Quotes Collection

Categories: Architecture, Programming

C4 stencil for OmniGraffle

Coding the Architecture - Simon Brown - Sun, 06/28/2015 - 11:38

If you like the look and feel of the C4 software architecture diagrams in my Software Architecture for Developers book (see examples here), Dennis Laumen has created an OmniGraffle stencil that will save you some time. Just download the stencil, install and it will appear in your stencil library.

The C4 stencil is available from Omni Group's Stenciltown. Thanks Dennis!

Categories: Architecture

Scala development with GitHub's Atom editor

Xebia Blog - Sat, 06/27/2015 - 14:57

GitHub recently released version 1.0 of their Atom editor. This post gives a rough overview of its Scala support.

Basic features

Basic features such as Scala syntax highlighting are provided by the language-scala plugin.

Some work on worksheets as found in e.g. Eclipse has been done in the scala-worksheet-plus plugin, but it is still missing major features and is not very useful at this time.

Navigation and completion Ctags

Atom supports basic 'Go to Declaration' (ctrl-alt-down) and 'Search symbol' (cmd-shift-r) support by way of the default ctags-based symbols-view.

While there are multiple sbt plugins for generating ctags, the easiest seems to be to have Ensime download the sources (more on that below) and invoke ctags manually: put this configuration in your home directory and run the 'ctags' command from your project root.
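The linked configuration is not reproduced here, but a typical ~/.ctags for Scala looks roughly like the sketch below (an illustration, assuming a ctags build that supports --langdef; the regexes are not exhaustive):

cat > ~/.ctags <<'EOF'
--langdef=scala
--langmap=scala:.scala
--regex-scala=/^[ \t]*class[ \t]+([a-zA-Z0-9_]+)/\1/c,classes/
--regex-scala=/^[ \t]*trait[ \t]+([a-zA-Z0-9_]+)/\1/t,traits/
--regex-scala=/^[ \t]*object[ \t]+([a-zA-Z0-9_]+)/\1/o,objects/
--regex-scala=/^[ \t]*def[ \t]+([a-zA-Z0-9_]+)/\1/m,methods/
EOF
ctags -R .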

This is useful for searching for symbols, but limited for finding declarations: for example, when checking the declaration for Success, ctags doesn't know whether this is scala.util.Success, akka.actor.Status.Success, spray.http.StatusCodes.Success or some other 3rd-party or local symbol with that name.

Ensime

This is where the Ensime plugin comes in.

Ensime is a service for Scala IDE support, originally written for the Scala support in Emacs. The project metadata for Ensime can be generated with 'sbt gen-ensime' from the ensime-sbt sbt plugin.
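For example (the plugin version below is illustrative; check the ensime-sbt project for the current release), enabling the plugin globally and generating the metadata could look like:

echo 'addSbtPlugin("org.ensime" % "ensime-sbt" % "0.1.7")' >> ~/.sbt/0.13/plugins/plugins.sbt
sbt gen-ensime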

Usage

Start the Ensime server from Atom with 'cmd-shift-p' 'Ensime: start'. After a small pause the status bar proclaims 'Indexer ready!' and you should be good to go.

At this point the main features are 'jump to definition' (alt-click), hover for type info, and auto-completion:

atom.io ensime completion

There are some rough edges, but this is a promising start based on a solid foundation.

Conclusions

While Atom is already a pleasant, modern, open source, cross platform editor, it is clearly still early days.

The Scala support in Atom is not yet as polished as in IDEs such as IntelliJ IDEA, or as stable as in more mature editors such as Sublime Text, but it is already practically useful and has serious potential. Startup is not instant, but I did not notice the 'sluggish feel' reported by earlier reviewers.

Feel free to share your experiences in the comments; I will keep this post updated as the tools - and our experience with them - evolve.

Automatically launching and configuring an EC2 instance with ansible

Agile Testing - Grig Gheorghiu - Fri, 06/26/2015 - 20:53
Ansible makes it easy to set up an EC2 instance from soup to nuts: launching the instance and then configuring it. Here's a complete playbook I use for this purpose:

$ cat ec2-launch-instance-api.yml
---
- name: Create a new api EC2 instance
  hosts: localhost
  gather_facts: False
  vars:
    keypair: api
    instance_type: t2.small
    security_group: api-core
    image: ami-5189a661
    region: us-west-2
    vpc_subnet: subnet-xxxxxxx
    name_tag: api01
  tasks:
    - name: Launch instance
      ec2:
        key_name: "{{ keypair }}"
        group: "{{ security_group }}"
        instance_type: "{{ instance_type }}"
        image: "{{ image }}"
        wait: true
        region: "{{ region }}"
        vpc_subnet_id: "{{ vpc_subnet }}"
        assign_public_ip: yes
        instance_tags:
          Name: "{{ name_tag }}"
      register: ec2

    - name: Add Route53 DNS record for this instance (overwrite if needed)
      route53:
        command: create
        zone: mycompany.com
        record: "{{ name_tag }}.mycompany.com"
        type: A
        ttl: 3600
        value: "{{ item.private_ip }}"
        overwrite: yes
      with_items: ec2.instances

    - name: Add new instance to proper ansible group
      add_host: hostname={{ name_tag }} groupname=api-servers ansible_ssh_host={{ item.private_ip }} ansible_ssh_user=ubuntu ansible_ssh_private_key_file=/Users/grig.gheorghiu/.ssh/api.pem
      with_items: ec2.instances

    - name: Wait for SSH to come up
      wait_for: host={{ item.private_ip }} port=22 search_regex=OpenSSH delay=210 timeout=420 state=started
      with_items: ec2.instances

- name: Configure api EC2 instance
  hosts: api-servers
  sudo: True
  gather_facts: True
  roles:
    - base
    - tuning
    - postfix
    - monitoring
    - nginx
    - api


The first thing I do in this playbook is launch a new EC2 instance, add or update its Route53 DNS A record, add it to an ansible group and wait for it to be accessible via ssh. Then I configure the instance by applying a handful of roles to it. That's it.
Some things to note:
1) Ansible uses boto under the covers, so you need that installed on your local host, and you also need a ~/.boto configuration file with your AWS credentials:

[Credentials]
aws_access_key_id = xxxxx
aws_secret_access_key = yyyyyyyyyy
2) When launching an EC2 instance with ansible via the ansible ec2 module, the hosts variable should point to localhost and gather_facts should be set to False. 
3) The various parameters expected by the EC2 API (keypair name, instance type, VPC subnet, security group, instance name tag etc.) can be set in the vars section and then used in the tasks section in the ec2 stanza.
4) I used the ansible route53 module for managing DNS. This module has a handy property called overwrite, which when set to yes will update a DNS record in place if it exists, or create it if it doesn't.
5) The add_host task is very useful in that it adds the newly created instance to a hosts group, in my case api-servers. This host group already has a group_vars/api-servers configuration file, where I set various ansible variables used in different roles (mostly secret-type variables such as API keys, user names, passwords etc.). The group_vars directory is NOT checked in.
6) In the final play of the playbook, the [api-servers] group (which consists of only the newly created EC2 instance) gets the respective roles applied to it. Why does this group consist of only the newly created EC2 instance? Because when I run the playbook with ansible-playbook, I specify an empty hosts file to make sure this group is empty:
$ ansible-playbook -i hosts/myhosts.empty ec2-launch-instance-api.yml
If instead I wanted to also apply the specified roles to my existing EC2 instances in that group, I would specify a hosts file that already has those instances defined in the [api-servers] group.
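For illustration (the file contents here are hypothetical), the difference between the two hosts files would be:

$ cat hosts/myhosts.empty
[api-servers]

$ cat hosts/myhosts
[api-servers]
api01 ansible_ssh_host=api01.mycompany.com
api02 ansible_ssh_host=api02.mycompany.com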


Git Subproject Compile-time Dependencies in Sbt

Xebia Blog - Fri, 06/26/2015 - 13:33

When creating an sbt project recently, I tried to include a project with no releases. This means that including it using libraryDependencies in build.sbt does not work. An option is to clone the project and publish it locally, but this is tedious manual work that needs to be repeated every time the cloned project changes.

Most examples explain how to add a direct compile-time dependency on a git repository to sbt, but they only show how to add a single-project repository as a dependency using a RootProject. After some searching I found the solution for adding projects from a multi-project repository: instead of RootProject, use ProjectRef. ProjectRef takes a second argument that specifies the subproject in the repository.

This is my current project/Build.scala file:

import sbt.{Build, Project, ProjectRef, uri}

object GotoBuild extends Build {
  lazy val root = Project("root", sbt.file(".")).dependsOn(staminaCore, staminaJson, ...)

  lazy val staminaCore = ProjectRef(uri("git://github.com/scalapenos/stamina.git#master"), "stamina-core")
  lazy val staminaJson = ProjectRef(uri("git://github.com/scalapenos/stamina.git#master"), "stamina-json")
  ...
}

These subprojects are now a compile-time dependency, and sbt will pull in and maintain the repository in ~/.sbt/0.13/staging/[sha]/stamina, so no manual checkout with a local publish is needed. This is very handy when depending on an internal, independent project/module, without needing to create a new release for every change. (One side note: my IntelliJ currently does not recognize that the library is on the class/source path of the main project, so it complains that it cannot find symbols and therefore cannot do proper syntax checking and auto-completion.)

Inspirational Quotes, Inspirational Life Quotes, and Great Leadership Quotes

I know several people looking for inspiration.

I believe the right words ignite or re-ignite us.

There is no better way to prime your mind for great things to come than filling your head and heart with the greatest inspirational quotes the world has ever known.

Of course, the challenge is finding the best inspirational quotes to draw from.

Well, here you go …

3 Great Inspirational Quotes Collections at Your Fingertips

I revamped a few of my best inspirational quotes collections to really put the gems of insight at your fingertips:

  1. Inspirational Quotes – light a fire from the inside out, or find your North Star that pulls you forward
  2. Inspirational Life Quotes -
  3. Great Leadership Quotes – learn what great leadership really looks like and how it helps lift others up

Each of these inspirational quotes collections is hand-crafted with deep words of wisdom, insight, and action.

You'll find inspirational quotes from Charles Dickens, Confucius, Dr. Seuss, George Bernard Shaw, Henry David Thoreau, Horace, Lao Tzu,  Lewis Carroll, Mahatma Gandhi, Oprah Winfrey, Oscar Wilde, Paulo Coelho, Ralph Waldo Emerson, Stephen King, Tony Robbins, and more.

You'll even find an inspirational quote from The Wizard of Oz (and it’s not “There’s no place like home.”)

Inspirational Quotes Jump Start

Here are a few of my favorite inspirational quotes to get you started:

“Courage doesn’t always roar. Sometimes courage is the quiet voice at the end of the day saying, ‘I will try again tomorrow.’”

Mary Anne Radmacher

“Do not follow where the path may lead. Go, instead, where there is no path and leave a trail.”

Ralph Waldo Emerson

“Don’t cry because it’s over, smile because it happened.”

Dr. Seuss

“It is not length of life, but depth of life.”

Ralph Waldo Emerson

“Life is not measured by the number of breaths you take, but by every moment that takes your breath away.”

Anonymous

“You live but once; you might as well be amusing.”

Coco Chanel

“It is never too late to be who you might have been.”

George Eliot

“Smile, breathe and go slowly.”

Thich Nhat Hanh

“What lies behind us and what lies before us are tiny matters compared to what lies within us.”

Ralph Waldo Emerson

These inspirational quotes are living, breathing collections.  I periodically sweep them to reflect new additions, and I re-organize or re-style the quotes if I find a better way.

I invest a lot of time on quotes because I’ve learned the following simple truth:

Quotes change lives.

The right words, at the right time, can be just that little bit you need to break through, get unstuck, or find your mojo again.

Have you had your dose of inspiration today?

Categories: Architecture, Programming

Deploying monitoring tools with ansible

Agile Testing - Grig Gheorghiu - Fri, 06/26/2015 - 01:10
At my previous job, I used Chef for configuration management. New job, new tools, so I decided to use ansible, which I had played with before. Part of the reason is that I got sick of tools based on Ruby. Managing all the gem dependencies and migrating from one Ruby version to another was a nightmare that I didn't want to go through again. That's one reason why at my new job we settled on Go as the main language for our backend API layer.

Back to ansible. Since it's written in Python, it's already got good marks in my book. Plus it doesn't need a server and it's fairly easy to wrap your head around. I've been very happy with it so far.

For external monitoring, we use Pingdom because it works and it's cheap. We also use New Relic for application performance monitoring, but it's very expensive, so I've been looking at ways to supplement it with Open Source tools.

An announcement about telegraf drew my attention the other day: here was a metrics collection tool written in Go that sends its data to InfluxDB, a scalable database also written in Go and designed to receive time-series data. Seemed like a perfect fit. I just needed a way to display the data from InfluxDB. Cool, it looked like Grafana supports InfluxDB! It turns out, however, that Grafana support for the latest InfluxDB version 0.9.0 is experimental, i.e. doesn't really work. Plus telegraf itself has some rough edges in the way it tags the data it sends to InfluxDB. Long story short, after a whole day of banging my head against the telegraf/InfluxDB/Grafana wall, I decided to abandon this toolset.

Instead, I reached again for trusty old Graphite and its loyal companion statsd. I had problems before with Graphite not scaling well, but for now we're not sending it such a huge amount of metrics, so it will do. I also settled on collectd as the OS metric collector. It's small, easy to configure, and very stable. The final piece of the puzzle was a process monitoring and alerting tool. I chose monit for this purpose. Again: simple, serverless, small footprint, widely used, stable, easy to configure.

This seems like a lot of tools, but it's not really that bad if you have a good solid configuration management system in place -- ansible in my case.

Here are some tips and tricks specific to ansible for dealing with multiple monitoring tools that need to be deployed across various systems.

Use roles extensively

This is of course recommended no matter what configuration management system you use. With ansible, it's easy to use the command 'ansible-galaxy init rolename' to create the directory structure for a new role. My approach is to create a new role for each major application or tool that I want to deploy. Here are some of the roles I created:

  • a base role that adds users, deals with ssh keys and sudoers.d files, creates directory structures common to all servers, etc.
  • a tuning role that mostly configures TCP-related parameters in sysctl.conf
  • a postfix role that installs and configures postfix to use Amazon SES
  • a go role that installs golang from source and configures GOPATH and GOROOT
  • an nginx role that installs nginx and deploys self-signed SSL certs for development/testing purposes
  • a collectd role that installs collectd and deploys (as an ansible template) a collectd.conf configuration file common to all systems, which sends data to graphite (the system name is customized as {{inventory_hostname}} in the write_graphite plugin)
  • a monit role that installs monit and deploys (again as an ansible template) a monitrc file that monitors resource metrics such as CPU, memory, disk etc. common to all systems
  • an api role that does the heavy lifting for the installation and configuration of the packages that are required by our API layer
Use an umbrella 'monitoring' role
At first I was configuring each ansible playbook to use both the monit role and the collectd role. I realized that it's clearer and easier to maintain if playbooks instead use a more generic monitoring role, which does nothing but list monit and collectd as dependencies in its meta/main.yml file:

dependencies:
  - { role: monit }
  - { role: collectd }
Customize monitoring-related configuration files in other roles
A nice thing about both monit and collectd, and a main reason I chose them, is that they read configuration files from a directory: /etc/monit/conf.d for monit and /etc/collectd/collectd.conf.d for collectd. This makes it easy for each role to add its own configuration files. For example, the api role adds 2 files as custom checks in /etc/monit/conf.d: check_api and check_nginx. It also adds 2 files as custom metric collectors in /etc/collectd/collectd.conf.d: nginx.conf and memcached.conf. The api role does this via a file called tasks/monitoring.yml which gets included in tasks/main.yml.
As another example, the nginx role also adds its own check_nginx configuration file to /etc/monit/conf.d via a tasks/monitoring.yml file.
The rule of thumb I arrived at is this: each low-level role such as monit and collectd installs the common configuration files needed by all other roles, whereas each higher-level role such as api installs its own custom checks and metric collectors via a monitoring.yml task file. This way, it's easy to see at a glance what each high-level role does for monitoring: just look in its monitoring.yml task file.
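As a sketch, such a tasks/monitoring.yml for the api role could look like the following (the task layout is illustrative, not the actual repo contents; it assumes handlers named 'restart monit' and 'restart collectd' exist):

- name: deploy monit checks for the api role
  copy: src={{ item }} dest=/etc/monit/conf.d/
  with_items:
    - check_api
    - check_nginx
  notify: restart monit

- name: deploy collectd collectors for the api role
  copy: src={{ item }} dest=/etc/collectd/collectd.conf.d/
  with_items:
    - nginx.conf
    - memcached.conf
  notify: restart collectd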
To wrap this post up, here is an example of a playbook I use to build API servers:
$ cat api-servers.yml
---
- hosts: api-servers
  sudo: yes
  roles:
    - base
    - tuning
    - postfix
    - monitoring
    - nginx
    - api
I call this playbook with:
$ ansible-playbook -i hosts/myhosts api-servers.yml
To make this work, the hosts/myhosts file has a section similar to this:
[api-servers]
api01 ansible_ssh_host=api01.mydomain.com
api02 ansible_ssh_host=api02.mydomain.com




New Azure Billing APIs Available

ScottGu's Blog - Scott Guthrie - Thu, 06/25/2015 - 06:59

Organizations moving to the cloud can achieve significant cost savings.  But to achieve the maximum benefit you need to be able to accurately track your cloud spend in order to monitor and predict your costs. Enterprises need to be able to get detailed, granular consumption data and derive insights to effectively manage their cloud consumption.

I’m excited to announce the public preview release of two new Azure Billing APIs today: the Azure Usage API and the Azure RateCard API, which provide customers and partners programmatic access to their Azure consumption and pricing details:

Azure Usage API – A REST API that customers and partners can use to get their usage data for an Azure subscription. As part of this new Billing API we now correlate the usage/costs by the resource tags you can set on your Azure resources (for example: you could assign a tag “Department abc” or “Project X” to a VM or Database in order to better track spend on a resource and charge it back to an internal group within your company). To get more details, please read the MSDN page on the Usage API. Enterprise Agreement (EA) customers can also use this API to get a more granular view into their consumption data, and to complement what they get from the EA Billing CSV.

Azure RateCard API – A REST API that customers and partners can use to get the list of the available resources they can use, along with metadata and price information about them. To get more details, please read the MSDN page on the RateCard API.

You can start taking advantage of both of these APIs today.  You can write your own custom code that uses the APIs to construct your own custom reports, or alternatively you can also now take advantage of pre-built bill tracking systems provided by our partners which already integrate the APIs into their existing solutions.

Partner Solutions

Two of our Azure Billing partners (Cloudyn and Cloud Cruiser) have already integrated the new Billing APIs into their products:

Cloudyn has integrated with Azure Billing APIs to provide IT financial management insights on cost optimization. You can read more about their integration experience in Microsoft Azure Billing APIs enable Cloudyn to Provide ITFM for Customers.

Cloud Cruiser has integrated with the Azure RateCard API to provide an estimate of what it would cost the customer to run the same workloads on Azure. They are also working on integrating with the Azure Usage API to provide insights based on the Azure consumption. You can read more about their integration in Cloud Cruiser and Microsoft Azure Billing API Integration.

You can adopt one or both of the above solutions immediately and use them to better track your Azure bill without having to write a single line of code.


Cloudyn's integration enables you to view and query the breakdown of Azure usage by resource tags (e.g. “Dev/Test”, “Department abc”, “Project X”):


Cloudyn's integration showing trend of estimated charges over time:


Cloud Cruiser's integration to show estimated cost of running workload on Azure:  

Using the Billing APIs directly

You can also use the new Billing APIs directly to write your own custom reports and billing tracking logic.  To get started with the APIs, you can leverage the code samples on Github.
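As a flavour of what a direct call looks like, here is a sketch of querying the Usage API with curl (the endpoint shape follows the preview MSDN documentation; the bearer token, subscription id and time window are placeholders you must supply):

# List aggregated usage for a subscription (preview API version).
curl -H "Authorization: Bearer $AAD_TOKEN" \
  "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/Microsoft.Commerce/UsageAggregates?api-version=2015-06-01-preview&reportedStartTime=2015-05-01T00%3A00%3A00Z&reportedEndTime=2015-06-01T00%3A00%3A00Z"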

The Billing APIs leverage the new Azure Resource Manager, use Azure Active Directory for authentication, and follow the Azure role-based access control policies.  The code samples we’ve published show a variety of common scenarios and how to integrate this logic end to end.

Summary

The new Azure Billing APIs make it much easier to track your bill and save money.

As always, please reach out to us on the Azure Feedback forum and through the Azure MSDN forum.

Hope this helps,

Scott

Categories: Architecture, Programming

Software architecture workshops in Australia during August

Coding the Architecture - Simon Brown - Mon, 06/22/2015 - 08:02

This is a quick update on my upcoming trip to Australia ... in conjunction with the lovely folks behind the YOW! conference, we've scheduled two public software architecture sketching workshops as follows.

This workshop addresses one of the major problems I've witnessed during my career in software development: namely, that people find it really hard to describe, communicate and visualise the design of their software. Sure, we have UML, but very few people use it; most instead resort to an ad hoc "boxes and lines" notation. This is fine, but the resulting diagrams (as illustrated below) need to make sense, and in my experience they rarely do.

Some typical software architecture sketches

My workshop explores this problem from a number of different perspectives, not least by giving you an opportunity to practice these skills yourself. My Voxxed article titled Simple Sketches for Diagramming Your Software Architecture provides an introduction to this topic and the C4 model that I use. I've been running this workshop in various formats for nearly 10 years now and the techniques we'll cover have proven invaluable for software development teams around the world in the following situations:

  • Doing (just enough) up front design on whiteboards.
  • Communicating software design retrospectively before architecture refactoring.
  • Explaining how the software works when somebody new joins the team.
  • Documenting/recording the software architecture in something like a software architecture document or software guidebook.
"Really surprising training! I expected some typical spoken training about someones revolutionary method and I found a hands-on training that used the "do-yourself-and-fail" approach to learn the lesson, and taught us a really valuable, reasonable and simple method as an approach to start the architecture of a system. Highly recommended!" (an attendee from the software architecture sketching workshop at GOTO Amsterdam 2014)

Attendees will also receive a copy of my Software Architecture for Developers ebook and a year subscription to Structurizr. Please do feel free to e-mail me with any questions. See you in Australia!

Categories: Architecture

Leadership Skills for Making Things Happen

"A leader is one who knows the way, goes the way, and shows the way." -- John C. Maxwell

How many people do you know that talk a good talk, but don’t walk the walk?

Or, how many people do you know who have a bunch of ideas that you know will never see the light of day?  They can pontificate all day long, but the idea of turning those ideas into work that could be done is foreign to them.

Or, how many people do you know who can plan all day long, but whose plan is nothing more than a list of things that will never happen?  Worse, maybe they turn it into a team sport, and everybody participates in the planning process for all the outcomes, ideas and work that will never happen. (And who exactly wants to be accountable for that?)

It doesn’t need to be this way.

A lot of people have Hidden Strengths they can develop into Learned Strengths.   And one of the most important buckets of strengths is Leading Implementation.

Leading Implementation is a set of leadership skills for making things happen.

It includes the following leadership skills:

  1. Coaching and Mentoring
  2. Customer Focus
  3. Delegation
  4. Effectiveness
  5. Monitoring Performance
  6. Planning and Organizing
  7. Thoroughness

Let’s say you want to work on these leadership skills.  The first thing you need to know is that these are not elusive skills reserved exclusively for the elite.

No, these are commonly Hidden Strengths that you and others around you already have, and they just need to be developed.

If you don’t think you are good at any of these, then before you rule yourself out, and scratch them off your list, you need to ask yourself some key reflective questions:

  1. Do you know what good actually looks like?  Who are your role models?   What do they do differently than you, and is it really might and magic, or do they simply use behaviors or techniques that you could learn, too?
  2. How much have you actually practiced?   Have you really spent any sort of time working at the particular skill in question?
  3. How did you create an effective feedback loop?  So many people rapidly improve when they figure out how to create an effective learning loop and an effective feedback loop.
  4. Who did you learn from?  Are you expecting yourself to just naturally be skilled?  Really?  What if you found a good mentor or coach, one that could help you create an effective learning loop and feedback loop, so you can improve and actually chart and evaluate your progress?
  5. Do you have a realistic bar?  It’s easy to fall into the trap of “all or nothing.”   What if instead of focusing on perfection, you focused on progress?   Could a little improvement in a few of these areas, change your game in a way that helps you operate at a higher level?

I’ve seen far too many starving artists and unproductive artists, as well as mad scientists, who had brilliant ideas that they couldn’t turn into reality.  While some were lucky enough to pair with the right partners and bring their ideas to life, I’ve actually seen another pattern among productive artists.

They develop some of the basic leadership skills in themselves to improve their ability to execute.

Not only are they more effective on the job, but they are happier with their ability to express their ideas and turn their ideas into action.

Even better, when they partner with somebody who has strong execution, they amplify their impact even more because they have a better understanding and appreciation of what it takes to execute ideas.

Like talk, ideas are cheap.

The market rewards execution.

Categories: Architecture, Programming

How to deploy composite Docker applications with Consul key values to CoreOS

Xebia Blog - Fri, 06/19/2015 - 16:34

Most examples of deploying Docker applications to CoreOS use a single Docker application. But as soon as you have an application that consists of more than one unit, the number of commands you have to type soon becomes annoying. At Xebia we have a best practice that says "Three strikes and you automate", mandating that the third time you do something similar, you automate. In this blog I share the manual page of a utility called fleetappctl that allows you to perform rolling upgrades and deploy Consul key value pairs for composite applications on CoreOS, and I show three examples of its usage.


fleetappctl is a utility that allows you to manage a set of CoreOS fleet unit files as a single application. You can start, stop and deploy the application. fleetappctl is idempotent and does rolling upgrades on template files with multiple instances running. It can substitute placeholders at deployment time and it is able to deploy Consul key value pairs as part of your application. Using fleetappctl you have everything you need to create a self-contained deployment unit for your composite application and put it under version control.

The command line options to fleetappctl are shown below:

fleetappctl [-d deployment-descriptor-file]
            [-e placeholder-value-file]
            (generate | list | start | stop | destroy)
option -d

The deployment descriptor file describes all the fleet unit files and Consul key-value pair files that make up the application. All the files referenced in the deployment descriptor may have placeholders for deployment-time values. These placeholders are enclosed in double curly brackets {{ }}.

option -e

The file contains the values for the placeholders to be used on deployment of the application. The file has a simple format:

<name>=<value>
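For example, a placeholder value file could look like this (the values are illustrative; compare the dev.env files used in the examples below):

release=V2
message=Hi guys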
start

starts all units in the order in which they appear in the deployment descriptor. If you have a template unit file, you can specify the number of instances you want to start. Start is idempotent, so you may call start multiple times. Start will bring the deployment in line with your descriptor.

If the unit file has changed with respect to the deployed unit file, the corresponding instances will be stopped and restarted with the new unit file. If you have a template file, the instances of the template file will be upgraded one by one.

Any consul key value pairs as defined by the consul.KeyValuePairs entries are created in Consul. Existing values are not overwritten.

generate

generates a deployment descriptor (deployit-manifest.xml) based upon all the unit files found in your directory. If a file is a fleet unit template file the number of instances to start is set to 2, to support rolling upgrades.

stop

stops all units in reverse order of their appearance in the deployment descriptor.

destroy

destroys all units in reverse order of their appearance in the deployment descriptor.

list

lists the runtime status of the units that appear in the deployment descriptor.

Install fleetappctl

To install the fleetappctl utility, type the following commands:

curl -q -L https://github.com/mvanholsteijn/fleetappctl/archive/0.25.tar.gz | tar -xzf -
cd fleetappctl-0.25
./install.sh
brew install xmlstarlet
brew install fleetctl
Start the platform

If you do not have the platform running, start it first.

cd ..
git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service
git checkout 029d3dd8e54a5d0b4c085a192c0ba98e7fc2838d
cd vagrant
vagrant up
./is_platform_ready.sh
Example - Three component web application


The first example is a three-component application. It consists of a mount, a Redis database service and a web application. We generate the deployment descriptor, indicate that we do not want to start the mount, start the application, and then modify the web application unit file to change the service name to 'helloworld'. We perform a rolling upgrade by issuing start again. Finally we list, stop and destroy the application.

cd ../fleet-units/app
# generate a deployment descriptor
fleetappctl generate

# do not start mount explicitly
xml ed -u '//fleet.UnitConfigurationFile[@name="mnt-data"]/startUnit' \
       -v false deployit-manifest.xml > \
        deployit-manifest.xml.new
mv deployit-manifest.xml{.new,}

# start the app
fleetappctl start 

# Check it is working
curl hellodb.127.0.0.1.xip.io:8080
curl hellodb.127.0.0.1.xip.io:8080

# Change the service name of the application in the unit file
sed -i -e 's/SERVICE_NAME=hellodb/SERVICE_NAME=helloworld/' app-hellodb@.service

# do a rolling upgrade
fleetappctl start 

# Check it is now accessible on the new service name
curl helloworld.127.0.0.1.xip.io:8080

# Show all units of this app
fleetappctl list

# Stop all units of this app
fleetappctl stop
fleetappctl list

# Restart it again
fleetappctl start

# Destroy it
fleetappctl destroy
Example - placeholder references

This example shows the use of a placeholder reference in the unit file of the paas-monitor application. The application takes two optional environment variables, RELEASE and MESSAGE, that allow you to configure the resulting responses. The variable RELEASE is configured in the Docker run command in the fleet unit file through a placeholder. The actual value for the current deployment is taken from a placeholder value file.

cd ../fleetappctl-0.25/examples/paas-monitor
#check out the placeholder reference
grep '{{' paas-monitor@.service

...
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
 --env RELEASE={{release}} \
...
# checkout our placeholder values
cat dev.env
...
release=V2
# start the app
fleetappctl -e dev.env start

# show current release in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start is idempotent (ie. nothing happens)
fleetappctl -e dev.env start

# update the placeholder value and see a rolling upgrade in the works
echo 'release=V3' > dev.env
fleetappctl -e dev.env start
curl paas-monitor.127.0.0.1.xip.io:8080/status

fleetappctl destroy
Example - Env Consul Key Value Pair deployments


The final example shows the use of a Consul key value pair, the use of placeholders, and envconsul to dynamically update the environment variables of a running instance. The environment variables RELEASE and MESSAGE are taken from the keys under /paas-monitor in Consul. The initial values of these keys are loaded on first deployment and set using values from the placeholder file.

cd ../fleetappctl-0.25/examples/envconsul

#check out the Consul Key Value pairs, and notice the reference to placeholder values
cat keys.consul
...
paas-monitor/release={{release}}
paas-monitor/message={{message}}

# checkout our placeholder values
cat dev.env
...
release=V4
message=Hi guys
# start the app
fleetappctl -e dev.env start

# show current release and message in status
curl paas-monitor.127.0.0.1.xip.io:8080/status

# Change the message in Consul
fleetctl ssh paas-monitor@1 \
    curl -X PUT \
    -d \'hello Consul\' \
    http://172.17.8.101:8500/v1/kv/paas-monitor/message

# checkout the changed message
curl paas-monitor.127.0.0.1.xip.io:8080/status

# start does not change the values..
fleetappctl -e dev.env start
Conclusion

CoreOS provides all the basic functionality for a Container Platform as a Service. With the fleetappctl utility it becomes easy to start, stop and upgrade composite applications. The script is a supplement to fleetctl and does not break other ways of deploying your applications to CoreOS.

The source code, manual page and documentation of fleetappctl can be found on https://github.com/mvanholsteijn/fleetappctl.

 

Diff'ing software architecture diagrams again

Coding the Architecture - Simon Brown - Fri, 06/19/2015 - 12:42

In Diff'ing software architecture diagrams, I showed that creating a software architecture model with a textual format provides you with the ability to version control and diff different versions of the model. As a follow-up, somebody asked me whether Structurizr provides a way to recreate what Robert Annett originally posted in Diagrams for System Evolution. In other words, can the colours of the lines be changed? As you can see from the images below, the answer is yes.

Before / After

To do this, you simply add some tags to the relationships and add the appropriate styles to the view configuration. structurizr.com will even auto-generate a key for you.

Diagram key

And yes, you can do the same with elements too. As this illustrates, the choice of approach is yours.

Categories: Architecture

Startup Thinking

“Startups don't win by attacking. They win by transcending.  There are exceptions of course, but usually the way to win is to race ahead, not to stop and fight.” -- Paul Graham

A startup is the largest group of people you can convince to build a different future.

Whether you launch a startup inside a big company or launch a startup as a new entity, there are a few things that determine the strength of the startup: a sense of mission, space to think, new thinking, and the ability to do work.

The more clarity you have around Startup Thinking, the more effective you can be, whether you are starting startups inside or outside of a big company.

In the book, Zero to One: Notes on Startups, or How to Build the Future, Peter Thiel shares his thoughts about Startup Thinking.

Startups are Bound Together by a Sense of Mission

It’s the mission.  A startup has an advantage when there is a sense of mission that everybody lives and breathes.  The mission shapes the attitudes and the actions that drive towards meaningful outcomes.

Via Zero to One: Notes on Startups, or How to Build the Future:

“New technology tends to come from new ventures--startups.  From the Founding Fathers in politics to the Royal Society in science to Fairchild Semiconductor's ‘traitorous eight’ in business, small groups of people bound together by a sense of mission have changed the world for the better.  The easiest explanation for this is negative: it's hard to develop new things in big organizations, and it's even harder to do it by yourself.  Bureaucratic hierarchies move slowly, and entrenched interests shy away from risk.” 

Signaling Work is Not the Same as Doing Work

One strength of a startup is the ability to actually do work.  With other people.  Rather than just talk about it, plan for it, and signal about it, a startup can actually make things happen.

Via Zero to One: Notes on Startups, or How to Build the Future:

“In the most dysfunctional organizations, signaling that work is being done becomes a better strategy for career advancement than actually doing work (if this describes your company, you should quit now).  At the other extreme, a lone genius might create a classic work of art or literature, but he could never create an entire industry.  Startups operate on the principle that you need to work with other people to get stuff done, but you also need to stay small enough so that you actually can.”

New Thinking is a Startup’s Strength

The strength of a startup is new thinking.  New thinking is even more valuable than agility.  Startups provide the space to think.

Via Zero to One: Notes on Startups, or How to Build the Future:

“Positively defined, a startup is the largest group of people you can convince of a plan to build a different future.  A new company's most important strength is new thinking: even more important than nimbleness, small size affords space to think.  This book is about the questions you must ask and answer to succeed in the business of doing new things: what follows is not a manual or a record of knowledge but an exercise in thinking.  Because that is what a startup has to do: question received ideas and rethink business from scratch.”

Do you have stinking thinking, or do you have a beautiful mind?

New thinking will take you places.

You Might Also Like

How To Get Innovation to Succeed Instead of Fail

Management Innovation is at the Top of the Innovation Stack

The Innovation Revolution

The New Competitive Landscape

The New Realities that Call for New Organizational and Management Capabilities

Categories: Architecture, Programming

Diff'ing software architecture diagrams

Coding the Architecture - Simon Brown - Wed, 06/17/2015 - 14:02

Robert Annett wrote a post titled Diagrams for System Evolution where he describes a simple approach to showing how to visually describe changes to a software architecture. In essence, in order to show how a system is to change, he'll draw different versions of the same diagram and use colour-coding to highlight the elements/relationships that will be added, removed or modified.

I've typically used a similar approach for describing as-is and to-be architectures in the past too; it's a technique that works well. Although you can version control diagrams, it's still tricky to diff them using a tool. One solution that addresses this problem is to not create diagrams at all, but instead create a textual description of your software architecture model that is subsequently rendered with some tooling. You could do this with an architecture description language (such as Darwin), although I would much rather use my regular programming language instead.

Creating a software architecture model as code

This is exactly what Structurizr is designed to do. I've recreated Robert's diagrams with Structurizr as follows.

Before


And since the diagrams were created by a model described as Java code, that description can be diff'ed using your regular toolchain.
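
To give a flavour of what this looks like, here's a minimal model-as-code sketch in the style of the Structurizr for Java getting-started examples (the elements and relationships are illustrative, not Robert's actual model):

import com.structurizr.Workspace;
import com.structurizr.model.Model;
import com.structurizr.model.Person;
import com.structurizr.model.SoftwareSystem;

public class SystemEvolution {

  public static void main(String[] args) {
    // A workspace wraps the model (and the views rendered from it).
    Workspace workspace = new Workspace("System Evolution", "The 'before' version");
    Model model = workspace.getModel();

    // The elements in this version of the architecture.
    Person user = model.addPerson("User", "A user of the system.");
    SoftwareSystem system = model.addSoftwareSystem("Trading System", "Processes trades.");
    SoftwareSystem feed = model.addSoftwareSystem("Market Data Feed", "Provides prices.");

    // The relationships. Adding, removing or renaming these lines
    // between versions is exactly what shows up in a plain text diff.
    user.uses(system, "Views positions using");
    system.uses(feed, "Gets prices from");
  }
}

Check two versions of a file like this into version control and your regular toolchain can show precisely which elements and relationships changed.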

[Screenshot: the diff between the two model versions]

Code provides opportunities

This perhaps isn't as obvious as Robert's visual approach, and I would likely still highlight the actual differences on diagrams using notation as Robert did too. Creating a textual description of a software architecture model does provide some interesting opportunities though.

Categories: Architecture

How to Configure Alerts to Prevent Too Many Server Notifications

This is a guest post by Ashish Mohindroo, who leads Product for Happy Apps, a new uptime and performance monitoring system.

It isn't uncommon for system administrators to receive a stream of alarms for situations that either can wait to be addressed, will remedy themselves, or simply weren't problems to begin with. The other side of the equation is missing the reports that indicate a problem that has to be addressed right away. Options for customizing server notifications let you decide the conditions that trigger alerts, set the level of alerts, and choose the recipients based on each alert's importance.

The only thing worse than receiving too many notifications is not receiving the one alert that would keep a small glitch from becoming a big problem. Preventing over-notification requires fine-tuning system alerts so that the right people find out about problems and potential problems at the right time. Here's a three-step approach to customizing alerts:

Categories: Architecture

boot2docker on xhyve

Xebia Blog - Mon, 06/15/2015 - 09:00

xhyve is a new hypervisor in the vein of KVM on Linux and bhyve on BSD. It's actually a port of BSD's bhyve to OS X, and it's more similar to KVM than to VirtualBox in that it's minimal and command-line only, which makes it a good fit for an always-running virtual machine like boot2docker on OS X.

This post documents the steps to get boot2docker running within xhyve and contains some quick benchmarks to compare xhyve's performance with VirtualBox.

Read the full blogpost on Simon van der Veldt's website

Using UIPageViewControllers with Segues

Xebia Blog - Mon, 06/15/2015 - 07:30

I've always wondered how to configure a UIPageViewController much like you configure a UITabBarController inside a Storyboard. It would be nice to create standard embed segues to the view controllers that make up the pages. Unfortunately, such a thing is not possible and currently you can't do a whole lot in a Storyboard with page view controllers.

So I thought I'd create a custom way of connecting pages through segues. This post explains how.

Without segues

First let's create an example without using segues and then later we'll try to modify it to use segues.

[Screenshot: Storyboard with the page view controller scene and the content view controller scene]

In the above Storyboard we have 2 scenes: one for the page view controller and another for the individual pages, the content view controller. The page view controller will have 4 pages that each display their page number. It's about as simple as a page view controller can get.

Below is the code of our simple content view controller:

class MyContentViewController: UIViewController {

  @IBOutlet weak var pageNumberLabel: UILabel!

  var pageNumber: Int!

  override func viewDidLoad() {
    super.viewDidLoad()
    pageNumberLabel.text = "Page \(pageNumber)"
  }
}

That means that our page view controller just needs to create an instance of MyContentViewController for each page and set its pageNumber. And since there is no segue between the two scenes, the only way to create an instance of MyContentViewController is programmatically, with UIStoryboard.instantiateViewControllerWithIdentifier(_:). Of course, for that to work we need to give the content view controller scene an identifier. We'll choose MyContentViewController to match the name of the class.

Our page view controller will look like this:

class MyPageViewController: UIPageViewController {

  let numberOfPages = 4

  override func viewDidLoad() {
    super.viewDidLoad()

    setViewControllers([createViewController(1)], 
      direction: .Forward, 
      animated: false, 
      completion: nil)

    dataSource = self
  }

  func createViewController(pageNumber: Int) -> UIViewController {
    let contentViewController = 
      storyboard?.instantiateViewControllerWithIdentifier("MyContentViewController") 
      as! MyContentViewController
    contentViewController.pageNumber = pageNumber
    return contentViewController
  }

}

extension MyPageViewController: UIPageViewControllerDataSource {
  func pageViewController(pageViewController: UIPageViewController, viewControllerAfterViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber, 
      numberOfPages) + 1)
  }

  func pageViewController(pageViewController: UIPageViewController, viewControllerBeforeViewController viewController: UIViewController) -> UIViewController? {
    return createViewController(
      mod((viewController as! MyContentViewController).pageNumber - 2, 
      numberOfPages) + 1)
  }
}

func mod(x: Int, m: Int) -> Int {
  let r = x % m
  return r < 0 ? r + m : r
}

Here we created the createViewController(_:) method, which takes the page number as argument, creates an instance of MyContentViewController, and sets its page number. This method is called from viewDidLoad() to set the initial page and from the two UIPageViewControllerDataSource methods that we're implementing here to get the next and previous pages.

The custom mod(_:_:) function is used to get continuous page navigation, where the user can go from the last page to the first and vice versa (the built-in % operator does not perform the true modulus operation that we need here).
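
To make the wrap-around concrete, here's what the expression from the "before" data source method evaluates to when the user pages back from page 1 (so pageNumber - 2 == -1, with numberOfPages == 4):

mod(-1, 4) + 1  // 4: wraps around to the last page
(-1 % 4) + 1    // 0: the built-in % returns -1 here, which would be an invalid page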

Using segues

The above sample is pretty simple. So how can we change it to use a segue? First of all we need to create a segue between the two scenes. Since we're not doing anything standard here, it will have to be a custom segue. Now we need a way to get an instance of our content view controller through the segue, which we can use from our createViewController(_:). The only method we can use to do anything with the segue is UIViewController.performSegueWithIdentifier(_:sender:). We know that calling that method will create an instance of our content view controller, which is the destination of the segue, but we then need a way to get this instance back into our createViewController(_:) method. The only place we can reference the new content view controller instance is from within the custom segue. From its init method we can assign it to a variable which createViewController(_:) can also access. That looks as follows. First we create the variable:

  var nextViewController: MyContentViewController?

Next we create a new custom segue class that assigns the destination view controller (the new MyContentViewController) to this variable.

public class PageSegue: UIStoryboardSegue {

  public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
    super.init(identifier: identifier, source: source, destination: destination)
    if let pageViewController = source as? MyPageViewController {
      pageViewController.nextViewController = 
        destinationViewController as? MyContentViewController
    }
  }

  public override func perform() {}
    
}

Since we're only interested in getting the reference to the created view controller, we don't need to do anything extra in the perform() method. And the page view controller itself will handle displaying the pages, so our segue implementation remains pretty simple.

Now we can change our createViewController(_:) method:

func createViewController(pageNumber: Int) -> UIViewController {
  // Performing the segue runs PageSegue's init, which assigns
  // the new instance to nextViewController before this call returns.
  performSegueWithIdentifier("Page", sender: self)
  nextViewController!.pageNumber = pageNumber
  return nextViewController!
}

The method looks a bit odd, since we're never assigning nextViewController anywhere in this view controller; the custom segue does that for us. And we're relying on the fact that the segue is performed synchronously by the performSegueWithIdentifier(_:sender:) call. Otherwise this wouldn't work.

Now we can create the segue in our Storyboard. We need to give it the same identifier as we used above ("Page") and set the Segue Class to PageSegue.

[Screenshot: the custom segue in the Storyboard, with its identifier and Segue Class set]

Generic class

Now we can finally create segues to visualise the relationship between the page view controller and the content view controller. But let's see if we can write a generic class that contains most of the logic, which we can reuse for each UIPageViewController.

We'll create a class called SeguePageViewController which will be the super class for our MyPageViewController. We'll move the PageSegue to the same source file and refactor some code to make it more generic:

public class SeguePageViewController: UIPageViewController {

    var pageSegueIdentifier = "Page"
    var nextViewController: UIViewController?

    public func createViewController(sender: AnyObject?) -> UIViewController {
        performSegueWithIdentifier(pageSegueIdentifier, sender: sender)
        return nextViewController!
    }

}

public class PageSegue: UIStoryboardSegue {

    public override init!(identifier: String?, source: UIViewController, destination: UIViewController) {
        super.init(identifier: identifier, source: source, destination: destination)
        if let pageViewController = source as? SeguePageViewController {
            pageViewController.nextViewController = destinationViewController as? UIViewController
        }
    }

    public override func perform() {}
    
} 

As you can see, we moved the nextViewController variable and createViewController(_:) to this class and use UIViewController instead of our concrete MyContentViewController class. We also introduced a new variable pageSegueIdentifier to be able to change the identifier of the segue.

The only thing missing now is setting the pageNumber of our MyContentViewController. Since we just made things generic we can't set it from here, so what's the best way to deal with that? You might have noticed that the createViewController(_:) method now has a sender: AnyObject? as argument, which in our case is still the page number. And we know another method that receives this sender object: prepareForSegue(_:sender:). All we need to do is implement this in our MyPageViewController class.

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
  if segue.identifier == pageSegueIdentifier {
    let vc = segue.destinationViewController as! MyContentViewController
    vc.pageNumber = sender as! Int
  }
}

If you're surprised to see the sender argument being used this way, you might want to read my previous post about Understanding the ‘sender’ in segues and use it to pass on data to another view controller.

Conclusion

Whether or not you'd want to use segues for page view controllers might be a matter of personal preference. It doesn't give any huge benefits, but it does give you a visual indication of what's happening in the Storyboard. It doesn't really save you any code, since the amount of code in MyPageViewController remained about the same. We just replaced the createViewController(_:) method with the prepareForSegue(_:sender:) method.

It would be nice if Apple offered a better solution that depends less on the data source of a page view controller and lets you define the paging logic directly in the Storyboard.