Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Architecture

Stockholm and Bucharest

Coding the Architecture - Simon Brown - Fri, 05/08/2015 - 16:41

Following on from CRAFT and SATURN last month, the conference tour continues in May with visits to Stockholm and Bucharest.

First up is DevSum in Stockholm, Sweden, where I'll be speaking about Agility and the essence of software architecture. This session looks at my approach to doing "just enough up front" design and how to introduce these techniques into software development teams.

Later during the same week I'll be delivering the opening keynote at the I T.A.K.E. Unconference in Bucharest, Romania. My talk here is called Software architecture as code. It's about rethinking the way we describe software architecture, and how to create a software architecture model as code for living, up-to-date software architecture documentation.

See you in a few weeks.

Categories: Architecture

Task Management for Teams

I’m a fan of monthly plans for meaningful work.

Whether you call it a task list or a To-Do list or a product backlog, it helps to have a good view of the things that you’ll invest your time in.

I’m not a fan of everybody trying to make sense of laundry lists of cells in a spreadsheet.

Time changes what’s important and it’s hard to see the forest for the trees, among rows of tasks that all start to look the same.

One of the most important things I’ve learned to do is to map out work for the month in a more meaningful way.

It works for individuals.  It works for teams.  It works for leaders.

It’s what I’ve used for Agile Results for years on projects small and large, and with distributed teams around the world.  (Agile Results is my productivity method introduced in Getting Results the Agile Way.)

A picture is worth a thousand words, so let’s just look at a sample output and then I’ll walk through it:

[Image: a sample monthly plan]

What I’ve found to be the most effective is to focus on a plan for the month – actually take an hour or two the week before the new month starts.  (In reality, I’ve done this with teams of 10 or more people in 30 minutes or less.  It doesn’t take long if you just dump things fast on the board and keep asking people, “What else is on our minds?”)

Dive in at a whiteboard with the right people in the room and just list out all the top-of-mind, important things – be exhaustive, then prioritize and prune.

You then step back and identify the 3 most important outcomes (3 Wins for the Month).

I make sure each work item has a decent name – focused on the noun – so people can refer to it by name (like mini-initiatives that matter).

I list the work alphabetically by name so it’s easy to manage a large list of very different things.

That’s the key.

Most people try to prioritize the list, but the reality is, you can use each week to pick off the high-value items.   (This is really important.  Most people spend a lot of time prioritizing and re-prioritizing lists, and yet people tend to be pretty good at prioritizing when they have a quick list to evaluate – especially if they know the priorities for the month and any pressing events or deadlines.   This is where clarity pays off.)

The real key is listing the work in alphabetical order so that it’s easy to scan, easy to add new items, and easy to spot duplicates.

Plus, it forces you to actually name the work and treat it more like a thing, and less like some fuzzy idea that’s out there.
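
To make this concrete, a month’s map might look something like the following (a made-up illustration, not the sample from the original image):

3 Wins for the Month:
1. Ship the billing dashboard beta
2. Cut the build time in half
3. Publish the customer case study

Work for the Month (alphabetical):
- Billing dashboard
- Build pipeline cleanup
- Customer case study
- Docs refresh
- Hiring loop for senior devs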

I could go on and on about the benefits, but here are a few of the things that really matter:

  1. It’s super simple.   By keeping it simple, you can actually do it.   It’s the doing, not just the knowing that matters in the end.
  2. It chops big work down to size.   At the same time, it’s easy to quickly right-size.  Rather than bog down in micro-management, this simple list makes it easy to simply list out the work that matters.
  3. It gets everybody in the game.   Everybody gets to look at a whiteboard and plan what a great month will look like.  They get to co-create the journey and dream up what success will look like.   A surprising thing happens when you just identify Three Wins for the Month.

I find a plan for the month is the most useful.   If you plan a month well, the weeks have a better chance of taking care of themselves.   But if you only plan for the week or every two weeks, it’s easy to lose sight of the bigger picture, and the next thing you know, the months go by.  You’re busy, things happen, but the work doesn’t always accrue to something that matters.

This is a simple way to have more meaningful months.

I also can’t say it enough, that it’s less about having a prioritized list, and more about having an easy to glance at map of the work that’s in-flight.   I’m glad the map of the US is not a prioritized list by states.  And I’m glad that the states are well named.  It makes it easy to see the map.  I can then prioritize and make choices on any trip, because I actually have a map to work from, and I can see the big picture all at once, and only zoom in as I need to.

The big idea behind planning tasks and To-Do lists this way is to empower people to make better decisions.

The counter-intuitive part is first exposing a simple view of the map of the work, so it’s easy to see; this is what enables simpler prioritization when you need it, regardless of which prioritization scheme you use or which workflow management tool you plug into.

And, nothing stops you from putting the stuff into spreadsheets or task management tools afterwards, but the high-value part is the forming and storming and norming around the initial map of the work for the month, so more people can spend their time performing.

May the power of a simple information model help you organize, prioritize, and optimize your outcomes in a more meaningful way.

If you need a deeper dive on this approach, and a basic introduction to Agile Results, here is a good getting started guide for Agile Results in action.

Categories: Architecture, Programming

Minimal Viable UX

Xebia Blog - Wed, 05/06/2015 - 20:38

An approach to incorporate UX into the LEAN principle.

User Experience is often interpreted as a process where the ‘UX guru’ holds the ultimate truth in designing for an experience. The guru likes to keep control of his design and doesn’t want to feel less valuable by adopting advice from non-designers, for fear of becoming a pixel pusher.

When UX is adopted in a LEAN way, feedback from team members minimizes the risk of the team going down the wrong path. It prevents the guru from perfecting a design whose constraints will only become clear over time, drifting away from customer needs. Interaction with the team speeds up development by giving early insight.

Design for User Experience

UX has many different definitions; in the end, it enables the user to perform a task with the help of an interface. All disciplines in a software development team should be aware of the user they are designing or developing for, starting in Sprint Zero. UX is not about setting up mockups, wireframes, prototypes and providing designs; it has to be part of the team culture, something every team member can contribute to. We are trying to solve problems, and problems are not solved with design documentation but with efficient, elegant and sophisticated software.

How to get there

Create user awareness

Being aware of the user helps reduce waste and keeps you focused on things you should care about: functionality that adds value in the perception of the customer.

First, use a set of personas, put them on a wall and let your team members align those users with the functionality they are building. Developers can reflect functionality, interaction designers can optimize interface elements and visual designers can align styling with the user.

Second, use a customer journey map. This is a powerful tool. It helps in creating context, gives an overview of the user experience and helps to find gaps.

Prototype quickly

Prototyping becomes easier by the day, thanks to the amount and quality of tools out there. Prototyping can be done on paper, with mockups (Balsamiq) or with a web framework such as FramerJS. Pick the type you prefer, one that suits the situation and has the appropriate depth.

Diagram of the iterative design and critique process. Warfel, Todd Zaki. 2009. Prototyping: A Practitioner’s Guide. New York: Rosenfeld Media.

Use small portions of prototypes and validate those with a minimal set of users. This helps you deliver faster, which again eliminates waste and improves built-in quality. Iterative design amplifies learning. KISS!

Communicate

Involved parties need to be convinced that what you are saying is based on business needs, the product and the people. You need to befriend and understand all involved parties in order to make it work across the board. Besides that, don’t forget your best friends, the users.

If you don't talk to your customers, how will you know how to talk to your customers? - Will Evans

Varnish Goes Upstack with Varnish Modules and Varnish Configuration Language

This is a guest post by Denis Brækhus and Espen Braastad, developers on the Varnish API Engine from Varnish Software. Varnish has long been used in discriminating backends, so it's interesting to see what they are up to.

Varnish Software has just released Varnish API Engine, a high performance HTTP API Gateway which handles authentication, authorization and throttling all built on top of Varnish Cache. The Varnish API Engine can easily extend your current set of APIs with a uniform access control layer that has built in caching abilities for high volume read operations, and it provides real-time metrics.

Varnish API Engine is built using well known components like memcached, SQLite and most importantly Varnish Cache. The management API is written in Python. A core part of the product is written as an application on top of Varnish using VCL (Varnish Configuration Language) and VMODs (Varnish Modules) for extended functionality.

We would like to use this as an opportunity to show how you can create your own flexible yet still high performance applications in VCL with the help of VMODs.

VMODs (Varnish Modules)
Categories: Architecture

Elements of Scale: Composing and Scaling Data Platforms

This is a guest repost of Ben Stopford's epic post on Elements of Scale: Composing and Scaling Data Platforms. A masterful tour through the evolutionary forces that shape how systems adapt to key challenges.

As software engineers we are inevitably affected by the tools we surround ourselves with. Languages, frameworks, even processes all act to shape the software we build.

Likewise databases, which have trodden a very specific path, inevitably affect the way we treat mutability and share state in our applications.

Over the last decade we’ve explored what the world might look like had we taken a different path. Small open source projects try out different ideas. These grow. They are composed with others. The platforms that result utilise suites of tools, with each component often leveraging some fundamental hardware or systemic efficiency. The result: platforms that solve problems too unwieldy or too specific to work within any single tool.

So today’s data platforms range greatly in complexity, from simple caching layers or polyglot persistence right through to wholly integrated data pipelines. There are many paths. They go to many different places. In some of these places at least, nice things are found.

So the aim for this talk is to explain how and why some of these popular approaches work. We’ll do this by first considering the building blocks from which they are composed. These are the intuitions we’ll need to pull together the bigger stuff later on.

Categories: Architecture

The Innovation Revolution (A Time of Radical Transformation)

It was the best of times, it was the worst of times …

It’s not A Tale of Two Cities.   It’s a tale of the Innovation Revolution.

We’ve got real problems worth solving.  The stakes are high.  Time is short.  And abstract answers are not good enough.

In the book, Ten Types of Innovation: The Discipline of Building Breakthroughs, Larry Keeley, Helen Walters, Ryan Pikkel, and Brian Quinn explain how it is like A Tale of Two Cities in that it is the worst of times and it is the best of times.

But it is also like no other time in history.

It’s an Innovation Revolution … We have the technology and we can innovate our way through radical transformation.

The Worst of Times (Innovation Has Big Problems to Solve)

We’ve got some real problems to solve, whether it’s health issues, poverty, crime, or ignorance.  Duty calls.  Will innovation answer?

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“People expect very little good news about the wars being fought (whether in Iraq, Afghanistan, or on Terror, Drugs, Poverty, or Ignorance).  The promising Arab Spring has given way to a recurring pessimism about progress.  Gnarly health problems are on a tear the world over--diabetes now affects over eight percent of Americans--and other expensive disease conditions such as obesity, heart disease, and cancer are also now epidemic.  The cost of education rises like a runaway helium balloon, yet there is less and less evidence that it nets the students a real return on their investment.  Police have access to ever more elaborate statistical models of crime, but there is still way too much of it.  And global warming steadily produces more extreme and more dangerous conditions the world over, yet according to about half of our elected 'leaders,' it is still, officially, only a theory that can conveniently be denied.”

The Best of Times (Innovation is Making Things Happen)

Innovation has been answering.  There have been amazing innovations heard round the world.  It’s only the beginning for an Innovation Revolution.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“And yet ...

We steadily expect more from our computers, our smartphones, apps, networks, and games.  We have grown to expect routine and wondrous stories of new ventures funded through crowdsourcing.  We hear constantly of lives around the world transformed because of Twitter or Khan Academy or some breakthrough discovery in medicine.  Esther Duflo and her team at the Poverty Action Lab at MIT keep cracking tough problems that afflict the poor to arrive at solutions with demonstrated efficacy, and then, often the Gates Foundation or another philanthropic institution funds the transformational solution at unprecedented scale.

Storytelling is in a new golden age--whether in live events, on the radio, or in amazing new television series that can emerge anywhere in the world and be adapted for global tastes.  Experts are now everywhere, and shockingly easy and affordable to access.

Indeed, it seems clear that all the knowledge we've been struggling to amass is steadily being amplified and swiftly getting more organized, accessible, and affordable--whether through the magic of elegant little apps or big data managed in ever-smarter clouds or crowdfunding sites used to capitalize creative ideas in commerce or science.”

It’s a Time of Radical Transformation and New, More Agile Institutions

The pace of change and the size of change will accelerate exponentially as the forces of innovation rally together.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“One way to make sense of these opposing conditions is to see us as being in a time of radical transformation.  To see the old institutions as being challenged as a series of newer, more agile ones arise.  In history, such shifts have rarely been bloodless, but this one seems to be a radical transformation in the structure, sources, and nature of expertise.  Indeed, among innovation experts, this time is one like no other.  For the very first time in history, we are in a position to tackle tough problems with ground-breaking tools and techniques.”

It’s time to break some ground.

Join the Innovation Revolution and crack some problems worth solving.

You Might Also Like

How To Get Innovation to Succeed Instead of Fail

Innovation Life Cycle

Management Innovation is at the Top of the Innovation Stack

The Drag of Old Mental Models on Innovation and Change

The Myths of Business Model Innovation

Categories: Architecture, Programming

How To Get Innovation to Succeed Instead of Fail

“Because the purpose of business is to create a customer, the business enterprise has two–and only two–basic functions: marketing and innovation. Marketing and innovation produce results; all the rest are costs. Marketing is the distinguishing, unique function of the business.” – Peter Drucker

I’m diving deeper into patterns and practices for innovation.

Along the way, I’m reading and re-reading some great books on the art and science of innovation.

One innovation book I’m seriously enjoying is Ten Types of Innovation: The Discipline of Building Breakthroughs by Larry Keeley, Helen Walters, Ryan Pikkel, and Brian Quinn.

Right up front, Larry Keeley shares some insight into the journey to this book.  He says that this book really codifies, structures, and simplifies three decades of experience from Doblin, a consulting firm focused on innovation.

For more than three decades, Doblin tried to answer the following question:

“How do we get innovation to succeed instead of fail?” 

Along the journey, there were a few ideas that they used to bridge the gap in innovation between the state of the art and the state of the practice.

Here they are …

Balance 3 Dimensions of Innovation (Theoretical Side + Academic Side + Applied Side)

Larry Keeley and his business partner Jay Doblin, a design methodologist, always balanced three dimensions of innovation: a theoretical side, an academic side, and an applied side.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“Over the years we have kept three important dimensions in dynamic tension.  We have a theoretical side, where we ask and seek real answers to tough questions about innovation.  Simple but critical ones like, 'Does brainstorming work?' (it doesn't), along with deep and systemic ones like, 'How do you really know what a user wants when the user doesn't know either?'  We have an academic side, since many of us are adjunct professors at Chicago's Institute of Design and this demands that we explain our ideas to smart young professionals in disciplined, distinctive ways.  And third, we have an applied side, in that we have been privileged to adapt our innovation methods to many of the world's leading global enterprises and start-ups that hanker to be future leading firms.”

Effective Innovation Needs a Blend of Analysis + Synthesis

Innovation is a balance and blend of analysis and synthesis.  Analysis involves tearing things down, while synthesis is building new things up.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“From the beginning, Doblin has itself been interdisciplinary, mixing social sciences, technology, strategy, library sciences, and design into a frothy admixture that has always tried to blend both analysis, breaking tough things down, with synthesis, building new things up.  Broadly, we think any effective innovation effort needs plenty of both, stitched together as a seamless whole.”

Orchestrate the Ten Types of Innovation to Make a Game-Changing Innovation

Game-changing innovation is an orchestration of the ten types of innovation.

Via Ten Types of Innovation: The Discipline of Building Breakthroughs:

“The heart of this book is built around a seminal Doblin discovery: that there are (and have always been) ten distinct types of innovation that need to be orchestrated with some care to make a game-changing innovation.”

The main idea is that innovation fails if you try to solve it with just one dimension.

You can’t just take a theoretical approach, and hope that it works in the real-world.

At the same time, innovation fails if you don’t leverage what we learn from the academic world and actually apply it.

And, if you know the ten types of innovation, you can focus your efforts more precisely.

You Might Also Like

Innovation Life Cycle

Management Innovation is at the Top of the Innovation Stack

No Slack = No Innovation

The Drag of Old Mental Models on Innovation and Change

The Myths of Business Model Innovation

Categories: Architecture, Programming

How to deploy an ElasticSearch cluster using CoreOS and Consul

Xebia Blog - Sun, 05/03/2015 - 13:39

The hot potato in the room of containerized solutions is persistent services. Stateless applications are easy and trivial, but deploying a persistent service like ElasticSearch is a totally different ball game. In this blog post we will show you how easy it is to create ElasticSearch clusters on this platform. The key is the ability to look up the external IP addresses and port numbers of all cluster members in Consul, together with the reusable power of CoreOS unit file templates. The presented solution is a ready-to-use ElasticSearch component for your application.

This solution:

  • uses ephemeral ports so that we can actually run multiple ElasticSearch nodes on the same host
  • mounts persistent storage under each node to prevent data loss on server crashes
  • uses the power of the CoreOS unit template files to deploy new ElasticSearch clusters.


In the previous blog posts we defined our High Available Docker Container Platform using CoreOS and Consul and showed how to add persistent storage to a Docker container.

Once this platform is booted, the only thing you need to do to deploy an ElasticSearch cluster is to submit the following fleet unit template file elasticsearch@.service and start 3 or more instances.

Booting the platform

To see the ElasticSearch cluster in action, first boot up our CoreOS platform.

git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service
cd coreos-container-platform-as-a-service/vagrant
vagrant up
./is_platform_ready.sh
Starting an ElasticSearch cluster

Once the platform is started, submit the elasticsearch unit file and start three instances:

export FLEETCTL_TUNNEL=127.0.0.1:2222
cd ../fleet-units/elasticsearch
fleetctl submit elasticsearch@.service
fleetctl start elasticsearch@{1..3}

Now wait until all elasticsearch instances are running by checking the unit status.

fleetctl list-units
...
UNIT            MACHINE             ACTIVE  SUB
elasticsearch@1.service f3337760.../172.17.8.102    active  running
elasticsearch@2.service ed181b87.../172.17.8.103    active  running
elasticsearch@3.service 9e37b320.../172.17.8.101    active  running
mnt-data.mount      9e37b320.../172.17.8.101    active  mounted
mnt-data.mount      ed181b87.../172.17.8.103    active  mounted
mnt-data.mount      f3337760.../172.17.8.102    active  mounted
Create an ElasticSearch index

Now that the ElasticSearch cluster is running, you can create an index to store data.

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/ -d \
     '{ "settings" : { "index" : { "number_of_shards" : 3, "number_of_replicas" : 2 } } }'
Insert a few documents
curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/1 -d@- <<!
{
    "first_name" : "John",
    "last_name" :  "Smith",
    "age" :        25,
    "about" :      "I love to go rock climbing",
    "interests": [ "sports", "music" ]
}
!

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/2 -d@- <<!
{
    "first_name" :  "Jane",
    "last_name" :   "Smith",
    "age" :         32,
    "about" :       "I like to collect rock albums",
    "interests":  [ "music" ]
}
!

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/3 -d@- <<!
{
    "first_name" :  "Douglas",
    "last_name" :   "Fir",
    "age" :         35,
    "about":        "I like to build cabinets",
    "interests":  [ "forestry" ]
}
!
And query the index
curl -XGET http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/_search?q=last_name:Smith
...
{
  "took": 50,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
  ...
}

Restarting the cluster

Even when you restart the entire cluster, your data is persisted.

fleetctl stop elasticsearch@{1..3}
fleetctl list-units

fleetctl start elasticsearch@{1..3}
fleetctl list-units

curl -XGET http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/_search?q=last_name:Smith
...
{
  "took": 50,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
  ...
}

Open the console

Finally, you can see the servers and the distribution of the index in the cluster by opening the console at http://elasticsearch.127.0.0.1.xip.io:8080/_plugin/head/.

[Image: the ElasticSearch head plugin console]

Deploy other ElasticSearch clusters

Changing the name of the template file is the only thing you need to do to deploy another ElasticSearch cluster.

cp elasticsearch\@.service my-cluster\@.service
fleetctl submit my-cluster\@.service
fleetctl start my-cluster\@{1..3}
curl my-cluster.127.0.0.1.xip.io:8080
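
When an extra cluster is no longer needed, it can be cleaned up with the standard fleetctl subcommands (a sketch; destroy stops the units and removes their unit files from the cluster):

fleetctl destroy my-cluster\@{1..3}.service
fleetctl destroy my-cluster\@.service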
How does it work?

Starting a node in an ElasticSearch cluster is quite trivial, as shown by the command line below:

exec gosu elasticsearch elasticsearch \
    --discovery.zen.ping.multicast.enabled=false \
    --discovery.zen.ping.unicast.hosts=$HOST_LIST \
    --transport.publish_host=$PUBLISH_HOST \
    --transport.publish_port=$PUBLISH_PORT \
     $@

We use the unicast protocol, specifying our own publish host and port and a list of the IP addresses and port numbers of all the other nodes in the cluster.

Finding the other nodes in the cluster

But how do we find the other nodes in the cluster? That is quite easy: we query the Consul REST API for all entries with the same service name that are tagged as "es-transport". This is the service exposed by ElasticSearch on port 9300.

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport

...
[
    {
        "Node": "core-03",
        "Address": "172.17.8.103",
        "ServiceID": "elasticsearch-1",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49170
    },
    {
        "Node": "core-01",
        "Address": "172.17.8.101",
        "ServiceID": "elasticsearch-2",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49169
    },
    {
        "Node": "core-02",
        "Address": "172.17.8.102",
        "ServiceID": "elasticsearch-3",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49169
    }
]

Turning this into a comma seperated list of network endpoints is done using the following jq command:

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport |\
     jq -r '[ .[] | [ .Address, .ServicePort | tostring ] | join(":")  ] | join(",")'
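
Applied to the example service catalog above, this yields a host list like:

172.17.8.103:49170,172.17.8.101:49169,172.17.8.102:49169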
Finding your own network endpoint

As you can see in the above JSON output, each service entry has a unique ServiceID. To obtain our own endpoint, we use the following jq command:

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport |\
     jq -r ".[] | select(.ServiceID==\"$SERVICE_9300_ID\") | .Address, .ServicePort" 
Finding the number of nodes in the cluster

The intended number of nodes in the cluster is determined by counting the number of fleet unit instance files in CoreOS on startup and passing this number in as an environment variable.

TOTAL_NR_OF_SERVERS=$(fleetctl list-unit-files | grep '%p@[^\.][^\.]*.service' | wc -l)

The %p refers to the part of the fleet unit file before the @ sign.
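
Putting the previous fragments together, the container's startup script can wait until all expected nodes have registered in Consul, derive the host list and its own endpoint, and only then start ElasticSearch. Below is a minimal sketch assembled purely from the snippets above – the polling loop and the exact variable handling are assumptions, not the actual start-elasticsearch-clustered.sh:

#!/bin/sh
# Sketch of a clustered startup script (assumed structure; see
# start-elasticsearch-clustered.sh in the repository for the real one).
CONSUL_URL="http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport"

# Wait until the expected number of nodes has registered in Consul.
while [ "$(curl -s "$CONSUL_URL" | jq length)" -lt "$TOTAL_NR_OF_SERVERS" ]; do
    sleep 1
done

# All cluster endpoints as a comma-separated list of host:port pairs.
HOST_LIST=$(curl -s "$CONSUL_URL" | \
    jq -r '[ .[] | [ .Address, .ServicePort | tostring ] | join(":") ] | join(",")')

# Our own externally visible address and port, selected by ServiceID.
PUBLISH_HOST=$(curl -s "$CONSUL_URL" | \
    jq -r ".[] | select(.ServiceID==\"$SERVICE_9300_ID\") | .Address")
PUBLISH_PORT=$(curl -s "$CONSUL_URL" | \
    jq -r ".[] | select(.ServiceID==\"$SERVICE_9300_ID\") | .ServicePort")

exec gosu elasticsearch elasticsearch \
    --discovery.zen.ping.multicast.enabled=false \
    --discovery.zen.ping.unicast.hosts=$HOST_LIST \
    --transport.publish_host=$PUBLISH_HOST \
    --transport.publish_port=$PUBLISH_PORT \
    "$@"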

The Docker run command

The Docker run command is shown below. ElasticSearch exposes two ports: port 9200 exposes a REST API to the clients, and port 9300 is used as the transport protocol between nodes in the cluster. Each port is a service and is tagged appropriately.

ExecStart=/bin/sh -c "/usr/bin/docker run --rm \
    --name %p-%i \
    --env SERVICE_NAME=%p \
    --env SERVICE_9200_TAGS=http \
    --env SERVICE_9300_ID=%p-%i \
    --env SERVICE_9300_TAGS=es-transport \
    --env TOTAL_NR_OF_SERVERS=$(fleetctl list-unit-files | grep '%p@[^\.][^\.]*.service' | wc -l) \
    -P \
    --dns $(ifconfig docker0 | grep 'inet ' | awk '{print $2}') \
    --dns-search=service.consul \
    cargonauts/consul-elasticsearch"

The options are explained in the table below:

--env SERVICE_NAME=%p – The name of this service to be advertised in Consul, resulting in a FQDN of %p.service.consul; it is also used as the cluster name. %p refers to the first part of the fleet unit template file, up to the @.

--env SERVICE_9200_TAGS=http – The tag assigned to the service at port 9200. This is picked up by the http-router, so that any HTTP traffic to the host elasticsearch is directed to this port.

--env SERVICE_9300_ID=%p-%i – The unique id of this service in Consul. This is used by the startup script to find its own external port and IP address in Consul, and it is used as the node name for the ES server. %i refers to the second part of the fleet unit file, up to the .service.

--env SERVICE_9300_TAGS=es-transport – The tag assigned to the service at port 9300. This is used by the startup script to find the other servers in the cluster.

--env TOTAL_NR_OF_SERVERS=$(...) – The number of submitted unit files is counted and passed in as the environment variable TOTAL_NR_OF_SERVERS. The start script will wait until this number of servers is actually registered in Consul before starting the ElasticSearch instance.

--dns $(...) – Set DNS to query on the docker0 interface, where Consul is bound on port 53. (The docker0 interface IP address is chosen at random from a specific range.)

--dns-search=service.consul – The default DNS search domain.

Sources

The sources for the ElasticSearch repository can be found on github.

start-elasticsearch-clustered.sh – complete startup script of elasticsearch
elasticsearch – CoreOS fleet unit files for the elasticsearch cluster
consul-elasticsearch – sources for the Consul ElasticSearch repository

Conclusion

CoreOS fleet template unit files are a powerful way of deploying ready-to-use components for your platform. If you want to deploy cluster-aware applications, a service registry like Consul is essential.

Software architecture as code

Coding the Architecture - Simon Brown - Sat, 05/02/2015 - 11:40

A quick note to say that the video from my Software architecture as code talk at CRAFT 2015 in Budapest, Hungary last week is available to view online. This talk looks at why the software architecture model never quite matches the code, discusses architecturally-evident coding styles as a way to address this and shows how something like my Structurizr for Java library can be used to create a living software architecture model based upon a combination of extracting elements from the code and supplementing the model where this isn't possible.

The slides are also available to view online/download. Enjoy!

Categories: Architecture

Next-gen Web Apps with Isomorphic JavaScript

Xebia Blog - Fri, 05/01/2015 - 20:54

The web application landscape has recently seen a big shift in application architecture. Nowadays we build so-called Single Page Applications. These are web applications which render and run in the browser, powered by JavaScript. They are called “Single Page” because in such an application the browser never actually switches between pages. All interaction takes place within a single HTML document. This is great because users will not see a “flash of white” whenever they perform an action, so all interaction feels much more fluid and natural. The application seems to respond much quicker, which has a positive effect on user experience and conversion of the site. Unfortunately Single Page Applications also have several big drawbacks, mostly concerning the initial loading time and poor rankings in search engines.

Continue reading on Medium »

Stuff The Internet Says On Scalability For May 1st, 2015

Hey, it's HighScalability time:


Got containers? Gorgeous shot of the CSCL Globe (by Walter Scriptunas II), world's largest container ship: 1,313ft long; 19,000 standard containers.
  • $3000: Tesla's new 7kWh daily cycle battery.
  • Quotable Quotes:
    • @mamund: "Turns out there is nothing about HTTP that I like" --  Douglas Crockford 
    • @PeterChch: Your little unimportant site might be hacked not for your data but for your aws resources. E.g. bitcoin mining.
    • @Joseph_DeSimone: I find it stunning that Google's annual R&D budget totaled $9.8 billion and the Budget for the National Science Foundation was $7.3 billion
    • @jedberg: The new EC2 container service adds the missing granularity to #ec2
    • Randy Shoup: “Every service at Google is either deprecated or not ready yet.”  -- Google engineering proverb
    • @mtnygard: Today the ratio of admins to servers in a well-behaved scalable web companies is about 1 to 10,000. @botchagalupe #craftconf
    • @joshk: Data: There Are Over 9x More Private IPOs Than Actual Tech IPOs 
    • @nwjsmith: “Systems are not algorithms. Systems are much more complex.“ #CraftConf @skamille
    • kk: “Because the center of the universe is wherever there is the least resistance to new ideas.”
    • John Allspaw: Stop thinking that you’re trying to solve a troubleshooting problem; you’re not. Instead of telling me about how your software will solve problems, show me that you’re trying to build a product that is going to join my team as an awesome team member, because I’m going to think about using/buying your service in the same way that I think about hiring.
    • @mpaluchowski: "Netflix is a #logging system that happens to play movies." #CraftConf
    • John Wilke:  Resiliency is more important than performance.
    • @peakscale: The server/cattle metaphor rubs me the wrong way... all the farmers I knew and worked for named and cared about their herd.
    • @aphyr: "We've managed to run 40 services in prod for three years without needing to introduce a consensus system" @skamille, #CraftConf
    • @ryantomlinson: “Spotify have been using DNS for service discovery for a long time” #CraftConf
    • @csanchez: Google "we start over 2 billion containers per week" containers, containers, containers! #qconlondon 
    • @tyler_treat: If you're using RabbitMQ, consider replacing it with Kafka. Higher throughput, better replication, replayability. Same goes for other MQs.
    • @tastapod: @botchagalupe telling #CraftConf how it is! “Yelp is spinning up 8 containers a second. This is the real sh*t, man!”
    • @mpaluchowski: "A static #alert threshold won't be any good next week. It must be calculated." #CraftConf
    • @mtnygard: #craftconf @randyshoup “Microservices are an answer to a scaling problem, not a business problem.”  So right.
    • @adrianco: @mtnygard @randyshoup speed of development is the business problem that leads to Microservices.
    • @b6n: the aws financials should be a wake-up call to anyone still thinking cloud isn't a game of raw scale
    • @mtnygard: The “edge” used to be top-of-rack. Then the hypervisor. Now it’s the container. That’s 100x the number of IPs. — @botchagalupe #craftconf
    • @idajantis: 'An escalator can never break; it can only become stairs' - nice one by @viktorklang at #CraftConf on Distributed Systems failing gracefully
    • @jessitron: "You should store your data in a real database and replicate it to Elasticsearch." @aphyr #CraftConf

  • A telling difference between Google and Apple: Google Now becomes a more robust platform with 70 new partner apps. Apple takes an app-centric view of the world and Google not surprisingly takes a data centric view. With Google developers feed Google data for Google to display. With Apple developers feed Apple apps for users to consume. On Apple developers push their own brand and control functionality through bundled extensions, but Google will have the perspective to really let their deep learning prowess sing. So there's a real choice.

  • How appropriate that game theory is applied to cyberwarfare. Mutually Assured Destruction isn't just for nukes. Pentagon Announces New Strategy for Cyberwarfare: “Deterrence is partially a function of perception,” the new strategy says. “It works by convincing a potential adversary that it will suffer unacceptable costs if it conducts an attack on the United States, and by decreasing the likelihood that a potential adversary’s attack will succeed.”

  • Reducing big data using ideas from quantum theory makes it easier to interpret. So maybe QM is nature's way of making sense of the BigData that is the Universe?

  • Synergy is not always BS. Cheaper bandwidth or bust: How Google saved YouTube: YouTube was burning through $2 million a month in bandwidth costs before the acquisition. What few knew at the time was that Google was a pioneer in data center technology, which allowed it to dramatically lower the costs of running YouTube.

  • In a winner take all market is the cost of customer acquisition pyrrhic? Uber Burning $750 Million in a Year.

  • The cloud behind the cloud. Apple details how it rebuilt Siri on Mesos: Apple’s custom Mesos scheduler is called J.A.R.V.I.S.; Apple uses J.A.R.V.I.S. as its internal platform-as-a-service; Apple’s Mesos cluster spans thousands of nodes and runs about a hundred services; Siri’s Mesos backend represents its third generation, and a move away from “traditional” infrastructure.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

Categories: Architecture

Paper: DNACloud: A Tool for Storing Big Data on DNA

"From the dawn of civilization until 2003, humankind generated five exabytes (1 exabytes = 1 billion gigabytes) of data. Now we produce five exabytes every two days and the pace is accelerating."

-- Eric Schmidt, Executive Chairman, Google, August 4, 2010. 

 

Where are we going to store the deluge of data everyone is warning us about? How about in a DNACloud that can store 1 petabyte of information per gram of DNA?

Writing is a little slow. You have to convert your data file to a DNA description that is sent to a biotech company that will send you back a vial of synthetic DNA. Where do you store it? Your refrigerator.

Reading is a little slow too. The data can apparently be read with great accuracy, but to read it you have to sequence the DNA first, and that might take a while.

The how of it is explained in DNACloud: A Tool for Storing Big Data on DNA (poster). Abstract:

The term Big Data is usually used to describe huge amount of data that is generated by humans from digital media such as cameras, internet, phones, sensors etc. By building advanced analytics on the top of big data, one can predict many things about the user such as behavior, interest etc. However before one can use the data, one has to address many issues for big data storage. Two main issues are the need of large storage devices and the cost associated with it. Synthetic DNA storage seems to be an appropriate solution to address these issues of the big data. Recently in 2013, Goldman and his colleagues from European Bioinformatics Institute demonstrated the use of the DNA as storage medium with capacity of storing 1 petabyte of information on one gram of DNA and retrieved the data successfully with low error rate [1]. This significant step shows a promise for synthetic DNA storage as a useful technology for the future data storage. Motivated by this, we have developed a software called DNACloud which makes it easy to store the data on the DNA. In this work, we present detailed description of the software.

 Related Articles
Categories: Architecture

Sponsored Post: OpenDNS, MongoDB, Internap, Aerospike, SignalFx, InMemory.Net, Couchbase, VividCortex, MemSQL, Scalyr, AiScaler, AppDynamics, ManageEngine, Site24x7

Who's Hiring?
  • The Cloud Platform team at OpenDNS is building a PaaS for our engineering teams to build and deliver their applications. This is a well-rounded team covering software, systems, and network engineering; expect your code to cut across all layers, from the network to the application. Learn More

  • At Scalyr, we're analyzing multi-gigabyte server logs in a fraction of a second. That requires serious innovation in every part of the technology stack, from frontend to backend. Help us push the envelope on low-latency browser applications, high-speed data processing, and reliable distributed systems. Help extract meaningful data from live servers and present it to users in meaningful ways. At Scalyr, you’ll learn new things, and invent a few of your own. Learn more and apply.

  • UI Engineer: AppDynamics, founded in 2008 and led by proven innovators, is looking for a passionate UI Engineer to design, architect, and develop their user interface using the latest web and mobile technologies. Make the impossible possible and the hard easy. Apply here.

  • Software Engineer - Infrastructure & Big Data: AppDynamics, leader in next generation solutions for managing modern, distributed, and extremely complex applications residing in both the cloud and the data center, is looking for Software Engineers (All-Levels) to design and develop scalable software written in Java and MySQL for the backend component of software that manages application architectures. Apply here.
Fun and Informative Events
  • How to Get a Game-Changing Performance Advantage with Intel SSDs and Aerospike. Presenter: Frank Ober, Data Center Solution Architect at Intel Corporation. Wednesday, May 13, 2015 @ 10:00AM PST, 1:00PM PST. Learn how to maximize the price/performance of your Intel Solid-State Drives (SSDs) with Aerospike. Frank Ober of Intel’s Solutions Group will review how he achieved 1+ million transactions per second on a single dual socket Xeon Server with SSDs using the open source tools of Aerospike for benchmarking. Register Now.

  • MongoDB World brings together over 2,000 developers, sysadmins, and DBAs in New York City on June 1-2 to get inspired, share ideas and get the latest insights on using MongoDB. Organizations like Salesforce, Bosch, the Knot, Chico’s, and more are taking advantage of MongoDB for a variety of ground-breaking use cases. Find out more at http://mongodbworld.com/ but hurry! Super Early Bird pricing ends on April 3.
Cool Products and Services
  • SQL for Big Data: Price-performance Advantages of Bare Metal. When building your big data infrastructure, price-performance is a critical factor to evaluate. Data-intensive workloads with the capacity to rapidly scale to hundreds of servers can escalate costs beyond your expectations. The inevitable growth of the Internet of Things (IoT) and fast big data will only lead to larger datasets, and a high-performance infrastructure and database platform will be essential to extracting business value while keeping costs under control. Read more.

  • SignalFx: just launched an advanced monitoring platform for modern applications that's already processing 10s of billions of data points per day. SignalFx lets you create custom analytics pipelines on metrics data collected from thousands or more sources to create meaningful aggregations--such as percentiles, moving averages and growth rates--within seconds of receiving data. Start a free 30-day trial!

  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net

  • Benchmark: MongoDB 3.0 (w/ WiredTiger) vs. Couchbase 3.0.2. Even after the competition's latest update, are they more tired than wired? Get the Report.

  • VividCortex goes beyond monitoring and measures the system's work on your MySQL and PostgreSQL servers, providing unparalleled insight and query-level analysis. This unique approach ultimately enables your team to work more effectively, ship more often, and delight more customers.

  • MemSQL provides a distributed in-memory database for high value data. It's designed to handle extreme data ingest and store the data for real-time, streaming and historical analysis using SQL. MemSQL also cost effectively supports both application and ad-hoc queries concurrently across all data. Start a free 30 day trial here: http://www.memsql.com/

  • aiScaler, aiProtect, aiMobile Application Delivery Controller with integrated Dynamic Site Acceleration, Denial of Service Protection and Mobile Content Management. Also available on Amazon Web Services. Free instant trial, 2 hours of FREE deployment support, no sign-up required. http://aiscaler.com

  • ManageEngine Applications Manager : Monitor physical, virtual and Cloud Applications.

  • www.site24x7.com : Monitor End User Experience from a global monitoring network.

If any of these items interest you there's a full description of each sponsor below. Please click to read more...

Categories: Architecture

How can we Build Better Complex Systems? Containers, Microservices, and Continuous Delivery.

We must be able to create better complex software systems. That’s the message from Mary Poppendieck in a wonderful, far-ranging talk she gave at the Craft Conference: New New Software Development Game: Containers, Micro Services.

The driving insight is that complexity grows nonlinearly with size. The type of system doesn’t really matter, but we know software size will continue to grow, so software complexity will continue to grow even faster.

What can we do about it? The running themes are lowering friction and limiting risk:

  • Lower friction. This allows change to happen faster. Methods: dump the centralizing database; adopt microservices; use containers; better organize teams.

  • Limit risk. Risk is inherent in complex systems. Methods: PACT testing; continuous delivery.

Some key points:

  • When does software really grow? When smart people can do their own thing without worrying about their impact on others. This argues for building federated systems that ensure isolation, which argues for using microservices and containers.

  • Microservices usually grow successfully from monoliths. In creating a monolith developers learn how to properly partition a system.

  • Continuous delivery both lowers friction and lowers risk. In a complex system if you want stability, if you want security, if you want reliability, if you want safety then you must have lots of little deployments. 

  • Every member of a team is aware of everything. That's what makes a winning team. Good situational awareness.

The highlight of the talk for me was the section on the amazing design of the Swedish Gripen Fighter Jet. Talks on microservices tend to be highly abstract. The fun of software is in the building. Talk about parts can be so nebulous. With the Gripen the federated design of the jet as a System of Systems becomes glaringly concrete and real. If you can replace your guns, radar system, and virtually any other component without impacting the rest of the system, that’s something! Mary really brings this part of the talk home. Don’t miss it.

It’s a very rich and nuanced talk; there’s a lot of history and context given, so I can’t capture all the details. Watching the video is well worth the effort. Having said that, here’s my gloss on the talk...

Hardware Scales by Abstraction and Miniaturization
Categories: Architecture

Suprastructure – how come “Microservices” are getting small?

Now, I don’t want to get off on a rant here*, but it seems like “Microservices” are all the rage these days – at least judging from my Twitter, Feedly and Prismatic feeds. I already wrote that in my opinion “Microservices” is just a new name for SOA. I thought I’d give a couple of examples of what I mean.

I worked on systems that today would pass for Microservices years ago (as early as 2004/5). For instance, in 2007 I worked at a startup called xsights. We developed something like Google Goggles for brands (or a barcodeless barcode), so users could snap a picture of an ad/brochure etc. and get relevant content or perks in response (e.g. we had campaigns in Germany with a book publisher where MMSing shots of newspaper ads or outdoor signage resulted in getting information and discounts on the advertised books). The architecture driving that was a set of small, focused, autonomous services. Each service encapsulated its own data store (if it had one), and services were replaceable (e.g. we had MMS, web, apps & 3G video call gateways). We developed the infrastructure to support automatic service discovery and the ability to create ad-hoc long-running interactions, a.k.a. Sagas, that enabled different cooperations between the services (e.g. the flow for a 3G video call needed more services for fulfilment than an app one). You can read a bit about it in the “putting it all together” chapter of my SOA Patterns book or view the presentation I gave at QCon a few years back called “Building reliable systems from unreliable components” (see slides below); both elaborate some more on that system.

Another example is a naval command and control system I designed (along with Udi Dahan) back in 2004 for an unmanned surface vessel (like a drone, but on water). In that system we had services like “Navigation”, which suggested navigation routes based on waypoints and data from other services (e.g. weather); a “Protector” service that handled communications to and from the actual USVs; a “Common Operational Picture” (COP) service that aggregated target data from external services and sensors (e.g. the ones on the protectors); an “Alerts” service where business rules could trigger various actions, etc. These services communicated using events and messages, with flows like: the protector publishes its current position, the COP publishes updated target positions (protector + other targets), the navigation service spots a potential interception problem and publishes that, the alerts service identifies that the threshold for the potential problem is too big and triggers an alert to users, who then initiate a request for alternate navigation plans, etc. Admittedly, some of these services could have been more focused and smaller, but they were still autonomous, with separate storage – and hey, that was 2004 :)

So, what changed in the last decade? For one, I guess that after years of “enterprisy” hype that ruined SOA’s name, the actual architectural style is finally getting some traction (even if it had to change its name for that to happen).

However, this post is not just a rant on Microservices…

The more interesting change is the shift in the role of infrastructure: from a set of libraries and tools that are embedded within the software we write, to larger constructs running outside of the software, running and managing it – or in other words, the emergence of “suprastructure” instead of infrastructure (infra = below, supra = above). It isn’t that infrastructure vanishes, but a lot of functionality is “outsourced” to suprastructure. This is something that started a few years back with PaaS but is (IMHO) getting more acceptance and use in the last couple of years, especially with the gaining popularity of Docker (and, more importantly, its ecosystem).

Consider, for example, the architecture of Appsflyer, which I recently joined. (You can listen to Nir Rubinshtein, our system architect, presenting it (in Hebrew), or check out the slides on speaker-deck or below (English).)

Instead of writing or using elaborate service hosts and application servers, you can host simple apps in Docker; run and schedule them with Mesos; get cluster and discovery services from Consul; and recover from failure by rereading logs from Kafka. Back in the day we also had these capabilities, but we wrote tons of code to make them happen – tons of code that was specific to the solution and technology (and was prone to bugs and problems). For modern solutions, all these capabilities are available almost off the shelf, everywhere: on premise, on clouds, and even across clouds.

The importance of suprastructure in regard to “microservices” is that this “outsourcing” of functionality helps drive down the overhead and costs associated with making services small(er). In previous years, the threshold for slipping from useful services into nanoservices was easy to cross. Today it is almost reversed: you spend the effort of setting up all this suprastructure once, and you actually begin to see the return only when you have enough services to make it worthwhile.

Another advantage of suprastructure is that it makes it easier to build polyglot services, i.e. it is easier to write different services using different technologies. Instead of investing in a lot of technology-specific infrastructure, you can get more generic capabilities from the suprastructure and spend more time solving the business problems using the right tool for the job. It also makes it easier to change and evolve technologies over time – again saving the sunk costs of investing in elaborate infrastructure.

 

Of course, that’s just my opinion, I could be wrong…*

PS – we (Appsflyer) are hiring: UI tech lead, data scientist, senior devs and more… :)

Building reliable systems from unreliable components

* with apologies to Dennis Miller

Categories: Architecture

How to deploy High Available persistent Docker services using CoreOS and Consul

Xebia Blog - Thu, 04/23/2015 - 15:45

Providing High Availability to stateless applications is pretty trivial, as was shown in the previous blog posts A High Available Docker Container Platform and Rolling upgrade of Docker applications using CoreOS and Consul. But how does this work when you have a persistent service like Redis?

In this blog post we will show you how a persistent service like Redis can be moved around on machines in the cluster, whilst preserving the state. The key is to deploy a fleet mount configuration into the cluster and mount the storage in the Docker container that has persistent data.

To support persistency we have added a NAS to our platform architecture in the form of three independent NFS servers which act as our NAS storage, as shown in the picture below.

[Image: CoreOS platform architecture with fake NAS]

The applications are still deployed in the CoreOS cluster as Docker containers. Even our Redis instance is running in a Docker container. Our application is configured using the following three Fleet unit files:

The unit file of the Redis server is the most interesting one because it is our persistence service. In the unit section of the file, it first declares that it requires a mount for '/mnt/data' on which it will persist its data.

[Unit]
Description=app-redis
Requires=mnt-data.mount
After=mnt-data.mount
RequiresMountsFor=/mnt/data

In the start clause of the redis service, a specific subdirectory of /mnt/data is mounted into the container.

...
ExecStart=/usr/bin/docker run --rm \
    --name app-redis \
    -v /mnt/data/app-redis-data:/data \
    -p 6379:6379 \
    redis
...

The mnt-data.mount unit file is quite simple: it defines an NFS mount with the option 'noauto', indicating that the device should not be automatically mounted at boot time. The unit file has the option 'Global=true' so that the mount is distributed to all the nodes in the cluster. The mount is only activated when another unit requests it.

[Mount]
What=172.17.8.200:/mnt/default/data
Where=/mnt/data
Type=nfs
Options=vers=3,sec=sys,noauto

[X-Fleet]
Global=true

Please note that the NFS mount specifies system security (sec=sys) and uses the NFS version 3 protocol, to avoid all sorts of errors surrounding mismatches in user and group IDs between the client and the server.

Preparing the application

To see the failover in action, you need to start the platform and deploy the application:

git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service/vagrant
vagrant up
./is_platform_ready.sh

This will start 3 NFS servers and our 3-node CoreOS cluster. After that is done, you can deploy the application by first submitting the mount unit file:

export FLEETCTL_TUNNEL=127.0.0.1:2222
cd ../fleet-units/app
fleetctl load mnt-data.mount

then starting the Redis service:

fleetctl start app-redis.service

and finally starting a number of instances of the application:

fleetctl submit app-hellodb@.service
fleetctl load app-hellodb@{1..3}.service
fleetctl start app-hellodb@{1..3}.service
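The app-hellodb@.service template unit itself is not listed here. As a rough sketch only (an illustrative reconstruction with an assumed image name and environment variable, not the actual file from the repository), such a fleet template unit could look like this:

[Unit]
Description=app-hellodb %i

[Service]
# Remove any leftover container before starting a fresh one.
ExecStartPre=-/usr/bin/docker rm -f app-hellodb-%i
# The image name 'hellodb' and the REDIS_HOST variable are assumptions.
ExecStart=/usr/bin/docker run --rm \
    --name app-hellodb-%i \
    -e REDIS_HOST=redis.service.consul \
    -P \
    hellodb
ExecStop=-/usr/bin/docker stop app-hellodb-%i

[X-Fleet]
# Spread the instances over different machines.
Conflicts=app-hellodb@*.service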

You can check that everything is running by issuing the fleetctl list-units command. It should show something like this:

fleetctl list-units
UNIT			MACHINE				ACTIVE		SUB
app-hellodb@1.service	8f7472a6.../172.17.8.102	active		running
app-hellodb@2.service	b44a7261.../172.17.8.103	active		running
app-hellodb@3.service	2c19d884.../172.17.8.101	active		running
app-redis.service	2c19d884.../172.17.8.101	active		running
mnt-data.mount		2c19d884.../172.17.8.101	active		mounted
mnt-data.mount		8f7472a6.../172.17.8.102	inactive	dead
mnt-data.mount		b44a7261.../172.17.8.103	inactive	dead

As you can see, three app-hellodb instances are running and the Redis service is running on 172.17.8.101, which is the only host that has /mnt/data mounted. The other two machines have this mount in the status 'dead', which is an unfriendly name for stopped.
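To double-check the mount on the machine itself, you can reuse the vagrant ssh convention used later in this post (a quick sanity check, not part of the original instructions):

vagrant ssh -c "mount | grep /mnt/data" app-redis.service

The output should show the NFS mount of 172.17.8.200:/mnt/default/data on /mnt/data with vers=3.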

Now you can access the app:

yes 'curl hellodb.127.0.0.1.xip.io:8080; echo ' | head -10 | bash
..
Hello World! I have been seen 20 times.
Hello World! I have been seen 21 times.
Hello World! I have been seen 22 times.
Hello World! I have been seen 23 times.
Hello World! I have been seen 24 times.
Hello World! I have been seen 25 times.
Hello World! I have been seen 26 times.
Hello World! I have been seen 27 times.
Hello World! I have been seen 28 times.
Hello World! I have been seen 29 times.
Redis Fail-over in Action

To see the fail-over in action, you start a monitor on a machine that is not running Redis; in our case, that is the machine running app-hellodb@1.

vagrant ssh -c \
   "yes 'curl --max-time 2 hellodb.127.0.0.1.xip.io; sleep 1 ' | \
    bash" \
    app-hellodb@1.service

Now restart the machine running Redis:

vagrant ssh -c "sudo shutdown -r now" app-redis.service

After you have restarted the machine running Redis, the output should look something like this:

...
Hello World! I have been seen 1442 times.
Hello World! I have been seen 1443 times.
Hello World! I have been seen 1444 times.
Hello World! Cannot tell you how many times I have been seen.
	(Error 111 connecting to redis:6379. Connection refused.)
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
curl: (28) Operation timed out after 2007 milliseconds with 0 out of -1 bytes received
Hello World! I have been seen 1445 times.
Hello World! I have been seen 1446 times.
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
Hello World! I have been seen 1447 times.
Hello World! I have been seen 1448 times.
..

Notice that the distribution of your units has changed after the reboot.

fleetctl list-units
...
UNIT			MACHINE				ACTIVE		SUB
app-hellodb@1.service	3376bf5c.../172.17.8.103	active		running
app-hellodb@2.service	ff0e7fd5.../172.17.8.102	active		running
app-hellodb@3.service	3376bf5c.../172.17.8.103	active		running
app-redis.service	ff0e7fd5.../172.17.8.102	active		running
mnt-data.mount		309daa5a.../172.17.8.101	inactive	dead
mnt-data.mount		3376bf5c.../172.17.8.103	inactive	dead
mnt-data.mount		ff0e7fd5.../172.17.8.102	active		mounted
Conclusion

We now have the basis for a truly immutable infrastructure setup: the entire CoreOS cluster including the application can be destroyed and a completely identical environment can be resurrected within a few minutes!

  • Once you have a reliable external persistent store, CoreOS can help you migrate persistent services just as easily as stateless services. We chose an NFS server for ease of use in this setup, but nothing prevents you from mounting other kinds of storage systems for your application.
  • Consul excels in providing fast and dynamic service discovery for services, allowing the Redis service to migrate to a different machine and the application instances to find the new address of the Redis service through a simple DNS lookup (illustrated below)!
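As a quick illustration of such a lookup (not part of the original deployment scripts; the node address and the service name 'redis' are assumptions), you could query Consul's DNS interface, which listens on port 8600 by default:

# Ask Consul on one of the cluster nodes where the redis service lives.
dig @172.17.8.101 -p 8600 redis.service.consul +short

After a fail-over, resolving the same name again returns the address of the new machine, so clients can reconnect without any configuration change.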

 

The Story of a Digital Artist

I’m always on the hunt for people that do what makes them come alive.

Artists in particular are interesting to me, especially when they are able to do what they love.

I’ve known too many artists that lived painful lives, trying to be an artist, but never making ends meet.

I’ve also known too many artists that lived another life outside of art, but never really lived, because they never answered their calling.

I believe that in today’s world, there are a lot more options for you to live life on your terms.

With technology at our fingertips, it’s easier to connect with people around the world and share your art, whatever that may be.

On Sources of Insight, I’ve asked artist Rebecca Tsien to share her story:

Why I Draw People and Animals

It’s more than a story of a digital artist.   It’s a journey of fulfillment.

Rebecca has found a way to do what she loves.  She lives and breathes her passion.

Maybe her story can inspire you.

Maybe there’s a way you can do more art.

Categories: Architecture, Programming

Scaling Agile? Keep it simple, scaler!

Xebia Blog - Wed, 04/22/2015 - 08:59

The promise of Agile is short-cycled value delivery, with the ability to adapt. This is achieved by focusing on the people that create value and optimising the way they work.

Scrum is a framework with a limited set of roles and artefacts, and it offers a simple process that helps teams implement the Agile values and adhere to the Agile principles.

I have supported many organisations in adopting Agile as their mindset and culture. What puzzles me is that many larger organisations seem to think that Scrum is not enough in their context and they feel the need for something bigger and more complicated. As a result of this, more and more Agile transformations start with scaling Agile to fit their context and then try to make things less complex.

While the various scaling frameworks for Agile contain many useful and powerful tools to apply in situations that require them, applying a complete Agile scaling framework to an organisation from the get-go often prevents the culture and mindset change that is really needed.

With a little creativity, the existing organisational structure can easily be mapped onto the structure suggested by many scaling frameworks. Most frameworks explain the behaviour needed in an Agile environment, but these explanations are often ignored or misinterpreted. Due to the (lengthy) descriptions of roles and responsibilities, people tend to stop thinking for themselves about what would work best and focus instead on who plays which role and what is someone else’s responsibility. There is a tendency to focus on the ceremonies rather than on the value the team(s) should deliver in terms of product or service.

My take on adopting Agile would be to start simple. Use an Agile framework that prescribes very little, like Scrum or Kanban, in order to provoke learning and experiencing. From this learning and experiencing will come changes in the organisational structure that best support the Agile Values and Principles. People will find or create positions where their added value has the most impact on the value the organisation creates and, when needed, will dismantle positions and structures that prevent this value from being created.

Another effect of starting simple is that people will not feel limited by rules and regulations, and can therefore apply their creativity, experience and capabilities more easily. Oftentimes, more energy is created by fewer rules.

As others have said as well, some products or value are difficult to create with simple systems. As observed by Dave Snowden and captured in his Cynefin framework, too much simplicity can result in chaos when it is applied to complex systems. To create value in more complex systems, use the smallest set of tools provided by the scaling frameworks that prevents chaos, and leverage the benefits that simpler systems provide. Solutions to problems in complex systems are best found by experiencing the complexity and discovering what works best to cope with it. Trying to prevent problems from popping up at all might paralyse an organisation too much to deliver the most possible value.

So: Focus on delivering value in short cycles, adapt when needed and add the least amount of tools and/or process to optimise communication and value delivery.

The Myths of Business Model Innovation

Business model innovation has a couple of myths.

One myth is that business model innovation takes big thinking.  Another myth about business model innovation is that technology is the answer.

In the book, The Business Model Navigator, Oliver Gassman, Karolin Frankenberger, and Michaela Csik share a couple of myths that need busting so that more people can actually achieve business model innovation.

The "Think Big" Myth

Business model innovation does not need to be “big bang.”  It can be incremental.  Incremental changes can create more options and more opportunities for serendipity.

Via The Business Model Navigator:

“'Business model innovations are always radical and new to the world.'   Most people associate new business models with the giant leaps taken by Internet companies.  The fact is that business model innovation, in the same way as product innovation, can be incremental.  For instance, Netflix's business model innovation of mailing DVDs to customers was undoubtedly incremental and yet brought great success to the company.  The Internet opened up new avenues for Netflix that allowed the company to steadily evolve into an online streaming service provider.”

The Technology Myth

It’s not technology for technology’s sake.  It’s applying technology to revolutionize a business that creates the business model innovation.

Via The Business Model Navigator:

“'Every business model innovation is based on a fascinating new technology that inspires new products.'  The fact is that while new technologies can indeed drive new business models, they are often generic in nature.  Where creativity comes in is in applying them to revolutionize a business.  It is the business application and the specific use of the technology which makes the difference.  Technology for technology's sake is the number one flop factor in innovation projects.  The truly revolutionary act is that of uncovering the economic potential of a new technology.”

If you want to get started with business model innovation, don’t just go for the home run.


Categories: Architecture, Programming

Swift optional chaining and method argument evaluation

Xebia Blog - Tue, 04/21/2015 - 08:21

Everyone that has been programming in Swift knows that you can call a method on an optional object using a question mark (?). This is called optional chaining. But what if that method takes arguments whose values you need to get from the same optional? Can you safely force unwrap those values?

A common use case of this is a UIViewController that runs some code within a closure after some delay or after a network call. We want to keep a weak reference to self within that closure because we want to be sure that we don't create reference cycles in case the closure would be retained. Besides, we (usually) don't need to run that piece of code within the closure in case the view controller got dismissed before that closure got executed.

Here is a simplified example:

class ViewController: UIViewController {

    let finishedMessage = "Network call has finished"
    let messageLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()

        someNetworkCall { [weak self] in
            self?.finished(self?.finishedMessage)
        }
    }

    func finished(message: String) {
        messageLabel.text = message
    }
}

Here we call the someNetworkCall function that takes a () -> () closure as argument. Once the network call is finished, it will call that closure. Inside the closure, we would like to change the text of our label to a finished message. Unfortunately, the code above will not compile. That's because the finished method takes a non-optional String as its parameter, while self?.finishedMessage returns an optional.

I used to fix this problem by wrapping the code in an if let statement:

if let this = self {
    this.finished(this.finishedMessage)
}

This works quite well, especially when there are multiple lines of code that you want to skip if self has become nil (e.g. the view controller got dismissed and deallocated). But I always wondered whether it is safe to force unwrap the method arguments even when self is nil:

self?.finished(self!.finishedMessage)

The question here is: does Swift evaluate the method arguments even if it does not end up calling the method?

I went through the Swift Programming Guide looking for information on this, but couldn't find an answer. Luckily, it's not hard to find out empirically.

Let's add a method that prints a message and returns the finishedMessage, and then call the finished method on an object that we know for sure is nil.

override func viewDidLoad() {
    super.viewDidLoad()

    let vc: ViewController? = nil
    vc?.finished(printAndGetFinishedMessage())
}

func printAndGetFinishedMessage() -> String {
    println("Getting message")
    return finishedMessage
}

When we run this, we see that nothing gets printed to the console. So now we know that Swift will not evaluate the method arguments when the method is not invoked. Therefore we can change our original code to the following:

someNetworkCall { [weak self] in
    self?.finished(self!.finishedMessage)
}