
Software Development Blogs: Programming, Software Testing, Agile Project Management


SPaMCAST 340 - Tom Howlett - Scrum Master, Teams, Collaboration, Distributed Agile

Software Process and Measurement Cast - Sun, 05/03/2015 - 22:00

Software Process and Measurement Cast 340 features our interview with Tom Howlett.  Tom is a Scrum Master.  We talked about teams, collaboration and how to effectively be Agile in distributed teams.

Tom’s bio:

Tom's been building and working with teams that focus on continuous improvement for 15 years. In that time he's written about the difficulties he faced and how he overcame them in over 100 blog posts on "Diary of a Scrummaster", and a book called "A Programmer's Guide To People". He has a strong focus on breaking down the barriers that restrict collaboration (whether remote or co-located) and ensuring the people who do the work can effectively decide how it's done. He's becoming well known in the Agile community through his speaking and by running his local group, the "Cheltenham Geeks". His company LeanTomato provides help forming new teams and helping existing ones meet people’s needs more effectively.

Contact information
Blog: Diary of a ScrumMaster
Twitter: @diaryofscrum
Website: LeanTomato

Remember:

Jo Ann Sweeny (Explaining Change) is running her annual Worth Working Summit.  Please visit http://www.worthworkingsummit.com/

Call to action!

Reviews of the podcast help to attract new listeners. Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice? Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast! Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

CMMI Institute Global Congress
May 12-13 Seattle, WA, USA
My topic - Agile Risk Management
http://cmmiconferences.com/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our essay on Agile team decision making. Team-based decision making requires mechanisms and prerequisites for creating consensus among team members. The prerequisites are a decision to be made, trust, knowledge and the tools to make a decision. In many instances team members are assumed by management and other team members to have the required tools and techniques in their arsenal. Next week we will explore decision making and give you tools to make decisions.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

How to deploy an ElasticSearch cluster using CoreOS and Consul

Xebia Blog - Sun, 05/03/2015 - 13:39

The hot potato in the room of containerized solutions is persistent services. Stateless applications are easy and trivial, but deploying a persistent service like ElasticSearch is a totally different ball game. In this blog post we will show you how easy it is to create ElasticSearch clusters on this platform. The key to this ease is the ability to look up the external IP addresses and port numbers of all cluster members in Consul, combined with the reusable power of CoreOS unit file templates. The presented solution is a ready-to-use ElasticSearch component for your application.

This solution:

  • uses ephemeral ports so that we can actually run multiple ElasticSearch nodes on the same host
  • mounts persistent storage under each node to prevent data loss on server crashes
  • uses the power of the CoreOS unit template files to deploy new ElasticSearch clusters.


In the previous blog posts we presented A High Available Docker Container Platform using CoreOS and Consul and showed how we can add persistent storage to a Docker container.

Once this platform is booted, the only thing you need to do to deploy an ElasticSearch cluster is to submit the fleet unit template file elasticsearch@.service and start three or more instances.
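For orientation, here is a condensed, illustrative sketch of what such a fleet unit template can look like. This is an assumption-laden sketch, not the real file: the actual elasticsearch@.service in the repository is authoritative, and its full ExecStart line is the Docker run command discussed later in this post.

```ini
[Unit]
Description=ElasticSearch node %i
Requires=docker.service
After=docker.service

[Service]
# Sketch only: %p expands to the part of the unit name before the @
# ("elasticsearch"), %i to the instance number (1, 2, 3, ...)
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill %p-%i
ExecStartPre=-/usr/bin/docker rm %p-%i
ExecStart=/bin/sh -c "/usr/bin/docker run --rm --name %p-%i \
    --env SERVICE_NAME=%p \
    --env SERVICE_9300_ID=%p-%i \
    --env SERVICE_9300_TAGS=es-transport \
    -P cargonauts/consul-elasticsearch"
ExecStop=/usr/bin/docker stop %p-%i
```

Note the absence of an [X-Fleet] Conflicts entry: because the container publishes ephemeral ports (-P), fleet is free to schedule several instances on the same host.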

Booting the platform

To see the ElasticSearch cluster in action, first boot up our CoreOS platform.

git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service
cd coreos-container-platform-as-a-service/vagrant
vagrant up
./is_platform_ready.sh
Starting an ElasticSearch cluster

Once the platform is started, submit the elasticsearch unit file and start three instances:

export FLEETCTL_TUNNEL=127.0.0.1:2222
cd ../fleet-units/elasticsearch
fleetctl submit elasticsearch@.service
fleetctl start elasticsearch@{1..3}

Now wait until all elasticsearch instances are running by checking the unit status.

fleetctl list-units
...
UNIT            MACHINE             ACTIVE  SUB
elasticsearch@1.service f3337760.../172.17.8.102    active  running
elasticsearch@2.service ed181b87.../172.17.8.103    active  running
elasticsearch@3.service 9e37b320.../172.17.8.101    active  running
mnt-data.mount      9e37b320.../172.17.8.101    active  mounted
mnt-data.mount      ed181b87.../172.17.8.103    active  mounted
mnt-data.mount      f3337760.../172.17.8.102    active  mounted
Create an ElasticSearch index

Now that the ElasticSearch cluster is running, you can create an index to store data.

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/ -d \
     '{ "settings" : { "index" : { "number_of_shards" : 3, "number_of_replicas" : 2 } } }'
Insert a few documents
curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/1 -d@- <<!
{
    "first_name" : "John",
    "last_name" :  "Smith",
    "age" :        25,
    "about" :      "I love to go rock climbing",
    "interests": [ "sports", "music" ]
}
!

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/2 -d@- <<!
{
    "first_name" :  "Jane",
    "last_name" :   "Smith",
    "age" :         32,
    "about" :       "I like to collect rock albums",
    "interests":  [ "music" ]
}
!

curl -XPUT http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/3 -d@- <<!
{
    "first_name" :  "Douglas",
    "last_name" :   "Fir",
    "age" :         35,
    "about":        "I like to build cabinets",
    "interests":  [ "forestry" ]
}
!
And query the index
curl -XGET http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/_search?q=last_name:Smith
...
{
  "took": 50,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
  ...
}

Restarting the cluster

Even when you restart the entire cluster, your data is persisted.

fleetctl stop elasticsearch@{1..3}
fleetctl list-units

fleetctl start elasticsearch@{1..3}
fleetctl list-units

curl -XGET http://elasticsearch.127.0.0.1.xip.io:8080/megacorp/employee/_search?q=last_name:Smith
...
{
  "took": 50,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "failed": 0
  },
  "hits": {
    "total": 2,
  ...
}

Open the console

Finally, you can see the servers and the distribution of the index across the cluster by opening the console at http://elasticsearch.127.0.0.1.xip.io:8080/_plugin/head/.

(Screenshot: the ElasticSearch head console)

Deploy other ElasticSearch clusters

The only thing you need to do to deploy another ElasticSearch cluster is to copy the template file under a new name:

cp elasticsearch\@.service my-cluster\@.service
fleetctl submit my-cluster\@.service
fleetctl start my-cluster\@{1..3}
curl my-cluster.127.0.0.1.xip.io:8080
How does it work?

Starting a node in an ElasticSearch cluster is quite trivial, as shown by the command line below:

exec gosu elasticsearch elasticsearch \
    --discovery.zen.ping.multicast.enabled=false \
    --discovery.zen.ping.unicast.hosts=$HOST_LIST \
    --transport.publish_host=$PUBLISH_HOST \
    --transport.publish_port=$PUBLISH_PORT \
     $@

We use the unicast protocol, specifying our own publish host and port and the list of IP addresses and port numbers of all the other nodes in the cluster.

Finding the other nodes in the cluster

But how do we find the other nodes in the cluster? Quite easily: we query the Consul REST API for all entries with the same service name that are tagged "es-transport". This is the service exposed by ElasticSearch on port 9300.

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport

...
[
    {
        "Node": "core-03",
        "Address": "172.17.8.103",
        "ServiceID": "elasticsearch-1",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49170
    },
    {
        "Node": "core-01",
        "Address": "172.17.8.101",
        "ServiceID": "elasticsearch-2",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49169
    },
    {
        "Node": "core-02",
        "Address": "172.17.8.102",
        "ServiceID": "elasticsearch-3",
        "ServiceName": "elasticsearch",
        "ServiceTags": [
            "es-transport"
        ],
        "ServiceAddress": "",
        "ServicePort": 49169
    }
]

Turning this into a comma-separated list of network endpoints is done using the following jq command:

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport |\
     jq -r '[ .[] | [ .Address, .ServicePort | tostring ] | join(":")  ] | join(",")'
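To see what this pipeline produces, you can feed it an abbreviated version of the Consul response shown above (the addresses and ports below are the sample values from this post, trimmed to the two fields the filter actually uses):

```shell
# Two of the service entries from the Consul response, abbreviated to the
# fields the jq filter reads
json='[{"Address":"172.17.8.103","ServicePort":49170},{"Address":"172.17.8.101","ServicePort":49169}]'

# Same filter as above: build "Address:ServicePort" per entry, then join with commas
endpoints=$(echo "$json" | jq -r '[ .[] | [ .Address, .ServicePort | tostring ] | join(":") ] | join(",")')
echo "$endpoints"
```

This prints 172.17.8.103:49170,172.17.8.101:49169, which is exactly the comma-separated endpoint list that discovery.zen.ping.unicast.hosts expects.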
Finding your own network endpoint

As you can see in the above JSON output, each service entry has a unique ServiceID. To obtain our own endpoint, we use the following jq command:

curl -s http://consul:8500/v1/catalog/service/$SERVICE_NAME?tag=es-transport |\
     jq -r ".[] | select(.ServiceID==\"$SERVICE_9300_ID\") | .Address, .ServicePort" 
Finding the number of nodes in the cluster

The intended number of nodes in the cluster is determined by counting the number of fleet unit instance files submitted to CoreOS on startup and passing this number in as an environment variable.

TOTAL_NR_OF_SERVERS=$(fleetctl list-unit-files | grep '%p@[^\.][^\.]*.service' | wc -l)

The %p in the pattern refers to the part of the fleet unit file name before the @ sign.
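A quick way to convince yourself of what this pattern matches is to run the grep against a hypothetical fleetctl list-unit-files listing (invented for illustration, reduced to the UNIT column, with %p already expanded to elasticsearch):

```shell
# Hypothetical `fleetctl list-unit-files` output, reduced to the UNIT column
units='elasticsearch@.service
elasticsearch@1.service
elasticsearch@2.service
elasticsearch@3.service
my-cluster@1.service'

# The template file elasticsearch@.service has nothing between the @ and the
# dot, so [^\.][^\.]* excludes it; units of other templates do not match the
# elasticsearch@ prefix at all. grep -c counts the matching lines.
TOTAL_NR_OF_SERVERS=$(echo "$units" | grep -c 'elasticsearch@[^\.][^\.]*\.service')
echo "$TOTAL_NR_OF_SERVERS"
```

With three instantiated units in the listing, this prints 3: the template itself and the unrelated my-cluster unit are not counted.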

The Docker run command

The Docker run command is shown below. ElasticSearch exposes two ports: port 9200 exposes a REST API to clients and port 9300 is used as the transport protocol between nodes in the cluster. Each port is registered as a service in Consul and tagged appropriately.

ExecStart=/bin/sh -c "/usr/bin/docker run --rm \
    --name %p-%i \
    --env SERVICE_NAME=%p \
    --env SERVICE_9200_TAGS=http \
    --env SERVICE_9300_ID=%p-%i \
    --env SERVICE_9300_TAGS=es-transport \
    --env TOTAL_NR_OF_SERVERS=$(fleetctl list-unit-files | grep '%p@[^\.][^\.]*.service' | wc -l) \
    -P \
    --dns $(ifconfig docker0 | grep 'inet ' | awk '{print $2}') \
    --dns-search=service.consul \
    cargonauts/consul-elasticsearch"
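The --dns $(...) substitution deserves a closer look: it greps the inet line out of the ifconfig output for docker0 and prints the second field. Run against captured sample output (the exact ifconfig format varies between versions, so this sample is an assumption), the pipeline behaves like this:

```shell
# Sample `ifconfig docker0` output (format varies across ifconfig versions)
ifconfig_out='docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.42.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:ac:11:00:01  txqueuelen 0  (Ethernet)'

# Same pipeline as in the unit file: keep the "inet " line, print field 2
dns_ip=$(echo "$ifconfig_out" | grep 'inet ' | awk '{print $2}')
echo "$dns_ip"
```

This prints 172.17.42.1, the docker0 bridge address on which Consul answers DNS queries on port 53.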

The options are explained in the table below:

--env SERVICE_NAME=%p
    The name of this service as advertised in Consul, resulting in the FQDN %p.service.consul; it is also used as the cluster name. %p refers to the part of the fleet unit template file name before the @.

--env SERVICE_9200_TAGS=http
    The tag assigned to the service at port 9200. This is picked up by the http-router, so that any HTTP traffic to the host elasticsearch is directed to this port.

--env SERVICE_9300_ID=%p-%i
    The unique id of this service in Consul. The startup script uses it to find its own external port and IP address in Consul, and it is used as the node name of the ES server. %i refers to the part of the fleet unit file name between the @ and .service.

--env SERVICE_9300_TAGS=es-transport
    The tag assigned to the service at port 9300. The startup script uses it to find the other servers in the cluster.

--env TOTAL_NR_OF_SERVERS=$(...)
    The number of submitted unit files is counted and passed in as the environment variable TOTAL_NR_OF_SERVERS. The start script waits until this number of servers is actually registered in Consul before starting the ElasticSearch instance.

--dns $(...)
    Sets the DNS server to the docker0 interface, on which Consul is listening on port 53. (The docker0 interface IP address is chosen at random from a specific range.)

--dns-search=service.consul
    The default DNS search domain.

Sources

The sources for the ElasticSearch repository can be found on github.

start-elasticsearch-clustered.sh
    The complete startup script of ElasticSearch
elasticsearch
    CoreOS fleet unit files for the ElasticSearch cluster
consul-elasticsearch
    Sources for the Consul ElasticSearch repository

Conclusion

CoreOS fleet template unit files are a powerful way of deploying ready-to-use components for your platform. If you want to deploy cluster-aware applications, a service registry like Consul is essential.

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 11


This week I was a participant at the International Software and Measurement (ISMA) Conference, put on by the International Function Point User Group (IFPUG). During the conference, I struck up a conversation with Anteneh Berhane, who was sitting behind me during the general session. Our conversation quickly turned to books and, unbidden, Anteneh volunteered that The Goal was the type of book that had a major impact on his life. He said that The Goal provided an implementable and measurable framework to think about all types of work (personal and professional). Nearly everyone I have talked to has been impacted by the ideas in the book, such as the small batch sizes and the analytical look at the whole process that we see in today’s installment of the re-read.

Part 1       Part 2       Part 3      Part 4      Part 5      Part 6      Part 7      Part 8    Part 9   Part 10

Chapter 27

Alex presents the plant’s monthly reports to Bill Peach (Alex’s manager) and Alex’s peers. Even with the troubles with the non-bottleneck parts (Re-Read Part 10), the turnaround has been spectacular. Peach opens the meeting by telling everyone that it is because of Alex’s plant that the division was profitable in the last month. However, Peach has no confidence that the turnaround will continue. Peach tells Alex that unless the plant delivers an additional 15% increase in profit, Peach will close the plant. Alex commits to the increase with a lot of internal trepidation and no idea how he will deliver it.

On the way home Alex visits Julie, his estranged wife. Alex proposes identifying the goal of their marriage and then working backwards to identify what would help them achieve that goal. Alex is applying many of the ideas from the plant to his personal life. In my conversations between presentations during the ISMA Conference, Anteneh said that understanding the goal of any endeavor is the critical first step toward measuring whether you are attaining that goal (ISMA is a measurement conference). Measuring progress provides feedback to keep you on track.

Chapter 28

Jonah calls Alex. Jonah will be out of touch for several weeks and he wants to make sure things are going well at the plant. Alex fills him in on the progress and the 15% demand being levied on the plant. Jonah points out that since the plant is the only profitable component in the division, Peach probably will not follow through on the threat to close it. Side note: Most of us that have managed projects or any other group have been handed stretch goals. Most of these demands are presented in terms of both a carrot (incentives) and a stick (consequences). Peach’s words have that sort of ring to them; however, Alex has committed and asks Jonah if there are any next steps.

Alex meets his management team the next day and relates the first of the next steps. Jonah has suggested that the plant cut batch size in half for all non-bottleneck steps. Alex’s management team lists the steps and the time each step needs for a batch.

Batch Time =
set-up time +
processing time +
wait time before processing +
wait time before being assembled into the next step.

The two categories that include wait time are generally the longest in duration, and cutting batch size directly cuts the overall batch time. The only wild card in the equation is the set-up time, which must occur before each batch. Smaller batches generally decrease wait time more than they increase set-up time, and they increase the ability to change direction if business needs change.
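A toy calculation (all numbers invented for illustration) makes the trade-off concrete: halving the batch doubles the number of set-ups, yet the first half-batch reaches the next step far sooner, which is where the wait-time savings come from:

```shell
# Invented numbers: 30 min set-up, 3 min processing per part, 100 parts
setup=30; per_part=3; parts=100

# One big batch: the next step waits until all 100 parts are done
one_batch=$(( setup + per_part * parts ))

# Two half batches: the next step can start after the first 50 parts
first_half=$(( setup + per_part * parts / 2 ))
total_halved=$(( 2 * (setup + per_part * parts / 2) ))

echo "$one_batch $first_half $total_halved"
```

Total touch time rises slightly (360 vs. 330 minutes) because set-up happens twice, but the downstream step starts 150 minutes earlier (180 vs. 330), which is why smaller batches usually win unless set-up time dominates.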

The concept of shortening batch size has been directly adopted by the Agile community. Time boxes enforce small batch sizes. Teams practicing Scrum will recognize sprint planning as a set-up step required before processing, which includes design, coding and testing. The smaller the batch size the faster value is delivered and the faster feedback is generated.

Alex asks his team whether they feel that the process will allow them to deliver orders in four weeks or less. When they agree he asks for a public commitment. Scrum also uses the idea of public commitment to generate internal team support for an overall goal. If everyone publicly commits it is harder to throw in the towel and it creates an atmosphere where the entire team helps each other out when a problem arises (in Agile we call this swarming).

Jonah also suggested that Alex ask the company’s sales department to promote the company’s new ability to deliver quickly to their clients. While not stated, the politics of this idea is wonderful: if Alex and his team can pull the delivery change off, they will be virtually immune to Bill Peach’s irrational demands. However, when Alex pitches the plant’s new ability to Jons, the sales/marketing manager, he experiences pushback. Jons does not believe the turnaround because less than a year before, the best the plant could promise was four months (and they were generally late), and now Alex is promising a four-week turnaround on orders. Alex and Jons end up striking a compromise: sales will promote a six-week turnaround on orders. If the plant can deliver in less, Jons will buy Alex a pair of shoes, and if the plant misses the six-week window, Alex will have to buy Jons a pair of shoes.

Summary of The Goal so far:
(Next week I am going to create a separate summary page)

Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions; however, performance is falling even further behind, and fear has become a central feature of the corporate culture.

Chapters 4 through 6 shift the focus from steps in the process to the process as a whole, and move us down the path of identifying the ultimate goal of the organization (in this book, making money) and embracing the big picture of systems thinking. In this section, the authors point out that we are often caught up with pursuing interim goals, such as quality, efficiency or even employment, to the exclusion of the ultimate goal. We are reminded by the burning platform identified in the first few pages of the book, the impending closure of the plant and perhaps the division, that in the long run an organization must make progress towards its ultimate goal, or it won't exist.

Chapters 7 through 9 show Alex's commitment to change: he seeks more precise advice from Jonah, brings his closest reports into the discussion and begins a dialog with his wife (remember, this is a novel). In this section of the book the concept that "you get what you measure" is addressed. We see measures of efficiency being used at the level of part production, but not at the level of whole orders or even sales. We discover the corollary to the adage "you get what you measure": if you measure the wrong thing, you get the wrong thing. We begin to see Alex's urgency and commitment to make a change.

Chapters 10 through 12 mark a turning point in the book. Alex has embraced a more systems view of the plant and recognizes that the measures used to date have focused on optimizing parts of the process to the detriment of the overall goal of the plant. What has not fallen into place is how to take that new knowledge and change how the plant works. The introduction of the concepts of dependent events and statistical variation begins to shift the conceptual understanding from what to measure towards how the management team can actually use that information.

Chapters 13 through 16 drive home the point that dependent events and statistical variation impact the performance of the overall system. In order for the overall process to be more effective you have to understand the capability and capacity of each step and then take a systems view. These chapters establish the concepts of bottlenecks and constraints without directly naming them, and show that focusing on local optimums causes more trouble than benefit.

Chapters 17 through 18 introduce the concept of bottlenecked resources. The effect of the combination of dependent events and statistical variability flowing through bottlenecked resources makes delivery unpredictable and substantially more costly. The variability in flow through the process exposes bottlenecks that limit our ability to catch up, making projects and products late, or worse, generating technical debt when corners are cut in order to make the date or budget.

Chapters 19 through 20 begin with Jonah coaching Alex’s team to help them identify a palette of possible solutions. They discover that every time the capacity of a bottleneck is increased, more product can be shipped. Increasing the capacity of a bottleneck includes reducing down time and the amount of waste the process generates. The impact of a bottleneck is not the cost of an individual part, but the cost of the whole product that cannot be shipped. Rather than waiting until they can deliver all of the changes, Alex and his team implement them incrementally.

Chapters 21 through 22 are a short primer on change management. Just telling people to do something different does not generate support. Significant change requires transparency, communication and involvement. One of Deming’s 14 Principles is constancy of purpose. Alex and his team engage the workforce through a wide range of communication tools while staying focused on implementing the changes needed to stay in business.

Chapters 23 through 24 introduce the idea of involving the people doing the work in defining the solutions to work problems and finding opportunities. In Agile we use retrospectives to involve and capture the team’s ideas on process and personnel improvements. We also find that fixing one problem without an overall understanding of the whole system can cause problems to pop up elsewhere.

Chapters 25 and 26 introduce several concepts. The first concept is that if non-bottleneck steps are run at full capacity, they create inventory and waste. At full capacity their output outstrips the overall process’ ability to create a final product. Secondly, keeping people and resources 100% busy does not always move you closer to the goal of delivering value to the end customer. Simply put: don’t do work that does not move you closer to the goal of the organization. The combination of these two concepts suggests that products (parts or computer programs) should only be worked on and completed as they are needed by the next step in the process (Kanban). A side effect of these revelations is that sometimes people and processes will not be 100% utilized.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version


Categories: Process Management

Next-gen Web Apps with Isomorphic JavaScript

Xebia Blog - Fri, 05/01/2015 - 20:54

The web application landscape has recently seen a big shift in application architecture. Nowadays we build so-called Single Page Applications. These are web applications which render and run in the browser, powered by JavaScript. They are called "Single Page" because in such an application the browser never actually switches between pages. All interaction takes place within a single HTML document. This is great because users will not see a "flash of white" whenever they perform an action, so all interaction feels much more fluid and natural. The application seems to respond much quicker, which has a positive effect on user experience and conversion of the site. Unfortunately, Single Page Applications also have several big drawbacks, mostly concerning the initial loading time and poor rankings in search engines.

Continue reading on Medium »

The Top Five Issues In Project Estimation

Sometimes estimation leaves you in a fog!

I recently asked a group of people the question, "What are the two largest issues in project estimation?" I received a wide range of answers, probably a reflection of the range of individuals answering. Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry, Requirements: The Chronic Problem with Project Estimation. Bottom line: change is required to embrace dynamic development methods, and that change will require changes in how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between development and estimation processes. One of the respondents noted, "most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear."
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future. Collection of consistent historical data is critical to learning and not repeating the same mistakes over and over. According to Joe Schofield, "few groups retain enough relevant data from their experiences to avoid relearning the same lesson."
  4. Labor Hours Are Not The Same As Size. Many estimators estimate either the effort needed to perform the project or individual tasks. By jumping immediately to effort, estimators miss all of the nuances that affect the level of effort required to deliver value. According to Ian Brown, "then the discussion basically boils down to opinions of the number of hours, rather than assessing other attributes that drive the number of hours that something will take."
  5. No One Dedicated to Estimation. Estimating is a skill built on a wide range of techniques that need to be learned and practiced. When no one is dedicated to developing and maintaining estimates, it is rare that anyone can learn to estimate consistently, which affects reliability. To quote one of the respondents, "consistency of estimation from team to team, and within a team over time, is non-existent."

Each of the top five issues is solvable without throwing out the concept of estimation, which is critical for planning at the organization, portfolio and product levels. Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals for the estimation process.


Categories: Process Management

Software Development Conferences Forecast April 2015

From the Editor of Methods & Tools - Thu, 04/30/2015 - 13:57
Here is a list of software development related conferences and events on Agile ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software […]

Budgeting, Estimation, Planning, #NoEstimates and the Agile Planning Onion


There are many levels of estimation, including budgeting, high-level estimation and task planning (detailed estimation). We can link this more classic view of estimation to the Agile planning onion popularized by Mike Cohn. In the Agile planning onion, strategic planning is on the outside of the onion and the planning that occurs in the daily sprint meetings is at the core. Each layer closer to the core relates more to the day-to-day activity of a team. The #NoEstimates movement eschews developing story- or task-level estimates, and sometimes estimates at the higher levels as well. As you get closer to the core of the planning onion, the case for #NoEstimates becomes more compelling.

Planning Onion

Budgeting is a strategic form of estimation that most corporate and governmental entities perform. Budgeting relates to the strategy and portfolio layers of the planning onion. #NoEstimates techniques don't answer the central questions most organizations need to answer at this level, which include:

  1. How much money should I allocate for software development, enhancements and maintenance?
  2. Which projects or products should we fund?
  3. Which projects will return the greatest amount of value?

Budgets are often educated guesses that provide some approximation of the size and cost of the work on the overall backlog. Budgeting provides the basis to allocate resources in environments where demand outstrips capacity. Other than in the most extreme form of #NoEstimates, which eschews all estimates, budgeting is almost always performed.

High-level estimation, performed in the product and release layers of the planning onion, is generally used to forecast when functionality will be available. Release plans and product road maps are types of forecasts used to convey when products and functions will be available. These types of estimates can easily be built if teams have a track record of delivering value on a regular basis. #NoEstimates can be applied at this level of planning and estimation by substituting the predictable completion of work items for effort estimates. #NoEstimates at this level of planning can be used only if conditions that facilitate a predictable delivery flow are met. Conditions include:

  1. Stable teams
  2. Adoption of an Agile mindset (at both the team and organizational levels)
  3. A backlog of well-groomed stories

Organizations that meet these criteria can answer the classic project/release questions of when, what and how much based on the predictable delivery rates of #NoEstimates teams (assuming some level of maturity; newly formed teams are never predictable). High-level estimation is closer to the day-to-day operations of the team and connects budgeting to the lowest levels of planning in the planning onion.
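As a sketch (with invented numbers), the forecasting arithmetic for such a team is nothing more than backlog size divided by observed throughput:

```shell
# Invented figures: 60 well-groomed stories in the release backlog,
# a stable team that predictably completes 5 stories per two-week sprint
backlog=60
throughput=5

# Round up: a partially filled final sprint is still a sprint
sprints=$(( (backlog + throughput - 1) / throughput ))
weeks=$(( sprints * 2 ))
echo "about $sprints sprints (~$weeks weeks)"
```

No task-level effort estimate is involved; the forecast rests entirely on the team's demonstrated rate of completing work items.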

In the standard corporate environment, task-level estimation (typically performed at the iteration and daily planning layers of the onion) is an artifact of project management controls or partial adoptions of Agile concepts. Estimating tasks is often mandated in organizations that allocate individual teams to multiple projects at the same time. The effort estimates are used to enable the organization to allocate slices of time to projects. Stable Agile teams that are allowed to focus on one project at a time and use #NoEstimates techniques have no reason to estimate effort at a task level due to their ability to consistently say what they will do and then deliver on their commitments. Ceasing task-level estimation and planning is the core change all proponents of #NoEstimates are suggesting.

A special estimation case that needs to be considered is that of commercial or contractual work. These arrangements often represent lower-trust relationships or projects that are perceived to be high risk. The legal contracts agreed upon by both parties often stipulate the answers to the what, when and how much questions before the project starts. Due to the risk the contract creates, both parties must do their best to predict/estimate the future before signing the agreement. Raja Bavani, Senior Director at Cognizant Technology Solutions, suggested in a recent conversation that he thought "#NoEstimates was a non-starter in a contractual environment due to the financial risk both parties accept when signing a contract."

Estimation is a form of planning, and planning is considered an important competency in most business environments. Planning activities abound, from planning the corporate picnic to planning the acquisition and implementation of a new customer relationship management system. Most planning activities center on answering a few very basic questions. When will "it" be done? How much will "it" cost? What is "it" that I will actually get? As an organization or team progresses through the planning onion, the need for effort and cost estimation lessens in most cases. #NoEstimates does not remove the need for all types of estimates. Most organizations will always need to estimate in order to budget. Organizations that have stable teams, have adopted the Agile mindset and have a well-groomed backlog will be able to use predictable flow to forecast rather than estimating effort and cost. At a sprint or day-to-day level, Agile teams that predictably deliver value can embrace the idea of #NoEstimates while answering the basic questions of what, when and how much based on performance.


Categories: Process Management

Should Team Members Sign Up for Tasks During Sprint Planning?

Mike Cohn's Blog - Tue, 04/28/2015 - 15:00

During sprint planning, a team selects a set of product backlog items they will work on during the coming sprint. As part of doing this, most teams will also identify a list of the tasks to be performed to complete those product backlog items.

Many teams will also provide rough estimates of the effort involved in each task. Collectively, these artifacts are the sprint backlog and could be presented along the following lines:
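As an illustration, a sprint backlog of this shape could be represented as follows; the stories, tasks and hour estimates are invented for the sketch, and note that no names are attached to any task:

```python
# Hypothetical sprint backlog: product backlog items broken into tasks,
# each task carrying a rough hour estimate (no owner assigned yet).
sprint_backlog = {
    "As a user, I can reset my password": [
        ("Design reset-email template", 3),
        ("Implement reset endpoint", 5),
        ("Write automated tests", 4),
    ],
    "As a user, I can export my data": [
        ("Add export button", 2),
        ("Generate CSV on the server", 6),
    ],
}

# Total estimated hours across all tasks in the sprint.
total_hours = sum(h for tasks in sprint_backlog.values() for _, h in tasks)
print(total_hours)  # → 20
```

Leaving the owner field off each task is what keeps the door open for the real-time sign-up strategy discussed below.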

One issue for teams to address is whether individuals should sign up for tasks during sprint planning.

If a team walks out of sprint planning with a name next to every task, individual accountability will definitely be increased. I will feel more responsibility to finish the tasks with my name or initials next to them. And you will feel the same for those with yours. But, this will come at the expense of team accountability.

My recommendation is that a team should leave sprint planning without having put names on tasks. Following a real-time sign-up strategy will allow more flexibility during the sprint.

Quote of the Month April 2015

From the Editor of Methods & Tools - Mon, 04/27/2015 - 13:30
Code that doesn’t have tests rots. It rots because we don’t feel confident to touch it, we’re afraid to break the “working” parts. Code rotting means that it doesn’t improve, staying the way we first wrote it. I’ll be the first to admit that whenever I write code, it comes in its most ugly form. […]

SPaMCAST 339 ‚Äď Demonstrations, Microservices

Software Process and Measurement Cast - Sun, 04/26/2015 - 22:00

Software Process and Measurement Cast 339 features our essay on demonstrations and a new Form Follows Function column from Gene Hughson.

Demonstrations are a tool to generate conversations about what is being delivered.  Because a demonstration occurs at the end of every sprint, the team will continually be demonstrating the value they are delivering, which reinforces confidence and motivation. The act of demonstrating value provides the team with a platform for collecting feedback that will help them stay on track and focused on delivering what has the most value to the business.

Gene continues his theme of microservices. This week we tackle "Microservices, SOA, and EITA: Where To Draw the Line? Why to Draw the Line?" Gene says, "we recognize lines to prevent needless conflict and waste."

Two special notes:

Jo Ann Sweeny of the Explaining Change column is running her annual Worth Working Summit.  Please visit http://www.worthworkingsummit.com/

Jeremy Berriault will be joining the SPaMCAST family.  Jeremy will be focusing on testing and the lessons testing can provide to a team and organization.

Call to action!

Reviews of the Podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on ITunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox's The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

CMMI Institute Global Congress
May 12-13 Seattle, WA, USA
My topic - Agile Risk Management
http://cmmiconferences.com/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our interview with Tom Howlett.  Tom is the author of the Diary of a Scrummaster and is a Scrum Master's Scrum Master. Tom and I talked about Agile and being Agile outside of the classic software development environments.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Re-Read Saturday: The Goal: A Process of Ongoing Improvement. Part 10


As I began to summarize my Re-Read notes from Chapters 25 and 26, I was struck by a conversation I participated in at the QAI Quest Conference in Atlanta this week (I spoke on scaling Agile testing using the TMMi, but that is a topic for another column). A gentleman at my lunch table expressed his frustration because his testing team could not keep up with all of the work generated by the development team his group served. They read that for every ten developers there should be two testers, and his company had been struggling with balancing the flow of work between testing and development for a long time. It was not working. I asked what happened when they were being asked to do more work than they had the capacity to deliver. The answer was that it sometimes depended on who was yelling at them. The tactics they used included expediting work, letting some work sit around until it became critical or just cutting the amount of planned testing. He capped the conversation by invoking the old adage: "the squeaky wheel gets the grease." My lunch companion had reverted to expediting work through a bottleneck much in the same way suggested by Alex (and rejected by Johan) in today's Re-Read Saturday installment.

Part 1       Part 2       Part 3      Part 4      Part 5      Part 6      Part 7      Part 8    Part 9

Chapter 25. In Chapter 24 Alex and his team suddenly found that 30 parts of their product have become constraints. As Chapter 25 begins, Johan arrives. After being briefed on the problem and the data, Johan suggests that Alex and his leadership team go back out into the plant to SEE what is really happening. The milling machine has become a problem area. Red tag items, priority items that need to be ready for the NCX-10 and heat treat bottlenecks, are being built to the exclusion of green tag (non-priority) parts. Two things have occurred: excess red tag inventory has built up, and now overall products can't be assembled because green tag parts are missing. Johan points out that the red card/green card process and running all steps at 100% capacity have created the problem. Remember, by definition non-bottleneck steps or processes have more capacity than is needed, and when a non-bottleneck process is run consistently at 100% capacity it produces more output than the overall process needs. Bottlenecks, as we have noted earlier, define the capacity of the overall process. When the output of any step outstrips what the bottleneck can consume, excess inventory is generated.  In this case, since everyone has been told to build red card items first, they have less time to create non-bottleneck parts; therefore inventory builds up for parts that are not currently needed, to the exclusion of the parts that are.  A mechanism is needed to signal when parts should start to flow through the process so they will arrive at assembly when they are needed.

Alex and his team discover two rules:

  1. The level of capacity utilization of a non-bottleneck step is not determined by its own potential capacity, but by some other constraint in the system. Said differently, non-bottleneck steps should only be used to the capacity required to support the bottleneck steps and to the level customers want the output of the process.
  2. Activation of a resource (just turning a resource on or off) and utilization of a resource (making use of a resource in a way that moves the system closer to the goal) are not synonymous. Restated, work that does not support attaining the goals of the system generates excess inventory and waste.
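The first rule can be shown with a little arithmetic. The capacities below are hypothetical, chosen only to illustrate the mechanism:

```python
# Sketch: why running a non-bottleneck step at 100% builds inventory.
# Assume the upstream (non-bottleneck) step can produce 10 parts/day
# while the bottleneck can consume only 6 parts/day.

def inventory_after(days, upstream_rate, bottleneck_rate):
    """Work-in-process piling up in front of the bottleneck."""
    produced = days * upstream_rate
    consumed = days * bottleneck_rate
    return produced - consumed

# Run the upstream step flat out for a 5-day week:
full_speed = inventory_after(5, upstream_rate=10, bottleneck_rate=6)  # 20 parts of excess WIP
# Subordinate it to the bottleneck's pace instead: no excess inventory.
subordinated = inventory_after(5, upstream_rate=6, bottleneck_rate=6)  # 0
```

Activation (the upstream step being busy) produced 20 extra parts; utilization toward the goal required running it at only 6 parts/day.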

Returning briefly to my lunch conversation at QAI Quest, my new chum noted that recently one small project had to wait so long to be tested that the business changed their mind, meaning that six months of coding had to be backed out. This is one reason that excess work-in-progress increases waste, even in software development.

One of the great lines in Chapter 26 is, "a system of local optimums is not optimum at all." Everyone working at full capacity rarely generates the maximum product flow through the overall process.

Side Note: At the beginning of the chapter Alex and his team had data that indicated a problem existed. Until they all went to the floor as a whole team and VISUALIZED the work, the problem was difficult to understand. Both Agile and Kanban use visualization as a tool to develop an understanding of how raw material turns into shippable product.

Chapter 26. Alex sits at his kitchen table thinking about the solution to the new dilemma of making sure the right parts flow through the system when they are needed, without building up as inventory. Parts should only be worked on when they can continuously move through the process without stopping. This means each step needs to be coordinated, and if the full capacity of a step is not needed it should not be run. This means people might not be busy at all times. Sharon and David (Alex's children) express interest in helping solve the problem. Using the metaphor of the Boy Scout hike David and Alex participated in earlier in the book, both children try to find a solution for pace and synchronization. Sharon suggests using a drummer to provide a coordinated pace. In Agile we would call the concept of a drummer cadence: a beat that Agile teams use to pace their delivery of product. David suggests tying a rope to each part flowing through the process. Parts would move at a precise rate; if any step speeds up, the rope provides resistance, which is a signal to slow down, and if slack occurs it is a signal to speed up in order to keep the pace. Parts arrive at the end of the system when they are needed, in a coordinated fashion. In Kanban we would recognize this concept as work being pulled through the process rather than being pushed.
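David's rope can be sketched as a bounded buffer in front of the bottleneck: upstream may only release a new part when the buffer has room. The class name and the WIP limit of 3 are arbitrary illustrative choices:

```python
from collections import deque

class RopedBuffer:
    """A bounded queue in front of the bottleneck: the 'rope'."""

    def __init__(self, wip_limit):
        self.queue = deque()
        self.wip_limit = wip_limit

    def can_release(self):
        # When the buffer is full, the rope goes taut: upstream waits.
        return len(self.queue) < self.wip_limit

    def release(self, part):
        """Upstream releases a part only if the rope has slack."""
        if not self.can_release():
            raise RuntimeError("rope is taut: wait for the bottleneck")
        self.queue.append(part)

    def bottleneck_pulls(self):
        """The bottleneck consumes a part, creating slack upstream."""
        return self.queue.popleft() if self.queue else None
```

Each pull by the bottleneck creates slack that signals upstream to release exactly one more part, which is the pull mechanism Kanban formalizes with WIP limits.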

When back at the plant, Ralph (the computer guy) announces he can calculate the release rate needed for the red tag parts to flow smoothly through the process.  This would mean only the parts that will be needed for orders being worked on would be created. No excess inventory would be created at any step, including the bottlenecks.  Johan points out that Ralph can also use the same data to calculate the release rates needed for the green tag items. Ralph thinks it will take him a long time to get both right. Alex tells them to begin even though it won't be perfect and that they can tune the process as they get data. Do not let analysis paralysis keep you from getting started.

The chapter ends with Donavan (OPS) and Alex recognizing that their corporate efficiency reporting (they report the efficiency of steps, not the whole system) isn't going to look great. Even though they will be completing and shipping more finished product, the corporate measures have not been synchronized to the new way the plant is working. The reduction in efficiency (cost per part - see installments one and two) is going to attract the attention of Alex's boss, Bill Peach.

Summary of The Goal so far:

Chapters 1 through 3 actively present the reader with a burning platform. The plant and division are failing. Alex Rogo has actively pursued increased efficiency and automation to generate cost reductions; however, performance is falling even further behind and fear has become a central feature in the corporate culture.

Chapters 4 through 6 shift the focus from steps in the process to the process as a whole, and move us down the path of identifying the ultimate goal of the organization (in this book). The goal is making money and embracing the big picture of systems thinking. In this section, the authors point out that we are often caught up with pursuing interim goals, such as quality, efficiency or even employment, to the exclusion of the ultimate goal. The burning platform identified in the first few pages of the book, the impending closure of the plant and perhaps the division, reminds us that in the long run an organization must make progress towards its ultimate goal, or it won't exist.

Chapters 7 through 9 show Alex's commitment to change as he seeks more precise advice from Johan, brings his closest reports into the discussion and begins a dialog with his wife (remember, this is a novel). In this section of the book the concept that "you get what you measure" is addressed. We see measures of efficiency being used at the level of part production, but not at the level of whole orders or even sales. We discover the corollary to the adage "you get what you measure": if you measure the wrong thing, you get the wrong thing. We begin to see Alex's urgency and commitment to make a change.

Chapters 10 through 12 mark a turning point in the book. Alex has embraced a more systems view of the plant and recognizes that the measures used to date have been focused on optimizing parts of the process to the detriment of the overall goal of the plant.  What has not fallen into place is how to take that new knowledge and change how the plant works. The introduction of the concepts of dependent events and statistical variation begins to shift the conceptual understanding from what to measure towards how the management team can actually use that information.

Chapters 13 through 16 drive home the point that dependent events and statistical variation impact the performance of the overall system. In order for the overall process to be more effective you have to understand the capability and capacity of each step and then take a systems view. These chapters establish the concepts of bottlenecks and constraints without directly naming them, and show that focusing on local optimums causes more trouble than benefit.

Chapters 17 and 18 introduce the concept of bottlenecked resources. The effect of combining dependent events and statistical variability with bottlenecked resources makes delivery unpredictable and substantially more costly. The variability in flow through the process exposes bottlenecks that limit our ability to catch up, making projects and products late or, worse, generating technical debt when corners are cut in order to make the date or budget.

Chapters 19 and 20 begin with Johan coaching Alex's team to help them identify a palette of possible solutions. They discover that every time the capacity of a bottleneck is increased, more product can be shipped.  Changing the capacity of a bottleneck includes reducing downtime and the amount of waste the process generates. The impact of a bottleneck is not the cost of an individual part, but the cost of the whole product that cannot be shipped. Alex and his team implement the changes incrementally rather than waiting until they can deliver all of them.

Chapters 21 and 22 are a short primer on change management. Just telling people to do something different does not generate support. Significant change requires transparency, communication and involvement. One of Deming's 14 points is constancy of purpose. Alex and his team engage the workforce through a wide range of communication tools while staying focused on implementing the changes needed to stay in business.

Chapters 23 through 24 introduce the idea of involving the people doing the work in defining the solutions to work problems and finding opportunities. In Agile we use retrospectives to involve and capture the team’s ideas on process and personnel improvements. We also find that fixing one problem without an overall understanding of the whole system can cause problems to pop up elsewhere.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast. Dead Tree Version or Kindle Version


Categories: Process Management

Demonstrations


Demonstrations aren't presentations; they require bi-directional conversation.

Demonstrations, also known as demos, are Agile's mechanism to share what the team has accomplished during the current sprint. At root, the demo is a show and tell that provides the team with a platform to describe what has been accomplished. Demos build confidence and customer satisfaction. They can be done using a simple, repeatable process that is straightforward and to the point. By making sure the process is interactive and the material concise, all of the participants will find the demo engaging and focused. Demonstrations deliver fast feedback, but only if they happen and only if they are engineered to facilitate a bi-directional exchange of information.

The basic mechanism of a demo is simple: show stakeholders the completed work, let them interact with it and actively solicit feedback. I once heard someone describe this as running toward criticism. Where variations do happen, they tend to be driven by scope, which implies different audiences, or by the geography of the team and stakeholders.  The variations in demonstration techniques are less about the goal of gathering feedback than about enabling a particular audience to provide that feedback.

Demonstrations occur on the last day of every sprint. They show the stakeholders what has been accomplished during the current sprint. The goal is for the team to gather feedback from the stakeholders so that they build what is needed and what the team committed to at the beginning of the iteration. The transparency created when the team shares its performance allows the team to rush toward feedback, while selling progress. Good demos include stakeholders interacting with the software so that everyone understands exactly what has been developed.

Demonstrations are the team's mechanism to gather feedback and to ensure they are delivering value. I believe that demos have two currencies. The first is working software and the second is feedback. Teams build cachet when they say what they are going to do, do what they say, and listen to how their stakeholders feel about what they deliver.


Categories: Process Management

How to deploy High Available persistent Docker services using CoreOS and Consul

Xebia Blog - Thu, 04/23/2015 - 15:45

Providing High Availability to stateless applications is pretty trivial as was shown in the previous blog posts A High Available Docker Container Platform and Rolling upgrade of Docker applications using CoreOS and Consul. But how does this work when you have a persistent service like Redis?

In this blog post we will show you how a persistent service like Redis can be moved around on machines in the cluster, whilst preserving the state. The key is to deploy a fleet mount configuration into the cluster and mount the storage in the Docker container that has persistent data.

To support persistency we have added a NAS to our platform architecture in the form of three independent NFS servers which act as our NAS storage, as shown in the picture below.

CoreOS platform architecture with fake NAS

The applications are still deployed in the CoreOS cluster as Docker containers. Even our Redis instance is running in a Docker container. Our application is configured using the following three Fleet unit files:

The unit file of the Redis server is the most interesting one because it is our persistence service. In the unit section of the file, it first declares that it requires a mount for '/mnt/data' on which it will persist its data.

[Unit]
Description=app-redis
Requires=mnt-data.mount
After=mnt-data.mount
RequiresMountsFor=/mnt/data

In the start clause of the redis service, a specific subdirectory of /mnt/data is mounted into the container.

...
ExecStart=/usr/bin/docker run --rm \
    --name app-redis \
    -v /mnt/data/app-redis-data:/data \
    -p 6379:6379 \
    redis
...

The mnt-data.mount unit file is quite simple: it defines an NFS mount with the option 'noauto', indicating that the device should not be automatically mounted at boot time. The unit file has the option 'Global=true' so that the mount is distributed to all the nodes in the cluster. The mount is only activated when another unit requests it.

[Mount]
What=172.17.8.200:/mnt/default/data
Where=/mnt/data
Type=nfs
Options=vers=3,sec=sys,noauto

[X-Fleet]
Global=true

Please note that the NFS mount specifies system security (sec=sys) and uses NFS version 3 protocol, to avoid all sorts of errors surrounding mismatches in user- and group ids between the client and the server.

Preparing the application

To see the failover in action, you need to start the platform and deploy the application:

git clone https://github.com/mvanholsteijn/coreos-container-platform-as-a-service.git
cd coreos-container-platform-as-a-service/vagrant
vagrant up
./is_platform_ready.sh

This will start 3 NFS servers and our 3 node CoreOS cluster. After that is done, you can deploy the application, by first submitting the mount unit file:

export FLEETCTL_TUNNEL=127.0.0.1:2222
cd ../fleet-units/app
fleetctl load mnt-data.mount

starting the redis service:

fleetctl start app-redis.service

and finally starting a number of instances of the application:

fleetctl submit app-hellodb@.service
fleetctl load app-hellodb@{1..3}.service
fleetctl start app-hellodb@{1..3}.service

You can check that everything is running by issuing the fleetctl list-units command. It should show something like this:

fleetctl list-units
UNIT			MACHINE				ACTIVE		SUB
app-hellodb@1.service	8f7472a6.../172.17.8.102	active		running
app-hellodb@2.service	b44a7261.../172.17.8.103	active		running
app-hellodb@3.service	2c19d884.../172.17.8.101	active		running
app-redis.service	2c19d884.../172.17.8.101	active		running
mnt-data.mount		2c19d884.../172.17.8.101	active		mounted
mnt-data.mount		8f7472a6.../172.17.8.102	inactive	dead
mnt-data.mount		b44a7261.../172.17.8.103	inactive	dead

As you can see, three app-hellodb instances are running and the redis service is running on 172.17.8.101, which is the only host that has /mnt/data mounted. The other two machines have this mount in the status 'dead', which is an unfriendly name for stopped.

Now you can access the app:

yes 'curl hellodb.127.0.0.1.xip.io:8080; echo ' | head -10 | bash
..
Hello World! I have been seen 20 times.
Hello World! I have been seen 21 times.
Hello World! I have been seen 22 times.
Hello World! I have been seen 23 times.
Hello World! I have been seen 24 times.
Hello World! I have been seen 25 times.
Hello World! I have been seen 26 times.
Hello World! I have been seen 27 times.
Hello World! I have been seen 28 times.
Hello World! I have been seen 29 times.
Redis Fail-over in Action

To see the fail-over in action, you start a monitor on a machine not running Redis. In our case the machine running app-hellodb@1.

vagrant ssh -c \
   "yes 'curl --max-time 2 hellodb.127.0.0.1.xip.io; sleep 1 ' | \
    bash" \
    app-hellodb@1.service

Now restart the redis machine:

vagrant ssh -c "sudo shutdown -r now" app-redis.service

After you restart the machine running Redis, the output should look something like this:

...
Hello World! I have been seen 1442 times.
Hello World! I have been seen 1443 times.
Hello World! I have been seen 1444 times.
Hello World! Cannot tell you how many times I have been seen.
	(Error 111 connecting to redis:6379. Connection refused.)
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
curl: (28) Operation timed out after 2007 milliseconds with 0 out of -1 bytes received
Hello World! I have been seen 1445 times.
Hello World! I have been seen 1446 times.
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
curl: (28) Operation timed out after 2004 milliseconds with 0 out of -1 bytes received
Hello World! I have been seen 1447 times.
Hello World! I have been seen 1448 times.
..

Notice that the distribution of your units has changed after the reboot.

fleetctl list-units
...
UNIT			MACHINE				ACTIVE		SUB
app-hellodb@1.service	3376bf5c.../172.17.8.103	active		running
app-hellodb@2.service	ff0e7fd5.../172.17.8.102	active		running
app-hellodb@3.service	3376bf5c.../172.17.8.103	active		running
app-redis.service	ff0e7fd5.../172.17.8.102	active		running
mnt-data.mount		309daa5a.../172.17.8.101	inactive	dead
mnt-data.mount		3376bf5c.../172.17.8.103	inactive	dead
mnt-data.mount		ff0e7fd5.../172.17.8.102	active		mounted
Conclusion

We now have the basis for a truly immutable infrastructure setup: the entire CoreOS cluster including the application can be destroyed and a completely identical environment can be resurrected within a few minutes!

  • Once you have a reliable external persistent store, CoreOS can help you migrate persistent services as easily as stateless services. We chose an NFS server for ease of use in this setup, but nothing prevents you from mounting other kinds of storage systems for your application.
  • Consul excels in providing fast and dynamic service discovery, allowing the Redis service to migrate to a different machine and the application instances to find the new address of the Redis service through a simple DNS lookup!


Software Development Linkopedia April 2015

From the Editor of Methods & Tools - Wed, 04/22/2015 - 15:20
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about honest ignorance, canonical data model, global teams, recruiting developers, self-organization, requirements management, SOLID programming, developer training, retrospectives and Test-Driven Development (TDD). Blog: The Power of Not Knowing Blog: Why You […]

Scaling Agile? Keep it simple, scaler!

Xebia Blog - Wed, 04/22/2015 - 08:59

The promise of Agile is short-cycled value delivery, with the ability to adapt. This is achieved by focusing on the people who create value and optimising the way they work.

Scrum provides a limited set of roles and artefacts and a simple process framework that helps teams implement the Agile values and adhere to the Agile principles.

I have supported many organisations in adopting Agile as their mindset and culture. What puzzles me is that many larger organisations seem to think that Scrum is not enough in their context and they feel the need for something bigger and more complicated. As a result of this, more and more Agile transformations start with scaling Agile to fit their context and then try to make things less complex.

While the various scaling frameworks for Agile contain many useful and powerful tools to apply in situations that require them, applying a complete Agile scaling framework to an organisation from the get-go often prevents the really needed culture and mindset change.

With a little creativity, the organisational structure that is already present can easily be mapped onto the structure suggested by many scaling frameworks. Most frameworks explain the behaviour needed in an Agile environment, but these explanations are often ignored or misinterpreted. Due to (lengthy) descriptions of roles and responsibilities, people tend to stop thinking for themselves about what would work best and start to focus on who plays which role and what is someone else's responsibility. There is a tendency to focus on the ceremonies rather than on the value that should be delivered by the team(s) with regard to the product or service.

My take on adopting Agile would be to start simple. Use an Agile framework that prescribes very little, like Scrum or Kanban, in order to provoke learning and experiencing. From this learning and experiencing will come changes in the organisational structure that best support the Agile Values and Principles. People will find or create positions where their added value has the most impact on the value that the organisation creates and, when needed, will dismantle positions and structures that prevent this value from being created.

Another effect of starting simple is that people will not feel limited by rules and regulations, and can therefore apply their creativity, experience and capabilities more easily. Oftentimes, fewer rules create more energy.

As said by others as well, some products or value are difficult to create with simple systems. As observed by Dave Snowden and captured in his Cynefin framework, too much simplicity can result in chaos when applied to complex systems. To create value in more complex systems, use the fewest tools from the scaling frameworks needed to prevent chaos, and leverage the benefits that simpler systems provide. Solutions to problems in complex systems are best found by experiencing the complexity and discovering what works best to cope with it. Trying to prevent problems from popping up might paralyse an organisation too much to put out the most possible value.

So: Focus on delivering value in short cycles, adapt when needed and add the least amount of tools and/or process to optimise communication and value delivery.

Demonstrations in Distributed Teams

The demonstration needs to work for everyone, no matter where in the world you are.


Demonstrations are an important tool for teams to gather feedback to shape the value they deliver.  Demonstrations provide a platform for the team to show the stories that have been completed so the stakeholders can interact with the solution.  The feedback a team receives not only ensures that the solution delivered meets stakeholder needs but also generates new insights and lets the team know they are on track.  Demonstrations should provide value to everyone involved. Given the breadth of participation in a demo, a distributed meeting is even more likely.  Techniques that support distributed demonstrations include:

  1. More written documentation: Teams, especially long-established teams, often develop shorthand expressions that convey meaning within the team but fall short before a broader audience. Written communication can be more effective at conveying meaning where body language can’t be read and eye contact can’t be made. Publish an agenda to guide the meeting; this will help everyone stay on track or get back on track when the phone line drops. Capture comments and ideas on paper where everyone can see them.  If using flip charts, use webcams to share the written notes.  Some collaboration tools provide a notepad feature that stays resident on the screen and can be used to capture notes that can be referenced by all sites.
  2. Prepare and practice the demo. The risk that something will go wrong with the logistics of the meeting increases exponentially with the number of sites involved.  Have a plan for the demo and then practice the plan to reduce the risk that you have forgotten something.  Practice will not eliminate all risk of an unforeseen problem, but it will help.
  3. Replicate the demo in multiple locations. In scenarios with multiple locations with large or important stakeholder populations, consider running separate demonstrations.  Separate demonstrations will lose some of the interaction between sites and add some overhead but will reduce the logistical complications.
  4. Record the demo. Some sites may not be able to participate in the demo live due to their time zones or other limitations. Recording the demo lets stakeholders that could not participate in the live demo hear and see what happened and provide feedback, albeit asynchronously.  Recording the demo will also give the team the ability to use the recording as documentation and reference material, which I strongly recommend.
  5. Check the network(s)! Bandwidth is generally not your friend. Make sure the network at each location can support the tools you are going to use (video, audio or other collaboration tools) and then have a fallback plan. Fallback plans should be as low tech as practical.  One team I observed actually had to fall back to scribes in two locations who kept notes on flip charts by mirroring each other (cell phones, Bluetooth headphones and whispering were employed) when the audio service they were using went down.

Demonstrations typically involve stakeholders, management and others.  The team needs feedback, but also needs to ensure a successful demo to maintain credibility within the organization.  In order to get the most effective feedback in a demo everyone needs to be able to hear, see and get involved.  Distributed demos need to focus on facilitating interaction even more than in-person demos do; otherwise, they risk being ineffective.


Categories: Process Management

ScrumMaster – Full Time or Not?

Mike Cohn's Blog - Tue, 04/21/2015 - 15:00

The following was originally published in Mike Cohn's monthly newsletter. If you like what you're reading, sign up to have this content delivered to your inbox weeks before it's posted on the blog, here.

I’ve been in some debates recently about whether the ScrumMaster should be full time. Many of the debates have been frustrating because they devolved into whether a team was better off with a full-time ScrumMaster or not.

I’ll be very clear on the issue: Of course, absolutely, positively, no doubt about it a team is better off with a full-time ScrumMaster.

But, a team is also better off with a full-time, 100 percent dedicated barista. Yes, that’s right: Your team would be more productive, quality would be higher, and you’d have more satisfied customers, if you had a full-time barista on your team.

What would a full-time barista do? Most of the time, the barista would probably just sit there waiting for someone to need coffee. But whenever someone was thirsty or under-caffeinated, the barista could spring into action.

The barista could probably track metrics to predict what time of day team members were most likely to want drinks, and have their drinks prepared for them in advance.

Is all this economically justified? I doubt it. But I am 100 percent sure a team would be more productive if they didn’t have to pour their own coffee. Is a team more productive when it has a full-time ScrumMaster? Absolutely. Is it always economically justified? No.

What I found baffling while debating this issue was that teams who could not justify a full-time ScrumMaster were not really being left a viable Scrum option. Those taking the “100 percent or nothing” approach were saying that if you don’t have a dedicated ScrumMaster, don’t do Scrum. That’s wrong.

A dedicated ScrumMaster is great, but it is not economically justifiable in all cases. When it’s not, that should not rule out the use of Scrum.

And a note: I am not saying that one of the duties of the ScrumMaster is to fetch coffee for the team. The barista is just an exaggerated example of a role that would make any team more productive.

Swift optional chaining and method argument evaluation

Xebia Blog - Tue, 04/21/2015 - 08:21

Everyone who has been programming in Swift knows that you can call a method on an optional object using a question mark (?). This is called optional chaining. But what if that method takes arguments whose values you need to get from the same optional? Can you safely force unwrap those values?

A common use case for this is a UIViewController that runs some code within a closure after some delay or after a network call. We want to keep a weak reference to self within that closure because we want to be sure that we don't create reference cycles in case the closure is retained. Besides, we (usually) don't need to run that piece of code within the closure in case the view controller got dismissed before the closure got executed.

Here is a simplified example:

class ViewController: UIViewController {

    let finishedMessage = "Network call has finished"
    let messageLabel = UILabel()

    override func viewDidLoad() {
        super.viewDidLoad()

        someNetworkCall { [weak self] in
            self?.finished(self?.finishedMessage)
        }
    }

    func finished(message: String) {
        messageLabel.text = message
    }
}

Here we call the someNetworkCall function that takes a () -> () closure as its argument. Once the network call is finished it will call that closure. Inside the closure, we would like to change the text of our label to a finished message. Unfortunately, the code above will not compile. That's because the finished method takes a non-optional String as a parameter, not the optional that is returned by self?.finishedMessage.

I used to fix such problems by wrapping the code in an if let statement:

if let this = self {
    this.finished(this.finishedMessage)
}

This works quite well, especially when there are multiple lines of code that you want to skip if self became nil (e.g. the view controller got dismissed and deallocated). But I always wondered if it was safe to force unwrap the method arguments even when self would be nil:

self?.finished(self!.finishedMessage)

The question here is: does Swift evaluate method arguments even if it does not call the method?

I went through the Swift Programming Guide to find any information on this but couldn't find an answer. Luckily it's not hard to find out.

Let's add a method that prints a message and then returns the finishedMessage, and call the finished method on an object that we know for sure is nil.

override func viewDidLoad() {
    super.viewDidLoad()

    let vc: ViewController? = nil
    vc?.finished(printAndGetFinishedMessage())
}

func printAndGetFinishedMessage() -> String {
    println("Getting message")
    return finishedMessage
}

When we run this, we see that nothing gets printed to the console. So now we know that Swift will not evaluate the method arguments when the method is not invoked. Therefore we can change our original code to the following:

someNetworkCall { [weak self] in
    self?.finished(self!.finishedMessage)
}

SPaMCAST 338 – Stephen Parry, Adaptive Organizations, Lean and Agile Thinking

 www.spamcast.net

http://www.spamcast.net

Listen Now

Subscribe on iTunes

Software Process and Measurement Cast 338 features our new interview with Stephen Parry.  We discussed adaptable organizations. Stephen recently wrote: “Organizations which are able to embrace and implement the principles of Lean Thinking are inevitably known for three things: vision, imagination and – most importantly of all – implicit trust in their own people.” We discussed why trust, vision and imagination have to be more than just words in a vision or mission statement to get value out of lean and Agile.

Need more Stephen Parry?  Check out our first interview.  We discussed adaptive thinking and command and control management!

Stephen’s Bio

Stephen Parry is an international leader and strategist on the design and creation of adaptive-lean enterprises. He has a world-class reputation for passionate leadership and organisational transformation by changing the way employees, managers and leaders think about their business and their customers.

He is the author of Sense and Respond: The Journey to Customer Purpose (Palgrave), a highly regarded book written as a follow-up to his award-winning organisational transformations. His change work was recognised when he received Best Customer Service Strategy at the National Business Awards. The judges declared his strategy had created organisational transformations which demonstrated an entire cultural change around the needs of customers and could, as a result, demonstrate significant business growth, innovation and success. He is the founder and senior partner at Lloyd Parry, a consultancy specialising in Lean organisational design and business transformation.

Stephen believes that organisations must be designed around the needs of customers through the application of employee creativity, innovation and willing contribution. This was recognised when his approach received awards from the European Service Industry for the Best People Development Programme and a personal award for Innovation and Creativity. Stephen has since become a judge at the National Business Awards and the National Customer Experience Awards. He is also a Fellow at the Lean Systems Society.

Website: www.lloydparry.com
Call to action!

Reviews of the Podcast help to attract new listeners.  Can you write a review of the Software Process and Measurement Cast and post it on the podcatcher of your choice?  Whether you listen on iTunes or any other podcatcher, a review will help to grow the podcast!  Thank you in advance!

Re-Read Saturday News

The Re-Read Saturday focus on Eliyahu M. Goldratt and Jeff Cox’s The Goal: A Process of Ongoing Improvement began on February 21st. The Goal has been hugely influential because it introduced the Theory of Constraints, which is central to lean thinking. The book is written as a business novel. Visit the Software Process and Measurement Blog and catch up on the re-read.

Note: If you don’t have a copy of the book, buy one.  If you use the link below it will support the Software Process and Measurement blog and podcast.

Dead Tree Version or Kindle Version 

I am beginning to think of which book will be next. Do you have any ideas?

Upcoming Events

QAI Quest 2015
April 20-21, Atlanta, GA, USA
Scale Agile Testing Using the TMMi
http://www.qaiquest.org/2015/

DCG will also have a booth!

Next SPaMCast

The next Software Process and Measurement Cast will feature our essay on demonstrations!  Demonstrations are an important tool for teams to gather feedback to shape the value they deliver.  Demonstrations provide a platform for the team to show the stories that have been completed so the stakeholders can interact with the solution. It is unfortunate that many teams mess them up.  We can help demonstrate what a good demo is all about.

 

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management