
Software Development Blogs: Programming, Software Testing, Agile Project Management



Celebration And Reflection Are Sides Of The Same Coin

Happy 4th of July.

Whether on a personal or a project level, the calendar is the most important measuring stick used to gauge progress because it is the measure everyone understands and can follow. We mark significant milestones with a celebration. For example, birthdays are a milestone that every country, person or project has (whether they celebrate them or not is another story). Whether the celebration is annual or monthly (anyone with a newborn has celebrated birthdays monthly) is not as material as using the calendar milestone to create space for celebration and reflection. At milestones like birthdays we should remember and reflect on where we started, the path that we have taken and our goals.

In Agile projects we build in time for reflection through retrospectives, while in classic waterfall projects we have post-implementation reviews. Regardless of which project management camp you find yourself in, introspection and the process adjustments generated from it are valuable. What is rarer is personal reflection. Sure, we create resolutions on New Year’s Day, but how often do we review our progress against those goals? Milestones represent not only a chance to celebrate but, as importantly, a chance to step back and take time for introspection. For example, the team at the Software Process and Measurement Podcast and blog (yes, there is a team) spent a bit of time during a recent retreat reflecting on a number of stalled projects and how we were going to get them going again.

Milestones evoke celebration and introspection. Celebration is the easy part; what is typically harder is to reflect on how we met our goals. In some cases we accept accomplishment without asking whether the ends justify the means. Early in my career I saw projects that, even though they delivered great outcomes, left project teams in shambles or, in some cases, used creative accounting to hide budget overages. In the short run the celebration was exciting, but in the long run was the cost worth the benefit? I think not. Tellingly, I do not know anyone from that stage of my career who is still in the business. A better approach is when the markers that show time passing are a signal to celebrate and to find time to reflect and renew. In projects, those milestones include sprint reviews, demonstrations, sprint planning or classic phase gates. Each of those milestones generates feedback that helps teams and individuals change direction if needed. Feedback provides the impetus for change. It is usually very easy to mark those events by letting friends, family and stakeholders know what has been accomplished since the last milestone.

Everyone likes a celebration, whether it is because of fireworks, a piece of cake or the demonstration of some tasty bit of promised functionality. After the celebration, step back and reflect on the path that has been taken. Seek out feedback, just as in a retrospective, and make changes where needed so that when you reflect back at the next milestone you do not have to dwell on what you should have done.


Categories: Process Management

Create the smallest possible Docker container

Xebia Blog - Fri, 07/04/2014 - 21:59

When you are playing around with Docker, you quickly notice that you are downloading large numbers of megabytes as you use preconfigured containers. A simple Ubuntu container easily exceeds 200MB, and as software is installed on top of it, the size increases. In some use cases you do not need everything that comes with Ubuntu. For example, if you want to run a simple web server written in Go, there is no need for any of the tooling around it at all.

I have been searching for the smallest possible container to start with and found this one:

docker pull scratch

The scratch image is perfect. Literally perfect! It is elegant, small and fast. It does not contain any bugs, security leaks, slow code or technical debt. And that is because it is basically empty, except for a bit of metadata added by Docker. In fact, you could have created this scratch image yourself with this command, as described in the Docker documentation:

tar cv --files-from /dev/null | docker import - scratch

 

So that is it, the smallest possible Docker image. End of blog post!

... or is there something more we can say about this? For example, how do you use the scratch base image? It turns out this brings some challenges of its own.

Creating content for the scratch image

What can we run on an empty base image? An executable without dependencies. Do you have executables without dependencies?

I used to write code in Python, Java and JavaScript. Each of these languages/platforms requires a runtime to be installed. Recently, I started looking into the Go (or GoLang if you prefer) platform. And it seems (spoiler alert) that Go is statically linked. So I tried compiling a simple web server saying Hello World and running it within the scratch container. Here is the code for the Hello World web server:

package main

import (
	"fmt"
	"net/http"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello World from Go in minimal Docker container")
}

func main() {
	http.HandleFunc("/", helloHandler)

	fmt.Println("Started, serving at 8080")
	err := http.ListenAndServe(":8080", nil)
	if err != nil {
		panic("ListenAndServe: " + err.Error())
	}
}

 

Obviously, I cannot compile my webserver inside the scratch container as there is no Go compiler in it. And as I am working on a Mac, I also cannot compile a Linux binary just like that. (Actually, it is possible to cross-compile GoLang sources to different platforms, but that is material for another blog post)
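As an aside, and not the route this post takes: with a Go toolchain that has cross-compilation support for Linux built in, a minimal sketch of cross-compiling from a Mac could look like the following, assuming the helloworld package has already been fetched into your GOPATH:

GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o helloworld github.com/adriaandejonge/helloworld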

So I first need a Docker container with a Go compiler. Let's start simple:

docker run -ti google/golang /bin/bash

 

Inside this container, I can build the Go web server, which I have committed in a GitHub repository:

go get github.com/adriaandejonge/helloworld

 

The go get command is a variant of the go build command that allows fetching and building remote dependencies. You can start the resulting executable with:

$GOPATH/bin/helloworld

 

This works. But it is not what we want. We need the hello world container to run inside the scratch container. So, in fact, we need a Dockerfile saying:

FROM scratch
ADD bin/helloworld /helloworld
CMD ["/helloworld"]

 

and then start that. Unfortunately, the way we started the google/golang container, there is no way to build this Dockerfile. So first, we need a way to access Docker from within the container.

Calling Docker from within Docker

When you use Docker, sooner or later you run into the need to control Docker from within Docker. There are multiple ways to accomplish this. You could use recursion and run Docker inside Docker. However, that seems overly complex and again leads to large containers. You can also provide access to the Docker server outside the instance with a few additional command line options:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) -ti google/golang /bin/bash

 

Before you continue, please rerun the Go compiler, as Docker forgot our previous compilation during the restart:

go get github.com/adriaandejonge/helloworld

 

When starting the container, the -v flag creates a volume inside the Docker container and allows you to provide a file from the Docker machine as input. The /var/run/docker.sock is the Unix socket that allows access to the Docker server. The $(which docker) part is a clever way to provide the path to the docker executable inside the container without hardcoding it. However, be careful when you use this command on a Mac with boot2docker. If the docker executable is installed in a different location than it is in boot2docker's virtual machine, this results in a mismatch, because it is the executable inside the boot2docker virtual server that gets inserted into the container. So you may want to replace $(which docker) with the hardcoded path /usr/local/bin/docker. Similarly, if you run a different system, there is a chance that /var/run/docker.sock is in a different location and you need to adjust it accordingly.
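For example, the hardcoded variant could look something like the sketch below; the exact paths depend on your installation:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v /usr/local/bin/docker:/usr/local/bin/docker -ti google/golang /bin/bash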

Now you can use the Dockerfile inside the google/golang container in the $GOPATH directory, which points to /gopath in this example. Actually, I already checked this Dockerfile into GitHub. So you can copy it from the Go build directory to the desired location like this:

cp $GOPATH/src/github.com/adriaandejonge/helloworld/Dockerfile $GOPATH

 

You need to copy this as the compiled binary is now located in $GOPATH/bin and it is not possible to include files from parent directories when building a Dockerfile. So after copying, the next step is:

docker build -t adejonge/helloworld $GOPATH

 

And if all goes well, Docker responds with something like:

Successfully built 6ff3fd5a381d

 

Which allows you to run the container:

docker run -ti --name hellobroken adejonge/helloworld

 

But unfortunately, now Docker responds with:

2014/07/02 17:06:48 no such file or directory

 

So what is going on? We have a statically linked executable inside a scratch container. Did we make a mistake?

As it turns out, Go does not statically link libraries. Or at least not all libraries. Under Linux, we can see the dynamically linked libraries for an executable with the ldd command:

ldd $GOPATH/bin/helloworld 

 

Which responds with:

linux-vdso.so.1 => (0x00007fff039fe000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f61df30f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f61def84000)
/lib64/ld-linux-x86-64.so.2 (0x00007f61df530000)

 

So before we can run the Hello World webserver, we need to tell the Go compiler to actually do static linking.

Creating statically linked executables in Go

In order to create statically linked executables, we need to tell Go not to use cgo, so that everything is compiled to pure Go code. The command to do so is:

CGO_ENABLED=0 go get -a -ldflags '-s' github.com/adriaandejonge/helloworld

 

Setting the CGO_ENABLED environment variable to 0 disables cgo, so that packages which would otherwise link against system C libraries (such as the net package) fall back to their pure Go implementations. The -a flag tells Go to rebuild all dependencies; otherwise you still end up with dynamically linked dependencies. And finally, the -ldflags '-s' flag is a nice extra: it reduces the file size of the resulting executable by roughly 50%. This flag works independently of the cgo setting; the size reduction is a result of removing debug information.

Just to be sure, rerun the ldd command.

ldd $GOPATH/bin/helloworld 

 

It should now respond with:

not a dynamic executable

 

You can also rerun the steps for creating the Docker container around the executable from scratch:

docker build -t adejonge/helloworld $GOPATH

 

And if all goes well, Docker responds with something like:

Successfully built 6ff3fd5a381d

 

Which allows you to run the container:

docker run -ti --name helloworld adejonge/helloworld

 

And this time it should respond with:

Started, serving at 8080

 

So far there have been many manual steps, and there is a lot of room for error. Let's exit from the google/golang container and continue from the surrounding machine:

<Press Ctrl-C>
exit

 

You can check the existence or absence of containers and images with:

docker ps -a
docker images -a

 

And you can do some cleaning of Docker with:

docker rm -f helloworld
docker rmi -f adejonge/helloworld

 

Creating a Docker container that creates a Docker container

We can also record the steps we took so far in a Dockerfile and have Docker do the work for us:

FROM google/golang
RUN CGO_ENABLED=0 go get -a -ldflags '-s' github.com/adriaandejonge/helloworld
RUN cp /gopath/src/github.com/adriaandejonge/helloworld/Dockerfile /gopath
CMD docker build -t adejonge/helloworld /gopath

 

I checked this Dockerfile into a separate GitHub repository called adriaandejonge/hellobuild. It can be built with this command:

docker build -t adejonge/hellobuild github.com/adriaandejonge/hellobuild

 

Providing the  -t flag names the image as adejonge/hellobuild and implicitly tags it as latest. These names make it easier for you to remove the image later on. Next,  you can create a container from this image while providing the flags that you have seen earlier in this post:

docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):$(which docker) -ti --name hellobuild adejonge/hellobuild

 

Providing the --name hellobuild flag makes it easier to remove the container after running. In fact, you can do so right away, because after running this command, you already created the adejonge/helloworld image:

docker rm -f hellobuild
docker rmi -f adejonge/hellobuild

 

And now you can start a new container named helloworld based on the adejonge/helloworld image as you have done before:

docker run -ti --name helloworld adejonge/helloworld

 

Because all these steps are run from the same command line, without opening a bash shell inside a Docker container, you can add these steps to a bash script and run it automatically. For your convenience, I have added these bash scripts to the hellobuild GitHub repository.
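As an illustration only (not necessarily the exact scripts in the hellobuild repository), such a script could look roughly like this:

#!/bin/bash
# Sketch: build the builder image, use it to produce the adejonge/helloworld image, then clean up.
set -e
docker build -t adejonge/hellobuild github.com/adriaandejonge/hellobuild
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):$(which docker) \
  -ti --name hellobuild adejonge/hellobuild
docker rm -f hellobuild
docker rmi -f adejonge/hellobuild
docker run -ti --name helloworld adejonge/helloworld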

Also, if you want to try the smallest possible Docker container running a Hello World web server without following all the steps described in this blog post, you can also use the pre-built image that I checked into the Docker Hub repository:

docker pull adejonge/helloworld

 

With docker images -a you can see that the size is 3.6MB. Of course, you can make it even smaller if you manage to create an executable that is smaller than the web server in Go that I wrote. In C or Assembly you may be able to do so. However, you can never make it smaller than the scratch image.

Baselining and Benchmarking Require Standards

Baseline, not base line…

Measuring a process generates a baseline.  By contrast, a benchmark is a comparison of a baseline to another baseline.  Benchmarks can compare baselines to other internal baselines or to external baselines.  I am often asked whether it is possible to externally benchmark measures and metrics that have no industry definition or that are occasionally team specific. Without developing a common definition of the measure or metric so that data is comparable, the answer is no.  A valid baseline and benchmark require that the measure or metric being collected is defined and consistently collected by all parties using the benchmark.

Measures or metrics used in external benchmarks need to be based on published standards or standards agreed upon between the parties involved in the benchmark.  Most examples of standards are obvious.  For example, in the software field there is a myriad of standards that can be leveraged to define software metrics.  Examples of standards groups include IEEE, ISO, IFPUG, COSMIC and OMG. Metrics that are defined by these standards can be externally benchmarked and there are numerous sources of data.  Measures without international standards require all parties to specifically define what is being measured.  I recently ran across a simple example in which the definition of a month caused a lot of discussion.  An organization compared function points per month (a simple throughput metric) to benchmark data they purchased.  The organization’s data was markedly below the benchmark.  The problem was that the benchmark used the common definition of a month (12 in a year) while their data used an internal definition based on a 13-period year. Either the benchmark data or their data should have been adjusted to make the two comparable.
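A worked example with hypothetical numbers shows how large that distortion can be. Assume the organization delivered 1,300 function points in a year:

Internal metric: 1,300 FP / 13 periods = 100 FP per “month”
Benchmark metric: 1,300 FP / 12 months ≈ 108.3 FP per month

Nothing about the organization’s performance differs, yet its throughput appears roughly 8% lower simply because of the definition of a month.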

Applying the defined metric consistently is also critical and not always a given.  For example, when discussing the cost of an IT project, understanding what is included is important for consistency.  Project costs could include hardware, software development and changes, purchased software, management costs, project management costs, business participation costs, and the list could go on ad infinitum.  Another example is the use of story points (a relative measure based on team perception). While a team may well be able to apply the measure consistently, because it is based on the team’s own comparisons it would be at best valueless and at worst dangerous outside of that team.

The data needed to create a baseline and for a benchmark comparison must be based on a common definition that is understood by all parties, or the results will generate misunderstandings.  A common definition is only a step along the route to a valuable baseline or benchmark; the data collection must also be done on a consistent basis.  It is one thing to agree upon a definition and another to have that definition consistently applied during data collection. Even metrics like IFPUG Function Points, which have a standard definition and rigorous training, can show up to a five percent variance between counters.  Less rigorously defined and trained metrics are unknowns that require due diligence by anyone who uses them.


Categories: Process Management

Dockerfiles as automated installation scripts

Xebia Blog - Thu, 07/03/2014 - 19:16

Dockerfiles are great, easily readable specifications for the installation and configuration of an application. They are terse, can be understood by anyone who understands UNIX commands, result in a testable product and can easily be turned into an automated installation script using a little awk'ward magic. Just in case you want to install the application in question in the good old-fashioned way, without the Docker hassle :-)

In this case, we needed to experiment with the Codahale Metrics library and Zabbix. Instead of installing a complete Zabbix server, I googled for a Docker container and was pleased to find a ready-to-run Zabbix server configuration created by Bernardo Gomez Palacio. Unfortunately, the server stopped repeatedly after about 5 minutes due to the simplevisor's impression that it was requested to stop. I could not figure out where this request was coming from, and as it was pretty persistent, I decided to install Zabbix on a virtual machine instead.

So I checked out the docker-zabbix GitHub project and found a ready-to-run Vagrant configuration to build the Zabbix Docker container itself (cool!). The Dockerfile contained easily readable instructions on how to install and configure Zabbix. But instead of copy-and-pasting the instructions to the command prompt, I cloned the project on the Vagrant box and created the following awk script in order to execute the instructions in the Dockerfile directly on the running system.

# For ADD instructions: create the target directory, then copy the listed file to its destination.
/^ADD/ {
    sub(/ADD/, "")
    cmd = "mkdir -p $(dirname " $2 ")"
    system(cmd)
    cmd = "cp " $0
    system(cmd)
}

# For RUN instructions: execute the remainder of the line as a shell command.
/^RUN/ {
    sub(/RUN/, "")
    cmd = $0
    system(cmd)
}
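
For example, saved as (hypothetically) dockerfile2install.awk, the script can be run against the cloned Dockerfile like this, from the directory that contains it and with enough privileges to execute the installation commands:

awk -f dockerfile2install.awk Dockerfile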

After a few minutes, the machine was properly configured. I just needed to run the database initialisation script (/start.sh) and ensure that all the services were started on reboot.

 cd /etc/init.d
for i in zabbix* httpd mysqld snmp* ; do
     chkconfig $i on
     service $i start
done

Even if you do not use Docker in production, Dockerfiles are a great improvement in the specifications of installation instructions!

How architecture enables kick ass teams (1): replication considered harmful?

Xebia Blog - Thu, 07/03/2014 - 11:51

At Xebia we regularly have discussions about Agile Architecture. What is it? What does it take? How should you organise it? Is it technical or organisational? And many more questions… which I won’t be answering today. What I will do today is kick off a blog series covering subjects that are often part of these heated debates. In general, what we strive for with Agile Architecture is an architecture that enables the organisation to keep moving fast, without IT being a limiting factor for realising changes. As you read this series you’ll start noticing one theme coming back over and over again: autonomy. Sometimes we’ll be focussing on the architecture of systems, sometimes on the architecture of the organisation or teams, but autonomy is the overarching theme. And if you’re familiar with Conway’s Law it should be no surprise that there is a strong correlation between team and system structure. Having a structure of teams that is completely different from your system landscape causes friction. We are convinced that striving for optimal team and system autonomy will lead to an organisation which is able to quickly adapt and respond to changes.

The first subject is replication of data. This is more a systems (landscape) issue than an organisational issue, and it is definitely not the only one; more posts will follow.

We all have to deal with situations where:

  • consumers of a data retrieval service (e.g. customer account details) require this service to be highly available, or
  • compute intensive analysis must be done using the data in a system, or
  • data owned by a system must be searched in a way that is not (efficiently) supported by that system

These situations all impact the autonomy of the system owning the data. Is the system able to provide its functionality at the required quality level, or do these external requirements lead to negative consequences for the quality of the service provided or for maintainability? Should these requirements be forced into the system, or is another approach more appropriate?

The examples above could all be solved by replicating data into another system which is more suitable for meeting these requirements, but… replication of data is considered harmful by some. Is it really? Often mentioned reasons not to replicate data are:

  • The replicated data will always be less accurate and timely than the original data
    True, and is this really a problem for the specific situation you’re dealing with? Sometimes you really need the latest version of a customer record, but in many situations it is no problem if the data is seconds, minutes or even hours old.
  • Business logic that interprets the data is implemented twice and needs to be maintained
    Yes, and you have to compare the costs of this against the benefits. As long as the benefits outweigh the costs, it is a good choice. You can even consider providing a library that is used in both systems.
  • System X is the authoritative source of the data and should be the only one that exposes it
    Agreed, and keeping that system as the authoritative source is good practice; it does not mean that there cannot be read-only access to the same (replicated) data in other systems.

As you can see, it is never a black-and-white choice; you’ll have to make a balanced decision that includes the benefits and costs of both alternatives. The gained autonomy and the business benefits derived from it can easily outweigh the extra development, hosting and maintenance costs of replicating data.

A few concrete examples from my own experience:

We had a situation where a CRM system instance owned data which was also required in a 24x7 emergency support process. The data was nicely exposed by a number of data retrieval services. At that organisation the CRM system deployment was such that most components were redundant, but during updates the system as a whole would still be down for several hours, which was not acceptable given that the data was required in a 24x7 emergency support process. Making the CRM system deployment upgradable without downtime was not possible, or would have been very costly.
In this situation, replicating the CRM system database to another datacenter using standard database features and having the data retrieval services access either that replicated database or the original database (as fallback) was much cheaper than trying to make the CRM system itself highly available. The replicated database would remain accessible even while the CRM system was being upgraded. Yes, we were bypassing the CRM system business logic for interpreting the data, but in this situation the logic was so simple that the costs of reimplementing and maintaining it in a new lightweight service (separate from the CRM system) were negligible.

Another example is from a telecom provider that uses a chain of fulfilment systems in which it registered all network products sold to its customers (e.g. internet access, telephony, tv). Each product instance depends on instances registered in another system and if you drill down deep enough you’ll reach the physical network hardware ports on which it runs. The systems that registered all products used a relational model which was okay for registration. However, questions like “if this product instance breaks, which customers are impacted” were impossible to answer without overheating CPUs in those systems. By publishing all changes in the registrations to a separate system we could model the whole inventory of services as a network graph and easily do analysis on it without impacting the fulfilment systems. The fact that the data would be (at most) a few seconds old was absolutely no problem.

And a last example: sometimes you want to do a full (phonetic) text search through a subset of your domain model. Relational data models quickly get you into an unmaintainable situation. Your SQL queries will require many tables, lots of inefficient “LIKE ‘%gold%’” clauses, and developers will have a hard time understanding what a query actually intended to do. Replicating the data to a search engine makes searching far easier and provides more possibilities for searches that are hard to realise in a relational database.

As you can see replication of data can increase autonomy of systems and teams and thereby make your system or system landscape and organisation more agile. I.e. you can realise new functionality faster and get it available for your users quicker because the coupling with other systems or teams is reduced.

In a next blog we'll discuss another subject that impacts team or system autonomy.

Teams Have a Peak Load, Revisited

Does the raft have a peak load?

Peak load is an engineering concept that has found its way into software development and maintenance conversations. Peak load is a spike over a specific period of time, not a sustainable level of performance. When applied to a software team, the peak load is how much additional work a team can do for a short period of time. Previously we concluded with the admonition: “The idea of pushing a team to a peak load should be used judiciously.” To which Assaf Sternberg asked: “Tom, how do you square this away with another thing that differentiates good software engineering from assembly line work – the ability to refactor/reengineer the solution in anticipation of future work to make the latter easier/faster/less risky? Over the long run, this should make it possible for the ‘functional point’ count per sprint to continue to rise (while these items would require less effort)”

Refactoring, also known as code refactoring, is the process of changing or restructuring code to be simpler, more effective or more efficient without changing the code’s functional behavior. Refactoring can also be done to make code more maintainable or extensible. The need to refactor can be inferred from the Agile principles of simplicity and emergent design. Refactoring is an integral part of development in most implementations of Agile.  For example, in test-driven development the final step in the process is to refactor both the code and the design.

In order for refactoring to be effective, it needs to be a planned part of the work and needs to be done in pursuit of an overall goal that can be tested. During sprint planning, teams need to identify tasks for refactoring just as they do for other development activities.  Refactoring is just another task that requires time and uses the team’s capacity. When the team plans for refactoring, it is reflected in the team’s velocity and productivity. When a team adopts the technique of refactoring it will initially reduce its functional output, thereby reducing velocity and productivity. But over the long run, data I have collected as an Agile coach shows that productivity and velocity increase (about 5% year over year). When productivity goes up, more functionality is delivered for less effort. Refactoring is at least partially responsible for this increase.

Refactoring is done to attain a stated goal.  For example, a team I worked with recently focused their refactoring efforts on maintainability (the team had developed standards for maintainability). Given that they had to implement, maintain and enhance the code as a team, maintainability improves their overall efficiency (reflected in velocity and productivity changes over time).  The team developed the goal, then agreed on how to pursue the goal and finally agreed on how they would know if they were successful.  A goal is important to ensure that team members do not act in an ad hoc manner.

How does or should refactoring impact a team’s sustainable pace and, by extension, their peak load? Refactoring does not extend the day; there are the same number of hours in your workday. What it does is help the team be more efficient and effective over time. Therefore refactoring increases velocity and productivity.  This is only possible if refactoring is planned as part of the team’s normal activity and focused on achieving a goal.


Categories: Process Management

Sprint Planning and Non-Story Related Tasks

Burn down chart

I have been asked more than once what to do with tasks that occur during a sprint that are not directly related to a committed story.  You need to first determine 1) whether the team commits to the task, and 2) whether it is generally helpful for the team to account for the effort and burn it down.  Tasks can be categorized to help determine whether they affect capacity or need to be planned and managed at the team level.  Tasks that the team commits to performing need to be managed as part of the team’s capacity, while administrative tasks reduce capacity.

Administrative tasks.  Administrative tasks include planned vacations, corporate meetings, meetings with human resources managers, non-project related training and others.  Classify any tasks that are not related to delivering project value as administrative tasks. One attribute of these types of tasks is that team members do not commit to them; they are levied by the organization. The effort planned for these tasks should be deducted from the capacity of the team.  For example, in a five-person team with 200 hours to spend on a project during a sprint (capacity), if one person was taking 20 hours of vacation the team’s capacity would be 180 hours.  If in addition to the vacation all five had to attend a two-hour department staff meeting (even an important staff meeting), the team’s capacity would be reduced to 170 hours.  Administrative tasks can add up; deducting them from capacity makes the impact of these tasks obvious to everyone involved with the team.  Note: in organizations that have a very high administrative burden I sometimes draw a line on the burn down chart that represents capacity before administrative tasks are removed.

Project-related non-story tasks. Project-related non-story tasks are required to deliver the project value.  This category of tasks includes backlog grooming, spikes and retrospectives.  There is a school of thought that the effort for these tasks should be deducted from the capacity.  However, deducting the effort from capacity takes away the team’s impetus to manage the effort and the tasks, and with it some of the team’s ability to self-organize and self-manage. The team should plan and commit to these tasks, therefore they are added to the backlog and burned down. This puts the onus on the team to complete the tasks and manage the time needed to complete them. As an example, if our team with 170 hours of capacity planned to do a 10-hour spike and have three people perform backlog grooming for an hour each (a total of 13 hours for both), I would expect to see cards for these tasks in the backlog, and as they are completed the 13 hours would be burned down from the capacity.

Tasks that are under the control of the team need to be planned and burned against their capacity.  The acts of planning and accounting for the time provide the team with the ability to plan and control the work they commit to completing.  When tasks are planned for the team that they cannot control, deducting them from the overall capacity helps keep the team from overcommitting to work that must be delivered.


Categories: Process Management

Adding Decorated User Roles to Your User Stories

Mike Cohn's Blog - Tue, 07/01/2014 - 15:00

When writing user stories there are, of course, two parts: user and story. The user is embedded right at the front of stories when using the most common form of writing user stories:

As a <user role> I <want/can/am required to> <some goal> so that <some reason>.

Many of the user roles in a system can be considered first class users. These are the roles that are significant to the success of a system and will occur over and over in the user stories that make up a system’s product backlog.

These roles can often be displayed in a hierarchy for convenience. An example for a website that offers benefits to registered members is as follows:

[User role hierarchy diagram from the original post]

In this example, some stories will be written with “As a site visitor, …” This would be appropriate for something anyone visiting the site can do, such as view a license agreement.

Other stories could be written specifically for registered members. For example, a website might have, “As a registered member, I can update my payment information.” This is appropriate to write for a registered member because it’s not appropriate for other roles such as visitors, former members, or trial members.

Stories can also be written for premium members (who are the only ones who can perhaps view certain content) or trial members (who are occasionally prompted to join).

But, beyond the first class user roles shown in a hierarchy like this, there are also other occasional users—that is, users who may be important but who aren’t really an entirely separate type of user.

First-time users can be considered an example. A first-time user is usually not really a first-class role, but first-time users can be important to the success of a product, such as the website in this example.

Usually, the best way to handle this type of user role is by adding an adjective in front of the user role. That could give us roles such as:

  • First-time visitor
  • First-time member

The former would refer to someone on their absolute first visit to the website. The latter could refer to someone who is on the site for the first time as a subscribed member.

This role could have stories such as, “As a first-time visitor, additional prompts appear on the site to direct me toward the most common starting points so that I learn how to navigate the system.”

A similar example could be “forgetful member.” Forgetful members are very unlikely to be a primary role you identify as vital to the success of your product. We should, however, be aware of them.

A forgetful member may have stories such as “As a forgetful member, I can request a password reminder so that I can log in without having to first call tech support.”

I refer to these as decorated user roles, borrowing the term from mathematics where decorated symbols such as x-bar (x̄) and x-hat (x̂) are common.

Decorated users add the ability to further refine user roles without adding complexity to the user role model itself through the inclusion of additional first-class roles. As a fan of user stories, I hope you find the idea as helpful as I do.

In the comments, let me know what you think and please share some examples of decorated user roles you’ve used or encountered.


How combined Lean- and Agile practices will change the world as we know it

Xebia Blog - Tue, 07/01/2014 - 08:50

You might have attended our presentation about eXtreme Manufacturing this month or Nalden's keynote last week at XebiCon 2014. There are a few epic takeaways and additions I would like to share with you in this blog post.

Epic TakeAway #1: The Learn, Unlearn and Relearn Cycle

As Nalden expressed in his inspiring keynote, one of the major things that makes him successful is being able to Learn, Unlearn and Relearn time and again. In my opinion, this will be the key ability for every successful company in the near future. In fact, this is how nature evolves: in the end, only the species that are able to adapt to changing circumstances will survive and evolve. This mechanism is why, for example, most startups fail, but those that survive can be extremely disruptive for non-agile organizations. The best example of this is of course WhatsApp, which beat up the telco industry by almost destroying its whole business model in only a few months. Learn more about disruptive innovation from one of my personal heroes, Harvard Professor Clayton Christensen.

Epic TakeAway #2: Unlearning Waterfall, Relearning Lean & Agile

Globally, Waterfall is still the dominant method in companies and universities. Waterfall has its origins more than 40 years ago. Times have changed. A lot. A new, successful and disruptive product can be there in only a matter of days instead of (many) years. Finally, things are changing. For example, the US Department of Defense has recently embraced Lean and Agile as mandatory practices, especially Scrum. Schools and universities are also more and more adopting the Agile way of working. More on this later in this blog post.

Epic TakeAway #3: Combined Lean and Agile Practices = XM

Lean practices arose in Japan in the 1980s, mainly in the manufacturing industry, Toyota being the frontrunner. Agile practices like Scrum were first introduced in the 1990s by Ken Schwaber and Jeff Sutherland; these practices were mainly applied in the IT industry. Until recently, the manufacturing and IT worlds didn't really join forces to combine Lean and Agile practices. The WikiSpeed initiative of Joe Justice proved that combining these practices results in a hyper-productive environment, in which a 100 mile-per-gallon road-legal sports car could be developed in less than 3 months. Out of this success eXtreme Manufacturing (XM) arose: finally, a powerful combination of best practices from the manufacturing and IT worlds came together.

Epic TakeAway #4: Agile Mindset & Education

As Sir Ken Robinson and Dan Pink have described in their famous TED talks, the way most people are educated and rewarded is not suitable anymore for modern times and even conflicts with the way we are born. We learn by "failing", not by preventing it. Failing in its essence should stimulate creativity to do things better next time, not be punished. In the long run, failing (read: learning!) has more added value than short-term success, for example by chasing milestones blindly. EduScrum in the Netherlands stimulates schools and universities to apply Scrum in their daily classes in order to stimulate creativity, happiness, self-reliance and talent. The results at the schools joining this initiative are spectacular: happy students, fewer dropouts and significantly higher grades. For a prestigious project of the Delft University, Forze, the development of a hydrogen race car, the students are currently being trained and coached to apply Agile and Lean practices. These results too are more than promising: the Forze team is happier, more productive and more able to learn faster and better from setbacks. Actually, they are taking the first steps towards being anti-fragile. Due to an intercession by the Forze team members themselves, the current support by agile (Xebia) coaches is now planned to be extended to the flagship of the Delft University: the NUON solar team.

The Final Epic TakeAway

In my opinion, we have reached a tipping point in the way goals should be achieved. Organizations are massively abandoning Waterfall and embracing Agile practices like Scrum. Adding Lean practices, as Joe Justice did in his WikiSpeed project, makes Agile and Lean extremely powerful. Yes, this will even make the world a much better place. We cannot prevent natural disasters with this, but we can be anti-fragile. We cannot prevent every epidemic, but we can respond to it in an XM fashion by developing a vaccine in only days instead of years. This brings me finally to the missing statement of the current Agile Manifesto: we should Unlearn and Relearn before we Judge. Dare to dream like a little kid again. Unlearn your skepticism. Companies like Boeing, Lockheed Martin and John Deere already did. Adopting XM sped up their velocity in some cases by more than 7 times.

Measures of Central Tendency

Clouds distributed around the mean, err, horizon.

Measures of central tendency attempt to define the center of a set of data. Measures of central tendency are important for interpreting benchmarks and when single rates are used in contracts.  There are many ways to measure central tendency; however, the three most popular are the mean, the median and the mode.  Each of the measures of central tendency provides interesting information and each is more or less useful in different circumstances.  Let’s explore the following simple data set.

[Sample data set of 13 observations, shown as an image in the original post]

Mean

The mean is the most popular and well-known measure of central tendency.  The mean is calculated by summing the values in the sample (or population) and dividing by the total number of observations.  In the example the mean is calculated as 231.43 / 13, or 17.80.  The mean is most useful when the data points are distributed evenly around the mean or are normally distributed. The mean is highly influenced by outliers.

Advantages include:

  • Most common measure of central tendency used and therefore most quickly understood.
  • The answer is unique.

Disadvantages include:

  • Influenced by extremes (skewed data and outliers).

Median

Median is the middle observation in a set of data.  Median is affected less by outliers or skewed data.  In order to find the median (by hand) you need to arrange the data in numerical order.  Using the same data set:

[The same data set arranged in numerical order, shown as an image in the original post]

The median is 18.64 (six observations above and six observations below).  Since the median is positional, it is less affected by extreme values. Therefore the median is a better reflection of central tendency for data that has outliers or is skewed.  Most project metrics include outliers and tend to be skewed, therefore the median is very valuable when evaluating software measures.

Advantages

  • Extreme values (outliers) do not affect the median as strongly as they do the mean.
  • The answer is unique.

Disadvantages

  • Not as popular as the mean.

Mode

The mode is the most frequent observation in the set of data.  The mode may not be the best measure of central tendency and may not be unique. Worse, the set may not have a mode at all.  The mode is most useful when the data is non-numeric or when you are attempting to find the most popular item in a data set. Determine the mode by counting the number of occurrences of each unique observation. In our example data set:

[Counts of each unique observation in the data set, shown as an image in the original post]

The mode in this data set is 26.43; it occurs twice.

Advantages:

  • Extreme values (outliers) do not affect the mode.

Disadvantages:

  • May be more than one answer.
  • If every value is unique the mode is useless (every value is the mode).
  • May be difficult to interpret.

Based on our test data set the three measures of central tendency return the following values:

  • Mean: 17.8
  • Median: 18.64
  • Mode: 26.43

Each statistic returns a different value.  The mean and median provide relatively similar values, therefore it would be important to understand whether the data set represents a sample or the population.  If the data is from a sample or could become more skewed by extreme values, the median is probably a better representation of the central tendency in this case.  If the population is evenly distributed about the mean (or is normally distributed), the mean is a better representation of central tendency. In the sample data set the mode provides little explanative power. Understanding which measure of central tendency is being used allows change agents to better target changes, and if your contract uses metrics to determine performance, which measure is used can have an impact.  Changing or arguing over which to use after the fact smacks of poor contracting or gaming the measure.


Categories: Process Management


SPaMCAST 296 – Jeff Dalton, CMMI, Agile, Resiliency

Software Process and Measurement Cast - Sun, 06/29/2014 - 22:00

SPaMCAST 296 features our interview with Jeff Dalton, in which we talked about Agile and resiliency. If Agile is resilient, it will be able to spring back into shape after being bent or compressed by the pressures of development and support.  In the conversation Jeff and I discussed whether Agile is resilient and how frameworks like the CMMI can be used to make Agile more resilient.

Jeff is Broadsword’s President, Certified Lead Appraiser, CMMI Instructor, ScrumMaster and author of “agileCMMI,” Broadsword’s leading methodology for incremental and iterative process improvement.  He is Chairman of the CMMI Institute’s Partner Advisory Board and former President of the Great Lakes Software Process Improvement Network (GL-SPIN).  He is a recipient of the Software Engineering Institute’s SEI Member Award for Outstanding Representative for his work uniting the Agile and CMMI communities together through his popular blog “Ask the CMMI Appraiser.”  He holds degrees in Music and Computer Science and builds experimental airplanes in his spare time.  You can reach Jeff at appraiser@broadswordsolutions.com.

Contact Data:
Email:  appraiser@broadswordsolutions.com.
Twitter:  @CMMIAppraiser
Blog: http://askthecmmiappraiser.blogspot.com/
Web:  http://www.broadswordsolutions.com/
also see:  www.cmmi-tv.com

Next week we will feature our essay on IFPUG Function Points.  IFPUG function points are an ISO Standard means to size projects and applications. IFPUG function points are used across a wide range of project types, industries and countries.

Upcoming Events

Upcoming DCG Webinars:

July 24 11:30 EDT – The Impact of Cognitive Bias On Teams
Check these out at www.davidconsultinggroup.com

I will be attending Agile 2014 in Orlando, July 28 through August 1, 2014.  It would be great to get together with SPaMCAST listeners, let me know if you are attending.

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Adding #NoEstimates to the Framework

#NoEstimates . . . Yes or No?

 

Hand Drawn Chart Saturday!

When I published An Estimation Framework Is Required In Complex Environments, several people that I respect, including Luis Gonçalves (interviewed on SPaMCAST 282 with Ben Linders), begged to differ with my conclusion that a framework is required.  Luis made an impassioned plea for #NoEstimates.  The premise of #NoEstimates is that estimates enforce a plan, and plans are many times overcome by changes that range across both technology and business needs.

Vasco Duarte, a leading proponent of #NoEstimates, describes the process as follows:

  1. Select the highest value piece of work the team needs to do.
  2. Break that piece of work down into small components.  Vasco uses the term risk-neutral chunks, meaning pieces of work that will not put the project at risk if they are not delivered on the first attempt.
  3. Develop each piece of work according to the definition of done. #NoEstimates makes a strong case that unless done means usable by the end customers, the project is not getting the feedback needed to avoid negative surprises.
  4. Iterate and refactor. Continue until the product or enhancement meets the organization’s definition of done.

Estimates are part of a continuum that begins with budgeting, continues to estimating and terminates at planning.  Organizations build strategic plans based on bringing new or enhanced products to market.  For example, a retailer might commit to opening x number of stores in the next year.  Once publicly stated, the organization will need to perform to those commitments or face a wide range of consequences.  Based on experience gathered by working in several retailers’ IT organizations, I know that even a single store is a major effort that involves store operations, purchasing, legal and IT.  Missing an opening date causes embarrassment and, typically, large financial penalties (paying workers who aren’t working, rescheduling advertising and possible tax penalties, not to mention the impact on stock prices).  Organizations need to budget and estimate at a strategic level.

Where the #NoEstimates approach makes sense is at the planning level.  The #NoEstimates process empowers teams (product owner, Scrum Master/coach and development personnel) to work on the highest-value work first and to develop a predictable capacity to deliver work.  The results generated by the team provide feedback to evaluate the promises made through organization-level budgets and estimates.
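
To make "predictable capacity" a little more concrete, here is a minimal sketch (my own illustration, not part of Vasco's material; all numbers are invented) of how a team might project remaining sprints from the count of items completed in past sprints rather than from effort estimates:

// Hypothetical illustration: forecast remaining sprints from historical
// throughput (items completed per sprint) instead of effort estimates.
var completedPerSprint = [6, 4, 7, 5, 6];   // items finished in each past sprint
var remainingItems = 42;                    // items left in the backlog

var totalCompleted = completedPerSprint.reduce(function (sum, n) {
  return sum + n;
}, 0);
var averageThroughput = totalCompleted / completedPerSprint.length;

// A pessimistic forecast uses the slowest sprint, an optimistic one the fastest.
var slowest = Math.min.apply(null, completedPerSprint);
var fastest = Math.max.apply(null, completedPerSprint);

console.log('Likely sprints remaining:', Math.ceil(remainingItems / averageThroughput));
console.log('Range:', Math.ceil(remainingItems / fastest),
  'to', Math.ceil(remainingItems / slowest), 'sprints');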

When performance is at odds with what has been promised, business choices should be made.  Choices can range from involving other teams (when this makes sense) to accepting the implications of not meeting the commitments made by the organization.

Does #NoEstimates make sense?  Yes, the process and concepts embodied by #NoEstimates fit solidly into a framework of budgeting, estimating and planning.  Without a framework to codify the use of #NoEstimates and to govern organizational behavior, reaching the point of making hard business choices will generate pressure to fall back into a command-and-control style.

Note:  I am working on scheduling an interview with Luis and Vasco on the Software Process and Measurement Cast to discuss #NoEstimates.


Categories: Process Management

Fixing The Top Five Issues in Project Estimation

Sometimes you need a seeing eye dog to see the solution.

In the entry The Top Five Issues In Project Estimation, we identified the five macro categories of estimation problems that emerged when I asked a group of people the question "What are the two largest issues in project estimation?"  Knowing what the issues are is important; however, equally important is having a set of solutions.

  1. Requirements. Techniques that reduce the impact of unclear and changing requirements on budgeting and estimation include release plans, identifying a clear minimum viable product and changing how requirements changes are viewed when judging project success. See Requirements: The Chronic Problem with Project Estimation.
  2. Estimate Reliability. Recognize that budgets, estimates and plans are subject to the cone of uncertainty.  The cone of uncertainty reflects the fact that the earlier you are in a project, the less you know about it, and the less you know, the more variable any prediction will be.  Budgets, estimates and plans are predictions of cost, effort, duration or size.
  3. Project History.  Collect predicted and actual project size, effort, duration and other project demographics for each project.  Project history can be used both as the basis for analogous estimates and/or to train parametric estimation tools.  The act of collecting the quantitative history and the qualitative story about how projects performed is a useful form of introspection that can drive change.
  4. Labor Hours Are Not The Same As Size.  Implement functional sizing (e.g. IFPUG Function Points) or relative sizing (story points) as a step in the estimation process. Focusing on size separately allows estimators to concentrate on the other parts of the estimation process, like team capabilities, processes, risks or changes that will affect velocity.  Greater focus leads to greater understanding, which leads to a better estimate (a small sketch of size-based estimation follows this list).
  5. No One Dedicated to Estimation.  Estimating is a skill that requires practice to develop consistency.  While everyone should understand the concepts of estimation, consistency will be gained faster if someone is dedicated to learning and executing the estimation process.
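
As a small sketch of what separating size from effort can look like (my own illustration with invented numbers, not a prescribed formula), size is established first and then converted to effort using a productivity rate taken from project history:

// Hypothetical illustration: derive an effort estimate from size and
// historical productivity rather than jumping straight to labor hours.
var sizeInFunctionPoints = 250;        // from functional or relative sizing
var hoursPerFunctionPoint = 9.5;       // productivity rate from project history
var adjustmentFactor = 1.15;           // e.g. new team members or new technology

var estimatedEffortHours =
  sizeInFunctionPoints * hoursPerFunctionPoint * adjustmentFactor;

console.log('Estimated effort:', Math.round(estimatedEffortHours), 'hours');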

Solving the five macro estimation problems requires organizational change.  Many of the changes required are difficult because they are less about "how" to estimate and more about what we think estimates are, which leads into a discussion of why we estimate.  Organizations budget and estimate to provide direction at a high level.  At this level, budgets and estimates affect planning for tax accruals and for communicating portfolio-level decisions to organizational stakeholders.  Investing in improving how organizations estimate will improve communication between CIOs, CFOs and business stakeholders.


Categories: Process Management

The Top Five Issues In Project Estimation

 

Sometimes estimation leaves you in a fog!

When I recently asked a group of people the question "What are the two largest issues in project estimation?", I received a wide range of answers. The range of answers is probably a reflection of the range of individuals answering.  Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry, Requirements: The Chronic Problem with Project Estimation.  Bottom line, change is required to embrace dynamic development methods and that change will require changes in how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between the development and estimation processes. One of the respondents noted, "most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear."
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future.  Collecting consistent historical data is critical to learning and not repeating the same mistakes over and over.  According to Joe Schofield, "few groups retain enough relevant data from their experiences to avoid relearning the same lesson."
  4. Labor Hours Are Not The Same As Size.  Many estimators estimate the effort needed to perform either the whole project or individual tasks.  By jumping immediately to effort, estimators miss all of the nuances that affect the level of effort required to deliver value.  According to Ian Brown, "then the discussion basically boils down to opinions of the number of hours, rather than assessing other attributes that drive the number of hours that something will take."
  5. No One Dedicated to Estimation.  Estimating is a skill built on a wide range of techniques that need to be learned and practiced.  When no one is dedicated to developing and maintaining estimates, it is rare that anyone can learn to estimate consistently, which affects reliability.  To quote one of the respondents, "consistency of estimation from team to team, and within a team over time, is non-existent."

 

Each of the top five issues is solvable without throwing out the concept of estimation, which is critical for planning at the organization, portfolio and product levels.  Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals for the estimation process.


Categories: Process Management

Mocking a REST backend for your AngularJS / Grunt web application

Xebia Blog - Thu, 06/26/2014 - 17:15

Anyone who has ever developed a web application will know that a lot of time is spent in a browser checking that everything works and looks good. And you want to make sure it looks good in all possible situations. For a single-page application built with a framework such as AngularJS that gets all its data from a REST backend, this means you should verify your front-end against different responses from your backend. For a small application with primarily GET requests to display data, you might get away with testing against your real (development) backend. But for large and complex applications, you need to mock your backend.

In this post I'll go into detail about how you can solve this by mocking GET requests for an AngularJS web application that's built using Grunt.

In our current project, we're building a new mobile front-end for an existing web application. Very convenient, since the backend already exists with all the REST services that we need. An even bigger convenience is that the team that built the existing web application also built an entire mock implementation of the backend. This mock implementation gives standard responses for every possible request. Great for our Protractor end-to-end tests! (Perhaps another post about that another day.) But this mock implementation is not so great for the non-standard scenarios. Think of error messages, incomplete data, large numbers or an unusual currency. How can we make sure our UI displays these kinds of cases correctly? We usually cover all these cases in our unit tests, but sometimes you just want to see it right in front of you as well. So we started building a simple solution right inside our Grunt configuration.

To make this solution work, we need to make sure that all our REST requests go through the Grunt web server layer. Our web application is served by Grunt on localhost port 9000. This is the standard configuration that Yeoman generates (you really should use Yeoman to scaffold your project). Our development backend is also running on localhost, but on port 5000. In our web application we want to make all REST calls using the `/api` path so we need to rewrite all requests to http://localhost:9000/api to our backend: http://localhost:5000/api. We can do this by adding middleware in the connect:livereload configuration of our Gruntfile.

livereload: {
  options: {
    open: true,
    middleware: function (connect, options) {
      return [
        require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

        /* The lines below are generated by Yeoman */
        connect.static('.tmp'),
        connect().use(
          '/bower_components',
          connect.static('./bower_components')
        ),
        connect.static(appConfig.app)
      ];
    }
  }
},

Do the same for the connect:test section as well.
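
For reference, a minimal sketch of what the connect:test block could look like with the same rewrite rule added (the other options shown here are assumptions based on a typical Yeoman-generated project; yours may differ):

test: {
  options: {
    port: 9001,
    middleware: function (connect) {
      return [
        require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

        /* The lines below are generated by Yeoman */
        connect.static('.tmp'),
        connect.static('test'),
        connect().use(
          '/bower_components',
          connect.static('./bower_components')
        ),
        connect.static(appConfig.app)
      ];
    }
  }
},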

Since we're using 'connect-modrewrite' here, we'll have to add this to our project:

npm install connect-modrewrite --save-dev

With this configuration every request starting with http://localhost:9000/api will be passed on to http://localhost:5000/api, so we can just use /api in our AngularJS application. Now that we have this working, we can write some custom middleware to mock some of our requests.

Let's say we have a GET request /api/user returning some JSON data:

{"id": 1, "name":"Bob"}

Now we'd like to see what happens with our application in case the name is missing:

{"id": 1}

It would be nice if we could send a simple POST request to change the response of all subsequent calls. Something like this:

curl -X POST -d '{"id": 1}' http://localhost:9000/mock/api/user

We prefixed the path that we want to mock with /mock in order to know when we should start mocking something. Let's see how we can implement this. In the same Gruntfile that contains our middleware configuration we add a new function that will help us mock our requests.

var mocks = [];
function captureMock() {
  return function (req, res, next) {

    // match on POST requests starting with /mock
    if (req.method === 'POST' && req.url.indexOf('/mock') === 0) {

      // everything after /mock is the path that we need to mock
      var path = req.url.substring(5);

      var body = '';
      req.on('data', function (data) {
        body += data;
      });
      req.on('end', function () {

        mocks[path] = body;

        res.writeHead(200);
        res.end();
      });
    } else {
      next();
    }
  };
}

And we need to add the above function to our middleware configuration:

middleware: function (connect, options) {
  return [
    captureMock(),
    require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

    connect.static('.tmp'),
    connect().use(
      '/bower_components',
      connect.static('./bower_components')
    ),
    connect.static(appConfig.app)
  ];
}

Our function will be called for each incoming request. It captures each POST request starting with /mock as a request to define a mock response and stores the body in the mocks variable, with the path as the key. So if we execute our curl POST request, we end up with something like this in our mocks variable:

mocks['/api/user'] = '{"id": 1}';

Next we need to actually return this data for requests to http://localhost:9000/api/user. Let's make a new function for that.

function mock() {
  return function (req, res, next) {
    var mockedResponse = mocks[req.url];
    if (mockedResponse) {
      res.writeHead(200);
      res.write(mockedResponse);
      res.end();
    } else {
      next();
    }
  };
}

And also add it to our middleware.

  ...
  captureMock(),
  mock(),
  require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),
  ...

Great, we now have a simple mocking solution in just a few lines of code that allows us to define mocks by sending simple POST requests to our server. However, it can only respond with status code 200 and it cannot differentiate between HTTP methods like GET, PUT, POST and DELETE. Let's change our functions a bit to support that functionality as well.

var mocks = {
  GET: {},
  PUT: {},
  POST: {},
  PATCH: {},
  DELETE: {}
};

function captureMock() {
  return function (req, res, next) {

    // match on POST requests starting with /mock
    if (req.method === 'POST' && req.url.indexOf('/mock') === 0) {

      // everything after /mock is the path that we need to mock
      var path = req.url.substring(5);

      var body = '';
      req.on('data', function (data) {
        body += data;
      });
      req.on('end', function () {

        var headers = {
          'Content-Type': req.headers['content-type']
        };
        for (var key in req.headers) {
          if (req.headers.hasOwnProperty(key)) {
            if (key.indexOf('mock-header-') === 0) {
              headers[key.substring(12)] = req.headers[key];
            }
          }
        }

        mocks[req.headers['mock-method'] || 'GET'][path] = {
          body: body,
          responseCode: req.headers['mock-response'] || 200,
          headers: headers
        };

        res.writeHead(200);
        res.end();
      });
    } else {
      next();
    }
  };
}

function mock() {
  return function (req, res, next) {
    var mockedResponse = mocks[req.method][req.url];
    if (mockedResponse) {
      res.writeHead(mockedResponse.responseCode, mockedResponse.headers);
      res.write(mockedResponse.body);
      res.end();
    } else {
      next();
    }
  };
}

We can now create more advanced mocks:

curl -X POST \
    -H "mock-method: DELETE" \
    -H "mock-response: 403" \
    -H "Content-type: application/json" \
    -H "mock-header-Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT" \
    -d '{"error": "Not authorized"}' http://localhost:9000/mock/api/user

curl -D - -X DELETE http://localhost:9000/api/user
HTTP/1.1 403 Forbidden
Content-Type: application/json
last-modified: Tue, 15 Nov 1994 12:45:26 GMT
Date: Wed, 18 Jun 2014 13:39:30 GMT
Connection: keep-alive
Transfer-Encoding: chunked

{"error": "Not authorized"}

Since we thought this would be useful for other developers, we decided to make all this available as an open source library on GitHub and NPM.

To add this to your project, just install with npm:

npm install mock-rest-request --save-dev

And of course add it to your middleware configuration:

middleware: function (connect, options) {
  var mockRequests = require('mock-rest-request');
  return [
    mockRequests(),
    
    connect.static('.tmp'),
    connect().use(
      '/bower_components',
      connect.static('./bower_components')
    ),
    connect.static(appConfig.app)
  ];
}
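
Besides curl, you can also register mocks programmatically, for example from a Protractor or integration test before the browser loads a page. Below is a minimal, hypothetical helper using Node's built-in http module; the function name and paths are only examples:

// Hypothetical helper: register a mock response before a test navigates to a page.
var http = require('http');

function setMock(path, body, method, status, done) {
  var request = http.request({
    hostname: 'localhost',
    port: 9000,
    path: '/mock' + path,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'mock-method': method || 'GET',
      'mock-response': String(status || 200)
    }
  }, function (response) {
    response.resume();            // drain the empty response body
    response.on('end', done);     // mock is registered once the server replies
  });
  request.end(JSON.stringify(body));
}

// Example: make GET /api/user return a user without a name.
setMock('/api/user', { id: 1 }, 'GET', 200, function () {
  // browser.get('/#/profile'); ...assertions go here
});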

Software Development Conferences Forecast June 2014

From the Editor of Methods & Tools - Thu, 06/26/2014 - 07:22
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine. AGILE2014, July 28 – August 1, Orlando, USA Agile on the Beach, September 4-5 2014, Falmouth in Cornwall, UK SPTechCon, September 16-19 2014, Boston, USA STARWEST, October 12-17 2014, Anaheim, USA JAX London, October 13-15 2014, London, UK Pacific Northwest ...

What do you do when inertia wins?

At the end of a race inertia might not be enough!

Audio Version on SPaMCAST 197

Changing how any organization works is not easy.  Many different moving parts have to come together for a change to take root and build up enough inertia to pass the tipping point. Unfortunately, because of misalignment, misunderstanding or poor execution, change programs don’t always win the day.  This is not news to most of us in the business.  The question I pose is: what should happen after a process improvement program fails?  What happens when the wrong kind of inertia wins?

Step One:  All failures must be understood.

A critical review of the failed program that focuses on why and how it failed must be performed.  The word critical is important.  Nothing should be sugar coated or "spun" to protect people’s feelings.  A critical review must also have a good dose of independence from those directly involved in the implementation.  Independence is required so that the biases and decisions that led to the original program can be scrutinized.  The goal is not to pillory those involved but rather to make sure the same mistakes are not repeated.  These reviews are known by many names: postmortems, retrospectives or troubled project reviews, to name a few.

Step Two:  Determine which way the organization is moving.

Inertia describes why an object in motion tends to stay in motion and an object at rest tends to stay at rest.  Energy is required to change the state of any object or organization; therefore, understanding the direction of the organization is critical to planning any change. In process improvement programs, we call the application of energy change management.  A change management program might include awareness building, training, mentoring or a myriad of other events, all designed to inject energy into the system; the goal of that energy is either to amplify or to change the performance of some group within an organization.  When too little or too much energy is applied, the process change will fail.

Just because a change has failed does not mean all is lost.  I would suggest that there are two possible outcomes to a failure. The first is that the original position is reinforced, making change even more difficult.  The second is that the target group has been pushed into moving, maybe not all the way to where they should be, or even in the right direction, but the original inertia has been broken.

Frankly, both outcomes happen.  If the failure is such that no good comes of it, then your organization will be mired in the muck of living off past performance.  This is similar to what happens when a car gets stuck in snow or sand and digs itself in.  The second scenario is more positive, and while the goal was not attained, the organization has begun to move, making further change easier.  I return to the car stuck in the snow example.  A technique that is taught to many of us who live in snowy climates is "rocking." Rocking is used to get a car stuck in snow moving back and forth.  Movement increases the odds that you will be able to break free and get going in the right direction.  Interestingly, the recognition of movement is a powerful sales technique taught in the Sandler Sales System.

Step Three:  Take smaller bites!

The lean startup movement provides a number of useful concepts that can be used when changing any organization.  In the Software Process and Measurement Cast 196, Jeff Anderson talked in detail about leveraging the concepts of lean start-ups within change programs (Link to SPaMCAST 196).  In this essay, I suggest using the concept of minimum viable changes to build a backlog of manageable changes.  The backlog should be groomed and prioritized by a product owner (or owners) from the area being impacted by the change.  This will increase ownership and involvement and generate buy-in.  Once you have a prioritized backlog, make the changes in a short time-boxed manner while involving those being impacted in measuring the value delivered.  Stop doing things if they are not delivering value and go to the next change.

What do you do when inertia wins? Being a change agent is not easy, and no one succeeds all the time unless they are not taking any risks.  Learn from your mistakes and successes.  Understand the direction the organization is moving and use that movement as an asset to magnify the energy you apply. Involve those you are asking to change in building a backlog of prioritized minimum viable changes (mixing the concept of a backlog with concepts from the lean start-up movement).  Make changes based on how those who are impacted prioritize the backlog, then stand back to observe and measure.  Finally, pivot (change direction) if necessary.  Always remember that the goal is not really the change itself but rather demonstrable business value. Keep pushing until the organization is going in the right direction.  What do you do when inertia wins?  My mother would have said just get back up, dust yourself off and get back in the game; it isn’t that easy, but it is not that much more complicated.


Categories: Process Management

How Agile accelerates your business

Xebia Blog - Wed, 06/25/2014 - 10:11

This drawing explains how agility accelerates your business. It is free to use and distribute. Should you have any questions regarding the subjects mentioned, feel free to get in touch.