
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools

Subscribe to Methods & Tools
if you are not afraid to read more than one page to be a smarter software developer, software tester or project manager!

Process Management

Agile User Acceptance Testing Spans The Entire Product Development Life Cycle

Agile re-defines acceptance testing as a “formal description of the behavior of a software product[1].”

User acceptance testing (UAT) is a process that confirms that the output of a project meets the business needs and requirements. Classically, UAT would happen at the end of a project or release. Agile spreads UAT across the entire product development life cycle, re-defining acceptance testing as a “formal description of the behavior of a software product[1].” By redefining acceptance testing as a description of what the software does (or is supposed to do) that can be proved (the testing part), Agile makes acceptance testing more important than ever by making it integral across the entire Agile life cycle.

Agile begins the acceptance testing process as requirements are being discovered. Acceptance tests are developed as part of the requirements life cycle in an Agile project because acceptance test cases are a form of requirements in their own right. The acceptance tests are part of the overall requirements, adding depth and granularity to the brevity of the classic user story format (persona, goal, benefit). Just like user stories, there is often a hierarchy of granularity from an epic to a user story. The acceptance tests that describe a feature or epic need to be decomposed in lock step with the decomposition of features and epics into user stories. Institutionalizing the process of generating acceptance tests at the feature and epic level, and then breaking the stories and acceptance test cases down as part of grooming, is a mechanism to synchronize scaled projects (we will dive into greater detail on this topic in a later entry).

As stories are accepted into sprints and development begins, acceptance test cases become a form of executable specifications. Because the acceptance test describes what the user wants the system to do, the functionality of the code can be compared to the expected outcome of the acceptance test case to guide the developer.
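To make the idea of an executable specification concrete, here is a hypothetical sketch; the function name, the discount rule, and the threshold are invented for illustration, not taken from the article:

```javascript
// Hypothetical user story: "As a shopper, I want a discount on large
// orders so that big purchases cost less." The rule below (10% off at
// a total of 100 or more) is an invented example.
function discountedTotal(orderTotal) {
  return orderTotal >= 100 ? orderTotal * 0.9 : orderTotal;
}

// Acceptance criteria expressed as executable checks that guide the developer:
if (discountedTotal(100) !== 90) throw new Error("orders of 100 should get 10% off");
if (discountedTotal(99) !== 99) throw new Error("orders under 100 should pay full price");
console.log("acceptance checks passed");
```

When the same checks run green at the end of the sprint, they double as the proof of completion that the demo needs.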

When development of user stories is done, the acceptance test cases provide a final feedback step to prove completion. The output of acceptance testing is a reflection of functional testing that can be replicated as part of the demo process. Typically, acceptance test cases are written by users (often Product Owners or subject matter experts) and reflect what the system is supposed to do for the business. Ultimately, acceptance testing provides proof to the user community that the team (or teams) are delivering what is expected.

As one sprint follows another, the acceptance test cases from earlier sprints are often recast as functional regression test cases in later sprints.

Agile user acceptance testing is a direct reflection of functional specifications that guide coding, provide a basis for demos and, finally, ensure that later changes don’t break functions that were developed and accepted in earlier sprints. UAT in an Agile project is more rigorous and timely than the classic end-of-project UAT found in waterfall projects.

[1] http://guide.agilealliance.org/guide/acceptance.html, September 2015


Categories: Process Management

Budget When You Can’t Estimate

Mike Cohn's Blog - Tue, 09/01/2015 - 15:00

I've written before that we should only estimate if having the estimate will change someone's actions. Normally, this means that the team should estimate work only if having the estimate will either:

  • enable the product owner to better prioritize the product backlog, or
  • let the product owner answer questions about when some amount of functionality can be available.

But, even if we only ask for estimates at these times, there will be some work that the team finds extremely hard or nearly impossible to estimate. Let's look at the best approach when an agile team is asked to estimate work they don't think they can reasonably estimate. But let's start with some of the reasons why an agile team may not be able to estimate well.

Why a Team Might Not Be Able to Estimate Well

This might occur for a variety of reasons. It happens frequently when a team first begins working in a new domain or with new technology.

Take a set of highly experienced team members who have worked together for years building, let's say, healthcare applications for the Web. And then move them to building banking applications for mobile devices. They can come up with some wild guesses about that new mobile banking app. But that's about all that estimate would be--a wild guess.

Similarly, team members can have a hard time estimating when they aren't really much of a team yet. If seven people are thrown together today for the first time and immediately asked to estimate a product backlog, their estimates will suck. Individuals won't know who has which skills (as opposed to who says they have those skills). They won't know how well they collaborate, and so on. After such a team works together for perhaps a few months, or maybe even just a few weeks, the quality of their estimates should improve dramatically.

There can also be fundamental reasons why an estimate is hard to produce. Some work is hard to estimate. For example, how long until cancer is cured? Or, how long will it take to design (or architect) this new system? Teams hate being asked to estimate this kind of work. In some cases, it's done when it's done, as in the cancer example. In other cases, it's done when time runs out, as in the design or architecture example.

Budgeting Instead of Estimating

In cases like these, the best thing is to approach the desire for an estimate from a different direction. Instead of asking a team, "How long will this take?" the business thinks, "How much time is this worth?" In effect, what this does is create a budget for the feature, rather than an estimate. Creating a budget for a feature is essentially applying the agile practice of timeboxing. The business says, "This feature is worth four iterations to us." In a healthy, mature agile organization, the team should be given a chance to say if they think they can do a reasonable job in that amount of time. If the team does not think it can build something the business will value in the budgeted amount of time, a discussion should ensue. What if the budget were expanded by another sprint or two? Can portions of the feature be considered optional so that the mandatory features are delivered within the budget? Establishing a budget frames this type of discussion.

What Do You Think?

What do you think of this approach? Are there times when you'd find it more useful to put a budget or timebox on a feature rather than estimate it? Have you done this in the past? I'd love to know your thoughts in the comments below.

Making Amazon ECS Container Service as easy to use as Docker run

Xebia Blog - Mon, 08/31/2015 - 20:52

One of the reasons Docker caught fire was that it was so easy to use. You could build and start a Docker container in a matter of seconds. With Amazon ECS this is not so. You have to learn a whole new lingo (Clusters, Task Definitions, Services and Tasks), spin up an ECS cluster, write a nasty looking JSON file or wrestle with a not-so-user-friendly UI before you have your container running in ECS.

In this blog we will show you that Amazon ECS can be just as fast, by presenting a small utility named ecs-docker-run which allows you to start a Docker container almost as quickly as with stand-alone Docker by interpreting the Docker run command line options. Together with a ready-to-run CloudFormation template, you can be up and running with Amazon ECS within minutes!

ECS Lingo

Amazon ECS uses different lingo than Docker, which causes confusion. Here is a short translation:

- Cluster - one or more Docker hosts.
- Task Definition - a JSON representation of a docker run command line.
- Task - a running Docker instance. When the instance stops, the task is finished.
- Service - a running Docker instance that is restarted whenever it stops.

Basically, that is all there is to it (cutting a few corners and skimping on a number of details).

Once you know this, you are ready to use ecs-docker-run.

ECS Docker Run

ecs-docker-run is a simple command line utility to run docker images on Amazon ECS. To use this utility you can simply type something familiar like:

ecs-docker-run \
        --name paas-monitor \
        --env SERVICE_NAME=paas-monitor \
        --env SERVICE_TAGS=http \
        --env "MESSAGE=Hello from ECS task" \
        --env RELEASE=v10 \
        -P  \
        mvanholsteijn/paas-monitor:latest

substituting 'docker run' with 'ecs-docker-run'.

Under the hood, it will generate a task definition and start a container as a task on the ECS cluster. All of the following Docker run command line options are functionally supported.

- -P - publishes all ports by pulling and inspecting the image.
- --name - the family name of the task. If unspecified, the name will be derived from the image name.
- -p - adds a port publication to the task definition.
- --env - sets an environment variable.
- --memory - sets the amount of memory to allocate, defaults to 256.
- --cpu-shares - sets the cpu shares to allocate, defaults to 100.
- --entrypoint - changes the entrypoint for the container.
- --link - sets a container link.
- -v - sets the mount points for the container.
- --volumes-from - sets the volumes to mount.

All other Docker options are ignored as they refer to possibilities NOT available to ECS containers. The following options are added, specific to ECS:

- --generate-only - only generates the task definition on standard output, without starting anything.
- --run-as-service - runs the task as a service; ECS will ensure that 'desired-count' tasks keep running.
- --desired-count - specifies the number of tasks to run (default = 1).
- --cluster - the ECS cluster on which to run the task or service (default = cluster).

Hands-on!

In order to proceed with the hands-on part, you need to have:

- jq installed
- aws CLI installed (version 1.7.44 or higher)
- aws connectivity configured
- docker connectivity configured (to a random Docker daemon).

Check out ecs-docker-run

Get the ecs-docker-run sources by typing the following command:

git clone git@github.com:mvanholsteijn/ecs-docker-run.git
cd ecs-docker-run/ecs-cloudformation

Import your SSH key pair

To look around on the ECS Cluster instances, import your public key into Amazon EC2, using the following command:

aws ec2 import-key-pair \
          --key-name ecs-$USER-key \
          --public-key-material  "$(ssh-keygen -y -f ~/.ssh/id_rsa)"

Create the ECS cluster autoscaling group

In order to create your first cluster of 6 Docker hosts, type the following command:

aws cloudformation create-stack \
        --stack-name ecs-$USER-cluster \
        --template-body "$(<ecs.json)"  \
        --capabilities CAPABILITY_IAM \
        --parameters \
                ParameterKey=KeyName,ParameterValue=ecs-$USER-key \
                ParameterKey=EcsClusterName,ParameterValue=ecs-$USER-cluster

This cluster is based upon the firstRun CloudFormation definition, which is used when you follow the Amazon ECS wizard.

And wait for completion...

Wait for completion of the cluster creation, by typing the following command:

function waitOnCompletion() {
        STATUS=IN_PROGRESS
        while expr "$STATUS" : '^.*PROGRESS' > /dev/null ; do
                sleep 10
                STATUS=$(aws cloudformation describe-stacks \
                               --stack-name ecs-$USER-cluster | jq -r '.Stacks[0].StackStatus')
                echo $STATUS
        done
}

waitOnCompletion
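A note on that loop: it keeps polling as long as the reported stack status still matches '^.*PROGRESS' (e.g. CREATE_IN_PROGRESS, UPDATE_IN_PROGRESS). The expr test can be tried in isolation, independent of AWS:

```shell
# expr prints the length of the match and exits 0 when something matched,
# which is exactly what the while-condition in waitOnCompletion relies on.
expr "CREATE_IN_PROGRESS" : '^.*PROGRESS' > /dev/null && echo "still waiting"
expr "CREATE_COMPLETE" : '^.*PROGRESS' > /dev/null || echo "done"
```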
Create the cluster

Unfortunately, CloudFormation does not yet allow you to specify the ECS cluster name, so you need to create the ECS cluster manually, by typing the following command:

aws ecs create-cluster --cluster-name ecs-$USER-cluster

You can now manage your hosts and tasks from the Amazon AWS EC2 Container Services console.

Run the paas-monitor

Finally, you are ready to run any docker image on ECS! Type the following command to start the paas-monitor.

../bin/ecs-docker-run --run-as-service \
                        --number-of-instances 3 \
                        --cluster ecs-$USER-cluster \
                        --env RELEASE=v1 \
                        --env MESSAGE="Hello from ECS" \
                        -p :80:1337 \
                        mvanholsteijn/paas-monitor

Get the DNS name of the Elastic Load Balancer

To see the application in action, you need to obtain the DNS name of the Elastic Load Balancer. Type the following commands:

# Get the Name of the ELB created by CloudFormation
ELBNAME=$(aws cloudformation describe-stacks --stack-name ecs-$USER-cluster | \
                jq -r '.Stacks[0].Outputs[] | select(.OutputKey =="EcsElbName") | .OutputValue')

# Get the DNS name of that ELB
DNSNAME=$(aws elb describe-load-balancers --load-balancer-names $ELBNAME | \
                jq -r .LoadBalancerDescriptions[].DNSName)

Open the application

Finally, we can obtain access to the application.

open http://$DNSNAME

And it should look something like this:

host | release | message | # of calls | avg response time | last response time
b6ee7869a5e3:1337 | v1 | Hello from ECS from release v1; server call count is 82 | 68 | 45 | 36
4e09f76977fe:1337 | v1 | Hello from ECS from release v1; server call count is 68 | 68 | 41 | 38
65d8edd41270:1337 | v1 | Hello from ECS from release v1; server call count is 82 | 68 | 40 | 37

Perform a rolling upgrade

You can now perform a rolling upgrade of your application, by typing the following command while keeping your web browser open at http://$DNSNAME:

../bin/ecs-docker-run --run-as-service \
                        --number-of-instances 3 \
                        --cluster ecs-$USER-cluster \
                        --env RELEASE=v2 \
                        --env MESSAGE="Hello from Amazon EC2 Container Services" \
                        -p :80:1337 \
                        mvanholsteijn/paas-monitor

The result should look something like this:

host | release | message | # of calls | avg response time | last response time
b6ee7869a5e3:1337 | v1 | Hello from ECS from release v1; server call count is 124 | 110 | 43 | 37
4e09f76977fe:1337 | v1 | Hello from ECS from release v1; server call count is 110 | 110 | 41 | 35
65d8edd41270:1337 | v1 | Hello from ECS from release v1; server call count is 124 | 110 | 40 | 37
ffb915ddd9eb:1337 | v2 | Hello from Amazon EC2 Container Services from release v2; server call count is 43 | 151 | 9942 | 38
8324bd94ce1b:1337 | v2 | Hello from Amazon EC2 Container Services from release v2; server call count is 41 | 41 | 41 | 38
7b8b08fc42d7:1337 | v2 | Hello from Amazon EC2 Container Services from release v2; server call count is 41 | 41 | 38 | 39

Note how the rolling upgrade is a bit crude. The old instances stop receiving requests almost immediately, while all requests seem to be loaded onto the first new instance.

You do not like the ecs-docker-run script?

If you do not like the ecs-docker-run script, do not despair. Below are the equivalent Amazon ECS commands to do it without the hocus-pocus script...

Create a task definition

This is the most difficult part: manually create a task definition file called 'manual-paas-monitor.json' with the following content:

{
  "family": "manual-paas-monitor",
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 1337
        }
      ],
      "command": [],
      "environment": [
        {
          "name": "RELEASE",
          "value": "v3"
        },
        {
          "name": "MESSAGE",
          "value": "Native ECS Command Line Deployment"
        }
      ],
      "links": [],
      "mountPoints": [],
      "essential": true,
      "memory": 256,
      "name": "paas-monitor",
      "cpu": 100,
      "image": "mvanholsteijn/paas-monitor"
    }
  ],
  "volumes": []
}

Register the task definition

Before you can start a task it has to be registered at ECS, by typing the following command:

aws ecs register-task-definition --cli-input-json "$(<manual-paas-monitor.json)"

Start a service

Now start a service based on this definition, by typing the following command:

aws ecs create-service \
     --cluster ecs-$USER-cluster \
     --service-name manual-paas-monitor \
     --task-definition manual-paas-monitor:1 \
     --desired-count 1

You should see a new row appear in your browser:

host | release | message | # of calls | avg response time | last response time
...
5ec1ac73100f:1337 | v3 | Native ECS Command Line Deployment from release v3; server call count is 37 | 37 | 37 | 36

Conclusion

Amazon EC2 Container Services has a higher learning curve than using plain Docker. You need to get past the lingo, the creation of an ECS cluster on Amazon EC2 and, most importantly, the creation of the cumbersome task definition file. After that it is almost as easy to use as docker run.

In return you get all the goodies from Amazon like Autoscaling groups, Elastic Load Balancers and multi-availability zone deployments ready to use in your Docker applications. So, check ECS out!

More Info

Check out more information:

Software Development Conferences Forecast August 2015

From the Editor of Methods & Tools - Mon, 08/31/2015 - 14:53
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & […]

Unlocking ES2015 features with Webpack and Babel

Xebia Blog - Mon, 08/31/2015 - 09:53

This post is part of a series of ES2015 posts. We'll be covering new JavaScript functionality every week for the coming two months.

After being in the working draft state for a long time, the ES2015 (formerly known as ECMAScript 6, or ES6 for short) specification reached a definitive state a while ago. For a long time now, BabelJS, a Javascript transpiler formerly known as 6to5, has been available for developers who would like to use ES2015 features in their projects today.
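To make the transpiling concrete, here is a hand-written illustration (not literal Babel output) of the kind of rewrite it performs: an ES2015 arrow function with a template literal becomes plain ES5:

```javascript
// ES2015 source, as you would write it:
const greet = (name) => `Hello, ${name}!`;

// Roughly what a transpiler emits in ES5 (hand-written illustration,
// not actual Babel output):
var greetES5 = function (name) {
  return 'Hello, ' + name + '!';
};

// Both behave identically:
if (greet('visitor') !== greetES5('visitor')) throw new Error("mismatch");
console.log(greet('visitor')); // Hello, visitor!
```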

In this blog post I will show you how you can integrate Webpack, a Javascript module builder/loader, with Babel to automate the transpiling of ES2015 code to ES5. Besides that, I'll also explain how to automatically generate source maps to ease development and debugging.

Webpack Introduction

Webpack is a Javascript module builder and module loader. With Webpack you can pack a variety of different modules (AMD, CommonJS, ES2015, ...) with their dependencies into static file bundles. Webpack provides you with loaders which essentially allow you to pre-process your source files before requiring or loading them. If you are familiar with tools like Grunt or Gulp you can think of loaders as tasks to be executed before bundling sources. To make your life even easier, Webpack also comes with a development server with file watch support and browser reloading.

Installation

In order to use Webpack all you need is npm, the Node Package Manager, available by downloading either Node or io.js. Once you've got npm up and running, all you need to do to set up Webpack globally is install it using npm:

npm install -g webpack

Alternatively, you can include it just in the projects of your preference using the following command:

npm install --save-dev webpack

Babel Introduction

With Babel, a Javascript transpiler, you can write your code using ES2015 (and even some ES7 features) and convert it to ES5 so that well-known browsers will be able to interpret it. On the Babel website you can find a list of supported features and how you can use these in your project today. For the React developers among us, Babel also comes with JSX support out of the box.

Alternatively, there is the Google Traceur compiler which essentially solves the same problem as Babel. There are multiple Webpack loaders available for Traceur of which traceur-loader seems to be the most popular one.

Installation

Assuming you already have npm installed, installing Babel is as easy as running:

npm install --save-dev babel-loader

This command will add babel-loader to your project's package.json. Run the following command if you prefer installing it globally:

npm install -g babel-loader

Project structure
webpack-babel-integration-example/
  src/
    DateTime.js
    Greeting.js
    main.js
  index.html
  package.json
  webpack.config.js

Webpack's configuration can be found in the root directory of the project, named webpack.config.js. The ES6 Javascript sources that I wish to transpile to ES5 will be located under the src/ folder.

Webpack configuration

The required Webpack configuration file is very straightforward, covering a few aspects:

  • my main source entry
  • the output path and bundle name
  • the development tools that I would like to use
  • a list of module loaders that I would like to apply to my source
var path = require('path');

module.exports = {
  entry: './src/main.js',
  output: {
    path: path.join(__dirname, 'build'),
    filename: 'bundle.js'
  },
  devtool: 'inline-source-map',
  module: {
    loaders: [
      {
        test: path.join(__dirname, 'src'),
        loader: 'babel-loader'
      }
    ]
  }
};

The snippet above shows you that my source entry is set to src/main.js, the output is set to create a build/bundle.js, I would like Webpack to generate inline source maps and I would like to run the babel-loader for all files located in src/.

ES6 sources

A simple ES6 class

Greeting.js contains a simple class with only the toString method implemented to return a String that will greet the user:

class Greeting {
  toString() {
    return 'Hello visitor';
  }
}

export default Greeting

Using packages in your ES2015 code

Often enough, you rely on a bunch of different packages that you include in your project using npm. In my example, I'll use the popular date time library called Moment.js. In this example, I'll use Moment.js to display the current date and time to the user.

Run the following command to install Moment.js as a local dependency in your project:

npm install --save-dev moment

I have created the DateTime.js class which again only implements the toString method to return the current date and time in the default date format.

import moment from 'moment';

class DateTime {
  toString() {
    return 'The current date time is: ' + moment().format();
  }
}

export default DateTime

After importing the package using the import statement you can use it anywhere within the source file.

Your main entry

In the Webpack configuration I specified a src/main.js file to be my source entry. In this file I simply import both classes that I created, I target different DOM elements and output the toString implementations from both classes into these DOM objects.

import Greeting from './Greeting.js';
import DateTime from './DateTime.js';

var h1 = document.querySelector('h1');
h1.textContent = new Greeting();

var h2 = document.querySelector('h2');
h2.textContent = new DateTime();

HTML

After setting up my ES2015 sources that will display the greeting in an h1 tag and the current date time in an h2 tag, it is time to set up my index.html. It is a straightforward HTML file; the only really important thing is that the script tag points to the transpiled bundle file, in this example build/bundle.js.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Webpack and Babel integration example</title>
</head>
<body>

<h1></h1>

<h2></h2>

    <script src="build/bundle.js"></script>
</body>
</html>

Running the application

In this example project, running my application is as simple as opening the index.html in your favorite browser. However, before doing this you will need to instruct Webpack to actually run the loaders and thus transpile your sources into the build/bundle.js required by the index.html.

You can run Webpack in watch mode, meaning that it will monitor your source files for changes and automatically run the module loaders defined in your configuration. Execute the following command to run in watch mode:

webpack --watch

If you are using my example project from Github (link at the bottom), you can also use the following script which I've set up in the package.json:

npm run watch

Easier debugging using source maps

Debugging transpiled ES5 is a huge pain which will make you want to go back to writing ES5 without thinking. To ease development and debugging of ES2015 I can rely on source maps generated by Webpack. While running Webpack (normal or in watch mode) with the devtool property set to inline-source-map you can view the ES2015 source files and actually place breakpoints in them using your browser's development tools.

Debugging ES6 with source maps

Running the example project with a breakpoint inside the DateTime.js toString method using the Chrome developer tools.

Conclusion

As you've just seen, setting up everything you need to get started with ES2015 is extremely easy. Webpack is a great utility that will allow you to easily set up your complete front-end build pipeline and seamlessly integrates with Babel to include code transpiling into the build pipeline. With the help of source maps even debugging becomes easy again.

Sample project

The entire sample project as introduced above can be found on Github.

SPaMCAST 357 – Mind Mapping, Who Needs Architects, TameFlow Chapter 4

Software Process and Measurement Cast - Sun, 08/30/2015 - 22:00

The Software Process and Measurement Cast features our essay on Mind Mapping. I view Mind Mapping as a great technique to stimulate creative thinking and organize ideas; it is one of my favorite tools. Note: this essay has a lot of visual examples; while they are explained in the reading of the essay, I have also included them in a separate document for reference.

If you want to reference the figures/examples in the essay, download this PDF: Mind Mapping Figures

We also feature Steve Tendon’s column discussing the TameFlow methodology and Chapter 4 of his great book, Hyper-Productive Knowledge Work Performance. Steve and I discussed management’s profound understanding of knowledge work and that understanding’s impact on the delivery of value.

Anchoring the cast will be Gene Hughson, returning with an entry from his Form Follows Function blog.  This week Gene discusses the ideas in his entry, “Who Needs Architects?” 

Call to Action!

For the remainder of August and September let’s try something a little different. Forget about iTunes reviews and tell a friend or a coworker about the Software Process and Measurement Cast. Word of mouth will help grow the audience for the podcast. After all, the SPaMCAST provides you with value, so why keep it to yourself?!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing.  This week we tackle the essay titled “Calling The Shot”!  Check out the new installment at Software Process and Measurement Blog.

Upcoming Events

Software Quality and Test Management 
September 13 – 18, 2015
San Diego, California
http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams.  Let me know if you are attending! If you are still deciding on attending let me know because I have a discount code.

 

Agile Development Conference East
November 8-13, 2015
Orlando, Florida
http://adceast.techwell.com/

I will be speaking on November 12th on the topic of Agile Risk. Let me know if you are going and we will have a SPaMCAST Meetup.

Next SPaMCAST

The next Software Process and Measurement Cast will feature our interview with Julia Wester. Julia is an Improvement Coach at LeanKit and writes the Every Kanban Blog. Julia and I had a wide-ranging conversation that included both process improvement and the role of management in Agile. Julia’s opinion is that management can play a constructive role in helping Agile teams deliver the most value.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


Xebia KnowledgeCast Episode 6: Lodewijk Bogaards on Stackstate and TypeScript

Xebia Blog - Sun, 08/30/2015 - 12:00

The Xebia KnowledgeCast is a podcast about software architecture, software development, lean/agile, continuous delivery, and big data.

In this 6th episode, we switch to a new format! So, no fun with stickies this time. It’s one interview. And we dive in deeper than ever before.

Lodewijk Bogaards is co-founder and CTO at Stackstate. Stackstate is an enterprise application that provides real-time insight into the run-time status of all business processes, applications, services and infrastructure components, their dependencies and states.

It gives immediate insight into the cause and impact of any change or failure, from the hardware level to the business process level and everything in between. To build awesome apps like that, Lodewijk and his team use a full stack of technologies that they continue to improve. One such improvement is ditching plain JavaScript in favor of TypeScript and Angular, and it is that topic that we'll focus on in today's discussion.

What's your opinion on TypeScript? I'm looking forward to reading your comments!

Links:

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or Stitcher.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!


Calling The Shot Re-Read Saturday: The Mythical Man-Month, Part 8

The Mythical Man-Month


In the seventh essay of The Mythical Man-Month, Fred P. Brooks begins to tackle the concept of estimating. While there are many estimating techniques, Brooks’ approach is a history/data-based approach, which we would understand today as parametric estimation. Parametric estimation is generally a technique that generates a prediction of the effort needed to deliver a project based on historical data about productivity, staffing and quality. Estimating is not a straightforward extrapolation of what has happened in the past to what will happen in the future, and mistaking it as such is fraught with potential issues. Brooks identified two potentially significant estimating errors that can occur when you use the past to predict the future without interpretation.
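As a deliberately simplified illustration of the parametric idea, here is a sketch of an effort model of the classic power-law form; the model shape is COCOMO-like and the coefficients are invented for this example, not taken from Brooks:

```python
# Minimal parametric estimate: effort = A * size^B, where A and B are
# calibration constants fit from an organization's historical projects.
# The values below are illustrative, not real calibration data.
A, B = 3.0, 1.12

def estimate_effort(size_ksloc):
    """Predict person-months of effort from size in thousands of lines of code."""
    return A * size_ksloc ** B

# Because B > 1, effort grows faster than size: doubling the size
# more than doubles the predicted effort.
print(estimate_effort(50) * 2 < estimate_effort(100))  # True
```

The point of the exponent being greater than 1 is exactly Brooks' point: history encodes the non-linearity that naive extrapolation misses.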

Often the only data available is information about one part of the project’s life cycle. The first issue Brooks identified was that you cannot estimate the entire job or project by estimating just the coding and inferring the rest. There are many variables that might affect the relationship between development and testing. For example, some changes can impact more of the code than others, requiring more or less regression testing. The relationship between the effort required to deliver different types of work is not linear. The ability to estimate based on history requires knowledge of project-specific practices and attributes, including competency, complexity and technical constraints.

Not all projects are the same. The second issue Brooks identified was that data from one type of project is not applicable for predicting another. Brooks used the differences between small projects and programming systems products to illustrate his point. Each type of work requires different activities, not just scaled-up versions of the same tasks. Similarly, consider the differences in the tasks and activities required for building a smartphone app compared to building a large data warehouse application. Simply put, they are radically different. Brooks drove the point home using the analogy of extrapolating the record time for the 100-yard dash (9.07 seconds, according to Wikipedia) to the time needed to run a mile. Linear extrapolation suggests a mile (1,760 yards) could be run in roughly 2 minutes 40 seconds; the current record is 3:43.13.
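Brooks’ analogy is easy to check with a few lines of arithmetic, using the times cited above:

```python
# Linear extrapolation of the 100-yard dash record to the mile --
# Brooks' illustration that performance does not scale linearly.
dash_yards, dash_seconds = 100, 9.07   # record time cited in the essay
mile_yards = 1760

# Naive assumption: the sprinter's pace can be held for the whole mile.
predicted = dash_seconds * mile_yards / dash_yards   # 159.6 s, about 2:40

actual = 3 * 60 + 43.13                              # mile record, 3:43.13

print(f"predicted {predicted:.1f}s vs actual {actual:.2f}s")
# The naive estimate is off by more than a minute.
```

The same over-optimism appears when a small project's productivity is extrapolated to a programming systems product.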

A significant portion of this essay is a review of a number of studies that illustrate the relationship between work done and the estimate. Brooks used these studies to highlight different factors that could impact the ability to extrapolate what has happened in the past into an estimate of the future (note: I infer from the descriptions that these studies dealt with the issues of completeness and relevance). The surveys, listed by the person who generated the data, and the conclusions we can draw from the data include:

  1. Charles Portman’s Data – Slippages occurred primarily because only half the time available was productive. Unrelated jobs, meetings, paperwork, downtime, vacations and other non-productive tasks used the remainder.
  2. Joel Aron’s Data – Productivity was negatively related to the number of interactions among programmers. As the number of interactions goes up, productivity goes down.
  3. John Harr’s Data – The variation between estimates and actuals tends to be affected by the size of workgroups, the length of time and the number of modules. The complexity of the program being worked on could also be a contributor.
  4. OS/360 Data – Confirmed the striking differences in productivity driven by the complexity and difficulty of the task.
  5. Corbató’s Data – Programming languages affect productivity. Higher-level languages are more productive. Said a little differently, writing a function in Ruby on Rails requires less time than writing the same function in macro assembler language.

I believe that the surveys and data discussed are less important than the recognition that there are many factors that must be addressed when trying to predict the future. In the end, estimation requires relevant historical data regardless of method. Relevance is shorthand for accounting for the factors that affect the type of work you are doing. In homogeneous environments, complexity and language may not be as big a determinant of productivity as the number of interactions driven by team size or the amount of non-productive time teams have to absorb. The problem with historical data is that gathering it requires effort, time and/or money.  The need to expend resources to generate, collect or purchase historical data is often used as a bugaboo to resist collecting the data and as a tool to avoid using parametric or historical estimating techniques.

Recognize that the term historical data should not scare you away.  Historical data can be as simple as a Scrum team collecting its velocity or productivity every sprint and using it to calculate an average for planning and estimating. Historical data can be as complex as a palette of information including project effort, size, duration, team capabilities and project context.
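At the simple end of that spectrum, the arithmetic looks like this (the sprint and backlog numbers are invented for illustration):

```python
import math

# The simplest form of historical data: a Scrum team's velocity history,
# averaged for planning and estimating.
velocities = [21, 18, 24, 20, 22]    # story points completed per sprint
average_velocity = sum(velocities) / len(velocities)   # 21.0

# Forecast how many sprints a 110-point backlog will take at that pace.
backlog_points = 110
sprints_needed = math.ceil(backlog_points / average_velocity)

print(average_velocity, sprints_needed)  # 21.0 6
```

Even this tiny data set is "historical data" in Brooks' sense: it grounds the forecast in what the team has actually done rather than in wishful extrapolation.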

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word

Why did the Tower of Babel fall?


Categories: Process Management

Trying out the Serenity BDD framework; a report

Xebia Blog - Fri, 08/28/2015 - 12:47

“Serenity, that feeling you know you can trust your tests.” Sounds great, but I was thinking of Firefly first when I heard the name ‘Serenity’. In this case, we are talking about a framework you can use to automate your tests.

The selling points of this framework are that it integrates your acceptance tests (BDD) with reporting and acts like living documentation. It can also integrate with JIRA and all that jazz. Hearing this, I wasn’t ‘wowed’ per se. There are many tools out there that can do that. But Serenity doesn’t support just one approach. Although it heavily favours WebDriver/Selenium, you can also use JBehave, JUnit or Cucumber. That is really nice! 

Last weekend, at the Board of Agile Testers, we tried the framework with a couple of people. Our goal was to see if it’s really easy to set up, to see if the reports are useful and how easy it is to implement features. We used Serenity ‘Cucumber-style’ with Selenium/Webdriver (Java) and the Page Object Pattern.

Setup

It may go a little too far to say a totally non-technical person could set up the framework, but it was pretty easy. Using your favorite IDE, all you have to do is import a Maven archetype (we used a demo project) and all the Serenity dependencies are downloaded for you. We would recommend using at least Java 7; Java 6 gave us problems.

Using the tool

The demo project tests ran all right, but we noticed they were quite slow! The reason is probably that Serenity takes a screenshot at every step of your test. Thankfully, you can configure this setting.

At the end of each test run, Serenity generates an HTML report. This report looks really good! You get a general overview and can click on each test step to see the screenshots. There is also a link to the requirements and you can see the ‘coverage’ of your tests. I’m guessing they mean functional coverage here, since we’re writing acceptance tests.

Serenity Report

Writing our own tests

After we got a little overview of the tool we started writing our own tests, using a Calculator as the System Under Test. The Serenity specific Page Object stuff comes with the Maven archetype so the IDE could help you implement the tests. We tried to do it in a little TDD cycle. Run the test, let it fail and let the output give you a hint on how to implement the step definitions. Beyond the step definition you had to use your Java skills to implement the actual tests.

Conclusion

The tool is pretty developer oriented, there’s no denying that. The IDE integration is very good in my opinion. With the Community IntelliJ edition you have all the plugins you need to speed up your workflow. The reporting is indeed the most beautiful I have seen, personally. Would we recommend changing your existing framework to Serenity? Unless your test reports are shit: no. There is in fact a small downside to using this framework; for now there are only about 15 people who actively contribute. You are of course welcome to join in, but there is a risk in having only a small group of people actively improving it. If the support base grows, it will be a powerful framework for your automated tests and BDD cycle. For now, I'm going to play with the framework some more, because after using it for about 3 hours I think we only scratched the surface of its possibilities. 

 

You Are Not Agile . . . Hybrid Models Revisited

A hybrid


Being Agile is a lot easier than it was even a few years ago. However, there are still roadblocks, including lack of management buy-in, not changing the whole development life cycle, and only vaguely considering technical Agile practices. The discussions I have had with colleagues, readers of the blog and a professor at Penn State about roadblocks have generated a lot of passion over the relative value and usefulness of hybrids.

Classic software development techniques are a cascade beginning with requirements definition and ending with implementation. The work moves through that cascade in a manner similar to an auto assembly line. The product is not functional until it rolls off the end of the line. The model uses specialized labor doing specific tasks over and over. Henry Ford changed the automotive market with this technique.  However, applying this model to software development can cause all sorts of bad behaviors. Those bad behaviors are generally caused by a mismatch between linear work, like manufacturing, and work that is more dynamic and more oriented to research and development. One example of a bad behavior is the abuse of stage gates. Often teams continue working even when the method indicates they must wait for a stage gate decision. The problem is that if they wait for the decision they will never complete on time. Managers are ALWAYS aware of what is happening, but choose not to ask. It generally is not because anyone in the process is a bad player, but rather that the process is corrupt.

Agile is different. Agile stops the cascade by breaking work into smaller chunks. Those smaller pieces are then designed, developed, tested and delivered in short sprints (a regular cadence) in order to generate immediate feedback. The “new” process takes away the reason for running stage gate stop signs.

Hybrid models attempt to harvest the best pieces from the classic and Agile frameworks to fit the current organizational culture or structure. The process of making Agile (or classic methods, for that matter) fit into an organization optimized for classic methods is where issues generally creep in. The compromises required often stray from the assumptions built into the Agile values and principles.  Examples of the assumptions built into Agile include stable teams, self-management and delivering functional software at the end of every sprint. It is easier to wander away from those values and principles than to tackle changing the organization’s culture or structure. One of the most common (and damaging) compromises can be seen in organizations that continue the practice of leveraging dynamically staffed teams (often called matrix organizations). Stable teams often require rearranging organizational boundaries and a closer assessment of capabilities.

Typical organizational problems that, if not addressed, will lead organizations to generate classic/Agile hybrids include:

  1. Silos: Boundaries between groups require higher levels of planning and coordination to ensure the right people are available at the right time even when there are delays. Historically, large development organizations have included the role of scheduler/expediter in their organization chart to deal with these types of issues.
  2. Overly Lean: Many development organizations have suffered through years of cost cutting and have far fewer people than needed to complete the work they have committed to accomplishing. Personnel actively work on several projects at once to give the appearance of progress across a wide range of initiatives. Switching between tasks is highly inefficient, which reduces the overall value delivered, often leading to more cost-cutting pressure.
  3. Lack of Focus: Leaders in development organizations often feel the need to accept and start all projects that are presented. Anthony Mersino calls this the “say yes to everything syndrome.” This typically occurs in organizations without strong portfolio management in place. Ultimately, this means that people and teams need to multitask, leading to inefficiency.
  4. Lack of Automation: While I have never met a development practice that couldn’t be done on a small scale using pencil and paper, automation makes scale possible. For example, consider running several thousand regression tests by hand. In order to run the tests you would need either a significant duration or lots of people. Lots of people generally means more teams, more team boundaries, more hierarchy and more overhead – leading to the possibility of simply running fewer tests to meet a date or budget.
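The regression-testing arithmetic in the last point is easy to quantify; a back-of-the-envelope sketch (every number here is made up for illustration):

```python
# Rough cost of running a large regression suite by hand vs. automated.
tests = 3000
minutes_per_manual_test = 5      # illustrative manual execution time
testers = 4

manual_person_hours = tests * minutes_per_manual_test / 60   # 250.0
calendar_days = manual_person_hours / (testers * 8)          # ~7.8 working days

automated_hours = tests * 2 / 3600   # ~1.7 hours at ~2 seconds per test

print(f"manual: {manual_person_hours:.0f} person-hours "
      f"(~{calendar_days:.1f} days for {testers} testers); "
      f"automated: ~{automated_hours:.1f} hours")
```

Faced with roughly a week of manual effort per run, cutting the test count to hit a date becomes a very tempting (and very risky) compromise.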

The values and principles that underpin Agile really matter. They guide behavior so that it is more focused on delivering value quickly. The four values in the Agile Manifesto are presented as couplets. For example, in the first value, “Individuals and interactions over processes and tools,” the items on the left side are valued more than those on the right (even though those on the right still have value). Hybrid models often generate compromises that shift focus from the attributes on the left more toward the center and perhaps back to those attributes on the right. Hybrids are not evil or bad, but they are generally cop-outs if they wander away from basic Agile values and principles rather than addressing tough organizational issues.

Agree or disagree, your thoughts are important to guiding the conversation about what is and isn’t Agile.


Categories: Process Management

Software Development Linkopedia August 2015

From the Editor of Methods & Tools - Thu, 08/27/2015 - 14:44
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about commitment & estimation, better software testing, the dark side of metrics, scrum retrospectives, databases and Agile adoption. Blog: Against Estimate-Commitment Blog: Write Better Tests in 5 Steps Blog: Story Point […]

You Are Not Agile . . . If You Only Do Scrum

Being Agile is more than just Scrum or not being waterfall.


If you were a developer from the 1980s or early 1990s and traveled in time to attend most Agile conferences today, you would probably take away the idea that Agile was all about Scrum and kicking sand at project managers. Unfortunately MANY organizations have the same stilted perception. The perception that Agile is just another form of project management is EXACTLY wrong. If an organization only embraces the parts of Agile that address team organization and project control, it is not truly Agile. In order to deliver the most value from adopting and embracing Agile, organizations and teams must understand and adopt Agile technical practices. Just doing Scrum is not going to cut it if your goal is improving the quantity and quality of the functional software you deliver.

Tools, processes and techniques that facilitate the development, testing and implementation of functional software are technical practices. A very abridged list of technical practices includes:

  • User Stories
  • Story Mapping
  • Test Driven Development (including variants such as Behavior Driven Development)
  • Architecture
  • Technical Standards (including Coding Standards)
  • Refactoring
  • Automated Testing
  • Pair Programming
  • Continuous Integration

Implementing a combination of Scrum with Extreme Programming (XP) is one of the most common methods of addressing both the technical and management/control aspects of delivering functional software. XP is an Agile software development methodology originally conceived by Kent Beck and others in the late 1990s. There are many other techniques, frameworks and methodologies that support the technical side of Agile. These techniques directly impact how effectively and efficiently solutions are designed, coded and tested. As importantly, efficiency directly impacts the amount of value that it is possible to deliver.

You Are Not Agile . . . If You Do Waterfall described some of the attributes of organizations that get stuck transitioning to Agile from a process point of view. Similarly, many organizations begin an Agile transition by adopting Scrum, perhaps with a few techniques such as user stories; however, they never quite get around to addressing the technical components. The issues of the business analysts, developers and testers don’t get addressed.

Allan Kelly, in an interview on the Software Process and Measurement Cast 353, stated that if you are not “doing” the technical side of Agile, you are not Agile. This statement is probably somewhat harsh and perhaps a bit hyperbolic, but only a bit. Developing, enhancing and maintaining software (or any product) requires more than short iterations, stand-ups, retrospectives and demonstrations. Far be it from me to say those techniques don’t help; even alone they are certainly better than most waterfall practices, but they do not address the nuts and bolts of developing software.  In order to deliver the maximum value we have to change how we develop designs, write code and then prove that it works, because that is the true heart and soul of software development.

In the long run Agile has to address management buy-in, changing the whole development life cycle and the technical practices, or you won’t get the full value that Agile can deliver.

I would be interested in your thoughts!


Categories: Process Management

An Iterative Waterfall Isn’t Agile

Mike Cohn's Blog - Tue, 08/25/2015 - 16:43

I’ve noticed something disturbing over the past two years. And it’s occurred uniformly with teams I’ve worked with all across the world. It’s the tendency to create an iterative waterfall process and then to call it agile.

An iterative waterfall looks something like this: In one sprint, someone (perhaps a business analyst working with a product owner) figures out what is to be built. 

Because they’re trying to be agile, they do this with user stories. But rather than treating the user stories as short placeholders for future conversations, each user story becomes a mini-specification document, perhaps three to five pages long. And I’ve seen them longer than that. 

These mini-specs/user stories document nearly everything conceivable about a given user story.

Because this takes a full sprint to figure out and document, a second sprint is devoted to designing the user interface for the user story. Sometimes, the team tries to be a little more agile (in their minds) by starting the design work just a little before the mini-spec for a user story is fully written. 

Many on the team will consider this dangerous because the spec isn’t fully figured out yet. But, what the heck, they’ll reason, this is where the agility comes in.

Programmers are then handed a pair of documents. One shows exactly what the user story should look like when implemented, and the other provides all details about the story’s behavior. 

No programming can start until these two artifacts are ready. In some companies, it’s the programmers who force this way of working. They take an attitude of saying they will build whatever is asked for, but you better tell them exactly what is needed at the start of the sprint.

Some organizations then stretch things out even further by having the testers work an iteration behind the programmers. This seems to happen because a team’s user stories get larger when each user story needs to include a mini-spec and a full UI design before it can be coded.

Fortunately, most teams realize that programmers and testers need to work together in the same iteration, but they do not extend that to having the whole team work together. This leads to the process shown in this figure.

This figure shows a first iteration devoted to analysis. A second iteration (possibly slightly overlapping with the first) is devoted to user experience design. And then a third iteration is devoted to coding and testing.
This is not agile. It might be your organization’s first step toward becoming agile. But it’s not agile.

What we see in this figure is an iterative waterfall.

In traditional, full waterfall development, a team does all of the analysis for the entire project first. Then they do all the design for the entire project. Then they do all the coding for the entire project. Then they do all the testing for the entire project.

In the iterative waterfall of the figure above, the team is doing the same thing but they are treating each story as a miniature project. They do all the analysis for one story, then all the design for one story, then all the coding and testing for one story. This is an iterative waterfall process, not an agile process.

Ideally, in an agile process, all types of work would finish at exactly the same time. The team would finish analyzing the problem at exactly the same time they finished designing the solution to the problem, which would also be the same time they finished coding and testing that solution. All four of those disciplines (and any others I’m not using in this example) would all finish at exactly the same time.

It’s a little naïve to assume a team can always perfectly achieve that. (It can be achieved sometimes.) But it can remain the goal a team works towards.

A team should always work to overlap work as much as possible. And upfront thinking (analysis, design and other types of work) should be done as late as possible and in as little detail as possible while still allowing the work to be completed within the iteration.

If you are treating your user stories as miniature specification documents, stop. Start instead thinking about each as a promise to have a conversation. 

Feel free to add notes to some stories about things you want to make sure you bring up during that conversation. But adding these notes should be an optional step, not a mandatory step in a sequential process. 

Leaving them optional avoids turning the process into an iterative waterfall process and keeps your process agile.

SPaMCAST 356 - Steve Turner, Time and Inbox Management

Software Process and Measurement Cast - Sun, 08/23/2015 - 22:00

The Software Process and Measurement Cast 356 features our interview with Steve Turner.  Steve and I talked about time management and email inbox tyranny! If you let your inbox and interruptions manage your day, you will deliver less value than you should and feel far less in control than you could! 

Steve’s Bio:

With a background in technology and nearly 30 years of business expertise, Steve has spent the last eight years sharing technology and time management tools, techniques and tips with thousands of professionals across the country.  His speaking, training and coaching has helped many organizations increase the productivity of their employees. Steve has significant experience working with independent sales and marketing agencies. His proven ability to leverage technology (including desktops, laptops and mobile devices) is of great value to anyone in need of greater sales and/or productivity results. TurnerTime℠ is time management tools, techniques and tips to effectively manage e-mails, tasks and projects.  

Contact Information:
Email: steve@getturnertime.com
Web: www.GetTurnerTime.com

Call to Action!

For the remainder of August and September let’s try something a little different.  Forget about iTunes reviews and tell a friend or a coworker about the Software Process and Measurement Cast. Let’s use word of mouth to help grow the audience for the podcast.  After all, if the SPaMCAST provides you with value, why keep it to yourself?!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing.  This week we tackle the essay titled “Why Did the Tower of Babel Fall?”!  Check out the new installment at the Software Process and Measurement Blog.

 

Upcoming Events

Software Quality and Test Management 
September 13 – 18, 2015
San Diego, California
http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams.  Let me know if you are attending! If you are still deciding whether to attend, let me know because I have a discount code.

Agile Development Conference East
November 8-13, 2015
Orlando, Florida
http://adceast.techwell.com/

I will be speaking on November 12th on the topic of Agile Risk. Let me know if you are going and we will have a SPaMCAST Meetup.

Next SPaMCAST

The next Software Process and Measurement Cast features our essay on mind mapping.  To quote Tony Buzan, mind mapping is a technique for radiant thinking.  I view it as a technique to stimulate creative thinking and organize ideas; I think that is what Tony meant by radiant thinking. Mind mapping is one of my favorite tools.  

We will also feature Steve Tendon’s column discussing the TameFlow methodology and his great new book, Hyper-Productive Knowledge Work Performance.

Anchoring the cast will be Gene Hughson returning with an entry from his Form Follows Function blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management

HTTP/2 Server Push

Xebia Blog - Sun, 08/23/2015 - 17:11

The HTTP/2 standard was finalized in May 2015. Most major browsers support it, and Google uses it heavily.

HTTP/2 leaves the basic concepts of requests, responses and headers intact. Changes are mostly at the transport level, improving the performance of parallel requests - with few changes to your application. The Go HTTP/2 'gophertiles' demo nicely demonstrates this effect.

A new concept in HTTP/2 is Server Push, which allows the server to speculatively start sending resources to the client. This can potentially speed up initial page load times: the browser doesn't have to parse the HTML page and find out which other resources to load, instead the server can start sending them immediately.

This article will demonstrate how Server Push affects the load time of the 'gophertiles'.

HTTP/2 in a nutshell

The key characteristic of HTTP/2 is that all requests for a server are sent over one TCP connection, and responses can come in parallel over that connection.

Using only one connection reduces overhead caused by TCP and TLS handshakes. Allowing responses to be sent in parallel is an improvement over HTTP/1.1 pipelining, which only allows requests to be served sequentially.

Additionally, because all requests are sent over one connection, there is a Header Compression mechanism that reduces the bandwidth needed for headers that previously would have had to be repeated for each request.

Server Push

Server Push allows the server to preemptively send a 'request promise' and an accompanying response to the client.

The most obvious use case for this technology is sending resources like images, CSS and JavaScript along with the page that includes them. Traditionally, the browser would have to first fetch the HTML, parse it, and then make subsequent requests for other resources. As the server can fairly accurately predict which resources a client will need, with Server Push it does not have to wait for those requests and can begin sending the resources immediately.

Of course, sometimes you really do only want to fetch the HTML and not the accompanying resources. There are two ways to accomplish this: the client can specify that it does not want to receive any pushed resources at all, or it can cancel an individual push after receiving the push promise. In the latter case the client cannot prevent the server from initiating the push, though, so some bandwidth may already have been wasted. This makes it subtle to decide whether to use Server Push for resources the browser might already have cached.

Demo

To show the effect HTTP/2 Server Push can have, I have extended the gophertiles demo to be able to test behavior with and without Server Push, hosted on an old Raspberry Pi.

Both the latency of loading the HTML and the latency of loading each tile is now artificially increased.

When visiting the page without Server Push with an artificial latency of 1000 ms, you will notice that loading the HTML takes at least one second, and then loading all the images in parallel takes at least another second - so rendering the complete page takes well above 2 seconds.

With Server Push enabled, you will see that after the DOM has loaded, the images are almost immediately there, because they have already been pushed.
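The load-time arithmetic behind those two observations can be captured in a toy model - a sketch that ignores bandwidth, TCP slow start and frame-processing overhead, and only models the round-trip latency:

```python
# Toy model of page load time with and without HTTP/2 Server Push.
# `latency` is the artificial delay (in seconds) added to the HTML and
# to each tile; tiles load in parallel over the single connection.
def page_load_time(latency, use_push):
    html_time = latency              # the browser must fetch the HTML first
    if use_push:
        # Pushed tiles stream alongside the HTML response, so they
        # arrive at roughly the same time as the page itself.
        return html_time
    # Without push, tile requests only start after the HTML has arrived
    # and been parsed, adding a second round of latency.
    return html_time + latency

print(page_load_time(1.0, use_push=False))  # 2.0 -> "well above 2 seconds"
print(page_load_time(1.0, use_push=True))   # 1.0
```

Real measurements will sit above these numbers, but the model shows why push roughly halves the complete load time at high latencies.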

All that glitters, however, is not gold: as you will notice when experimenting (especially at lower artificial latencies), while Server Push fairly reliably reduces the complete load time, it sometimes increases the time until the DOM content is loaded. While this makes sense (the browser needs to process the frames relating to the pushed resources), it could have an impact on the perceived performance of your page: for example, it could delay running JavaScript code embedded in your site.

HTTP/2 does give you tools to tune this, such as stream priorities, but this may require careful tuning and support from the HTTP/2 library you choose.

Conclusions

HTTP/2 is here today, and can provide a considerable improvement in perceived performance - even with few changes in your application.

Server Push potentially allows you to improve your page loading times even further, but requires careful analysis and tuning - otherwise it might even have an adverse effect.

SPaMCAST 356 – Steve Turner, Time and Inbox Management

 www.spamcast.net


Listen Now

Subscribe on iTunes

The next Software Process and Measurement Cast features our interview with Steve Turner. Steve and I talked about time management and email inbox tyranny! If you let your inbox and interruptions manage your day, you will deliver less value than you should and feel far less in control than you could!

Steve’s Bio:

With a background in technology and nearly 30 years of business expertise, Steve has spent the last eight years sharing technology and time management tools, techniques and tips with thousands of professionals across the country. His speaking, training and coaching have helped many organizations increase the productivity of their employees. Steve has significant experience working with independent sales and marketing agencies. His proven ability to leverage technology (including desktops, laptops and mobile devices) is of great value to anyone in need of greater sales and/or productivity results. TurnerTime℠ is a set of time management tools, techniques and tips for effectively managing e-mails, tasks and projects.

Contact Information:
Email: steve@getturnertime.com
Web: www.GetTurnerTime.com

 

Call to Action!

For the remainder of August and September, let's try something a little different. Forget about iTunes reviews and tell a friend or a coworker about the Software Process and Measurement Cast. Word of mouth will help grow the audience for the podcast. After all, the SPaMCAST provides you with value, so why keep it to yourself?!

Re-Read Saturday News

Remember that the Re-Read Saturday of The Mythical Man-Month is in full swing. This week we tackle the essay titled “Why Did the Tower of Babel Fall?” Check out the new installment at the Software Process and Measurement Blog.

 

Upcoming Events

Software Quality and Test Management
September 13 – 18, 2015
San Diego, California
http://qualitymanagementconference.com/

I will be speaking on the impact of cognitive biases on teams. Let me know if you are attending! If you are still deciding, let me know as well, because I have a discount code.

Agile Development Conference East
November 8-13, 2015
Orlando, Florida
http://adceast.techwell.com/

I will be speaking on November 12th on the topic of Agile Risk. Let me know if you are going and we will have a SPaMCAST Meetup.

Next SPaMCAST

The next Software Process and Measurement Cast features our essay on mind mapping. To quote Tony Buzan, Mind Mapping is a technique for radiant thinking. I view it as a technique to stimulate creative thinking and organize ideas, which I think is what Tony meant by radiant thinking. Mind Mapping is one of my favorite tools.

We will also feature Steve Tendon’s column discussing the TameFlow methodology and his great new book, Hyper-Productive Knowledge Work Performance.

Anchoring the cast will be Gene Hughson returning with an entry from his Form Follows Function blog.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.


Categories: Process Management

Why did the Tower of Babel fall? Re-Read Saturday: The Mythical Man-Month, Part 7

The Mythical Man-Month


In the seventh essay of The Mythical Man-Month, Brooks returns to the topics of organization and communication, using the story of the Tower of Babel as a metaphor for many of the engineering fiascoes seen in software development. In the story of the Tower of Babel, when communication broke down, coordination failed and the project ground to a halt. As projects become larger and more complex, communication and organization become both more difficult and more critical.

The essay Why Did the Tower of Babel Fall? presents three mechanisms to facilitate communication.

  1. Informal Communication, which includes all of the telephone calls and conversations that occur in and among teams. Healthy teams and teams of teams talk, interact and share information and ideas.
  2. Meetings are more formal and include reviews, technical briefings, demonstrations, sprint planning and release planning events. Done well, formal meetings, whether part of an Agile framework or not, can find problems and share information and knowledge.
  3. Project Workbooks are the repository of the evolving intellectual property that defines any significant endeavor.

Brooks goes into depth on the definition and structure of the project workbook. While the workbook that Brooks describes is a physical construct, we should consider the workbook less a document than a structure for grouping information. All project documents need to fit into the workbook and be part of the overall structure of project information so that they can be reused and inherited from. The ultimate goal of the workbook is to ensure that everyone who needs information has access to it and actually gets it.

Workbook Mechanics: In order to ensure that everyone has access to the information they need, the workbook must be maintained and organized. For example, Brooks suggests ordering and numbering all project memorandums (we would call these emails now) so that people can scan the list and see whether they need a piece of information. Brooks points out that as the size of the effort, project or program grows, the complexity of maintaining and indexing the project communications increases in a non-linear fashion. Today's technical tools like wikis, SharePoint and hypertext markup make curating and sharing information easier than paper and microfiche (if microfiche is before your time, Google it). As a reminder, capturing and reusing intellectual property is required in all types of work. As a coder, it is easy to fall prey to the idea that the code and a functional system are the only documentation required; however, this is false. For most large efforts, capturing decisions, designs, architectures and even the occasional user manual is an option that can be ignored only if you want to court a communication failure. Remember, however, that over-reliance on documentation is not the goal.

Brooks uses the formula (n² − n) / 2, where n is the number of people working on an effort, to count the possible communication interfaces. As the number of people increases, the number of interfaces grows very quickly. Organization is a tool to reduce the number of interfaces; this is one of the reasons why small teams are more effective than large teams. Scaling Agile is often accomplished by assembling teams of small teams. The metaphor of trees or flowers is often used to describe the visualizations of teams of teams, because communications spreading from a central point can look like a tree or a flower. Organizations leverage techniques like specialization and the division of labor to reduce the number of communication interfaces.
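Brooks' count is easy to verify with a short Go function; the team sizes below are illustrative, not taken from the essay:

```go
package main

import "fmt"

// channels returns the number of pairwise communication interfaces
// among n people, per Brooks' formula (n² − n) / 2.
func channels(n int) int {
	return (n*n - n) / 2
}

func main() {
	// A 5-person team versus one 50-person department:
	fmt.Println(channels(5))  // 10 interfaces
	fmt.Println(channels(50)) // 1225 interfaces

	// Splitting the 50 people into ten teams of five leaves
	// 10 * channels(5) = 100 intra-team interfaces plus
	// channels(10) = 45 team-to-team interfaces:
	fmt.Println(10*channels(5) + channels(10)) // 145, versus 1225
}
```

The quadratic growth is the whole argument: organizing 50 people into teams of teams cuts the direct interfaces by roughly a factor of eight.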

There is often push back on division of labor and specialization in Agile, but this push back is often misplaced. Within a team, the concept of T-shaped people, which eschews specialization, increases capabilities; however, teams themselves often specialize, becoming capability teams. Capability teams often tackle specific types of work more effectively and efficiently by reducing learning curves, and they fit into Brooks' ideas on organizing to reduce communication overhead.

Brooks proposes that each grouping of work has six essential needs.

  1. A Mission,
  2. A Producer,
  3. A Technical Director or An Architect,
  4. A Schedule,
  5. A Division of Labor, and
  6. Interface Definitions Among the Parts.

Four of these essentials are easily understood; however, the roles of the producer and the technical director broke new ground and extended the ideas Brooks began in The Surgical Team. The role of the producer is to establish the team, establish the schedule and continually acquire the resources needed to deliver the project. This includes many of the classic project and program management activities, but also much of the outside communication and coordination. In a small effort with a single team, the roles of producer and technical director can be held by the same person; as efforts become larger and more complex, the roles are generally held by separate people. In the Scaled Agile Framework (SAFe), the role of the producer is handled by the Release Train Engineer. The technical director focuses on the design and the technical components (an inside focus); in SAFe this role is held by the System Architect.

Brooks leverages the metaphor of the Tower of Babel to focus on the need for communication and coordination. Projects, teams and organizations need to take steps to ensure communication happens effectively. Techniques like meetings and project workbooks are merely tools to make sure information is available and organized. Once information is available, organization provides a framework to ensure it is communicated effectively. Without communication, any significant engineering effort will end up just like the Tower of Babel.

Are there other mechanisms for organization, communication and coordination that have worked for you? Please share and let's discuss!

Previous installments of the Re-read of The Mythical Man-Month

Introductions and The Tar Pit

The Mythical Man-Month (The Essay)

The Surgical Team

Aristocracy, Democracy and System Design

The Second-System Effect

Passing the Word


Categories: Process Management

You Are Not Agile . . . If You Do Waterfall

The spiral method is just one example of an Agile hybrid.


Many organizations have titled themselves Agile. Who wouldn't want to be Agile? If you are not Agile, aren't you by definition clumsy, slow or dull? Very few organizations would sign up for those descriptions; however, Agile in the world of software development, enhancement and maintenance means more than being able to move quickly and easily. Agile means that a team or organization has embraced a set of principles that shape behaviors and lead to the adoption of a set of techniques. When there is a disconnect between the Agile walk and the Agile talk, management is often the barrier when it comes to principles, and practitioners are when it comes to techniques. Techniques are often deeply entrenched and require substantial change efforts. Many organizations state they are using a hybrid approach to Agile to transition from a more classic approach to some combination of Scrum, Kanban and Extreme Programming. This is considered a safe, conservative approach that allows an organization to change organically. The problem is that this tactic rarely works, and organizations often get stuck. Failure to spend time and effort on change management leads to hybrid frameworks that are neither fish nor fowl, and those frameworks are rarely Agile. Attributes of stuck (or potentially stuck) organizations are:

The iterative waterfall. The classic iterative waterfall traces its roots to the Boehm Spiral Model. In the faux Agile version of iterative development, short, time-boxed iterations are used for each of the classic waterfall phases: a requirements sprint is followed by a design sprint, then a development sprint, and you know the rest. Both the classic spiral model and the faux Agile version are generally significantly better than the classic waterfall model at generating feedback and delivering value faster; therefore, organizations stop moving toward Agile and reap only the partial rewards.

Upfront requirements. In this hybrid approach to Agile, a team or organization will gather all of the requirements (sometimes called features) at the beginning of the project and lock them down before beginning “work.” Agile is based on a number of assumptions about requirements; two key assumptions are that requirements are emergent and that, once known, requirements decay over time. Locking product backlogs flies in the face of both of these assumptions, which puts teams and organizations back into the age of building solutions that, when delivered, don't meet current business needs. This situation typically arises when the Agile rollout is done using a staggered approach, beginning with the developers and only later reaching out to the business analysts and the business. The interface between groups who have embraced Agile and those that have not often generates additional friction, which is frequently blamed on Agile, making further change difficult.

Testing after development is “done.” One of the most pernicious Agile hybrids is testing in the sprint after development is complete. I have heard this hybrid called “development + 1 sprint.” In this scenario a team will generate a solution (functional code if this is a software problem), demo it to customers, declare it done, and THEN throw it over the wall to the testers. Testers will ALWAYS find defects, which requires them to throw the software back over the wall, either to be worked on immediately, disrupting the current development sprint, or to be put on the backlog to be addressed later. Agile principles espouse the delivery of shippable (or at least potentially shippable) software at the end of every sprint, and shippable means TESTED. Two slightly less pernicious variants of this problem are the use of hardening sprints and doing all of the testing at the end of the project. At least in those cases you are not pretending to be Agile.

How people work is the only cut-and-dried indicator of whether an organization is Agile. Sometimes how people work is a reflection of a transition in progress; however, without a great deal of evidence that the transition is moving along with alacrity, I assume they are, or soon will be, stuck. When a team or organization adopts Agile, pick a project and have everyone involved with that project adopt Agile at the same time, across the whole flow of work. If that means you have to coach one whole project or team at a time, so be it. Think of it as an approach that slices the onion, addressing each layer at the same time, rather than peeling it layer by layer.

One final note: getting stuck in most of these hybrids is probably better than the method(s) being used before. This essay should not be read as an indictment of people wrestling with adopting Agile, but rather as a prod to continue moving forward.


Categories: Process Management

Quote of the Month August 2015

From the Editor of Methods & Tools - Wed, 08/19/2015 - 09:39
Acknowledging that something isn’t working takes courage. Many organizations encourage people to spin things in the most positive light rather than being honest. This is counterproductive. Telling people what they want to hear just defers the inevitable realization that they won’t get what they expected. It also takes from them the opportunity to react to […]