
Software Development Blogs: Programming, Software Testing, Agile Project Management



The Metrics Minute: Velocity


Zoom!

Audio Version:  Software Process and Measurement Cast 119.

Definition:

The simple definition of velocity is the amount of work that is completed in a period of time (typically a sprint). The definition is related to productivity, which is the amount of effort required to complete a unit of work, and to delivery rate, which is the speed at which the work is completed. The inclusion of a time box (the sprint) creates a fixed duration, which makes velocity more of a productivity metric than a speed metric (how much work can be done in a specific timescale by a specific team). Therefore, to truly measure velocity you need to estimate the units of work completed, have a definition of complete, and have a time box.

The definition of done in Agile is typically functional code; however, I think the definition can be stretched to reflect the terminal deliverable the sprint team has committed to create, based on the definition of done the team has established (for example, requirements for a sprint team working on requirements, or completed test cases in a test sprint).

Many Agile projects use story points as a metaphor for the size of the functional code. Note that other functional size measures can be used just as easily; the examples in this paper use story points as the unit of measure. Effort and duration, however, are not size. Effort is an input that is consumed while transforming ideas into functional code, and the amount of effort required for the transformation is a reflection of size, complexity and other factors. Duration, like effort, is consumed by a sprint rather than created, and therefore does not measure what is delivered.

Formula

To calculate velocity, simply add up the size estimates of the features (user stories, requirements, backlog items, etc.) successfully delivered in an iteration.  The use of the size estimates allows the team to distinguish between items of differing levels of granularity.  Successfully delivered should equate to the definition of done.

Velocity = Story Points Completed Per Sprint

And:

Average velocity = Average Number of Story Points Per Sprint

The formula becomes more complex if staffing varies between sprints (and potentially less valuable as a predictive measure).  In order to account for variable staffing the velocity formula would have to be modified as follows:

Velocity per person = sum(size of completed features in a sprint / number of people) / number of sprints (observations)

To be really precise (though not necessarily more accurate) we would have to understand the variability of the data, as variability helps define the level of confidence. Variability generated by differences in team member capabilities is one of the reasons that predictability is enhanced by team stability. As you can see, the more complex the environmental scenario becomes, the more complex the math needed to describe it.
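For illustration only, the following is a minimal Java sketch of the formulas above. The Sprint record and the sample numbers are assumptions made for the example, not data from the article.

import java.util.List;

public class VelocityCalculator {

    // Story points completed and head count for a single sprint (illustrative).
    record Sprint(int storyPointsCompleted, int teamSize) {}

    // Average velocity: mean story points completed per sprint.
    static double averageVelocity(List<Sprint> history) {
        return history.stream()
                .mapToInt(Sprint::storyPointsCompleted)
                .average()
                .orElse(0.0);
    }

    // Velocity per person, averaged over the observed sprints, for variable staffing.
    static double velocityPerPerson(List<Sprint> history) {
        return history.stream()
                .mapToDouble(s -> (double) s.storyPointsCompleted() / s.teamSize())
                .average()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        List<Sprint> history = List.of(new Sprint(21, 7), new Sprint(18, 6), new Sprint(24, 7));
        System.out.printf("Average velocity: %.1f points per sprint%n", averageVelocity(history));
        System.out.printf("Velocity per person: %.1f points per person per sprint%n", velocityPerPerson(history));
    }
}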

Uses:

Velocity is used as a tool in project planning and reporting. Velocity is used in planning to predict how much work will be completed in a sprint and in reporting to communicate what has been done.

When used for planning and estimation the team’s velocity is used along with a prioritized set of granular features (e.g., user stories, backlog items, requirements, etc.) that have been sized or estimated.  The team uses these factors to select what can be done in the upcoming sprint. When the sprint is complete the results are used to update velocity for the next sprint. This is a top down estimation process using historical data.

Over a number of sprints, velocity can be used both as a macro planning tool (when will the project be done) and a reporting tool (we planned at this velocity and are delivering at this velocity).

Velocity can be used in all methodologies and because it is team specific, it is agnostic in terms of units of size.

Issues

As with all metrics, velocity has its share of issues.

The first is that there is an expectation of team stability inherent in the metric. Velocity is impacted by team size and composition and without collecting additional attributes and correlating these attributes to performance, change is not predictable (except by gut feel or Ouija Board). There should always be notes kept on team size and capability so that you can understand your data over time.

Similarly, team dynamics change over time, sometimes radically. Radical changes in team dynamics will affect velocity. Note that shocks to any system of work are apt to create the same issue. Measurement personnel, Scrum Masters and team leaders need to be aware of people’s personalities and how they change over time.

The first-time application of velocity requires either historical data from other similar teams and projects or an estimate. In a perfect world a few sprints would be executed and data gathered before expectations are set; however, clients generally want an idea of whether a project will be completed, when it will be completed and the functions that will be delivered along the way. Estimates of velocity based on the team’s knowledge of the past or other crowdsourcing techniques are relatively safe starting points, assuming continuous recalibration.

The final issue is the requirement for a good definition of done. Done is a concept that has been driven home in the agile community. To quote Mayank Gupta (http://www.scrumalliance.org/articles/106-definition-of-done-a-reference), “An explicit and concrete definition of done may seem small but it can be the most critical checkpoint of an agile project.”  A concrete definition of done provides the basis for estimating velocity by reducing variability based on features that are in different states of completion.  Done also focuses the team by providing a goal to pursue. Make sure you have a crisp definition of done and recognize how that definition can change from sprint to sprint.

Related Metrics:

Productivity (size / effort)

Delivery Rate (duration / size)

Criticisms:

The first criticism of velocity is that the metric is not comparable between teams and, by inference, is not useful as a benchmark. Velocity was conceived as a tool for Scrum Masters and team leads to manage and plan individual sprints. There is no overarching set of rules for the metric to enforce standardization; therefore, one team’s velocity is apt to reflect something different than the next. The criticism is correct but perhaps off the mark. As a team-level tool velocity works because it is very easy to use and can be consistent; adding the complexity of standards and rules to make it more organizational will by definition reduce the simplicity and therefore the usefulness at the team level.

A second criticism is that estimates and budgets are typically set early in a project’s life. Team-level velocity may well be an unknown until later. The dichotomy between estimating and planning (or budgeting and estimating for that matter) is often overlooked. Estimates developed early in a project, or in projects with multiple teams, require different techniques to generate. In large projects, applying team-level velocities requires techniques more akin to portfolio management, which add significant levels of overhead. I would suggest that velocity is more valuable as a team planning tool than as a budgeting or estimation tool at a macro level.

A final criticism is that backlog items may not be defined at a consistent level of granularity; therefore, when applied, velocity may deliver inconsistent results. I tend to dismiss this criticism as it is true for any mechanism that relies on relative sizing. Team consistency will help reduce the variability in sizing; however, all teams should strive to break backlog items down into stories that are as atomic as possible.


Categories: Process Management

Software Development Linkopedia October 2014

From the Editor of Methods & Tools - Wed, 10/22/2014 - 15:04
Here is our monthly selection of interesting knowledge material on programming, software testing and project management.  This month you will find some interesting information and opinions about managing code duplication, product backlog prioritization, engineering management, organizational culture, mobile testing, code reuse and big data. Blog: Practical Guide to Code Clones (Part 1) Blog: Practical Guide to Code Clones (Part 2) Blog: Selecting Backlog Items By Cost of Delay Blog: WIP and Priorities – how to get fast and focused! Blog: 44 engineering management lessons Article: Do’s and Don’ts of Agile Retrospectives Article: Organizational Culture Considerations with Agile Article: ...

Agile Concepts: The Rationale for Longer Cadence

 


The space between has to be big enough and no bigger.

Cadence represents a predictable rhythm. The predictability of cadence is crucial for Agile teams to build trust. Trust is built between the team and the organization, and amongst the team members themselves, by meeting commitments over and over and over. The power of cadence is important to the team’s well-being; therefore, the choice of cadence is often debated in new teams. Many young teams make the mistake of choosing a slower (longer) cadence for many reasons. The most often cited reasons I have found are:

  1. The team has not adopted or adapted their development process to the concept of delivering working functionality in a single sprint. When a team leverages a waterfall or remnants of a waterfall method, work passes from phase to phase at sprint boundaries. For example, it passes from coding to testing. Longer time boxes feel appropriate to the team so they can get analysis, design, coding, testing and implementation done before the next sprint. The problem is that they are trying to “agilefy” a non-Agile process, which rarely works well.
  2. New Agile teams tend to lack confidence in their capabilities. Capabilities that teams need to sort out include both learning the techniques of Agile and the abilities of the team members. Teams convince themselves that a longer sprint will provide a bit of padding that will accommodate the learning process. The problem with adding padding is twofold. The first is that work tends to expand to fill the available time (Parkinson’s Law), and the second is that lengthening the sprint delays retrospectives. Retrospectives provide a team the platform needed to identify issues and make changes, which leads to improved capabilities.
  3. Large stories that can’t be completed in a single sprint are often cited as a reason to adopt longer sprints and a slower cadence. This is generally a reflection of improper backlog grooming. More mature Agile teams typically adopt a rule of thumb to help guide the breakdown of stories. Examples include maximum size limits (e.g. 8 story points, 7 quick function points) or duration limits (a story must be able to be completed in 3 days).
  4. Planning sessions take too long, eating into the development time of the sprint. As with large stories, overly long planning sessions are typically a reflection of either poor backlog grooming or trying to plan and commit to more than can be done within the sprint time box. Teams often change the length of a sprint rather than doing better grooming or taking on less work. Often, even when the sprint duration is expanded, the problem of overly long planning sessions returns as more stories are taken, and worsens as the team gets bored with planning.

Teams often think that they can solve process problems by lengthening the duration of their sprints, which slows their cadence. Typically a better solution is to make sure they are practicing Agile techniques rather than trying to “agilefy” a waterfall method or doing a better job grooming stories. A faster cadence is generally better if for no other reason than the team will get to review their approach faster by doing retrospectives!


Categories: Process Management

Velocity-Driven Sprint Planning

Mike Cohn's Blog - Tue, 10/21/2014 - 15:00

There are two general approaches to planning sprints: velocity-driven planning and commitment-driven planning. Over the next three posts, I will describe each approach, and make my case for the one I prefer.

Let’s begin with velocity-driven sprint planning because it’s the easiest to describe. Velocity-driven sprint planning is based on the premise that the amount of work a team will do in the coming sprint is roughly equal to what they’ve done in prior sprints. This is a very valid premise.

This does assume, of course, things such as a constant team size, similar work from sprint to sprint, consistent sprint lengths, and so on. Each of these assumptions is generally valid, and violations of the assumptions are easily identified; that is, when a sprint changes from 10 to 9 working days because of a holiday, the team knows this in advance.

The steps in velocity-driven sprint planning are as follows:

1. Determine the team’s historical average velocity.
2. Select a number of product backlog items equal to that velocity.

Some teams stop there. Others include the additional step of:

3. Identify the tasks involved in the selected user stories and see if it feels like the right amount of work.

And some teams will go even further and:

4. Estimate the tasks and see if the sum of the work is in line with past sprints.

Let’s look in more detail at each of these steps.

Step 1. Select the Team’s Average Velocity

The first step in velocity-driven sprint planning is determining the team’s average velocity. Some agile teams prefer to look only at their most recent velocity. Their logic is that the most recent sprint is the single best indicator of how much will be accomplished in the next sprint. While this is certainly valid, I prefer to look back over a longer period if the team has the data.

I prefer to take the average of the eight most recent sprints. I’ve found that to be fairly predictive of the future. I certainly wouldn’t take the average over the last 10 years. How fast a team was going during the Bush Administration is irrelevant to that team’s velocity today. But, similarly, if you have the data, I don’t think you should look back only one sprint.

Step 2: Select Product Backlog Items

Armed with an average velocity, team members select the top items from the product backlog. They grab items up to or equal to the average velocity.

At this point, velocity-driven planning in the strictest sense is finished. Rather than taking an hour or two per week of the sprint, sprint planning is done in a matter of seconds. As quickly as the team can add up to their average velocity, they’re done.
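As a rough sketch (not from Mike Cohn’s post), the first two steps could be expressed in code as follows. The BacklogItem record, the eight-sprint averaging window and the greedy selection up to the average velocity are illustrative assumptions.

import java.util.ArrayList;
import java.util.List;

public class VelocityDrivenPlanning {

    // A product backlog item with a hypothetical name and a story point estimate.
    record BacklogItem(String name, int storyPoints) {}

    // Step 1: average velocity over (up to) the eight most recent sprints.
    static double averageVelocity(List<Integer> sprintVelocities) {
        int window = Math.min(8, sprintVelocities.size());
        return sprintVelocities.subList(sprintVelocities.size() - window, sprintVelocities.size())
                .stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(0.0);
    }

    // Step 2: take items from the top of the prioritized backlog until the next
    // item would push the total past the average velocity.
    static List<BacklogItem> selectForSprint(List<BacklogItem> prioritizedBacklog, double averageVelocity) {
        List<BacklogItem> selected = new ArrayList<>();
        int total = 0;
        for (BacklogItem item : prioritizedBacklog) {
            if (total + item.storyPoints() > averageVelocity) {
                break;
            }
            selected.add(item);
            total += item.storyPoints();
        }
        return selected;
    }
}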

Step 3: Identify Tasks (Optional)

Most teams who favor velocity-driven sprint planning realize that planning a sprint in a few seconds is probably insufficient. And so many will add in this third step of identifying the tasks to be performed.

To do this, team members discuss each of the product backlog items that have been selected for the sprint and attempt to identify the key steps necessary in delivering each product backlog item. Teams can’t possibly think of everything so they instead make a good faith effort to think of all they can.

After having identified most of the tasks that will ultimately be necessary, team members look at the selected product backlog items and the tasks for each, and decide if the sprint seems full. They may conclude that the sprint is overfilled or insufficiently filled and add or drop product backlog items. At this point, some teams are done and they conclude sprint planning with a set of selected product backlog items and a list of the tasks to deliver them. Some teams proceed, though, to a final step:

Step 4: Estimate the Tasks (Optional)

With a nice list of tasks in front of them, some teams decide to go ahead and estimate those tasks in hours, as an aid in helping them decide if they’ve selected the right amount of work.
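A minimal sketch of that check might look like the following; the method name, tolerance parameter and inputs are hypothetical and only intended to make the step concrete.

import java.util.List;

public class TaskEstimateCheck {

    // Step 4: compare the planned task hours against what the team has typically
    // completed in recent sprints, within a tolerance (e.g. 0.15 = 15%).
    static boolean inLineWithPastSprints(List<Integer> taskHourEstimates,
                                         List<Integer> hoursCompletedInPastSprints,
                                         double tolerance) {
        int plannedHours = taskHourEstimates.stream().mapToInt(Integer::intValue).sum();
        double typicalHours = hoursCompletedInPastSprints.stream()
                .mapToInt(Integer::intValue)
                .average()
                .orElse(plannedHours);
        return Math.abs(plannedHours - typicalHours) <= tolerance * typicalHours;
    }
}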

In next week’s post, I’ll describe an alternative approach, commitment-driven sprint planning. And after that, I’ll share some recommendations.

Agile Concepts: Cadence


In today’s business environment a plurality of organizations use a two week sprint cadence.

In Agile, cadence is the number of days or weeks in a sprint or release. Stated another way, it is the length of the team’s development cycle. In today’s business environment a plurality of organizations use a two-week sprint cadence. The cadence that a project or organization selects is based on a number of factors, including criticality, risk and the type of project.

A ‘critical’ IT project is one that is crucial, decisive or vital. Projects, or any kind of work, that can be defined as critical need to be given every chance to succeed. Feedback is important for keeping critical projects pointed in the right direction. Projects that are highly important will benefit from gathering feedback early and often. The Agile cycle of planning, demonstrating progress and retrospecting is tailor-made to gather feedback and then act on it. A shorter cycle leads to a faster cadence and quicker feedback.

Similarly, projects with higher levels of risk will benefit from faster feedback so that the team and the organization can evaluate whether the risk is being mitigated or whether it is being converted into reality. Feedback reduces the potential for surprises; therefore, a faster cadence is a good tool for reducing some forms of risk.

The type of project can have an impact on cadence. Projects that include hardware engineering or interfaces with heavyweight approval mechanisms will tend to have slower cadences. For example, a project I was recently asked about required two separate approval board reviews (one regulatory and the second security related). Both took approximately five working days. The length of time required for the reviews was not significantly impacted by the amount of work each group needed to approve. The team adopted a four-week cadence to minimize the potential for downtime while waiting for feedback and to reduce the rework risk of going forward without approval. Maintenance projects, on the other hand, can often leverage Kanban or Scrumban in more of a continuous flow approach (no time box).

Development cadence is not synonymous with release cadence. In many Agile techniques, the sprint cadence and the release cadence do not have to be the same. The Scaled Agile Framework Enterprise (SAFe) makes the point that teams should develop on a cadence, but release on demand. Many teams use a fast development cadence only to release in large chunks (often called releases). When completed work is released is often a reflection of business need, an artifact of waterfall-era thinking or, in some rare cases, the organization’s operational environment.

Most projects will benefit from faster feedback. Shorter cycles (i.e. a faster cadence) are an important tool for generating feedback and reducing risk. A faster cadence is almost always the right answer, unless you really don’t want to know what is happening while you can still react.


Categories: Process Management

SPaMCAST 312 – Alex Neginsky, A Leader and Practitioner’s View of Agile


http://www.spamcast.net

Listen to the SPaMCAST 312 now!

SPaMCAST 312 features our interview with Alex Neginsky. Alex is a real leader and practitioner in a real company that has really applied Agile. Alex shares pragmatic advice about how to practice Agile in the real world!

Alex’s bio:

Alex Neginsky began his career in the software industry at the age of 16 as a Software Engineer for Ultimate Software. He earned his Bachelor’s degree in Computer Science at Florida Atlantic University in 2006. By age 27, Alex obtained his first software patent.

Alex has been at MTech, a division of Newmarket International, since 2011. As the Director of Development he brings 15 years of experience, technical skills, and management capabilities. Alex manages highly skilled software professionals across several teams stationed all over Eastern Europe and the United States. He serves as the liaison between MTech Development and the industry. During his tenure with the MTech division of Newmarket, Alex has been pivotal in the adoption of the complete software development lifecycle and has spearheaded the adoption of leading Agile Development Methodologies such as Scrum and Kanban. This has yielded higher velocity and better efficiencies throughout the organization.

Contact Alex at aneginsky@newmarketinc.com

LinkedIn

If you have the right stuff and are interested in joining Newmarket, then check out:

http://www.newmarketinc.com/careers/join-newmarket

Call to action!

What are the two books that have most influenced your career (business, technical or philosophical)? Send the titles to spamcastinfo@gmail.com. What will we do with this list? We have two ideas. First, we will compile a list and publish it on the blog. Second, we will use the list to drive “Re-read” Saturday, an exciting new feature we will begin in November. More on this new feature next week. So feel free to choose your platform and send an email, leave a message on the blog or Facebook, or just tweet the list (use hashtag #SPaMCAST)!

Next

SPaMCAST 313 features our essay on developing an initial backlog.  Developing an initial backlog is an important step to get projects going and moving in the right direction. If a project does not start well, it is hard for it to end well.  We will provide techniques to help you begin well!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014 11:30 EDT

Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques was co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management


Agile Risk Management: The Need Still Exists

ROAM

 

Hand Drawn Chart Saturday

I once asked the question, “Has the adoption of Agile techniques magically erased risk from software projects?” The obvious answer is no; however, Agile has provided a framework and tools to reduce the overall level of performance variance. Even if we can reduce risk by damping the variance driven by complexity, the size of the work, process discipline and people, we still need to “ROAM” the remaining risks. ROAM is a model that helps teams identify risks. Applying the model, a team would review each risk and classify them as:

  • Resolved, the risk has been answered and avoided or eliminated.
  • Owned, someone has accepted the responsibility for doing something about the risk.
  • Accepted, the risk has been understood and the team has agreed that nothing will be done about it.
  • Mitigated, something has been done so that the probability or potential impact is reduced.

When we consider any risk we need to recognize the two attributes: impact and probability.  Impact is what will happen if the risk becomes something tangible. Impacts are typically monetized or stated as the amount of effort needed to correct the problem if it occurs. The size of the impact can vary depending on when the risk occurs. For example, if we suddenly decide that the system architecture will not scale to the level required during sprint 19, the cost in rework would be higher than if that fact were discovered in sprint 2. Probability is the likelihood a risk will become an issue. In a similar manner to impact, probability varies over time.
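As an illustrative sketch (not part of the original article), a risk log entry carrying a ROAM status plus an impact and a probability could be modeled as below, with exposure computed as impact multiplied by probability; the class, names and numbers are assumptions.

import java.util.List;

public class RiskLog {

    enum RoamStatus { RESOLVED, OWNED, ACCEPTED, MITIGATED }

    // A risk with a monetized impact and a probability between 0 and 1.
    record Risk(String description, RoamStatus status, double impact, double probability) {
        // Classic exposure calculation: impact weighted by likelihood.
        double exposure() {
            return impact * probability;
        }
    }

    public static void main(String[] args) {
        List<Risk> risks = List.of(
                new Risk("Architecture may not scale", RoamStatus.OWNED, 50000, 0.2),
                new Risk("Key tester unavailable mid-release", RoamStatus.MITIGATED, 8000, 0.5));
        risks.forEach(r -> System.out.printf("%s [%s] exposure: %.0f%n",
                r.description(), r.status(), r.exposure()));
    }
}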

We defined risk as “any uncertain event that can have an impact on the success of a project.” Does using Agile change our need to recognize and mitigate risk?  No, but instead of a classic risk management plan and a risk log, a more Agile approach to risk management might be generating additional user stories. While Agile techniques reduce some forms of risk we still need to be vigilant. Adding risks to the project or program backlog will help ensure there is less chance of variability and surprises.


Categories: Process Management

Agile Risk Management: People


People are chaotic.

Have you ever heard the saying, “software development would be easy if it weren’t for the people”? People are one of the factors that cause variability in the performance of projects and releases (other factors include complexity, the size of the work, and process discipline). There are three mechanisms built into most Agile frameworks that accept the idea that people can be chaotic by nature and therefore dampen variability.

  1. Team size, constitution and consistency are attributes that most Agile frameworks use to enhance productivity and effectiveness, and that also reduce the natural variability generated when people work together.
    1. The common Agile team size of 7 ± 2 is small enough that team members can establish and nurture personal relationships to ensure effective communication.
    2. Agile teams are typically cross-functional and include a Scrum master/coach and the product owner. The composition of the team fosters self-reliance and the ability to self-organize, again reducing variability.
    3. Long-lived teams tend to establish strong bonds that foster good communication and behaviors such as swarming. Swarming is a behavior in which team members rally to a task that is in trouble so that the team as a whole can meet its goal, which reduces overall variability in performance.
  2. Peer reviews of all types have been a standard tool to improve quality and consistency of work products for decades. Peer reviews are a mechanism to remove defects from code or other work products before they are integrated into larger work products. The problem is that having someone else look at something you created and criticize it is grating. Extreme Programming took classic peer reviews a step further and put two people together at one keyboard, one typing and the other providing running commentary (a colloquial description of pair programming). Demonstrations are a variant of peer reviews. Removing defects earlier in the development process through observation and discussion reduces variability and therefore the risk of not delivering value.
  3. Daily stand-ups and other rituals are the outward markers of Agile techniques. Iteration/sprint planning keeps teams focused on what they need to do in the short-term future and then re-plans when that time frame is over. Daily stand-ups provide a platform for the team to sync up on a daily basis to reduce the variance that can creep in when plans diverge. Demonstrations show project stakeholders how the team is solving their business problems and solicit feedback to keep the team on track. All of these rituals reduce potential variability that can be introduced by people acting alone rather than as a team with a common goal.

In information technology projects of all types, people transform ideas and concepts into business value. In software development and maintenance the tools and techniques might vary but, at their core, software-centric projects are social enterprises. Getting any group of people together to achieve a goal is a somewhat chaotic process. Agile techniques and frameworks have been structured to help individuals increase alignment and act together as a team to deliver business value.


Categories: Process Management

How to deploy a Docker application into production on Amazon AWS

Xebia Blog - Fri, 10/17/2014 - 17:00

Docker reached production status a few months ago. But having the container technology alone is not enough. You need a complete platform infrastructure before you can deploy your docker application in production. Amazon AWS offers exactly that: a production quality platform that offers capacity provisioning, load balancing, scaling, and application health monitoring for Docker applications.

In this blog, you will learn how to deploy a Docker application to production in five easy steps.

For demonstration purposes, you are going to use the node.js application that was built for CloudFoundry and used to demonstrate Deis in a previous post: a truly useful app, the sources of which are available on GitHub.

1. Create a Dockerfile

First thing you need to do is to create a Dockerfile to create an image. This is quite simple: you install the node.js and npm packages, copy the source files and install the javascript modules.

# DOCKER-VERSION 1.0
FROM    ubuntu:latest
#
# Install nodejs npm
#
RUN apt-get update
RUN apt-get install -y nodejs npm
#
# add application sources
#
COPY . /app
RUN cd /app; npm install
#
# Expose the default port
#
EXPOSE  5000
#
# Start command
#
CMD ["nodejs", "/app/web.js"]

2. Test your docker application

Now you can create the Docker image and test it.

$ docker build -t sample-nodejs-cf .
$ docker run -d -p 5000:5000 sample-nodejs-cf

Point your browser at http://localhost:5000, click the 'start' button and Presto!

3. Zip the sources

Now that you know the instance works, you zip the source files. The image will be built on Amazon AWS based on your Dockerfile.

$ zip -r /tmp/sample-nodejs-cf-srcs.zip .

4. Deploy Docker application to Amazon AWS

Now you install and configure the Amazon AWS command line interface (CLI) and deploy the Docker source files to Elastic Beanstalk. You can do this all manually, but here you use the script deploy-to-aws.sh that I created.

$ deploy-to-aws.sh \
         sample-nodejs-cf \
         /tmp/sample-nodejs-cf-srcs.zip \
         demo-env

After about 8-10 minutes your application is running. The output should look like this:

INFO: creating application sample-nodejs-cf
INFO: Creating environment demo-env for sample-nodejs-cf
INFO: Uploading sample-nodejs-cf-srcs.zip for sample-nodejs-cf, version 1412948762.
upload: ./sample-nodejs-cf-srcs.zip to s3://elasticbeanstalk-us-east-1-233211978703/1412948762-sample-nodejs-cf-srcs.zip
INFO: Creating version 1412948762 of application sample-nodejs-cf
INFO: demo-env in status Launching, waiting to get to Ready..
...
INFO: demo-env in status Launching, waiting to get to Ready..
INFO: Updating environment demo-env with version 1412948762 of sample-nodejs-cf
INFO: demo-env in status Updating, waiting to get to Ready..
...
INFO: demo-env in status Updating, waiting to get to Ready..
INFO: Version 1412948762 of sample-nodejs-cf deployed in environment
INFO: current status is Ready, goto http://demo-env-vm2tqi3qk4.elasticbeanstalk.com

5. Test your Docker application on the internet!

Your application is now available on the Internet. Browse to the designated URL and click on start. When you increase the number of instances at Amazon, they will appear in the application. When you deploy a new version of the application, you can observe how the new version appears without any errors in the client application.

For more information, go to Amazon Elastic Beanstalk adds Docker support and Dockerizing a Node.js Web App.

Then When Given

Xebia Blog - Fri, 10/17/2014 - 14:50

People who practice ATDD all know how frustrating it can be to write automated examples. Especially when you get stuck overthinking the preconditions of examples.

This post describes an alternative approach to writing acceptance tests: write them backwards!

Imagine that you are building the very first online phone book. We need to define an acceptance test for viewing the location of a florist. Using the Given-When-Then formula you would probably describe the behaviour like this.


Given I am on the online phone book homepage
When I type “Florist” in the business type field
And I click 

...

Most of the time you will be discussing and describing details that have nothing to do with viewing the location of a florist. To avoid this, write down the Then clause of the formula first.
Make sure the Then clause contains an observable result.


Then I see the location “Floriststreet 123”

Next, we will try to answer the following question: What caused the Then clause?
Make sure the When clause contains an actor and an action.


When I click “View map” of the search result
Then I see the location “Floriststreet 123”

The last thing we will need to do is answer the following question: Why can I perform that action?
Make sure the Given clause contains a simple precondition.


Given I see a search result for florist “Floral Designs”
When I click “View map” of the search result
Then I see the location “Floriststreet 123”

You might have noticed that I left out certain parts where the user goes to the homepage and selects UI objects in the search area. They were not worth mentioning in the Given-When-Then formula. Too much detail makes us lose focus on what we really want to check. The essence of this acceptance test is clicking on the link "View map" and exposing the location to the user.

Try it a couple of times and let me know how it went.

Agile Risk Management: Ad-hoc Processes


The CFO here wants to move away from vague generic invoices because he feels (rightly so) that the agency interprets the relationship as having carte blanche…

There are many factors that cause variability in the performance of projects and releases, including complexity, the size of the work, people and process discipline. Consistency and predictability are difficult when the process is being made up on the spot. Agile has come to reflect (at least in practice) a wide range of approaches, ranging from simply delivering faster to more structured frameworks such as Scrum, Extreme Programming and the Scaled Agile Framework Enterprise. A lack of at least some structure nearly always increases the variability in delivery and therefore the risk to the organization.

I recently received the following note from a reader (and listener to the podcast) who will remain nameless (all names redacted at the request of the reader).

“All of the development is outsourced to a company with many off-shore and a few on-site resources.

The development agency has, somehow, sold the business on the idea that because they are “Agile”, their ability to dynamically/quickly react and implement requires a lack of formal “accounting.”  The CFO here wants to move away from vague generic invoices because he feels (rightly so) that the agency interprets the relationship as having carte blanche to work on anything and everything ever scratched out on a cocktail napkin without proper project charters, buy-in, and SOW.”

This observation reflects the risk that an ill-defined process poses to the organization in terms of the value delivered to the business, financial exposure and customer satisfaction. Repeatability and consistency of process are not dirty words.

Scrum and other Agile frameworks are light-weight empirical models. At their most basic level they can be summarized as:

  1. Agree upon what you are going to do (build a backlog),
  2. Plan work directly ahead (sprint/iteration planning),
  3. Build a little bit while interacting with the customer (short time box development),
  4. Review what has been done with the stakeholders (demonstration),
  5. Make corrections to the process (retrospective),
  6. Repeat as needed until the goals of the work are met.

Deming would have recognized the embedded plan-do-check-act cycle. There is nothing ad hoc about the framework, even though it is not overly prescriptive.

I recently toured a research facility for a major snack manufacturer. The people in the labs were busy dreaming up the next big snack food. Personnel were involved in both “pure” and applied research, both highly creative endeavors. When I asked about the process they were using, what they described was something similar to Scrum: creativity being pursued within a framework to reduce risk.

Ad-hoc software development and maintenance was never in style. In today’s business environment, where software is integral to the delivery of value, just winging the development process increases the risk of an already somewhat risky proposition.


Categories: Process Management

Quote of the Month October 2014

From the Editor of Methods & Tools - Thu, 10/16/2014 - 11:48
Minimalism also applies in software. The less code you write, the less you have to maintain. The less you maintain, the less you have to understand. The less you have to understand, the less chance of making a mistake. Less code leads to fewer bugs. Source: “Quality Code: Software Testing Principles, Practices, and Patterns”, Stephen Vance, Addison-Wesley

Agile Risk Management: Dealing With Size


Overall project size influences variability.

Risk is a reflection of possibilities, the stuff that could happen if the stars align; therefore projects are highly influenced by variability. There are many factors that influence variability, including complexity, process discipline, people and the size of the work. The impact of size can be felt in two separate but equally important ways. The first is the size of the overall project and the second is the size of any particular unit of work.

Overall project size influences variability by increasing the sheer number of moving parts that have to relate to each other. As an example, the assembly of an automobile is a large endeavor and is the culmination of a number of relatively large subprojects. Any significant variance in how the subprojects are assembled along the path of building the automobile will cause problems in the final deliverable. Large software projects require extra coordination, testing and integration to ensure that all of the pieces fit together, deliver the functionality customers and stakeholders expect, and behave properly. All of these extra steps increase the possibility of variance.

Similarly, large pieces of work (user stories in Agile) cause the same problems noted for large projects, but at the team level. For example, when any piece of work enters a sprint, the first step in transforming that story into value is planning. Large pieces of work are more difficult to plan, if for no other reason than that they take longer to break down into tasks, increasing the likelihood that something will not be considered, generating a “gotcha” later in the sprint.

Whether at a project or sprint level, smaller is generally simpler, and simpler generates less variability. There are a number of techniques for managing size.

  1. Short, complete sprints or iterations. The impact of time boxing on reducing project size has been discussed and understood in the mainstream since the 1990s (see Hammer and Champy, Reengineering the Corporation). Breaking a project into smaller parts reduces the overhead of coordination and provides faster feedback and more synchronization points, which reduces variability. The duration of a sprint acts as a constraint on the amount of work that can be taken into a sprint and reasonably completed.
  2. Team sizes of 5 to 9 are large enough to tackle real work while maintaining the stable team relationships needed to coordinate the development and integration of functionality that can potentially be shipped at the end of each sprint. Team size constrains the amount of work that can enter a sprint and be completed.
  3. Splitting user stories is a mechanism to reduce the size of a specific piece of work so that it can be completed faster with fewer potential complications that cause variability. The process of splitting user stories breaks stories down into smaller independent pieces (INVEST) that can be developed and tested faster. Smaller stories are less likely to block the entire team if anything goes wrong. This reduces variability and generates feedback more quickly, thereby reducing the potential for large batches of rework if the solution does not meet stakeholder needs. Small stories increase flow through the development process, reducing variability.

I learned many years ago that supersizing fries at the local fast food establishment was a bad idea in that it increased the variability in my caloric intake (and waistline). Similarly, large projects are subject to increased variability. There are just too many moving parts, which leads to variability and risk. Large user stories have exactly the same issues as large projects, just on a smaller scale. The Agile techniques of short sprints and small team sizes provide constraints so that teams can control the size of work they are considering at any point in time. Teams need to take the additional step of breaking down stories into smaller pieces to continue to minimize the potential impact of variability.


Categories: Process Management

Agile Risk Management: Dealing With Complexity

Does the raft have a peak load?

Software development is inherently complex and therefore risky. Historically, many techniques have been leveraged to identify and manage risk. As noted in Agile and Risk Management, much of the risk in projects can be put at the feet of variability, and complexity is one of the factors that drives variability. Spikes, prioritization and fast feedback are important techniques for recognizing and reducing the impact of complexity.

  1. Spikes provide a tool to develop an understanding of an unknown within a time box. Spikes are a simple tool used to answer a question, gather information, perform a specific piece of basic research, address project risks or break a large story down. Spikes generate knowledge, which both reduces the complexity intrinsic in unknowns and provides information that can be used to simplify the problem being studied in the spike.
  2. Prioritization is a tool used to order the project or release backlog. Prioritization is also a powerful tool for reducing the impact of variability. It is generally easier to adapt to a negative surprise earlier in the lifecycle. Teams can allocate part of their capacity to the most difficult stories early in the project, using complexity as a criterion for prioritization.
  3. Fast feedback is the single most effective means of reducing complexity (short of avoidance). Core to the DNA of Agile and lean frameworks is the “inspect and adapt” cycle. Deming’s Plan-Do-Check-Act cycle is one representation of “inspect and adapt”, as are retrospectives, pair programming and peer reviews. Short iterative cycles provide a platform to effectively apply the “inspect and adapt” model to reduce complexity based on feedback. When teams experience complexity they have a wide range of tools to share and seek feedback, ranging from daily stand-up meetings to demonstrations and retrospectives.
    Two notes on feedback:

    1. While powerful, feedback only works if it is heard.
    2. The more complex (or important) a piece of work is, the shorter the feedback cycle should be.

Spikes, prioritization and feedback are common Agile and lean techniques. The fact that they are common has led some Agile practitioners to feel that frameworks like Scrum have negated the need to deal specifically with risks at the team level. These are powerful tools for identifying and reducing complexity and the variability complexity generates; however, they need to be combined with other tools and techniques to manage the risk that is part of all projects and releases.


Categories: Process Management

AngularJS Training Week

Xebia Blog - Tue, 10/14/2014 - 07:00

Just a few more weeks and it's the AngularJS Training Week at Xebia in Hilversum (The Netherlands): four days full of AngularJS content, from 17 to 20 October, 2014. Over these days we will cover the AngularJS basics, AngularJS advanced topics, Tooling & Scaffolding, and Testing with Jasmine, Karma and Protractor.

If you already have some experience or if you are only interested in one or two of the topics, then you can sign up for just the days that are of interest to you.

Visit www.angular-training.com for a full overview of the days and topics or sign up on the Xebia Training website using the links below.

Agile and Risk Management


We brush our teeth to avoid cavities; a process and an outcome.

There is no consensus on how risk is managed in Agile projects or releases, or whether there is a need for any special steps. Risk is defined as “any uncertain event that can have an impact on the success of a project.” Risk management, on the other hand, is about predicting the future. Risk management is a reflection of the need to manage the flow of work in the face of variability. Variability in software development can be defined as the extent to which any outcome differs from expectations. Understanding and exploiting that variability is the essence of risk management. Variability is exacerbated by a number of factors, such as the following.

  1. Complexity in any problem is a reflection of the number of parts to the problem, how those parts interact and the level of intellectual difficulty. The more complex a problem, the less likely you will be able to accurately predict the delivery of a solution to the problem. Risk is reduced by relentlessly focusing on good design and simplifying the problem. Reflecting the Agile principle of simplicity, reducing the work to only what is absolutely needed is one mechanism for reducing variability.
  2. Size of the project or release influences overall variability by increasing the number of people involved in delivering an outcome and therefore the level of coordination needed. Size is a form of complexity. Size increases the possibility that there will be interactions and dependencies between components of the solution. In a perfect world, the work required to deliver each requirement and system component would be independent; when this can’t be achieved, projects need mechanisms to identify and deal with dependencies. An example of a mechanism to address the complexity created by size is the release planning event in SAFe.
  3. Ad hoc or uncontrolled processes can’t deliver a consistent output. We adopt processes in order to control the outcome. We brush our teeth to avoid cavities; a process and an outcome. When a process is not under control, the outcome is not predictable. In order for a process to be under control, the process needs to be defined and then performed based on that definition. Processes need to be defined in as simple and lean a manner as possible to be predictable.
  4. People are chaotic by nature. Any process that includes people will be more variable than the same process without people. Fortunately we have not reached the stage in software development where machines have replaced people (watch any of the Terminator or Matrix movies). The chaotic behavior of teams or teams of teams can be reduced through alignment. Alignment can be achieved at many levels. At the release or program level, developing a common goal and understanding of business need helps to develop alignment. At the team level, stable teams help team members build interpersonal bonds, leading to alignment. Training is another tool to develop or enhance alignment.

Software development, whether the output is maintenance, enhancements or new functionality, is a reflection of processes that have a degree of variability. Variability, while generally conceived of as being bad and therefore to be avoided, can be good or bad. Finding a new way to deliver work that increases productivity and quality is good. The goal for any project or release is to reduce the possibility of negative variance and then learn to use what remains to best advantage.


Categories: Process Management

Fast and Easy integration testing with Docker and Overcast

Xebia Blog - Mon, 10/13/2014 - 18:40
Challenges with integration testing

Suppose that you are writing a MongoDB driver for java. To verify if all the implemented functionality works correctly, you ideally want to test it against a REAL MongoDB server. This brings a couple of challenges:

  • Mongo is not written in java, so we can not embed it easily in our java application
  • We need to install and configure MongoDB somewhere, and maintain the installation, or write scripts to set it up as part of our test run.
  • Every test we run against the mongo server, will change the state, and tests might influence each other. We want to isolate our tests as much as possible.
  • We want to test our driver against multiple versions of MongoDB.
  • We want to run the tests as fast as possible. If we want to run tests in parallel, we need multiple servers. How do we manage them?

Let's try to address these challenges.

First of all, we do not really want to implement our own MongoDB driver. Many implementations exist and we will be reusing the mongo java driver to focus on how one would write the integration test code.

Overcast and Docker

We are going to use Docker and Overcast. Probably you already know Docker. It's a technology to run applications inside software containers. Overcast is the library we will use to manage Docker for us. Overcast is an open source java library developed by XebiaLabs to help you write tests that connect to cloud hosts. Overcast has support for various cloud platforms, including EC2, VirtualBox, Vagrant and Libvirt (KVM). Recently support for Docker has been added by me in Overcast version 2.4.0.

Overcast helps you to decouple your test code from the cloud host setup. You can define a cloud host with all its configuration separately from your tests. In your test code you will only refer to a specific overcast configuration. Overcast will take care of creating, starting, provisioning that host. When the tests are finished it will also tear down the host. In your tests you will use Overcast to get the hostname and ports for this cloud host to be able to connect to them, because usually these are dynamically determined.

We will use Overcast to create Docker containers running a MongoDB server. Overcast will help us to retrieve the port dynamically exposed by the Docker host. The host in our case will always be the docker host, which runs on an external Linux host. Overcast will use a TCP connection to communicate with Docker. We map the internal ports to a port on the docker host to make them externally available. MongoDB will internally run on port 27017, but docker will map this port to a local port in the range 49153 to 65535 (defined by docker).

Setting up our tests

Let's get started. First, we need a Docker image with MongoDB installed. Thanks to the Docker community, this is as easy as reusing one of the existing images from the Docker Hub. All the hard work of creating such an image has already been done for us, and thanks to containers we can run it on any host capable of running Docker containers. How do we configure Overcast to run the MongoDB container? This is the minimal configuration we put in a file called overcast.conf:

mongodb {
    dockerHost="http://localhost:2375"
    dockerImage="mongo:2.7"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

That's all! The dockerHost is configured to be localhost with the default port. This is the default value, so you can omit it. The Docker image called mongo, version 2.7, will be automatically pulled from the central Docker registry. We set exposeAllPorts to true to tell Docker it needs to dynamically map all ports exposed by the Docker image. We set remove to true to make sure the container is automatically removed when stopped. Notice that we override the default container startup command by passing in an extra parameter, "--smallfiles", to improve testing performance. For our setup this is all we need, but Overcast also has support for defining static port mappings, setting environment variables, etc. Have a look at the Overcast documentation for more details.

How do we use this overcast host in our test code? Let's have a look at the test code that sets up the Overcast host and instantiates the mongodb client that is used by every test. The code uses the TestNG @BeforeMethod and @AfterMethod annotations.

private CloudHost itestHost;
private Mongo mongoClient;

@BeforeMethod
public void before() throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost("mongodb");
    itestHost.setup();

    String host = itestHost.getHostName();
    int port = itestHost.getPort(27017);

    MongoClientOptions options = MongoClientOptions.builder()
        .connectTimeout(300 * 1000)
        .build();

    mongoClient = new MongoClient(new ServerAddress(host, port), options);
    logger.info("Mongo connection: " + mongoClient.toString());
}

@AfterMethod
public void after(){
    mongoClient.close();
    itestHost.teardown();
}

It is important to understand that the mongoClient is the object under test. As mentioned before, we borrowed this library to demonstrate how one would integration test such a library. The itestHost is the Overcast CloudHost. In before(), we instantiate the cloud host using the CloudHostFactory. The setup() call will pull the required images from the Docker registry, create a Docker container, and start this container. We get the host and port from the itestHost and use them to build our mongo client. Notice that we set a high connection timeout on the connection options to make sure the MongoDB server is started in time; especially on the first run it can take some time to pull the images. You can of course always pull the images beforehand. In the @AfterMethod, we simply close the connection with MongoDB and tear down the Docker container.
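For readers who want to reproduce the snippets, the test class also needs the usual imports and a logger field. The skeleton below is a sketch rather than a verbatim copy from the example project: the Overcast package names and the SLF4J logger are assumptions, and the class and package names simply mirror the mongo.MongoTest entry used in the TestNG suite further down.

package mongo;

import java.net.UnknownHostException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

import com.mongodb.Mongo;
import com.mongodb.MongoClient;
import com.mongodb.MongoClientOptions;
import com.mongodb.ServerAddress;
import com.xebialabs.overcast.host.CloudHost;        // package name assumed for Overcast 2.x
import com.xebialabs.overcast.host.CloudHostFactory;  // package name assumed for Overcast 2.x

public class MongoTest {

    // Logger used in before(); SLF4J is an assumption, any logging API will do.
    private static final Logger logger = LoggerFactory.getLogger(MongoTest.class);

    private CloudHost itestHost;
    private Mongo mongoClient;

    // before() and after() as shown above, plus the tests discussed below.
}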

Writing a test

The before and after methods are executed for every test, so we get a completely clean MongoDB server for every test, running on a different port. This completely isolates our test cases so that no test can influence another. You are free to choose your own testing strategy; sharing a cloud host between multiple tests is also possible. Let's have a look at one of the tests we wrote for the mongo client:

@Test
public void shouldCountDocuments() throws DockerException, InterruptedException, UnknownHostException {

    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");
    BasicDBObject doc = new BasicDBObject("name", "MongoDB");

    for (int i=0; i < 100; i++) {
        WriteResult writeResult = coll.insert(new BasicDBObject("i", i));
        logger.info("writing document " + writeResult);
    }

    int count = (int) coll.getCount();
    assertThat(count, equalTo(100));
}

Even without knowledge of MongoDB this test should not be hard to understand. It creates a database and a new collection and inserts 100 documents into it. Finally, the test asserts that the getCount method returns the correct number of documents in the collection. Many more aspects of the mongodb client can be tested in additional tests in this way. In our example setup we have implemented two more tests to demonstrate this, so the example project contains 3 tests in total. When you run the 3 example tests sequentially (assuming the mongo Docker image has already been pulled), you will see that it takes only a few seconds to run them all. That is extremely fast.
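As a further illustration (this one is not taken from the example project), a similar test could verify that an inserted document can be queried back. The sketch uses the same MongoDB 2.x driver API and Hamcrest assertions as the test above:

@Test
public void shouldFindInsertedDocument() {

    DB db = mongoClient.getDB("mydb");
    DBCollection coll = db.getCollection("testCollection");

    // Insert a single document and read it back by one of its fields.
    coll.insert(new BasicDBObject("name", "MongoDB").append("type", "database"));
    DBObject found = coll.findOne(new BasicDBObject("name", "MongoDB"));

    assertThat(found, notNullValue());
    assertThat((String) found.get("type"), equalTo("database"));
}

Because before() gives every test its own fresh container, this test does not have to clean up after the counting test above.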

Testing against multiple MongoDB versions

We also want to run all our integration tests against different versions of the MongoDB server to ensure there are no regressions. Overcast allows you to define multiple configurations. Let's add configurations for two more versions of MongoDB:

defaultConfig {
    dockerHost="http://localhost:2375"
    exposeAllPorts=true
    remove=true
    command=["mongod", "--smallfiles"]
}

mongodb27=${defaultConfig}
mongodb27.dockerImage="mongo:2.7"

mongodb26=${defaultConfig}
mongodb26.dockerImage="mongo:2.6"

mongodb24=${defaultConfig}
mongodb24.dockerImage="mongo:2.4"

The defaultConfig contains the configuration we have already seen. The other three configurations extend from defaultConfig and each define a specific MongoDB image version. Let's also change our test code a little bit so that the Overcast configuration used in the test setup depends on a parameter:

@Parameters("overcastConfig")
@BeforeMethod
public void before(String overcastConfig) throws UnknownHostException {
    itestHost = CloudHostFactory.getCloudHost(overcastConfig);

Here we use the parameterized tests feature of TestNG. We can now define a TestNG suite with our test cases and specify how to pass in the different Overcast configurations. Let's have a look at our TestNG suite definition:

<suite name="MongoSuite" verbose="1">
    <test name="MongoDB27tests">
        <parameter name="overcastConfig" value="mongodb27"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB26tests">
        <parameter name="overcastConfig" value="mongodb26"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
    <test name="MongoDB24tests">
        <parameter name="overcastConfig" value="mongodb24"/>
        <classes>
            <class name="mongo.MongoTest" />
        </classes>
    </test>
</suite>

With this test suite definition we define 3 test cases that each pass a different Overcast configuration to the tests. The Overcast configuration plus the TestNG configuration enables us to configure externally which MongoDB versions we want to run our test cases against.
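How you launch the suite depends on your build tool; the example project uses Gradle. Purely as an illustrative sketch (not part of the example project), the same suite file can also be run programmatically through TestNG's own API, which can be convenient for ad-hoc runs. The path to testng.xml below is an assumption:

import java.util.Collections;

import org.testng.TestNG;

public class RunMongoSuite {

    public static void main(String[] args) {
        // Point TestNG at the suite definition shown above and run it.
        TestNG testng = new TestNG();
        testng.setTestSuites(Collections.singletonList("src/test/resources/testng.xml"));
        testng.run();
    }
}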

Parallel test execution

Until this point, all tests are executed sequentially. Given the dynamic nature of cloud hosts and Docker, nothing prevents us from running multiple containers at once. Let's change the TestNG configuration a little bit to enable parallel testing:

<suite name="MongoSuite" verbose="1" parallel="tests" thread-count="3">

This configuration will cause all 3 test cases from our test suite definition to run in parallel (in other words, our 3 Overcast configurations with different MongoDB versions). Let's run the tests from IntelliJ and see if they all pass:

[Screenshot: test results in IntelliJ showing 9 passed tests]
We see 9 executed tests, because we have 3 tests and 3 configurations. All 9 tests have passed. The total execution time turned out to be under 9 seconds. That's pretty impressive!

During test execution we can see docker starting up multiple containers (see next screenshot). As expected it shows 3 containers with a different image version running simultaneously. It also shows the dynamic port mappings in the "PORTS" column:

[Screenshot: docker ps output showing 3 MongoDB containers with different image versions and their dynamic port mappings]

That's it!

Summary

To summarise, the advantages of using Docker with Overcast for integration testing are:

  1. Minimal setup. Only a Docker-capable host is required to run the tests.
  2. Time saved. Thanks to the Docker community, a minimal amount of configuration and infrastructure setup is required to run the integration tests.
  3. Isolation. Every test can run in its own isolated environment, so the tests will not affect each other.
  4. Flexibility. Use multiple Overcast configurations and parameterized tests for testing against multiple versions.
  5. Speed. The Docker containers start up very quickly, and Overcast and TestNG even allow you to parallelize the testing by running multiple containers at once.

The example code for our integration test project is available here. You can use Boot2Docker to set up a Docker host on Mac or Windows.

Happy testing!

Paul van der Ende 

Note: Due to a bug in the Gradle parallel test runner, you might run into this random failure when you run the example test code yourself. The workaround is to disable parallelism or to use a different test runner, such as IntelliJ or Maven.

 

Xebia KnowledgeCast Episode 5: Madhur Kathuria and Scrum Day Europe 2014

Xebia Blog - Mon, 10/13/2014 - 10:48

xebia_xkc_podcast
The Xebia KnowledgeCast is a bi-weekly podcast about software architecture, software development, lean/agile, continuous delivery, and big data. Also, we'll have some fun with stickies!

In this 5th episode, we share key insights of Madhur Kathuria, Xebia India’s Director of Agile Consulting and Transformation, as well as some impressions of our Knowledge Exchange and Scrum Day Europe 2014. And of course, Serge Beaumont will have Fun With Stickies!

First, Madhur Kathuria shares his vision on Agile and we interview Guido Schoonheim at Scrum Day Europe 2014.

In this episode's Fun With Stickies Serge Beaumont talks about wide versus deep retrospectives.

Then, we interview Martin Olesen and Patricia Kong at Scrum Day Europe 2014.

Want to subscribe to the Xebia KnowledgeCast? Subscribe via iTunes, or use our direct rss feed.

Your feedback is appreciated. Please leave your comments in the shownotes. Better yet, send in a voice message so we can put you ON the show!

Credits

SPaMCAST 311 – Backlog Grooming, Software Sensei, Containment-Viruses and Software

http://www.spamcast.net

Listen to SPaMCAST 311 now!

SPaMCAST 311 features our essay on backlog grooming. Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated; however, most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time.

We also have a new installment of Kim Pries's Software Sensei column. What is the relationship between the containment of diseases and bad software? Kim makes the case that the processes for dealing with both are related.

The Essay begins

Backlog grooming is an important technique that can be used in any Agile or Lean methodology. At one point the need for backlog grooming was debated; however, most practitioners now find the practice useful. The simplest definition of backlog grooming is the preparation of user stories or requirements to ensure they are ready to be worked on. The act of grooming and preparation can cover a wide range of specific activities and can be performed at any time (although some times are better than others).

Listen to the rest now!

Next

SPaMCAST 312 features our interview with Alex Neginsky.  Alex is a real practitioner in a real company that has really applied Agile.  Almost everyone finds their own path with Agile.  Alex has not only found his path but has gotten it right and is willing to share!

Upcoming Events

DCG Webinars:

Agile Risk Management – It Is Still Important! October 24, 2014, 11:30 EDT

Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

Upcoming Conferences:

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU-accredited webinar recordings and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes to bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management