
Software Development Blogs: Programming, Software Testing, Agile Project Management


SPaMCAST 296 – Jeff Dalton, CMMI, Agile, Resiliency


Listen to the Software Process and Measurement Cast 296

SPaMCAST 296 features our interview with Jeff Dalton, in which we talked about Agile and resiliency. If Agile is resilient, it will be able to spring back into shape after being bent or compressed by the pressures of development and support.  In the conversation Jeff and I discussed whether Agile is resilient and how frameworks like the CMMI can be used to make Agile more resilient.

Jeff is Broadsword's President, Certified Lead Appraiser, CMMI Instructor, ScrumMaster and author of "agileCMMI," Broadsword's leading methodology for incremental and iterative process improvement.  He is Chairman of the CMMI Institute's Partner Advisory Board and former President of the Great Lakes Software Process Improvement Network (GL-SPIN).  He is a recipient of the Software Engineering Institute's SEI Member Award for Outstanding Representative for his work uniting the Agile and CMMI communities through his popular blog "Ask the CMMI Appraiser."  He holds degrees in Music and Computer Science and builds experimental airplanes in his spare time.  You can reach Jeff at appraiser@broadswordsolutions.com.

Contact Data:
Email: appraiser@broadswordsolutions.com
Twitter: @CMMIAppraiser
Blog: http://askthecmmiappraiser.blogspot.com/
Web: http://www.broadswordsolutions.com/
Also see: http://www.cmmi-tv.com

Next week we will feature our essay on IFPUG Function Points.  IFPUG function points are an ISO Standard means to size projects and applications. IFPUG function points are used across a wide range of project types, industries and countries.

Upcoming Events

Upcoming DCG Webinars:

July 24 11:30 EDT – The Impact of Cognitive Bias On Teams
Check these out at www.davidconsultinggroup.com

I will be attending Agile 2014 in Orlando, July 28 through August 1, 2014.  It would be great to get together with SPaMCAST listeners; let me know if you are attending.

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes to bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese.

 

 


Categories: Process Management


Adding #NoEstimates to the Framework

#NoEstimates . . . Yes or No?

 

Hand Drawn Chart Saturday!

When I published An Estimation Framework Is Required In Complex Environments, several people that I respect, including Luis Gonçalves (interviewed on SPaMCAST 282 with Ben Linders), begged to differ with my conclusion that a framework is required.  Luis made an impassioned plea for #NoEstimates.  The premise of #NoEstimates is that estimates enforce a plan, and plans are often overcome by changes that range across both technology and business needs.

Vasco Duarte, a leading proponent of #NoEstimates, describes the process as follows:

  1. Select the highest value piece of work the team needs to do.
  2. Break that piece of work down into small components.  Vasco uses the term risk-neutral chunks: pieces of work that will not put the project at risk if they don't get delivered on the first attempt.
  3. Develop each piece of work according to the definition of done. #NoEstimates makes a strong case that unless done means usable by the end customers, the project is not getting the feedback needed to avoid negative surprises.
  4. Iterate and refactor. Continue until the product or enhancement meets the organization’s definition of done.
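
The four steps above can be sketched as a simple loop. This is purely illustrative; the function name, the object shapes and the idea of tagging work items with a numeric value field are invented here for the sketch, not part of #NoEstimates itself:

```javascript
// Illustrative sketch of the #NoEstimates flow: pick the highest-value
// item, split it into risk-neutral chunks, deliver each chunk to "done",
// and repeat until the overall definition of done is met.
function noEstimatesFlow(backlog, isDone) {
  var delivered = [];
  while (backlog.length > 0 && !isDone(delivered)) {
    // 1. Select the highest-value piece of work.
    backlog.sort(function (a, b) { return b.value - a.value; });
    var item = backlog.shift();
    // 2. Break it into small, risk-neutral chunks (pre-split here).
    var chunks = item.chunks || [item];
    // 3. Develop each chunk to the definition of done.
    chunks.forEach(function (chunk) { delivered.push(chunk); });
    // 4. Iterate; the loop exits when the definition of done is met.
  }
  return delivered;
}
```

Running this with a two-item backlog and a definition of done of "at least one chunk delivered" returns only the higher-value item, mirroring the value-first ordering the process prescribes.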

Estimates are part of a continuum that begins with budgeting, continues to estimating and terminates at planning.  Organizations build strategic plans based on bringing new or enhanced products to market.  For example, a retailer might commit to opening x number of stores in the next year.  Once publicly stated, the organization will need to perform to those commitments or face a wide range of consequences.  Based on experience gathered by working in several retailers' IT organizations, I know that even a single store is a major effort that includes store operations, purchasing, legal and IT.  Missing an opening date causes embarrassment and, typically, large financial penalties (paying workers who aren't working, rescheduling advertising and possible tax penalties, not to mention the impact on stock prices).  Organizations need to budget and estimate at a strategic level.

Where the #NoEstimates approach makes sense is at the planning level.  The #NoEstimates process empowers teams (product owner, Scrum Master/coach and development personnel) to work on the highest value work first and to develop a predictable capacity to deliver work.  The results generated by the team provide feedback to evaluate the promises made through organization-level budgets and estimates.

When performance is at odds with what has been promised, business choices must be made.  Choices can range from involving other teams (when this makes sense) to accepting the implications of not meeting the commitments made by the organization.

Does #NoEstimates make sense?  Yes, the process and concepts embodied by #NoEstimates fit solidly into a framework of budgeting, estimating and planning.  Without a framework to codify the use of #NoEstimates and to govern organizational behavior, getting to the point of making hard business choices will generate pressure to fall back into command-and-control behavior.

Note:  I am working on scheduling an interview and discussion with Luis and Vasco on the Software Process and Measurement Cast to discuss #NoEstimates.


Categories: Process Management

Fixing The Top Five Issues in Project Estimation

Sometimes you need a seeing eye dog to see the solution.

In the entry The Top Five Issues In Project Estimation, we identified the five macro categories of estimation problems generated when I asked a group of people the question "What are the two largest issues in project estimation?"  Knowing what the issues are is important; however, equally important is having a set of solutions.

  1. Requirements. Techniques that reduce the impact of unclear and changing requirements on budgeting and estimation include release plans, identifying a clear minimum viable product and changing how requirements changes are viewed when judging project success. See Requirements: The Chronic Problem with Project Estimation.
  2. Estimate Reliability. Recognize that budgets, estimates and plans are subject to the cone of uncertainty.  The cone of uncertainty reflects the fact that the earlier you are in a project, the less you know about it, and predictions of the future will be more variable the less you know.  Budgets, estimates and plans are predictions of cost, effort, duration or size.
  3. Project History.  Collect predicted and actual project size, effort, duration and other project demographics for each project.  Project history can be used both as the basis for analogous estimates and/or to train parametric estimation tools.  The act of collecting the quantitative history and the qualitative story about how projects performed is a useful form of introspection that can drive change.
  4. Labor Hours Are Not The Same As Size.  Implement functional (e.g. IFPUG Function Points) or relative sizing (Story Points) as a step in the estimation process. The act of focusing on size separately allows estimators to gain greater focus on the other parts of the estimation process like team capabilities, processes, risks or changes that will affect velocity.  Greater focus leads to greater understanding, which leads to a better estimate.
  5. No One Dedicated to Estimation.  Estimating is a skill that requires practice to develop consistency.  While everyone should understand the concepts of estimation, consistency will be gained faster if someone is dedicated to learning and executing the estimation process.
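
To make the cone of uncertainty in item 2 concrete, it is often quoted with the Boehm/McConnell phase multipliers below. Treat the numbers and the function as an illustrative sketch under those commonly cited values, not as data from this article:

```javascript
// Commonly cited cone-of-uncertainty multipliers (illustrative values).
var coneOfUncertainty = [
  { phase: 'Initial concept',       low: 0.25, high: 4.0  },
  { phase: 'Approved definition',   low: 0.5,  high: 2.0  },
  { phase: 'Requirements complete', low: 0.67, high: 1.5  },
  { phase: 'Detailed design',       low: 0.8,  high: 1.25 }
];

// Turn a single-point estimate into the plausible range for a given phase.
function estimateRange(pointEstimate, phaseName) {
  var phase = coneOfUncertainty.filter(function (c) {
    return c.phase === phaseName;
  })[0];
  return { low: pointEstimate * phase.low, high: pointEstimate * phase.high };
}
```

Under these multipliers, a 100-person-month point estimate made at the initial-concept stage really spans roughly 25 to 400 person-months, which is why early budgets should be treated as predictions rather than commitments.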

Solving the five macro estimation problems requires organizational change.  Many of the changes required are difficult because they are less about "how" to estimate and more about what we think estimates are, which leads into a discussion of why we estimate.  Organizations budget and estimate to provide direction at a high level.  At this level budgets and estimates affect planning for tax accruals and for communicating portfolio-level decisions to organizational stakeholders.  Investing in improving how organizations estimate will improve communication between CIOs, CFOs and business stakeholders.


Categories: Process Management

The Top Five Issues In Project Estimation

 

Sometimes estimation leaves you in a fog!

When I recently asked a group of people the question "What are the two largest issues in project estimation?", I received a wide range of answers. The range of answers is probably a reflection of the range of individuals answering.  Five macro categories emerged from the answers. They are:

  1. Requirements. The impact of unclear and changing requirements on budgeting and estimation was discussed in detail in the entry Requirements: The Chronic Problem with Project Estimation.  Bottom line: change is required to embrace dynamic development methods, and that change will require changes in how the organization evaluates projects.
  2. Estimate Reliability. The perceived lack of reliability of an estimate can be generated by many factors, including differences between development and estimation processes. One of the respondents noted, "most of the time the project does not believe the estimate and thus comes up with their own, which is primarily based on what they feel the customer wants to hear."
  3. Project History. Both analogous and parametric estimation processes use the past as an input in determining the future.  Collection of consistent historical data is critical to learning and not repeating the same mistakes over and over.  According to Joe Schofield, "few groups retain enough relevant data from their experiences to avoid relearning the same lesson."
  4. Labor Hours Are Not The Same As Size.  Many estimators estimate the effort needed to perform either the whole project or individual tasks.  By jumping immediately to effort, estimators miss all of the nuances that affect the level of effort required to deliver value.  According to Ian Brown, "then the discussion basically boils down to opinions of the number of hours, rather than assessing other attributes that drive the number of hours that something will take."
  5. No One Dedicated to Estimation.  Estimating is a skill built on a wide range of techniques that need to be learned and practiced.  When no one is dedicated to developing and maintaining estimates, it is rare that anyone can learn to estimate consistently, which affects reliability.  To quote one of the respondents, "consistency of estimation from team to team, and within a team over time, is non-existent."

 

Each of the top five issues is solvable without throwing out the concept of estimation, which is critical for planning at the organization, portfolio and product levels.  Every organization will have to wrestle with its own solution to the estimation conundrum. However, the first step is to recognize the issues you face and your goals for the estimation process.


Categories: Process Management

Mocking a REST backend for your AngularJS / Grunt web application

Xebia Blog - Thu, 06/26/2014 - 17:15

Anyone who has ever developed a web application knows that a lot of time is spent in a browser checking that everything works well and looks good. And you want to make sure it looks good in all possible situations. For a single-page application, built with a framework such as AngularJS, that gets all its data from a REST backend, this means you should verify your front-end against different responses from your backend. For a small application with primarily GET requests to display data, you might get away with testing against your real (development) backend. But for large and complex applications, you need to mock your backend.

In this post I'll go into detail on how you can solve this by mocking GET requests for an AngularJS web application that's built using Grunt.

In our current project, we're building a new mobile front-end for an existing web application. Very convenient, since the backend already exists with all the REST services that we need. An even bigger convenience is that the team that built the existing web application also built an entire mock implementation of the backend. This mock implementation will give standard responses for every possible request. Great for our Protractor end-to-end tests! (Perhaps another post about that another day.) But this mock implementation is not so great for the non-standard scenarios. Think of error messages, incomplete data, large numbers or a strange currency. How can we make sure our UI displays these kinds of cases correctly? We usually cover all these cases in our unit tests, but sometimes you just want to see it right in front of you as well. So we started building a simple solution right inside our Grunt configuration.

To make this solution work, we need to make sure that all our REST requests go through the Grunt web server layer. Our web application is served by Grunt on localhost port 9000. This is the standard configuration that Yeoman generates (you really should use Yeoman to scaffold your project). Our development backend is also running on localhost, but on port 5000. In our web application we want to make all REST calls using the `/api` path, so we need to rewrite all requests from http://localhost:9000/api to our backend: http://localhost:5000/api. We can do this by adding middleware in the connect:livereload configuration of our Gruntfile.

livereload: {
  options: {
    open: true,
    middleware: function (connect, options) {
      return [
        require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

        /* The lines below are generated by Yeoman */
        connect.static('.tmp'),
        connect().use(
          '/bower_components',
          connect.static('./bower_components')
        ),
        connect.static(appConfig.app)
      ];
    }
  }
},

Do the same for the connect:test section as well.

Since we're using 'connect-modrewrite' here, we'll have to add this to our project:

npm install connect-modrewrite --save-dev

With this configuration every request starting with http://localhost:9000/api will be passed on to http://localhost:5000/api, so we can just use /api in our AngularJS application. Now that we have this working, we can write some custom middleware to mock some of our requests.

Let's say we have a GET request /api/user returning some JSON data:

{"id": 1, "name":"Bob"}

Now we'd like to see what happens with our application in case the name is missing:

{"id": 1}

It would be nice if we could send a simple POST request to change the response of all subsequent calls. Something like this:

curl -X POST -d '{"id": 1}' http://localhost:9000/mock/api/user

We prefixed the path that we want to mock with /mock in order to know when we should start mocking something. Let's see how we can implement this. In the same Gruntfile that contains our middleware configuration we add a new function that will help us mock our requests.

var mocks = [];
function captureMock() {
  return function (req, res, next) {

    // match on POST requests starting with /mock
    if (req.method === 'POST' && req.url.indexOf('/mock') === 0) {

      // everything after /mock is the path that we need to mock
      var path = req.url.substring(5);

      var body = '';
      req.on('data', function (data) {
        body += data;
      });
      req.on('end', function () {

        mocks[path] = body;

        res.writeHead(200);
        res.end();
      });
    } else {
      next();
    }
  };
}

And we need to add the above function to our middleware configuration:

middleware: function (connect, options) {
  return [
    captureMock(),
    require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),

    connect.static('.tmp'),
    connect().use(
      '/bower_components',
      connect.static('./bower_components')
    ),
    connect.static(appConfig.app)
  ];
}

Our function will be called for each incoming request. It will capture each request starting with /mock as a request to define a mock. Next, it stores the body in the mocks variable with the path as key. So if we execute our curl POST request, we end up with something like this in our mocks variable:

mocks['/api/user'] = '{"id": 1}';

Next we need to actually return this data for requests to http://localhost:9000/api/user. Let's make a new function for that.

function mock() {
  return function (req, res, next) {
    var mockedResponse = mocks[req.url];
    if (mockedResponse) {
      res.writeHead(200);
      res.write(mockedResponse);
      res.end();
    } else {
      next();
    }
  };
}

And also add it to our middleware.

  ...
  captureMock(),
  mock(),
  require('connect-modrewrite')(['^/api http://localhost:5000/api [P L]']),
  ...

Great, we now have a simple mocking solution in just a few lines of code that allows us to send simple POST requests to our server with the requests we want to mock. However, it can only send status codes of 200, and it cannot differentiate between HTTP methods like GET, PUT, POST and DELETE. Let's change our functions a bit to support that functionality as well.

var mocks = {
  GET: {},
  PUT: {},
  POST: {},
  PATCH: {},
  DELETE: {}
};

function captureMock() {
  return function (req, res, next) {
    if (req.method === 'POST' && req.url.indexOf('/mock') === 0) {
      var path = req.url.substring(5);

      var body = '';
      req.on('data', function (data) {
        body += data;
      });
      req.on('end', function () {

        // Copy the content type plus any mock-header-* headers
        // into the headers of the mocked response.
        var headers = {
          'Content-Type': req.headers['content-type']
        };
        for (var key in req.headers) {
          if (req.headers.hasOwnProperty(key)) {
            if (key.indexOf('mock-header-') === 0) {
              headers[key.substring(12)] = req.headers[key];
            }
          }
        }

        mocks[req.headers['mock-method'] || 'GET'][path] = {
          body: body,
          responseCode: req.headers['mock-response'] || 200,
          headers: headers
        };

        res.writeHead(200);
        res.end();
      });
    } else {
      next();
    }
  };
}

function mock() {
  return function (req, res, next) {
    var mockedResponse = mocks[req.method][req.url];
    if (mockedResponse) {
      res.writeHead(mockedResponse.responseCode, mockedResponse.headers);
      res.write(mockedResponse.body);
      res.end();
    } else {
      next();
    }
  };
}

We can now create more advanced mocks:

curl -X POST \
    -H "mock-method: DELETE" \
    -H "mock-response: 403" \
    -H "Content-type: application/json" \
    -H "mock-header-Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT" \
    -d '{"error": "Not authorized"}' http://localhost:9000/mock/api/user

curl -D - -X DELETE http://localhost:9000/api/user
HTTP/1.1 403 Forbidden
Content-Type: application/json
last-modified: Tue, 15 Nov 1994 12:45:26 GMT
Date: Wed, 18 Jun 2014 13:39:30 GMT
Connection: keep-alive
Transfer-Encoding: chunked

{"error": "Not authorized"}

Since we thought this would be useful for other developers, we decided to make all of this available as an open-source library on GitHub and npm.

To add this to your project, just install with npm:

npm install mock-rest-request --save-dev

And of course add it to your middleware configuration:

middleware: function (connect, options) {
  var mockRequests = require('mock-rest-request');
  return [
    mockRequests(),
    
    connect.static('.tmp'),
    connect().use(
      '/bower_components',
      connect.static('./bower_components')
    ),
    connect.static(appConfig.app)
  ];
}

Software Development Conferences Forecast June 2014

From the Editor of Methods & Tools - Thu, 06/26/2014 - 07:22
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine. AGILE2014, July 28 – August 1, Orlando, USA; Agile on the Beach, September 4-5 2014, Falmouth in Cornwall, UK; SPTechCon, September 16-19 2014, Boston, USA; STARWEST, October 12-17 2014, Anaheim, USA; JAX London, October 13-15 2014, London, UK; Pacific Northwest ...

What do you do when inertia wins?

At the end of a race inertia might not be enough!

Audio Version on SPaMCAST 197

Changing how any organization works is not easy.  Many different moving parts have to come together for a change to take root and build up enough momentum to pass the tipping point. Unfortunately, because of misalignment, misunderstanding or poor execution, change programs don't always win the day.  This is not news to most of us in the business.  The question I pose is: what should happen after a process improvement program fails?  What happens when the wrong kind of inertia wins?

Step One:  All failures must be understood.

A critical review of the failed program that focuses on why and how it failed must be performed.  The word critical is important.  Nothing should be sugar-coated or "spun" to protect people's feelings.  A critical review must also have a good dose of independence from those directly involved in the implementation.  Independence is required so that the biases and decisions that led to the original program can be scrutinized.  The goal is not to pillory those involved but rather to make sure the same mistakes are not repeated.  These reviews are known by many names: postmortems, retrospectives or troubled-project reviews, to name a few.

Step Two:  Determine which way the organization is moving.

Inertia describes why an object in motion tends to stay in motion and an object at rest tends to stay at rest.  Energy is required to change the state of any object or organization; therefore, understanding the direction of the organization is critical to planning any change. In process improvement programs, we call the application of energy change management.  A change management program might include awareness building, training, mentoring or a myriad of other events, all designed to inject energy into the system. The goal of that energy is either to amplify or to change the performance of some group within the organization.  When not enough or too much energy is applied, the process change will fail.

Just because a change has failed does not mean all is lost.  I would suggest that there are two possible outcomes to a failure. The first is that the original position is reinforced, making change even more difficult.  The second is that the target group has been pushed into moving, maybe not all the way to where they should be or even in the right direction, but the original inertia has been broken.

Frankly, both outcomes happen.  If the failure is such that no good comes of it, then your organization will be mired in the muck of living off past performance.  This is similar to what happens when a car gets stuck in snow or sand and digs itself in.  The second scenario is more positive: while the goal was not attained, the organization has begun to move, making further change easier.  I return to the car-stuck-in-the-snow example.  A technique taught to many of us who live in snowy climates is "rocking," which gets a car stuck in snow moving back and forth.  Movement increases the odds that you will be able to break free and get going in the right direction.  Interestingly, the recognition of movement is a powerful sales technique taught in the Sandler Sales System.

Step Three:  Take smaller bites!

The lean startup movement provides a number of useful concepts that can be used when changing any organization.  In the Software Process and Measurement Cast 196, Jeff Anderson talked in detail about leveraging the concepts of lean start-ups within change programs (Link to SPaMCAST 196).  In this essay, I suggest using the concept of minimum viable changes to build a backlog of manageable changes.  The backlog should be groomed and prioritized by a product owner (or owners) from the area being impacted by the change.  This will increase ownership and involvement and generate buy-in.  Once you have a prioritized backlog, make the changes in a short time-boxed manner while involving those being impacted in measuring the value delivered.  Stop doing things if they are not delivering value and go to the next change.

What do you do when inertia wins? Being a change agent is not easy, and no one succeeds all the time unless they are not taking any risks.  Learn from your mistakes and successes.  Understand the direction the organization is moving and use that movement as an asset to magnify the energy you apply. Involve those you are asking to change in building a backlog of prioritized minimum viable changes (mix the concept of a backlog with concepts from the lean start-up movement).  Make changes based on how those who are impacted prioritize the backlog, then stand back to observe and measure.  Finally, pivot (change direction) if necessary.  Always remember that the goal is not really the change itself but rather demonstrable business value. Keep pushing until the organization is going in the right direction.  What do you do when inertia wins?  My mother would have said just get back up, dust yourself off and get back in the game; it isn't that easy, but it is not that much more complicated.


Categories: Process Management

How Agile accelerates your business

Xebia Blog - Wed, 06/25/2014 - 10:11

This drawing explains how agility accelerates your business. It is free to use and distribute. Should you have any questions regarding the subjects mentioned, feel free to get in touch.

Requirements: The Chronic Problem With Project Estimation

Just being ready to run does not mean you know all of the requirements.


When I recently asked a group of people the question "What are the two largest issues in project estimation?" I received one response more than any other: requirements, prefixed by words like unclear and changing. In the eight years I have been hosting the Software Process and Measurement Cast I end each interview by asking what two changes could be made to make it easier to deliver better functionality (the actual words vary), and requirements are one of the culprits that appear over and over.  The requirements/estimation conundrum has not changed much over the multiple decades I have been in the software development world.  We have described the budget, estimate and plan continuum that most large projects follow; the estimation "problem" follows the same continuum.

Most large organizations follow a cycle of budgeting for projects. Plans are immediately put into motion based on those budgets, including "guidance" provided to the markets by public companies. The budgets become part of how executives and managers are paid or bonused. In IT, projects can make up a significant portion of C-level and other managers' budgets.  Projects at this level generally represent concepts and at best are estimated based on interpolations from analogies. Nevertheless, estimates of cost, duration and revenue are generated with a lot of thought and hard work.  Anyone who has been in the business knows that the scope of projects at this point is dynamic, but even this early on, the die has begun to be set.

As programs and projects begin, a better understanding of the central concept is developed.  Based on that better understanding the budget is refined; however, the refinement generally occurs within the boundaries placed in the budget.  I have known project and program managers who tried all sorts of techniques as they fought to respect the stakeholder's central concept and the need to meet the numbers. Techniques include sourcing decisions (offshoring), buying packages or even shedding known scope.  All of this activity occurs as the concept and the underlying requirements evolve. This issue occurs both in industry and government.

As work begins, budgeting and estimating shift to planning.  In waterfall projects, estimators and schedulers build elaborate work breakdown structures (WBS) that help guide team members through the process of delivering value. Each requirement and task is estimated to support the WBS.  This type of behavior also occurs in some pseudo-Agile teams. For work that is highly deterministic this approach may work well; however, if the business environment is dynamic or requirements evolve to more fully meet the product owner's or other stakeholders' needs, it won't work.

The natural tendency is to eschew budgeting and estimating, and to change how public companies report and how executives are paid.  This will happen, but not overnight.  In the interim the best option in most cases is to manage the boundary between estimating and planning using tools like release plans and minimum marketable/acceptable products.  The release plan needs to identify what has to be delivered (the minimum marketable/acceptable product), with the nice-to-haves acting as the buffer that is managed to meet the corporate promises. This approach requires all parties to change some behaviors: both IT and stakeholders must stop over-promising, and IT must be treated less like a factory and more like a collaborative venture.  Both are difficult changes, but just holding out for better requirements has not worked for decades and probably won't get better soon.


Categories: Process Management

Teams Should Go So Fast They Almost Spin Out of Control

Mike Cohn's Blog - Tue, 06/24/2014 - 15:00

Yes, I really did refer to guitarist Alvin Lee in a Certified Scrum Product Owner class last week. Here's why.

I was making a point that Scrum teams should strive to go as fast as they can without going so fast they spin out of control. Alvin Lee of the band Ten Years After was a talented guitarist known for his very fast solos. Lee's ultimate performance was of the song "I'm Going Home" at Woodstock. During the performance, Lee was frequently on the edge of flying out of control, yet he kept it all together for some of the best 11 minutes in rock history.

I want the same of a Scrum team--I want them going so fast they are just on the verge of spinning out of control yet are able to keep it together and deliver something classic and powerful.

Re-watching Ten Years After's Woodstock performance I'm struck by a couple of other lessons, which I didn't mention in class last week:

One: Scrum teams should be characterized by frequent, small hand-offs. A programmer gets eight lines of code working and yells, "Hey, Tester, check it out." The tester has been writing automated tests while waiting for those eight lines and runs the tests. Thirty minutes later the programmer has the next micro-feature coded and ready for testing. Although a good portion of the song is made up of guitar solos, they aren't typically long solos. Lee plays a solo and soon hands the song back to his bandmates, repeating for four separate solos through the song.

Two: Scrum teams should minimize work in progress. While "I'm Going Home" is a long song (clocking in at over eleven minutes), there are frequent "deliveries" of interpolated songs throughout the performance. Listen for "Blue Suede Shoes," "Whole Lotta Shakin'" and others, some played for just a few seconds.

OK, I'm probably nuts, and I certainly didn't make all these points in class. But Alvin Lee would have made one great Scrum teammate. Let me know what you think in the comments below.

Portfolio-Level Estimation

Portfolio Estimation? A life coach might help!

I recently asked a group of people the question "What are the two largest issues in project estimation?"  The group was made up of people involved in delivering value to clients as developers, testers, methodologists and consultants.  The respondents' experience ran the gamut from Scrum and eXtreme Programming through the Scaled Agile Framework (SAFe) and Disciplined Agile Delivery (DAD) to waterfall.  While not a scientific survey, the responses were illuminating.  While I am still in the process of compiling the results and extracting themes, I thought I would share one of the first responses: all resources are not created equal.  The respondent made the point that most estimating exercises, which begin at the portfolio level, don't take into account the nuances of individual experience and capacity when projects are "plucked" from a prioritized portfolio to begin work.  This problem, at the portfolio level, is based on two issues.  The first is making assumptions based on assumptions and the second is making decisions based on averages.  At the portfolio level both are very hard to avoid.

Nearly all organizations practice some form of portfolio management.  Portfolio management techniques range from naïve (e.g. the squeaky wheel method) to sophisticated (e.g. portfolio-level Kanban).  In most cases the decision process for when to release a piece of work from the portfolio requires making assumptions about the perceived project size and the organizational capabilities required to deliver the project. In order to make those assumptions, further assumptions must be made (a bit of foreshadowing: assumptions based on assumptions are a potential problem).  The most important assumptions made when a project is released are that the requirements and solution are known.  These assumptions affect how large the project needs to be and the capabilities required to deliver it. Many organizations go to great lengths to solve this problem.  Tactics used to address the issue include trying to gather and validate all of the requirements before starting any technical work (waterfall), running a small proof-of-concept project (prototypes) or generating rapid feedback (Agile). Other techniques include creating repositories that link skills to people or teams.  And while these tools are useful for assembling teams in matrix organizations, they are rarely useful at the portfolio level because they are not forecasting tools. In all cases, the path that provides the most benefit revolves around generating information as early as possible and then reacting to that information.

The second major issue is that estimates and budgets divined at the portfolio level are a reflection of averages.  In many cases, organizations use analogies to generate estimates and initial budget numbers for portfolio-level initiatives.  When using analogies, an estimator (or group) compares the project he or she is trying to estimate to completed projects to determine how alike they are. For example, if you think that a project is about 70% the size of a known project, simple arithmetic can be used to estimate the new project.  Other assumptions and perceptions are then used to temper the precision.  Real project performance will reflect all of the nuances that the technology, the solution and individual capabilities generate.  These nuances will generate variances from the estimate.  As with the knowledge issue, organizations use many techniques to manage the impact of the variances that will occur.  Two popular methods are contingencies in the work breakdown schedule (waterfall) and backlog re-planning (Agile). In all cases, the best outcomes reflect feedback based on the performance of real teams delivering value.
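As a sketch of that simple arithmetic (the project sizes, effort figures and the 70% similarity ratio are all hypothetical illustrations, not data from the survey):

```javascript
// Estimating a new project by analogy to a completed one.
// All numbers here are hypothetical illustrations.
function estimateByAnalogy(knownSize, knownEffort, similarityRatio) {
  var newSize = knownSize * similarityRatio;  // e.g. judged ~70% of the known project
  var productivity = knownSize / knownEffort; // size delivered per hour on the analog
  return {
    size: newSize,
    effort: newSize / productivity            // effort implied by the same productivity
  };
}

// A known 1,000 function point project took 5,000 hours; the new
// project is judged to be about 70% of its size.
var estimate = estimateByAnalogy(1000, 5000, 0.7);
// estimate.size is roughly 700 function points, estimate.effort roughly 3,500 hours
```

A real analogy-based estimate would then temper this arithmetic with the other assumptions and perceptions described above.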

Estimates, by definition, are never right (hopefully they are close). Estimates (as distinct from plans) are based on what the estimator knows very early in the process.  What really needs to be built becomes known later in the process, after estimates and budgets are set at the portfolio level.  Mature organizations recognize that as projects progress new information is gathered, and that information should be quickly used to refine estimates and budgets.


Categories: Process Management

Kanban, Developer Career & Mobile UX in Methods & Tools Summer 2014 issue

From the Editor of Methods & Tools - Mon, 06/23/2014 - 14:54
Methods & Tools – the free e-magazine for software developers, testers and project managers – has just published its Summer 2014 issue that discusses objections to Kanban implementation, how to use a model to evaluate and improve mobile user experience, balancing a software development job and a meaningful life, Scrum agile project management tools, JavaScript unit testing and static analysis for BDD. Methods & Tools Summer 2014 contains the following articles: * Kanban for Skeptics * Using a Model To Systematically Evaluate and Improve Mobile User Experience * Developer Careers Considered Harmful * TargetProcess – ...

SPaMCAST 295 – TDD, Software Sensei, Cognitive Load


Listen to the Software Process and Measurement Cast 295!

SPaMCAST 295 features our essay on Test Driven Development (TDD). TDD is an approach to development in which you write a test that proves the piece of work you are working on, and then write the code required to pass the test. You then refactor that code to eliminate duplication and any overlap, and repeat until all of the work is completed. Philosophically, Agile practitioners see TDD as a tool either to improve requirements and design (specification) or to improve the quality of the code.  This is similar to the distinction between validation (are you doing the right thing) and verification (are you doing the thing right).
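As a minimal sketch of that test-first cycle in plain JavaScript (the `add` function and its test are invented for illustration):

```javascript
// Step 1 (red): write a test first for the behavior you want.
// Before add() is written, running this test fails, so the test is "red".
function testAdd() {
  console.assert(add(2, 3) === 5, 'add should sum two numbers');
}

// Step 2 (green): write just enough code to make the test pass.
function add(a, b) {
  return a + b;
}

// Step 3 (refactor): remove duplication and tidy the code while the
// passing test guards the behavior, then repeat with the next small test.
testAdd();
```

In a real project the tests would live in a framework such as Jasmine or JUnit rather than bare `console.assert` calls, but the red/green/refactor rhythm is the same.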

We also have a new entry from the Software Sensei, Kim Pries. Kim addresses cognitive load theory.  Cognitive load theory helps explain how learning and change occur at personnel, team and organizational levels.

Next week we will feature our interview with Jeff Dalton. Jeff and I talked about making Agile resilient.  Jeff posits that the CMMI can be used to strengthen and reinforce Agile. This is an important interview for organizations that are considering scaled Agile frameworks.

Upcoming Events

Upcoming DCG Webinars:

July 24 11:30 EDT – The Impact of Cognitive Bias On Teams

Check these out at www.davidconsultinggroup.com

I will be attending Agile 2014 in Orlando, July 28 through August 1, 2014.  It would be great to get together with SPaMCAST listeners, let me know if you are attending.

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

SPaMCAST 295 - TDD, Software Sensei, Cognitive Load

Software Process and Measurement Cast - Sun, 06/22/2014 - 22:00

SPaMCAST 295 features our essay on Test Driven Development (TDD). TDD is an approach to development in which you write a test that proves the piece of work you are working on, and then write the code required to pass the test. You then refactor that code to eliminate duplication and any overlap, and repeat until all of the work is completed. Philosophically, Agile practitioners see TDD as a tool either to improve requirements and design (specification) or to improve the quality of the code.  This is similar to the distinction between validation (are you doing the right thing) and verification (are you doing the thing right).

We also have a new entry from the Software Sensei, Kim Pries. Kim addresses cognitive load theory.  Cognitive load theory helps explain how learning and change occur at personnel, team and organizational levels.

Next week we will feature our interview with Jeff Dalton. Jeff and I talked about making Agile resilient.  Jeff posits that the CMMI can be used to strengthen and reinforce Agile. This is an important interview for organizations that are considering scaled Agile frameworks.

Upcoming Events

Upcoming DCG Webinars:

July 24 11:30 EDT - The Impact of Cognitive Bias On Teams

Check these out at www.davidconsultinggroup.com

I will be attending Agile 2014 in Orlando, July 28 through August 1, 2014.  It would be great to get together with SPaMCAST listeners, let me know if you are attending. http://agile2014.agilealliance.org/

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1

http://www.pnsqc.org/international-conference-software-quality-test-management-2014/

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA. 

http://www.neqc.org/conference/60/location.asp

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events! 

The Software Process and Measurement Cast has a sponsor.

As many of you know I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI's mission is to pull together the expertise and educational efforts of the world's leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, neither for you or your team." Support SPaMCAST by buying the book here.

Available in English and Chinese. 

Categories: Process Management

Parkinson’s Law and The Myth of 100% Utilization


Nature abhors a vacuum, thus a line forms.

One of the common refrains in management circles is that work expands to fill the time available. This is known as Parkinson’s Law. The implication is that if work expands to fill the available time, then managers should overload backlogs to ensure time is spent in the most efficient manner. Historical evidence from the UK Civil Service and experimental evidence from staffing contact centers backs up that claim. Given the existence of data supporting Parkinson’s Law, many IT managers and project managers strive to ensure that full utilization is planned and monitored.  However, the focus on planning 100% utilization in software teams is potentially counterproductive because it generates planning errors and compression.

In classic project management, some combination of estimators, project managers and team members builds a list of the tasks needed to deliver the project’s requirements.  These work breakdown structures are ordered based on predecessors, successors and the team’s capacity.  Utilization of each team member is meticulously balanced to a prescribed level (generally 100%).  Once the project begins, the real world takes over and WHAM: something unanticipated crops up, or a group of tasks turns out to be more difficult than anticipated. These are schedule errors. Rarely do the additions to the schedule ever balance with the subtractions.  As soon as the plan is disrupted something has to give. And while re-planning does occur, the usual approach is to work longer hours or to cut corners. Both cutting corners and tired team members can, and generally do, lead to increased levels of technical debt.

Over-planning, also known in many circles as setting stretch goals, generates immediate schedule compression. In this scenario, the schedule is compressed through a number of techniques, including adding people to the team, working more hours or days, or the infamous step of cutting testing. These same techniques are leveraged in projects where planning errors overwhelm any contingency.  Schedule compression increases risk, cost, team stress and technical debt.  Compression can (and does) occur in both classic and Agile projects when teams are pushed to take on more work than they can deliver given their capacity.

In projects with high levels of transparency these tradeoffs are based on business decisions. In some cases the date might be more important than quality, cost and the long-term health of the team.  Making that type of decision rarely makes sense, but when it does it must be made with knowledge of the consequences.

Agile teams have natural antidotes for Parkinson’s Law: the prioritized backlog, the burn-down chart and the daily standup/Scrum meeting. On a daily basis team members discuss the work they have completed and will complete.  When the sprint backlog is drawn down, the team can (with the product owner’s assent) draw new stories into the sprint.  The burn-down chart helps the team understand how they are consuming their capacity to complete work.

Whether you use Agile or classic project management techniques, Parkinson’s Law can come into play. However, the typical response of planning and insisting on 100% utilization might lead to a situation where the cure is not worth the pain delivered in the treatment. In all cases, slack must be planned to account for the oft-remarked “stuff” that happens, and teams must be both responsible and accountable for delivering value with the time at their disposal.

 


Categories: Process Management

How to verify Web Service State in a Protractor Test

Xebia Blog - Sat, 06/21/2014 - 08:24

Sometimes it can be useful to verify the state of a web service in an end-to-end test. In my case, I was testing a web application that was using a third-party Javascript plugin that logged page views to a Rest service. I wanted to have some tests to verify that all our web pages did include the plugin, and that it was communicating with the Rest service properly when a new page was opened.
Because the webpages were written with AngularJS, Protractor was our framework of choice for our end-to-end test suite. But how to verify web service state in Protractor?

My first draft of a Protractor test looked like this:

var businessMonitoring = require('../util/businessMonitoring.js');
var wizard = require('./../pageobjects/wizard.js');

describe('Business Monitoring', function() {
  it('should log the page name of every page view in the wizard', function() {
    wizard.open();
    expect(wizard.activeStepNumber.getText()).toBe('1');

    // We opened the first page of the wizard and we expect it to have been logged
    expect(businessMonitoring.getMonitoredPageName()).toBe('/wizard/introduction');

    wizard.nextButton.click();
    expect(wizard.completeStep.getAttribute('class')).toContain('active');
    // We have clicked the 'next' button so the 'completed' page has opened,
    // this should have been logged as well
    expect(businessMonitoring.getMonitoredPageName()).toBe('/wizard/completed');
  });
});

The next thing I had to write was the businessMonitoring.js script, which should somehow make contact with the Rest service to verify that the correct page name was logged.
First I needed a simple package for making HTTP requests. I found the 'request' npm package, which provides a simple API to make an HTTP request like this:

var request = require('request');

var executeRequest = function(method, url) {
  var defer = protractor.promise.defer();
  
  // method can be 'GET', 'POST' or 'PUT'
  request({uri: url, method: method, json: true}, function(error, response, body) {

    if (error || response.statusCode >= 400) {
      defer.reject({
        error : error,
        message : response
      });
    } else {
      defer.fulfill(body);
    }
  });

  // Return a promise so the caller can wait on it for the request to complete
  return defer.promise;
};

Then I completed the businessMonitoring.js script with a method that gets the last request from the Rest service, using the request package.
It looked like this:

var businessMonitoring = exports; 

< .. The request wrapper with the executeRequest method is included here, left out here for brevity ..>

businessMonitoring.getMonitoredPageName = function () {

    var defer = protractor.promise.defer();

    executeRequest('GET', 'lastRequest')  // Calls the method which was defined above
      .then(function success(data) {
        defer.fulfill(data.url);
      }, function error(e) {
        defer.reject('Error when calling BusinessMonitoring web service: ' + e);
      });

    return defer.promise;
 };

It just fires a GET request to the Rest service to see which page was logged. Because it is an Ajax call, the result is not immediately available, so a promise is returned instead.
But when I plugged the script into my Protractor test, it didn't work.
I could see that the requests to the Rest service were done, but they were done immediately before any of my end-to-end tests were executed.
How come?

The reason is that Protractor uses the WebdriverJS framework to handle its control flow. Statements like expect(), which we use in our Protractor tests, don't execute their assertions immediately, but instead they put their assertions on a queue. WebdriverJS first fills the queue with all assertions and other statements from the test, and then it executes the commands on the queue. Click here for a more extensive explanation of the WebdriverJs control flow.

That means that all statements in Protractor tests need to return promises, otherwise they will execute immediately when Protractor is only building its test queue. And that's what happened with my first implementation of the businessMonitoring mock.
The solution is to let the getMonitoredPageName return its promise within another promise, like this:

var businessMonitoring = exports; 

businessMonitoring.getMonitoredPageName = function () {
  // Return a promise that will execute the rest call,
  // so that the call is only done when the controlflow queue is executed.
  var deferredExecutor = protractor.promise.defer();

  deferredExecutor.then(function() {
    var defer = protractor.promise.defer();

    executeRequest('GET', 'lastRequest')
      .then(function success(data) {
        defer.fulfill(data.url);
      }, function error(e) {
        defer.reject('Error when calling BusinessMonitoring mock: ' + e);
      });

    return defer.promise;
  });

  return deferredExecutor;
};

Protractor takes care of resolving all the promises, so the code in my Protractor test did not have to be changed.

Why Size As Part of Estimation?

Trail length is an estimate of size, while the time need to hike it is another story!

More than occasionally I am asked, “Why should we size as part of estimation?”  In many cases the actual question is, “Why can’t we just estimate hours?”  It is a good idea to size for many reasons, such as generating an estimate in a quantitative, repeatable process, but in the long run, sizing is all about the conversation it generates.

It is well established that size provides a major contribution to the cost of an engineering project.  In houses, bridges, planes, trains and automobiles the use of size as part of estimating cost and effort is a mature behavior. The common belief is that size can and does play a similar role in software. Estimation based on size (also known as parametric estimation) can be expressed as a function of size, complexity and capabilities.

E = f(size, complexity, capabilities)

In a parametric estimate these three factors are used to develop a set of equations that include a productivity rate, which is used to translate size into effort.
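A sketch of how such an equation might be applied (the baseline productivity rate and the adjustment factors below are invented for illustration, not values from any real estimation model):

```javascript
// E = f(size, complexity, capabilities), expressed with a productivity
// rate that translates size into effort. All numbers are hypothetical.
function parametricEstimate(sizeInFunctionPoints, complexityFactor, capabilityFactor) {
  var baselineHoursPerPoint = 10;        // hypothetical productivity rate
  return sizeInFunctionPoints * baselineHoursPerPoint
    * complexityFactor                   // > 1 for harder-than-average problems
    / capabilityFactor;                  // > 1 for stronger-than-average teams
}

// A 200 function point project, 20% more complex than average,
// staffed by a team 25% more capable than average:
var effortHours = parametricEstimate(200, 1.2, 1.25);
// effortHours works out to roughly 1,920 hours
```

Commercial parametric tools use far richer calibration data, but they follow this same shape: size drives the estimate, while complexity and capability factors adjust it.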

Size is a measure of the functionality that will be delivered by the project.  The bar for any project-level size measure is whether it can be known early in the project, whether it is predictive and whether the team can apply the metric consistently.  Lines of code is a popular physical measure, function points are the most popular functional measure, and story points are the most common relative measure of size.

Complexity refers to the technical complexity of the work being done and includes numerous properties of a project (examples of complexity could include code structure, math and logic structure).  Business problems with increased complexity generally require increased levels of effort to satisfy them.

Capabilities include the dimensions of skills, experience, processes, team structure and tools (estimation tools include a much broader list).  Variation in each capability influences the level of effort the project will require.

Parametric estimation is a top-down approach to generating a project estimate.  Planning exercises are then used to convert the effort estimate into a schedule and duration.  Planning is generally a bottom-up process driven by the identification of tasks, order of execution and specific staffing assignments.  Bottom-up planning can be fairly accurate and precise over short time horizons. Top-down estimation is generally easier than bottom-up estimation early in a project, while task-based planning makes sense in tactical, short-term scenarios. Examples of estimation and planning in an Agile project include iteration/sprint planning, which includes planning poker (sizing) and task planning (a bottom-up plan).  A detailed schedule built from tasks in a waterfall project would be an example of a bottom-up plan.  As most of us know, plans become less accurate as we push them further into the future, even if they are done to the same level of precision. Size-based estimation provides a mechanism to predict the rough course of the project before release planning can be performed, and then serves again as a tool to support and triangulate release planning.

The act of building a logical case for a function point count or participating in a planning poker session helps those that are doing an estimate to collect, organize and investigate the information that is known about a need or requirement.  As the data is collected, questions can be asked and conversations had which enrich understanding and knowledge.  The process of developing the understanding needed to estimate size provides a wide range of benefits ranging from simply a better understanding of requirements to a crisper understanding of risks.

A second reason for estimating size as a separate step in the process is that separating it out allows a discussion of velocity or productivity as a separate entity.  By fixing one part of the size, the complexity and capability equation, we gain greater focus on the other parts like team capabilities, processes, risks or changes that will affect velocity.  Greater focus leads to greater understanding, which leads to a better estimate.

A third reason for estimating the size of a software project as part of the overall estimation process is that by isolating the size of the work, the estimate can more easily be re-scaled when capabilities change or knowledge about the project increases. In most projects that exist for more than a few months, understanding of the business problem, how to solve that problem and the capabilities of the team increase, while at the same time the perceived complexity[1] of the solution decreases. If a team has jumped from requirements or stories directly to an effort estimate, it will take more effort to re-estimate the remaining work, because they will not be able to reuse the previous estimate; the original rationale will have changed. When you have captured size, re-estimation becomes a re-scaling exercise. Re-scaling is much closer to a simple math exercise (productivity x size), which saves time and energy.  At best, full re-estimation is more time consuming and yields the same value.  The ability to re-scale will aid in sprint planning and in release planning. Why waste time when we should be focusing on delivering value?
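The re-scaling arithmetic (productivity x size) can be sketched as follows (the velocity and backlog sizes are hypothetical):

```javascript
// Re-scaling an estimate when size is captured separately from effort.
// Velocity would be observed from completed sprints; numbers are hypothetical.
function sprintsRemaining(remainingStoryPoints, observedVelocity) {
  return remainingStoryPoints / observedVelocity;
}

var forecast = sprintsRemaining(120, 20);        // 120 points at 20 points/sprint -> 6 sprints

// When new knowledge grows the backlog, only the size term changes;
// the productivity term carries over unchanged:
var revisedForecast = sprintsRemaining(150, 20); // -> 7.5 sprints
```

Because size and productivity are held separately, the revision is a one-line recalculation rather than a fresh estimation exercise.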

Finally, why size?  In the words of David Herron, author and Vice President of Solution Services at the David Consulting Group, "Sizing is all about the conversation that it generates."  Conversations create a crisper, deeper understanding of the requirements and of the steps needed to satisfy the business need.  Determining the size of the project is a tool for focusing a discussion on whether the requirements are understood.  If a requirement can't be sized, you can't know enough to actually fulfill it.  Planning poker is an example of a sizing conversation. I am always amazed at the richness of the information exposed during a group planning poker session (please remember to take notes).  The conversation provides many of the nuances a written story or requirement just can't.

Estimates, by definition, are wrong.  The question is just how wrong.  The search for knowledge generated by the conversations needed to size a project provides the best platform for starting a project well.  That same knowledge provides the additional inputs needed to complete the size, complexity and capability equation and yield a project estimate.  If you are asked, "Why size?" it might be tempting to fire off the answer "Why not?", but in the end I think you will change more minds by making the quantitative arguments first and then suggesting that it is all about the conversation.

Check out an audio version of this essay as part of SPaMCAST 201.

[1] Perceived complexity is more important than actual complexity, because perception drives behavior more directly than actual complexity does.


Categories: Process Management

Concordion without the JUnit Code

Xebia Blog - Fri, 06/20/2014 - 20:58

Concordion is a framework to support Behaviour Driven Development. It uses JUnit to run tests and HTML enriched with a little Concordion syntax to call fixture methods and make assertions on the test outcome. I won't describe Concordion itself because it is well documented here: http://concordion.org/.
Instead I'll describe a small utility class I've created to avoid code duplication. Concordion requires a JUnit class for each test. The utility described below allows you to run all Concordion tests without writing a separate JUnit class for each one.

In Concordion you specify test cases and expected outcomes in an HTML file. Each HTML file is accompanied by a Java class of the same name that carries the @RunWith(ConcordionRunner.class) annotation. This Java class is comparable to FitNesse's fixture classes: here you create the methods that the HTML file uses to connect the test case to the business code.
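A typical fixture of this kind might look like the following (the class and method names are illustrative; the runner class shown is Concordion's JUnit 4 runner, and a matching HelloWorld.html is expected next to the class on the classpath):

```java
import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

// Paired with a HelloWorld.html in the same package; the HTML calls
// getGreeting() and asserts on its return value.
@RunWith(ConcordionRunner.class)
public class HelloWorld {
    public String getGreeting() {
        return "Hello World!";
    }
}
```

Multiply this by dozens of tests and the boilerplate problem described below becomes obvious.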

In my particular use case the team ended up writing lots of mostly empty Java files. The system under test processed XML message files, so all each test needed to do was call a single method to hand the XML to the business code and then validate the results. Each Java class was basically the same, except for its name.

To avoid this duplication I created a class that uses the Javassist library to generate a Java class on the fly and run it as a JUnit test. You can find my code on GitHub:

git clone git@github.com:jvermeir/concordionDemo

ConcordionRunner generates a class from a template. The template can be really simple, as in my example where FixtureTemplate extends MyFixture. MyFixture holds all the fixture code that connects the test to the application under test; this is where we would put the code necessary to call a service with an XML message. In the example there's just the single getGreeting() method.
HelloWorldAgain.html is the actual Concordion test. It shows the call to getGreeting(), which is a method of MyFixture.
The dependencies are like this:
FixtureTemplate extends MyFixture
YourTest.html uses Concordion
Example uses ConcordionRunner uses JUnitCore
The example in Example.java shows how to use ConcordionRunner to execute a test. This could easily be extended to go through a directory recursively and execute all tests found. Note that Example writes the generated class to a file; this may help in troubleshooting but isn't strictly necessary.
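The core of the on-the-fly generation can be sketched with Javassist roughly as follows. This is my simplified reconstruction, not the repository's actual code: the class names are illustrative, and the MyFixture stand-in is inlined here only to keep the sketch self-contained (in the real project it lives in its own file):

```java
import javassist.ClassPool;
import javassist.CtClass;
import javassist.bytecode.AnnotationsAttribute;
import javassist.bytecode.ClassFile;
import javassist.bytecode.ConstPool;
import javassist.bytecode.annotation.Annotation;
import javassist.bytecode.annotation.ClassMemberValue;

// Stand-in for the shared fixture described above (illustrative).
class MyFixture {
    public String getGreeting() {
        return "Hello World!";
    }
}

public class GenerateFixture {
    // Builds the bytecode equivalent of:
    //   @RunWith(ConcordionRunner.class)
    //   public class <name> extends MyFixture {}
    public static Class<?> generate(String name) throws Exception {
        ClassPool pool = ClassPool.getDefault();
        CtClass cc = pool.makeClass(name);
        cc.setSuperclass(pool.get(MyFixture.class.getName()));

        // Attach the @RunWith(ConcordionRunner.class) annotation.
        ClassFile cf = cc.getClassFile();
        ConstPool cp = cf.getConstPool();
        AnnotationsAttribute attr =
                new AnnotationsAttribute(cp, AnnotationsAttribute.visibleTag);
        Annotation runWith = new Annotation("org.junit.runner.RunWith", cp);
        runWith.addMemberValue("value", new ClassMemberValue(
                "org.concordion.integration.junit4.ConcordionRunner", cp));
        attr.addAnnotation(runWith);
        cf.addAttribute(attr);

        return cc.toClass();
    }

    public static void main(String[] args) throws Exception {
        Class<?> generated = generate("HelloWorldAgain");
        System.out.println(generated.getSuperclass().getName());
        // Handing the generated class to org.junit.runner.JUnitCore.runClasses()
        // would then run it as a Concordion test, provided a matching
        // HelloWorldAgain.html is on the classpath.
    }
}
```

One generated class per HTML file replaces one hand-written, mostly empty Java file per HTML file.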
Now it would be nice to adapt the Eclipse plugin so you could right-click the HTML file and run it as a Concordion test without adding a unit test.

Quote of the Month June 2014

From the Editor of Methods & Tools - Fri, 06/20/2014 - 06:39
A UX team that deals with only the details of radio buttons and check boxes is committing a disservice to its organization. Today UX groups must deal with strategy. Source: Institutionalization of UX (2nd Edition), Eric Schaffer & Apala Lahiri, Addison-Wesley