Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools



Story Points Are Still About Effort

Mike Cohn's Blog - 4 hours 50 min ago

Story points are about time. There, I’ve said it, and can’t be more clear than that. I’ve written previously about why story points are about effort, not complexity. But I want to revisit that topic here.

The primary reason for estimating product backlog items is so that predictions can be made about how much functionality can be delivered by what date. If we want to estimate what can be delivered by when, we’re talking about time. We need to estimate time. More specifically, we need to estimate effort, which is essentially the person-days (or hours) required to do something.

Estimating something other than effort may be helpful, but we can’t use it to answer questions about when a project can be delivered. For example, suppose a team were to estimate for each product backlog item how many people would be involved in delivering that item.

One item might involve only a programmer and a tester, so it is given a “two.” Another item might involve two programmers, a designer, a database engineer, and a tester. So it is given an estimate of “five.”

It is entirely possible that the product backlog item involving only two people will take significantly longer than the one involving five people. This would be the case if the two people were involved intensely for days while the five were only involved for a few hours.

We may say that the number of people involved in delivering a product backlog item is a proxy for how long the feature will take to develop. In fact, I’d suspect that if we looked at a large number of product backlog items, we would see that those involving more people do, on average, take longer than those involving fewer people.

However, I’m equally sure we’d see lots of counter-examples, like that of the five and two people above. This means that the number of people involved is not a very good proxy for the effort involved in delivering the feature.

This is the problem with equating story points with complexity. Complexity is a factor in how long a product backlog item will take to develop. But complexity is not the only factor, and it is not sufficiently explanatory that we can get by with estimating just the complexity of each product backlog item.

Instead, story points should be an estimate of how long it will take to develop a user story. Story points represent time. This has to be so because time is what our bosses, clients and customers care about. They only care about complexity to the extent it influences the amount of time something will take.

So story points represent the effort involved to deliver a product backlog item. An estimate of the effort involved can be influenced by risk, uncertainty, and complexity.

Let’s look at an example:

Suppose you and I are to walk to a building. We agree that it will take one walking point to get there. That doesn’t mean one minute, one mile or even one kilometer. We just call it one walking point. We could have called it 2, 5, 10 or a million, but let’s call it 1.

What’s nice about calling this one walking point is that you and I can agree on that estimate, even though you are going to walk there while I hobble over there on crutches. Clearly you can get there much faster than I can; yet using walking points, we can agree to call it one point.

Next, we point to another building and agree that walking to it will take two points. That is, we both think it will take us twice as long to get to.

Let’s add a third building. This building is physically the same distance as the two-point building. So we are tempted to call it a two. However, separating us from that building is a narrow walkway across a deep chasm filled with boiling lava. The walkway is just wide enough that we can traverse it if we’re extremely careful. But, one misstep, and we fall into the lava.

Even though this third building is the same physical distance as the building we previously estimated as two walking points, I want to put a higher estimate on this building because of the extra complexity in walking to it.

As long as I’m cautious, there’s no real risk of falling into the lava, but I assure you I am going to walk more slowly and deliberately across that walkway. So slow, in fact, that I’m going to estimate that building as four walking points away.

Make sense? The extra complexity has influenced my estimate.

Complexity influences an estimate, but only to the extent the extra complexity affects the effort involved in doing the work. Walking to the one-point building while singing “Gangnam Style” is probably more complex than walking there without singing. But the extra complexity of singing won’t affect the amount of time it takes me to walk there, so my estimate in this case would remain one.

Risk and uncertainty affect estimates similarly. Suppose a fourth building is also physically the same distance as the building we called a two. But in walking to that building we must cross some train tracks. And the train crosses at completely unpredictable times.

There is extra uncertainty in walking to that building—sometimes we get there in two points. Other times we get stuck waiting for the train to pass and it takes longer. On average, we might decide to estimate this building as a three.
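
The averaging in that last step is just an expected-value calculation. Here's a tiny sketch of it; the 60/40 split and the point values for each outcome are illustrative assumptions, not numbers from the example above:

```javascript
// Illustrative sketch: sometimes the trip costs 2 walking points, sometimes
// the train makes it cost more, so we estimate the probability-weighted average.
// The 60/40 split and the 4.5-point delayed trip are assumed numbers.
var outcomes = [
  { points: 2.0, probability: 0.6 }, // no train blocking the tracks
  { points: 4.5, probability: 0.4 }  // stuck waiting for the train
];

var expected = outcomes.reduce(function (sum, o) {
  return sum + o.points * o.probability;
}, 0);

console.log(expected); // about 3 walking points on average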

So, story points are about time—the effort involved in doing something. Because our bosses, clients and customers want to know when something will be done, we need to estimate with something based on effort. Risk, uncertainty and complexity are factors that may influence the effort involved.

Let me know what you think in the comments below.

React in modern web applications: Part 1

Xebia Blog - 7 hours 50 min ago

At Xebia we love to share knowledge! One of the ways we do this is by organizing 1-day courses during the summer. Together with Frank Visser we decided to do a training about full stack development with Node.js, AngularJS and Facebook's React. The goal of the training was to show the students how one could create a simple timesheet application. This application would use nothing but modern Javascript technologies while also teaching them best practices with regards to setting up and maintaining it.

To further share the knowledge gained during the creation of this training we'll be releasing several blog posts. In this first part we'll talk about why to use React, what React is and how you can incorporate it into your Grunt lifecycle.

This series of blog posts assumes that you're familiar with the Node.js platform and the Javascript task runner Grunt.

What is React?


React is a Javascript library for creating user interfaces made by Facebook. It is their answer to the V in MVC. As it only takes care of the user interface part of a web application React can be (and most often will be) combined with other frameworks (e.g. AngularJS, Backbone.js, ...) for handling the MC part.

In case you're unfamiliar with the MVC architecture, it stands for model-view-controller and it is an architectural pattern for dividing your software into 3 parts with the goal of separating the internal representation of data from the representation shown to the actual user of the software.

Why use React?

There are quite a lot of Javascript MVC frameworks which also allow you to model your views. What are the benefits of using React instead of for example AngularJS?

What sets React apart from other Javascript MVC frameworks like AngularJS is the way React handles UI updates. To dynamically update a web UI you have to apply DOM updates whenever data in your UI changes. These DOM updates, compared to reading data from the DOM, are expensive operations which can drastically slow down your application's responsiveness if you do not minimize the amount of updates you do. React took a clever approach to minimizing the amount of DOM updates by diffing a virtual DOM.

In contrast to the normal DOM consisting of nodes the virtual DOM consists of lightweight Javascript objects that represent your different React components. This representation is used to determine the minimum amount of steps required to go from the previous render to the next render. By using an observable to check if the state has changed React prevents unnecessary re-renders. By calling the setState method you mark a component 'dirty' which essentially tells React to update the UI for this component. When setState is called the component rebuilds the virtual DOM for all its children. React will then compare this to the current virtual sub-tree for the same component to determine the changes and thus find the minimum amount of data to update.

Besides efficient updates of only sub-trees, React batches these virtual DOM changes into real DOM updates. At the end of the React event loop, React will look up all components marked as dirty and re-render them.
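
As a rough sketch of the idea, consider the toy diff below. This is an illustrative simplification, not React's actual algorithm: it walks two lightweight object trees and collects only the differences, so the expensive real DOM would receive the minimum number of updates:

```javascript
// Toy virtual DOM diff (illustrative only, not React's real implementation).
// Nodes are plain objects: { type, text, children }. The diff walks both
// trees and records a patch only where something actually changed.
function diff(prev, next, path, patches) {
  path = path || 'root';
  patches = patches || [];
  if (!prev && !next) { return patches; }
  if (!prev) { patches.push({ op: 'add', path: path }); return patches; }
  if (!next) { patches.push({ op: 'remove', path: path }); return patches; }
  if (prev.type !== next.type) {
    // Different element type: replace the whole sub-tree.
    patches.push({ op: 'replace', path: path });
    return patches;
  }
  if (prev.text !== next.text) {
    patches.push({ op: 'text', path: path, text: next.text });
  }
  var prevKids = prev.children || [];
  var nextKids = next.children || [];
  var n = Math.max(prevKids.length, nextKids.length);
  for (var i = 0; i < n; i++) {
    diff(prevKids[i], nextKids[i], path + '/' + i, patches);
  }
  return patches;
}

var before = { type: 'div', children: [
  { type: 'span', text: 'Hello' },
  { type: 'span', text: 'world' }
]};
var after = { type: 'div', children: [
  { type: 'span', text: 'Hello' },
  { type: 'span', text: 'React' }
]};

// Only the one changed child produces a patch.
console.log(diff(before, after)); // [{ op: 'text', path: 'root/1', text: 'React' }]
```

Even though the whole virtual tree is rebuilt, only one real DOM node would be touched here, which is where the performance win comes from.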

How does React compare to AngularJS?

It is important to note that you can perfectly mix the usage of React with other frameworks like AngularJS for creating user interfaces. You can of course also decide to only use React for the UI and keep using AngularJS for the M and C in MVC.

In our opinion, using React for simple components does not give you an advantage over using AngularJS. We believe the true strength of React lies in demanding components that re-render a lot. React tends to really outperform AngularJS (and a lot of other frameworks) when it comes to UI elements that require a lot of re-rendering. This is due to how React handles UI updates internally as explained above.

JSX

JSX is a Javascript XML syntax transform recommended for use with React. It lets you write XML-like markup directly inside your Javascript, which is then compiled to plain Javascript function calls. Although JSX and React are independent technologies, JSX was built with React in mind. React works without JSX out of the box, but the React team recommends using it. Some of the many reasons for using JSX:

  • It's easier to visualize the structure of the DOM
  • Designers are more comfortable making changes
  • It's familiar for those who have used MXML or XAML

If you decide to go for JSX you will have to compile the JSX to Javascript before running your application. Later on in this article I'll show you how you can automate this using a Grunt task. Besides Grunt there are a lot of other build tools that can compile JSX. To name a few, there are plugins for Gulp, Broccoli or Mimosa.

An example JSX file for creating a simple link looks as follows:

/** @jsx React.DOM */
var link = React.DOM.a({href: 'http://facebook.github.io/react'}, 'React');

Make sure never to forget the starting comment or your JSX file will not be processed by the JSX transformer.

Components

With React you can construct UI views using multiple, reusable components. You can separate the different concerns of your application by creating modular components, gaining the same benefits you get from using functions and classes. You should strive to break down the common elements in your UI into reusable components, which will allow you to reduce boilerplate and keep things DRY.

You can construct component classes by calling React.createClass(). Each component has a well-defined interface and receives its data in the form of props. A component can have ownership over other components; in React, the owner of a component is the one setting the props of that component. An owner, or parent component, can access its children by calling this.props.children.

Using React you could create a hello world application as follows:

/** @jsx React.DOM */
var HelloWorld = React.createClass({
  render: function() {
    return <div>Hello world!</div>;
  }
});

Creating a component does not mean it will get rendered automatically. You have to define where you would like to render your different components using React.renderComponent as follows:

React.renderComponent(<HelloWorld />, targetNode);

By using for example document.getElementById or a jQuery selector you target the DOM node where you would like React to render your component and you pass it on as the targetNode parameter.

Automating JSX compilation in Grunt

To automate the compilation of JSX files you will need to install the grunt-react package using Node.js' npm installer:

npm install grunt-react --save-dev

After installing the package you have to add a bit of configuration to your Gruntfile.js so that the task knows where your JSX source files are located and where and with what extension you would like to store the compiled Javascript files.

react: {
  dynamic_mappings: {
    files: [
      {
        expand: true,
        src: ['scripts/jsx/*.jsx'],
        dest: 'app/build_jsx/',
        ext: '.js'
      }
    ]
  }
}

To speed up development you can also configure the grunt-contrib-watch package to keep an eye on JSX files. Watching for JSX files will allow you to run the grunt-react task whenever you change a JSX file resulting in continuous compilation of JSX files while you develop your application. You simply specify the type of files to watch for and the task that you would like to run when one of these files changes:

watch: {
  jsx: {
    files: ['scripts/jsx/*.jsx'],
    tasks: ['react']
  }
}

Last but not least you will want to add the grunt-react task to one or more of your grunt lifecycle tasks. In our setup we added it to the serve and build tasks.

grunt.registerTask('serve', function (target) {
  if (target === 'dist') {
    return grunt.task.run(['build', 'connect:dist:keepalive']);
  }

  grunt.task.run([
    'clean:server',
    'bowerInstall',
    'react',
    'concurrent:server',
    'autoprefixer',
    'configureProxies:server',
    'connect:livereload',
    'watch'
  ]);
});

grunt.registerTask('build', [
  'clean:dist',
  'bowerInstall',
  'useminPrepare',
  'concurrent:dist',
  'autoprefixer',
  'concat',
  'react',
  'ngmin',
  'copy:dist',
  'cdnify',
  'cssmin',
  'uglify',
  'rev',
  'usemin',
  'htmlmin'
]);
Conclusion

Due to React's different approach to handling UI changes, it is highly efficient at re-rendering UI components. Besides that, it is easy to configure and integrate into your build lifecycle.

What's next?

In the next article we'll be discussing how you can use React together with AngularJS, how to deal with state in your components and how to avoid passing through your entire component hierarchy using callbacks when updating state.

Traceability: Interpreting the Model

Tallying Up the Answers:
After assessing the three components (customer involvement, criticality and complexity), count the number of “yes” and “no” answers for each model axis. Plotting the results is merely a matter of indicating the number of yes and no answers on each axis. For example, if an appraisal yields:

Customer Involvement:  8 Yes, 1 No

Criticality:           7 Yes, 2 No

Complexity:            5 Yes, 4 No

The responses could be shown graphically as:

[Figure 1: radar plot of the appraisal results]

The Traceability model is premised on the idea that as criticality and complexity increase, the need for communication intensifies. Communication becomes more difficult as customer involvement shifts from intimate to arm’s length. Each component of the model influences the others to some extent. In circumstances where customer involvement is high, different planning and control tools must be utilized than when involvement is lower. The relationships among the axes will suggest a different implementation of traceability. In a perfect world, the model would be implemented as a continuum with an infinite number of nuanced implementations of traceability. In real life, continuums are difficult to implement. Therefore, for ease of use, I suggest an implementation of the model with three basic levels of traceability (the Three Bears Approach): Papa Bear, or formal/detailed tracking; Mama Bear, or formal with function-level tracking; and Baby Bear, or informal (but disciplined)/anecdote-based tracking. The three bears analogy is not meant to be pejorative; heavy, medium and light would work as well.

Interpreting the axes:
Assemble the axes you have plotted with the zero intercept at the center (see example below).

[Figure: example of the three assembled axes]

As noted earlier, I suggest three levels of traceability, ranging from agile to formal. In general, if the accumulated “No” answers exceed three (on any axis), an agile approach is not appropriate. An accumulated total of 7, 8 or 9 strongly suggests as formal an approach as possible should be used. Note that certain “No” answers are more equal than others. For example, in the Customer Involvement category, if ‘Agile Methods Used’ is no, it probably makes sense to raise the level of formality immediately. A future refinement of the model will create a hierarchy of questions and vary the impact of the responses based on that hierarchy. All components of the model are notional rather than carved in stone; implementing the model in specific environments will require tailoring. Apply the model through the filter of your experience. Organizational culture and experience will be most important on the cusps (the 3-4 and 6-7 yes-answer ranges).

Informal – Anecdote Based Tracing

Component Scores: No axis with more than three “No” answers.

Traceability will be accomplished through a combination of stories, test cases and, later, test results, coupled with the tight interplay between customer and developers found in agile methods. This will ensure that what was planned (and nothing unplanned) is implemented, and that what was implemented is what was planned.

Moderately Formal – Function Based Tracking

Component Scores: No axis with more than six “No” answers.

The moderately formal implementation of traceability links requirements to functions (each organization needs to define the precise unit; tracing use cases can be very effective when detailed-level control is not indicated) and to test cases (development and user acceptance). This type of linkage is typically accomplished using matrices and numbering, requirements tools or some combination of the two.

Formal – Detailed Traceability

Component Scores: One or more axes with more than six “No” answers.

The most formal version of traceability links individual requirements (detailed, granular requirements) through design components, code, test cases and results. This level of traceability provides the highest level of control and oversight. It can be accomplished using paper and pencil for small projects; however, for projects of any size, tools are required.
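
The component-score rules for the three levels can be sketched as a small function. This is a hypothetical helper, not part of the model itself, using the thresholds as stated: more than three “No” answers on any axis rules out the informal approach, and more than six calls for formal traceability:

```javascript
// Hypothetical scoring helper for the thresholds described above.
// Input: the number of "No" answers tallied on each of the three axes.
function traceabilityLevel(noCounts) {
  var worst = Math.max(noCounts.involvement, noCounts.criticality, noCounts.complexity);
  if (worst > 6) { return 'Formal - Detailed Traceability'; }
  if (worst > 3) { return 'Moderately Formal - Function Based Tracking'; }
  return 'Informal - Anecdote Based Tracing';
}

// The earlier example appraisal (8/7/5 Yes answers out of 9 questions),
// expressed as "No" counts per axis:
console.log(traceabilityLevel({ involvement: 1, criticality: 2, complexity: 4 }));
// -> 'Moderately Formal - Function Based Tracking'
```

Note how a single weak axis (the four “No” answers on complexity) is enough to push the recommendation up a level, which matches the "on any axis" wording of the rule.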

Caveats: As with all models, the proposed traceability model is a simplification of the real world; therefore, customization is expected. Three distinct levels of traceability may be too many for some organizations, or too few for others. One implemented version of the model swings between an agile approach (primarily for web-based projects where Scrum is being practiced) and the moderately formal model for other types of projects. For the example organization, adding additional layers has been difficult to implement without support to ensure high degrees of consistency. We found that leveraging project-level tailoring for specific nuances has been the most practical means for dealing with “one off” issues.

In practice, teams have reported major benefits to using the model.

The first benefit is that using the model ensures an honest discussion of risks, complexity and customer involvement early in the life of the project. The model works best when all project team members (within reason) participate in the discussion and assessment of the model. Facilitation is sometimes required to ensure that discussion paralysis does not occur. One organization I work with has used this mechanism as a team building exercise.

The second benefit is that the model allows project managers, coaches and team members to define the expectations for the processes to be used for traceability in a transparent, collaborative manner. The framework presented allows all parties to understand what drives where on the formality continuum your implementation of traceability will fall. It should be noted that once the topic of scaling is broached for traceability, it is difficult to contain the discussion to just this topic. I applaud those who embrace the discussion and would suggest that all project processes need to be scalable based on a disciplined and participative process that can be applied early in a project.

Examples:

Extreme examples are easy to assess without leveraging a model, a questionnaire or a graph. An extreme example would be a critical system where defects could be life threatening, such as a project to build an air traffic control system. The attributes of this type of project would include extremely high levels of complexity, a large system, many groups of customers each with differing needs, and probably a hard deadline with large penalties for missing the date or for any misses on anticipated functionality. The model recommends that detailed requirements traceability is a component of the path to success. A similar example could be constructed for the model agile project, in which intimate customer involvement can substitute for detailed traceability.

A more illustrative example would be for projects that inhabit gray areas. The following example employs the model to suggest a traceability approach.

An organization (The Org) engaged a firm (WEB.CO), after evaluating a series of competitive bids, to build a new ecommerce web site. The RFP required the use of several Web 2.0 community and ecommerce functions. The customer felt they had defined the high-level requirements in the RFP. WEB.CO uses some agile techniques on all projects in which it is engaged: defining user stories, two-week sprints, a coach to support the team, co-located teams and daily builds. The RFP and negotiations indicated that the customer would not be on-site and at times would have constraints on its ability to participate in the project. These early pronouncements on involvement were deemed to be non-negotiable. The contract included performance penalties that WEB.CO wished to avoid. The site was considered critical to the customer’s business. Delivery of the site was timed to coincide with the initial introduction of the business. Let’s consider how we would apply the questionnaire in this case.

Question   Involvement      Complexity   Criticality
1          Yes              Yes          No
2          No               Yes          No
3          No               Yes          Unknown (need to know)
4          Yes              Yes          Yes
5          Yes (inferred)   Yes          Yes
6          Yes              Yes          No
7          Yes              Yes          No
8          Yes              Yes          No
9          Yes              Yes          Yes

Graphically the results look like:

[Figure 2: radar plot of the WEB.CO appraisal results]

Running the numbers on the individual radar plot axes highlights the high degree of perceived criticality for this project. The model recommends the moderate level of traceability documentation. As a final note, if this were a project I was involved in, I would keep an eye on the weakness in the involvement category. Knowing that there are weaknesses in customer involvement will help ensure you do not rationalize away the criticality score.


Categories: Process Management

SPaMCAST 305 – Estimation

Software Process and Measurement Cast - Sun, 08/31/2014 - 22:00

Listen to SPaMCAST 305 at www.spamcast.net

Software Process and Measurement Cast number 305 features our essay on Estimation. Estimation is a hotbed of controversy. We begin by synchronizing on what we think the word means. Then, once we have a common vocabulary, we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

The essay begins:

Software project estimation is a conflation of three related but different concepts: budgeting, estimation and planning. These are typical in a normal commercial organization; however, they might be called different things depending on your business model. For example, organizations that sell software services typically develop sales bids instead of budgets. Once the budget is developed, the evolution from budget to estimate and then plan follows a unique path as the project team learns about the project.

Next

Software Process and Measurement Cast number 306 features our interview with Luis Gonçalves.  We discussed getting rid of performance appraisals.  Luis makes the case that performance appraisals hurt people and companies.

Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives – September 18, 2014, 11:30 EDT. Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise.

Agile Risk Management – It Is Still Important! – October 24, 2014, 11:30 EDT. Has the adoption of Agile techniques magically erased risk from software projects? Or have we just changed how we recognize and manage risk? Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM – 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast gets some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself, was published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management

SPaMCAST 305 - Estimation Essay

Software Process and Measurement Cast - Sun, 08/31/2014 - 22:00

Software Process and Measurement Cast number 305 features our essay on Estimation.  Estimation is a hot bed of controversy. We begin by synchronizing on what we think the word means.  Then, once we have a common vocabulary we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

The essay begins:

Software project estimation is a conflation of three related but different concepts. The three concepts are budgeting, estimation and planning.  These are typical in a normal commercial organization, however these concepts might be called different things depending your business model.  For example, organizations that sell software services typically develop sales bids instead of budgets.  Once the budget is developed the evolution from budget to estimate and then plan follows a unique path as the project team learns about the project.

Next

Software Process and Measurement Cast number 306 features our interview with Luis Gonçalves.  We discussed getting rid of performance appraisals.  Luis makes the case that performance appraisals hurt people and companies.

Upcoming Events

DCG Webinars:

Raise Your Game: Agile Retrospectives September 18, 2014 11:30 EDT Retrospectives are a tool that the team uses to identify what they can do better. The basic process – making people feel safe and then generating ideas and solutions so that the team can decide on what they think will make the most significant improvement – puts the team in charge of how they work. When teams are responsible for their own work, they will be more committed to delivering what they promise. Agile Risk Management – It Is Still Important! October 24, 2014 11:230 EDT Has the adoption of Agile techniques magically erased risk from software projects? Or, have we just changed how we recognize and manage risk?  Or, more frighteningly, by changing the project environment through adopting Agile techniques, have we tricked ourselves into thinking that risk has been abolished?

 

Upcoming: ITMPI Webinar!

We Are All Biased!  September 16, 2014 11:00 AM - 12:30 PM EST

Register HERE

How we think and form opinions affects our work whether we are project managers, sponsors or stakeholders. In this webinar, we will examine some of the most prevalent workplace biases such as anchor bias, agreement bias and outcome bias. Strategies and tools for avoiding these pitfalls will be provided.

Upcoming Conferences:

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many you know I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast some support if you sign up here. All the revenue our sponsorship generates goes for bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.

Categories: Process Management

Traceability: Criticality

The final axis in the model is ‘criticality’. Criticality is defined as the quality, state or degree of being of the highest importance. The problem with criticality is that the concept is far easier to recognize than to define precisely. This attribute of projects fits the old adage, ‘I will know it when I see it’. Each person on a project will be able to easily identify what they think is critical. The difficulty is that each person has their own perception of what is most important, and that perception will change over time. This makes it imperative to define a set of questions or status indicators to appraise criticality consistently. The appraisal process uses “group think” to find the central tendency in teams and consolidate the responses. Using a consensus model to develop the appraisal will also help ensure that a broad perspective is leveraged. It is also important to remember that any appraisal is specific to a point in time and that the responses to the assessment can and will change over time. I have found that the following factors can be leveraged to assess importance and criticality:

Perceived moderate level of business impact (positive or negative)    y/n
Project does not show significant time sensitivity    y/n
Fall back position exists if the project fails    y/n
Low possibility of impacting important customers    y/n
Project is not linked to other projects    y/n
Project not required to pay the bills    y/n
Project is not labeled “Mission Critical”    y/n
Normal perceived value to the stakeholders    y/n
Neutral impact on the organizational architecture    y/n

Since each project has its own set of hot button issues, other major contributors can be substituted. However, be careful to understand the impact of the questions and the inter-relationships between the categories. The model does recognize that there will always be some overlap between responses.
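As an illustrative sketch (the variable names and the idea of reducing the appraisal to a simple tally are my assumptions, not part of the original model), the y/n checklist above can be scored in a few lines: each question is phrased so that a ‘Y’ indicates lower criticality, so counting the ‘N’ answers gives a rough criticality signal.

```python
# Hypothetical sketch of the criticality appraisal. Each question is
# phrased so that "Y" indicates lower criticality; counting the "N"
# answers therefore gives a rough criticality signal.

CRITICALITY_QUESTIONS = [
    "Perceived moderate level of business impact",
    "Project does not show significant time sensitivity",
    "Fall back position exists if the project fails",
    "Low possibility of impacting important customers",
    "Project is not linked to other projects",
    "Project not required to pay the bills",
    "Project is not labeled 'Mission Critical'",
    "Normal perceived value to the stakeholders",
    "Neutral impact on the organizational architecture",
]

def appraise_criticality(answers):
    """answers: dict mapping question -> 'Y' or 'N'.
    Unanswered or 'maybe' questions count as 'N', per the model's
    'a maybe is a no' convention."""
    return sum(1 for q in CRITICALITY_QUESTIONS
               if answers.get(q, "N").upper() != "Y")

answers = {q: "Y" for q in CRITICALITY_QUESTIONS}
answers["Project is not labeled 'Mission Critical'"] = "N"
print(appraise_criticality(answers))  # 1
```

The tally is only a starting point for the consensus discussion the text describes, not a substitute for it.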

Perceived Moderate Business Impact: Projects that are perceived to have a significant business impact are treated as more important than those that are not. There are two aspects to the perception of importance. The first aspect is to determine whether or not the project team believes that their actions will have an impact on the outcome. The second aspect is whether the organization’s management acts as if they believe that the project will have a significant business impact (acting as if there will be an impact is more important than whether it is “true” – at least in the short term). The perception as to whether the impact will be positive or negative is less important than the perception of the degree of the impact (a perception of a large impact will cause a large reaction). Assessment Tip: If both the project team and the organization’s management perceive that the project will have only a moderate business impact, appraise this attribute as a “Y”. If management does not perceive significance, or does not act as if anything out of the ordinary is occurring, I would strongly suggest rating this attribute as a “Y”.

Lack of Significant Time Sensitivity for Delivery: Time sensitivity is the relationship between value and when the project is delivered. An example might be the implied time sensitivity when trying to be first to market with a new product; the perception of time sensitivity creates a sense of urgency which is central to criticality. While time is one of the legs of the project management iron triangle (identified in the management constraints above), this attribute measures the relationship between business value and delivery date. Assessment Tip: If the team perceives a higher than normal time sensitivity to delivery, appraise this attribute as ‘N’.

Fall Back Available: All or nothing projects, or projects without fall backs, impart a sense of criticality that can easily be recognized (usually by the large bottles of TUMS at project managers’ desks). These types of projects occur, but are rare. Assessment Tip: A simple test for whether the project is ‘all or nothing’ is to determine whether the team understands that if the project is implemented and works, everybody is good, and if it does not work, everyone gets to look for a job; if so, appraise this as an ‘N’. Note: This assumes that a project is planned to be an all or nothing scenario (must be done) and is not just an artifact of poor planning, albeit the impact might be the same.

Low Possibility of Impacting Important Customers: Any software has a possibility of impacting an important customer or block of customers. However, determining the level of that possibility and the significance of the impact, if one occurs, can be a bit of an art form (or at least risk analysis). Impact is defined, for this attribute, as an effect that, if noticed, would be outside of the customers’ expectations. Assessment Tip: If the project is targeted at delivering functionality for an important customer, assess this as ‘N’; if it is not directly targeted but there is a high probability of an impact regardless of whom the change is targeted toward, also assess this attribute as ‘N’.

Projects Not Interlinked: Projects whose outcomes are linked to other projects require closer communication. The situation is analogous to building a bridge from both sides of the river and hoping the two halves meet in the middle. Tools – such as traceability – that formally identify, communicate and link the requirements of one project to another substantially increase the chances of the bridge meeting in the middle. Note: that is not to say that formally documented traceability is the only method that will deliver results. The model’s strength is that it is driven by the confluence of multiple attributes to derive recommendations. Assessment Tip: If the outcome of a project is required for another project (or vice versa), assess this attribute as an ‘N’. Note: “required” means that one project cannot occur without the functionality delivered by the other project. It is easy to mistake the inter-linkage of people for interlinked functionality; I would suggest that the former is a different management problem than the one we are trying to solve.

Not Directly Linked to Paying the Bills: There are projects that are a paler reflection of a “bet the farm” scenario. While there are very few true “bet the farm” projects, there are many projects in the tier just below. These ‘second tier’ projects would cause substantial damage to the business and/or to your CIO’s career if they fail, as they are tied to delivering business value (i.e. paying the bills). Assessment Tip: Projects that meet the “bet the farm” test, or at least the “bet the pasture” test (major impact on revenue or the CIO’s career), can be smelled a mile away; these should be assessed as an “N”. It should be noted that if a project has been raised to this level of urgency artificially, it should be appraised as “Y”. Another tip: projects with the words SAP or PeopleSoft should automatically be assessed as an “N”.

Indirectly Important to Mission: The title “important to mission” represents a long view of the impact of the functionality being delivered by a project. An easy gauge for importance is to determine whether the project can be directly linked to the current or planned core products of the business. Understanding linkages is critical to determining whether a project is important to the mission of the organization. Remember, projects can be important to paying the bills, but not core to the mission of the business. For example, a major component of a clothing manufacturer that I worked for after I left university was its transportation group. Projects for this division were important for paying the bills, but at the same time, they were not directly related to the mission of the business, which was the design and manufacture of women’s clothing. As an outsider, one quick test for importance to mission is to simply ask the question, “What is the mission of the organization and how does the project support it?” Not knowing the answer is either a sign to ask a lot more questions, or a sign that the project is not important to mission. Assessment Tip: If the project is directly linked to the delivery of a core (current or planned) product, assess this attribute as an ‘N’. Appraisal of this attribute can engender passionate debate; most project teams want to believe that the project they are involved in is important to mission. Perception is incredibly important: if there is a deeply held passion that the project is directly important to the mission of the organization, assess it as an ‘N’.

Moderate Perceived Value to the Stakeholders: Any perception of value is difficult at more than an individual level. Where stakeholders are concerned, involvement clouds rational assessment. Simply put, stakeholders perceive most of the projects they are involved in as having more than a moderate level of value. Somewhere in their mind, stakeholders must be asking, why would I be involved with anything of moderate value? The issue is that most projects will deliver, at best, an average value. Assessment Tip: Assuming that you have access to the projected ROI (quantitative and non-quantitative) for the project you are involved in, you have the basis for a decision. A rule of thumb: for projects projected to deliver an ROI that is 10% or more of the organization’s or department’s value, appraise this as an ‘N’. Using the derived ROI assumes that the evaluations are worth more than the paper they are printed on. If you are not tracking the delivery of benefits after the project, any published ROI is suspect.

Neutral to Organizational Architecture: This attribute assesses the degree of impact the functionality/infrastructure to be delivered will have on the organization’s architecture. This attribute has a degree of covariance with the ‘architectural impact’ attribute in the previous model component. While related, they are not exactly the same. As an example, the delivered output of a project can be critical (important and urgent), but cause little change (low impact). An explicit example is the installation of a service pack within Microsoft Office. The service pack is typically critical (usually for security reasons), but does not change the architecture of the desktop. Assessment Tip: If delaying the delivery of the project would cause raised voices and gnashing of teeth, appraise this as an ‘N’ and argue impact versus criticality over a beer.

An overall note on the concept of criticality: you will need to account for ‘false urgency’. More than a few organizations oversell the criticality of a project. The process of overselling is sometimes coupled with yelling, threats and table-pounding in order to generate a major visual effect. False urgency can have short term benefits, generating concerted action; however, as soon as the team figures out the game, a whipsaw effect (reduced productivity and attention) typically occurs. Gauge whether the words being used to describe how critical a project is match the appraisal vehicle you just created. Mismatches will sooner or later require action to synchronize the two points of view.

The concept of criticality requires a deft touch to assess. It is rarely as cut and dried as making checkmarks on a form. A major component of the assessment has to be the evaluation of what the project team believes. Teams that believe a project is critical will act as if the stress of criticality is real, regardless of other perceptions of reality. Alternatively, if a team believes a project is not critical, they will act on that belief, regardless of the truth. Make sure you know how all project stakeholders perceive criticality or be ready for surprises.


Categories: Process Management

Traceability: Complexity


The second component, complexity, is a measure of the number of properties of a project that are judged to be outside of the norm. The applicable norm is relative to the person or group making the judgment. Assessing the team’s understanding of complexity is important because when a person or group perceives something to be complex, they act differently. The concept of complexity can be decomposed into many individual components; for this model, the technical components of complexity are appraised in this category. The people- or team-driven attributes of complexity are dealt with in the user involvement section (above). Higher levels of complexity are an important reason for pursuing traceability because complexity decreases the ability of a person to hold a consistent understanding of the problem and solution in their mind. There are just too many moving parts. The inability to develop and hold an understanding in the forefront of your mind increases the need to document understandings and issues to improve consistency.

The model assesses technical complexity by evaluating the following factors:

1. The project is the size you are used to doing
2. There is a single manager or right sized management
3. The technology is well known to the team
4. The business problem(s) is well understood
5. The degree of technical difficulty is normal or less
6. The requirements are stable (ish)
7. The project management constraints are minor
8. The architectural impact is minimal
9. The IT staff perceives the impact to be minimal

As with customer involvement, the assessment process for complexity uses a simple yes or no scale for rating each of the factors. Each factor will require some degree of discussion and introspection to arrive at an answer. An overall assessment tip: a maybe is equivalent to a ‘no’. Remember that there is no prize for under- or over-estimating the impact of these variables; value is only gained through an honest self-evaluation.
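The nine-factor tally can be sketched in the same spirit; the factor labels and the idea of reducing the answers to a single count are my illustrative shorthand, not part of the original model.

```python
# Illustrative tally of the nine complexity factors; a 'maybe' or an
# unanswered factor is treated as 'N', per the 'a maybe is a no' tip.

COMPLEXITY_FACTORS = [
    "normal size", "right sized management", "well known technology",
    "business problem understood", "normal technical difficulty",
    "stable requirements", "minor management constraints",
    "minimal architectural impact", "minimal IT staff impact",
]

def complexity_score(answers):
    """answers: dict factor -> 'Y'/'N'/'maybe'; returns the count of
    factors that are NOT clearly 'Y' -- a rough complexity measure."""
    return sum(1 for f in COMPLEXITY_FACTORS
               if answers.get(f, "maybe").upper() != "Y")

print(complexity_score({f: "Y" for f in COMPLEXITY_FACTORS}))  # 0
```

A higher count suggests more formal traceability; the honest self-evaluation behind each answer matters far more than the arithmetic.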

Project is normal size: The size of the project is a direct contributor to complexity; all things being equal, a larger than usual project will require more coordination, communication and interaction than a smaller project. A common error when considering the size of a project is to use cost as a proxy. Size is not the same thing as cost. I suggest estimating the size of the project using standard functional size metrics. Assessment Tip: Organizations with a baseline will be able to statistically determine the point where size causes a shift in productivity. That shift is a signpost for where complexity begins to weigh on the processes being used. In organizations without a baseline, develop and use a rule of thumb. Consider using ‘bigger than anything you have done before’ or the corollary ‘the same size as your biggest project’ as rules of thumb. These equate to an ‘N’ rating.

Single Manager/Right Sized Management: There is an old saying: ‘too many cooks in the kitchen spoil the broth’. A cadre of managers supporting a single project can fit the ‘too many cooks’ bill. While it is equally true that a large project will require more than one manager or leader, it is important to understand the implications that the number of managers and leaders will have on a project. Having the right number of managers and leaders can smooth out issues that are discovered, and assemble and provide status without impacting the team dynamic, while providing feedback to team members. Having the wrong number of managers will gum up the works of a project (measure the ratio of meeting time to a standard eight-hour day; anything over 25% is a sign to closely check the level of management communication overhead). The additional layers of communication and coordination are the downside of a project with multiple managers (it is easy for a single manager to communicate with himself or herself). One of the most important lessons to be gleaned from the agile movement is that communication is critical (which leads to the conclusion that communication difficulties may trump benefits) and that any process that gets in the way of communication should be carefully evaluated before it is implemented. A larger communication web will need to be traversed with every manager added to the structure, which will require more formal techniques to ensure consistent and effective communication. Assessment Tip: Projects with more than five managers and leaders, or a worker-to-manager ratio lower than eight workers to one manager/leader (with more than one manager), should assess this attribute as an ‘N’.
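The rule of thumb in the assessment tip can be expressed as a small helper; the function name and the exact boundary handling are my assumptions, not the author's.

```python
def right_sized_management(managers, workers):
    """Illustrative encoding of the rule of thumb: more than five
    managers/leaders, or (with more than one manager) a worker-to-manager
    ratio below eight to one, rates the attribute 'N'."""
    if managers > 5:
        return "N"
    if managers > 1 and workers / managers < 8:
        return "N"
    return "Y"

print(right_sized_management(1, 6))   # 'Y': a single manager passes
print(right_sized_management(3, 20))  # 'N': 20/3 is below 8:1
```

As with the other factors, the number is a conversation starter; the meeting-time ratio mentioned above is a useful cross-check.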

Well Known Technology: The introduction of a technology that is unfamiliar to the project team will require more coordination and interaction. While the introduction of one or two experienced hired guns into the group is a good step to ameliorate the impact, it may not be sufficient (and may complicate communication in its own right). I would suggest that until all relevant team members surmount the learning curve, new technologies will require more formal communication patterns. Assessment Tip: If less than 50% of the project team has worked with the technology on previous projects, assess the attribute as an ‘N’.

Well Understood Business Problem: A project team that has access to an understanding of the business problem being solved by the project will have a higher chance of solving it. The amount of organizational knowledge the team has will dictate the level of analysis and communication required to find a solution. Assessment Tip: If the business problem is not well understood, or has not been dealt with in the past, this attribute should be assessed as an ‘N’.

Low Technical Difficulty: The term ‘technical difficulty’ has many definitions. The plethora of definitions means that measuring technical difficulty requires reflecting on many project attributes. The attributes that define technical difficulty can initially be seen when there are difficulties in describing the solutions and alternatives for solving the problem. Technical difficulty can include algorithms, hardware, software, data, logic or any combination of components. Assessment Tip: When assessing the level of technical difficulty, if it is difficult to frame the business problem in technical terms, assess this attribute as ‘N’.

Stable Requirements: Requirements typically evolve as a project progresses (and that is a good thing).  Capers Jones indicates that requirements grow approximately 2% per calendar month across the life of a project.  Projects that are difficult to define or where project personnel or processes allow requirements to be amended or changed in an ad hoc manner should anticipate above average scope creep or churn.  Assessment Tip:  If historical data indicates that the project team, customer and application combination tends to have scope creep or churn above the norm assess this attribute as an ‘N’ unless there are procedural or methodological methods to control change.  (Note:  Control does not mean stop change, but rather that it happens in an understandable manner.)

Minor Project Management Constraints: Project managers have three macro levers (cost, scope and time) available to steer a project.   When those levers are constrained or locked (by management, users or contract) any individual problem becomes more difficult to address.  Formal communication becomes more important as options are constrained.  Assessment Tip:  If more than one of the legs of the project management iron triangle is fixed, assess this attribute as an ‘N’.

Minimal Architectural Impact: Changes to the standard architecture of the application(s) or organization will increase complexity on an exponential scale.  This change of complexity will increase the amount of communication required to ensure a trouble free change. Assessment Tip:  If you anticipate modifications (small or wholesale) to the standard architectural footprint of the application or organization, assess this attribute as an ‘N’.

Minimal IT Staff Impact: There are many ways a project can impact an IT staff, ranging from process related changes (how work is done) to outcome related changes (employment or job duties). Negative impacts are most apt to require increased formal communication, and therefore the use of traceability methods that are more highly documented and granular. Negative process impacts are those that are driven by the processes used or organizational constraints (e.g. death marches, poorly implemented processes, galloping requirements and resource constraints). Outcome related impacts are those driven by the solution delivered (e.g. outsourcing, downsizing, and new applications/solutions). Assessment Tip: Any perceived negative impact on the team, or on the organization closely associated with the team, should be viewed as not neutral (assess as an ‘N’), unless you are absolutely certain you can remediate the impact on the team doing the work. Reassess often to avoid surprises.


Categories: Process Management

Traceability: Assessing Customer Involvement

Ruminating on Customer Involvement


Customer involvement can be defined as the amount of time and effort applied to a project by the customers (or user) of the project.  Involvement can be both good (e.g. knowledge transfer and decision making) and bad (e.g. interference and indecision).  The goal in using the traceability model is to force the project team to predict both the quality and quantity of customer involvement as accurately as possible across the life of a project.  While the question of quality and quantity of customer involvement is important for all projects it becomes even more important as Agile techniques are leveraged.  Customer involvement is required for the effective use of Agile techniques and to reduce the need for classic traceability.  Involvement is used to replace documentation with a combination of lighter documentation and interaction with the customer.

Quality can be unpacked to include attributes such as competence: knowledge of the problem space, knowledge of the process, and the ability to make decisions that stick. Assessing the quality attributes of involvement requires understanding how having multiple customer and/or user constituencies involved in the project outcome can change the complexity of the project. For example, the impact of multiple customer and user constituencies on decision making, specifically the ability to make decisions correctly or on a timely basis, will influence how a project needs to be run. Multiple constituencies complicate the ability to make decisions, which drives the need for structure. As the number of groups increases, the number of communication nodes increases, making it more difficult to get enough people involved in a timely manner. Although checklists are used to facilitate the model, model users should remember that knowledge of the project and of project management is needed to use the model effectively. Users of the model should not see the lists of attributes and believe that this model can be used merely as a check-the-box method.

The methodical assessment of the quantity and quality of customer involvement requires determining the leading indicators of success.  Professional experience suggests a standard set of predictors for customer involvement which are incorporated into the appraisal questions below.
These predictors are as follows:

1. Agile methods will be used    y/n
2. The customer will be available more than 80% of the time    y/n
3. User/customer will be co-located with the project team    y/n
4. Project has a single primary customer    y/n
5. The customer has adequate business knowledge    y/n
6. The customer has knowledge of how development projects work    y/n
7. Correct business decision makers are available    y/n
8. Team members have a high level of interpersonal skills    y/n
9. Process coaches are available    y/n

The assessment keeps the evaluation manageable by using a simple yes-no scale. Gray areas like ‘maybe’ are evaluated as equivalent to a ‘no’. While the rating scale is simple, the discussion needed to get to a yes-no decision is typically far less so.
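Two of the rules above can be sketched together; the predictor labels, the threshold handling and the idea of a simple ‘Y’ count are my illustrative assumptions.

```python
# Hypothetical sketch combining two rules from the text: the availability
# predictor is 'Y' only above the 80% threshold, and a 'maybe' on any
# predictor counts as 'N'. Names are illustrative, not the author's.

def availability_answer(fraction_available):
    """Apply the 80% availability rule of thumb."""
    return "Y" if fraction_available > 0.8 else "N"

def involvement_score(answers):
    """answers: dict predictor -> 'Y'/'N'/'maybe'; returns the count of
    clear 'Y' answers (anything else, including 'maybe', counts as 'N')."""
    return sum(1 for a in answers.values() if str(a).upper() == "Y")

answers = {
    "agile methods used": "Y",
    "customer availability > 80%": availability_answer(0.9),
    "co-located with team": "maybe",  # gray area, treated as 'N'
    "single primary customer": "Y",
}
print(involvement_score(answers))  # 3
```

A higher count of ‘Y’ answers suggests the quality and quantity of involvement needed to lean on conversation rather than classic documented traceability.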

Agile methods will be used: The first component in the evaluation is to determine whether the project intends to use disciplined Agile methods for the project being evaluated. The term ‘disciplined’ is used on purpose. Agile methods like XP are a set of practices that interact to create development supported by intimate communication. Without the discipline, or without critical practices, the communication alone will not suffice. Assessment tip: Using a defined, agile process equates to a ‘Y’; making it up as you go equates to an ‘N’.

Customer availability (>80%): Intense customer interaction is required to ensure effective development and to reduce reliance on classically documented traceability. Availability is defined as the total amount of time the primary customer is available. If customers are not available, lack of interaction is a foregone conclusion. I have found that agile methods (which require intense communication) tend to lose traction when customer availability drops below 80%. Assessment Tip: Assess this attribute as a ‘Y’ if primary customer availability is above 80%; assess it as an ‘N’ if availability is below 80% (absent very special circumstances, if your customers are not around 80% of the time during the project, rate this as a ‘no’).

Co-located customer/user: Co-location is an intimate implementation scenario of customer/user availability. The intimacy that co-location provides can be leveraged as a replacement for documentation-based communication by using less formal techniques like white boards and sticky notes. Assessment Tip: Stand up and look around; if you don’t have a high probability of seeing your primary customer (unless it is lunch time), you should rate this attribute as an ‘N’. Leveraging metaverse tools (e.g. Second Life or similar) can mitigate some of the problems of disparate physical location.

Project Has A Single Customer: As the number of primary customers increases, the number of communication paths required for creating and deploying the project increases exponentially. The impact that the number of customers has on communication is not linear; it can be more easily conceived as a web. Each node in the web will require attention (attention = communication) to coordinate activities. Assessment Tip: Count the number of primary customers; if you need more than one finger, assess this question as an ‘N’.

Business Knowledge: The quality and quantity of business knowledge the team has to draw upon is inversely related to the amount of documentation-based communication needed. The availability of solid business knowledge affects the amount of background that needs to be documented in order to establish the team’s bona fides. It should be noted that it can be argued that sourcing long-term business knowledge in human repositories is a risk. Assessment Tip: Assessing the quality and quantity of business knowledge will require introspection and fairly brutal honesty, but do not sell the team or yourself short.

Knowledge of How Development Projects Work: All team members, whether they are filling a hardcore IT role or the most ancillary user role, need to understand both their project responsibilities and how they will contribute to the project. The more intrinsically participants understand their roles and responsibilities, the less effort a project will typically have to expend on non-value added activities (like re-explaining how work is done). Assessment Tip: This is an area that can be addressed after assessment through training. If team members cannot be trained or educated as to their role, appraise this attribute as an ‘N’.

Decision Makers: The “decision makers” attribute concerns the process that leads to the selection of a course of action. Most IT projects have a core set of business customers who are the decision makers for requirements and business direction. Knowing who can make a decision (and have it stick), and then having access to them, is critical. Having a set of customers available or co-located is not effective if they are not decision makers (‘the right people’). The perfect customer for a development project is available, co-located and can make decisions that stick (and is very apt not to be the person provided). Assessment Tip: This area is another that can only be answered after soul-searching introspection (i.e. thinking about it over a beer). If your customer has to check with a nebulous puppet master before making critical decisions, then the assessment response should be an “N”.

High Level of Interpersonal Skills:  All team members must be able to interact together and perform as a team.  Insular or other behavior that is not team conducive will cause communications to pool and stagnate as team members either avoid the non-team player or the offending party holds on to information at inopportune times.  Non-team behavior within a team is bad regardless of the development methodology being used.  Assessment Tip:  Teams that have worked together and crafted a good working relationship typically can answer this as a “Y”.

Facilitation: Projects perform more consistently with coaching (and seem to deliver better solutions); however, coaching as a practice has not been universally adopted. The role that has been universally embraced is project manager (PM). Coaches and project managers typically play two very different roles. The PM role has an external focus and acts as the voice of the process, while the role of coach has an internal focus and acts as the voice of the team (outside vs. inside, process vs. people). Agile methods implement the roles of coach and PM as two very different roles, even though they can co-exist. Coaches nurture the personnel on the project, helping them to do their best (remember your last coach). Shouldn’t the same facility be leveraged on all projects? Assessment Tip: Evaluate whether a coach is assigned; if so, answer affirmatively. If the role is not formally recognized within the group or organization, care should be taken, even if a coach is appointed.


Categories: Process Management

CocoaHeadsNL @ Xebia on September 16th

Xebia Blog - Thu, 08/28/2014 - 11:20

On Tuesday the 16th the Dutch CocoaHeads will be visiting us. It promises to be a great night for anybody doing iOS or OSX development. The night starts at 17:00, with dinner at 18:00.

Are you an iOS/OSX developer who would like to meet fellow developers? Come join the CocoaHeads on September 16th at our office. More details are on the CocoaHeadsNL meetup page.

Software Development Conferences Forecast August 2014

From the Editor of Methods & Tools - Thu, 08/28/2014 - 08:38
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine. Agile on the Beach, September 4-5 2014, Falmouth in Cornwall, UK SPTechCon, September 16-19 2014, Boston, USA Receive a $200 discount on a 4 or 3-day pass with code SHAREPOINT Future of Web Apps, September 29-October 1 2014, London, ...

Traceability: Putting the Model Into Action

Three core concepts.


My model for scaling traceability is based on an assumption that there is a relationship between customer involvement, criticality and complexity.  This yields the level of documentation required to achieve the benefits of traceability.  The model leverages an assessment of project attributes that define the three common concepts.  The concepts are:

  • Customer involvement in the project
  • Complexity of the functionality being delivered
  • Criticality of the project

A thumbnail definition of each of the three concepts begins with customer involvement, which is defined as the amount of time and effort applied to a project in a positive manner by the primary users of the project.  The second concept, complexity, is a measure of the number of project properties that are outside the normal expectations as perceived by the project team (the norm is relative to the organization or project group rather than to any external standard).  The final concept, criticality, is defined as the quality, state or degree of being of the highest importance (again relative to the organization or group doing the work).  We will unpack these concepts and examine them in greater detail as we peel away the layers of the model.

The Model

[Figure: the traceability scaling model]

The process for using the model is a simple set of steps.
1. Get a project (and team members)
2. Assess the project’s attributes
3. Plot the results on the model
4. Interpret the findings
5. Reassess as needed

The model is built for project environments. Don’t have a project you say!  Get one, I tell you! Can’t get one? This model will be less useful, but not useless.
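Purely as an illustration of the assess-then-interpret steps (the 1-5 scales, the scoring formula and the thresholds below are my own assumptions, not part of the model, which plots the assessed attributes rather than computing a number), the idea can be sketched as:

```python
"""Illustrative sketch of the assessment step. The scales, formula
and thresholds are assumptions made for this example only."""

def traceability_emphasis(customer_involvement, complexity, criticality):
    """Suggest where on the documentation/involvement continuum to focus.

    Each argument is an assessed score from 1 (low) to 5 (high).
    """
    for score in (customer_involvement, complexity, criticality):
        if not 1 <= score <= 5:
            raise ValueError("scores must be between 1 and 5")
    # High complexity and criticality push toward documentation;
    # high customer involvement reduces the need for it.
    documentation_need = complexity + criticality - customer_involvement
    if documentation_need >= 6:
        return "documentation-heavy traceability"
    if documentation_need >= 3:
        return "mixed approach"
    return "rely on customer involvement"
```

Reassessing (step 5) is then simply re-running the evaluation with updated scores as the project evolves.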

Who Is Involved And When Will They Be Involved:

Implementing the traceability model assessment works best when the team (or a relevant subset) charged with doing the work conducts the assessment of project attributes.  The use of team members turns Putt’s theory of “Competence Inversion” on its head by focusing project-level competencies on defining the impact of specific attributes.  Using a number of team members also provides a basis for consistency if assessments are performed again later in the project.

While the assessment process is best done by a cross-functional team, it can also be performed by those in the project governance structure alone.  The smaller the group that is involved in the assessment, the more open and honest the communication between the assessment group and the project team must be, or the exercise will be just another process inflicted on the team.  Regardless of its size, the assessment team needs to include technical competence.  Technical competence is especially useful when appraising complexity, and it also helps sell the results of the process to the rest of the project team.  Regardless of the deployment model, the diversity of thought generated in cross-functional groups provides the breadth of knowledge needed to apply the model (this suggestion is based on feedback from process users).  The use of cross-functional groups becomes even more critical for large projects and/or projects with embedded sub-projects.  In a situation where the discussion will be contentious or the participating group will be large, I suggest using a facilitator to ensure an effective outcome.

An approach I suggest for integrating the assessment process into your current methodology is to incorporate the assessment as part of your formal risk assessment.  An alternative for smaller projects is to perform the assessment process during the initial project planning activities or in a sprint zero (if used).  This will minimize the impact of yet another assessment.

In larger projects where the appraisal outcome may vary across teams or sub-projects, thoughtful discussion will be required to determine whether the lowest common denominator will drive the results or whether a mixed approach is needed.  Use of this method in the real world suggests that in large projects/programs the highest or lowest common denominator is seldom universally useful.  The need for scalability should be addressed at the level that makes sense for the project, which may mean that sub-projects are treated differently.


Categories: Process Management

What is your next step in Continuous Delivery? Part 1

Xebia Blog - Wed, 08/27/2014 - 21:15

Continuous Delivery helps you deliver software faster, with better quality and at lower cost. Who doesn't want to deliver software faster, better and cheaper? I certainly want that!

No matter how good you are at Continuous Delivery, you can always do one step better. Even if you are as good as Google or Facebook, you can still do one step better. Myself included, I can do one step better.

But also if you are just getting started with Continuous Delivery, there is a feasible step to take you forward.

In this series, I describe a plan that helps you determine where you are right now and what your next step should be. To be complete, I'll start at the very beginning. I expect most of you have passed the first steps already.

The steps you already took

This is the first part in the series: What is your next step in Continuous Delivery? I'll start with three steps combined in a single post, because the great majority of you have gone through these steps already.

Step 0: Your very first lines of code

Do you remember the very first lines of code you wrote? Perhaps as a student or maybe before that as a teenager? Did you use version control? Did you bring it to a test environment before going to production? I know I did not.

None of us was born with an innate skill for delivering software in a certain way. However, many of us are taught a way of delivering software that is still a long way from Continuous Delivery.

Step 1: Version control

At some point during your study or career, you were introduced to version control. I remember starting with CVS, migrating to Subversion and currently using Git. Each of these systems is an improvement over the previous one.

It is common to store the source code for your software in version control. Do you already have definitions or scripts for your infrastructure in version control? And for your automated acceptance tests or database schemas? In later steps, we'll get back to that.

Step 2: Release process

Your current release process may be far from Continuous Delivery. Despite appearances, your current release process is a useful step towards Continuous Delivery.

Even if you deliver to production less than twice a year, you are better off than a company that delivers its code unpredictably, untested and unmanaged. Or worse, a company that edits its code directly on a production machine.

In your delivery process, you have planning, control, a production-like testing environment, actual testing and maintenance after the go-live. The main difference with Continuous Delivery is the frequency and the amount of software that is released at the same time.

So yes, a release process is a productive step towards Continuous Delivery. Now let's see if we can optimize beyond this manual release process.

Step 3: Scripts

Imagine you have issues on your production server... Who do you go to for help? Do you have someone in mind?

Let me guess, you are thinking about a middle-aged guy who has been working at your organisation for 10+ years. Even if your organization is only 3 years old, I bet he's been working there for more than 10 years. Or at least, it seems like it.

My next guess is that this guy wrote some scripts to automate recurring tasks and make his life easier. Am I right?

These scripts are an important step towards Continuous Delivery. In fact, Continuous Delivery is all about automating repetitive tasks. The only thing that falls short is that these scripts are a one-man initiative. It is a good initiative, but there is no strategy behind it and it lacks management support.

If you don't have this guy working for you, then you may have a bigger step to take when continuing towards the next step of Continuous Delivery. To successfully adopt Continuous Delivery in the long run, you are going to need someone like him.
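The kind of script meant here can be tiny. A minimal sketch of one such recurring task (the directory path, the 30-day retention period and the choice of Python are all assumptions for illustration):

```python
#!/usr/bin/env python3
"""Remove application log files older than a retention period.
The path and retention period below are illustrative assumptions."""
import os
import time

def find_stale_files(directory, max_age_days):
    """Return the paths of regular files older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    return [os.path.join(directory, name)
            for name in os.listdir(directory)
            if os.path.isfile(os.path.join(directory, name))
            and os.path.getmtime(os.path.join(directory, name)) < cutoff]

if __name__ == "__main__":
    # Delete logs older than 30 days from a hypothetical app directory.
    for path in find_stale_files("/var/log/myapp", 30):
        os.remove(path)
        print("removed", path)
```

Checked into version control and run on a schedule, a script like this stops being a one-man initiative and becomes a shared, repeatable building block.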

Following steps

In the next parts, we will look at the following steps towards becoming a world champion at delivering software:

  • Step 4: Continuous Delivery
  • Step 5: Continuous Deployment
  • Step 6: "Hands-off"
  • Step 7: High Scalability

Stay tuned for the following posts.

Traceability: An Approach Mixing CMMI and Agile

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement.


Traceability is an important tool in software engineering and a core tenet of the CMMI.  It is used as a tool for the management and control of requirements. Controlling and understanding the flow of requirements puts a project manager’s hand on the throttle of the project by regulating the flow of work through a project. However, it is both hard to accomplish and requires a focused application to derive value. When does the control generated represent the proper hand on the throttle, and when a lead foot on the brake?

The implementation of traceability sets the stage for the struggle over processes mandated by management or the infamous “model”.  Developers actively resist process when they perceive that the effort isn’t directly leading to functionality that can be delivered and, therefore, isn’t delivering value to their customers.  In the end, traceability, like insurance, is best when you don’t need the information it provides to sort out uncontrolled project changes or functionality delivered that is not related to requirements.

Identifying both the projects and the audience that can benefit from traceability is paramount for implementing and sustaining the process.  Questions that need to be asked and addressed include:

  • Is the need for control for all types of projects the same?
  • Is the value-to-effort ratio from tracing requirements the same for all projects?
  • What should be evaluated when determining whether to scale the traceability process?

Scalability is a needed step to extract the maximum value from any methodology component, traceability included, regardless of whether the project is plan-driven or Agile. A process is needed to ensure that traceability occurs based on a balance between process, effort and complexity.

The concept of traceability acts as a lightning rod for the perceived excesses of the CMMI (and by extension all other model-based improvement methods).  I will explore a possible approach for scaling traceability.  My approach bridges the typical approach (leveraging matrices and requirement tools) with an approach that trades documentation for intimate user involvement. It uses a simple set of three criteria (complexity, user involvement and criticality) to determine where a project should focus its traceability effort on a continuum between documentation and involvement.

Traceability becomes a tool that can bridge the gaps caused by less than perfect involvement, a complex project, and increased criticality.  The model we will propose provides a means to apply traceability in a scaled manner so that it fits a project’s need and is not perceived as a one size fits all approach.


Categories: Process Management

Synchronize the Team

Xebia Blog - Tue, 08/26/2014 - 13:52

How can you, as a scrum master, improve the chances that the scrum team has a common vision and understanding of both the user story and the solution, from the start until the end of the sprint?   

The problem

The planning session is where the team should synchronize on understanding the user story and agree on how to build the solution. But there is no real validation that all the team members are on the same page. The team tends to dive into the technical details quite fast in order to identify and size the tasks. The technical details are often discussed by only a few team members and with little or no functional or business context. Once the team leaves the session, there is no guarantee that they remain synchronized when the sprint progresses. 

The only other team synchronization ritual, prescribed by the scrum process, is the daily scrum or stand-up. In most teams the daily scrum is as short as possible, avoiding semantic discussions. I also prefer the stand-ups to be short and sweet. So how can you or the team determine that the team is (still) synchronized?

Specify the story

In the planning session, after a story is considered ready enough to be pulled into the sprint, we start analyzing the story. This is the specification part, using a technique called ‘Specification by Example’. The idea is to write testable functional specifications with actual examples. We decompose the story into specifications and define the conditions of failure and success with examples, so they can be tested. Thinking of examples makes the specification more concrete and the interpretation of the requirements more specific.

Having the whole team work out the specifications and examples helps the team stay focused on the functional part of the story longer and in more detail, before shifting mindsets to the development tasks.  Writing the specifications will also help to determine whether a story is ready enough. When the sprint progresses and all the tests are green, the building of the story's functionality should be done.

You can use a tool like FitNesse or Cucumber to write testable specifications. The tests are run against the actual code, so they provide an accurate view of the progress. When all the tests pass, the team has successfully created the functionality. In addition to the scrum board and burn-down charts, the functional tests provide a good and accurate view of the sprint progress.
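As an illustration, a Specification by Example for a hypothetical story (the feature, steps and amounts are invented for this sketch) might look like this in Cucumber's Gherkin syntax:

```gherkin
Feature: Free shipping
  As a customer I want free shipping on large orders

  Scenario: Order total qualifies for free shipping
    Given my cart total is 120 euros
    When I proceed to checkout
    Then the shipping cost is 0 euros

  Scenario: Order total below the free shipping threshold
    Given my cart total is 80 euros
    When I proceed to checkout
    Then the shipping cost is 5 euros
```

Each scenario is a concrete example that can be wired to test code, so a green run means the specified behaviour has actually been built.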

Design the solution

Once the story has been decomposed into clear and testable specifications, we start creating a design on a whiteboard. The main goal is to create a shared, visible understanding of the solution, so avoid (technical) details to prevent big up-front designs and losing the involvement of the less technical members of the team. You can use whatever format works for your team (e.g. UML), but be sure it is comprehensible by everybody on the team.

The creation of the design, as an effort by the whole team, tends to spark discussion. Instead of relying on the consistency of non-visible mental images in the heads of team members, there is a tangible image shared with everyone.

The whiteboard design will be a good starting point for refinement as the team gains insight during the sprint. The whiteboard should always be visible and within reach of the team during the sprint. Using a whiteboard makes it easy to adapt or complement the design. You’ll notice the team standing around the whiteboard or pointing to it in discussions quite often.

The design can easily be turned into a digital artefact by taking a photo of it. A digital copy can be valuable to anyone wanting to learn the system in the future. The design could also be used in the sprint demo, should the audience be interested in a technical overview.

Conclusion

The team now leaves the sprint planning with a set of functional tests and a whiteboard design. The tests are useful to validate and synchronize on the functional goals. The whiteboard designs are useful to validate and synchronize on the technical goals. The shared understanding of the team is more visible and can be validated, throughout the sprint. The team has become more transparent.

It might be a good practice to have the developers write the specification, and the testers or analysts draw the designs on the board. This is to provoke more communication, by getting the people out of their comfort zone and forcing them to ask more questions.

There are more compelling reasons to implement (or not) something like Specification by Example or to have the team make design overviews. But it also helps the team stay on the same page when there are visible and testable artefacts to rely on during the sprint.

Quote of the Month August 2014

From the Editor of Methods & Tools - Tue, 08/26/2014 - 09:55
We don’t mean that you should put on your Super Tester cape and go protect the world from bugs. There’s no room for big egos on agile teams. Your teammates share your passion for quality. Focus on the team’s goals and do what you can to help everyone do their best work. Source: Agile Testing, Lisa Crispin and Janet Gregory, Addison-Wesley

Best Practices

Best practices aren't magic and neither are goblins.


To paraphrase Edwin Starr, “Best Practices, huh, what are they good for? Absolutely nothing,  Say it again . . .”

Every organization wants to use best practices. How many organizations do you know that would stand up and say we want to use average practices? Therefore a process with the moniker “best practice” on it has an allure that is hard to resist.  The problem is that one organization’s best practice is another’s average process, even if they produce the same quality and quantity of output.  Or even worse, one organization’s best practice might be beyond another organization’s reach.  The process reflects the overall organizational context.  It is possible that adopting a new process wholesale could produce output faster or better, but without tailoring, the chances are more random than many consultants would suggest. For example, just buying a configuration management tool without changing how you do configuration management will be less effective than melding the tool with your processes.  Tailoring will allow you to use the process based on the attributes of the current organizational context, such as the organization’s overall size or the capabilities of the people involved.

An example of an organization’s best practice that might not translate to all of its competitors is the super sophisticated inventory control computer system used at Walmart. Would Walmart’s computer system help a local grocery store (let’s call this Hometown Grocery)? Not likely: the overhead of the same system would be beyond Hometown’s IT capabilities and budget.  However, if hundreds of Hometown Groceries banded together, the answer might be different (tailoring the process to the environmental context).  Without tailoring for context, the best practice for Walmart would not be a best practice for our small town grocery.

The term best practice gets thrown around as if there were a dusty old tome full of magical incantations that will solve any crisis regardless of context (assuming you are a seventh-level mage).  There are those that hold up the CMMI, ISO or Scrum and shout (usually on email lists) that they are the only way.  Let’s begin by putting the idea that there is a one-size-fits-all solution to every job to rest.  There isn’t and there never was any such animal.  Any individual process, practice or step that worked wonderfully in the company down the street will not work the same way for you, especially if you try to do it the same way they did.  Software development and maintenance isn’t a chemical reaction, a Lego construct or even magic.  Best practices, what are they good for?  Fortunately a lot, if used correctly.

Best practices find their highest value as a tool for comparison, used to expose the assumptions that have been built into your own processes as they evolved.  That knowledge allows you to challenge how and why you are doing any specific step and provides an opportunity for change.  How many companies have embraced the tenets of the Toyota Production System after benchmarking Toyota?

Adopting best practices without regard to your context may not yield the benefits found on the box.  If you read the small print you’d see a warning: use best practices only after reading all of the instructions and understanding your goals and your environment.  This is not to say that exemplary practices should not be aggressively studied and translated into your organization.  Ignoring new ideas because they did not grow out of your context is just as crazy as embracing best practices without understanding the context they were created in. Best practices make sense as an ideal, as a comparison that helps you understand your organization, not as plug-compatible modules.


Categories: Process Management

Vert.x with core.async. Handling asynchronous workflows

Xebia Blog - Mon, 08/25/2014 - 12:00

Anyone who has written code that has to coordinate complex asynchronous workflows knows it can be a real pain, especially when you limit yourself to using only callbacks directly. Various tools have arisen to tackle these issues, like Reactive Extensions and JavaScript promises.

Clojure's answer comes in the form of core.async: An implementation of CSP for both Clojure and Clojurescript. In this post I want to demonstrate how powerful core.async is under a variety of circumstances. The context will be writing a Vert.x event-handler.

Vert.x is a young, light-weight, polyglot, high-performance, event-driven application platform on top of the JVM. It has an actor-like concurrency model, where the coarse-grained actors (called verticles) can communicate over a distributed event bus. Although Vert.x is still quite young, it's sure to grow into a big player in the future of the reactive web.

Scenarios

The scenario is as follows. Our verticle registers a handler on some address and depends on 3 other verticles.

1. Composition

Imagine the new Mars rover got stuck against some Mars rock and we need to send it instructions to destroy the rock with its inbuilt laser. Also imagine that the controlling software is written with Vert.x. There is a single verticle responsible for handling the necessary steps:

  1. Use the sensor to locate the position of the rock
  2. Use the position to scan hardness of the rock
  3. Use the hardness to calibrate and fire the laser. Report back status
  4. Report success or failure to the main caller

As you can see, in each step we need the result of the previous step, meaning composition.
A straightforward callback-based approach would look something like this:

(ns example.verticle
  (:require [vertx.eventbus :as eb]))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (let [reply-msg eb/*current-message*]
      (eb/send "rover.scope" (scope-msg instructions)
        (fn [coords]
          (eb/send "rover.sensor" (sensor-msg coords)
            (fn [data]
              (let [power (calibrate-laser data)]
                (eb/send "rover.laser" (laser-msg power)
                  (fn [status]
                    (eb/reply* reply-msg (parse-status status))))))))))))

A code structure quite typical of composed async functions. Now let's bring in core.async:

(ns example.verticle
  (:refer-clojure :exclude [send])
  (:require [vertx.eventbus :as eb]
            [clojure.core.async :refer [go chan put! <!]]))

(defn send [addr msg]
  (let [ch (chan 1)]
    (eb/send addr msg #(put! ch %))
    ch))

(eb/on-message
  "console.laser"
  (fn [instructions]
    (go (let [coords (<! (send "rover.scope" (scope-msg instructions)))
              data (<! (send "rover.sensor" (sensor-msg coords)))
              power (calibrate-laser data)
              status (<! (send "rover.laser" (laser-msg power)))]
          (eb/reply (parse-status status))))))

We created our own reusable send function, which returns a channel on which the result of eb/send will be put.

2. Concurrent requests

Another thing we might want to do is query different handlers concurrently. Although we can use composition, this is not very performant, as we do not need to wait for a reply from service-A in order to call service-B.

As a concrete example, imagine we need to collect atmospheric data about some geographical area in order to make a weather forecast. The data will include the temperature, humidity and wind speed, which are requested from three different independent services. Once all three asynchronous requests return, we can create a forecast and reply to the main caller. But how do we know when the last callback has fired? We need to keep some memory (mutable state) which is updated when each of the callbacks fires, and process the data when the last one returns.

core.async easily accommodates this scenario without adding extra mutable state for coordination inside your handlers. The state is contained in the channel.

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan 3)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      (go (let [data (merge (<! ch) (<! ch) (<! ch))
                forecast (create-forecast data)]
            (eb/reply forecast))))))
3. Fastest response

Sometimes there are multiple services at your disposal providing similar functionality and you just want the fastest one. With just a small adjustment, we can make the previous code work for this scenario as well.

(eb/on-message
  "server.request"
  (fn [msg]
    (let [ch (chan 3)]
      (eb/send "service-A" msg #(put! ch %))
      (eb/send "service-B" msg #(put! ch %))
      (eb/send "service-C" msg #(put! ch %))
      (go (eb/reply (<! ch))))))

We just take the first result on the channel and ignore the other results. After the go block has replied, there are no more takers on the channel. The results from the services that were too late are still put on the channel, but after the request finished, there are no more references to it and the channel with the results can be garbage-collected.

4. Handling timeouts and choice with alts!

We can create timeout channels that close themselves after a specified amount of time. Closed channels cannot be written to anymore, but any messages in the buffer can still be read. After that, every read will return nil.

One thing core.async provides that most other tools don't is choice. From the examples:

One killer feature for channels over queues is the ability to wait on many channels at the same time (like a socket select). This is done with `alts!!` (ordinary threads) or `alts!` in go blocks.

This, combined with timeout channels, gives the ability to wait on a channel up to a maximum amount of time before giving up. By adjusting example 2 a bit:

(eb/on-message
  "forecast.report"
  (fn [coords]
    (let [ch (chan)
          t-ch (timeout 3000)]
      (eb/send "temperature.service" coords #(put! ch {:temperature %}))
      (eb/send "humidity.service" coords #(put! ch {:humidity %}))
      (eb/send "wind-speed.service" coords #(put! ch {:wind-speed %}))
      ;; go-loop, alts! and timeout must also be in the :require of
      ;; clojure.core.async. alts! returns a [value channel] pair; a nil
      ;; value means the timeout channel closed before all results arrived.
      (go-loop [n 3 data {}]
        (if (pos? n)
          (let [[result _] (alts! [ch t-ch])]
            (if result
              (recur (dec n) (merge data result))
              (eb/fail 408 "Request timed out")))
          (eb/reply (create-forecast data)))))))

This will do the same thing as before, but we will wait a total of 3s for the requests to finish, otherwise we reply with a timeout failure. Notice that we did not put the timeout parameter in the Vert.x API call of eb/send. Having a first-class timeout channel allows us to coordinate these timeouts more easily than adding timeout parameters and failure callbacks.

Wrapping up

The above scenarios are clearly simplified to focus on the different workflows, but they should give you an idea of how to start using core.async in Vert.x.

One question that arose for me, and was the original motivation for this blog post, is whether core.async can play nicely with Vert.x. Verticles are single-threaded by design, while core.async introduces background threads to dispatch go-blocks or state machine callbacks. Since the dispatched go-blocks carry the correct message context, the functions eb/send, eb/reply, etc. can be called from these go-blocks and all goes well.

There is of course a lot more to core.async than is shown here. But that is a story for another blog.

Docker on a raspberry pi

Xebia Blog - Mon, 08/25/2014 - 07:11

This blog describes how easy it is to use docker in combination with a Raspberry Pi. Because of docker, deploying software to the Raspberry Pi is a piece of cake.

What is a raspberry pi?
The Raspberry Pi is a credit-card sized computer that plugs into your TV and a keyboard. It is a capable little computer which can be used in electronics projects and for many things that your desktop PC does, like spreadsheets, word processing and games. It also plays high-definition video. A raspberry pi runs Linux, has a 700 MHz ARM processor and 512 MB of internal memory. Last but not least, it only costs around 35 euros.


A raspberry pi version B

Because of its price, size and performance, the raspberry pi is a step toward the 'Internet of Things'. With a raspberry pi it is possible to control and connect everything to everything. For instance, my home project is a raspberry pi controlling a robot.

 

Raspberry Pi in action

What is docker?
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications. With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere. A dockerized app contains the application, its environment, dependencies and even the OS.

Why combine docker and raspberry pi?
It is nice to work with a Raspberry Pi because it is a great platform to connect devices. Deploying anything, however, is kind of a pain. With dockerized apps we can develop and test our application on our own home machine; when it works, we can deploy it to the raspberry pi. We can do this without any pain or worries about corruption of the underlying operating system and tools. And last but not least, you can easily undo your tryouts.

What is better than I expected
First of all, it was relatively easy to install docker on the raspberry pi. When you use the Arch Linux operating system, docker is already part of the package manager! I expected to do a lot of cross-compiling of the docker application, because the raspberry pi uses an ARM architecture (instead of the default x86 architecture), but someone had already done this for me!

Second of all, there are a bunch of ready-to-use docker images especially for the raspberry pi. To run dockerized applications on the raspberry pi you are depending on the base images. These base images must also support the ARM architecture. For each situation there is an image, whether you want to run node.js, Python, Ruby or just Java.

The thing that worried me most was the performance of running virtualized software on a raspberry pi. But it all went well and I did not notice any performance reduction. Docker requires far fewer resources than running virtual machines. A docker process runs straight on the host, giving native CPU performance. Using Docker requires only a small overhead for memory and network.

What I don't like about docker on a raspberry pi
The slogan of docker, to 'build, ship and run any app anywhere', is not entirely valid. You cannot develop your Dockerfile on your local machine and deploy the same application directly to your raspberry pi. This is because each Dockerfile starts from a base image. For running your application on your local machine, you need an x86-based docker image. For your raspberry pi you need an ARM-based image. That is a pity, because it means you can only build your docker image for your Raspberry Pi on the raspberry pi itself, which is slow.

I tried several things.

  1. I used the emulator QEMU to emulate the raspberry pi on a fast MacBook. But, because of the inefficiency of the emulation, it is just as slow as building your Dockerfile on a raspberry pi.
  2. I tried cross-compiling. This wasn't possible, because the commands in your dockerfile are replayed on a running image and the running raspberry-pi image can only be run on ... a raspberry pi.
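A quick way to see the architecture mismatch for yourself is to compare what both machines report (standard `uname`; the exact output depends on your hardware):

```shell
# On the x86 development machine:
uname -m    # typically reports x86_64

# On the Raspberry Pi:
uname -m    # reports armv6l on a first-generation Pi

# A container image built from an x86 base will not start on the Pi;
# it typically fails with an "exec format error" at run time.
```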

How to run a simple node.js application with docker on a raspberry pi  

Step 1: Installing Arch Linux
The first step is to install Arch Linux on an SD card for the Raspberry Pi. The preferred OS for the Raspberry Pi is a Debian-based OS, Raspbian, which is nicely configured to work with the Raspberry Pi. In this case, however, Arch Linux is the better choice because we use the OS only to run Docker on it, and Arch Linux is a much smaller, more barebones OS. The best way is to follow the steps at http://archlinuxarm.org/platforms/armv6/raspberry-pi. In my case, I use version 3.12.20-4-ARCH. In addition to the tutorial:

  1. After downloading the image, write it to an SD card by running the command:
    sudo dd if=path_of_your_image.img of=/dev/diskn bs=1m
  2. When there is no HDMI output at boot, remove the config.txt on the SD-card. It will magically work!
  3. Login using root / root.
  4. Arch Linux will use 2 GB by default. If you have a SD-card with a higher capacity you can resize it using the following steps http://gleenders.blogspot.nl/2014/03/raspberry-pi-resizing-sd-card-root.html
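Before running the dd command from step 1, it is worth identifying and unmounting the SD card explicitly; writing to the wrong disk destroys its data. A sketch, assuming a macOS host (where `diskutil` is available) and `/dev/disk2` as an example device:

```shell
# Identify the SD card -- look for a disk matching the card's capacity:
diskutil list

# Unmount (not eject) the card so dd can write to it:
diskutil unmountDisk /dev/disk2

# Write the image (same command as step 1 above):
sudo dd if=path_of_your_image.img of=/dev/disk2 bs=1m
```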

Step 2: Installing a wifi dongle
In my case I wanted to connect a wireless dongle to the Raspberry Pi, which can be done by following these simple steps:

  1. Install the wireless tools:
        pacman -Syu
        pacman -S wireless_tools
        
  2. Setup the configuration, by running:
    wifi-menu
  3. Autostart the wifi with:
        netctl list
        netctl enable wlan0-[name]
    

Because the Raspberry Pi is now connected to the network, you are able to SSH into it.
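To check that the dongle came up and to log in remotely, something along these lines should work. The interface name `wlan0` and the IP addresses are assumptions; check `netctl list` and your router for the actual values:

```shell
# On the Pi: the interface should show an inet address after wifi-menu.
ip addr show wlan0

# Confirm basic connectivity (192.168.1.1 is an example gateway address):
ping -c 3 192.168.1.1

# From another machine on the same network (example Pi address):
ssh root@192.168.1.50
```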

Step 3: Installing docker
The actual installation of Docker is relatively easy. There is a Docker version compatible with the ARM processor used in the Raspberry Pi, and it is part of the Arch Linux package manager; the packaged version is 1.0.0. At the time of writing, the current Docker release is 1.1.2. The features missing from 1.0.0 are:

  1. Enhanced security for the LXC driver.
  2. .dockerignore support.
  3. Pausing containers during docker commit.
  4. The --tail option for docker logs.

You can install Docker and start it as a service on system boot with the commands:

pacman -S docker
systemctl enable docker
Installing docker with pacman
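To verify the installation without rebooting, something like this should work (the exact version output will differ from machine to machine):

```shell
# Start the daemon now; 'systemctl enable' only takes effect at boot.
systemctl start docker

# Both client and server should respond with their version:
docker version

# Shows the storage driver, number of images and containers, etc.:
docker info
```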

Step 4: Run a single nodejs application
After we've installed Docker on the Raspberry Pi, we want to run a simple node.js application. The application we will deploy is inspired by the node.js example in the tutorial on the Docker website: https://github.com/enokd/docker-node-hello/. This node.js application prints a "hello world" in the web browser. We have to change the Dockerfile to:

# DOCKER-VERSION 1.0.0
FROM resin/rpi-raspbian

# install required packages
RUN apt-get update
RUN apt-get install -y wget dialog

# install nodejs
RUN wget http://node-arm.herokuapp.com/node_latest_armhf.deb
RUN dpkg -i node_latest_armhf.deb

COPY . /src
RUN cd /src; npm install

# run application
EXPOSE 8080
CMD ["node", "/src/index.js"]
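With the Dockerfile in place, the image still has to be built and started on the Pi. In the directory containing the Dockerfile, something like the following should do it; the image name `rpi-node-hello` is just an example:

```shell
# Build the image on the Pi itself (slow, as noted above):
docker build -t rpi-node-hello .

# Run detached, mapping the container's port 8080 to the host:
docker run -d -p 8080:8080 rpi-node-hello

# Check that the application responds:
curl http://localhost:8080
```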

And it works!


The webpage that runs in nodejs on a docker image on a raspberry pi

 

In just four small steps, you can use Docker on your Raspberry Pi. Good luck!

 

SPaMCAST 304 – Jamie Lynn Cooke, Power of the Agile Business Analyst

www.spamcast.net

http://www.spamcast.net

Listen to the Software Process and Measurement Cast 304

Software Process and Measurement Cast number 304 features our interview with Jamie Lynn Cooke. Jamie Lynn Cooke is the author of The Power of the Agile Business Analyst. We discussed the definition of an Agile business analyst and what they actually do in Agile projects. Jamie provides a clear and succinct explanation of the role and of the huge value Agile business analysts bring to projects!

Jamie Lynn’s Bio:
Jamie Lynn Cooke has 24 years of experience as a senior business analyst and solutions consultant, working with more than 130 public and private sector organizations throughout Australia, Canada, and the United States.

She is the author of The Power of the Agile Business Analyst: 30 surprising ways a business analyst can add value to your Agile development team, which details how Agile business analysts can increase the relevance, quality and overall business value of Agile projects; Agile Principles Unleashed, a book written specifically to explain Agile in non-technical business terms to managers and executives outside of the IT industry; Agile: An Executive Guide: Real results from IT budgets, which gives IT executives the tools and strategies needed for bottom-line business decisions on using Agile methodologies; and Everything You Want to Know About Agile: How to get Agile results in a less-than-Agile organization, which gives readers strategies for aligning Agile work within the reporting, budgeting, staffing, and governance constraints of their organization. Also check out Agile Productivity Unleashed: Proven Approaches for Achieving Real Productivity Gains in Any Organization (Second Edition)!

Jamie has a Bachelor of Science in Engineering Psychology (Human Factors Engineering) from Tufts University in Medford, Massachusetts; and a Graduate Certificate in e-Business/Business Informatics from the University of Canberra in Australia.

You can find her website here.

 

Next

Software Process and Measurement Cast number 305 will feature our essay on estimation (here is our essay on specific topics within estimation). Estimation is a hotbed of controversy. But perhaps first we should synchronize on just what we think the word means. Once we have a common vocabulary we can commence with the fisticuffs. In SPaMCAST 305 we will not shy away from a hard discussion.

Upcoming Events

I will be presenting at the International Conference on Software Quality and Test Management in San Diego, CA on October 1.  I have a great discount code!!!! Contact me if you are interested.

I will be presenting at the North East Quality Council 60th Conference October 21st and 22nd in Springfield, MA.

More on all of these great events in the near future! I look forward to seeing all SPaMCAST readers and listeners that attend these great events!

The Software Process and Measurement Cast has a sponsor.

As many of you know, I do at least one webinar for the IT Metrics and Productivity Institute (ITMPI) every year. The ITMPI provides a great service to the IT profession. ITMPI’s mission is to pull together the expertise and educational efforts of the world’s leading IT thought leaders and to create a single online destination where IT practitioners and executives can meet all of their educational and professional development needs. The ITMPI offers a premium membership that gives members unlimited free access to 400 PDU accredited webinar recordings, and waives the PDU processing fees on all live and recorded webinars. The Software Process and Measurement Cast receives some support if you sign up here. All the revenue our sponsorship generates goes toward bandwidth, hosting and new cool equipment to create more and better content for you. Support the SPaMCAST and learn from the ITMPI.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you or your team.” Support SPaMCAST by buying the book here.

Available in English and Chinese.


Categories: Process Management
