Software Development Blogs: Programming, Software Testing, Agile, Project Management


Methods & Tools


Process Management

Agile Portfolio Metrics: Mix and Flow Metrics

Portfolio Metrics Help Clear The Fog!


Organizations and pundits often recommend a wide range of Agile portfolio metrics.  Almost all of these metrics provide rich and interesting information about the portfolio and how an organization manages it. Agile Portfolio Metrics Categories describes five high-level categories so organizations can get the greatest measurement value with the least measurement effort. Each category focuses on a question. For example, if an organization wanted to understand how quickly prioritized projects reach the point in the portfolio where they can deliver value, metrics from the Demand and Capacity (flow) category would be useful.  The first two categories in the model are detailed below.

Portfolio Mix metrics provide portfolio managers and the organization with an understanding of how the organization allocates work across different teams or classes of service. 

Basic Focus/Question: How has the organization allocated its resources?  Alternate versions of this question include: What is the forecasted business value by class of service? What business value is currently in the process of development?

  1. Portfolio Budget by Class of Service provides allocation information based on classic budgeting mechanisms.
  2. Portfolio Backlog Value (by Class of Service) provides a view of how work is allocated based on value.
  3. Portfolio Value WIP (by Class of Service) provides a view of the value of the work that has entered the development process.
  4. Portfolio ROI or ROA (by Class of Service) provides a view of the return (pick the type) of the work in the portfolio.  Generally, this metric is categorized by status to project the flow of return.
  5. Cost of Delay (by Class of Service) provides a view of how efficiently the portfolio is prioritized and managed. The cost of delay (CoD) is a method of placing a value on the waste inherent in delay.
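
As a back-of-the-envelope illustration of the Cost of Delay metric, the sketch below totals the value lost while queued portfolio items wait. The projects and weekly CoD figures are invented for the example:

```javascript
// Hypothetical queued portfolio items with estimated weekly cost of delay.
const backlog = [
  { name: "Project A", weeklyCostOfDelay: 50000, weeksDelayed: 4 },
  { name: "Project B", weeklyCostOfDelay: 20000, weeksDelayed: 10 },
];

// Total value lost to delay across the portfolio.
const totalCostOfDelay = backlog.reduce(
  (sum, item) => sum + item.weeklyCostOfDelay * item.weeksDelayed,
  0
);

console.log(totalCostOfDelay); // 400000
```

Trending this total over time gives a rough signal of whether portfolio prioritization is improving or deteriorating.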

Data for this category is often "harvested" from the business cases and includes return on investment (ROI), return on assets (ROA) and, in mature Agile organizations, estimated cost of delay.  The data needed to define ROI and ROA will include benefit and cost data. Use cost and benefit data to investigate whether portfolio allocations are maximizing value or meeting other goals.

Demand and Capacity metrics provide feedback on the organizational capacity and the demand placed on the capacity.

Basic Focus/Question: How effectively is capacity tuned to meet demand?  Alternate versions of the question include: Where are the bottlenecks in the flow of the work?  How much forecasted value enters the portfolio funnel compared to the amount that is delivered?

  1. Weighted Shortest Job First (WSJF) provides data on how effectively work is prioritized and feedback on the WSJF estimates for tuning.
  2. Velocity or Value Velocity provides a view of the rate work moves through the portfolio lifecycle. This metric type uses size (function points or story points) or value as its basis.
  3. Work in Process Violations / Expedited Requests provides a view of how many times an organization exceeds capacity limits.  Exceeding capacity limits is a reflection of a demand/capacity mismatch.
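
To make the WSJF item above concrete, here is a minimal sketch using the common formulation WSJF = cost of delay / job size; the jobs and their relative scores are hypothetical:

```javascript
// Hypothetical jobs with relative cost-of-delay and job-size estimates.
const jobs = [
  { name: "Feature X", costOfDelay: 8, jobSize: 2 },
  { name: "Feature Y", costOfDelay: 13, jobSize: 13 },
  { name: "Feature Z", costOfDelay: 5, jobSize: 1 },
];

// Score each job and sort so the highest WSJF (do first) comes out on top.
const prioritized = jobs
  .map((job) => ({ ...job, wsjf: job.costOfDelay / job.jobSize }))
  .sort((a, b) => b.wsjf - a.wsjf);

console.log(prioritized.map((job) => job.name)); // Feature Z, Feature X, Feature Y
```

Comparing the WSJF-implied order against the order in which work actually entered development is one way to generate the tuning feedback the metric promises.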

Demand and capacity metrics use data generated by the movement of work through the portfolio lifecycle. Changes in trend data of any of these metrics often provide an early sign of a mismatch between demand and capacity. 

Effective measurement is a balance.  The sample of metrics in the first two categories of the Agile Portfolio Metrics framework provides part of a balanced picture.  Which metrics an organization chooses is linked to the uncertainty it faces.



Categories: Process Management

Agile Portfolio Metrics Categories


Portfolios provide direction

The term portfolio has many uses in software development organizations, ranging from product portfolios to application portfolios.  I have even seen scenarios in which organizations put all their development, enhancement and maintenance efforts into a single bucket for prioritization and queuing until there was the capacity to work on them.  We will use an inclusive definition of the portfolio that includes, at a high level, all the software development and enhancement work in an organization.  The definition of the term organization is flexible, and can be defined to meet any specific reporting and control needs.  Products, lines of business, and even applications are often used to define portfolios. Agile portfolio metrics are integral to prioritization and validating the flow of work.  I break Agile portfolio metrics into five high-level categories.

  1. Portfolio Mix metrics provide portfolio managers and the organization with an understanding of how the organization allocates work across different teams or classes of service.  For example, one organization I have worked with classifies the work in their portfolio as innovation (big bets that might leapfrog competitors), grow the business (extensions of current applications or products), run the business (work that maintains current products or applications) and expedited (high priority or squeaky wheel).  Data for this category is often "harvested" from the business cases.
  2. Demand and Capacity metrics provide feedback on the organizational capacity and the demand placed on that capacity. Metrics are often focused on flow and might include work-in-process limits and value flow.  Much of the data for metrics in this category is captured by observing the movement of work through the portfolio lifecycle.
  3. Value metrics provide feedback on the economic value of work in, and delivered from, the portfolio. Data for these metrics comes from market share data and accounting data, including the cost of work performed, sales, margin contribution, and cost avoidance.
  4. Portfolio Health metrics provide feedback on how satisfied stakeholders and team members are with the work performed. Health metrics often use satisfaction surveys (customer and team), net promoter scores and quantified risk metrics as a sign of portfolio health.  One organization I recently observed measures backlog age as a proxy for portfolio health.  Satisfaction data is often gathered via survey tools, while risk data comes from calibrated estimates identified for the work.
  5. Financial Management metrics leverage funding and budget information.  The data comes from accounting systems and project tracking systems, depending on where the work is in the portfolio lifecycle.

Agile portfolio metrics are only useful if they provide value. As our re-read of Hubbard's How to Measure Anything has made plain, just because we can measure something does not mean we should. All measures must have economic value or they are not useful.  Metrics and measures add value if they reduce uncertainty so that we can make better decisions.  The five categories of metrics are targeted at providing data for different decisions (which will be explored next).


Categories: Process Management

Should Scrum Teams Include a Stretch Goal In Their Sprints?

Mike Cohn's Blog - Tue, 02/09/2016 - 16:00

There are, of course, a variety of ways to go about planning a sprint. I’ve written previously about velocity-driven sprint planning and commitment-driven sprint planning and my preference. But regardless of which approach a team takes to sprint planning, there is also the question of how full to fill the sprint.

Some teams prefer to leave plenty of excess capacity in each sprint, perhaps so there is time for emergent needs or experimentation. Other teams prefer to fill the sprint to the best of their ability to forecast their capacity.

Still other teams like to take on a “stretch goal” each sprint, which is kind of a product backlog item that is not quite in the sprint, but is one the team hopes to complete if the sprint goes well.

In this post, I’d like to share my thoughts on bringing a stretch goal into a sprint.

This is one of those things that needs to be left entirely up to the team. It should not be up to the ScrumMaster or the product owner, but up to the team. Some teams do extremely well with a stretch goal. Other teams do not.

It really depends on how the team views the stretch goal.

For example, I feel stretch goals are like a crushing weight. I feel like I need to complete them. When I set a goal, I almost always achieve it. I have a hard time distinguishing between what I call a “normal goal” and stretch goal. I don’t think this is good, but it’s who I am. But, I’m not the only one who does this.

If a team included me and perhaps a couple of others like me, we would probably not do well with a stretch goal. The stretch goal would likely be in our minds and possibly even affect our ability to finish all of the main work of the sprint.

Other people--those unlike me--have what I’ll call a more mature attitude toward stretch goals. They can look at it as it’s intended. They can think, “OK, great if we get to it but no big deal if not.” Teams comprising mostly people like that will probably do quite well with a stretch goal.

So: Should your team have a stretch goal in their sprints?

This really has to be up to the team. Unless I’m on the team. Then the answer is no.

Does Your Team Use Stretch Goals?

What do you do? Does your team use stretch goals? Does it help? Please share your thoughts in the comments below.

Managing Programmers

From the Editor of Methods & Tools - Tue, 02/09/2016 - 15:24
Programmers are not like the other kids. They cannot and should not be managed like normal people if your intent is to produce high quality software. The things you would do to make the process go faster will actually make things go slower. This session gives you insight on the care and feeding of programmers, […]

Automated UI Testing with React Native on iOS

Xebia Blog - Mon, 02/08/2016 - 21:30

React Native is a technology to develop mobile apps on iOS and Android that have a near-native feel, all from one codebase. It is a very promising technology, but the documentation on testing can use some more depth. There are some pointers in the docs but they leave you wanting more. In this blog post I will show you how to use XCUITest to record and run automated UI tests on iOS.

Start by generating a brand new React Native project and make sure it runs fine:
react-native init XCUITest && cd XCUITest && react-native run-ios
You should now see the default "Welcome to React Native!" screen in your simulator.

Let's add a textfield and display the results on screen by editing index.ios.js:

// The usual index.ios.js imports (React, Component, StyleSheet,
// Text, TextInput and View from 'react-native') are assumed here.
class XCUITest extends Component {

  constructor(props) {
    super(props);
    this.state = { text: '' };
  }

  render() {
    return (
      <View style={styles.container}>
        <TextInput
          testID="test-id-textfield"
          style={{borderWidth: 1, height: 30, margin: 10}}
          onChangeText={(text) => this.setState({text})}
          value={this.state.text}
        />
        <View testID="test-id-textfield-result" >
          <Text style={{fontSize: 20}}>You typed: {this.state.text}</Text>
        </View>
      </View>
    );
  }
}

Notice that I added testID="test-id-textfield" and testID="test-id-textfield-result" to the TextInput and the View. This causes React Native to set an accessibilityIdentifier on the native view. This is something we can use to find the elements in our UI test.

Recording the test

Open the Xcode project in the ios folder and click File > New > Target. Then pick iOS > Test > iOS UI Testing Bundle. The defaults are ok, click Finish. Now there should be an XCUITestUITests folder with a XCUITestUITests.swift file in it.

Let's open XCUITestUITests.swift and place the cursor inside the testExample method. At the bottom left of the editor there is a small red button. If you press it, the app will build and start in the simulator.

Every interaction you now have with the app will be recorded and added to the testExample method, just like in the looping gif at the bottom of this post. Now type "123" and tap on the text that says "You typed: 123". End the recording by clicking on the red dot again.

Something like this should have appeared in your editor:

      let app = XCUIApplication()
      app.textFields["test-id-textfield"].tap()
      app.textFields["test-id-textfield"].typeText("123")
      app.staticTexts["You typed: 123"].tap()

Notice that you can pull down the selectors to change them. Change the "You typed" selector to make it more specific, change the .tap() into .exists and then surround it with XCTAssert to do an actual assert:

      XCTAssert(app.otherElements["test-id-textfield-result"].staticTexts["You typed: 123"].exists)

Now if you run the test it will show you a nice green checkmark in the margin and say "Test Succeeded".

In this short blog post I showed you how to use the React Native testID attribute to tag elements, and how to record and adapt an XCUITest in Xcode. There is a lot more to be told about React Native, so don't forget to follow me on twitter (@wietsevenema).

Recording UI Tests in Xcode

Making Agile even more Awesome. By Nature.

Xebia Blog - Mon, 02/08/2016 - 11:31

Watching the evening news, it should be no surprise that the world around us is changing ever faster and is becoming too complex to fit in a system that we as humankind can still control.  We have to learn and adapt much faster to solve our epic challenges. The Agile mindset and methodologies are an important mainstay here. Adding some principles from nature makes them even more awesome.

In organizations, and in our lives, we are in a constant battle "beating the system": steering the economy, nature, life.  We are fighting against it, and becoming less and less successful at it.  What should change here?

First, we could start to let go of the things we can't control and fully trust the system we live in: Nature. It's the ultimate Agile system, continuously learning and adapting to changing environments.  But how?

We have created planes and boats by observing how nature did it: Biomimetics.  In my job as an Agile Innovation consultant, I’m using these and other related principles:

  1. Innovation engages in lots of experimentation: life creates success models through making mistakes, survival of the fittest.
  2. Continuously improve by feedback loops.
  3. Use only the energy you need. Work smart and effective.
  4. Fit form to function. Function takes priority over esthetics.
  5. Recycle: Resources are limited, (re)use them smart.
  6. Encourage cooperation.
  7. Positivity is an important source of energy, like sunlight can be for nature.
  8. Aim for diversity. For example, diverse problem solvers working together can outperform groups of high-ability problem solvers.
  9. Demand local expertise, to be aware of the need for local differences.
  10. Create a safe environment to experiment. Like Facebook is able to release functionality every hour for a small group of users.
  11. Outperform frequently to gain endurance and to stay fit.
  12. Reduce complexity by minimizing the number of materials and tools. For example, 96% of life on this planet is made up of six types of atoms: Carbon, Hydrogen, Oxygen, Nitrogen, Phosphorus and Sulphur.

How to kickstart your start-up?

Until a couple of years ago, innovative tools were only available to financially powerful companies.  Now, innovative tools like 3D printing and the Internet of Things are accessible to everybody.  The same applies to Agile.  This enables you to enter new markets at extremely low marginal cost.  In these start-ups you can recognize elements of natural agility.  A brilliant example is Joe Justice's WikiSpeed. In less than 3 months he succeeded in building a 100 Mile/Gallon street-legal car, defeating companies like Tesla.  This all shows you can solve apparently impossible challenges by trusting your natural common sense.  It's that simple.

Paul Takken (Xebia) and Joe Justice (Scrum Inc.) are currently working together on several global initiatives, coaching governments and large enterprises in reinventing themselves so they can anticipate today's epic challenges.  This is done by smarter use of people's talents, tooling, materials and the Agile and Lean principles mentioned above.

Spamcast 380 - Kim Robertson, The Big Picture of Configuration Management

Software Process and Measurement Cast - Sun, 02/07/2016 - 23:00

Software Process and Measurement Cast 380 features our interview with Kim Robertson. Kim and I talked about big picture configuration management.  Without good configuration management, work products and programs often go wildly astray. Kim describes a process that is as old as dirt . . . but WORKS and delivers value. We also discussed the book Kim co-authored with Jon Quigley (Jon was interviewed in SPaMCAST 346), Configuration Management: Theory, Practice, and Application.

Kim's Bio

Kim Robertson is an NDIA Certified Configuration Management (CM) practitioner, consultant, and trainer with over 30 years of experience in contracts, subcontracts, finance, systems engineering and configuration management. He has an advanced degree in organizational management with a government contracts specialty and is the co-author of Configuration Management: Theory, Practice, and Application. He can be reached at Kim.Robertson@ValueTransform.com.

If you are interested in the seed questions used to frame our interview please visit the SPaMCAST Facebook page.

Re-Read Saturday News

We continue the re-read of How to Measure Anything, Finding the Value of "Intangibles in Business" Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter Eight, we begin the transition from what to measure to how to measure.

Upcoming Events

I am facilitating the CMMI Capability Challenge. This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea which will be presented at the CMMI Institute’s Capability Counts 2016 conference.  The next CMMI Capability Challenge session will be held on February 17 at 11 AM EST.

http://cmmiinstitute.com/conferences#thecapabilitychallenge

I will be at the QAI Quest 2016 in Chicago beginning April 18th through April 22nd.  I will be teaching a full day class on Agile Estimation on April 18 and presenting Budgeting, Estimating, Planning and #NoEstimates: They ALL Make Sense for Agile Testing! on Wednesday, April 20th.  Register now! 

Next SPaMCAST

The next Software Process and Measurement Cast features our essay on Agile adoption.  Words are important. They can rally people to your banner or create barriers. Every word communicates information and intent. There has been a significant amount of energy spent discussing whether the phrase 'Agile transformation' delivers the right message. There is a suggestion that 'adoption' is a better term. We shall see!

We will also have an entry from Gene Hughson’s Form Follows Function Blog. Gene will discuss his blog entry, Seductive Myths of Greenfield Development. And a visit from the Software Sensei, Kim Pries!  Kim’s essay is on women in the tech field.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chemuturi and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: "This book will prove that software projects should not be a tedious process, for you or your team." Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


How To Measure Anything, Chapter 8: The Transition: From What to Measure to How to Measure

HTMA

How to Measure Anything, Finding the Value of "Intangibles in Business" Third Edition

Chapter 8 of How to Measure Anything, Finding the Value of "Intangibles in Business" Third Edition, begins the third section of the book.  Part III is focused on Measurement Methods.  Chapter 8 is titled, The Transition: From What to Measure to How to Measure. This is where you roll up your sleeves, crack your knuckles and get to work. Whenever you are beginning something new, the question of where to start emerges.  If I were to summarize the chapter in three sentences I would say:

  1. Understand the concepts of measurement and error.
  2. Break what you are measuring down into smaller pieces.
  3. Never reinvent the wheel; research how others have measured what you are interested in measuring.

In measurement, we collect data by observing something happening. Our instruments could be a survey, an experiment, a gauge, or many other mechanisms. Instruments have varying levels of precision.  For example, measuring the components of water using chromatography is far more precise than an employee attitude survey. Your choice of instrument will be driven by how much uncertainty the measurement needs to remove.   Remember the goal of measurement is to reduce uncertainty to an acceptable level.  We use standardized instruments because they are consistent and can be calibrated to account for certain types of error.  For example, a colleague developed a sizing instrument for software that was always 12% low when compared to a full IFPUG function point count. Knowing the error allowed him to calibrate the instrument.
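
The calibration idea in the sizing example above can be sketched in a few lines. The 12% bias and the raw reading are hypothetical; the correction simply divides out the known systematic error:

```javascript
// A hypothetical instrument that reads a known 12% low against
// full IFPUG function point counts.
const KNOWN_BIAS = 0.12;

// Correct a raw reading by dividing out the systematic under-count.
function calibrate(rawSize) {
  return rawSize / (1 - KNOWN_BIAS);
}

console.log(calibrate(440)); // roughly 500 function points
```

The same pattern applies to any instrument whose systematic error is stable enough to estimate: measure the bias once against a trusted baseline, then correct every subsequent reading.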

Some variables are hard to measure because they represent nebulous concepts or processes, such as the value of information technology.  Decomposition is a step that starts with the variable you want to measure and then breaks it down into its component parts. That way you can identify where there is uncertainty, which parts are observable, which are "easy" to measure and which parts have value.  As noted in Chapter 7, not everything that is measurable has economic value.  Hubbard points out that there is a decomposition effect: as we decompose a metric, it is possible to learn enough not to need new observations. Quoting Hubbard:

"The entire process of decomposition itself is a gradual conceptual revelation for those who think that something is immeasurable."

It is easy to fall into the trap of believing that your measurement problem is unique. Hubbard suggests that we start the measurement process with the assumption that someone else has done this before us.  If someone has developed a measurement solution (or something close) for what we need, then we can tap into those ideas with some research.  The internet and the library provide rich sources of secondary material. Start by searching on your topic while including terms that will help filter out fluff articles.  For example, if you are looking for information on measuring software productivity, add terms like data, correlation or table to the search criteria.  Consider trolling the reference links in Wikipedia articles.  Also consider reviewing the bibliographies of somewhat related articles.

Once the research has been done, observations need to be made to collect data. One technique for determining how to measure, suggested by Hubbard, is to describe in detail how you see or detect the object being measured. This step is not always easy, especially for anyone who believes a specific object or concept can't be measured. One of the important pieces of advice from Hubbard, for when generating a description is difficult, is that if you have any basis for believing an object exists, you are observing it.  Describing how you see or detect something will provide a strong sign of how to observe the variable.  Determining where to start measuring is like hunting for the first few pieces of a jigsaw puzzle (the fun kind, not the one my wife has going that is just varieties of green and gray).

Here are some of the questions that Hubbard uses to begin:

  • Does it leave a trail of any kind? Almost all phenomena or processes generate some evidence that they occur.  For example, seeing a contrail is evidence that a jet has passed overhead.
  • If the trail doesn't already exist, can you observe it directly, or at least a sample of it? I am often struck by the shock on people's faces when I suggest that they actually go and observe what is happening. Early in my career in process improvement, we had data that showed a large productivity fall-off in key punch operators beginning 30 minutes before lunch.  We had no audit trail that "told" us what was happening, and did not understand until we actually watched and measured what was happening.  Let's leave the answer at a poor application design and move on.
  • If it doesn't appear to leave behind a detectable trail of any kind, and direct one-time observations do not seem workable, can you devise a way to begin to track it now? For example, CERN built the Large Hadron Collider to discover the Higgs boson particle.
  • If tracking the existing conditions doesn't suffice, can you force the phenomenon to occur under conditions that allow easier observation? Experimentation in the corporate environment is not always easy, and is often not perceived as fair.  Remember, when experimenting on projects the fairness issue might be as broad as who is allowed to participate in the experiment or as nitpicky as why team members are asked to expend effort on collecting extra data. Dan Ariely, in a recent Wall Street Journal blog, addressed the fairness issue, noting that across an organization (or between teams), "If you can figure out how to frame them as fair, they might become more palatable."

All measurements include error, of which there are two basic types. Systematic errors are those that are consistent and not just random variations from one observation to the next.  The error in the software measurement example earlier represents a systematic error.  You can account for systematic error through careful calibration.  The second type of error is random error.  Individual observations influenced by random error can't, by definition, be precisely predicted. Understanding the amount and types of error present in any measurement affects its value.  While you can't remove all error, you should still understand (mathematically) the error present and eliminate as much of it as is economically feasible.

Measurement done on a formal, systematic basis provides information that has value for making decisions.  A colleague has a business that, as a third party, measures the amount of software produced by a development organization and delivered to the client.  Based on that size, the client pays the producer. In this example, measurement is used to make a payment decision (and often the estimated size is used to make a purchasing decision). All parties monitor the amount of systematic and random error in the transaction so that the cost and precision meet the needs of all parties.  In earlier installments we discussed the expected value of perfect information (EVPI); in this example, I have been told that my colleague and his clients have discussed the EVPI of the information and that the actual cost is far below the EVPI.  The goal is to measure just enough to reduce their uncertainty in the transaction to acceptable levels by focusing on the observable portion of the transaction that is relevant to both parties.  I suspect that they all read this chapter before agreeing on what to measure.

Previous Installments in Re-read Saturday, How to Measure Anything, Finding the Value of “Intangibles in Business,” Third Edition

  1. How To Measure Anything, Third Edition, Introduction
  2. Chapter 1: The Challenge of Intangibles
  3. Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily
  4. Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t
  5. Chapter 4: Clarifying the Measurement Problem
  6. Chapter 5: Calibrated Estimates: How Much Do You Know Now?
  7. Chapter 6: Quantifying Risk Through Modeling
  8. Chapter 7: Quantifying The Value of Information

Categories: Process Management

Storytelling A Tool For More Than Just Presentations

Stories help you visualize your goals

In the Harvard Business Review article The Irresistible Power of Storytelling as a Strategic Business Tool by Harrison Monarth (March 11, 2014), Keith Quesenberry, a researcher from Johns Hopkins, notes, “People are attracted to stories because we’re social creatures and we relate to other people.” The power of storytelling is that it helps us understand each other and develop empathy. Storytelling is a tool that is useful in many scenarios: for presentations, but also to help people frame their thoughts and to gather information. A story provides a deeper and more nuanced connection with information than most lists of PowerPoint bullets or even structured requirements documents. Here are just a few scenarios (other than presentations) where stories can be useful:

Imagining the impact of change
Almost every change project begins with a vision statement or a set of high-level goals.  Rarely are these visions and goals inspirational, and rarely do they provide enough structure to guide behavior as the program progresses.  Stories can be used to generate a more emotive vision of the future in the words of the leader(s) or stakeholders. These stories describe what working for the organization would be like after the change happens, or a day in the life of the new organization.  The story that unfolds will provide a more nuanced set of requirements, expose gaps in the vision, and set expectations for how people will behave, in language that provides both guidance and motivation.

Establishing goals
Asking an executive or even a group of leaders for their goals will typically generate a crisp bullet-pointed list. While important, these types of goals don’t tend to be actionable for anyone other than the leader whose MBO is tied to the goal. For everyone else, the bullet-pointed goals describe an endpoint without the nuance of what that endpoint means and how to get there. In this scenario, a story is the raw material used to generate goals at an actionable level.  In scenarios where an organization has already developed a vision or specific SMART goals, I often ask the team or leaders to imagine that they have attained those goals and to tell me how that happened or how they got there.  I have the storytellers use story patterns such as the Hero’s Journey or Freytag’s Pyramid as tools to organize their thoughts and stories.  The use of patterns helps the storyteller think through and describe the entire journey to a goal. The storyteller envisions the whole process, which provides motivation for attaining the goal.

Storytelling to generate scenarios for epics, features and stories
One of the most interesting dilemmas most Agile teams face is generating an initial backlog. In classic waterfall projects, a business analyst or two, a group of subject matter experts and possibly a project manager would get together to generate a requirements document. Sometimes Agile projects and products use the same process (this is a hybrid form of Agile).  A better option is to assemble a cross-functional team of SMEs, product owners, product managers and development team personnel (or at least a subset of the developers), and use facilitated storytelling to generate a set of scenarios, which are then decomposed into epics, features and user stories using standard grooming techniques. The stories that emerge will include functional, non-functional and technical themes, rather than only the functional view that user requirements documents usually exhibit. Beginning by generating story-based scenarios not only provides the team with the information needed to create user stories, but also provides context for what is being built.

Storytelling might seem like something you do only when you are making a presentation or explaining current status.  Storytelling might even seem a little old-fashioned, yet it is an integral part of the human experience.  Stories establish a vision of where we want a journey to end and of how we want to make the journey.  Stories provide a rich context and the emotional power needed to help guide a team as they work through the process of writing, testing and demonstrating code.  Starting at the beginning: if we think stories are valuable, then we need to embrace storytelling as a tool to develop understanding.

 


Categories: Process Management

Robot Framework and the keyword-driven approach to test automation - Part 2 of 3

Xebia Blog - Wed, 02/03/2016 - 18:03

In part 1 of our three-part post on the keyword-driven approach, we looked at the position of this approach within the history of test automation frameworks. We elaborated on the differences, similarities and interdependencies between the various types of test automation frameworks. This provided a first impression of the nature and advantages of the keyword-driven approach to test automation.

In this post, we will zoom in on the concept of a 'keyword'.

What are keywords? What is their purpose? What are the advantages of utilizing keywords in your test automation projects? And are there any disadvantages or risks involved?

As stated in an earlier post, the purpose of this first series of introductory-level posts is to prevent all kinds of intrusive expositions in later posts. These later posts will be of a much more practical, hands-on nature and should be concerned solely with technical solutions, details and instructions. However, for those who are taking their first steps in the field of functional test automation and/or are inexperienced in the area of keyword-driven test automation frameworks, we would like to provide some conceptual and methodological context. By doing so, those readers may grasp the follow-up posts more easily.

Keywords in a nutshell

A keyword is a reusable test function

The term ‘keyword’ refers to a callable, reusable, lower-level test function that performs a specific, delimited and recognizable task. For example: ‘Open browser’, ‘Go to url’, ‘Input text’, ‘Click button’, ‘Log in’, ‘Search product’, ‘Get search results’, ‘Register new customer’.

Most, if not all, of these are recognizable not only for developers and testers, but also for non-technical business stakeholders.

Keywords implement automation layers with varying levels of abstraction

As can be gathered from the examples given above, some keywords are more atomic and specific (or ‘specialistic’) than others. For instance, ‘Input text’ will merely enter a string into an edit field, while ‘Search product’ will be comprised of a chain (sequence) of such atomic actions (steps), involving multiple operations on various types of controls (assuming GUI-level automation).

Elementary keywords, such as ‘Click button’ and ‘Input text’, represent the lowest level of reusable test functions: the technical workflow level. These often do not have to be created, but are provided by existing, external keyword libraries (such as Selenium WebDriver) that can be made available to the framework. A situation that could require the creation of such atomic, lowest-level keywords would be automating at the API level.

The atomic keywords are then reused within the framework to implement composite, functionally richer keywords, such as 'Register new customer', 'Add customer to loyalty program', 'Search product', 'Add product to cart', 'Send gift certificate' or 'Create invoice'. Such keywords represent the domain-specific workflow activity level. They may in turn be reused to form other workflow activity level keywords that automate broader chains of workflow steps. Such keywords then form an extra layer of wrappers within the layer of workflow activity level keywords. For instance, 'Place an order' may be comprised of 'Log customer in', 'Search product', 'Add product to cart', 'Confirm order', etc. The modularization granularity applied to the automation of such broader workflow chains is determined by trading off various factors against each other, mainly the desired levels of readability (of the test design), of maintainability/reusability and of coverage of possible alternative functional flows through the involved business process. The eventual set of workflow activity level keywords forms the 'core' DSL (Domain Specific Language) vocabulary in which the highest-level specifications/examples/scenarios/test designs/etc. are to be written.
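To make the layering concrete, here is a minimal sketch in plain Python (hypothetical keyword and locator names, not an actual Robot Framework library): atomic, technical workflow level keywords are reused by domain-specific, workflow activity level keywords, which in turn compose into a broader workflow chain:

```python
actions = []  # stands in for a real driver such as Selenium WebDriver

# --- Technical workflow level: atomic keywords ---
def input_text(locator, text):
    actions.append(f"type '{text}' into {locator}")

def click_button(locator):
    actions.append(f"click {locator}")

# --- Workflow activity level: composite, domain-specific keywords ---
def log_customer_in(user, password):
    input_text("id=user", user)
    input_text("id=pwd", password)
    click_button("id=login")

def search_product(term):
    input_text("id=search", term)
    click_button("id=go")

# --- Broader workflow chain, built from activity-level keywords ---
def place_an_order(user, password, product):
    log_customer_in(user, password)
    search_product(product)
    click_button("id=add-to-cart")
    click_button("id=confirm-order")

place_an_order("Bob Plissken", "Welcome123", "gift certificate")
print(len(actions))  # → 7 atomic actions were recorded
```

A test case would only ever call `place_an_order` or its activity-level parts; the atomic keywords stay out of the DSL, just as described above.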

The latter (i.e. scenarios/etc.) represent the business rule level. For example, a high-level scenario might be:  'Given a customer has joined a loyalty program, when the customer places an order of $75,- or higher, then a $5,- digital gift certificate will be sent to the customer's email address'. Such rules may of course be comprised of multiple 'given', 'when' and/or 'then' clauses, e.g. multiple 'then' clauses conjoined through an 'and' or 'or'. Each of these clauses within a test case (scenario/example/etc.) is a call to a workflow activity level, composite keyword. As explicated, the workflow-level keywords, in turn, are calling elementary, technical workflow level keywords that implement the lowest-level, technical steps of the business scenario. The technical workflow level keywords will not appear directly in the high-level test design or specifications, but will only be called by keywords at the workflow activity level. They are not part of the DSL.

Keywords thus live in layers with varying levels of abstraction, where, typically, each layer reuses (and is implemented through) the more specialistic, concrete keywords from lower levels. Lower-level keywords are the building blocks of higher-level keywords, and at the highest level your test cases will also consist of keyword calls.

Of course, your automation solution will typically contain other types of abstraction layers, for instance a so-called 'object-map' (or 'gui-map') which maps technical identifiers (such as an xpath expression) onto logical names, thereby enhancing maintainability and readability of your locators. Of course, the latter example once again assumes GUI-level automation.

Keywords are wrappers

Each keyword is a function that automates a simple or (more) composite/complex test action or step. As such, keywords are the 'building blocks' for your automated test designs. When having to add a customer as part of your test cases, you will not write out (hard code) the technical steps (such as entering the first name, entering the surname, etc.), but you will have one statement that calls the generic 'Add a customer' function which contains or 'wraps' these steps. This wrapped code, as a whole, thereby offers a dedicated piece of functionality to the testers.

Consequently, a keyword may encapsulate sizeable and/or complex logic, hiding it and rendering it reusable and maintainable. This mechanism of keyword-wrapping entails modularization, abstraction and, thus, optimal reusability and maintainability. In other words, code duplication is prevented, which dramatically reduces the effort involved in creating and maintaining automation code.

Additionally, the readability of the test design improves, since the clutter of technical steps is replaced by a human-readable, parameterized call to the function, e.g.: | Log customer in | Bob Plissken | Welcome123 |. Using so-called embedded or interposed arguments, readability may be enhanced even further. For instance, declaring the login function as ‘Log ${userName} in with password ${password}’ will allow a test scenario to call the function like this: ‘Log Bob Plissken in with password Welcome123’.

Keywords are structured

As mentioned in the previous section, keywords may hide rather complex and sizeable logic. This is because the wrapped keyword sequences may be embedded in control/flow logic and may feature other programmatic constructs. For instance, a keyword may contain:

  • FOR loops
  • Conditionals (‘if, elseIf, elseIf, …, else’ branching constructs)
  • Variable assignments
  • Regular expressions
  • Etc.

Of course, keywords will feature such constructs more often than not, since encapsulating the involved complexity is one of the main purposes of a keyword. In the second and third generations of automation frameworks, this complexity was an integral part of the test cases, leading to automation solutions that were inefficient to create, hard to read and understand, and even harder to maintain.
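As an illustration of such encapsulated flow logic (a plain Python sketch with hypothetical names, not a real keyword library): a composite keyword that hides a FOR loop, a conditional, a variable assignment and a regular expression behind one readable call:

```python
import re

def get_search_results(page_lines):
    """Composite keyword: return the number of result items reported
    on a results page, scanning the lines until a hit-count is found."""
    for line in page_lines:                            # FOR loop
        match = re.search(r"(\d+) items found", line)  # regular expression
        if match:                                      # conditional
            count = int(match.group(1))                # variable assignment
            return count
    return 0

page = ["Welcome back, Bob", "23 items found for 'gift certificate'"]
print(get_search_results(page))  # → 23
```

The caller of the keyword sees a single readable statement; none of this control logic leaks into the test design.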

Being a reusable, structured function, a keyword can also be made generic by taking arguments (as briefly touched upon in the previous section). For example, ‘Log in’ takes the arguments ${user}, ${pwd} and perhaps ${language}. This adds to the already high level of reusability of a keyword, since multiple input conditions can be tested through the same function. As a matter of fact, it is precisely this aspect of a keyword that enables so-called data-driven test designs.

Finally, a keyword may also have return values, e.g. ‘Get search results’ returns ${nrOfItems}. The return value can be used for a myriad of purposes, for instance to perform assertions, as input for decision-making, or to pass into another function as an argument. Some keywords will return nothing, but only perform an action (e.g. change the application state, insert a database record or create a customer).
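A small sketch of how return values get used (plain Python, hypothetical keywords): the value returned by one keyword feeds an assertion and a decision, while an action-only keyword returns nothing and merely changes state:

```python
sent = []  # stands in for application state changed by an action keyword

def get_search_results(items):
    """Query-style keyword: returns the number of items found."""
    return len(items)

def send_gift_certificate(amount):
    """Action-only keyword: returns nothing, just changes state."""
    sent.append(amount)

nr_of_items = get_search_results(["boots", "hat", "scarf"])
assert nr_of_items > 0, "expected at least one search result"  # assertion
if nr_of_items >= 3:                 # return value drives decision-making
    send_gift_certificate(5)
print(len(sent))  # → 1
```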

Risks involved

With great power comes great responsibility

The benefits of using keywords have been explicated above. Amongst other advantages, such as enhanced readability and maintainability, the keyword-driven approach provides a lot of power and flexibility to the test automation engineer. Quasi-paradoxically, it is in harnessing this power and flexibility that the primary risk involved in the keyword-driven approach is introduced. That this risk should be of topical interest to us will become clear from a short digression into the subject of 'the new testing'.

In many agile teams, both 'coders' and 'non-coders' are expected to contribute to the automation code base. The boundaries between these (and other) roles are blurring. Despite the current (and sometimes rather bitter) polemic surrounding this topic, it seems to be inevitable that the traditional developer role will have to move towards testing (code) and the traditional tester role will have to move towards coding (tests). Both will use testing frameworks and tools, whether it be unit testing frameworks (such as JUnit), keyword-driven functional test automation frameworks (such as RF or Cucumber) and/or non-functional testing frameworks (such as Gatling or Zed Attack Proxy).

To this end, the traditional developer will have to become knowledgeable and gain experience in the field of testing strategies. Test automation that is not based on a sound testing strategy (and attuned to the relevant business and technical risks), will only result in a faster and more frequent execution of ineffective test designs and will thus provide nothing but a false sense of security. The traditional developer must therefore make the transition from the typical tool-centric approach to a strategy-centric approach. Of course, since everyone needs to break out of the silo mentality, both developer and tester should also collaborate on making these tests meaningful, relevant and effective.

The challenge for the traditional tester may prove to be even greater and it is there that the aforementioned risks are introduced. As stated, the tester will have to contribute test automation code. Not only at the highest-level test designs or specifications, but also at the lowest-level-keyword (fixture/step) level, where most of the intelligence, power and, hence, complexity resides. Just as the developer needs to ascend to the 'higher plane' of test strategy and design, the tester needs to descend into the implementation details of turning a test strategy and design into something executable. More and more testers with a background in 'traditional', non-automated testing are therefore entering the process of acquiring enough coding skills to be able to make this contribution.

However, by having (hitherto) inexperienced people authoring code, severe stability and maintainability risks are introduced. Although all current (i.e. keyword-driven) frameworks facilitate and support creating automation code that is reusable, maintainable, robust, reliable, stable and readable, code authors still have to actively realize these qualities, by designing for them and building them into their automation solutions. Non-coders though, in my experience, (at least initially) have quite some trouble understanding and (even more dangerously) appreciating the critical importance of applying design patterns and other best practices to their code. That is, most traditional testers seem to be able to learn how to code (at a sufficiently basic level) rather quickly, partially because, generally, writing automation code is less complex than writing product code. They also get a taste for it: they soon become passionate and ambitious, eager to apply their newly acquired skills and to create lots of code. Caught in this rush, they often forget to refactor their code, downplay the importance of doing so (and the dangers involved) or simply opt to postpone it until it becomes too large a task. Because of this, even testers who have been properly trained in applying design patterns may still deliver code that is monolithic, unstable/brittle, non-generic and hard to maintain. Depending on the level at which the contribution is to be made (lowest level in code or mid-level in scripting), these risks apply to a greater or lesser extent. Moreover, this risky behaviour may be incited by uneducated stakeholders, as a consequence of them holding unrealistic goals, maintaining a short-term view and (to put it bluntly) being ignorant with regard to the pitfalls, limitations, disadvantages and risks that are inherent to all test automation projects.

Then take responsibility ... and get some help in doing so

Clearly then, the described risks are not so much inherent to the frameworks or to the approach to test automation, but rather flow from inexperience with these frameworks and, in particular, from inexperience with this approach. That is, to be able to (optimally) benefit from the specific advantages of this approach, applying design patterns is imperative. This is a critical factor for the long-term success of any keyword-driven test automation effort. Without applying patterns to the test code, solutions will not be cost-efficient, maintainable or transferable, amongst other disadvantages. The costs will simply outweigh the benefits in the long run. What's more, essentially the whole purpose and added value of using keyword-driven frameworks are lost, since these frameworks were devised precisely to this end: to counter the severe maintainability/reusability problems of the earlier generation of frameworks. Therefore, of all the approaches to test automation, the keyword-driven approach depends to the greatest extent on the disciplined and rigorous application of standard software development practices, such as modularization, abstraction and genericity of code.

This might seem a truism. However, since typically the traditional testers (and thus novice coders) are nowadays directed by their management towards using keyword-driven frameworks for automating their functional, black-box tests (at the service/API- or GUI-level), automation anti-patterns appear and thus the described risks emerge. To make matters worse, developers remain mostly uninvolved, since a lot of these testers are still working within siloed/compartmented organizational structures.

In our experience, a combination of a comprehensive set of explicit best practices, training and on-the-job coaching, and a disciplined review and testing regime (applied to the test code) is an effective way of mitigating these risks. Additionally, silos need to be broken down, so as to foster collaboration (and create synergy) on all testing efforts, as well as to be able to coordinate and orchestrate all of these testing efforts through a single, central, comprehensive and shared overall testing strategy.

Of course, the framework selected to implement a keyword-driven test automation solution is an important enabler as well. As will become apparent from this series of blog posts, the Robot Framework is the platform par excellence to facilitate, support and even stimulate these counter-measures and, consequently, to very swiftly enable and empower seasoned coders and beginning coders alike to contribute code that is efficient, robust, stable, reusable, generic, maintainable as well as readable and transferable. That is not to say that it is the platform to use in any given situation, just that it has been designed with the intent of implementing the keyword-driven approach to its fullest extent. As mentioned in a previous post, the RF can be considered the epitome of the keyword-driven approach, bringing that approach to its logical conclusion. As such it optimally facilitates all of the mentioned preconditions for long-term success. Put differently, using the RF, it will be hard to fall into the pitfalls inherent to keyword-driven test automation.

Some examples of such enabling features (that we will also encounter in later posts):

  • A straightforward, fully keyword-oriented scripting syntax, that is both very powerful and yet very simple, to create low- and/or mid-level test functions.
  • The availability of dozens of keyword libraries out of the box, holding both convenience functions (for instance to manipulate and perform assertions on XML) and specialized keywords for directly driving various interface types. Interfaces such as REST, SOAP or JDBC can thus be interacted with without having to write a single line of integration code.
  • Very easy, almost intuitive means to apply a broad range of design patterns, such as creating various types of abstraction layers.
  • And lots and lots of other great and unique features.
Summary

We now have an understanding of the characteristics and purpose of keywords and of the advantages of structuring our test automation solution into (various layers of) keywords. At the same time, we have looked at the primary risk involved in the application of such a keyword-driven approach and at ways to deal with that risk.

Keyword-driven test automation is aimed at solving the problems that were instrumental in the failure of prior automation paradigms. However, for a large part it merely facilitates the involved solutions. That is, to actually reap the benefits that a keyword-driven framework has to offer, we need to use it in an informed, professional and disciplined manner, by actively designing our code for reusability, maintainability and all of the other qualities that make or break long-term success. The specific design as well as the unique richness of powerful features of the Robot Framework will give automators a head start when it comes to creating such code.

Of course, this 'adage' of intelligent and adept usage, is true for any kind of framework that may be used or applied in the course of a software product's life cycle.

Part 3 of this three-part post will go into the specific implementation of the keyword-driven approach by the Robot Framework.

FitNesse in your IDE

Xebia Blog - Wed, 02/03/2016 - 17:10

FitNesse has been around for a while. The tool was created by Uncle Bob back in 2001. It’s centered around the idea of collaboration: collaboration within a (software) engineering team and with your non-programmer stakeholders. FitNesse tries to achieve that by making it easy for non-programmers to participate in the writing of specifications, examples and acceptance criteria. It can be launched as a wiki web server, which makes it accessible to basically everyone with a web browser.

The key feature of FitNesse is that it allows you to verify the specs against the actual application: the System Under Test (SUT). This means that you have to make the documentation executable. FitNesse considers tables to be executable, and when you read ordinary documentation you’ll find that requirements and examples are often outlined in tables, which makes for a natural fit.

There is no such thing as magic, so the link between the documentation and the SUT has to be created. That’s where things become tricky. The documentation lives in our wiki server, but the code (which is what we need to connect documentation and SUT) lives on the file system, in an IDE. What to do? Read a wiki page, remember the class and method names, switch to the IDE, create classes and methods, compile, switch back to the browser, test, and repeat? Well, so much for fast feedback! When you talk to programmers, you’ll find this to be the biggest problem with FitNesse.
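To give a feel for that link between table and code, here is a rough sketch in plain Python (not FitNesse's actual Slim protocol or naming conventions, and FitNesse fixtures are usually Java): each decision-table row sets inputs on a fixture object, then the fixture's computed output is compared against the expected value in the last column:

```python
# Illustrative only: how a wiki row like | 5 | 7 | 12 | could map
# onto a fixture object that calls into the system under test.
class AdditionFixture:
    """Fixture linking the documented examples to the SUT."""
    def set_augend(self, value): self.augend = value
    def set_addend(self, value): self.addend = value
    def sum(self):  # query column: actual result from the SUT
        return self.augend + self.addend

def run_row(fixture, row):
    augend, addend, expected = row
    fixture.set_augend(augend)
    fixture.set_addend(addend)
    return fixture.sum() == expected  # pass/fail for this example

table = [(5, 7, 12), (2, 2, 4), (1, 1, 3)]
results = [run_row(AdditionFixture(), row) for row in table]
print(results)  # → [True, True, False]
```

Writing and navigating that fixture code by hand, away from the wiki page, is exactly the feedback-loop friction the IntelliJ plugin described below removes.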

Imagine, as a programmer, you're about to implement an acceptance test defined in FitNesse. With a single click, a fixture class is created and adding fixture methods is just as easy. You can easily jump back and forth between the FitNesse page and the fixture code. Running the test page is as simple as hitting a key combination (Ctrl-Shift-R comes to mind). You can set breakpoints, step through code with ease. And all of this from within the comfort of your IDE.

Acceptance test and BDD tools, such as Cucumber and Concordion, have IDE plugins to cater for that, but for FitNesse this support was lacking. Was lacking! Such a plugin is finally available for IntelliJ.

Over the last couple of months, a lot of effort has been put in building this plugin. It’s available from the Jetbrains plugin repository, simply named FitNesse. The plugin is tailored for Slim test suites, but also works fine with Fit tables. All table types are supported. References between script, decision tables and scenarios work seamlessly. Running FitNesse test pages is as simple as running a unit test. The plugin automatically finds FitNesseRoot based on the default Run configuration.

The current version (1.4.3) even has (limited) refactoring support: renaming Java fixture classes and methods will automatically update the wiki pages.

Feel free to explore the new IntelliJ plugin for FitNesse and let me know what you think!

(GitHub: https://github.com/gshakhn/idea-fitnesse)

Stories in Presentations: Three Useful Patterns

A puzzle and patterns have a lot in common.

A puzzle and patterns have a lot in common.

Stories are a tool to help structure information so that audiences can easily consume it. They help presenters make sure their message stays front and center so it can be heard. While many presentations and stories in the corporate environment use the metaphor of a journey, some are best represented in other ways. Other patterns are useful either to fit other circumstances or as a tool to inject a bit of variety into presentation-heavy meetings. (Just how many journeys can you take in any one meeting?)

The Redirect or False Start is a pattern in which the presenter goes down a path in a predictable manner, then stops and restarts down a different path. The change in direction catches listeners off guard and causes them to concentrate on the new information being presented. The Sandler Sales System teaches a similar tactic, in which you begin to answer a question and then stop and ask a clarifying question. The 1970s television detective series Columbo represents the perfect embodiment of this style of interaction. Lieutenant Columbo often used false starts, stops and restarts to engage suspects, often to their own detriment.

This presentation style is good for presenting information about projects that have changed direction or run into issues that have forced directional changes. The change in direction is also good for disrupting an audience that might be tuning out because they think they know where the presentation is going or have become complacent. Directional changes (if not overdone) are useful for keeping an audience on their toes.

I once had to present a project status at an all-day portfolio review. The review session included presentations from 30 separate projects that EVERYONE involved had to sit through. The only positive was that the coffee, cookies and sandwiches were excellent. My team’s presentation slot occurred mid-afternoon (approximately project 20). No one in that room had any interest in what was going on after lunch; they were just waiting for the day to end (this is not a best practice). We were looking for more resources from our stakeholders as part of the review and anticipated their boredom. We used the false start approach to shock people back into life, at least for the 15 minutes we were presenting, and got the resources on the spot.

Convergence or Converging Lines is a pattern that is useful in scenarios that begin without a consensus approach or common theme. I typically use this pattern in situations where there are several competing approaches that either need to be synthesized or where a final decision needs to be made to choose an approach. For example, I saw a team use the convergence pattern to portray how their working group explored several competing ideas for addressing product owner engagement before finally settling on a consensus approach.

The power of this type of presentation is in showing that alternate voices were heard and incorporated into the final result. This is a very common pattern in consensus-driven organizations.

The Onion or Nested Loops is a useful pattern to walk or draw an audience to a final conclusion incrementally. Each layer of the presentation can be considered a separate narrative that brings the audience closer to the core message. In many cases, each loop of the presentation uses a separate metaphor, each getting more personal as the onion is peeled. This helps to generate a connection in a less threatening manner than immediately jumping to the core without the context that the outer layers convey. I watched an analyst use the onion pattern to describe how an earthquake in Japan, and its impact on a series of auto-part suppliers, ended up impacting a start-up’s ability to go live. As the onion was peeled, it became apparent why the firm could not execute the same-day delivery service at the heart of its online strategy: without parts, it had to wait for its delivery vans to be assembled. As the presenter peeled back each layer of the supply chain, with a story for each layer, the core impact to the startup became clearer and more ominous. This presentation pattern is also useful for educating the audience on the impact of events that are outside their field of vision.

Presentation and story patterns are useful tools to help frame a story or a message. Patterns are often used to help teams or presenters build a presentation that doesn’t jump all over the place. I use patterns to keep myself from wandering; the pattern acts almost as a template storyboard into which I place the plot elements. Patterns can also be useful for exposing goals and requirements by providing the outline for a structured discussion. In the end, patterns are only tools; without an actual story they have little value.

Next: Non-presentation uses of presentation patterns


Categories: Process Management

Nine Product Management lessons from the Dojo

Xebia Blog - Tue, 02/02/2016 - 23:00
Are you kidding? A chance to add the Matrix to a blog post?

As I am gearing up for the belt exams next Saturday, I couldn’t help noticing the similarities between what we learn in the dojo (the place where martial arts are taught) and how we should behave as Product Managers. Here are 9 lessons, straight from the dojo, ready for your day job:

1.) Some things are worth fighting for

In Judo we practice randori, which here means ground wrestling. You will find that some grips are worth fighting for, but others you should let go of in search of a better path to victory.

In Product Management, we are the heat shield of the product, constantly caught between engineering striving for perfection, sales wanting something else, marketing pushing the launch date and management hammering on the P&L.

You need to pick your battles: some you deflect, some you disarm, and some you accept, because you are maneuvering yourself into position to make the move that counts.

Good product managers are not those who win the most battles, but those who know which ones to win.

2.) Preserve your partners

It’s fun to send people flying through the air, but the best way to improve yourself is to improve your partner. You are on this journey together, just as in Product Management. Ask yourself the following question today: “whom do I need to train as my successor?” and start doing so.

"I was delayed getting to the airport because of the taxi strike, but saved by the strike of the air traffic controllers"

3.) There is no such thing as fair

It’s a natural reaction when someone changes the rules of the game: we protest, we go on strike, we say it’s not fair. But in a market-driven environment, what is fair? Disruption, changing the rules of the game, has become the standard (24% of companies are already experiencing it, 58% expect it, and 42% are still in denial). We can go on strike, or we can adapt.

The difference between kata and free sparring is that your opponents will not follow a prescribed path. Get over it.

4.) Behavior leads to outcome

I’m heavily debating the semantics with my colleague from South Africa (you know who you are), so it’s probably a matter of wording, but the gist of it is: if you want more of something, you should start doing it. Positive brand experiences will drive people to your products; conversely, one bad product affects all the other products of your brand.

It’s not easy to change your behavior, whether it is in sport, health, customer interaction or product philosophy, but a different outcome starts with different behavior.

Where did my product go?

5.) If it’s not working try something different

Part of Saturday’s exams will be what in Jujitsu is called “indirect combinations”. This means you will be judged on your ability to move from one technique to another when the first one fails. Brute force is also an option, but not one that is likely to succeed, even if you are very strong.

Remember Microsoft pouring over a billion marketing dollars into Windows Phone? Brute-forcing its position by buying Nokia? BlackBerry doing something similar with QNX and only now switching to Android? Indirect combinations are not a lack of perseverance but adaptability: achieving results without brute force and with a higher chance of success.

This is where you tap out

6.) Failure is always an option

Tap out! Half of the stuff in Jujitsu was originally designed to break your bones, so tap out if your opponent has got a solid grip. It’s not the end; it’s the beginning. Nobody gets better without failing.

Two-thirds of all product innovations fail, and the remaining third takes about five iterations to get it right. Test your idea thoroughly, but don’t be afraid to try something else too.

7.) Ask for help

There is no way you know it all. Trust your team, peers and colleagues to help you out. Everyone has something to offer; they may not always have the solution for you, but in explaining your problem you will often find it yourself.

8.) The only way to get better is to show up

I’m a thinker. I like to get the big picture before I act. This means that I can also overthink something that you just need to do. Though it is okay to study and listen, don’t forget to go out there and start doing it. Short feedback loops are key to building the right product, even if the product is not built right. So talk to customers and show them what you are working on, even at an early stage. You will not get better at martial arts or product management if you wait too long to show up.

9.) Be in the moment

Don’t worry about what just happened, or what might happen. Worry about what is right in front of you. The technique you are forcing is probably not the one you want.

 

This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.

The Product Manager's guide to Continuous Innovation

Successful Software Development Project Failures

From the Editor of Methods & Tools - Mon, 02/01/2016 - 15:10
Software development projects have always been challenging. The most referenced source for metrics about project success is the Standish Group’s yearly CHAOS report. According to this source, around 30% of software development projects were considered a “failure” in recent years. The failure rate of larger projects is a little bit higher than the […]

SPaMCAST 379 - Done and Value, Test Data, Budgets Are Harmful

Software Process and Measurement Cast - Sun, 01/31/2016 - 23:00

Software Process and Measurement Cast 379 features our short essay on the relationship between done and value. The essay is in response to a question from Anteneh Berhane.  Anteneh called me to ask one of the hardest questions I had ever been asked: Why doesn’t the definition of done include value?

We will also have an entry of Jeremy Berriault’s QA Corner.  Jeremy and I discussed test data, and why having a suite of test data that many projects can use is important for efficiency.  One question is who should bite the bullet and build the first iteration of any test data library?

Steve Tendon completes this cast with a discussion of the next chapter in his book, Tame The Flow: Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban. Chapter 7 is titled “Budgeting is Harmful.” Steve hits classic budgeting head-on, and provides options that improve flexibility and innovation.

Remember to help grow the podcast by reviewing the SPaMCAST on iTunes, Stitcher or your favorite podcatcher/player. Then share the review! Help your friends find the Software Process and Measurement Cast. After all, friends help friends find great podcasts!

Re-Read Saturday News

We continue the re-read of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition by Douglas W. Hubbard on the Software Process and Measurement Blog. In Chapter Seven, we discuss the concept of the economic value of information.

Upcoming Events

I am facilitating the CMMI Capability Challenge. This new competition showcases thought leaders who are building organizational capability and improving performance. Listeners will be asked to vote on the winning idea which will be presented at the CMMI Institute’s Capability Counts 2016 conference.  The next CMMI Capability Challenge session will be held on February 17 at 11 AM EST.

http://cmmiinstitute.com/conferences#thecapabilitychallenge

Next SPaMCAST

The next Software Process and Measurement Cast features our interview with Kim Robertson.  Kim and I talked about the big picture of configuration management.  Kim suggests that the basic need and process for configuration management has not changed since ancient China.  Complexity and speed of change, however, have forced changes to the tools and to who needs to be involved in the big picture of configuration management.

Shameless Ad for my book!

Mastering Software Project Management: Best Practices, Tools and Techniques, co-authored by Murali Chematuri and myself and published by J. Ross Publishing. We have received unsolicited reviews like the following: “This book will prove that software projects should not be a tedious process, neither for you nor your team.” Support SPaMCAST by buying the book here. Available in English and Chinese.

Categories: Process Management


How To Measure Anything, Chapter 7: Quantifying The Value of Information

How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

Chapter 7 of How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition is titled: Quantifying the Value of Information. Chapter 7 continues to build on the concepts of quantification and estimation presented in previous chapters. The idea that we can quantify the value of information is the centerpiece of Hubbard’s premise that we measure because measurement has value. Chapter 7 defines how to quantify that value.

Hubbard’s approach to quantifying the value of information begins with the concept of expected opportunity loss (EOL). Opportunity loss is the cost incurred if the wrong decision is made. The expected opportunity loss is calculated as the difference between the best decision and the average of the possible losses across the decisions, weighted by the uncertainty.  The calculation uses a large sample of possible losses, typically generated using a table or, in complex situations, a Monte Carlo analysis.

Here is a simple EOL example:

Scenario:  A neighbor is considering investing $50,000 in home improvements so they can list their house for $200,000 more than it can currently be sold for.  Realtors have indicated there is a well-calibrated 60% chance the investment will pay off.

 

Variable                          House Sells at     House Sells at
                                  Higher Price       Lower Price
Chance of Outcome                 60%                40%
Impact if improvements are made   +$200,000          -$50,000
Impact if improvements not made   $0                 $0

Expected Opportunity Loss
  If improvements are not made:   $120,000
  If improvements are made:       $20,000

In the example, if the improvements are not made and the house could have been sold at the higher price, the expected opportunity loss is $120,000 (60% × $200,000). Alternately, if the investment is made and the house sells at the lower price, the EOL is $20,000 (40% × $50,000). If we could improve our knowledge of the possible outcomes, that information would have value because we would make a better decision.  To see this, suppose we could learn that the real chance of selling the house at the higher price, if the improvements were made, was 90%. The EOL of making the improvement and not selling at the higher amount is now 10% × $50,000, or $5,000. Measurements that reduce the uncertainty in the outcome of a decision with economic consequences have value: a reduction in uncertainty reduces the EOL by increasing the chance of a better decision. In this simple example, the reduction in EOL is called the expected value of information (EVI), and is calculated by subtracting the EOL after measurement from the EOL before it (when the measurement removes all uncertainty, this is also known as the expected value of perfect information, or EVPI).  In the example, the EVI is $15,000.
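The arithmetic in the house example is easy to check in a few lines of code. This is a minimal sketch of the calculation described above; the function and variable names are mine, not Hubbard’s:

```python
# Expected opportunity loss (EOL) for the house-improvement example.

def expected_opportunity_loss(p_wrong, cost_of_wrong):
    """EOL = chance the decision turns out wrong * cost if it does."""
    return p_wrong * cost_of_wrong

p_sell_high = 0.60   # calibrated chance the improvements pay off

# Skip the improvements, but the house would have sold $200,000 higher:
eol_skip = expected_opportunity_loss(p_sell_high, 200_000)

# Make the improvements, but the house sells at the lower price (lose $50,000):
eol_improve = expected_opportunity_loss(1 - p_sell_high, 50_000)

# Better information moves the calibrated chance to 90%; the EOL of improving
# drops, and the reduction is the expected value of that (perfect) information:
eol_improve_informed = expected_opportunity_loss(1 - 0.90, 50_000)
evi = eol_improve - eol_improve_informed

print(eol_skip, eol_improve, evi)  # roughly 120000, 20000 and 15000
```

The same pattern scales to any two-outcome decision: plug in the calibrated probability and the cost of each wrong call.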

Unfortunately, most decisions aren’t this simple.  Instead of precise levels of uncertainty, we are often confronted by ranges of possible outcomes.  In these circumstances, we need to establish a probability distribution of outcomes (for example, via Monte Carlo simulation) from which we can calculate the EOL for each point (or slice) of the distribution.  The EVPI is then calculated by adding up all of the EOLs.  Conceptually this is fairly straightforward; however, the statistics can be quite daunting, so use a spreadsheet with formulas, or a tool, to do the calculations.
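The Monte Carlo version of the idea can be sketched in a few lines of Python. This is a toy model with a made-up distribution (not one of Hubbard’s actual examples): draw the uncertain outcome many times, compute the opportunity loss on each draw, and average.

```python
import random

def monte_carlo_eol(n_trials=100_000, seed=42):
    """Estimate the EOL of making a $50,000 improvement when the sale
    premium is uncertain: premium ~ Normal(mean=$150k, sd=$120k).
    A loss occurs on each trial where the premium fails to cover the cost."""
    rng = random.Random(seed)
    cost = 50_000
    total_loss = 0.0
    for _ in range(n_trials):
        premium = rng.gauss(150_000, 120_000)  # uncertain upside of improving
        total_loss += max(cost - premium, 0)   # opportunity loss on this draw
    return total_loss / n_trials

print(round(monte_carlo_eol()))  # average loss per decision, in dollars
```

Slicing the distribution finer, or replacing the loop with a spreadsheet column, is the same calculation the chapter describes.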

There are numerous other scenarios that make finding EOLs and EVPIs more complicated. However, most if not all of them leverage the same fundamentals, even if they require more computing power than a ten-key calculator.  Tools and spreadsheets remove much of the mathematical burden of generating EOLs and EVPIs.

In most situations, decisions aren’t made with perfect information, so determining the value of a partial reduction in uncertainty is important.  The EVPI is still useful in its own right because it sets a maximum we should never exceed when paying for a measurement. Instead of the expected value of perfect information, though, we usually need the expected value of information (EVI), which defines how much we should expect to pay for additional information. The value of information tends to rise quickly with small reductions of uncertainty but levels off as we approach perfect certainty: if we graphed it, the curve would climb rapidly and then flatten out. The slope of the curve depends on many things, including how much uncertainty we have and the details of the loss function. Understanding this curve means that the first few observations have a much higher relative value; stated another way, continuing to reduce uncertainty takes more and more effort. I have seen this phenomenon in many measurement programs: the first few measurement reports generate significant observations and ideas for process changes, but as time goes by those large jumps in knowledge come at a slower pace.

The cost curve is nearly the opposite of the EVI curve: costs rise slowly when uncertainty is highest and then rise sharply as we get closer to certainty. Based on the EVI and cost curves, it is immediately apparent that the assumption that a lot of uncertainty requires a lot of information (and, therefore, cost) to reduce it is WRONG. One of the best lines in this chapter is on page 162: “If you know almost nothing, almost anything will tell you something.”

Another interesting topic tackled in the chapter is the concept of perishable information values.  In its most basic form, consider the value of solving the right problem too late.  For example, if you were buying a house in a tight market, you could either spend money now to reduce uncertainty about market prices, or wait and watch prices evolve (much closer to perfect information); EVI is the tool for deciding whether to wait or to buy more information. In a tight market, waiting tends to have far less utility and value than getting information quickly.

As the complexity of decisions increases, so does the number of variables needed to determine the value of information. Hubbard observes that the vast majority of variables in most models have an information value of zero: people measure what they think is important, whether it is or not. When the EVI was determined for the variables in a model, the known level of uncertainty for most variables turned out to be acceptable, justifying no further measurement. He further found that the variables with the highest information values were routinely those that the client never measured. He named this the measurement inversion.  Measurement inversion typically occurs because people measure what they know how to measure, measure things that will deliver good news, or never establish the business value. Chapter 7 puts us on firm footing to quantify the value of information, and the value of gathering more data rather than guessing or simply measuring everything.

Previous Installments in Re-read Saturday, How to Measure Anything, Finding the Value of “Intangibles in Business” Third Edition

How To Measure Anything, Third Edition, Introduction

Chapter 1: The Challenge of Intangibles

Chapter 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily

Chapter 3: The Illusions of Intangibles: Why Immeasurables Aren’t

Chapter 4: Clarifying the Measurement Problem

Chapter 5: Calibrated Estimates: How Much Do You Know Now?

Chapter 6: Quantifying Risk Through Modeling


Categories: Process Management

Which Agile Organizational Model or Framework to use? Use them all!

Xebia Blog - Sat, 01/30/2016 - 22:11

Many organizations are reinventing themselves as we speak.  One of the most difficult questions to answer is: which agile organizational model or framework should we use? SAFe? Holacracy? LeSS? Spotify?

Based on my experience with all these models, my answer is: just use as many agile models and frameworks as you can get your hands on.  Not by choosing one of them specifically, but by experimenting with elements of all these models the agile way: inspect, learn and adapt continuously.

For example, you could use Spotify’s tribe structure, Holacracy’s consent and role principles, and SAFe’s Release Trains in your new agile organization. But most important: experiment towards your own “custom-made” agile organizational building blocks.  And remember: adopting the Agile Mindset is 80% of the job; only 20% is implementing this agile “organization”.

Probably the worst thing you can do is simply copy-paste an existing model.  You will inherit the same rigid situation you wanted to prevent by implementing a scaled, agile organizational model in the first place.

Finally, the main ingredient of this agile recipe is trust.  You have to trust your colleagues and this newborn agile organization to be anti-fragile and self-correcting right from the start.  These are the same principles that the successful agile organizations you probably admire depend on.

How to Deal with Start-ups

Xebia Blog - Fri, 01/29/2016 - 16:15

This is a question that regularly crosses my mind. In any case, we have to stop continuously romanticizing these initiatives, and corporate Netherlands has to finally start playing the game for real.

But how?

Roughly speaking, a corporate has two strategies: buy them up, or do it better yourself! That sounds simple, but it is actually quite complex. The best strategy is probably to choose a mix of both, in which you make maximum use of your own corporate strengths (yes, you have them) while fully harnessing the power of start-up innovation.

This post explores the possibilities, and you should definitely read on if you, too, want to know how to survive the digitalization of the 21st century.

Why should I care?

Actually, I shouldn’t need to write this paragraph anymore, right? The average age of companies is declining.

[Chart: average age of Fortune 500 companies]

This is partly because the digitalization of products and services keeps lowering the barriers to entry in many markets. There is more competition, so you have to try harder to stay relevant.
Second, start-ups are hot! Everyone wants to work for a start-up, so that is where the talent goes; talent from an already (too) tight pool. You therefore have to innovate more than before, or you will lose the “war on talent”.
Finally, there is of course a lot to be gained from digital innovation. The speed at which companies can now become profitable through digital products and services is incredible, so if you do it well, you are in the game.

What are my options?

There are really only two ways to deal with start-ups. The first is simply to take a stake in a start-up, or to acquire a promising one. The other is to start innovating yourself, building on your organization’s own strengths.

The advantage of stakes and acquisitions is of course the quick win. Start-ups rarely sell themselves cheap, but then you get something for your money. It is especially attractive when the start-up is active in a segment or market that your own brand cannot or does not want to enter (incumbent inertia). The new acquisition is then complementary to the existing business; for example, a large bank acquiring a start-up focused on selling short-term credit to small and medium-sized businesses.

The downside, of course, is that it is hard to transfer existing assets. Moreover, the new venture almost never truly becomes part of the standing organization, and perhaps you do not even want it to. There is a real chance that the acquired start-up will be influenced too much by the parent company and dragged down by the gravity of bureaucracy and counterproductive corporate behavior patterns.

On top of that, a successful start-up automatically becomes the target of attacks from yet other start-ups. The “cannibal mindset” has to stay! Facebook has therefore always said: if we don’t disrupt our own model, someone else will. Perhaps Intel CEO Andy Grove was right when he said: “only the paranoid survive”.

Becoming more innovative yourself is of course also an option, but that is fairly complex. Within a corporate, innovation is usually still isolated in a lab setting. Not that this is wrong, but start-ups obviously don’t do anything of the sort; the start-up is the lab!

It is funny that start-ups always seem to be targeting new markets with new products, and that we usually label this as “real” innovation. In a corporate setting, all product-market combinations are placed in a portfolio (for example a BCG matrix), and it is all about the balance between current and new business and the right cash-flow ratios.
The nice thing is that start-ups could not care less about your portfolio and compete in every quadrant, whether it is business that for you is in the mature, growth or lab phase. Start-ups are simply in the majority and operate independently of each other on different fronts. This means that effectively everyone in the corporate setting is under fire from start-ups, and therefore everyone, whatever their role in the portfolio, has to learn to innovate. An example of how valuable it is to adopt this mindset is the story of a young man from 1973 who worked for Kodak.

Changing your entire company is of course an enormous job. As an alternative, you could choose to create the same effect as with an acquisition: deliberately launching your own start-ups, set up sufficiently separate from the parent organization to move fast. These in-house start-ups should become direct competitors of the current business and become so successful that at least part of the existing own and competing business flows towards them. In this way, large corporates can gradually transform themselves into a network of start-up nodes, with the parent company providing support and acquiring strategically complementary nodes where needed. What such a node looks like and how you organize it is material for a next post.

If you can’t wait and want to know sooner, you can of course always call for a chat.