Software Development Blogs: Programming, Software Testing, Agile, Project Management

Xebia Blog

Updated: 1 week 2 days ago

You can learn to experiment

Mon, 04/18/2016 - 17:04

Validated learning: learning by executing an initial idea and then measuring the results. This way of experimenting is the primary philosophy behind Lean Startup and much of the Agile mindset as it is applied today.

In agile organisations you have to experiment in order to keep up with changing market needs. A good experiment can be incredibly valuable, provided it is executed well. And that is exactly where a common problem lies: the experiment is not properly completed. A trial is run, but often there is no sound hypothesis behind it and the lessons are barely, if at all, taken on board. I have noticed that, to raise the learning capability of an organisation, it helps to stick to a fixed structure for experiments.

There are many structures that work well. Personally I am very fond of the Toyota (or Kanban) Kata, but the "ordinary" Plan-Do-Check-Act works very well too. That structure is explained below with a simple example:

  1. Hypothesis

Which problem are you going to solve? And how do you intend to do that?

If the whole team dials in from home for the stand-up, we will be no less effective than when everyone is present, and we will be better able to handle work-from-home days.

  2. Prediction of the outcomes

What do you expect the outcomes to be? What will you see?

No drop in productivity, and a higher score on team happiness because people can work from home.

  3. Experiment

How are you going to test whether you can solve the problem? Is this experiment also safe to fail?

For the next six weeks everyone dials in from home for the stand-up. In the retrospective we score productivity and happiness. After that we do the stand-up together at the office for six weeks.

  4. Observations

Collect as much data as possible during your experiment. What do you see happening?

Setting up the call takes quite long (10-15 minutes). It is hard to give everyone a turn to speak. When dialling in we cannot use our regular board, because nobody can move the post-its.

A well designed experiment is as likely to fail as it is to succeed – freely after Don Reinertsen

This is certainly not the best experiment that could be formulated, but that is not the point. The point is that the learning happens in the differences between the prediction and the observations. It is therefore important to do both of these steps and to consciously reflect on the learning that follows. Based on your observations you can formulate a new experiment for further improvements.

How do you run your experiments? I am curious to hear what works well in your organisation.

 


The Ultimate Tester: Value Creation

Tue, 04/12/2016 - 10:10


Once upon a time, when project managers still thought that building software was a completely manageable and predictable activity, and testers were put in a separate team to ensure independence, the result was shitty software and frustrated people. Even though the rise of the agile way of working has improved some aspects of software development, the journey will never end. We still have a lot of work to do. Creating good software starts with the people making it, the team. As an agile tester, a tester 3.0 if you will, you are a frontrunner of this paradigm change.

No longer do you have to sit in a separate team, crunching requirements to make test scripts that you then manually execute, pretending to be a human computer (how silly is that!). No longer do you have to fake your belief in a Master Test Plan that your test manager urges you to honour. No longer do you have to put your defects in a defect management tool, and then wait a couple of releases for your findings to be solved.

It is time to take matters in your own hands. It is time to start creating value from the start.

Before we discuss what this ‘value’ is, let's make clear what it isn't. Adding value from the start in an agile context means: not writing a detailed test plan, not making a huge risk assessment up front. The beauty of agile is in keeping things small and open to change. The value lies in creating working software and trying to eliminate waste.

Before Coding / Building the Right Thing

Why not start by exploring the requirements you find in User Stories or Use Cases? Try to make them as clear and concise as possible, so you have a shared understanding in your team. If you then write down your requirements as test cases (that you can automate), you have created value in more than one way: you have shortened the feedback loop, created test cases that the developers can work with, and you have hopefully weeded out a couple of unclear requirements in the process. This way of working is not new, but it is rarely executed properly, so it is wise to invest time in learning to get better at this. Want to improve yourself in this area of expertise? Read Dan North's take on BDD and this blog post with an example of how to work with Behaviour Driven Development. Also, the book Specification by Example is a great way to increase your knowledge about this topic.
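Purely as an illustration (this example is not from the original post, and the discount scenario plus class names are made up), a requirement captured as an automatable, BDD-style test could look roughly like this, assuming ScalaTest:

import org.scalatest.{FeatureSpec, GivenWhenThen}

// A requirement written as an executable specification, readable by the whole team.
class DiscountCodeSpec extends FeatureSpec with GivenWhenThen {

  feature("Discount codes") {
    scenario("A valid code reduces the order total") {
      Given("an order with a total of 100 euro")
      When("the customer applies the code SPRING10")
      Then("the order total becomes 90 euro")
      pending // becomes a real check once the steps call the application code
    }
  }
}

Once the Given/When/Then steps call real application code, the same text doubles as living documentation and as a regression test.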


However, having perfect requirements won't get you very far if they aren't helping the product forward. Say you're working on a mobile app and the Product Owner wants a new feature built in a way that does not use native components of the operating system; would you not say anything? Being proactive means speaking up when you think, based on expertise and facts, that another solution would be better. You can do that by asking a very simple question: ‘why?’ Techniques like the 5 Whys or the Socratic method are excellent ways to uncover an underlying problem.

Another common problem with agile teams is that they focus completely on the implementation details and forget what they are doing it for. As a tester, your position in the team is perfect to involve yourself in technical decisions and programming, but also to help the Product Owner with the overarching goal. You are able to put yourself in their shoes and see things from a business perspective. When you increase your knowledge of planning techniques like Impact Mapping, you can really help the business create an awesome product. You can do all this before any coding happens, with the overarching goal to ‘build the right thing’ and to achieve the business goal.


Example of an Impact Map

After Coding / Measuring

Of course, achieving the business goal is not completely possible up front. In the end, you need to build, test and measure the success of your software. So don't forget the value of metrics! In a lot of cases, measuring how many users you have, how they are using your product, where they come from and how much money they spend are very helpful metrics to determine how successful the product is. From a test perspective, it is also wise to measure whether the software crashes, or whether the user sees errors. Analytics are very important these days, and you can use many of them to improve your understanding of the user (helpful for testing with personas) and to improve the way you test (A/B testing, for example). If you want to learn more about metrics, the book The Lean Startup focuses on the ‘why’. The ‘how’ is beyond the scope of this blog and is very contextual. My advice is: be very careful with metrics, because you probably know the saying "There are lies, damned lies and statistics!".

To conclude, testers no longer need to wait until coding is complete before they can add value. The role of the tester has changed drastically in the last few years and it can feel a bit daunting; so many things to learn! I hope this blog post has given you a few ideas on where you can improve your skills in this area. Please share your ideas about how testers create value.

In the next blog post we will discuss the unique curiosity that testers possess and how we use that for critical thinking about the product and exploratory testing. Hope to see you back here next time!

Five leadership lessons from the Samurai for Product Managers

Sat, 04/02/2016 - 11:54

We have covered several topics in the Product Samurai series that should make you a better product manager. But what if you are leading product management or run innovation within your enterprise? Here are five leadership lessons that make your team better.

“New eras don't come about because of swords, they're created by the people who wield them. ” ― Nobuhiro Watsuki

1. Focus
One of the more interesting traits you develop in martial arts is the ability to see roughly 180 degrees without losing sight of your direct opponent. The rationale is that on a battlefield it is important to see not only what is directly in front of you, but also what is around you. As a father of four, keeping an eye on my offspring while focusing on something else is a godsend.

When I ran product management teams, one of my main concerns was to create a current-quarter priority list. This should be obvious but is surprisingly difficult: "What are our top three priorities this quarter?" If you do not have such a list, your team is likely to deviate in their roadmaps, releases and communications. It is vital that you set a clear (and limited) set of goals for your team before they start planning. The biggest benefit is that you can use this list to defend against opportunistic sales requests that would throw off your strategy. Have this list on your desk, write it on the wall, put it in your internal email signature (okay, maybe not that); make sure everyone knows what you are doing.

2. Armour
Samurai armour is pretty strong, but above all it is designed to give the wearer mobility while still being able to absorb blows. So how good is your armour?

The best use of your status is to act as armour for your team. Product management teams in particular receive a lot of pressure from sales, support, customers, prospects, operations, etc. Typically this happens multiple times per day, and every one of these requests makes sense in its specific context.

As a result your team ends up spending most of their time defending the current strategy against a wave of seemingly good suggestions. Before you know it, they spend all their time in meetings rather than being out there talking to customers.

Your priority list is the first line of defence. You can now reply with: “That’s a great idea, tell me more about the customer’s root issue... but we’ll need to put that into the backlog since it’s not one of our top two things for this quarter.”

The priority list is an exclusive-or universe: if we add something, something else goes away. A Samurai can wield two swords (well, according to Miyamoto Musashi; I find more than one troublesome), not three.

3. The budo belt system
When you are training in martial arts, the belt system is a great way to know what is expected of you. Every level covers a limited set of techniques such as throws, grappling, chokes, weapons, etc., which means it is clear to both the student and their environment what can be expected. We have set boundaries.

In Product Management we often fail to do so. Steve Johnson (http://under10consulting.com/) calls this the “product janitor” syndrome where everyone gets to decide what product managers are supposed to do, and the combined list is impossible.

The scrum teams will ask the product manager to be available 24/7 for clarifications, the daily standup and continuous refinement, and of course we need at least two sprints of fully refined stories in our backlog. Sales knows that product managers make the best sales support, so they want the product managers on every new customer call and in every technical partnering meeting. By the way: everyone at your company works in Sales. Research and Innovation knows that the best way to get their ideas turned into products requires the full commitment of one of your team members. Marketing will hunt them down for webinars, podcasts, blogs, industry events and the creation of pitches for specific market segments. Finally, customer support wants triage on all incoming bugs.

Individual product managers don't have enough organizational leverage to fight off this job-scope creep, so you need to set the boundaries for your team. Just as with the belt system, you will need to explain how much time you expect them to spend on their respective duties. For example: "Spend ~50% with Development; ~30% with customers, prospects and market discovery; and 20% on organizational communications/planning. No unqualified sales calls: only train-the-trainer." Don't forget to take the heat when things get nasty, and make sure everyone has a balanced load.

4. Inspire
We still see too many under-engaged development teams, complaining about wasted work and constantly changing priorities. Daniel Pink explained how people are motivated: it's not possible to force developers to work harder, but we can motivate them. We can get them excited about problems and emotionally connected to users.

We can inspire teams by showing them that they are working on what really matters to our customers. Do you think soldiers go into battle because it's their job? No! They are inspired. Share your customers' problems with the team, let them feel the pain, and before you know it engineers start working late because they love solving meaningful problems.

5. Continuous learning
You don't study martial arts, reach a certain level and stay there. It is a continuous process just to maintain what you know; continuous growth comes only with investment, typically an investment of a lot of time.

Make sure your product managers spend enough time in the field talking to real customers, but don't stop there. Create a cross-functional discovery team (perhaps part-time, to avoid ivory-tower syndrome) with product management, a UX specialist, a developer with rapid prototyping skills and perhaps a data scientist. Validate new product concepts with real customers, or push for A/B testing of new features. C-level support for these teams is crucial, since their peers tend to value current-quarter deliverables above all else and keep postponing real market learning for just another quarter or two.

Sounds like a lot? Leading in product management is about creating the right conditions for your team to be successful. I hope these five leadership lessons from the Samurai inspire you to be a better leader.

This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.

The Product Manager's guide to Continuous Innovation

The Ultimate Tester

Tue, 03/29/2016 - 16:14


 

 

Compile-Time Evaluation in Scala with macros

Sun, 03/27/2016 - 15:20

Many 'compiled' languages used to have a strict separation between what happens at 'compile time' and what happens at 'run time'. This distinction is starting to fade: JIT compilation moves more of the compile phase to run time, while conversely various kinds of optimizations do 'run-time' work at compile time. Powerful type systems allow the expression of things previously only checked at run time, especially with the recent renaissance of dependent types popularized by Idris.

This post will show a very simple example of compile-time evaluation in Scala: we'll write a regular 'factorial' function, and use macros to apply it (to constants) at compile time.

Our factorial

Let's start with our factorial function:

def normalFactorial(n: Int): Int =
  if (n == 0) 1
  else n * normalFactorial(n - 1)

This is fairly uneventful, but especially notice that this is a plain old Scala function, nothing fancy.

Defining a macro function

When defining a macro function, we give a signature in regular Scala syntax. The implementation consists of the special 'macro' keyword and a reference to the macro implementation.

import scala.language.experimental.macros
def factorial(n: Int): Int = macro factorial_impl

import scala.reflect.macros.blackbox.Context
def factorial_impl(c: Context)(n: c.Expr[Int]): c.Expr[Int] = ???

The macro implementation, then, is also a regular Scala function. The difference is that instead of 'normal' values, it receives and produces ASTs.

Implementing the macro function

To implement our compile-time factorial, we must unwrap the AST, apply our function, and produce a new AST that will be placed at the call site. Because the AST is expressed in regular Scala code, we can pattern-match on it, for example recognizing a literal Int constant:

import scala.reflect.macros.blackbox.Context
def factorial_impl(c: Context)(n: c.Expr[Int]): c.Expr[Int] = {
  import c.universe._
    
  n match {
    case Expr(Literal(Constant(nValue: Int))) =>
      val result = normalFactorial(nValue)
      c.Expr(Literal(Constant(result)))
    case _ => 
      println("Yow!")
      ???
  } 
}

Keep in mind this code is executed at compile time: even our cheeky little message will be printed when the compiler encounters a non-matching AST, and the compilation will fail due to the NotImplementedError.

The final result

Combining all the previous steps, our macro-based compile-time factorial implementation now looks like this:

object CompileTimeFactorial {
  import scala.language.experimental.macros
  
  // This function exposed to consumers has a normal Scala type:
  def factorial(n: Int): Int =
    // but it is implemented as a macro:
    macro CompileTimeFactorial.factorial_impl

  import scala.reflect.macros.blackbox.Context 

  // The macro implementation will receive a 'Context' and
  // the AST's of the parameters passed to it:
  def factorial_impl(c: Context)(n: c.Expr[Int]): c.Expr[Int] = {
    import c.universe._
    
    // We can pattern-match on the AST:
    n match {
      case Expr(Literal(Constant(nValue: Int))) =>
        // We perform the calculation:
        val result = normalFactorial(nValue)
        // And produce an AST for the result of the computation:
        c.Expr(Literal(Constant(result)))
      case other => 
        // Yes, this will be printed at compile time:
        println("Yow!")
        ??? 
    }
  }
  
  // The actual implementation is regular old-fashioned scala code:    
  private def normalFactorial(n: Int): Int =
    if (n == 0) 1
    else n * normalFactorial(n - 1)
}

For those of you following along at home, save this implementation in CompileTimeFactorial.scala.

Using the macro

Using the macro is as simple as calling a regular Scala function:

import CompileTimeFactorial._

object Test extends App {
  println(factorial(10))

  // When uncommented, this will produce an error at compile-time, as we
  // only implemented a case for an Int literal, not a variable:
  // val n = 10
  // println(factorial(n))
}

There are a couple of caveats: to be able to evaluate the macro when compiling Test.scala above, CompileTimeFactorial.scala must have been previously compiled. Compiling both Test.scala and CompileTimeFactorial.scala in the same scalac invocation will not work reliably.
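The post drives scalac by hand; purely as a hedged aside (not from the original text), one common way to get the same guarantee in an sbt build is to keep the macro in its own sub-project, so it is always compiled before the code that expands it. The project names and layout below are illustrative:

// build.sbt — a minimal sketch, assuming sbt; names are illustrative.
lazy val macros = (project in file("macros"))
  .settings(
    // scala-reflect is needed to write the macro implementation
    libraryDependencies += "org.scala-lang" % "scala-reflect" % scalaVersion.value
  )

// Code that calls factorial(...) lives in a separate sub-project, so the
// macro classes are compiled before they are expanded here.
lazy val app = (project in file("app"))
  .dependsOn(macros)

With this layout the expansion order is handled by the build, and the problem described above does not occur.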

Also, this example illustrates that while factorial has a regular Scala type signature, the fact that macros perform AST manipulations does leak through. The bottom invocation, while semantically equivalent to the upper one, will not compile: we only implemented our macro for Int literals, not for variables.

Did it work?

Indeed our program produces the correct output:

$ scalac CompileTimeFactorial.scala
$ scalac Test.scala 
$ scala Test
3628800
$

But can we prove this value was indeed calculated at compile time? It turns out we can, using the javap disassembler that comes with the JDK.

I'll spare you the complete output, but the relevant bit is:

$ javap -c Test$
....
    Code:
       0: getstatic     #61                 // Field scala/Predef$.MODULE$:Lscala/Predef$;
       3: ldc           #62                 // int 3628800
       5: invokestatic  #68                 // Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer;
       8: invokevirtual #72                 // Method scala/Predef$.println:(Ljava/lang/Object;)V
      11: return

Showing the constant integer 3628800 is loaded, boxed and printed.

Conclusions

Of course this is a toy example, meant to illustrate the concept of Scala macros. When doing more serious AST matching and manipulation you'll certainly want to look at quasiquotes, and the extra power of macro bundles.
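As an aside that is not part of the original post: with quasiquotes, the same match could be sketched roughly as follows, assuming the standard Liftable/Unliftable instances for Int that ship with scala-reflect and a normalFactorial function in scope:

import scala.reflect.macros.blackbox.Context

def factorial_impl(c: Context)(n: c.Expr[Int]): c.Expr[Int] = {
  import c.universe._

  n.tree match {
    // The quasiquote pattern unlifts a literal Int from the AST...
    case q"${nValue: Int}" =>
      // ...and lifting the result back produces the replacement AST:
      c.Expr[Int](q"${normalFactorial(nValue)}")
    case _ =>
      // Report a proper compile error instead of throwing NotImplementedError:
      c.abort(c.enclosingPosition, "factorial can only be applied to Int literals")
  }
}

Using c.abort also turns the unsupported case into a clear compile-time error message rather than a failed macro expansion.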

This kind of metaprogramming has turned out to be hard to support across versions of the compiler. This led to the scala.meta initiative to decouple metaprogramming from the compiler, though it will probably be a while before they get to macros. In the meantime, macro-compat might make it easier to write macros that support multiple compiler versions.

Macros have many practical applications beyond this toy example.

It seems to be early days for metaprogramming in Scala still, but the potential is amazing.

Common Sense Agile Scaling (Part 1: Intro)

Sat, 03/26/2016 - 10:05

Agile is all about running experiments and seeing what works for you. Inspect and adapt. Grow. This also applies to scaling your agile organization. There is no out-of-the-box scaling framework that will fit your organization perfectly and instantly. Experiment, and combine what you need from as many agile models and frameworks as you can to do "Common Sense Scaling".

You might want to compare scaling frameworks like SAFe, LeSS, Nexus and the "Spotify Model" and choose one for your scaled agile implementation. This will yield suboptimal results, because you are focusing on the solution rather than on your business goals and the business results you want to achieve. Those should be the drivers for going agile and scaling up.

Your company culture highly influences your scaling effort, since it defines, for example, whether you introduce structure to cling to or a vision to self-organize towards. Michael Sahota wrote an excellent minibook on this.

In his recent blog post Paul Takken wrote to "just use as many agile models and frameworks you can get your hands on". This is exactly what I have been doing at a medium-sized financial enterprise. Inspired by Serge Beaumont's saying "Scrum is Recursive", I used what I call "Common Sense Scaling" to do a scaled scrum implementation at a large enterprise I have been consulting for. Both scrum (minimal process, structure) and the agile manifesto with its underlying principles (culture, mindset) provide a solid base and are powerful enough to set the stage for any agile scaling effort at any organization. Frameworks like those mentioned above will help, since they allow you to combine framework elements that fit your need.

In a series of three blog posts I will share my experiences in transforming the enterprise into an organization that works with more focus and works more together. As a result, time to market has decreased and productivity has increased (these being the company's most important business drivers at the moment).

This transformation has been about optimizing a Waterscrum release process using scaling principles: from 26-week overlapping release periods, in which the system development department tried to scrum away the pile of backlog items generated by the business consultants, followed by weeks of testing by the testing department (fig. A), to a network of collaborating teams and individuals that refine and deliver work together in 8-week release periods (fig. B).


Figure A: Waterscrum Releases


Figure B: Scaled scrum Releases

I will address the following scaling moves:

Before Scaling                        | After Scaling
Component Team                        | System of Component Teams
Sprint                                | Release Timebox (sequence of sprints)
Sprint Planning                       | Release Planning
Sprint Review                         | Release Review
Sprint Retrospective                  | Release Retrospective
Daily Standup                         | Scrum of Scrums
Focus on team output and utilization  | Focus on cocreating valuable outcome
Functional Silos                      | Integrated disciplines
Long analysis phase                   | Release Refinement
Long testing phase                    | Integrated testing

My next post will be about scaling refinement: making work ready just in time. In the meantime I would love to hear about your common sense scaling experiences!

Refactoring to Microservices - Introducing a Process Manager

Fri, 03/25/2016 - 14:15

A while ago I described the first part of our journey to refactor a monolith to microservices (see here). While this was a useful first step, a lot can be improved. I was inspired by Greg Young's course at Skills Matter, see CQRS/DDD course. Because I think it’s useful to reflect on the steps you take when changing software architecture, I’ve set a couple of milestones and will report on each when I get there. The first goal is to introduce process in our domain and see what happens.

If you’re interested in details, you can find the code I’m referring to here:
code on Github.
Clone the project and then check out the shopManager tag

git clone git@github.com:xebia/microservices-breaking-up-a-monolith.git 
git checkout -t origin/shopManager

The code can be found in the services/messages folder.

The problem with our first implementation is that it misses a concept: there is no notion of a process. In earlier solutions the process was hidden, in the sense that whenever a service thought it couldn't proceed any further, it would send out a message. E.g. Shop would say it had a completed Order. This Order would then be picked up by Payment and Fulfillment. Payment would allow a customer to pay, and Fulfillment would have to wait because it needed paid Orders. So when Payment was done it would send out a PaymentReceived message that would allow Fulfillment to continue. This works, but Greg argues that it allows only a single process and that the solution would be more flexible if we had a process manager that delegates steps in the process to different services, waiting for them to complete.

That touches upon an aspect that wasn't implemented in our earlier solution: what happens if payment takes too long? In our first solution this would mean we would have a database with completed but unpaid orders. The problem of abandoned shopping carts could be solved by running a cleaning process that would send a message to customer support prompting them to call the customer, or just get rid of the order. This is where our earlier solution starts to feel a bit constrained; what if we needed several services to find out what to do? Implementing process logic in a separate service seems to make sense, so this version of the code tries to do just that, to find out what the consequences will be.

The process now looks like this:
Process using a Shop Manager

So ShopManager starts a session and creates a Clerk. In the real world a Clerk would be a person who pushes your shopping cart around for you while you do the shopping, takes you to the payment terminal when you're done, delivers the cart to fulfillment so the contents can be shipped to you and finally brings you back to the exit and waves goodbye while you leave the parking lot (sounds like a great idea actually).

I won't describe all the details here but will highlight some of the key points below. To help understand the process I would advise starting in the scenarioTest sub-project.

Clerk functions as a sort of container for all the state we need in the shopping process. This is comparable to a clerk in the real world who pushes your shopping cart around for you while carrying a clipboard that holds other information about you or the shopping process.

The process from the perspective of the clerk is easy to see in the EventListener class.
It starts in ClerkController when the software receives a POST on /shop/session. That results in a StartShopping message being sent.
This message is picked up by the shop component; look for an EventListener class in the shop sub-project. You can follow the flow by checking the EventListener classes and the calls to rabbitTemplate.convertAndSend() in each service.

One important consequence of this architecture is that we need to pass all data about the clerk around between the services and make sure all of it is sent back to the ShopManager service each time a step is completed. In previous versions we got away with partial implementations of the domain in e.g. the payment or fulfillment services (using the double-edged sword that is @JsonIgnoreProperties(ignoreUnknown = true)), but now that isn't possible anymore because we're sending all of the Clerk data around. To make life easy for myself I just copied all classes in the domain package to each service. A particular service updates its part of the document and when its sub-process completes it sends the whole document back to ShopManager. I'll refactor this to get rid of the copied code in a later version.

The ShopManager class in the shopManager project keeps an eye on Clerks. Whenever a Clerk is created and sent on its way, ShopManager starts a session that will expire after a while. If the session expires and the process isn't done yet (the Clerk is left standing somewhere in the shop), ShopManager cleans up the mess. In this example it only cleans up its own mess, but in real life it would have to send clean-up messages to each component involved in the process.

Centralizing the implementation of the process makes it easy to define what should happen in exceptional situations, so this is an advantage we gained from changing the architecture.
But more importantly it now becomes possible to change the process based on external properties like the type of customer or shop.

Plans and good intentions...

The picture below shows how far I've got up till now. Next I'll describe how to get rid of most of the domain classes that were necessary to process Clerk messages, and I hope to find the time to study Greg Young's ideas about messages and notifications. There's lots more to explore: I've introduced Docker to run the services, which might be interesting in its own right or combined with a solution to run several versions of the software concurrently. Another interesting experiment would be to allow different processes based on characteristics of, e.g., the customer.

Nobody cares about your product (Part 2)

Tue, 03/01/2016 - 10:55

In my previous blog we looked at how customers look at your product. In essence, they don't care about the product, they care about the problem they have and how your product can make it go away. If the problem goes away, so will your product. So the key lesson here is:

“a good Product Manager does not fall in love with his product, but with the problem”

Let’s dive into a simple technique to figure out if we are on the right path.

The job map

The job map is a map of all the steps involved in solving the customer's need (the job). So we look at the product as a tool that performs a "job" in order to solve the customer need. We can now look at how well products satisfy each of these steps. Note that a job can be functional (as in our example) but also emotional (it makes you feel good) or social (everyone else is talking about or doing this too). There might be a lot of steps involved, but the good news is you do not need to excel in everything.

A job map can be used to compare different products in the market and reveal unmet needs.

Job              | Product A | Product B | Product C
Code             | low       | high      | med
Compile          | med       | high      |
Package          | med       | med       |
Deploy           | high      | low       | low
Test             | high      | med       |
Monitor          | low       |           |
Update Document  | med       |           |
The example above compares 3 products that serve as an integrated development environment for software development.

Each of the products is good at solving part of the problem the user is trying to solve. The job map is a simple way to map your product against a competitor's product, or better said: against the products your users use to get the job done. Typically this reveals some new competition and opportunities.

In one workshop, for example, we compared remote-control apps for (multi-colour) lights. We discovered that users adapted the lights to their mood, but what if the lights adapted automatically, e.g. by observing your playlist?

We can do a better job at solving the customer's needs, and that is where the real opportunity lies.

The only quadrant that matters

Still, these are ill-defined needs, and that is usually the cause of confusion and differing opinions about what we are trying to do. It is therefore crucial that we define needs in a better way:

  1. Describe what the user is trying to accomplish, not the solution
  2. Quantify how valuable this is to your user; if you can measure value you can control value creation

Needs are usually stable; they tend to stick around much longer than our solution. Remember our Dyson example?

There is no fixed template, but a sentence that starts with "minimize the time it takes to..." is not bad. Before you know it you have identified a whole bunch of needs, but how do you quantify them? There is value in statistics (say N > 100) and there is value in observation (N = 1).

If you go for a survey you will end up with a nice 2 by 2 quadrant which will make your presentations look very scientific:


Create a grid that maps how well your product solves the needs of your users, buyers and ecosystem. Let them also rank how important this functionality is for them.

Once you have plotted the needs, you can easily find those that are underserved: they are in the bottom right. The overserved needs are in the upper left.

The sample diagram shows that the product is not meeting the demands of the ecosystem; the buyers seem to get what they want, but users are not too happy. This helps to spot an opportunity for disruption: an open solution that solves at least one of the user needs better than the current product does.

Don't forget to map out emotional and social needs. How well does this product make you feel? In IT products it is often a difficult differentiator, but phrases like "no one was ever fired for hiring Big Blue (IBM)" have a deep-rooted truth, and you should be aware of what your customers think of this.

This technique also helps if you have a product in the market: either you solve the customer need or some competitor will. Remember that Apple cannibalized its successful iPod when it introduced a phone that could do the same.

Typical strategies:

  1. Slightly underserved needs
    Add features to the existing product, do the job better
  2. Highly underserved needs
    Introduce a new product that does a significantly better job
  3. Mainly overserved needs
    Create a new low-cost product
  4. Well served needs
    Add jobs, do more with the same product

The Kano model

Side-step: there are needs and then there are needs. Be careful how you interview your customers about their needs, they may not see them as you do.

Kano Model

Professor Noriaki Kano developed a model in 1980 which explains the different view users have on their needs.

There are one-dimensional needs. These are the known performance indicators and are usually the key attributes on which products compete. In the eCommerce business this might be the size of your catalogue, delivery cost, delivery times, the speed of the website, etc.; in a car it is the fuel consumption rate. These needs have to be on par, but ultimately will not be the key factor. When buying a new car, the choice between two cars with a similar fuel consumption rate will not be made on that attribute (unless you're considering a Prius and a Hummer [11]).

There are also "must-be" needs, called basic needs. Users usually forget about these because they are table stakes; they just assume they are there. I've never compared cars on their ability to brake, yet I'd be quite dissatisfied if they could not.

Delighters are needs you didn’t know you had. When Ford introduced active parking assist in 2013 we did not expect it. Blessed with a male ego I might even say I don’t need it. Then again, I said that about cruise control and can’t imagine life without it anymore.

This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.

The Product Manager's guide to Continuous Innovation

The right attitude towards Agile estimates/forecasts & what playing golf has to do with it.

Fri, 02/26/2016 - 23:36

Setting realistic targets helps scrum teams manage expectations better. Thinking in target ranges, instead of just one precise target, is the trick. Learn which attitude it takes to deal with target forecasts successfully. Take your club and join me at hole 6.

Tee-Off Hole 6

The thing about golf is that somehow, deep in my heart, I want to hit this small golf ball as perfectly as possible. Do not ask me why - it is just in me. I would like to hit it like Tiger Woods. As close to the flag as possible. A hole in one? Yes, that would be just right!

An Agile team aims for the flag as well

How does this relate to the estimates and forecasts of Agile teams? A lot! Because when a team starts with a sprint they are aiming for the target as well.

In Agile, we call this the estimated velocity: a forecast of how many requirements (user stories) a team will complete in a time box (sprint) of 2-4 weeks, estimated in story points. This estimate comes from the team and is a result of the sprint planning meeting. And just like me, standing there at the tee box of hole 6, a Scrum team has a natural inner drive to hit this target as well as possible.

How team members can react when they miss the flag


Not meeting the estimated forecast of a sprint can create stress and demotivation for team members, especially when team members think in terms of failing/winning: "We did not meet the target, so we failed!"

As Scrum is so transparent to the outside world, they might also think: "What will my manager, co-worker (whoever) think or say of this 'failure'?"

Returning to golf: this compares to standing at the tee-off box with the golf club in your hand and thinking: "I HAVE to have THE perfect shot NOW! Otherwise I FAIL (again)! What will <whoever> think if I miss this ball?"

Do you feel the tension and pressure?

The mindset and attitude towards the expectations of forecasts can therefore have a big impact on the motivation and the behavior of scrum team members.

Always have realistic expectations

That is why I always tell my agile teams to have the right and REALISTIC expectations towards estimates. So instead, this is what the team members should think when they stand at the tee-off of hole 6 with the club in their hand:


  • The estimated velocity is never just the flag; it is a realistic target range in which the ball will land somewhere. There is a decent chance the ball will land in that range.
  • As beginners we do not know our range yet. It takes on average at least 3 sprints/shots to know the first rough target range.
  • The target range depends on the experience level. As beginners we know that this range is much wider for us than for an experienced pro like Tiger Woods.

Forecast target range

  • If we focus on the shot and do our best, we have a good chance of making a very good shot.
  • A bad shot can happen as well; we are not surprised when it does. We then identify the "disturbances" of that shot, learn from them and try to do better next time.
  • The more golf shots (sprints) we take, the more we practice, the better we get at knowing our target range. That does not mean we cannot still hit a bad ball here and there. Even pros like Tiger Woods do!

The effect of this way of thinking:

  • A more realistic attitude towards estimates.
  • A more relaxed way of thinking about bad balls/failures.

What is your experience with estimates in Agile teams? What worked well, what did not work in the end?

Testing infrastructure with Saltstack, Docker and Testinfra

Thu, 02/25/2016 - 13:57

We're running a few (virtual) servers, nothing special. It is rather easy to turn those machines into snowflakes. To counter this we introduced Salt. Salt does a nice job of deploying software to servers and upgrading them when run regularly. But how do we counter issues when changing the Salt configuration itself? The solution is simple: test!

I do not plan to test my changes directly in our live environment, nor do I want to set up and maintain such a dynamic environment locally. I want to put as much configuration as possible under version control (Git).

What I want to check is if provisioning an environment works and if the key services are online. I’m not so much interested in the individual states. It’s the overall result I want to check for. That’s what will run in production, so that’s the important part.

For example, say I want to set up a Jenkins master. I'd like to build and test my configuration locally as much as possible, maybe even test provisioning different operating systems. I might even want to validate my configuration on our CI server. You can find an example in my Salt formula testing repository on GitHub.

I created a small top.sls file for the salt environment:

base:
  'jenkins.*':
      - git
      - java
      - jenkins
      - node

And added the required Salt formulas. So far so good. From this point onwards there are two things I can do:

  1. Spin up a VM, hook it up to our Salt environment as a minion and start the provisioning.
  2. Test it locally.

You can probably see that strategy 1 has some downsides. If I need to tweak the formula, I need to re-provision the VM, which in itself can already lead to configuration drift. That means that when I'm finished with the configuration, I need to remove the VM and create a new one (and then hope I did not miss anything). Even worse: I'm testing on a live environment. I can't imagine what could happen when the environment gets reprovisioned with my intermediate work.

Instead, I create a Salt test environment locally with Vagrant (a blessing when you’re not running Linux on your laptop). The configurations themselves I want to deploy to “the simplest thing possible”: a Docker container. I considered using only Vagrant images, but Docker containers are much faster and it’s all about feedback in the end. Finally, I want to ensure that the right services are running. In the case of this example that will be Jenkins, listening on port 8080. For this I use a tool called Testinfra. Testinfra has a nice interface to test infrastructure and is built on top of Pytest. My checks are simple to start with:

import pytest

HOST_ID = "jenkins"

@pytest.fixture(scope="module", autouse=True)
def provision(Docker):
    Docker.provision_as(HOST_ID)

def test_service_running(Docker):
    Service = Docker.get_module("Service")
    assert Service("jenkins").is_running

def test_service_listening_on_port_8080(Docker, Slow):
    import time
    Socket = Docker.get_module("Socket")
    Slow(lambda: Socket("tcp://:::8080").is_listening)

The heavy lifting is done in conftest.py. This test setup file is loaded by Pytest by default.
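The conftest.py itself is not shown in the post. Purely as a hedged sketch of what such a setup could look like: the image names are taken from the test output below, but the DockerHost wrapper, the salt-call invocation, the Slow helper and the use of testinfra.get_backend are assumptions, not the author's actual code.

# conftest.py -- an illustrative sketch, not the original test setup.
import subprocess
import time

import pytest
import testinfra

IMAGES = ["centos7-salt-local", "ubuntu15-salt-local"]


class DockerHost(object):
    """Wraps a testinfra backend for one container plus a Salt provisioning helper."""

    def __init__(self, container_id, backend):
        self._container_id = container_id
        self._backend = backend

    def provision_as(self, host_id):
        # Apply the Salt highstate locally inside the container for this minion id.
        subprocess.check_call(
            ["docker", "exec", self._container_id,
             "salt-call", "--local", "--id", host_id, "state.highstate"])

    def get_module(self, name):
        # How modules are looked up differs per Testinfra version; this assumes
        # the backend object itself exposes get_module, as the tests above use.
        return self._backend.get_module(name)


@pytest.fixture(params=IMAGES, scope="module")
def Docker(request):
    # Start a throw-away container from one of the prepared, Salt-enabled images
    # (assumed to run an init process so services can actually start).
    container_id = subprocess.check_output(
        ["docker", "run", "-d", request.param]).strip().decode()
    request.addfinalizer(
        lambda: subprocess.check_call(["docker", "rm", "-f", container_id]))
    # testinfra.get_backend is the older name of the connection factory
    # (newer releases call it get_host); treat this as an assumption.
    backend = testinfra.get_backend("docker://" + container_id)
    return DockerHost(container_id, backend)


@pytest.fixture
def Slow():
    def slow(check, timeout=120, interval=5):
        # Retry a check until it returns truthy or the timeout expires.
        deadline = time.time() + timeout
        while not check():
            if time.time() > deadline:
                raise AssertionError("condition not met within %d seconds" % timeout)
            time.sleep(interval)
    return slow

The point of the sketch is merely that one parametrized fixture can provision and test every image, which matches the parametrized test ids in the output below.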

Since I’m all into Salt anyway, the VM can be provisioned by Salt as well. Let’s add our test config to the top.sls:

  'salt-dev':
      - docker
      - testinfra

In this setup, I want to make sure I have the required tooling (Docker and Testinfra) and I want to have a few Docker images ready. Those images mimic the configuration found on the real VMs.

Running the tests becomes as simple as:

[vagrant@salt-dev test]$ testinfra -v
============================= test session starts ==============================
platform linux2 -- Python 2.7.5, pytest-2.8.7, py-1.4.31, pluggy-0.3.1 -- /usr/bin/python
cachedir: .cache
rootdir: /srv/test, inifile:
plugins: testinfra-1.0.1
collected 4 items

test_jenkins.py::test_service_running[centos7-salt-local] PASSED
test_jenkins.py::test_service_listening_on_port_8080[centos7-salt-local] PASSED
test_jenkins.py::test_service_running[ubuntu15-salt-local] PASSED
test_jenkins.py::test_service_listening_on_port_8080[ubuntu15-salt-local] PASSED

========================== 4 passed in 228.02 seconds ==========================

Ain’t that nice? With little hassle I can check complete Salt rollouts and verify they’re installed correctly. This approach already caught a few regression bugs.

Considerations

You may want to run the tests on your CI server. That's a nice idea and it will definitely catch some regressions in the end. As you can see, though, you still have to wait a couple of minutes to see if all tests pass. Depending on your CI infrastructure some tweaks may be required. (Open question: how do you deal with pillar data then?)

You may want to decouple the scripts from the Docker containers and also use them to check the production infra. Testinfra can output in a format understood by Nagios, which is nice if you do service monitoring.

Source repository: Salt formula testing.

Dialogue with a Scrum Master

Thu, 02/25/2016 - 13:19

I've had the privilege to work with several teams and Scrum Masters over the last few years. On many occasions I had some interesting discussions. I've combined some of these subjects into one virtual conversation with an imaginary Scrum Master.

Scrum Master:  My Team believes I don't give them enough room for error.  I try to facilitate them but I feel responsible for the result. So if they don't do what I think is best, I believe I'm entitled to tell them what to do.  After all, I'm the Scrum Master and so I determine the process.  Right?
Me: It depends on the context. Most companies start with Scrum to move away from the "good" old-fashioned way of working on a product. They have to try something else, because the current way of working is NOT working. Even when we think we are doing something different, we tend to hold on to the things we used to do. Which isn't necessarily a bad thing, but if you really want your team to change, you'll have to change too. Being the traditional Project Manager or Technical Lead in a Scrum team is different. A Scrum Master is supposed to facilitate change and keep the learning curve as high as possible. Controlling the team or the process is too old school. And you have already seen that fail before, right? Yes, you can take the lead, but so can other team members. And please let them feel safe to take a step towards change. Make it safe for them to fail. Many great ideas and improvements come from mistakes. That's how we learn.

Scrum Master: Yeah I understand, but the symptoms of the team are bad estimates and not having the commitment to deliver high quality software. If I don't take the lead in that, no one will!
Me: I don't believe that anyone wants to deliver low quality software. Everyone has a different opinion about quality. Some think it's in using code patterns or standards, others think it's about having all your test automation tools "green". I'm not so sure. I think the quality of a product reflects the people who are responsible for building it. So if someone really feels bad about the product, process or organisation, it reflects on the product.

One day I asked the team to write down an imaginary nightmare headline for the company's newsletter, to stimulate creativity around our next exploratory test session. One of them wrote down "Nobody is using our software". And everyone else was nodding and agreeing with the statement. Suddenly I understood the commitment of the team. They didn't believe in their product! I understand you feel responsible for delivering high quality software, but if no one feels proud of or passionate about building the right product, you have a bigger problem than not delivering your product right on time.

Scrum Master: Wow! That sounds scary. And the worst thing is... it's kind of recognizable in our team. I should work on that! I'm still responsible for the standup meeting, right?
Me: Well... You are not the only one responsible for the standup. I’ve actually seen really good standups without a Scrum Master being present. Most teams think that a standup is a daily status meeting. Especially when a Scrum Master is present. The team reports to their master. The same thing you were doing before? Remember? Try to stimulate and inspire the team to really collaborate on tasks. Just look away when someone is reporting to you. Look at your Scrumboard and let the team talk to each other about the burndown and the steps they need to take to reach the sprint goal.

Oh... and it’s really important to keep these meetings as short as possible. Sometimes you don’t have to use the Past, Future, and Impediments format. No one likes to be in status meetings. So keep the focus on the next steps (future) instead of the things that have been done yesterday. Let the team members feel engaged about the Scrum process. Like they own it. Not you.

Scrum Master: So let me get this straight. The team leads the standup. And the standup format is just a guideline. I can live with that. I'm worried that no one will take responsibility for the impediments and other issues. We still need some sort of structure in the process, right?
Me: Yes! We should try to use these guidelines, but don't let the perfect process drive you and your team. Achieving a goal never had much to do with following a process; it's still the effort and commitment of the team that makes the difference. When no one takes responsibility for the impediments and issues, guide the team with ideas. Give them inspiration to solve the impediments themselves. When some issue or impediment is outside the scope of the team, they will ask you for help!

For example: The QA and IT Operations departments are doing too much manual work. So the team gets feedback and bug reports outside the sprints which delays every release. And yet, you are still pushing your team to estimate better and chasing their commitments. Maybe you should change your scope towards engagement that extends beyond the team?

Scrum Master: I see... I really should try to give them a chance. I just don’t trust them you know. And I agree that I could help the team with issues outside the scope of the team. Maybe I could ask our management to get some QA and Ops guys in our team. That would help speed up the feedback loops.
Me: Yes! You should! The team will take you where you want, but in a way the team wants, the way the team knows!

Scrum Master: What about all these Agile Software Tools, spreadsheets, charts and metrics?
Me: You feel responsible for the team's performance, and you try to come up with a process and metrics. I understand that you want to make the team look good, but be careful not to overdo things. Think about the things you were doing before adopting Scrum. If you want to change something, you'll have to let go of your old habits. I've worked with teams who only cared about the number of users using their system after a release. They kept track of these numbers and reacted / changed their plans accordingly. Be open to new things. And don't try to take over control of the team. The whole team should be held accountable and feel responsible for the work. If they come up with a cool new way of keeping metrics that matter, let them! And you'll see that they feel empowered to do great things and even become more engaged with the business.

Scrum Master: Yeah... Engagement, commitment and empowerment. I’ve tried to work on that during our retrospectives, but they suck! No one likes to attend them.
Me: I noticed. It's OK to be in charge of the retrospectives, but leave some room for the team to be creative in solving impediments or any other kind of issues. You don't have to come up with the solutions. Just inspire the team with great ideas and give them the feeling that they had a choice in doing it your way, or maybe their way. One other thing I noticed is that the team doesn't take time to evaluate their progress on the improvements from previous retrospectives. Whenever you decide to improve something, make it measurable and evaluate the outcome, so the team can see the effect of their improvement efforts.

Before I forget: make sure that at least one person in the team is held accountable for chasing the retrospective actions (like stabilizing the test automation). That doesn’t necessarily have to be you. In order to create group responsibility for improvement, it’s important to let someone other than yourself take responsibility for solving problems.

Scrum Master: So this whole Scrum Master role is not about authority or managing people?
Me: Again, it depends on the context. I’ve actually worked with a Scrum Master who was directive, but who also inspired us to do better. We had a tight schedule and deadline. He let us see and evaluate the strengths and weaknesses of several options, and from his superior experience he always made the right choice. And we trusted him.

It all depends on who you want to be. If you want to improve your team’s engagement with the company, you should help them build greater awareness and responsibility, so the team can make its own choices.

In the end you want your team to feel safe and comfortable making mistakes. Failure should not be something to fear; we should embrace learning. For me it’s important to be happy at my work and to do my best to make our environment a great place to work, since we spend most of our waking life at work! I want to wake up the next morning knowing we did a good job and wanting to go to work again.

So please. Dear Scrum Master... You can be so much greater than what a Scrum Master is.

The I in Team
Don’t look for the I in team, because it can be found in the ‘A-hole’ 😉

 

Nobody cares about your product (Part 1)

Sun, 02/21/2016 - 11:00

There are so many takedown techniques in Jūjutsu. By the time you reach your first belt you have learned at least 4 different throws, 3 different joint-locks and a variety of hooks that you can apply to get your opponent to the mat.

At belt-level exams the Sensei will ask you to start with your favorite takedown technique. In your comfort zone you are most likely to give a good performance. After that, you need to demonstrate the techniques outside your comfort zone. The point is that there is more than one way to achieve what we need, and if you want to survive, you had better learn different approaches to achieve the desired outcome.

In fact, as Product Managers we tend to fall in love with our solution, with our product, but a good Product Manager falls in love with the problem his or her product is trying to solve.

So we are looking for a better way to find answers to questions like:

  1. Who is the customer?
  2. What problem are they trying to solve?
  3. What customer segment makes the most attractive target?
  4. What unmet customer needs should we address?
  5. Should we pursue a sustaining, disruptive or breakthrough strategy?
  6. Will customers pay more for a better solution?

Quarter inch drill or quarter inch hole?

Now imagine that you are a top-notch drill maker: yours is simply the best power drill on the market. Some engineer may come up to you and say: “I have this great idea for making holes using this new mechanical contraption. Okay, it only works for little holes, but look how small the device is, and no electricity required.” The Product Manager will lean back and say “sorry, we are in the market for power drills”. If the engineer persists, the Product Manager will probably show a ton of market research on what a drill should look like, what shelf space it can take, what price point is needed for which channel, completely missing the point that the client doesn’t need a drill. He needs a hole.

This behavior is often amplified by the “sunk cost trap”: even when we (grudgingly) accept that the new innovation could work for some edge cases, we refuse to invest in it because we have already invested in an alternative technology in the past.

Changing technology just because there is a new way to do things is not right, but sticking to the old way because we have invested in it is not right either. The right argument is switching cost: if the new technology is (potentially) better, it must be able to compensate for the investment we need to make to switch to it.

In our example: drilling big holes in massive concrete walls is probably served well by power drills. But in guitar building the raw power will damage the wood. This is why luthiers rely on different drills to make delicate holes.

Back to customer needs. They are quite an elusive concept. Here’s a nice assignment: ask 5 managers in your company what customer need your product is solving. There should be a roughly 95.67% chance that you will get different answers.

What happens if we segment our customers differently and search for metrics based on the need or role that our product fulfills for them:

  1. Users
    The persons using the product or service to get a functional job done. They are the reason the market exists.
  2. Buyers
    The buyer and user are often different. The buyer engages in the buying process, not product use. Buyers apply a set of financial metrics to select which products and services they will acquire.
  3. Eco system
    Customers often need their products and services to be installed, set up, monitored, maintained, and upgraded. This is often executed by third parties.

Addressing the right pain

At one time I was responsible for a cross-platform mobile development platform. Our users were developers; they cared about ease of use and having no limits. Their main concern was that our product would limit their freedom. The product was bought by the IT department, usually after having been introduced by the marketing department. Where the marketing department saw an opportunity for a faster time to market, for the CIO it was about reducing cost and improving stability. Finally, the large IT services companies wanted all their stuff to be deployable as fast as possible.

Each market segment has a different problem. Finding out what the problem is for each segment can be done in various ways: job mapping, contextual inquiry or customer journey mapping. We will cover a simple technique in the next post.

 

This blog is part of the Product Samurai series. Sign up here to stay informed of the upcoming book: The Product Manager's Guide to Continuous Innovation.


The Many Faces of Behaviour-Driven Development Tools

Fri, 02/19/2016 - 15:32

Software development is an inherently social activity. Put a group of great minds together, each from their own discipline, and see how innovation can flourish. Bottlenecks are the last thing high-performing agile teams want, and certainly not 'business bottlenecks': a lack of innovative ideas or poorly specified epics and user stories will bring the software development machine to a grinding halt. Luckily, solutions exist that can act as catalysts or even fuel the machine: behaviour-driven development, or BDD for short.

BDD is synonymous with a collaborative approach for entire agile teams, and has common understanding and sustainable agility as its two cornerstones. BDD comes with a fair amount of tool support as well. The main focus of these tools is to accelerate the development process while checking that the software being developed is right (cf. test-driven development). And this is very much needed: not a single hour (minute?) of waste between the specification of (functional) requirements in user stories and the verifiably correct implementation of these requirements in well-defined acceptance criteria. This allows organizations to spend most of their precious time on actually implementing the functionality!

Whereas a plethora of tools already exists (see e.g. this list), new tools are popping up quite frequently. Some of the tools, such as FitNesse, focus on collaboration using a wiki. This allows teams not to be dependent on an integrated development environment (IDE), to which e.g. a product owner might not have access. Other collaboration tools (e.g. Cucumber or JBehave) integrate well with IDEs, which allows teams to treat the specifications just like source code ("requirements are versioned too, you know!?"). As an aside, FitNesse was recently brought to your IDE as well.

Within Xebia, we've recently spent some time evaluating JGiven: a new star in the BDD firmament, but not yet widely known. JGiven truly brings BDD to the developers. JGiven does not start with the specification in text (where example test cases are specified in nearly plain English, ready for automation), but with the specification in Java source code. The textual specification, which can then safely be handed to the business, is generated from the Java source code. A pro: JGiven has improved maintainability over some other BDD tools. These may seem just small and subtle differences, but organizations should weigh these considerations carefully when selecting a BDD solution. At least, this shows the dynamic nature of the collaboration tool arena.
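To give a flavour of the approach, here is a minimal sketch of what a JGiven scenario might look like. The class, stage and method names below are hypothetical (not taken from an actual project); JGiven generates the readable report text from the underscore-separated method names:

import com.tngtech.jgiven.Stage;
import com.tngtech.jgiven.junit.ScenarioTest;
import org.junit.Test;

public class OrderScenarioTest extends
        ScenarioTest<OrderScenarioTest.GivenACustomer, OrderScenarioTest.WhenOrdering, OrderScenarioTest.ThenGiftCertificate> {

    @Test
    public void loyal_customers_receive_a_gift_certificate_for_large_orders() {
        given().a_customer_enrolled_in_the_loyalty_program();
        when().the_customer_places_an_order_of(80);
        then().a_gift_certificate_is_sent();
    }

    public static class GivenACustomer extends Stage<GivenACustomer> {
        public GivenACustomer a_customer_enrolled_in_the_loyalty_program() {
            // Set up the customer in the system under test here.
            return self();
        }
    }

    public static class WhenOrdering extends Stage<WhenOrdering> {
        public WhenOrdering the_customer_places_an_order_of(int amount) {
            // Drive the system under test here.
            return self();
        }
    }

    public static class ThenGiftCertificate extends Stage<ThenGiftCertificate> {
        public ThenGiftCertificate a_gift_certificate_is_sent() {
            // Assert on the outcome here.
            return self();
        }
    }
}

Running this as a plain JUnit test produces the generated, business-readable scenario report.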

Rest assured, Xebia will continue to explore these novelties in the specification and test automation landscape! Want to join the discussion? See what we're doing or give us a call!

Including custom Flume components in Cloudera Manager

Thu, 02/18/2016 - 19:52

I'm currently working on a Hadoop project using Cloudera's stack. We're running a couple of Flume jobs to move data around our cluster. Our Flume Metric Details page in Cloudera Manager looked like this:

[Screenshot: the Flume Metric Details page in Cloudera Manager]

You could infer from the image that we run a BarSource alongside our FooSource and BazSource and you would be correct. However, it doesn't show up in Cloudera Manager. Why not?

The FooSource and BazSource are standard source types included with the platform. The BarSource is a subclass of AbstractEventDrivenSource that we wrote ourselves to pull data from a customer-specific system.

How do you get a custom Flume source or sink included on this dashboard? It's not difficult; the secret is simply JMX. There is very little documentation, though: at the time of writing, the Flume Developer Guide doesn't mention JMX at all.

The flume-core package includes JMX MBeans for each of the component types: SourceCounter, ChannelCounter and SinkCounter. If you include the appropriate counter MBean in your custom Flume component, that component will appear in Cloudera Manager. Here's a simple example of SourceCounter in use:

package com.xebia.blog.flume;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.FlumeException;
import org.apache.flume.instrumentation.SourceCounter;
import org.apache.flume.source.AbstractEventDrivenSource;

/**
 * Demonstrates the use of the {@code SourceCounter} in Flume-NG.
 */
public class DemoSource extends AbstractEventDrivenSource {

    private SourceCounter counter;

    @Override
    protected void doConfigure(Context context) throws FlumeException {

        // Counter MBeans are created in the configure method, with the component name
        // we've been provided.
        this.counter = new SourceCounter(this.getName());
    }

    @Override
    protected void doStart() throws FlumeException {
        // You start the counter in start()
        this.counter.start();

        // This example is an event-driven source, so we'll typically have some sort of
        // connection and callback method.
        connectToDataSourceWithCallback(this);
        this.counter.setOpenConnectionCount(1);
    }

    @Override
    protected void doStop() throws FlumeException {
        // Disconnect from the data source...
        disconnect();
        this.counter.setOpenConnectionCount(0);

        // ...and stop the counter.
        this.counter.stop();
    }

    /**
     * Callback handler for our example data source.
     */
    public void onIncomingData(Object dataSourceEvent) {
        // Count how many events we receive...
        this.counter.incrementEventReceivedCount();

        // ...do whatever processing it is we do...
        Event flumeEvent = convertToFlumeEvent(dataSourceEvent);

        // ...and count how many are successfully forwarded.
        getChannelProcessor().processEvent(flumeEvent);
        this.counter.incrementEventAcceptedCount();
    }
}

The SourceCounter MBean has some other metrics that you can increment as needed. The ChannelCounter and SinkCounter MBeans work the same way. Documentation is thin, but the Flume source code can be mined for examples: sources, channels and sinks.
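As an illustration of those other metrics, here is a sketch of a batch-oriented callback that could be added to the DemoSource example above. The callback and conversion helpers are hypothetical, and it assumes the addTo* batch variants are available on SourceCounter; check them against the Flume version you are running:

    /**
     * Hypothetical callback for a batch of events from the data source.
     */
    public void onIncomingBatch(java.util.List<Object> dataSourceEvents) {
        // The addTo* variants count a whole batch of events in one call.
        this.counter.addToEventReceivedCount(dataSourceEvents.size());

        java.util.List<Event> flumeEvents = new java.util.ArrayList<Event>();
        for (Object dataSourceEvent : dataSourceEvents) {
            flumeEvents.add(convertToFlumeEvent(dataSourceEvent));
        }

        // processEventBatch puts all events on the configured channel(s) at once...
        getChannelProcessor().processEventBatch(flumeEvents);

        // ...and we count how many were successfully forwarded.
        this.counter.addToEventAcceptedCount(flumeEvents.size());
    }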

Mapping Biases to Testing: the Anchoring Effect

Mon, 02/15/2016 - 10:17

Dear reader, welcome back to the Mapping Biases to Testing series. Today it is my pleasure to discuss the first bias in this series: the Anchoring Effect. Before we start mapping that to testing, I want to make sure that we have a clear understanding of what the anchoring effect is. 

“Anchoring is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions. During decision making, anchoring occurs when individuals use an initial piece of information to make subsequent judgments. Once an anchor is set, other judgments are made by adjusting away from that anchor, and there is a bias toward interpreting other information around the anchor. For example, the initial price offered for a used car sets the standard for the rest of the negotiations, so that prices lower than the initial price seem more reasonable even if they are still higher than what the car is really worth.”

I highlighted the important parts. Decision making is something we constantly have to do during testing, and it is important to realise which anchors might affect you. Also, to make this clear, I think ‘testing’ is not just the act of doing a test session, but thinking about everything that involves quality. You can apply a testing mindset to all that is needed to make software: the process, the specifications, the way the team works, etc.

My experience

Personally, some Scrum artefacts are anchors for me, namely the duration of the sprint, the estimation of stories and counting bugs to measure quality. Let me explain this with examples.


The clients I worked for all had sprints that lasted two to three weeks. Those of you also working with the Scrum framework know the drill: you create a sprint backlog consisting of stories and make sure the work is done at the end of the allotted time. What I have seen happening again and again is the last day of the sprint being a hectic one, with the focus on testing. That’s because a lot of companies are secretly doing the ‘scrumwaterfall’: development starts at the beginning of the sprint, but testing is still an activity that takes place at the end. The business wants to get the stories done, so testing is rushed. The duration of the sprint has suddenly become the anchor. It takes a lot of courage as a tester to change this by speaking up, offering options to solve it, and not succumbing to the pressure of cheating the Definition of Done.

Sadly, I’ve witnessed teams cheating the Definition of Done because it was the last day of the sprint and they were under pressure to deliver the work. Low quality work was accepted and the fact that technical debt will come back to haunt the team wasn’t a consideration at that moment.

The anchor of the sprint is strong. When you’re working with Scrum you are drilled to think in these increments, even when reality is sometimes messier. You could say that the reason stories don’t get completed on time (or are completed with low quality) is also that people are very bad at estimating stories. That brings us to the next anchor.

Estimation of stories


Estimating is something that has fascinated me since I first stepped into the wondrous world of office work. I still wonder why people put so much faith in a plan, why managers judge profit against a fictional target they produced months ago, why people keep getting surprised when a deadline isn’t met. Can we really not see that a complicated reality, consisting of so many uncontrollable factors, cannot be estimated?

A movement called 'No Estimates' is on the rise to counter the problems that come from estimating. Personally, I haven’t read enough about it to say “this is the solution”,  but I do sympathise with the arguments. It's worth investigating if this sort of thing interests you.

Something I have witnessed in estimating user stories is that the estimate is usually too low. The argument is often “yeah, but we did a story similar to this one and that was 8 points”. That other story is suddenly the anchor, and if you estimate the new story at 13 points, people want an explanation. I always say: “There are so many unknown factors”, or use the even less popular argument of “we have a track record of picking up stories that we estimated at 8 points, but didn’t manage to finish in one sprint”. Sadly, such an argument rarely convinces others, because the belief in estimates is high. I have succumbed to the general consensus more often than I’d like to admit. Trust me, I get no joy from saying “I told you so” when a story that we estimated at 8 points (and I wanted to give 13 points) ends up not being done in one sprint. I keep my mouth shut at that point, but during the next planning session I will say “remember that 8-point story? Yeah... let’s not be so silly this time”, and the cycle can repeat itself.


My most ridiculous example comes from a few years back. I worked for a large company back then; let’s just say they were pretty big on processes and plans. Every release was managed by at least 10 managers, and risk was a very big deal. The way they handled the risk, though, with anchors, was a bit crazy. A new release was considered ‘good’ if it didn’t have more than 2 high-severity issues, 5-10 medium-severity issues, and any amount of low-severity issues. The Defect Report Meetings were a bit surreal. There we were in a room, with a bunch of people, discussing lists of bugs and saying ‘the quality is okay’ based on the number of bugs. The amount of time we wasted talking about low-severity bugs could probably have been used to fix them. Office craziness at its finest. I hope the anchor is clear here, but let me say it very clearly: the quality of your product is NOT based on the number of bugs you have found. Taking that as an anchor makes the discussion and definition of ‘quality’ very easy and narrow, but it also denies reality. Quality is something very complex, so be very careful not to resort to anchors like ‘how many defects of type A or type B do we have’ and basing your judgement on that alone.

What can we learn from this from a test perspective?

As a tester, you have to act as the conscience of the team. If it is in your power, don’t let a bad estimation, or a sprint that is in danger of not being completed, affect your judgement. Our job is to inform our clients and teammates of the risks we see in the product, based on sound metrics and feeling (yes, feelings!). If there was not enough time to test thoroughly, because the team fell for the anti-pattern of scrumwaterfall, try to take steps to combat this (improve the testability of the product by working more closely with the developers, for instance).

If you are under pressure from outside the team to deliver the software, even when it is not done yet, make the risks visible! Inform, inform, inform. That should be our main concern. Admittedly, if my team were constantly forced to release low-quality software, I would get kind of depressed with the working environment. Sometimes, however, someone higher up the chain simply decides to take shitty software live.

Also, don’t forget to take a look inwards. Are there anchors influencing your work? Do you count the bugs you find and draw conclusions from there? Do you write a certain number of automated checks because you think that sounds about right? Are there any other test-related numbers that seem normal to you? If so, challenge yourself to ask ‘is this normal, or could it be an anchor?’.

If you have more examples of anchors, please post them below in the comments!

Missed the first post, the introduction? Read it here.

Automated UI Testing with React Native on iOS

Mon, 02/08/2016 - 21:30
code { display: inline !important; font-size: 90% !important; color: #6a205e !important; background-color: #f9f9f9 !important; border-radius: 4px !important; } .syntaxhighlighter code { display: inline !important; border-radius: 4px !important; } .syntaxhighlighter { font-size: 120% !important; } .syntaxhighlighter .plain { color: #6a205e !important; } .post h2, h1 { text-transform: none !important; }

React Native is a technology to develop mobile apps on iOS and Android that have a near-native feel, all from one codebase. It is a very promising technology, but the documentation on testing can use some more depth. There are some pointers in the docs but they leave you wanting more. In this blog post I will show you how to use XCUITest to record and run automated UI tests on iOS.

Start by generating a brand new react native project and make sure it runs fine:
react-native init XCUITest && cd XCUITest && react-native run-ios
You should now see the default "Welcome to React Native!" screen in your simulator.

Let's add a textfield and display the results on screen by editing index.ios.js (make sure TextInput is added to the react-native imports at the top of the file):

class XCUITest extends Component {

  constructor(props) {
    super(props);
    this.state = { text: '' };
  }

  render() {
    return (
      <View style={styles.container}>
        <TextInput
          testID="test-id-textfield"
          style={{borderWidth: 1, height: 30, margin: 10}}
          onChangeText={(text) => this.setState({text})}
          value={this.state.text}
        />
        <View testID="test-id-textfield-result" >
          <Text style={{fontSize: 20}}>You typed: {this.state.text}</Text>
        </View>
      </View>
    );
  }
}

Notice that I added testID="test-id-textfield" and testID="test-id-textfield-result" to the TextInput and the View. This causes React Native to set an accessibilityIdentifier on the native view. This is something we can use to find the elements in our UI test.

Recording the test

Open the Xcode project in the ios folder and click File > New > Target. Then pick iOS > Test > iOS UI Testing Bundle. The defaults are OK; click Finish. Now there should be an XCUITestUITests folder with an XCUITestUITests.swift file in it.

Let's open XCUITestUITests.swift and place the cursor inside the testExample method. At the bottom left of the editor there is a small red button. If you press it, the app will build and start in the simulator.

Every interaction you now have with the app will be recorded and added to the testExample method, just like in the looping gif at the bottom of this post. Now type "123" and tap on the text that says "You typed: 123". End the recording by clicking on the red dot again.

Something like this should have appeared in your editor:

      let app = XCUIApplication()
      app.textFields["test-id-textfield"].tap()
      app.textFields["test-id-textfield"].typeText("123")
      app.staticTexts["You typed: 123"].tap()

Notice that you can pull down the selectors to change them. Change the "You typed" selector to make it more specific, change the .tap() into .exists and then surround it with XCTAssert to do an actual assert:

      XCTAssert(app.otherElements["test-id-textfield-result"].staticTexts["You typed: 123"].exists)

Now if you run the test it will show you a nice green checkmark in the margin and say "Test Succeeded".

In this short blog post I showed you how to use the React Native testID attribute to tag elements, and how to record and adapt an XCUITest in Xcode. There is a lot more to be told about React Native, so don't forget to follow me on Twitter (@wietsevenema).

[Animated gif: recording UI tests in Xcode]

Making Agile even more Awesome. By Nature.

Mon, 02/08/2016 - 11:31

Watch the evening news and it should be no surprise that the world around us is changing ever faster and becoming too complex to fit in a system we as humankind can still control. We have to learn and adapt much faster to solve our epic challenges. The Agile Mindset and methodologies are an important mainstay here. Adding some principles from nature makes it even more awesome.

In organizations, and in our lives, we are in a constant battle to “beat the system”: steering the economy, nature, life. We’re fighting against it, and becoming less and less successful at it. What should change here?

First, we could start to let go of the things we can’t control and fully trust the system we live in: Nature. It’s the ultimate Agile System, continuously learning and adapting to changing environments. But how?

We have created planes and boats by observing how nature does it: Biomimetics. In my job as an Agile Innovation consultant, I’m using these and other related principles:

  1. Innovation engages in lots of experimentation: life creates success models through making mistakes, survival of the fittest.
  2. Continuously improve by feedback loops.
  3. Use only the energy you need. Work smart and effectively.
  4. Fit form to function. Function is primary important to esthetics.
  5. Recycle: Resources are limited, (re)use them smart.
  6. Encourage cooperation.
  7. Positivity is an important source of energy, like sunlight can be for nature.
  8. Aim for diversity. For example, diverse problem solvers working together can outperform groups of high-ability problem solvers.
  9. Demand local expertise, to stay aware of local differences.
  10. Create a safe environment to experiment. Like Facebook is able to release functionality every hour for a small group of users.
  11. Outperform frequently to gain endurance and to stay fit.
  12. Reduce complexity by minimizing the number of materials and tools. For example, 96% of life on this planet is made up of six types of atoms: Carbon, Hydrogen, Oxygen, Nitrogen, Phosphorus and Sulphur.
How to kickstart your start-up?

Until a couple of years ago, innovative tools were only available to financially powerful companies. Now, innovative tools like 3D printing and the Internet of Things are accessible to everybody. The same applies to Agile. This enables you to enter new markets at extremely low marginal cost. In these start-ups you can recognize elements of natural agility. A brilliant example is Joe Justice’s WikiSpeed: in less than 3 months he succeeded in building a 100-miles-per-gallon, street-legal car, beating companies like Tesla. This all shows you can solve apparently impossible challenges by trusting your natural common sense. It's that simple.

Paul Takken (Xebia) and Joe Justice (Scrum Inc.) are currently working together on several global initiatives, coaching governments and large enterprises in reinventing themselves so they can anticipate today's epic challenges. This is done through smarter use of people’s talents, tooling and materials, and the Agile and Lean principles mentioned above.

Robot Framework and the keyword-driven approach to test automation - Part 2 of 3

Wed, 02/03/2016 - 18:03

In part 1 of our three-part post on the keyword-driven approach, we looked at the position of this approach within the history of test automation frameworks. We elaborated on the differences, similarities and interdependencies between the various types of test automation frameworks. This provided a first impression of the nature and advantages of the keyword-driven approach to test automation.

In this post, we will zoom in on the concept of a 'keyword'.

What are keywords? What is their purpose? And what are the advantages of utilizing keywords in your test automation projects? And are there any disadvantages or risks involved?

As stated in an earlier post, the purpose of this first series of introductory-level posts is to prevent all kinds of intrusive expositions in later posts. These later posts will be of a much more practical, hands-on nature and should be concerned solely with technical solutions, details and instructions. However, for those that are taking their first steps in the field of functional test automation and/or are inexperienced in the area of keyword-driven test automation frameworks, we would like to provide some conceptual and methodological context. By doing so, those readers may grasp the follow-up posts more easily.

Keywords in a nutshell

A keyword is a reusable test function

The term ‘keyword’ refers to a callable, reusable, lower-level test function that performs a specific, delimited and recognizable task. For example: ‘Open browser’, ‘Go to url’, ‘Input text’, ‘Click button’, ‘Log in’, 'Search product', ‘Get search results’, ‘Register new customer’.

Most, if not all, of these are recognizable not only for developers and testers, but also for non-technical business stakeholders.

Keywords implement automation layers with varying levels of abstraction

As can be gathered from the examples given above, some keywords are more atomic and specific (or 'specialistic') than others. For instance, ‘Input text’ will merely enter a string into an edit field, while ‘Search product’ will be comprised of a chain (sequence) of such atomic actions (steps), involving multiple operations on various types of controls (assuming GUI-level automation).

Elementary keywords, such as 'Click button' and 'Input text', represent the lowest level of reusable test functions: the technical workflow level. These often do not have to be created, but are being provided by existing, external keyword libraries (such as Selenium WebDriver), that can be made available to a framework. A situation that could require the creation of such atomic, lowest-level keywords, would be automating at the API level.

The atomic keywords are then reused within the framework to implement composite, functionally richer keywords, such as 'Register new customer', 'Add customer to loyalty program', 'Search product', 'Add product to cart', 'Send gift certificate' or 'Create invoice'. Such keywords represent the domain-specific workflow activity level. They may in turn be reused to form other workflow activity level keywords that automate broader chains of workflow steps. Such keywords then form an extra layer of wrappers within the layer of workflow activity level keywords. For instance, 'Place an order' may be comprised of 'Log customer in', 'Search product', 'Add product to cart', 'Confirm order', etc. The modularization granularity applied to the automation of such broader workflow chains is determined by trading off various factors against each other - mainly factors such as the desired levels of readability (of the test design), of maintainability/reusability and of coverage of possible alternative functional flows through the involved business process. The eventual set of workflow activity level keywords forms the 'core' DSL (Domain Specific Language) vocabulary in which the highest-level specifications/examples/scenarios/test designs/etc. are to be written.

The latter (i.e. scenarios/etc.) represent the business rule level. For example, a high-level scenario might be:  'Given a customer has joined a loyalty program, when the customer places an order of $75,- or higher, then a $5,- digital gift certificate will be sent to the customer's email address'. Such rules may of course be comprised of multiple 'given', 'when' and/or 'then' clauses, e.g. multiple 'then' clauses conjoined through an 'and' or 'or'. Each of these clauses within a test case (scenario/example/etc.) is a call to a workflow activity level, composite keyword. As explicated, the workflow-level keywords, in turn, are calling elementary, technical workflow level keywords that implement the lowest-level, technical steps of the business scenario. The technical workflow level keywords will not appear directly in the high-level test design or specifications, but will only be called by keywords at the workflow activity level. They are not part of the DSL.

Keywords thus live in layers with varying levels of abstraction, where, typically, each layer reuses (and is implemented through) the more specialistic, concrete keywords from lower levels. Lower level keywords are the building blocks of higher level keywords and at the highest-level your test cases will also be consisting of keyword calls.

Of course, your automation solution will typically contain other types of abstraction layers, for instance a so-called 'object-map' (or 'gui-map') which maps technical identifiers (such as an xpath expression) onto logical names, thereby enhancing maintainability and readability of your locators. Of course, the latter example once again assumes GUI-level automation.

Keywords are wrappers

Each keyword is a function that automates a simple or (more) composite/complex test action or step. As such, keywords are the 'building blocks' for your automated test designs. When having to add a customer as part of your test cases, you will not write out (hard code) the technical steps (such as entering the first name, entering the surname, etc.), but you will have one statement that calls the generic 'Add a customer' function which contains or 'wraps' these steps. This wrapped code, as a whole, thereby offers a dedicated piece of functionality to the testers.

Consequently, a keyword may encapsulate sizeable and/or complex logic, hiding it and rendering it reusable and maintainable. This mechanism of keyword-wrapping entails modularization, abstraction and, thus, optimal reusability and maintainability. In other words, code duplication is prevented, which dramatically reduces the effort involved in creating and maintaining automation code.

Additionally, the readability of the test design will be improved upon, since the clutter of technical steps is replaced by a human readable, parameterized call to the function, e.g.: | Log customer in | Bob Plissken | Welcome123 |. Using so-called embedded or interposed arguments, readability may be enhanced even further. For instance, declaring the login function as 'Log ${userName} in with password ${password}' will allow for a test scenario to call the function like this: 'Log Bob Plissken in with password Welcome123'.

Keywords are structured

As mentioned in the previous section, keywords may hide rather complex and sizeable logic. This is because the wrapped keyword sequences may be embedded in control/flow logic and may feature other programmatic constructs. For instance, a keyword may contain:

  • FOR loops
  • Conditionals (‘if, elseIf, ..., else’ branching constructs)
  • Variable assignments
  • Regular expressions
  • Etc.

Of course, keywords will feature such constructs more often than not, since encapsulating the involved complexity is one of the main purposes for a keyword. In the second and third generation of automation frameworks, this complexity was an integral part of the test cases, leading to automation solutions that were inefficient to create, hard to read & understand and even harder to maintain.

Being a reusable, structured function, a keyword can also be made generic, by taking arguments (as briefly touched upon in the previous section). For example, ‘Log in’ takes arguments: ${user}, ${pwd} and perhaps ${language}. This adds to the already high levels of reusability of a keyword, since multiple input conditions can be tested through the same function. As a matter of fact, it is precisely this aspect of a keyword that enables so-called data-driven test designs.

Finally, a keyword may also have return values, e.g.: ‘Get search results’ returns: ${nrOfItems}. The return value can be used for a myriad of purposes, for instance to perform assertions, as input for decision-making, or to pass into another function as an argument. Some keywords will return nothing, but only perform an action (e.g. change the application state, insert a database record or create a customer).

Risks involved

With great power comes great responsibility

The benefits of using keywords have been explicated above. Amongst other advantages, such as enhanced readability and maintainability, the keyword-driven approach provides a lot of power and flexibility to the test automation engineer. Quasi-paradoxically, it is in harnessing this power and flexibility that the primary risk of the keyword-driven approach is introduced. That this risk should be of topical interest to us will become clear by digressing somewhat into the subject of 'the new testing'.

In many agile teams, both 'coders' and 'non-coders' are expected to contribute to the automation code base. The boundaries between these (and other) roles are blurring. Despite the current (and sometimes rather bitter) polemic surrounding this topic, it seems to be inevitable that the traditional developer role will have to move towards testing (code) and the traditional tester role will have to move towards coding (tests). Both will use testing frameworks and tools, whether it be unit testing frameworks (such as JUnit), keyword-driven functional test automation frameworks (such as RF or Cucumber) and/or non-functional testing frameworks (such as Gatling or Zed Attack Proxy).

To this end, the traditional developer will have to become knowledgeable and gain experience in the field of testing strategies. Test automation that is not based on a sound testing strategy (and attuned to the relevant business and technical risks), will only result in a faster and more frequent execution of ineffective test designs and will thus provide nothing but a false sense of security. The traditional developer must therefore make the transition from the typical tool-centric approach to a strategy-centric approach. Of course, since everyone needs to break out of the silo mentality, both developer and tester should also collaborate on making these tests meaningful, relevant and effective.

The challenge for the traditional tester may prove to be even greater and it is there that the aforementioned risks are introduced. As stated, the tester will have to contribute test automation code. Not only at the highest-level test designs or specifications, but also at the lowest-level-keyword (fixture/step) level, where most of the intelligence, power and, hence, complexity resides. Just as the developer needs to ascend to the 'higher plane' of test strategy and design, the tester needs to descend into the implementation details of turning a test strategy and design into something executable. More and more testers with a background in 'traditional', non-automated testing are therefore entering the process of acquiring enough coding skills to be able to make this contribution.

However, by having (hitherto) inexperienced people authoring code, severe stability and maintainability risks are being introduced. Although all current (i.e. keyword-driven) frameworks facilitate and support creating automation code that is reusable, maintainable, robust, reliable, stable and readable, code authors still have to actively realize these qualities, by designing for them and building them into their automation solutions. Non-coders, though, in my experience, (at least initially) have quite some trouble understanding and (even more dangerously) appreciating the critical importance of applying design patterns and other best practices to their code. That is, most traditional testers seem to be able to learn how to code (at a sufficiently basic level) rather quickly, partially because, generally, writing automation code is less complex than writing product code. They also get a taste for it: they soon get passionate and ambitious. They become eager to apply their newly acquired skills and to create lots of code. Caught in this rush, they often forget to refactor their code, downplay the importance of doing so (and the dangers involved) or simply opt to postpone it until it becomes too large a task. Because of this, even testers who have been properly trained in applying design patterns may still deliver code that is monolithic, unstable/brittle, non-generic and hard to maintain. Depending on the level at which the contribution is to be made (lowest-level in code or mid-level in scripting), these risks apply to a greater or lesser extent. Moreover, this risky behaviour may be incited by uneducated stakeholders, as a consequence of them holding unrealistic goals, maintaining a short-term view and (to put it bluntly) being ignorant of the pitfalls, limitations, disadvantages and risks that are inherent to all test automation projects.

Then take responsibility ... and get some help in doing so

Clearly then, the described risks are not so much inherent to the frameworks or to the approach to test automation, but rather flow from inexperience with these frameworks and, in particular, from inexperience with this approach. That is, to be able to (optimally) benefit from the specific advantages of this approach, applying design patterns is imperative. This is a critical factor for the long-term success of any keyword-driven test automation effort. Without applying patterns to the test code, solutions will not be cost-efficient, maintainable or transferable, amongst other disadvantages. The costs will simply outweigh the benefits in the long run. What's more, essentially the whole purpose and added value of using keyword-driven frameworks are lost, since these frameworks were devised precisely to this end: to counter the severe maintainability/reusability problems of the earlier generations of frameworks. Therefore, of all the approaches to test automation, the keyword-driven approach depends to the greatest extent on the disciplined and rigid application of standard software development practices, such as modularization, abstraction and genericity of code.

This might seem a truism. However, since typically the traditional testers (and thus novice coders) are nowadays directed by their management towards using keyword-driven frameworks for automating their functional, black-box tests (at the service/API- or GUI-level), automation anti-patterns appear and thus the described risks emerge. To make matters worse, developers remain mostly uninvolved, since a lot of these testers are still working within siloed/compartmented organizational structures.

In our experience, a combination of a comprehensive set of explicit best practices, training and on-the-job coaching, and a disciplined review and testing regime (applied to the test code) is an effective way of mitigating these risks. Additionally, silos need to be broken down, so as to foster collaboration (and create synergy) on all testing efforts, and to be able to coordinate and orchestrate all of these efforts through a single, central, comprehensive and shared overall testing strategy.

Of course, the framework selected to implement a keyword-driven test automation solution is an important enabler as well. As will become apparent from this series of blog posts, the Robot Framework is the platform par excellence to facilitate, support and even stimulate these counter-measures and, consequently, to very swiftly enable and empower seasoned and beginning coders alike to contribute code that is efficient, robust, stable, reusable, generic, maintainable as well as readable and transferable. That is not to say that it is the platform to use in any given situation, just that it has been designed with the intent of implementing the keyword-driven approach to its fullest extent. As mentioned in a previous post, the RF can be considered the epitome of the keyword-driven approach, bringing that approach to its logical conclusion. As such it optimally facilitates all of the mentioned preconditions for long-term success. Put differently, using the RF, it will be hard to fall into the pitfalls inherent to keyword-driven test automation.

Some examples of such enabling features (that we will also encounter in later posts):

  • A straightforward, fully keyword-oriented scripting syntax, that is both very powerful and yet very simple, to create low- and/or mid-level test functions.
  • The availability of dozens of keyword libraries out-of-the-box, holding both convenience functions (for instance to manipulate and perform assertions on xml) and specialized keywords for directly driving various interface types. Interfaces such as REST, SOAP or JDBC can thus be interacted with without having to write a single line of integration code.
  • Very easy, almost intuitive means to apply a broad range of design patterns, such as creating various types of abstraction layers.
  • And lots and lots of other great and unique features.
Summary

We now have an understanding of the characteristics and purpose of keywords and of the advantages of structuring our test automation solution into (various layers of) keywords. At the same time, we have looked at the primary risks involved in applying such a keyword-driven approach and at ways to deal with them.

Keyword-driven test automation is aimed at solving the problems that were instrumental in the failure of prior automation paradigms. However, for a large part it merely facilitates the involved solutions. That is, to actually reap the benefits that a keyword-driven framework has to offer, we need to use it in an informed, professional and disciplined manner, by actively designing our code for reusability, maintainability and all of the other qualities that make or break long-term success. The specific design as well as the unique richness of powerful features of the Robot Framework will give automators a head start when it comes to creating such code.

Of course, this 'adage' of intelligent and adept usage is true for any kind of framework that may be used or applied in the course of a software product's life cycle.

Part 3 of this series will go into the specific implementation of the keyword-driven approach by the Robot Framework.

FitNesse in your IDE

Wed, 02/03/2016 - 17:10

FitNesse has been around for a while. The tool was created by Uncle Bob back in 2001. It’s centered around the idea of collaboration: collaboration within a (software) engineering team and with your non-programmer stakeholders. FitNesse tries to achieve that by making it easy for non-programmers to participate in the writing of specifications, examples and acceptance criteria. It can be launched as a wiki web server, which makes it accessible to basically everyone with a web browser.

The key feature of FitNesse is that it allows you to verify the specs against the actual application: the System Under Test (SUT). This means that you have to make the documentation executable. FitNesse considers tables to be executable. When you read ordinary documentation you’ll find that requirements and examples are often outlined in tables, so this makes for a natural fit.

There is no such thing as magic, so the link between the documentation and the SUT has to be created. That’s where things become tricky. The documentation lives in our wiki server, but code (that’s what we require to connect documentation and SUT) lives on the file system, in an IDE. What to do? Read a wiki page, remember the class and method names, switch to IDE, create classes and methods, compile, switch back to browser, test, and repeat? Well, so much for fast feedback! When you talk to programmers, you’ll find this to be the biggest problem with FitNesse.

Imagine, as a programmer, you're about to implement an acceptance test defined in FitNesse. With a single click, a fixture class is created and adding fixture methods is just as easy. You can easily jump back and forth between the FitNesse page and the fixture code. Running the test page is as simple as hitting a key combination (Ctrl-Shift-R comes to mind). You can set breakpoints, step through code with ease. And all of this from within the comfort of your IDE.

Acceptance test and BDD tools, such as Cucumber and Concordion, have IDE plugins to cater for that, but for FitNesse this support was lacking. Was lacking! Such a plugin is finally available for IntelliJ.


Over the last couple of months, a lot of effort has been put into building this plugin. It’s available from the JetBrains plugin repository, simply named FitNesse. The plugin is tailored for Slim test suites, but also works fine with Fit tables. All table types are supported. References between scripts, decision tables and scenarios work seamlessly. Running FitNesse test pages is as simple as running a unit test. The plugin automatically finds FitNesseRoot based on the default Run configuration.

The current version (1.4.3) even has (limited) refactoring support: renaming Java fixture classes and methods will automatically update the wiki pages.
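To give an idea of what such a fixture looks like, here is a minimal, hypothetical Slim decision-table fixture (class and column names are made up for illustration): for each row in the wiki table, FitNesse calls the setters for the input columns and the no-argument methods for the output columns.

package com.example.fixtures;

public class DiscountRules {

    private double orderTotal;

    // Called for the "order total" input column.
    public void setOrderTotal(double orderTotal) {
        this.orderTotal = orderTotal;
    }

    // Called for the "discount percentage?" output column.
    public double discountPercentage() {
        // Assumed business rule, purely for illustration.
        return orderTotal >= 75 ? 5.0 : 0.0;
    }
}

With the plugin, creating such a class and jumping between it and the corresponding wiki table is a matter of a few keystrokes.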

Feel free to explore the new IntelliJ plugin for FitNesse and let me know what you think!

(GitHub: https://github.com/gshakhn/idea-fitnesse)