
Software Development Blogs: Programming, Software Testing, Agile Project Management


Project vs product teams

Actively Lazy - Wed, 01/18/2017 - 22:45

One of the hardest things for companies trying to be agile is how to structure teams. Back in the bad old days, teams would form around a project. Then six months later, everyone would disperse and move onto new teams. By the time a team has formed and become effective, it is ripped apart again. You get no sense of ownership, no continuity.


Nowadays everyone knows that projects are bad; you need scrum teams instead. So a scrum team is formed, with a product owner to prioritise the work. But what often happens is that what gets prioritised onto the backlog is a project in bite-size pieces. For example, I saw one team that ran out of work to do. The backlog was empty because, except for bugs, none of the outstanding projects had been signed off. There’s that word again. Project.

Behind the scenes, a scrum team often becomes a slightly better way of delivering projects. You get the benefits of team consistency and continuity, and the added benefit that the business can carry on thinking of the work in terms of projects. The downside of this approach is that the scrum team can lack clear focus: there’s no overarching goal for the team. From sprint to sprint the focus might change as the relative importance of different projects changes. This makes it hard for the team to feel committed to a big idea, to some greater purpose. It ends up as an endless procession through the backlog.

Why does this happen? I think it comes down to money. Somebody, somewhere is watching the money. Somebody wants to know “if I spend £x here, how much am I going to make back, and by when?” The idea of the project is very easy to fit into this model. The team costs £x per day. The project is estimated to take n days. It’s expected to deliver £y profit. From this we can calculate the expected return on our investment. The trouble is, most of these numbers are entirely made up – if not fundamentally unknowable.

Let’s start with the obvious one: how long is the project going to take? Really – do we still ask this question? Have we learnt nothing from agile? It seems not: many, many people still think about the world in terms of delivery dates and certainties. When will we learn that the best way is always to deliver a little, inspect the results, then decide whether to keep on the same path or deliver something different? You can’t have an end date with this approach – it’s not even meaningful. Keep on delivering one thing until there’s something better you could be doing, then go do that. Rinse, repeat.

What about the other question: how much profit will this project make? Well, let’s assume for now that the entire project, as originally conceived, will actually be delivered (as if this ever actually happens in software). Can you tell how much money it’s made you? Really? Independent of every other change that the organisation has made at the same time? From software to operations to marketing?

Sometimes you can come up with a good estimate of expected returns, but often it’s just a pipe dream. If you’re vigorously disagreeing with me, I assume you’re religiously tracking actual costs and returns and feeding them back into future project planning? I have seen very, very few companies actually do this. If you’re not measuring how much you actually made from a project, how do you know your original estimates were any good?

So we have two made-up numbers, both almost certainly unachievable in practice – but we use them to dictate the team’s priority order. I once saw a project signed off and jump to the top of the priority order because it predicted something like a 10% uplift in revenue for the company. This was a very large number for a single project and clearly ridiculous to everyone involved, but it was signed off and duly implemented. Revenue projections later that year were revised downwards, and downwards again, due to difficult market conditions. And some blatant over-estimation. And yet this non-science is what passes for return-on-investment planning in all-too-many organisations.

What’s the alternative? The best teams I’ve seen have been structured around products. Give the team complete ownership of one or more products. Any and all changes to those products go via the product team. A product owner guides product direction. As an area expert, they are entrusted to decide what are the most important things to work on. They can discuss long-term directions with the team and have a consistent, coherent vision for how the product will evolve. While, inevitably, some changes are large and sufficiently inter-dependent that they become a project (if one part is delivered then it all must be), the team understands the business benefit of the solution and can evolve the implementation to meet the underlying business need, instead of trying to satisfy some arbitrary internal project deadline. This gives teams the complete freedom to inspect and adapt each iteration. With an understanding of the business priorities for their products, they can make sensible trade-offs as each iteration surfaces more information.

What about the money? It’s hard, but let’s be honest about it: return on investment is not clear with the project model of software delivery, so accept that it isn’t clear. The hard thing is working out which products are making you money and which could make more money if more was invested. The trouble is I’ve worked in teams where, honestly, the product was so profitable with so little scope for uplift that the most cost-effective thing to do would have been to fire the dev team and just keep milking the cash cow.

So how can we decide where to spend our money? I think the empirical model of agile could fit here perfectly well. Let’s assume for a minute that the amount of money you have for the delivery team as a whole is fixed – your only choice is where to put it. How much to spend on product A vs how much on product B. Can you estimate how much money each product is making for the business? How is it changing over time?

If one product is making more profit each month – if it’s a growing product – then invest more resources there, to accelerate the growth. If a product is slowing down, with smaller increases in profit each month, or even with profit decreasing, then stop spending so much money on it. This naturally means that your money goes where it seems to be delivering the biggest return.

The hardest thing with this is that it takes time to get the feedback: changing resource allocation could take months to show up on the bottom line. But at least we’re being honest about the impact our decisions have. Instead of trying to micro-manage delivery via projects, manage where resources are put and let the product owner manage the priority order.


Categories: Programming, Testing & QA

Cross-functional teams

Actively Lazy - Tue, 01/10/2017 - 22:16

Cross-functional teams aren’t a new idea. And yet, somehow, we still don’t seem to have got the memo.

I was listening to the excellent Scott Hanselman’s podcast “Hanselminutes” last week; he had Angie Jones on to talk about automation. Among all the great advice around ensuring that automation is a first-class citizen in your development process, one thing stood out for me:

you need to get your automation engineers into your scrum team

I don’t know why, but it surprised me. Are there really companies out there up to speed enough to be doing test automation, yet so behind the times with agile that they think it’s a good idea to have a dedicated team of automation engineers, removed from the rest of the dev team?

Cross-functional teams are a pretty central idea to agile – breaking down silos and ensuring that everyone that is required to produce an increment of working software is aligned and working together. It’s certainly not a new idea, but it’s clearly an idea that we’re still struggling to absorb.

But then, looking back, I remember working for a certain large company that decided they needed to “do more test automation”. So they hired a room full of automation engineers, who sat two floors away from the dev team, in a room hidden away (we honestly didn’t know they were even there for weeks, maybe even months). This team were responsible for creating an automated test pack for the application (rather than use the one the test automation engineers in the scrum teams had been working on for the last few years). But… they weren’t even talking to the scrum teams. So they were constantly chasing to keep up. As you can imagine, hilarity ensued. I say hilarity. Arguments, really. Then anger. Eventually laughter as we realised all this effort was wasted because the scrum teams wouldn’t – in fact couldn’t – support this new automation code.

Clearly, not getting the idea of cross-functional teams is an age-old problem. Compare this to a more recent client of mine – one that had a genuinely cross-functional team. Not only did the scrum team have its own automation engineer, the developers (actual developers, not re-branded testers) were encouraged to work on the test automation tools – to everyone’s benefit. Test tooling was written to the same standards as production code, with the insight and experience of the test automation specialist. This is moving beyond cross-functional teams into cross-skilled teams. Not only is every skill set you need within the same scrum team, but each individual can have multiple skills, taking on multiple roles.

Sure, you still need specialists. But with generalizing specialists you get the best of both worlds: the experience of specialists in their area, with the flexibility and breadth of ideas that come from the whole team being able to work on whatever is required. When the entire team can swarm on any area you have a very flexible team: if we need a big push for test automation, the entire team can focus on it. Similarly, with plenty of pairing and rotation, everyone on the team will see every area and every role, allowing everyone’s unique perspective to improve the product and the process.

But then, a counter-example, the same client suffered from another age-old silo: operations. I thought devops had killed this silo, but it seems not. If the scrum team can’t release an increment of software to actual users then it isn’t a fully self-contained, self-sufficient, cross-functional team. A scrum team working with a separate test automation team seems like a crazy idea; and yet, somehow, a scrum team working with a separate operations team is much more normal, much more accepted. But it’s the exact same problem: if you don’t have everyone you need in the same scrum team then you’re going to get bottlenecks. You’re going to get communication problems. You’re going to get a “them-vs-us” attitude.

Every time I’ve come up against this the typical argument against operations staff being embedded within scrum teams is that they’re not working on “your stuff” all the time, so the rest of the time they’d be busy doing other stuff, unrelated to “your team” or they’d be bored. Well, maybe if we freed up that extra capacity we could release more often? Maybe they could be working on making it quicker, easier, safer to release more often? Maybe they could be more deeply involved with development when we’re making decisions which affect what they’re going to release and how it’s going to wake them up in the middle of the night. Maybe they could even help with other, non-production environments? The neglected, little siblings of production that every company seems to struggle to pay enough attention to.

Maybe, even, over time the team evolves from having the operations specialist to having team members cross-skilled into operations. Under the watchful eye of the specialist could we, shock horror, let testers touch production? Could the BA manage a release? In some industries this is completely impossible for regulatory reasons, but in all the others it’s “impossible” for merely arbitrary reasons.

Breaking down silos is never easy – but I think it’s an interesting reflection of how far we’ve come that some silos seem frankly ridiculous now, while others just seem old-fashioned. I still hold out hope for the distant dream of genuinely cross-functional teams. Whenever I’ve seen this actually happen, the lack of bottlenecks and miscommunication makes everything so much faster, so much easier. In the end a cross-functional team is better than silos. But a cross-skilled team is better still, if you can manage it.


Categories: Programming, Testing & QA

Scrum & Tests Refactoring in Methods & Tools Winter 2016 issue

From the Editor of Methods & Tools - Mon, 01/09/2017 - 13:36
Methods & Tools – the free e-magazine for software developers, testers and project managers – has published its Winter 2016 issue that discusses Better Retrospectives, Refactoring Tests, Delivering Scrum Projects and the following free software tools: doctest, MarkupKit. Methods & Tools Winter 2016 issue content:
* Making Sprint Retrospectives Work
* Embracing the Red Bar: […]

Downsizing society

Actively Lazy - Thu, 01/05/2017 - 21:38

The world is becoming increasingly automated. Jobs that were once done by people are now frequently done by machines instead. This started off in manufacturing, but the coming of self-driving cars will put huge numbers of people out of work; even lawyers are being replaced by machines. Some reports suggest that machines could do 50% of jobs within the next 30 years. Fifty percent! What will we do when half the population have been made redundant? How will we cope with such a restructuring of society?

To an economist, a job is an income. To a human being, it is much more than that. It provides a sense that you matter in society, that people beyond your family rely on you and even admire you.

If 50% of jobs are being done by machines, the question is what will people do instead? This is an important question, because we identify with our jobs. Our jobs define us. Let me introduce you to Louise; she’s a doctor. And this is Barry; he’s an estate agent. I bet you have different ideas of who those two people are. What about when they’re both unemployed? Unemployable. Made redundant from society.

How would so many people survive without jobs? Where’s the money to live coming from? The only possible answer seems to be some kind of universal basic income (UBI): the idea that everyone, employed or not, receives a fixed amount of money each month from the government – replacing all existing benefits and all the bureaucracy involved in administering them. With a UBI, the 50% of people without jobs would at least have some money with which to buy food and heating. But who wants to live on handouts? Who wants to be defined as unemployed? Even if the majority of people are in the same situation.

This inevitably results in a two-tier society: those that earn little to nothing on top of the basic income; and those increasingly rare few that still have jobs the machines aren’t able to do, yet. We have the non-working class, and a diminishing middle class. How can this cause anything other than resentment, envy and anger?

This is to say nothing of the challenges of funding a universal basic income. While ideas like funding it through a wealth tax make a lot of sense, could they ever succeed? Would the wealthy half ever vote for it? Would the big businesses (and their very wealthy owners) that bankroll governments stand for it?

As Brexit and Trump have shown, voters can vote for what previously seemed impossible. And we’ve barely scratched the surface of the anger and disenfranchisement that automation will bring upon us. However, with both Brexit and Trump people voted out of hope. Hope for something better. For respect. For status. For jobs. Could Farage evoke such a passionate response from a platform of “more handouts”? Of massive wealth redistribution, the likes of which no socialist government has ever even proposed? It seems improbable.

If we don’t introduce a UBI, what’s going to happen? The jobs that are left aren’t going to be spread evenly. London will always have disproportionately more jobs and lower unemployment than, say, Sunderland or Hull. What’s going to happen in these areas as unemployment becomes endemic? Rising anger seems inevitable. Riots. A public whipped into a frenzy of hatred against some group of “others”?

With so much anger, extreme politics is only going to increase. Does a majority of the public already feel left behind? Made redundant by automation? So far this has given us Brexit and Trump. It’s only going to get worse. Who are we going to wage war on? Syria? ISIS? China? Germany? This is the armageddon outcome. So many people feel so disenfranchised that we inadvertently start world war three.

What other outcomes are there? Another possibility is some kind of neo-luddite movement railing against technology. So far everyone’s anger at disappearing jobs has been directed at migrants. When people realise it’s actually technology taking their jobs, could that anger end up directed at technology instead? You can imagine someone like Farage standing against technology. Of banning automation. Of holding back progress to protect jobs. While this couldn’t hold back the tide forever, it could delay the inevitable march of technology for some years.

The flip-side of this neo-luddite revolution is the inevitable flight of technology from such a regime. With the future being held back in one place, technologists will move somewhere else; to somewhere innovation is encouraged instead of criminalised. Some enterprising country, say Switzerland, would reduce taxes for technology companies to encourage them to relocate. The remaining jobs, and the people to do them, all move to an employment enclave. This exacerbates the split in society: between the wealthy employed few, and the many under-employed poor. This is the “Switzerland outcome”. A physical as well as economic split in society.

Of course, there is always the possibility that the jobs that disappear are replaced by new jobs. Jobs in manual labour become replaced with jobs like “social media consultant” that would have been unimaginable in Victorian England (some would say still are). But the signs aren’t good: in the wake of the 2007 financial crisis jobs aren’t returning in great numbers – they’re being done by robots instead. This might be the first economic recovery in history that hasn’t been accompanied by an increase in employment.

Eventually we’re bound to arrive at a universal basic income, or something like it. It might take a very long time – a time in which the social strife could be immense. But eventually, barring armageddon, we will have to find a way to move to a post-work society; that means finding a way for people without work to live happy, fulfilled lives.

But in a life without work, how will people find meaning in their lives? With the ready availability of automation, some people might move back onto the land – to become 21st century peasant farmers. With the help of machines it could be quite possible for a family to provide for itself and maintain a decent standard of living through farming alone. Of course, with our heavily urbanised society plenty of people have neither the space nor the wherewithal to do this; but for some this could be a good alternative to the slums from where jobs have vanished.

The ready availability of time frees people up for any project they wish to embark on. The utopian outcome is enabling everyone to become creative, to embark on personal projects and self-expression. A society full of people doing what people do best: being human and creative. Is this realistic? Some people would relish the opportunity to dedicate themselves to creative pursuits. But plenty of others would not, instead looking for the structure and security their jobs used to offer.

Could this usher in an era of grand projects? Where people band together to achieve some lofty goal? Not for financial compensation, but for the joy of being part of something bigger. With everyone’s basic needs met, instead of working for money people look for meaning. They will take part in activities that inspire them, for free. The cost of labour at this point is basically zero, for the first time in human history. The only constraint: the end goal has to inspire people. Improving the efficiency of a government department? Hardly. Flying to Mars? Almost certainly.

But there will still be plenty of people who feel they can’t contribute but need structure in their life. Iain M. Banks’ Culture series describes a post-work society where the machines run everything; but in Banks’ universe people dedicate themselves full-time to leisure. While no doubt attractive to some, this life seems without structure, without purpose. Can people really live like this? Work has defined us, given structure to our day, given us meaning. Without this, who are we?

It is likely that the end result is some mixture of all of these outcomes. Each individual finding their own way in an increasingly diverse world – the simple answer of finding a job, any job, is no longer possible; instead people are forced to find their own answers to hard questions: what am I going to do with my life?

As we approach this future, what are we to do? How can we prepare for the upcoming earthquake in society? It seems inevitable that this will be a tumultuous time. How can we smooth the transition to a post-work society?


Categories: Programming, Testing & QA

Self-Healing Systems

From the Editor of Methods & Tools - Wed, 01/04/2017 - 16:33
We can think of a whole computer system as a human body that consists of cells of various types. They can be hardware or software. When they are software units, the smaller they are, the easier it is for them to self-heal, recuperate from failures, multiply or even get destroyed when that is needed. We […]

Software Development Conferences Forecast December 2016

From the Editor of Methods & Tools - Tue, 12/27/2016 - 08:41
Here is a list of software development related conferences and events on Agile project management (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

Blockchain for Software Developers

From the Editor of Methods & Tools - Tue, 12/20/2016 - 09:18
In a lot of software developer conferences, there are talks about the technical aspects of blockchains, how to develop smart contracts on top of Ethereum and things like that. But before looking at those, it is crucial to take a step back and understand what the blockchain is, what it brings to the table […]

Software Development Linkopedia December 2016

From the Editor of Methods & Tools - Wed, 12/14/2016 - 12:21
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about project management personalities, better teams, starting a new job, code reviews, agile testing, scaling Agile, IoT and tests quality. Blog: Implementers, Solvers, and Finders Blog: Giving better code reviews Blog: […]

Quote of the Month December 2016

From the Editor of Methods & Tools - Mon, 12/12/2016 - 15:48
Experience shows that architecting is not something that’s performed once, early in a project. Rather, architecting is applied over the life of the project; the architecture is grown through the delivery of a series of incremental and iterative deliveries of executable software. At each delivery, the architecture becomes more complete and stable, which raises the […]

The Impostor Software Developer Syndrome

From the Editor of Methods & Tools - Wed, 12/07/2016 - 17:50
Did you ever feel like a fraud as a software developer? Have the feeling that at some point, someone is going to find out that you really don’t belong where you are? That you are not as smart as other people think? You are not alone with this; many high-achieving people suffer from the imposter […]

Using Helm to install Traefik as an Ingress Controller in Kubernetes

Agile Testing - Grig Gheorghiu - Tue, 12/06/2016 - 23:15
That was a mouthful of a title... Hope this post lives up to it :)

First of all, just a bit of theory. If you want to expose your application running on Kubernetes to the outside world, you have several choices.

One choice you have is to expose the pods running your application via a Service of type NodePort or LoadBalancer. If you run your service as a NodePort, Kubernetes will allocate a random high port on every node in the cluster, and it will proxy traffic to that port to your service. Services of type LoadBalancer are only supported if you run your Kubernetes cluster using certain specific cloud providers such as AWS and GCE. In this case, the cloud provider will create a specific load balancer resource, for example an Elastic Load Balancer in AWS, which will then forward traffic to the pods comprising your service. Either way, the load balancing you get by exposing a service is fairly crude, at the TCP layer and using a round-robin algorithm.
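
For illustration, a minimal NodePort Service manifest might look like this (the names, labels and ports here are hypothetical, not taken from the cluster described in this post):

apiVersion: v1
kind: Service
metadata:
  name: web                # hypothetical service name
  namespace: tenant1
spec:
  type: NodePort
  selector:
    app: web               # must match the labels on your application pods
  ports:
  - port: 80               # port exposed on the service's cluster IP
    targetPort: 8080       # port the application container listens on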

A better choice for exposing your Kubernetes application is to use Ingress resources together with Ingress Controllers. An ingress resource is a fancy name for a set of layer 7 load balancing rules, as you might be familiar with if you use HAProxy or Pound as a software load balancer. An Ingress Controller is a piece of software that actually implements those rules by watching the Kubernetes API for requests to Ingress resources. Here is a fragment from the Ingress Controller documentation on GitHub:

What is an Ingress Controller?

An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the ApiServer's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for ingress.
Writing an Ingress Controller

Writing an Ingress controller is simple. By way of example, the nginx controller does the following:
  • Poll until apiserver reports a new Ingress
  • Write the nginx config file based on a go text/template
  • Reload nginx
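
To make this concrete, a plain (non-templated) Ingress resource of the kind an Ingress Controller watches looks something like this – a minimal sketch with hypothetical names, using the extensions/v1beta1 API that was current at the time:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  namespace: tenant1
spec:
  rules:
  - host: tenant1.dev.mydomain.com    # route requests for this host...
    http:
      paths:
      - path: /
        backend:
          serviceName: web            # ...to this Kubernetes service
          servicePort: 80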
As I mentioned in a previous post, I warmly recommend watching a KubeCon presentation from Gerred Dillon on "Kubernetes Ingress: Your Router, Your Rules" if you want to further delve into the advantages of using Ingress Controllers as opposed to plain Services.
While nginx is the only software currently included in the Kubernetes source code as an Ingress Controller, I wanted to experiment with a full-fledged HTTP reverse proxy such as Traefik. I should add from the beginning that only nginx offers the TLS feature of Ingress resources. Traefik can terminate SSL of course, and I'll show how you can do that, but it is outside of the Ingress resource spec.

I've also been looking at Helm, the Kubernetes package manager, and I noticed that Traefik is one of the 'stable' packages (or Charts as they are called) currently offered by Helm, so I went the Helm route in order to install Traefik. In the following instructions I will assume that you are already running a Kubernetes cluster in AWS and that your local kubectl environment is configured to talk to that cluster.

Install Helm

This is pretty easy. Follow the instructions on GitHub to download or install a binary for your OS.

Initialize Helm

Run helm init in order to install the server component of Helm, called tiller, which will be run as a Kubernetes Deployment in the kube-system namespace of your cluster.
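
As a quick sanity check, you should then see a deployment named tiller-deploy (the name used by Helm 2.x) when listing deployments in that namespace:

# kubectl get deployments --namespace kube-system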

Get the Traefik Helm chart from GitHub

I git cloned the entire kubernetes/charts repo, then copied the traefik directory locally under my own source code repo which contains the rest of the yaml files for my Kubernetes resource manifests.

# git clone https://github.com/kubernetes/charts.git helmcharts
# cp -r helmcharts/stable/traefik traefik-helm-chart
It is instructive to look at the contents of a Helm chart. The main advantage of a chart in my view is the bundling together of all the Kubernetes resources necessary to run a specific set of services. The other advantage is that you can use Go-style templates for the resource manifests, and the variables in those template files can be passed to helm via a values.yaml file or via the command line.
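
For example, you could install a chart with values overridden from a file, or override individual values straight from the command line (flags as provided by Helm 2.x):

# helm install traefik-helm-chart/ --values custom-values.yaml
# helm install traefik-helm-chart/ --set dashboard.enabled=false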
For more details on Helm charts and templates, I recommend this linux.com article.
Create an Ingress resource for your application service
I copied the dashboard-ingress.yaml template file from the Traefik chart and customized it so as to refer to my application's web service, which is running in a Kubernetes namespace called tenant1.

# cd traefik-helm-chart/templates
# cp dashboard-ingress.yaml web-ingress.yaml
# cat web-ingress.yaml
{{- if .Values.tenant1.enabled }}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: {{ .Values.tenant1.namespace }}
  name: {{ template "fullname" . }}-web-ingress
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  rules:
  - host: {{ .Values.tenant1.domain }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .Values.tenant1.serviceName }}
          servicePort: {{ .Values.tenant1.servicePort }}
{{- end }}
The variables referenced in the template above are defined in the values.yaml file in the Helm chart. I started with the variables in the values.yaml file that came with the Traefik chart and added my own customizations:
# vi traefik-helm-chart/values.yaml
ssl:
  enabled: true
acme:
  enabled: true
  email: admin@mydomain.com
  staging: false
  # Save ACME certs to a persistent volume. WARNING: If you do not do this, you will re-request
  # certs every time a pod (re-)starts and you WILL be rate limited!
  persistence:
    enabled: true
    storageClass: kubernetes.io/aws-ebs
    accessMode: ReadWriteOnce
    size: 1Gi
dashboard:
  enabled: true
  domain: tenant1-lb.dev.mydomain.com
gzip:
  enabled: false
tenant1:
  enabled: true
  namespace: tenant1
  domain: tenant1.dev.mydomain.com
  serviceName: web
  servicePort: http
Note that I added a section called tenant1, where I defined the variables referenced in the web-ingress.yaml template above. I also enabled the ssl and acme sections, so that Traefik can automatically install SSL certificates from Let's Encrypt via the ACME protocol.
Install your customized Helm chart for Traefik
With these modifications done, I ran 'helm install' to actually deploy the various Kubernetes resources included in the Traefik chart. 
I specified the directory containing my Traefik chart files (traefik-helm-chart) as the last argument passed to helm install:
# helm install --name tenant1-lb --namespace tenant1 traefik-helm-chart/
NAME: tenant1-lb
LAST DEPLOYED: Tue Nov 29 09:51:12 2016
NAMESPACE: tenant1
STATUS: DEPLOYED

RESOURCES:
==> extensions/Ingress
NAME                             HOSTS                         ADDRESS   PORTS     AGE
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80        1s
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80        0s

==> v1/PersistentVolumeClaim
NAME                      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
tenant1-lb-traefik-acme   Pending                                      0s

==> v1/Secret
NAME                              TYPE      DATA      AGE
tenant1-lb-traefik-default-cert   Opaque    2         1s

==> v1/ConfigMap
NAME                 DATA      AGE
tenant1-lb-traefik   1         1s

==> v1/Service
NAME                           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
tenant1-lb-traefik-dashboard   10.3.0.15    <none>        80/TCP           1s
tenant1-lb-traefik             10.3.0.215   <pending>     80/TCP,443/TCP   1s

==> extensions/Deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tenant1-lb-traefik   1         1         1            0           1s

NOTES:
1. Get Traefik's load balancer IP/hostname:
    NOTE: It may take a few minutes for this to become available.
    You can watch the status by running:
        $ kubectl get svc tenant1-lb-traefik --namespace tenant1 -w
    Once 'EXTERNAL-IP' is no longer '<pending>':
        $ kubectl describe svc tenant1-lb-traefik --namespace tenant1 | grep Ingress | awk '{print $3}'
2. Configure DNS records corresponding to Kubernetes ingress resources to point to the load balancer IP/hostname found in step 1
At this point you should see two Ingress resources, one for the Traefik dashboard and one for the custom web ingress resource:
# kubectl --namespace tenant1 get ingress
NAME                             HOSTS                         ADDRESS   PORTS     AGE
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80        50s
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80        51s
As per the Helm notes above (shown as part of the output of helm install), run this command to figure out the CNAME of the AWS ELB created by Kubernetes during the creation of the tenant1-lb-traefik service of type LoadBalancer:
# kubectl describe svc tenant1-lb-traefik --namespace tenant1 | grep Ingress | awk '{print $3}'
a5be275d8b65c11e685a402e9ec69178-91587212.us-west-2.elb.amazonaws.com
Create tenant1.dev.mydomain.com and tenant1-lb.dev.mydomain.com as DNS CNAME records pointing to a5be275d8b65c11e685a402e9ec69178-91587212.us-west-2.elb.amazonaws.com.
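
In BIND zone-file notation (purely illustrative – create these however your DNS provider allows), those two records would look something like:

tenant1.dev.mydomain.com.     300  IN  CNAME  a5be275d8b65c11e685a402e9ec69178-91587212.us-west-2.elb.amazonaws.com.
tenant1-lb.dev.mydomain.com.  300  IN  CNAME  a5be275d8b65c11e685a402e9ec69178-91587212.us-west-2.elb.amazonaws.com.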

Now, if you hit http://tenant1-lb.dev.mydomain.com you should see the Traefik dashboard showing the frontends on the left and the backends on the right:
[Screenshot: Traefik dashboard, frontends on the left and backends on the right]

If you hit http://tenant1.dev.mydomain.com you should see your web service in action.
You can also inspect the logs of the tenant1-lb-traefik pod to see what's going on under the covers when Traefik is launched and to verify that the Let's Encrypt SSL certificates were properly downloaded via ACME:
# kubectl --namespace tenant1 logs tenant1-lb-traefik-3710322105-o2887
time="2016-11-29T00:03:51Z" level=info msg="Traefik version v1.1.0 built on 2016-11-18_09:20:46AM"
time="2016-11-29T00:03:51Z" level=info msg="Using TOML configuration file /config/traefik.toml"
time="2016-11-29T00:03:51Z" level=info msg="Preparing server http &{Network: Address::80 TLS:<nil> Redirect:<nil> Auth:<nil> Compress:false}"
time="2016-11-29T00:03:51Z" level=info msg="Preparing server https &{Network: Address::443 TLS:0xc4201b1800 Redirect:<nil> Auth:<nil> Compress:false}"
time="2016-11-29T00:03:51Z" level=info msg="Starting server on :80"
time="2016-11-29T00:03:58Z" level=info msg="Loading ACME Account..."
time="2016-11-29T00:03:59Z" level=info msg="Loaded ACME config from store /acme/acme.json"
time="2016-11-29T00:04:01Z" level=info msg="Starting provider *main.WebProvider {\"Address\":\":8080\",\"CertFile\":\"\",\"KeyFile\":\"\",\"ReadOnly\":false,\"Auth\":null}"
time="2016-11-29T00:04:01Z" level=info msg="Starting provider *provider.Kubernetes {\"Watch\":true,\"Filename\":\"\",\"Constraints\":[],\"Endpoint\":\"\",\"DisablePassHostHeaders\":false,\"Namespaces\":null,\"LabelSelector\":\"\"}"
time="2016-11-29T00:04:01Z" level=info msg="Retrieving ACME certificates..."
time="2016-11-29T00:04:01Z" level=info msg="Retrieved ACME certificates"
time="2016-11-29T00:04:01Z" level=info msg="Starting server on :443"
time="2016-11-29T00:04:01Z" level=info msg="Server configuration reloaded on :80"
time="2016-11-29T00:04:01Z" level=info msg="Server configuration reloaded on :443"
To get an even better warm and fuzzy feeling about the SSL certificates installed via ACME, you can run this command against the live endpoint tenant1.dev.mydomain.com:
# echo | openssl s_client -showcerts -servername tenant1.dev.mydomain.com -connect tenant1.dev.mydomain.com:443 2>/dev/null
CONNECTED(00000003)
---
Certificate chain
 0 s:/CN=tenant1.dev.mydomain.com
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
-----BEGIN CERTIFICATE-----
MIIGEDCCBPigAwIBAgISAwNwBNVU7ZHlRtPxBBOPPVXkMA0GCSqGSIb3DQEBCwUA
-----END CERTIFICATE-----
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
-----BEGIN CERTIFICATE-----
uM2VcGfl96S8TihRzZvoroed6ti6WqEBmtzw3Wodatg+VyOeph4EYpr/1wXKtx8/KOqkqm57TH2H3eDJAkSnh6/DNFu0Qg==
-----END CERTIFICATE-----
---
Server certificate
subject=/CN=tenant1.dev.mydomain.com
issuer=/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
---
No client certificate CA names sent
---
SSL handshake has read 3009 bytes and written 713 bytes
---
New, TLSv1/SSLv3, Cipher is AES128-SHA
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : AES128-SHA
    Start Time: 1480456552
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
etc.
Other helm commands
You can list the Helm releases that are currently running (a Helm release is a particular versioned instance of a Helm chart) with helm list:
# helm list
NAME         REVISION   UPDATED                    STATUS     CHART
tenant1-lb   1          Tue Nov 29 10:13:47 2016   DEPLOYED   traefik-1.1.0-a

If you change any files or values in a Helm chart, you can apply the changes by means of the 'helm upgrade' command:

# helm upgrade tenant1-lb traefik-helm-chart
You can see the status of a release with helm status:
# helm status tenant1-lb
LAST DEPLOYED: Tue Nov 29 10:13:47 2016
NAMESPACE: tenant1
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                           CLUSTER-IP   EXTERNAL-IP        PORT(S)          AGE
tenant1-lb-traefik             10.3.0.76    a92601b47b65f...   80/TCP,443/TCP   35m
tenant1-lb-traefik-dashboard   10.3.0.36    <none>             80/TCP           35m

==> extensions/Deployment
NAME                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tenant1-lb-traefik   1         1         1            1           35m

==> extensions/Ingress
NAME                             HOSTS                         ADDRESS   PORTS     AGE
tenant1-lb-traefik-web-ingress   tenant1.dev.mydomain.com                80        35m
tenant1-lb-traefik-dashboard     tenant1-lb.dev.mydomain.com             80        35m

==> v1/PersistentVolumeClaim
NAME                      STATUS   VOLUME                                     CAPACITY   ACCESSMODES   AGE
tenant1-lb-traefik-acme   Bound    pvc-927df794-b65f-11e6-85a4-02e9ec69178b   1Gi        RWO           35m

==> v1/Secret
NAME                              TYPE      DATA      AGE
tenant1-lb-traefik-default-cert   Opaque    2         35m

==> v1/ConfigMap
NAME                 DATA      AGE
tenant1-lb-traefik   1         35m




Software Development Conferences Forecast November 2016

From the Editor of Methods & Tools - Tue, 11/29/2016 - 16:00
Here is a list of software development related conferences and events on Agile project management (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

Rethinking Equivalence Class Partitioning, Part 1

James Bach’s Blog - Sun, 11/27/2016 - 13:41

Wikipedia’s article on equivalence class partitioning (ECP) is a great example of the poor thinking and teaching and writing that often passes for wisdom in the testing field. It’s narrow and misleading, serving to imply that testing is some little game we play with our software, rather than an open investigation of a complex phenomenon.

(No, I’m not going to edit that article. I don’t find it fun or rewarding to offer my expertise in return for arguments with anonymous amateurs. Wikipedia is important because it serves as a nearly universal reference point when criticizing popular knowledge, but just like popular knowledge itself, it is not fixable. The populus will always prevail, and the populus is not very thoughtful.)

In this article I will comment on the Wikipedia post. In a subsequent post I will describe ECP my way, and you can decide for yourself if that is better than Wikipedia.

“Equivalence partitioning or equivalence class partitioning (ECP)[1] is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived.”

Not exactly. There’s no reason why ECP should be limited to “input data” as such. The ECP thought process may be applied to output, or even versions of products, test environments, or test cases themselves. ECP applies to anything you might be considering to do that involves any variations that may influence the outcome of a test.

Yes, ECP is a technique, but a better word for it is “heuristic.” A heuristic is a fallible method of solving a problem. ECP is extremely fallible, and yet useful.

“In principle, test cases are designed to cover each partition at least once. This technique tries to define test cases that uncover classes of errors, thereby reducing the total number of test cases that must be developed.”

This text is pretty good. Note the phrase “In principle” and the use of the word “tries.” These are softening words, which are important because ECP is a heuristic, not an algorithm.

Speaking in terms of “test cases that must be developed,” however, is a misleading way to discuss testing. Testing is not about creating test cases. It is for damn sure not about the number of test cases you create. Testing is about performing experiments. And the totality of experimentation goes far beyond such questions as “what test case should I develop next?” The text should instead say “reducing test effort.”

“An advantage of this approach is reduction in the time required for testing a software due to lesser number of test cases.”

Sorry, no. The advantage of ECP is not in reducing the number of test cases. Nor is it even about reducing test effort, as such (even though it is true that ECP is “trying” to reduce test effort). ECP is just a way to systematically guess where the bigger bugs probably are, which helps you focus your efforts. ECP is a prioritization technique. It also helps you explain and defend those choices. Better prioritization does not, by itself, allow you to test with less effort, but we do want to stumble into the big bugs sooner rather than later. And we want to stumble into them with more purpose and less stumbling. And if we do that well, we will feel comfortable spending less effort on the testing. Reducing effort is really a side effect of ECP.

“Equivalence partitioning is typically applied to the inputs of a tested component, but may be applied to the outputs in rare cases. The equivalence partitions are usually derived from the requirements specification for input attributes that influence the processing of the test object.”

Typically? Usually? Has this writer done any sort of research that would substantiate that? No.

ECP is a process that we all do informally, not only in testing but in our daily lives. When you push open a door, do you consciously decide to push on a specific square centimeter of the metal push plate? No, you don’t. You know that for most doors it doesn’t matter where you push. All pushable places are more or less equivalent. That is ECP! We apply ECP to anything that we interact with.

Yes, we apply it to output. And yes, we can think of equivalence classes based on specifications, but we also think of them based on all other learning we do about the software. We perform ECP based on all that we know. If what we know is wrong (for instance if there are unexpected bugs) then our equivalence classes will also be wrong. But that’s okay, if you understand that ECP is a heuristic and not a golden ticket to perfect testing.

“The fundamental concept of ECP comes from equivalence class which in turn comes from equivalence relation. A software system is in effect a computable function implemented as an algorithm in some implementation programming language. Given an input test vector some instructions of that algorithm get covered, ( see code coverage for details ) others do not…”

At this point the article becomes Computer Science propaganda. This is why we can’t have nice things in testing: as soon as the CS people get hold of it, they turn it into a little logic game for gifted kids, rather than a pursuit worthy of adults charged with discovering important problems in technology before it’s too late.

The fundamental concept of ECP has nothing to do with computer science or computability. It has to do with logic. Logic predates computers. An equivalence class is simply a set. It is a set of things that share some property. The property of interest in ECP is utility for exploring a particular product risk. In other words, an equivalence class in testing is an assertion that any member of that particular group of things would be more or less equally able to reveal a particular kind of bug if it were employed in a particular kind of test.

If I define a “test condition” as something about a product or its environment that could be examined in a test, then I can define equivalence classes like this: An equivalence class is a set of tests or test conditions that are equivalent with respect to a particular product risk, in a particular context. 

This implies that two inputs which are not equivalent for the purposes of one kind of bug may be equivalent for finding another kind of bug. It also implies that if we model a product incorrectly, we will also be unable to know the true equivalence classes. Actually, considering that bugs come in all shapes and sizes, to have the perfectly correct set of equivalence classes would be the same as knowing, without having tested, where all the bugs in the product are. This is because ECP is based on guessing what kind of bugs are in the product.

If you read the technical stuff about Computer Science in the Wikipedia article, you will see that the author has decided that two inputs which cover the same code are therefore equivalent for bug finding purposes. But this is not remotely true! This is a fantasy propagated by people who I suspect have never tested anything that mattered. Off the top of my head, code-coverage-as-gold-standard ignores performance bugs, requirements bugs, usability bugs, data type bugs, security bugs, and integration bugs. Imagine two tests that cover the same code, and both involve input that is displayed on the screen, except that one includes an input which is so long that when it prints it goes off the edge of the screen. This is a bug that the short input didn’t find, even though both inputs are “valid” and “do the same thing” functionally.

The Fundamental Problem With Most Testing Advice Is…

The problem with most testing advice is that it is either uncritical folklore that falls apart as soon as you examine it, or else it is misplaced formalism that doesn’t apply to realistic open-ended problems. Testing advice is better when it is grounded in a general systems perspective as well as a social science perspective. Both of these perspectives understand and use heuristics. ECP is a powerful, ubiquitous, and rather simple heuristic, whose utility comes from and is limited by your mental model of the product. In my next post, I will walk through an example of how I use it in real life.

Categories: Testing & QA

A Whirlwind Tour of the Kotlin Type Hierarchy

Mistaeks I Hav Made - Nat Pryce - Fri, 10/28/2016 - 09:08
Kotlin has plenty of good language documentation and tutorials. But I’ve not found an article that describes in one place how Kotlin’s type hierarchy fits together. That’s a shame, because I find it to be really neat [1]. Kotlin’s type hierarchy has very few rules to learn. Those rules combine together consistently and predictably. Thanks to those rules, Kotlin can provide useful, user extensible language features – null safety, polymorphism, and unreachable code analysis – without resorting to special cases and ad-hoc checks in the compiler and IDE.

Starting from the Top

All types of Kotlin object are organised into a hierarchy of subtype/supertype relationships. At the “top” of that hierarchy is the abstract class Any. For example, the types String and Int are both subtypes of Any. Any is the equivalent of Java’s Object class. Unlike Java, Kotlin does not draw a distinction between “primitive” types, that are intrinsic to the language, and user-defined types. They are all part of the same type hierarchy.

If you define a class that is not explicitly derived from another class, the class will be an immediate subtype of Any.

class Fruit(val ripeness: Double)

If you do specify a base class for a user-defined class, the base class will be the immediate supertype of the new class, but the ultimate ancestor of the class will be the type Any.

abstract class Fruit(val ripeness: Double)
class Banana(ripeness: Double, val bendiness: Double): Fruit(ripeness)
class Peach(ripeness: Double, val fuzziness: Double): Fruit(ripeness)

If your class implements one or more interfaces, it will have multiple immediate supertypes, with Any as the ultimate ancestor.

interface ICanGoInASalad
interface ICanBeSunDried
class Tomato(ripeness: Double): Fruit(ripeness), ICanGoInASalad, ICanBeSunDried

The Kotlin type checker enforces subtype/supertype relationships. For example, you can store a subtype into a supertype variable:

var f: Fruit = Banana(ripeness=0.9, bendiness=0.5)
f = Peach(ripeness=0.7, fuzziness=0.8)

But you cannot store a supertype value into a subtype variable:

val b = Banana(ripeness=0.9, bendiness=0.5)
val f: Fruit = b
val b2: Banana = f // Error: Type mismatch: inferred type is Fruit but Banana was expected

Nullable Types

Unlike Java, Kotlin distinguishes between “non-null” and “nullable” types. The types we’ve seen so far are all “non-null”. Kotlin does not allow null to be used as a value of these types. You’re guaranteed that dereferencing a reference to a value of a “non-null” type will never throw a NullPointerException. The type checker rejects code that tries to use null or a nullable type where a non-null type is expected. For example:

var s : String = null // Error: Null can not be a value of a non-null type String

If you want a value to maybe be null, you need to use the nullable equivalent of the value type, denoted by the suffix ‘?’. For example, the type String? is the nullable equivalent of String, and so allows all String values plus null.

var s : String? = null
s = "foo"
s = null
s = "bar"

The type checker ensures that you never use a nullable value without having first tested that it is not null. Kotlin provides operators to make working with nullable types more convenient. See the Null Safety section of the Kotlin language reference for examples.

When non-null types are related by subtyping, their nullable equivalents are also related in the same way. For example, because String is a subtype of Any, String? is a subtype of Any?, and because Banana is a subtype of Fruit, Banana? is a subtype of Fruit?.
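
As a brief illustration of the null-safety operators mentioned above (a minimal sketch, not part of the language reference): the safe-call operator ?. propagates null instead of throwing a NullPointerException, and the elvis operator ?: substitutes a default.

fun describe(s: String?): String {
    val length: Int? = s?.length       // safe call: null when s is null
    return "length is ${length ?: 0}"  // elvis: use 0 when length is null
}

fun main(args: Array<String>) {
    println(describe("foo"))  // prints: length is 3
    println(describe(null))   // prints: length is 0
}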

Just as Any is the root of the non-null type hierarchy, Any? is the root of the nullable type hierarchy. Because Any? is the supertype of Any, Any? is the very top of Kotlin’s type hierarchy.

A non-null type is a subtype of its nullable equivalent. For example, String, as well as being a subtype of Any, is also a subtype of String?. This is why you can store a non-null String value into a nullable String? variable, but you cannot store a nullable String? value into a non-null String variable. Kotlin’s null safety is not enforced by special rules, but is an outcome of the same subtype/supertype rules that apply between non-null types. This applies to user-defined type hierarchies as well.

Unit

Kotlin is an expression oriented language. All control flow statements (apart from variable assignment, unusually) are expressions. Kotlin does not have void functions, like Java and C. Functions always return a value. Functions that don’t actually calculate anything – being called for their side effect, for example – return Unit, a type that has a single value, also called Unit.

Most of the time you don’t need to explicitly specify Unit as a return type or return Unit from functions. If you write a function with a block body and do not specify the result type, the compiler will treat it as a Unit function. Otherwise the compiler will infer it.

fun example() {
    println("block body and no explicit return type, so returns Unit")
}

val u: Unit = example()

There’s nothing special about Unit. Like any other type, it’s a subtype of Any. It can be made nullable, so is a subtype of Unit?, which is a subtype of Any?. The type Unit? is a strange little edge case, a result of the consistency of Kotlin’s type system. It has only two members: the Unit value and null. I’ve never found a need to use it explicitly, but the fact that there is no special case for “void” in the type system makes it much easier to treat all kinds of functions generically.

Nothing

At the very bottom of the Kotlin type hierarchy is the type Nothing. As its name suggests, Nothing is a type that has no instances. An expression of type Nothing does not result in a value. Note the distinction between Unit and Nothing. Evaluation of an expression of type Unit results in the singleton value Unit. Evaluation of an expression of type Nothing never returns at all. This means that any code following an expression of type Nothing is unreachable. The compiler and IDE will warn you about such unreachable code.

What kinds of expression evaluate to Nothing? Those that perform control flow. For example, the throw keyword interrupts the calculation of an expression and throws an exception out of the enclosing function. A throw is therefore an expression of type Nothing. By having Nothing as a subtype of every other type, the type system allows any expression in the program to actually fail to calculate a value. This models real world eventualities, such as the JVM running out of memory while calculating an expression, or someone pulling out the computer’s power plug. It also means that we can throw exceptions from within any expression.

fun formatCell(value: Double): String =
    if (value.isNaN())
        throw IllegalArgumentException("$value is not a number")
    else
        value.toString()

It may come as a surprise to learn that the return statement has the type Nothing. Return is a control flow statement that immediately returns a value from the enclosing function, interrupting the evaluation of any expression of which it is a part.
fun formatCellRounded(value: Double): String {
    val rounded: Long = if (value.isNaN()) return "#ERROR" else Math.round(value)
    return rounded.toString()
}

A function that enters an infinite loop or kills the current process has a result type of Nothing. For example, the Kotlin standard library declares the exitProcess function as:

fun exitProcess(status: Int): Nothing

If you write your own function that returns Nothing, the compiler will check for unreachable code after a call to your function just as it does with built-in control flow statements.

inline fun forever(action: () -> Unit): Nothing {
    while (true) action()
}

fun example() {
    forever {
        println("doing...")
    }
    println("done") // Warning: Unreachable code
}

Like null safety, unreachable code analysis is not implemented by ad-hoc, special-case checks in the IDE and compiler, as it has to be in Java. It’s a function of the type system.

Nullable Nothing?

Nothing, like any other type, can be made nullable, giving the type Nothing?. Nothing? can only contain one value: null. In fact, Nothing? is the type of null. Nothing? is the ultimate subtype of all nullable types, which lets the value null be used as a value of any nullable type.

Conclusion

When you consider it all at once, Kotlin’s entire type hierarchy can feel quite complicated. But never fear! I hope this article has demonstrated that Kotlin has a simple and consistent type system. There are few rules to learn: a hierarchy of supertype/subtype relationships with Any? at the top and Nothing at the bottom, and subtype relationships between non-null and nullable types. That’s it. There are no special cases. Useful language features like null safety, object-oriented polymorphism, and unreachable code analysis all result from these simple, predictable rules. Thanks to this consistency, Kotlin’s type checker is a powerful tool that helps you write concise, correct programs.

[1] “Neat” meaning “done with or demonstrating skill or efficiency”, rather than the Kevin Costner backstage at a Madonna show sense of the word.
Categories: Programming, Testing & QA

Accountability for What You Say is Dangerous and That’s Okay

James Bach’s Blog - Sat, 10/01/2016 - 20:33

[Note: I offered Maaret Pyhäjärvi the right to review this post and suggest edits to it before I published it. She declined.]

A few days ago I was keynoting at the New Testing Conference, in New York City, and I used a slide that has offended some people on Twitter. This blog post is intended to explore that and hopefully improve the chances that if you think I’m a bad guy, you are thinking that for the right reasons and not making a mistake. It’s never fun for me to be a part of something that brings pain to other people. I believe my actions were correct, yet still I am sorry that I caused Maaret hurt, and I will try to think of ways to confer better in the future.

Here’s the theme of this post: Getting up in front of the world to speak your mind is a dangerous process. You will be misunderstood, and that will feel icky. Whether or not you think of yourself as a leader, speaking at a conference IS an act of leadership, and leadership carries certain responsibilities.

I long ago learned to let go of the outcome when I speak in public. I throw the ideas out there, and I do that as an American Aging Overweight Left-Handed Atheist Married Father-And-Father-Figure Rough-Mannered Bearded Male Combative Aggressive Assertive High School Dropout Self-Confident Freedom-Loving Sometimes-Unpleasant-To-People-On-Twitter Intellectual. I know that my ideas will not be considered in a neutral context, but rather in the context of how people feel about all that. I accept that.  But, I have been popular and successful as a speaker in the testing world, so maybe, despite all the difficulties, enough of my message and intent gets through, overall.

What I can’t let go of is my responsibility to my audience and the community at large to speak the truth and to do so in a compassionate and reasonable way. Regardless of what anyone else does with our words, I believe we speakers need to think about how our actions help or harm others. I think a lot about this.

Let me clarify. I’m not saying it’s wrong to upset people or to have disagreement. We have several different culture wars (my reviewers said “do you have to say wars?”) going on in the software development and testing worlds right now, and they must continue or be resolved organically in the marketplace of ideas. What I’m saying is that anyone who speaks out publicly must try to be cognizant of what words do and accept the right of others to react.

Although I’m surprised and certainly annoyed by the dark interpretations some people are making of what I did, the burden of such feelings is what I took on when I first put myself forward as a public scold about testing and software engineering, a quarter century ago. My annoyance about being darkly interpreted is not your problem. Your problem, assuming you are reading this and are interested in the state of the testing craft, is to feel what you feel and think what you think, then react as best fits your conscience. Then I listen and try to debug the situation, including helping you debug yourself while I debug myself. This process drives the evolution of our communities. Jay Philips, Ash Coleman, Mike Talks, Ilari Henrik Aegerter, Keith Klain, Anna Royzman, Anne-Marie Charrett, David Greenlees, Aaron Hodder, Michael Bolton, and my own wife all approached me with reactions that helped me write this post. Some others approached me with reactions that weren’t as helpful, and that’s okay, too.

Leadership and The Right of Responding to Leaders

In my code of conduct, I don’t get to say “I’m not a leader.” I can say no one works for me and no one has elected me, but there is more to leadership than that. People with strong voices and ideas gain a certain amount of influence simply by virtue of being interesting. I made myself interesting, and some people want to hear what I have to say. But that comes with an implied condition that I behave reasonably. The community, over time, negotiates what “reasonable” means. I am both a participant and a subject of those negotiations. I recommend that we hold each other accountable for our public, professional words. I accept accountability for mine. I insist that this is true for everyone else. Please join me in that insistence.

People who speak at conferences are tacitly asserting that they are thought leaders: that they deserve to influence the community. If that influence comes with a rule that “you can’t talk about me without my permission” it would have a chilling effect on progress. You can keep to yourself, of course; but if you exercise your power of speech in a public forum you cannot cry foul when someone responds to you. Please join me in my affirmation that we all have the right of response when a speaker takes the microphone to keynote at a conference.

Some people have pointed out that it’s not okay to talk back to performers in a comedy show or Broadway play. Okay. So is that what a conference is to you? I guess I believe that conferences should not be for show. Conferences are places for conferring. However, I can accept that some parts of a conference might be run like infomercials or circus acts. There could be a place for that.

The Slide

Here is the slide I used the other day:

[Slide image]

Before I explain this slide, try to think what it might mean. What might its purposes be? That’s going to be difficult, without more information about the conference and the talks that happened there. Here are some things I imagine may be going through your mind:

  • There is someone whose name is Maaret who James thinks he’s different from.
  • He doesn’t trust nice people. Nice people are false. Is Maaret nice and therefore he doesn’t trust her, or does Maaret trust nice people and therefore James worries that she’s putting herself at risk?
  • Is James saying that niceness is always false? That seems wrong. I have been nice to people whom I genuinely adore.
  • Is he saying that it is sometimes false? I have smiled and shaken hands with people I don’t respect, so, yes, niceness can be false. But not necessarily. Why didn’t he put qualifying language there?
  • He likes debate and he thinks that Maaret doesn’t? Maybe she just doesn’t like bad debate. Did she actually say she doesn’t like debate?
  • What if I don’t like debate, does that mean I’m not part of this community?
  • He thinks excellence requires attention and energy and she doesn’t?
  • Why is James picking on Maaret?

Look, if all I saw was this slide, I might be upset, too. So, whatever your impression is, I will explain the slide.

Like I said I was speaking at a conference in NYC. Also keynoting was Maaret Pyhäjärvi. We were both speaking about the testing role. I have some strong disagreements with Maaret about the social situation of testers. But as I watched her talk, I was a little surprised at how I agreed with the text and basic concepts of most of Maaret’s actual slides, and a lot of what she said. (I was surprised because Maaret and I have a history. We have clashed in person and on Twitter.) I was a bit worried that some of what I was going to say would seem like a rehash of what she just did, and I didn’t want to seem like I was papering over the serious differences between us. That’s why I decided to add a contrast slide to make sure our differences weren’t lost in the noise. This means a slide that highlights differences, instead of points of connection. There were already too many points of connection.

The slide was designed specifically:

  • for people to see who were in a specific room at a specific time.
  • for people who had just seen a talk by Maaret which established the basis of the contrast I was making.
  • about differences between two people who are both in the spotlight of public discourse.
  • to express views related to technical culture, not general social culture.
  • to highlight the difference between two talks for people who were about to see the second talk that might seem similar to the first talk.
  • for a situation where both I and Maaret were present in the room during the only time that this slide would ever be seen (unless someone tweeted it to people who would certainly not understand the context).
  • as talking points to accompany my live explanation (which is on video and I assume will be public, someday).
  • for a situation where I had invited anyone in the audience, including Maaret, to ask me questions or make challenges.

These people had just seen Maaret’s talk and were about to see mine. In the room, I explained the slide and took questions about it. Maaret herself spoke up about it, for which I publicly thanked her. It wasn’t something I was posting with no explanation or context. Nor was it part of the normal slides of my keynote.

Now I will address some specific issues that came up on Twitter:

1. On Naming Maaret

Maaret has expressed the belief that no one should name another person in their talk without getting their permission first. I vigorously oppose that notion. It’s completely contrary to the workings of a healthy society. If that principle is acceptable, then you must agree that there should be no free press. Instead, I would say if you stand up and speak in the guise of an expert, then you must be personally accountable for what you say. You are fair game to be named and critiqued. And the weird thing is that Maaret herself, regardless of what she claims to believe, behaves according to my principle of freedom to call people out. She, herself, tweeted my slide and talked about me on Twitter without my permission. Of course, I think that is perfectly acceptable behavior, so I’m not complaining. But it does seem to illustrate that community discourse is more complicated than “be nice” or “never cause someone else trouble with your speech” or “don’t talk about people publicly unless they gave you permission.”

2. On Being Nice

Maaret had a slide in her talk about how we can be kind to each other even though we disagree. I remember her saying the word “nice” but she may have said “kind” and I translated that into “nice” because I believed that’s what she meant. I react to that because, as a person who believes in the importance of integrity and debate over getting along for the sake of appearances, I observe that exhortations to “be nice” or even to “be kind” are often used when people want to quash disturbing ideas and quash the people who offer them. “Be nice” is often code for “stop arguing.” If I stop arguing, much of my voice goes away. I’m not okay with that. No one who believes there is trouble in the world should be okay with that. Each of us gets to have a voice.

I make protests about things that matter to me, you make protests about things that matter to you.

I think we need a way of working together that encourages debate while fostering compassion for each other. I use the word compassion because I want to get away from ritualized command phrases like “be nice.” Compassion is a feeling that you cultivate, rather than a behavior that you conform to or simulate. Compassion is an antithesis of “Rules of Order” and other lists of commandments about courtesy. Compassion is real. Throughout my entire body of work you will find that I promote real craftsmanship over just following instructions. My concern about “niceness” is the same kind of thing.

Look at what I wrote: I said “I don’t trust nice people.” That’s a statement about my feelings and it is generally true, all things being equal. I said “I’m not nice.” Yet, I often behave in pleasant ways, so what did I mean? I meant I seek to behave authentically and compassionately, which looks like “nice” or “kind”, rather than to imagine what behavior would trick people into thinking I am “nice” when indeed I don’t like them. I’m saying people over process, folks.

I was actually not claiming that Maaret is untrustworthy because she is nice, and my words don’t say that. Rather, I was complaining about the implications of following Maaret’s dictum. I was offering an alternative: be authentic and compassionate, then “niceness” and acts of kindness will follow organically. Yes, I do have a worry that Maaret might say something nice to me and I’ll have to wonder “what does that mean? is she serious or just pretending?” Since I don’t want people to worry about whether I am being real, I just tell them “I’m not nice.” If I behave nicely it’s either because I feel genuine good will toward you or because I’m falling down on my responsibility to be honest with you. That second thing happens, but it’s a lapse. (I do try to stay out of rooms with people I don’t respect so that I am not forced to give them opinions they aren’t willing or able to process.)

I now see that my sentence “I want to be authentic and compassionate” could be seen as an independent statement connected to “how I differ from Maaret,” implying that I, unlike her, am authentic and compassionate. That was an errant construction and does not express my intent. The orange text on that line indicated my proposed policy, in the hope that I could persuade her to see it my way. It was not an attack on her. I apologize for that confusion.

3. Debate vs. Dialogue

Maaret had earlier said she doesn’t want debate, but rather dialogue. I have heard this from other Agilists and I find it disturbing. I believe this is code for “I want the freedom to push my ideas on other people without the burden of explaining or defending those ideas.” That’s appropriate for a brainstorming session, but at some point, the brainstorming is done and the judging begins. I believe debate is absolutely required for a healthy professional community. I’m guided in this by dialectical philosophy, the history of scientific progress, the history of civil rights (in fact, all of politics), and the modern adversarial justice system. Look around you. The world is full of heartfelt disagreement. Let’s deal with it. I helped create the culture of small invitational peer conferences in our industry which foster debate. We need those more than ever.

But if you don’t want to deal with it, that’s okay. All that means is that you accept that there is a wall between your friends and those other people whom you refuse to debate with. I will accept the walls if necessary but I would rather resolve the walls. That’s why I open myself and my ideas for debate in public forums.

Debate is not a process of sticking figurative needles into other people. Debate is the exchange of views with the goal of resolving our differences while being accountable for our words and actions. Debate is a learning process. I have occasionally heard from people I think are doing harm to the craft that they believe I debate for the purposes of hurting people instead of trying to find resolution. This is deeply insulting to me, and to anyone who takes his vocation seriously. What’s more, considering that these same people express the view that it’s important to be “nice,” it’s not even nice. Thus, they reveal themselves to be unable to follow their own values. I worry that “Dialogue not debate” is a slogan for just another power group trying to suppress its rivals. Beware the Niceness Gang.

I understand that debating with colleagues may not be fun. But I’m not doing it for fun. I’m doing it because it is my responsibility to build a respectable craft. All testing professionals share this responsibility. Debate serves another purpose, too, managing the boundaries between rival value systems. Through debate we may discover that we occupy completely different paradigms; schools of thought. Debate can’t bridge gaps between entirely different world views, and yet I have a right to my world view just as you have a right to yours.

Jay Philips said on Twitter:

@jamesmarcusbach pointless 2debate w/ U because in your mind you’re right. Slide &points shouldn’t have happened @JokinAspiazu @ericproegler

— Jay Philips (@jayphilips) September 30, 2016

I admire Jay. I called her and we had a satisfying conversation. I filled her in on the context and she advised me to write this post.

One thing that came up is something very important about debate: the status of ideas is not the only thing that gets modified when you debate someone; what also happens is an evolution of feelings.

Yes I think “I’m right.” I acted according to principles I think are eternal and essential to intellectual progress in society. I’m happy with those principles. But I also have compassion for the feelings of others, and those feelings may hold sway even though I may be technically right. For instance, Maaret tweeted my slide without my permission. That is copyright violation. She’s objectively “wrong” to have done that. But that is irrelevant.

[Note: Maaret points out that this is legal under the fair use doctrine. Of course, that is correct. I forgot about fair use. Of course, that doesn’t change the fact that though I may feel annoyed by her selective publishing of my work, that is irrelevant, because I support her option to do that. I don’t think it was wise or helpful for her to do that, but I wouldn’t seek to bar her from doing so. I believe in freedom to communicate, and I would like her to believe in that freedom, too]

I accept that she felt strongly about doing that, so I [would] choose to waive my rights. I feel that people who tweet my slides, in general, are doing a service for the community. So while I appreciate copyright law, I usually feel okay about my stuff getting tweeted.

I hope that Jay got the sense that I care about her feelings. If Maaret were willing to engage with me she would find that I care about her feelings, too. This does not mean she gets whatever she wants, but it’s a factor that influences my behavior. I did offer her the chance to help me edit this post, but again, she refused.

4. Focus and Energy

Maaret said that eliminating the testing role is a good thing. I worry it will lead to the collapse of craftsmanship. She has a slide that says “from tester to team member” which is a sentiment she has expressed on Twitter that led me to say that I no longer consider her a tester. She confirmed to me that I hurt her feelings by saying that, and indeed I felt bad saying it, except that it is an extremely relevant point. What does it mean to be a tester? This is important to debate. Maaret has confirmed publicly (when I asked a question about this during her talk) that she didn’t mean to denigrate testing by dismissing the value of a testing role on projects. But I don’t agree that we can have it both ways. The testing role, I believe, is a necessary prerequisite for maintaining a healthy testing craft. My key concern is the dilution of focus and energy that would otherwise go to improving the testing craft. This is lost when the role is lost.

This is not an attack on Maaret’s morality. I am worried she is promoting too much generalism for the good of the craft, and she is worried I am promoting too much specialism. This is a matter of professional judgment and perspective. It cannot be settled, I think, but it must be aired.

The Slide Should Not Have Been Tweeted But It’s Okay That It Was

I don’t know what Maaret was trying to accomplish by tweeting my slide out of context. Suffice it to say what is right there on my slide: I believe in authenticity and compassion. If she was acting out of authenticity and compassion then more power to her. But the slide cannot be understood in isolation. People who don’t know me, or who have any axe to grind about what I do, are going to cry “what a cruel man!” My friends contacted me to find out more information.

I want you to know that the slide was one part of a bigger picture that depicts my principled objection to several matters involving another thought leader. That bigger picture is: two talks, one room, all people present for it, a lot of oratory by me explaining the slide, as well as back and forth discussion with the audience. Yes, there were people in the room who didn’t like hearing what I had to say, but “don’t offend anyone, ever” is not a rule I can live by, and neither can you. After all, I’m offended by most of the talks I attend.

Although the slide should not have been tweeted, I accept that it was, and that doing so was within the bounds of acceptable behavior. As I announced at the beginning of my talk, I don’t need anyone to make a safe space for me. Just follow your conscience.

What About My Conscience?
  • My conscience is clean. I acted out of true conviction to discuss important matters. I used a style familiar to anyone who has ever seen a public debate, or read an opinion piece in the New York Times. I didn’t set out to hurt Maaret’s feelings and I don’t want her feelings to be hurt. I want her to engage in the debate about the future of the craft and be accountable for her ideas. I don’t agree that I was presuming too much in doing so.
  • Maaret tells me that my slide was “stupid and hurtful.” I believe she and I do not share certain fundamental values about conferring. I will no longer be conferring with her, until and unless those differences are resolved.
  • Compassion is important to me. I will continue to examine whether I am feeling and showing the compassion for my fellow humans that they are due. These conversations and debates I have with colleagues help me do that.
  • I agree that making a safe space for students is important. But industry consultants and pundits should be able to cope with the full spectrum, authentic, principled reactions by their peers. Leaders are held to a higher standard, and must be ready and willing to defend their ideas in public forums.
  • The reaction on Twitter gave me good information about a possible trend toward fragility in the Twitter-facing part of the testing world. There seems to be a significant group of people who prize complete safety over the value that comes from confrontation. In the next conference I help arrange, I will set more explicit ground rules, rather than assuming people share something close to my own sense of what is reasonable to do and expect.
  • I will also start thinking, for each slide in my presentation: “What if this gets tweeted out of context?”

(Oh, and to those who compared me to Donald Trump… Can you even imagine him writing a post like this in response to criticism? BELIEVE ME, he wouldn’t.)

Categories: Testing & QA

How Michael Bolton and I Collaborate on Articles

James Bach’s Blog - Mon, 09/05/2016 - 07:28

(Someone posted a question on Quora asking how Michael and I write articles together. This is the answer I gave, there.)

It begins with time. We take our time. We rarely write on a deadline, except for fun, self-imposed deadlines that we can change if we really want to. For Michael and me, the quality of our writing always dominates any other consideration.

Next is our commitment to each other. Neither one of us can contemplate releasing an article that the other of us is not proud of and happy with. Each of us gets to “stop ship” at any time, for any reason. We develop a lot of our work through debate, and sometimes the debate gets heated. I have had many colleagues over the years who tired of my need to debate even small issues. Michael understands that. When our debating gets too hot, as it occasionally does, we know how to stop, take a break if necessary, and remember our friendship.

Then comes passion for the subject. We don’t even try to write articles about things we don’t care about. Otherwise, we couldn’t summon the energy for the debate and the study that we put into our work. Michael and I are not journalists. We don’t function like reporters talking about what other people do. You will rarely find us quoting other people in our work. We speak from our own experiences, which gives us a sort of confidence and authority that comes through in our writing.

Our review process also helps a lot. Most of the work we do is reviewed by other colleagues. For our articles, we use more reviewers. The reviewers sometimes give us annoying responses, and they generally aren’t as committed to debating as we are. But we listen to each one and do what we can to answer their concerns without sacrificing our own vision. The responses can be annoying when a reviewer reads something into our article that we didn’t put there; some assumption that may make sense according to someone else’s methodology but not for our way of thinking. But after taking some time to cool off, we usually add more to the article to build a better bridge to the reader. This is especially true when more than one reviewer has a similar concern. Ultimately, of course, pleasing people is not our mission. Our mission is to say something true, useful, important, and compassionate (in that order of priority, at least in my case). Note that “amiable” and “easy to understand” or “popular” are not on that short list of highest priorities.

As far as the mechanisms of collaboration go, it depends on who “owns” it. There are three categories of written work: my blog, Michael’s blog, and jointly authored standalone articles. For the latter, we use Google Docs until we have a good first draft. Sometimes we write simultaneously on the same paragraph; more normally we work on different parts of it. If one of us is working on it alone he might decide to re-architect the whole thing, subject, of course, to the approval of the other.

After the first full draft (our recent automation article went through 28 revisions, according to Google Docs, over 14 weeks, before we reached that point), one of us will put it into Word and format it. At some point one of us becomes the “article boss” and manages most of the actual editing to get it done, while the other reviews each draft and comments. One reviewing heuristic we frequently use is to turn change-tracking off for the first re-read if there have been many changes. That way, whichever of us is reviewing is less likely to object to a change out of pure attachment to the previous text, rather than because of an actual problem with the new text.

For the blogs, usually we have a conversation, then the guy who’s going to publish it on his blog writes a draft and does all the editing while getting comments from the other guy. The publishing party decides when to “ship” but will not do so over the other party’s objections.

I hope that makes it reasonably clear.

(Thanks to Michael Bolton for his review.)

Categories: Testing & QA

Thought after Test Automation Day 2013

Henry Ford said: “Obstacles are those frightful things you see when you take your eyes off the goal.” Since attending Test Automation Day last month I have been trying to figure out why industrializing testing doesn’t work. I deliberately put it in this negative perspective, because I think it does work! But when is it successful? A lot of the time, Ford’s remark is exactly the problem: people tend to see obstacles, obstacles born of the belief that changing something is not feasible. They need to change, but that’s not an easy change.

At #TAD2013, as the event was known on Twitter, I saw a huge interest in better, faster, even cheaper testing through tools that industrialize the work. Test automation has long been seen as an interesting option that enables faster testing. It wasn’t always cheaper, especially the first time, but it was at least faster. As I see it, it also enables better testing. “Better?” you may ask. Test automation itself doesn’t enable better testing, but by automating regression tests and simple checks, the tester can focus on other areas of quality.


And isn’t that the goal? In the end, everyone involved in a project wants to deliver a high-quality product, not one full of bugs. But they also tend to see the obstacles. I see them less and less. Today’s tools are so advanced, and automation testers are becoming so much smarter, that they enable us to look beyond the obstacles – or, as I would rather say, over them.

At the Test Automation Day I learned some new things, but it also confirmed something I already knew: test automation is here to stay. We don’t need to focus on the obstacles; we should focus on the goal.

Categories: Testing & QA

The Toyota Way: The need for doing it right the first time

After WWII, Toyota started developing its Toyota Production System (TPS), which was identified as ‘Lean’ in the 1990s. Taiichi Ohno, Shigeo Shingo and Eiji Toyoda developed the system between 1948 and 1975. According to the story surrounding the system, it was inspired not by the American automotive industry but by a visit to American supermarkets: Ohno saw the supermarket as a model for what he was trying to accomplish in the factory, and as a way to perfect the Just-in-Time (JIT) production system. Low inventory levels were a key outcome of the TPS, and an important element of the philosophy behind the system is to work intelligently and eliminate waste, so that only minimal inventory is needed.

TPS and Lean have their own principles, as outlined by Toyota:

  • Long-term philosophy
  • The right process will produce the right results
  • Add value to the organization by developing your people
  • Continuously solving root problems drives organizational learning

These principles were summed up and published by Toyota in 2001 under the name “The Toyota Way 2001”. It organizes the principles above into two key areas: Continuous Improvement and Respect for People.

[Image: The Toyota Way]

The principles for continuous improvement include establishing a long-term vision, working on challenges, continual innovation, and going to the source of an issue or problem. The principles relating to respect for people include ways of building respect and teamwork. When looking at application lifecycle management (ALM), all these principles come together in the ‘first time right’ approach already mentioned. From Toyota’s view, they are outlined as follows:

  • The right process will produce the right results
    • Create continuous process flow to bring problems to the surface.
    • Use the ‘pull’ system to avoid overproduction (kanban).
    • Level out the workload (heijunka).
    • Build a culture of stopping to fix problems, to get quality right the first time (jidoka).
  • Continuously solving root problems drives organizational learning
    • Go and see for yourself to thoroughly understand the situation (genchi genbutsu).
    • Make decisions slowly by consensus, thoroughly considering all options (nemawashi); implement decisions rapidly.
    • Become a learning organization through relentless reflection (hansei) and continuous improvement (kaizen).
Let’s do it right now!

As the economy changes and IT becomes more commonplace in our everyday lives, the need for good-quality software products has never been higher. Software issues create bigger and bigger problems in our lives. Think of trains that cannot run due to software issues, bank clients who have no access to their accounts, and people oversleeping because the alarm app on their iPhone didn’t work. As Capers Jones [Jones, 2011] states in his 2011 study, “software is blamed for more major business problems than any other man-made product” and “poor quality has become one of the most expensive topics in human history”. The improvement of software quality is a key topic for all industries.

 Right the first time vs jidoka

Both TPS and Lean use autonomation, or jidoka. Autonomation can be described as ‘intelligent automation’: when an abnormal situation arises, the ‘machine’ stops and the abnormality is fixed. Autonomation prevents the production of defective products, eliminates overproduction, and focuses attention on understanding the problem and ensuring that it never recurs. It is a quality control process that applies the following four principles:

  • Detect the abnormality.
  • Stop.
  • Fix or correct the immediate condition.
  • Investigate the root cause and install a countermeasure.
Find defects as early as possible

In other words, autonomation helps to get quality right the first time, perfectly. IT projects differ from the Toyota car production line, so ‘perfectly’ may be a bit much, but the quality assurance process should be the same:

  • Find the defect.
  • Stop.
  • Fix or correct the error.
  • Investigate the root cause and take countermeasures.

A defect should be found as early as possible so that it can be fixed as early as possible. As with Lean and TPS, the reason behind this is to make it possible to identify and correct defects immediately, within the process.
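As a loose illustration (my own sketch, not from the original text), the same ‘stop the line’ discipline can be encoded in a build or test pipeline that halts at the first detected defect; the runner function and check names below are invented for the example:

    // Hypothetical 'stop the line' runner in the spirit of jidoka.
    // Each named check detects one class of defect; the line stops at the
    // first failure so the root cause can be investigated before anything
    // downstream proceeds.
    fun runLine(checks: List<Pair<String, () -> Boolean>>) {
        for ((name, passed) in checks) {
            if (!passed()) {
                println("Defect detected in '$name': stop the line, fix, investigate")
                return // nothing downstream runs until the defect is addressed
            }
            println("'$name' passed")
        }
        println("Line complete: quality built in, not inspected in afterwards")
    }

    fun main() {
        runLine(listOf(
            "static analysis" to { true },
            "unit tests" to { false },      // simulated defect: the line stops here
            "integration tests" to { true } // never reached while the defect exists
        ))
    }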

Categories: Testing & QA

When will you start with test automation?

I just came back from vacation, and when I started work again I noticed a slight change in the resource requests coming by: almost all of them now include a statement about test automation. In the last two days I had two separate sessions on test automation tools. Has test automation suddenly become more important? Did people follow up on my last post, where I stated that tools are a prerequisite for testing today – or rather, yesterday?

If you missed the latest cycle of new test automation tools, you’re either an ostrich with your head in the sand (“sorry, my vacation was in Southern Africa”), or you are simply still afraid – afraid that test automation will cannibalize your manual test execution.

Test automation is no longer about taking over test execution in a complex and unmanageable way. It now offers higher efficiency in test design and test execution, more options for testing non-functional aspects of applications that could not be tested without tools – such as security and performance – and virtual environments for end-to-end tests that don’t depend on test environments that are down all the time. Tools now also offer more support for testing mobile solutions. Tools are everywhere!


Test automation offers us testers the opportunity to do more, faster, less risky, and cheaper. I put those words deliberately in that order. Test automation is often seen as a way to test more cheaply. You can, but you can also do more, for instance:

  • Let the tool do the checks while you explore the application further.
  • Set up a virtual test environment that doesn’t go down after an hour of use, and test more in the same time.
  • Create and execute more test cases by generating and executing them automatically (see the sketch after this list).
  • Get higher quality by doing a really thorough regression test, instead of a simple check, to find integration errors.
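To make the third point concrete, here is a hypothetical sketch (mine, not part of the original post) of generating and executing test cases automatically, in the style of property-based testing:

    import kotlin.random.Random

    // Generate many inputs automatically and run the same check against each,
    // instead of hand-writing individual test cases.
    fun reverseTwiceIsIdentity(input: List<Int>): Boolean =
        input.reversed().reversed() == input

    fun main() {
        repeat(1_000) {
            val case = List(Random.nextInt(0, 20)) { Random.nextInt() }
            check(reverseTwiceIsIdentity(case)) { "Failed for input: $case" }
        }
        println("1000 generated test cases passed")
    }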

There are enough reasons to work on test automation, and I see no reason not to. I think it is even time for the next step in test automation. What that is, time will tell, but I look forward to hearing about it at the Test Automation Day in June, where Bryan Bakker will say more about this next step in his presentation “Design for Testability – the next step in Test Automation”. After the conference I’ll post my ideas here.

Categories: Testing & QA

Tools should be a prerequisite for efficient and effective QA

We now live in a world where testing and quality are becoming more and more important. Last month I had a meeting with senior management in my company and made the statement that “quality is user experience”; in other words, “without the right amount of quality, the user experience will always be low”. I think most people in QA and testing will agree with me on that. Even organizations agree on that. Why, then, do we still see so many failures in the software around us? Why do we still create software without the needed quality?

For one, because it’s not possible to test 100%! That is a known issue in QA, but it’s not the answer we’re looking for. I think the answer is that we still rely too much on old-fashioned manual (functional) testing. As I explained in an earlier blog, we need to go past that and move forward. Testing is part of IT and needs to present itself as a highly versatile profession. We need to be able to save money, deliver higher quality, shorten time to market, and go live with as few bugs as possible…

How can we do that? There are multiple ways to answer that, but one thing will always be among the answers: test automation, or industrialization. Tools should be a prerequisite for efficient and effective QA. The question should not be whether to use them, but why not to use them.

Why not use test tools?

With Agile approaches in the software development lifecycle, the need for test automation has never been higher. New-generation test tools are easy to use, low cost, or both. Examples I favor are the new Tricentis TOSCA™ Testsuite, Worksoft Certify©, and the SOASTA® Platform, but also the open source tool Selenium. And QA, and IT as a whole, need to go further: not only using tools to automate test execution, performance testing, and security testing, but even more for test specification.
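To show how low the entry barrier has become, here is a minimal sketch of an automated check with Selenium WebDriver (my own example; it assumes the selenium-java library and a local Chrome with chromedriver, and the target URL and check are invented for illustration):

    import org.openqa.selenium.By
    import org.openqa.selenium.chrome.ChromeDriver

    // Load a page and verify that it has a heading: the smallest possible
    // automated browser check.
    fun main() {
        val driver = ChromeDriver()
        try {
            driver.get("https://example.com")
            val heading = driver.findElement(By.tagName("h1")).text
            check(heading.isNotBlank()) { "Expected a page heading, found none" }
            println("Check passed: heading = $heading")
        } finally {
            driver.quit() // always release the browser, even when the check fails
        }
    }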

The upcoming modelization of IT extends the usage of tools even further. We can create models and use them (with special tools) to specify test cases, create requirements, generate code, and more. IT can benefit from this modelization to help the business go further and achieve its goals. I’ve written about a good example of this in this blog on fully automated testing.

The tools are the prerequisite, but how can you learn more about them? Well, if you are in the Netherlands at the end of June, you could go to the Test Automation Day. They just published the program on their site, so you can learn more about test automation.

Categories: Testing & QA