Software Development Blogs: Programming, Software Testing, Agile Project Management



Exposing a private Amazon RDS instance with iptables NAT rules

Agile Testing - Grig Gheorghiu - Thu, 07/28/2016 - 17:43
I needed to expose a private Amazon MySQL RDS instance to a 3rd party SaaS tool. I tried several approaches and finally found one that seemed to work pretty well.

I ended up creating a small EC2 instance in the same VPC as the RDS instance, and applied these iptables NAT/masquerading rules to it, mapping local port 3307 to port 3306 on the RDS instance, whose internal IP address is in this case 172.16.11.2.

# cat iptables_tunnel_port_3307.sh
#!/bin/bash

# flush existing filter and NAT rules and delete any user-defined chains
iptables -F
iptables -F -t nat
iptables -X

# forward incoming connections on port 3307 to the RDS instance on port 3306
iptables -t nat -A PREROUTING -p tcp --dport 3307 -j DNAT --to 172.16.11.2:3306
iptables -A FORWARD -p tcp -d 172.16.11.2 --dport 3306 -j ACCEPT

# do the same for connections originated locally on the instance (e.g. to localhost:3307)
iptables -t nat -A OUTPUT -p tcp -o lo --dport 3307 -j DNAT --to 172.16.11.2:3306

# rewrite the source address so return traffic from the RDS instance comes back through this instance
iptables -t nat -A POSTROUTING -j MASQUERADE

I also had to enable IP forwarding on the EC2 instance:

# sysctl -w net.ipv4.ip_forward=1

To make the setting persist across reboots, also set net.ipv4.ip_forward = 1 in /etc/sysctl.conf and reload it:

# sysctl -p

At this point, I was able to hit the external IP of the EC2 instance on port 3307, and get to the private RDS instance on port 3306. I was also able to attach the EC2 instance to an EC2 Security Group allowing the 3rd party SaaS tool IP addresses to access port 3307 on the EC2 instance.
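As a quick sanity check (a hedged example; the host name, user and credentials below are placeholders), any MySQL client allowed by that Security Group should be able to reach the RDS instance through the tunnel:

$ mysql -h EC2_INSTANCE_EXTERNAL_IP -P 3307 -u mydbuser -p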

My thanks to the people discussing a similar issue on this thread of LinuxQuestions. Without their discussion, I don't think I'd have been able to figure out a solution.

Code awareness levels

Actively Lazy - Thu, 07/28/2016 - 09:23

Writing code is all about working at multiple levels of abstraction concurrently. But as well as working at multiple levels of abstraction there are also multiple levels of awareness of the code.

The most basic level of code awareness is just making the code work. Does this line of code compile, run and do what I intended it to do? While learning to code this lowest level of awareness consumes almost all our capacity: it is difficult to focus on anything else other than just typing in the correct syntax.

Once we’ve mastered typing in code that works there is another level of awareness: is the intent of this line of code clear? Will another human being be able to understand this? Hell, will I be able to understand it in a month’s time? This is where code readability begins. While writing code we expend some of our mental capacity on making sure that the code will be readable.

Code legibility, though, is about more than each single line of code in isolation. We also need to consider whether this line makes sense in the context of the rest of this method – or whether we should extract a new method. Does this line of code make sense in the context of the rest of this class? Does the behaviour belong somewhere else? Does the design that I’m discovering actually make sense? Is it consistent? Is the pattern I’m implementing consistent with the rest of the application? Besides making each line of code compile there are numerous higher levels – rising through each of the larger structures in the application: the method, the class, the module, the component, the system.

Besides code legibility there are other factors to be aware of: will failures in this code be easy to understand, will exceptions help me get to the right part of the code quickly, what happens if things I depend on fail? Will this code perform well, in what scenarios and with what frequency will it be called? Is this code secure, are there ways for an attacker to take advantage of this code?

Writing good code requires developers to be aware of these levels all the time. However, if we’re tired, hungover or fed up we probably won’t give much time to these concerns. We’ll just make the code that’s in front of us work and ignore the bigger picture. When we’re lacking mental capacity, we tend to stop worrying about the higher levels and focus on the lower-level, easier problems. “I’ll come back later and fix the design”, we tell ourselves.

This is where pairing can help. I don’t think it’s true that the pairing partner is only thinking about the bigger picture; but the non-driving partner certainly has spare capacity to think about these things and act as a conscience against the driver missing something or getting tired.

I like to use hands-on coding exercises in interview situations because it helps highlight how much awareness a developer has. Generally I’m looking for developers that, for a given problem, have enough capacity to solve the problem quickly and well. Less capable developers will generally take longer and/or solve the problem badly. In an interview situation though, there are extra levels of awareness, besides just solving the problem: is what I’m doing creating a good impression? Am I solving the problem in the right way (e.g. using TDD)? Is the way I’m solving the problem showing off the right skills?

For example, a while back I interviewed a candidate who used “yield return”. This is an unusual thing to see, so I asked why use it? “To start a discussion about whether or not you should ever use it”. This I found quite impressive: a candidate with enough mental capacity not only to solve the problem at hand, in a good way, but with enough spare capacity left to realise that there was an opportunity to use an unusual construct purely to provoke a discussion of it – effectively driving the interview through code, to show off the depth of their knowledge and experience. This is the kind of person you want on your team: someone with enough spare mental capacity to make sure the code is clean and the design correct.

I think this lack of capacity is why the code we write when we’re tired or when we’re learning a new language or learning a new framework is bad: all our capacity is focused on just making the code work, we don’t have spare capacity to also think about whether the code is readable or whether the design makes sense. When we’re struggling to just make the code do what we want, to just keep the compiler happy, is it any wonder the outcome is a mess?

If you want to see how much mental capacity I can bring to your company, why not hire me? I’m looking for new roles right now – so drop me a line dave@activelylazy.co.uk.


Categories: Programming, Testing & QA

CC to Everyone

James Bach’s Blog - Tue, 07/26/2016 - 18:23
I sent this to someone who’s angry with me due to some professional matter we debated. A colleague thought it would be worth showing you, too. So, for whatever it’s worth:

I will say this. I don’t want anyone to feel bad about me, or about my behavior, or about themselves. I can live with that, but I don’t want it.

So, if there is something simple I can do to help people feel better, and it does not require me to tell a lie, then I am willing to do so.

I want people to excel at their craft and be happy. That’s actually what is motivating me, underneath all my arguing.

Categories: Testing & QA

Software Development Conferences Forecast July 2016

From the Editor of Methods & Tools - Mon, 07/18/2016 - 14:12
Here is a list of software development related conferences and events on Agile project management ( Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP), DevOps and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods […]

Using JMESPath queries with the AWS CLI

Agile Testing - Grig Gheorghiu - Wed, 07/13/2016 - 22:05
The AWS CLI, based on the botocore Python library, is the recommended way of automating interactions with AWS. In this post I'll show some examples of more advanced AWS CLI usage using the query mechanism based on the JMESPath JSON query language.

Installing the AWS CLI tools is straightforward. On Ubuntu via apt-get:

# apt-get install awscli

Or via pip:

# apt-get install python-pip
# pip install awscli

The next step is to configure awscli by specifying the AWS Access Key ID and AWS Secret Access Key, as well as the default region and output format:

# aws configure
AWS Access Key ID: your-aws-access-key-id
AWS Secret Access Key: your-aws-secret-access-key
Default region name [us-west-2]: us-west-2
Default output format [None]: json
The configure command creates a ~/.aws directory containing two files: config and credentials.
You can specify more than one pair of AWS keys by creating profiles in these files. For example, in ~/.aws/credentials you can have:
[profile1]
aws_access_key_id = key1
aws_secret_access_key = secretkey1

[profile2]
aws_access_key_id = key2
aws_secret_access_key = secretkey2

(note that the credentials file does not use the "profile" prefix in its section names)

In ~/.aws/config you can have:

[profile profile1]
region = us-west-2

[profile profile2]
region = us-east-1
You can specify a given profile with the --profile option when you run the aws command:

# aws ec2 describe-snapshots --profile profile1
Let's assume you want to write a script using awscli that deletes EBS snapshots older than N days. Let's go through this one step at a time.
Here's how you can list all snapshots owned by you:
# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query "Snapshots[]"
Note the use of the --query option. It takes a parameter representing a JMESPath JSON query string. It's not trivial to figure out how to build these query strings, and I advise you to spend some time reading over the JMESPath tutorial and JMESPath examples.
In the example above, the query string is simply "Snapshots[]", which represents all the snapshots that are present in the AWS account associated with the profile profile1. The default output in our case is JSON, but you can specify --output text at the aws command line if you want to see each snapshot on its own line of text.
Let's assume that when you create the snapshots, you specify a description which contains PROD or STAGE for EBS volumes attached to production and stage EC2 instances respectively. If you want to only display snapshots containing the string PROD, you would do:
# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query "Snapshots[?contains(Description, \`PROD\`) == \`true\`]" --output text
The Snapshots[] array now contains a condition represented by the question mark ?. The condition uses the contains() function included in the JMESPath specification, and is applied against the Description field of each object in the Snapshots[] array, verifying that it contains the string PROD. Note the use of backquotes surrounding the strings PROD and true in the condition. I spent some quality time troubleshooting my queries when I used single or double quotes instead, to no avail. The backquotes also need to be escaped so that the shell doesn't interpret them as commands to be executed.
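Alternatively, you can wrap the whole --query expression in single quotes; the shell does not perform backquote substitution inside single quotes, so the backquotes don't need escaping at all:

# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query 'Snapshots[?contains(Description, `PROD`) == `true`]' --output text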
To restrict the PROD snapshots even further, to the ones older than say 7 days ago, you can do something like this:
DAYS=7
TARGET_DATE=`date --date="$DAYS day ago" +%Y-%m-%d`
# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query "Snapshots[?contains(Description, \`PROD\`) == \`true\`]|[?StartTime < \`$TARGET_DATE\`]" --output text
Here I used the StartTime field of the objects in the Snapshots[] array and compared it against the target date. StartTime is an ISO 8601 timestamp (e.g. 2016-07-21T08:15:30.000Z), and ISO 8601 timestamps sort chronologically when sorted as strings, so a plain string comparison against a YYYY-MM-DD value is good enough for the query to work.
In all the examples above, the aws command returned a subset of the Snapshots[] array and displayed all fields for each object in the array. If you wanted to display specific fields, let's say the ID, the start time and the description of each snapshot, you would run:
# aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --query "Snapshots[?contains(Description, \`PROD\`) == \`true\`]|[?StartTime < \`$TARGET_DATE\`].[SnapshotId,StartTime,Description]" --output text
To delete old snapshots, you can use the aws ec2 delete-snapshot command, which needs a snapshot ID as a parameter. You could use the command above to list only the SnapshotId for snapshots older than N days, then for each of these IDs, run something like this:
# aws ec2 delete-snapshot --profile profile1 --snapshot-id $id
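Putting the pieces together, a minimal sketch of the interactive version (reusing the TARGET_DATE variable and the placeholder account number from above) might look like this:

for sid in $(aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER \
    --query "Snapshots[?contains(Description, \`PROD\`) == \`true\`]|[?StartTime < \`$TARGET_DATE\`].[SnapshotId]" \
    --output text); do
  aws ec2 delete-snapshot --profile profile1 --snapshot-id $sid
done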
All this is well and good when you run these commands interactively at the shell. However, I had no luck running them out of cron. The backquotes resulted in boto3 syntax errors. I had to do it the hard way, by listing all snapshots first, then going all in with sed and awk:
aws ec2 describe-snapshots --profile profile1 --owner-id YOUR_AWS_ACCT_NUMBER --output=text --query "Snapshots[].[SnapshotId,StartTime,Description]"  > $TMP_SNAPS
DAYS=7
TARGET_DATE=`date --date="$DAYS day ago" +%Y-%m-%d`
cat $TMP_SNAPS | grep PROD | sed 's/T[0-9][0-9]:[0-9][0-9]:[0-9][0-9].000Z//' | awk -v target_date="$TARGET_DATE" '{if ($2 < target_date){print}}' > $TMP_PROD_SNAPS
echo PRODUCTION SNAPSHOTS OLDER THAN $DAYS DAYS
cat $TMP_PROD_SNAPS
for sid in `awk '{print $1}' $TMP_PROD_SNAPS`; do
  echo Deleting PROD snapshot $sid
  aws ec2 delete-snapshot --profile $PROFILE --region $REGION --snapshot-id $sid
done
Ugly, but it works out of cron. Hope it helps somebody out there.

July 14th 2016: I initially forgot to include this very good blog post from Joseph Lawson on advanced JMESPath usage with the AWS CLI.

Cracking the Code Interview

From the Editor of Methods & Tools - Wed, 07/13/2016 - 14:03
Software development interviews are a different breed from other interviews and, as such, require specialized skills and techniques. This talk will teach you how to prepare for coding interviews, what top companies like Google, Amazon, and Microsoft really look for, and how to tackle the toughest programming and algorithm problems. This is not a fluffy […]

More tips and tricks for running Gatling in Docker containers

Agile Testing - Grig Gheorghiu - Fri, 07/01/2016 - 18:11
This post is a continuation of my previous one on "Running Gatling tests in Docker containers via Jenkins". As I continued to set up Jenkins jobs to run Gatling tests, I found the need to separate those tests for different environments - development, staging and production. The initial example I showed contained a single setup, which is not suitable for multiple environments.

Here is my updated Gatling directory structure

gatling
gatling/conf
gatling/conf/production
gatling/conf/production/gatling.conf
gatling/conf/staging
gatling/conf/staging/gatling.conf
gatling/Dockerfile
gatling/results
gatling/user-files
gatling/user-files/data
gatling/user-files/data/production-urls.csv
gatling/user-files/data/staging-urls.csv
gatling/user-files/simulations
gatling/user-files/simulations/development
gatling/user-files/simulations/production
gatling/user-files/simulations/production/Simulation.scala
gatling/user-files/simulations/staging
gatling/user-files/simulations/staging/Simulation.scala

Note that I created a separate directory under simulations for each environment (development, staging, production), each with its own simulation files.

I also created a data directory under user-files, because that is the default location for CSV files used by Gatling feeders.

Most importantly, I created a separate configuration directory (staging, production) under gatling/conf, each directory containing its own customized gatling.conf file. I started by copying the gatling-defaults.conf file from GitHub to gatling/conf/staging/gatling.conf and gatling/conf/production/gatling.conf respectively.

Here is what I customized in staging/gatling.conf:

mute = true # When set to true, don't ask for simulation name nor run description
simulations = user-files/simulations/staging

I customized production/gatling.conf in a similar way:

mute = true # When set to true, don't ask for simulation name nor run description
simulations = user-files/simulations/production

Setting mute to true is important because without it, running Gatling in a Docker container failed with a NullPointerException while waiting for user input for the simulation ID:

Select simulation id (default is 'gatlingsimulation'). Accepted characters are a-z, A-Z, 0-9, - and _
Exception in thread "main" java.lang.NullPointerException
        at io.gatling.app.Selection$Selector.loop$1(Selection.scala:127)
        at io.gatling.app.Selection$Selector.askSimulationId(Selection.scala:135)
        at io.gatling.app.Selection$Selector.selection(Selection.scala:50)
        at io.gatling.app.Selection$.apply(Selection.scala:33)
        at io.gatling.app.Gatling.runIfNecessary(Gatling.scala:75)
        at io.gatling.app.Gatling.start(Gatling.scala:65)
        at io.gatling.app.Gatling$.start(Gatling.scala:57)
        at io.gatling.app.Gatling$.fromArgs(Gatling.scala:49)
        at io.gatling.app.Gatling$.main(Gatling.scala:43)
        at io.gatling.app.Gatling.main(Gatling.scala)

The other customization was to point the simulations attribute to the specific staging or production sub-directories.
Since the CSV files containing URLs to be load tested are also environment-specific, I modified the Simulation.scala files to take this into account. I also added 2 JAVA_OPTS variables that can be passed at runtime for HTTP basic authentication. Here is the new Crawl object (compare with the one from my previous post):
object Crawl {

  val feeder = csv("staging-urls.csv").random

  val userName = System.getProperty("username")
  val userPass = System.getProperty("password")

  val crawl = exec(feed(feeder)
    .exec(http("${loc}")
    .get("${loc}").basicAuth(userName, userPass)
    ))
}
One more thing is needed: to make Gatling use a specific configuration file instead of its default one, which is conf/gatling.conf. To do that, I set GATLING_CONF as an ENV variable in the Dockerfile, so it can be passed as a 'docker run' command line parameter. Here is the Dockerfile:
# Gatling is a highly capable load testing tool.
#
# Documentation: http://gatling.io/docs/2.2.2/
# Cheat sheet: http://gatling.io/#/cheat-sheet/2.2.2

FROM java:8-jdk-alpine

MAINTAINER Denis Vazhenin

# working directory for gatling
WORKDIR /opt

# gatling version
ENV GATLING_VERSION 2.2.2

# create directory for gatling install
RUN mkdir -p gatling

# install gatling
RUN apk add --update wget && \
  mkdir -p /tmp/downloads && \
  wget -q -O /tmp/downloads/gatling-$GATLING_VERSION.zip \
  https://repo1.maven.org/maven2/io/gatling/highcharts/gatling-charts-highcharts-bundle/$GATLING_VERSION/gatling-charts-highcharts-bundle-$GATLING_VERSION-bundle.zip && \
  mkdir -p /tmp/archive && cd /tmp/archive && \
  unzip /tmp/downloads/gatling-$GATLING_VERSION.zip && \
  mv /tmp/archive/gatling-charts-highcharts-bundle-$GATLING_VERSION/* /opt/gatling/

# change context to gatling directory
WORKDIR /opt/gatling

# set directories below to be mountable from host
VOLUME ["/opt/gatling/conf", "/opt/gatling/results", "/opt/gatling/user-files"]

# set environment variables
ENV PATH /opt/gatling/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GATLING_HOME /opt/gatling
ENV GATLING_CONF /opt/gatling/conf
ENV JAVA_OPTS ""

ENTRYPOINT ["gatling.sh"]
Finally, here is how I invoke 'docker run' to tie everything together:
docker run --rm -v ${WORKSPACE}/gatling/conf:/opt/gatling/conf -v ${WORKSPACE}/gatling/user-files:/opt/gatling/user-files -v ${WORKSPACE}/gatling/results:/opt/gatling/results -e GATLING_CONF="/opt/gatling/conf/staging" -e JAVA_OPTS="-Dusers=$USERS -Dduration=$DURATION -Dusername=myusername -Dpassword=mypass" /PATH/TO/DOCKER/REGISTRY/gatling
Note the GATLING_CONF parameter passed with -e with the value of /opt/gatling/conf/staging. Also note the username and password JAVA_OPTS parameters.

Happy load testing!

Running Gatling load tests in Docker containers via Jenkins

Agile Testing - Grig Gheorghiu - Wed, 06/29/2016 - 00:16
Gatling is a modern load testing tool written in Scala. As part of the Jenkins setup I am in charge of, I wanted to run load tests using Gatling against a collection of pages for a given website. Here are my notes on how I managed to do this.

Running Gatling as a Docker container locally

There is a Docker image already available on DockerHub, so you can simply pull down the image locally:


$ docker pull denvazh/gatling:2.2.2
Instructions on how to run a container based on this image are available on GitHub:
$ docker run -it --rm -v /home/core/gatling/conf:/opt/gatling/conf \
    -v /home/core/gatling/user-files:/opt/gatling/user-files \
    -v /home/core/gatling/results:/opt/gatling/results \
    denvazh/gatling:2.2.2
Based on these instructions, I created a local directory called gatling, and under it I created 3 sub-directories: conf, results and user-files. I left the conf and results directories empty, and under user-files I created a simulations directory containing a Gatling load test scenario written in Scala. I also created a file in the user-files directory called urls.csv, containing a header named loc and a URL per line for each page that I want to load test.
Assuming the current directory is gatling, here are examples of these files:
$ cat user-files/urls.csv
loc
https://my.website.com
https://my.website.com/category1
https://my.website.com/category2/product3
$ cat user-files/simulations/Simulation.scala
package my.gatling.simulation
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class GatlingSimulation extends Simulation {

  val httpConf = http
    .baseURL("http://127.0.0.1")
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0")

  val scn1 = scenario("Scenario1")
    .exec(Crawl.crawl)

  val userCount = Integer.getInteger("users", 1)
  val durationInSeconds = java.lang.Long.getLong("duration", 10L)

  setUp(
    scn1.inject(rampUsers(userCount) over (durationInSeconds seconds))
  ).protocols(httpConf)
}

object Crawl {

  val feeder = csv("/opt/gatling/user-files/urls.csv").random

  val crawl = exec(feed(feeder)
    .exec(http("${loc}")
    .get("${loc}")
    ))
}

I won't go through the different ways of writing Gatling load test scenarios here. There are good instructions on the Gatling website -- see the Quickstart and the Advanced Tutorial. The scenario above reads the urls.csv file, randomly picks a URL from it, then runs a load test against that URL.
I do want to point out 2 variables in the above script:

  val userCount = Integer.getInteger("users", 1)
  val durationInSeconds = java.lang.Long.getLong("duration", 10L)
These variables specify the max number of users we want to ramp up to, and the duration of the ramp-up. They are used in the inject call:

scn1.inject(rampUsers(userCount) over (durationInSeconds seconds))

The special thing about these 2 variables is that they are read from JAVA_OPTS by Gatling. So if you have a -Dusers Java option and a -Dduration Java option, Gatling will know how to read them and how to set the userCount and durationInSeconds variables accordingly. This is a good thing, because it allows you to specify those numbers outside of Gatling, without hardcoding them in your simulation script. Here is more info on passing parameters via the command line to Gatling.
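Outside of Docker, assuming you run the bundled launcher directly from the Gatling install directory, the same mechanism would look something like this (the simulation class name matches the example above):

JAVA_OPTS="-Dusers=10 -Dduration=60" ./bin/gatling.sh -s my.gatling.simulation.GatlingSimulation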

While pulling the Gatling docker image and running it is the simplest way to run Gatling, I prefer to understand what's going on in that image. I started off by getting the Dockerfile from GitHub:

$ cat Dockerfile
# Gatling is a highly capable load testing tool.
#
# Documentation: http://gatling.io/docs/2.2.2/
# Cheat sheet: http://gatling.io/#/cheat-sheet/2.2.2

FROM java:8-jdk-alpine

MAINTAINER Denis Vazhenin

# working directory for gatling
WORKDIR /opt

# gatling version
ENV GATLING_VERSION 2.2.2

# create directory for gatling install
RUN mkdir -p gatling

# install gatling
RUN apk add --update wget && \
  mkdir -p /tmp/downloads && \
  wget -q -O /tmp/downloads/gatling-$GATLING_VERSION.zip \
  https://repo1.maven.org/maven2/io/gatling/highcharts/gatling-charts-highcharts-bundle/$GATLING_VERSION/gatling-charts-highcharts-bundle-$GATLING_VERSION-bundle.zip && \
  mkdir -p /tmp/archive && cd /tmp/archive && \
  unzip /tmp/downloads/gatling-$GATLING_VERSION.zip && \
  mv /tmp/archive/gatling-charts-highcharts-bundle-$GATLING_VERSION/* /opt/gatling/

# change context to gatling directory
WORKDIR /opt/gatling

# set directories below to be mountable from host
VOLUME ["/opt/gatling/conf", "/opt/gatling/results", "/opt/gatling/user-files"]

# set environment variables
ENV PATH /opt/gatling/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV GATLING_HOME /opt/gatling
ENTRYPOINT ["gatling.sh"]
I then added a way to pass JAVA_OPTS via an environment variable. I added this line after the ENV GATLING_HOME line:
ENV JAVA_OPTS ""
I dropped this Dockerfile in my gatling directory, then built a local Docker image off of it:
$ docker build -t gatling:local .
I  then invoked 'docker run' to launch a container based on this image, using the csv and simulation files from above. The current directory is still gatling.

$ docker run --rm -v `pwd`/conf:/opt/gatling/conf -v `pwd`/user-files:/opt/gatling/user-files -v `pwd`/results:/opt/gatling/results -e JAVA_OPTS="-Dusers=10 -Dduration=60" gatling:local -s MySimulationName

Note the -s flag, which denotes a simulation name (which can be any string you want). If you don't specify this flag, the gatling.sh script, which is the ENTRYPOINT in the container, will wait for user input and you will not be able to fully automate your load test.

Another thing to note is the use of JAVA_OPTS. In the example above, I pass -Dusers=10 and   -Dduration=60 as the two JAVA_OPTS parameters. The JAVA_OPTS variable itself is passed to 'docker run' via the -e option, which tells Docker to replace the default value for ENV JAVA_OPTS (which is "") with the value passed with -e.

Running Gatling as a Docker container from Jenkins

Once you have a working Gatling container locally, you can upload the Docker image built above to a private Docker registry. I used a private EC2 Container Registry (ECR).  

I also added the gatling directory and its sub-directories to a GitHub repository called devops.

In Jenkins, I created a new "Freestyle project" job with the following properties:

  • Parameterized build with 2 string parameters: USERS (default value 10) and DURATION in seconds (default value 60)
  • Git repository - add URL and credentials for the devops repository which contains the gatling files
  • An "Execute shell" build command similar to this one:
docker run --rm -v ${WORKSPACE}/gatling/conf:/opt/gatling/conf -v ${WORKSPACE}/gatling/user-files:/opt/gatling/user-files -v ${WORKSPACE}/gatling/results:/opt/gatling/results -e JAVA_OPTS="-Dusers=$USERS -Dduration=$DURATION"  /PATH/TO/DOCKER/REGISTRY/gatling -s MyLoadTest 


Note that we mount the gatling directories as Docker volumes, similarly to when we ran the Docker container locally, only this time we specify ${WORKSPACE} as the base directory. The 2 string parameters USERS and DURATION are passed as variables in JAVA_OPTS.
A nice thing about running Gatling via Jenkins is that the reports are available in the Workspace directory of the project. If you go to the Gatling project we created in Jenkins, click on Workspace, then on gatling, then results, you should see directories named gatlingsimulation-TIMESTAMP for each Gatling run. Each of these directories should have an index.html file, which will show you the Gatling report dashboard. Pretty neat.

The Bestselling Software Intended for People Who Couldn’t Use It.

James Bach’s Blog - Mon, 06/27/2016 - 16:26

In 1983, my boss, Dale Disharoon, designed a little game called Alphabet Zoo. My job was to write the Commodore 64 and Apple II versions of that game. Alphabet Zoo is a game for kids who are learning to read. The child uses a joystick to move a little character through a maze, collecting letters to spell words.

We did no user testing on this game until the very day we sent the final software to the publisher. On that day, we discovered that our target users (five year-olds) did not have the ability to use a joystick well enough to play the game. They just got frustrated and gave up.

We shipped anyway.

This game became a bestseller.

[image: alphabet-zoo]

It was placed on a list of educational games recommended by the National Education Association.

[image: alphabet-zoo-list]

Source: Information Please Almanac, 1986

So, how to explain this?

Some years later, when I became a father, I understood. My son was able to play a lot of games that were too hard for him because I operated the controls. I spoke to at least one dad who did exactly that with Alphabet Zoo.

I guess the moral of the story is: we don’t necessarily know the value of our own creations.

Categories: Testing & QA

Agile Hiring, Load Testing & Goal Management in Methods & Tools Summer 2016 issue

From the Editor of Methods & Tools - Tue, 06/21/2016 - 08:02
Methods & Tools has published its Summer 2016 issue that discusses hiring for agility, load testing scripts errors, managing with goals on every level and Behavior-Driven Development (BDD) with the open source Turnip tool. Methods & Tools is a free e-magazine for software developers, testers and project managers. * Hiring for Agility – Mindset Matters […]

Running Jenkins jobs in Docker containers

Agile Testing - Grig Gheorghiu - Fri, 06/17/2016 - 00:08
One of my main tasks at work is to configure Jenkins to act as a hub for all the deployment and automated testing jobs we run. We use CloudBees Jenkins Enterprise, mostly for its Role-Based Access Control plugin, which allows us to create one Jenkins folder per project/application and establish fine grained access control to that folder for groups of users. We also make heavy use of the Jenkins Enterprise Pipeline features (which I think are also available these days in the open source version).

Our Jenkins infrastructure is composed of a master node and several executor nodes which can run jobs in parallel if needed.

One pattern that my colleague Will Wright and I have decided upon is to run all Jenkins jobs as Docker containers. This way, we only need to install Docker Engine on the master node and the executor nodes. No need to install any project-specific pre-requisites or dependencies on every Jenkins node. All of these dependencies and pre-reqs are instead packaged in the Docker containers. It's a simple but powerful idea that has worked very well for us. One of the nice things about this pattern is that you can keep adding various types of automated tests. If it can run from the command line, then it can run in a Docker container, which means you can run it from Jenkins!

I have seen this pattern discussed in multiple places recently, for example in this blog post about "Using Docker for a more flexible Jenkins".

Here are some examples of Jenkins jobs that we create for a given project/application:
  • a deployment job that runs Capistrano in its own Docker container, against targets in various environments (development, staging, production); this is a Pipeline script written in Groovy, which can call other jobs below
  • a Web UI testing job that runs the Selenium Python WebDriver and drives Firefox in headless mode (see my previous post on how to do this with Docker)
  • a JavaScript syntax checking job that runs JSHint against the application's JS files
  • an SSL scanner/checker that runs SSLyze against the application endpoints
We also run other types of tasks, such as running an AWS CLI command to perform certain actions, for example to invalidate a CloudFront resource. I am going to show here how we create a Docker image for one of these jobs, how we test it locally, and how we then integrate it in Jenkins.

I'll use as an example a simple Docker image that installs the AWS CLI package and runs a command when the container is invoked via 'docker run'.
I assume you have a local version of Docker installed. If you are on a Mac, you can use Docker Toolbox, or, if you are lucky and got access to it, you can use the native Docker for Mac. In any case,  I will assume that you have a local directory called awscli with the following Dockerfile in it:
FROM ubuntu:14.04
MAINTAINER You Yourself <you@example.com>
# disable interactive functions
ARG DEBIAN_FRONTEND=noninteractive

ENV AWS_ACCESS_KEY_ID=""
ENV AWS_SECRET_ACCESS_KEY=""
ENV AWS_COMMAND=""
RUN apt-get update && \
    apt-get install -y python-pip && \
    pip install awscli
WORKDIR /root
CMD (export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID; export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY; $AWS_COMMAND)
As I mentioned, this simply installs the awscli Python package via pip, then runs a command given as an environment variable when you invoke 'docker run'. It also uses two other environment variables that contain the AWS access key ID and secret access key. You don't want to hardcode these secrets in the Dockerfile and have them end up on GitHub.
The next step is to build an image based on this Dockerfile. I'll call the image awscli and I'll tag it as local:

$ docker build -t awscli:local .

Then you can run a container based on this image. The command line looks a bit complicated because I am passing (via the -e switch) the 3 environment variables discussed above:


$ docker run --rm -e AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY -e AWS_COMMAND='aws cloudfront create-invalidation --distribution-id=abcdef --invalidation-batch Paths={Quantity=1,Items=[/media/*]},CallerReference=my-invalidation-123456' awscli:local
(where distribution-id needs to be the actual ID of your CloudFront distribution, and CallerReference needs to be unique per invalidation)

If all goes well, you should see the output of the 'aws cloudfront create-invalidation' command.

In our infrastructure, we have a special GitHub repository where we check in the various folders containing the Dockerfiles and any static files that need to be copied over to the Docker images. When we push the awscli directory to GitHub for example, we have a Jenkins job that will be notified of that commit and that will build the Docker image (similarly to how we did it locally with 'docker build'), then it will 'docker push' the image to a private AWS ECR repository we have.
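That image-building job essentially repeats the local steps; a minimal sketch of its shell step (the region, registry ID and tag are placeholders, and it assumes the aws ecr get-login helper available in the CLI at the time) might look like:

# log Docker in to the private ECR registry, then build, tag and push the image
$(aws ecr get-login --region us-west-2)
docker build -t awscli:latest awscli/
docker tag awscli:latest MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli:latest
docker push MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli:latest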

Now let's assume we want to create a Jenkins job that will run this image as a container. First we define 2 secret credentials, specific to the Jenkins folder where we want to create the job (there are also global Jenkins credentials that can apply to all folders). These credentials are of type "Secret text" and contain the AWS access key ID and the AWS secret access key.

Then we create a new Jenkins job of type "Freestyle project" and call it cloudfront.invalidate. The build for this job is parameterized and contains 2 parameters: CF_ENVIRONMENT which is a drop-down containing the values "Staging" and "Production" referring to the CloudFront distribution we want to invalidate; and CF_RESOURCE, which is a text variable that needs to be set to the resource that needs to be invalidated (e.g. /media/*).

In the Build Environment section of the Jenkins job, we check "Use secret text(s) or file(s)" and add 2 Bindings, one for the first secret text credential containing the AWS access key ID, which we save in a variable called AWS_ACCESS_KEY_ID, and the other one for the second secret text credential containing the AWS secret access key, which we save in a variable called AWS_SECRET_ACCESS_KEY.

The Build section for this Jenkins job has a step of type "Execute shell" which uses the parameters and variables defined above and invokes 'docker run' using the path to the Docker image from our private ECR repository:

DISTRIBUTION_ID=MY_CF_DISTRIBUTION_ID_FOR_STAGING
if [ "$CF_ENVIRONMENT" == "Production" ]; then
    DISTRIBUTION_ID=MY_CF_DISTRIBUTION_ID_FOR_PRODUCTION
fi

INVALIDATION_ID=jenkins-invalidation-`date +%Y%m%d%H%M%S`

COMMAND="aws cloudfront create-invalidation --distribution-id=$DISTRIBUTION_ID --invalidation-batch Paths={Quantity=1,Items=[$CF_RESOURCE]},CallerReference=$INVALIDATION_ID"

docker run --rm -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY -e AWS_COMMAND="$COMMAND" MY_PRIVATE_ECR_ID.dkr.ecr.us-west-2.amazonaws.com/awscli


When this job is run, the Docker image gets pulled down from AWS ECR, then a container based on the image is run and then removed upon completion (that's what --rm does, so that no old containers are left around).

I'll write another post soon with some more examples of Jenkins jobs that we run as Docker containers to do Selenium testing, JSHint testing and SSLyze scanning.






Software Development Linkopedia June 2016

From the Editor of Methods & Tools - Mon, 06/13/2016 - 14:35
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about software bugs, rewriting code, project routines, improving retrospectives, acceptance testing problems, #noprojects, DevOps and Agile (or not) software architecture. Blog: Software has bugs. This is normal. Blog: When to Rewrite […]

Quote of the Month June 2016

From the Editor of Methods & Tools - Mon, 06/06/2016 - 12:56
The only software which doesn’t change is dead software. Stopping software from changing is a fast way to kill it. Treating software development as a temporary endeavour kills it. Regarding software as temporary also means the teams which create it get regarded as temporary. Such teams get disbanded at “the end”. The developers scatter to […]

Continuum of Design

Actively Lazy - Tue, 05/17/2016 - 06:45

How best to organise your code? At one end of the scale there is a god class – a single, massive entity that stores all possible functions; at the other end of the scale are hundreds of static methods, each in their own class. In general, these two extremes are both terrible designs. But there is a continuum of alternatives between these two extremes – choosing where along this scale your code should be is often difficult to judge and often changes over time.

Why structure code at all?

It’s a fair enough question – what is so bad about the two extremes? They are, really, more similar to each other than to any of the points between. In one there is a single class with every method in it; in the other, I have static methods one per class, or maybe I’m using a language which doesn’t even require me to group by classes. In both cases, there is no organisation. No structure larger than methods with which to help organise or group related code.

Organising code is about making it easier for human beings to understand. The compiler or runtime doesn’t care how your code is organised, it will run it just the same. It will work the same and look the same – from the outside there is no difference. But for a developer making changes, the difference is critical. How do I find what I need to change? How can I be sure what my change will impact? How can I find other similar things that might be affected? These are all questions we have to answer when making changes to code and they require us to be able to reason about the code.

Bounded contexts help contain understanding – by limiting the size of the problem I need to think about at any one time I can focus on a smaller portion of it, but even within that bounded context organisation helps. It is much easier for me to understand how 100 methods work when they are grouped into 40 classes, with relationships between the classes that make sense in the domain – than it is for me to understand a flat list of 100 methods.

An Example

Let’s imagine we’re writing software to manage a small library (for you kids too young to remember: a library is a place where you can borrow books and return them when you’re done reading them; a book is like a physically printed blog but without the comments). We can imagine the kinds of things (methods) this system might support:

  • Add new title to the catalog
  • Add new copy of title
  • Remove copy of a title
  • User borrows a copy
  • User returns a copy
  • Register a new user
  • Print barcode for new copy
  • Fine a user for a late return
  • Pay outstanding fines for a user
  • Find title by ISBN
  • Find title by name, author
  • Find copy by scanned id
  • Change user’s address
  • List copies user has already borrowed
  • Print overdue book letter for user

This toy example is small enough that, written as one class, it would probably be manageable; pretty horrible, but manageable. How else could we go about organising this?

Horizontal Split

We could split functionality horizontally: by technical concern. For example, by the database that data is stored in; or by the messaging system that’s used. This can work well in some cases, but can often lead to more god classes because your class will be as large as the surface area of the boundary layer. If this is small and likely to remain small it can be a good choice, but all too often the boundary is large or grows over time and you have another god class.

For example, functionality related to payments might be grouped simply into a PaymentsProvider – with only one or two methods we might decide it is unlikely to grow larger. Less obviously, we might group printing related functionality into a PrinterManager – while it might only have two methods now, if we later start printing marketing material or management reports the methods become less closely related and we have the beginnings of a god class.

Vertical Split

The other obvious way to organise functionality is vertically – group methods that relate to the same domain concept together. For example, we could group some of our methods into a LendingManager:

  • User borrows a copy
  • User returns a copy
  • Register a new user
  • Fine a user for a late return
  • Find copy by scanned id
  • List copies user has already borrowed

Even in our toy example this class already has six public methods. A coarse grained grouping like this often ends up being called a SomethingManager or TheOtherService. While this is sometimes a good way to group methods, the lack of clear boundary means new functionality is easily added over time and we grow ourselves a new god class.

A more fine-grained vertical grouping would organise methods into the domain objects they relate to – the recognisable nouns in the domain, where the methods are operations on those nouns. The nouns in our library example are obvious: Catalog, Title, Copy, User. Each of these has two or three public methods – but to understand the system at the outer level I only need to understand the four main domain objects and how they relate to each other, not all the individual methods they contain.

The fine-grained structure allows us to reason about a larger system than if it was unstructured. The extents of the domain objects should be relatively clear, if we add new methods to Title it is because there is some new behaviour in the domain that relates to Title – functionality should only grow slowly, we should avoid new god classes; instead new functionality will tend to add new classes, with new relationships to the existing ones.
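
As a rough sketch (the class and method names here are invented for this toy example; Kotlin is used since it appears elsewhere on this page), the fine-grained vertical grouping might look something like this:

class Title(val isbn: String, val name: String, val author: String)

class Copy(val barcode: String, val title: Title) {
    // a copy knows whether, and by whom, it is currently borrowed
    var borrowedBy: User? = null
    fun borrowBy(user: User) { borrowedBy = user }
    fun returned() { borrowedBy = null }
}

class User(val name: String, var address: String) {
    val borrowedCopies = mutableListOf<Copy>()
    fun borrow(copy: Copy) { copy.borrowBy(this); borrowedCopies.add(copy) }
}

class Catalog {
    private val titles = mutableMapOf<String, Title>()
    fun add(title: Title) { titles[title.isbn] = title }
    fun findByIsbn(isbn: String): Title? = titles[isbn]
}

Each class stays small, and the outer-level picture is just the four nouns and their relationships.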

What’s the Right Way?

Obviously there’s no right answer in all situations. Even in our toy example it’s clear to see that different structures make sense for different areas of functionality. There are a range of design choices and no right or wrong answers. Different designs will ultimately all solve the same problem, but humans will find some designs easier to understand and change than others. This is what makes design such a hard problem: the right answer isn’t always obvious and might not be known for some time. Even worse, the right design today can look like the wrong design tomorrow.


Categories: Programming, Testing & QA

Message Obsession

Mistaeks I Hav Made - Nat Pryce - Wed, 03/23/2016 - 23:19
I’ve noticed a “code smell” in object-oriented code that I call “Message Obsession”. I find Message Obsession causes similar difficulties to Primitive Obsession. However, Message Obsession appears to be the complete opposite of Primitive Obsession. Refactoring to address the difficulties caused by either Primitive Obsession or Message Obsession leads to the same design.

To demonstrate Message Obsession, and to compare it to Primitive Obsession, imagine a game in which a robot moves around a grid of tiles.

Primitive Obsession

The code below, which moves the robot east and then north, suffers from Primitive Obsession. Domain concepts – direction of movement, in this case – are held as multiple primitive data types instead of being modelled explicitly.

robot.move(0,1)
robot.move(-1,0)

It’s awkward to work with this interface because you have to pass the deltaX and deltaY coordinates around as a pair of values instead of as a single Direction value. This can lead to code duplication. Any code that has to store moves will have to define a structure to do so. Code that has to perform calculations on directions – to add, invert or rotate directions, for example – will do so via helper functions or inline calculations. This logic ends up duplicated all over the code because programmers working in one area do not know that programmers working in other areas are writing the same calculations.

Primitive Obsession can also lead to errors. Are the coordinates passed to the move function as x and y or row and column? The type system won’t catch the error because both coordinates are integers. Named parameters help, of course, but not all languages support them.

Message Obsession

Programmers know about the drawbacks of Primitive Obsession, but sometimes create designs that go too far the other way, to the smell I call “Message Obsession”. A design that suffers from Message Obsession does not pass around primitive values – it does not pass around values at all. Instead, objects implement several similar methods, each of which manipulates the same fields used to store primitive values inside the object. Message Obsession leads to code like:

robot.moveEast()
robot.moveNorth()

It’s awkward to work with this kind of interface because you can’t pass around, store or perform calculations on the direction at all.

Symptoms

Symptoms caused by this smell include…

…class hierarchies that mirror the direction methods. For example, a hierarchy of Command classes with a concrete class for each direction the robot can move:

interface MoveCommand {
    fun applyTo(r: Robot)
}

object MoveNorth : MoveCommand {
    override fun applyTo(r: Robot) = r.moveNorth()
}

object MoveSouth : MoveCommand {
    override fun applyTo(r: Robot) = r.moveSouth()
}
... etc. ...

…dispatching with conditional statements:

when (keyCode) {
    UP_ARROW -> robot.moveNorth()
    DOWN_ARROW -> robot.moveSouth()
    ... etc. ...
}

…using anonymous functions to treat directions as first-class values:

val movesByKeyCode : Map<Int, (Robot) -> Unit> = mapOf(
    UP_ARROW to {robot -> robot.moveNorth()},
    DOWN_ARROW to {robot -> robot.moveSouth()},
    ... etc. ...
)

Cure

The duplication in the method names shows that, as with Primitive Obsession, there’s a value type to be factored out. Let’s call it “Direction”. The names of the methods show that we’ll need some constant Direction values and what those constants should be called.

We really want code that moves the robot to look like:

robot.move(east)
robot.move(north)

We could define the Direction type as:

data class Direction(val deltaX: Int, val deltaY: Int)

And direction constants as:

val north = Direction( 0, -1)
val east = Direction( 1, 0)
val south = Direction( 0, 1)
val west = Direction(-1, 0)

Now directions are first class values: they can be stored in collections, mapped to and from user input events, rotated, inverted, etc. For example:

val movementKeyBindings: Map

The value type acts as an “attractor” for behaviour. We can add operations to it to rotate or invert directions, for example.

fun Direction.inverse() = Direction(-deltaX, -deltaY)
fun Direction.rotatedClockwise() = Direction(deltaY, -deltaX)
fun Direction.rotatedAnticlockwise() = Direction(-deltaY, deltaX)
... etc. ...

Those rotatedClockwise and rotatedAnticlockwise methods also exhibit Message Obsession! Maybe we will need to introduce a RightAngleTurn type as the system evolves…

Implicit Duplication

Sometimes the Message Obsession smell is not obvious from the names. The duplication in naming is missing or implicit. For example, the term “move” is not used in the following code, but is implied by the behaviour of the east and north methods.

robot.east()
robot.north()

The missing concept of “move” has to be named before the duplication stands out.

Refactoring too Far, and Back Again

I’m struck by how Message Obsession acts as an opposite of Primitive Obsession. I have often seen Message Obsession appearing as a reaction to Primitive Obsession, pushing the design too far in the opposite direction. The design eventually settles on a compositional style as the need for first-class values arises and the drawbacks of Message Obsession become clear.
Categories: Programming, Testing & QA

Early Impressions of Kotlin

Mistaeks I Hav Made - Nat Pryce - Thu, 12/31/2015 - 00:07
We’ve been using the Kotlin programming language for a few weeks on our latest project to perform technical experiments, explore the problem space, and write a few HTTP services. I’ve also ported Hamcrest to Kotlin, as HamKrest, to help us write tests, and written a small library for type safe configuration of our services. Why Kotlin? The organisation I’m working with has mature infrastructure for deploying JVM services in their internal PaaS cloud. They use a mix of Java and Scala but have found Scala builds too slow. Watching my colleague struggle to use Java 8 streams to write what should have been basic functional map-and-fold code, I decided to have a look at some other “post-Java” JVM languages. We wanted a typed language. We wanted language aware editing. And we wanted a language that had an organisation behind its development and enough people using it that we could get questions answered, even the stupid ones we’d be likely to ask while learning. That eliminated dynamically typed languages (Groovy, JavaScript, Clojure) and languages that are less popular or have small, informal development teams behind them (Xtend, Gosu, Fantom, Frege, etc.). In the end it came down to Red Hat’s Ceylon and JetBrains’ Kotlin. Of the two, Ceylon is the most innovative, and therefore (to me, anyway) the most interesting, but Kotlin met more of our criteria. Kotlin has a more active community, is being used for commercial development by JetBrains and a number of other companies1, has good editing support within IntelliJ, has an active community on social media, and promises easy interop with existing Java libraries. Good points Kotlin was very quick to learn. Kotlin is a conservative increment to Java that smooths off a lot of Java’s rough edges. Kotlin is small and regular, with few special cases and gotchas to learn. In many ways it feels a bit like a compiled, typed Python with curly brace syntax. The type system is a breath of fresh air compared to Java. Type inferencing makes code less cluttered. There is no distinction between primitive & reference types. Generics and subtyping work together far better than Java: the type system uses declaration site variance, not use site variance, and variance does not usually have to be specified at all for functions. You never have unavoidable compiler warnings, as you do with Java’s type system. Functional programming is more convenient in Kotlin than Java. You can define free-standing functions and constants at the top level of a module, instead of having to define them in a “Utils” class. Function definitions can be nested. There is language support for immutable value types (aka “data classes”) and algebraic data types (“sealed classes”). Null safety is enforced by the type system and variables and fields are non-nullable by default. The language defines standard function types and a lambda syntax for anonymous functions. You can define extension methods on existing types. They are only syntactic sugar for free-standing functions, but nevertheless can lead to more concise, expressive code. The Kotlin standard library defines a number of useful extension methods, especially on the Iterable and String types. Kotlin supports, and carefully controls, operator overloading. You can overload operators that act as functions (e.g. arithmetic, comparison, function call) but not those that perform flow control (e.g. short-cut logical operators). 
You cannot define your own operators, which will stop me going down the rabbit hole of “ascii-art programming”, which I found hard to resist when writing Scala. Class definitions require a lot less boilerplate than in Java. Using old fashioned getter-and-setter style Java code is made more convenient by language support for bean properties. Anonymous extension methods (borrowed from Groovy, I believe) make domain specific embedded languages easier to write. Method chaining builder APIs are unnecessary in Kotlin and more awkward than using the apply function to set properties of an object. Kotlin has a philosophy of preferring behaviour to be explicitly specified. For example, there is no automatic coercion between numeric types, even from low to high precision. That means you have to explicitly convert from Int to Long, for example. I like that, but I expect some people will find it annoying. Surprises I didn’t miss destructuring pattern match Kotlin doesn’t have destructuring pattern match & language support for tuples. A limited form of destructuring can be used in for loops and assignments. The when expression is an alternative conditional expression that doesn’t do destructuring. I was surprised to find that I didn’t really miss destructuring. Without tuples you have to use data classes with named fields. The flow-sensitive typing then works rather well where destructuring would be necessary in a language where the type system is not flow-sensitive. For example: sealed class Result { class Success(val value: T) : Result() class Failure(val exception: Throwable): Result() } ... val e : Result = doSomething() if (e is Result.Success) { // e is known to be a Result.Success and // can be used as such without a downcast println(e.value) } Sealed classes cannot be data classes An algebraic data type cannot be be a data class, so you have to write equals, hashCode, etc. yourself. This is a surprising omission, since I always want an algebraic data type to be a value type. The Kotlin developers say that this will be fixed after version 1.0 is released. Functions are not quite first-class objects There’s a difference between f1 and f2 below: val f1 = {i:Int -> i+1} fun f2(i:Int) = i + 1 Code can refer to the value of f1 directly, but must use the “::” operator to obtain a reference to f2: val f = ::f2 Null safe operators push me to write more extension functions The null safe operators (?. and ?:) only help when dereferencing a nullable reference. When you have a nullable reference and want to pass a value (if it exists) to a function as a parameter, you have to use the awkward construct: nullableThing?.let{ thing -> fn(thing, other, parameters) } I find myself refactoring my functions to extension functions to reduce the syntactic noise, letting me rewrite the code above as: nullableThing?.fn(other, parameters) Is that a good thing? I’m not sure. Sometimes it feels a little forced. Companion objects Kotlin borrows the concept of companion objects from Scala. Why can’t classes be objects? Perhaps it’s a limitation of the JVM but compared to, say, Python, it feels clunky. Frustrations I only have a couple of frustrations with Kotlin. Lack of polymorphism Operators and the for loop are syntactic sugar that desugar to method calls. The methods are invoked statically and structurally — the target does not need to implement an interface that defines the method. For example, a + b desugars to a.add(b). 
However, there’s no way to write a generic function that sums its parameters, because plus is not defined on an interface that can be used as an upper bound. There’s no way to define the following function for any type that has a plus operator method:

    fun <T> sum(first: T, second: T): T = first + second   // does not compile

You would have to define overloads for different types, but then you wouldn’t be able to call sum from within a generic function:

    fun sum(first: Int, second: Int) = first + second
    fun sum(first: Long, second: Long) = first + second
    fun sum(first: Double, second: Double) = first + second
    fun sum(first: Matrix, second: Matrix) = first + second
    fun sum(first: Money, second: Money) = first + second
    ...

That kind of duplication is what generics are meant to avoid!

Extension methods are also statically bound. That means you cannot write a generic function that calls an extension method on the value of a type parameter. For example, the standard library defines the same extension methods on unrelated types: forEach, map, flatMap, fold, etc. But there is no concept of, for example, “mappable” or “foldable”. Nor is there a way of defining such a concept and applying it to existing types to allow you to write generic functions over unrelated types. Kotlin doesn’t have higher kinded types that would let you express this kind of generic function, or type classes that would let you add polymorphism without modifying existing type definitions.

Compared to Rust’s traits, which support extension methods, operator overloading, type classes for parametric polymorphism, and interfaces for subtype polymorphism, Kotlin’s monomorphic extension methods and operator overloading are quite limited and do not help refactor duplicated logic. However, for typical monomorphic, procedural Java code this is probably not an issue.

Optionality is a Special Case

Kotlin’s support for nullable types is implemented by a special case in the type system and special case operators that only apply to nullable references. The operators do not desugar to methods that can be implemented for other types. For example, optionalThing?.foo can be considered a map of the function { thing -> thing.foo } over the option optionalThing. If foo itself is nullable, then it can be considered a flatMap. But the expressions do not desugar to map and flatMap. And if you want to map or flatMap a function, you have to use a different syntax: optionalThing?.let(theFunction).

For typical Java code, which is monomorphic and uses null references with wild abandon, language support for nullability is invaluable. But I would find it more convenient if it could be used polymorphically, or if Kotlin used a common naming convention for optional and other functor types.

Null safety is not enforced when you interop with Java code. And you do that a lot. Kotlin doesn’t have many libraries or much of a runtime and makes it easy to call existing Java libraries directly. Kotlin expects you to know what you’re doing with respect to null references when calling Java code, and doesn’t force you to treat every value returned from Java as nullable. As a result, null safety is a bit of an illusion in a lot of our Kotlin code, and it has come back to bite us.

Conclusion

The code we’ve been writing has been a mix of coordinating and piping data between existing Java libraries – Apache HTTP client, Undertow HTTP server, JDBC, Sesame, JSON and XML parsers, and so on – and algorithmic code that analyses human readable text. For that kind of work, Kotlin has been very useful.
The Kotlin code is far more concise than the equivalent Java. In our domain models and algorithmic code, Kotlin’s type safety, and especially null safety, has been a big help. Exactly how happy we’ve been with Kotlin has depended on the design style of the code we’re writing:

  • For functional programming, Kotlin has occasionally been frustrating, because we cannot use parametric polymorphism to factor out duplicated logic as much as we’d like.
  • For object-oriented programming, Kotlin’s concise syntax for class definitions and language support for delegation avoid a lot of Java’s boilerplate.

However, most Java out there is monomorphic, procedural code moving data between “NOJOs” and APIs that expect objects to have “bean” getters and setters. Kotlin has made working with that kind of API much easier and far more concise than doing so in Java.

[1] As far as I can tell, Red Hat sponsor development of Ceylon but do not actually use it to develop their own products. If I’m wrong, please let me know in the comments.
Categories: Programming, Testing & QA

Thought after Test Automation Day 2013

Henry Ford said “Obstacles are those frightful things you see when you take your eyes off the goal.” Since attending Test Automation Day last month I’ve been trying to figure out why industrializing testing doesn’t work. I put it in this negative way deliberately, because I actually think it does work! But when is it successful? A lot of the time Ford’s remark is exactly the problem: people tend to see obstacles, obstacles born of the thought that it isn’t feasible to change something. They need to change. But that’s not an easy change.

After attending #TAD2013, as it was known on Twitter, I saw a huge interest in better testing, faster testing, even cheaper testing by using tools to industrialize. Test automation has long been seen as an interesting option that can enable faster testing. It wasn’t always cheaper, especially the first time, but at least it was faster. As I see it, it also enables better testing. “Better?” you may ask. Test automation by itself doesn’t enable better testing, but by automating regression tests and simple checks, the tester can focus on other areas of quality.


And isn’t that the goal? In the end, everyone involved in a project wants to deliver a high quality product, not one full of bugs. But they also tend to see the obstacles. I see them less and less. New tools are now so advanced, and automation testers are becoming so much smarter, that they enable us to look beyond the obstacles – or, as I would rather say, over the obstacles.

At the Test Automation Day I learned some new things, but it also confirmed something I already knew: test automation is here to stay. We shouldn’t focus on the obstacles, but on the goal.

Categories: Testing & QA

The Toyota Way: The need for doing it right the first time

After WWII Toyota started developing its Toyota Production System (TPS), which came to be identified as ‘Lean’ in the 1990s. Taiichi Ohno, Shigeo Shingo and Eiji Toyoda developed the system between 1948 and 1975. According to the story surrounding the system, it was not inspired by the American automotive industry but by a visit to American supermarkets: Ohno saw the supermarket as a model for what he was trying to accomplish in the factory, and used it to perfect the Just-in-Time (JIT) production system. Low inventory levels were a key outcome of the TPS, and an important element of the philosophy behind the system is to work intelligently and eliminate waste so that only minimal inventory is needed.

TPS and Lean have their own principles, as outlined by Toyota:

  • Long-term Philosophy
  • Right process will produce the right results
  • Value to organization by developing people
  • Solving root problems drives organizational learning

These principles were summed up and published by Toyota in 2001 under the name “The Toyota Way 2001”. It groups the above principles into two key areas: Continuous Improvement and Respect for People.


The principles for continuous improvement include establishing a long-term vision, working on challenges, continual innovation, and going to the source of the issue or problem. The principles relating to respect for people include ways of building respect and teamwork. When looking at ALM, all these principles come together in the ‘first time right’ approach already mentioned. From Toyota’s view they were outlined as follows:

  • The right process will produce the right results
    • Create continuous process flow to bring problems to the surface.
    • Use the ‘pull’ system to avoid overproduction (kanban).
    • Level out the workload (heijunka).
    • Build a culture of stopping to fix problems, to get quality right the first time (jidoka).
  • Continuously solving root problems drives organizational learning
    • Go and see for yourself to thoroughly understand the situation (Genchi Genbutsu);
    • Make decisions slowly by consensus, thoroughly considering all options (nemawashi); implement decisions rapidly;
    • Become a learning organization through relentless reflection (hansei) and continuous improvement (kaizen).
Let’s do it right now!

As the economy is changing and IT is more common sense throughout ore everyday life the need for good quality software products has never been this high. Software issues create bigger and bigger issues in our lives. Think about trains that cannot ride due to software issues, bank clients that have no access to their bank accounts, and people oversleeping because their alarm app didn’t work on their iPhone. As Capers Jones [Jones, 2011] states in his 2011 study that “software is blamed for more major business problems than any other man-made product” and that “poor quality has become one of the most expensive topics in human history”. The improvement of software quality is a key topic for all industries.

Right the first time vs jidoka

Both TPS and Lean use autonomation, or jidoka. Autonomation can be described as ‘intelligent automation’: when an abnormal situation arises, the ‘machine’ stops and the abnormality is fixed. Autonomation prevents the production of defective products, eliminates overproduction, and focuses attention on understanding the problem and ensuring that it never recurs. It is a quality control process that applies the following four principles:

  • Detect the abnormality.
  • Stop.
  • Fix or correct the immediate condition.
  • Investigate the root cause and install a countermeasure.
Find defects as early as possible

In other words, autonomation helps to get quality perfectly right the first time. Since IT projects differ from the Toyota car production line, ‘perfectly’ may be a bit too much to ask, but the process around quality assurance should be the same:

  • Find the defect.
  • Stop.
  • Fix or correct the error.
  • Investigate the root cause and take countermeasures.

A defect should be found as early as possible so it can be fixed as early as possible. As with Lean and TPS, the reason behind this is to make it possible to identify and correct defects immediately, within the process.

Categories: Testing & QA

When will you start with test automation?

I just came back from vacation, and when I started again I noticed a slight change in the resource requests I see coming by: almost all of them now include a statement about test automation. In the last two days I had two separate sessions about test automation tools. Has test automation all of a sudden become more important? Did people follow up on my last post, where I stated that tools are a prerequisite in testing today – or actually, yesterday?

If you missed the latest cycle of new tools for test automation, you’re either an ostrich with your head in the sand (“sorry, my vacation was in Southern Africa”), or you’re simply still afraid – afraid of the change, afraid that test automation will cannibalize your manual test execution.

Test automation is no longer about taking over test execution in a very complex and unmanageable way. It now offers higher efficiency in test design and test execution, but also more options to test non-functional aspects of applications that could not be tested without tools – like security and performance – and virtual environments to do end-to-end tests without depending on test environments that are down all the time. Tools now also offer more support for testing mobile solutions. Tools are everywhere!


Test automation offers us testers the opportunity to do more, faster, with less risk, and cheaper. I put these words deliberately in that order. Test automation is often seen as a way to test more cheaply. You can, but you can also do more, for instance:

  • Let the tool do the checks while you explore the application further;
  • Set up a virtual test environment that doesn’t go down after an hour of use, and test more in the same time;
  • Create and execute more test cases by generating and executing them automatically;
  • Get higher quality by doing a really thorough regression test, instead of a simple check, to find integration errors.

There are enough reasons to work on test automation, and I see no reason not to. I think it is now even time for the next step in test automation. What that is, time will tell, but I look forward to hearing about it at the Test Automation Day in June, where Bryan Bakker will tell more about this next step in his presentation “Design for Testability – the next step in Test Automation”. After the congress I’ll post my ideas here.

Categories: Testing & QA

Tools should be a prerequisite for efficient and effective QA

We now live in a world where testing and quality are becoming more and more important. Last month I had a meeting with senior management in my company and I made the statement that “quality is user experience”, in other words “without the right amount of quality the user experience will always be low”. I think most people in QA and testing will agree with me on that. Even organizations agree on that. Why, then, do we still see so many failures in the software around us? Why do we still create software without the needed quality?

For one, because it’s not possible to test 100%! A known issue in QA, but that’s not the answer we’re looking for. I think the answer is that we still rely too much on old-fashioned manual (functional) testing. As I explained in an earlier blog, we need to move past that and go forward. Testing is part of IT and needs to showcase itself as a highly versatile profession. We need to be able to save money, deliver higher quality, shorten time to market, and go live with as few bugs as possible…

How can we do that? There are multiple ways to answer that, but one thing will always be part of the answer: test automation, or industrialization. Tools should be a prerequisite for efficient and effective QA. The question should not be whether to use them, but why not to use them.

Why not use test tools?

The need for test automation has never been as high as it is now, with Agile approaches within the software development lifecycle. New generation test tools are easy to use, low cost, or both. Examples I favor are the new Tricentis TOSCA™ Testsuite, Worksoft Certify, the SOASTA® Platform, but also the open source tool Selenium. And QA, and IT as a whole, needs to go further: not only use tools to automate test execution, performance testing and security testing, but even more for test specification.

The upcoming Modelization of IT extends the use of tools even further. We can create models and use them (with the help of special tools) to specify test cases, create requirements, generate code and more. IT can benefit from this Modelization to help the business go further and achieve its goals. I’ve written about a good example of this in this blog on fully automated testing.

Tools are the prerequisite, but how can you learn more about them? Well, if you are in the Netherlands at the end of June you could go to the Test Automation Day. They just published the program on their site, so you can learn more about test automation there.

Categories: Testing & QA