
Software Development Blogs: Programming, Software Testing, Agile Project Management


Dogma Driven Development

Actively Lazy - Wed, 03/04/2015 - 21:24

We really are an arrogant, opinionated bunch, aren’t we? We work in an industry where there aren’t any right answers. We pretend what we do is computer “science”. When in reality, it’s more art than science. It certainly isn’t engineering. Engineering suggests an underlying physics, mathematical models of how the world works. Is there a mathematical model of how to build software at scale? No. Do we understand the difference between what makes good software and bad software? No. Are there papers with published proofs of whether this idea or that idea has any observable difference on written software, as practised by companies the world over? No. It turns out this is a difficult field: software is weird stuff. And yet we work in an industry full of close-minded people, convinced that their way is The One True Way. It’s not science, it’s basically art. Our industry is dominated by fashion.

Which language we work in is fashion: should we use Ruby, or Node.js, or maybe Clojure? Hey, Go seems pretty cool. By which I mean “I read about it on the internet, and I’d quite like to put it on my CV, so can I please f*** up your million pound project in a big experiment of whether I can figure out all the nuances of the language faster than the project can de-rail?”

If it’s not the language we’re using, it’s architectural patterns. The dogma attached to REST. Jesus H Christ. It’s just a bunch of HTTP requests, no need to get so picky! For a while it was SOA. Then that became the old legacy thing, so now it’s all micro-services, which are totally different. Definitely. I read it on the internet, it must be true.

Everyone has their opinions. Christ, we’ve got our opinions. Thousands of blogs and wankers on twitter telling you what they think about the world (exactly like this one). As if one person’s observations are useful for anything more than being able to replicate their past success, should you ever by mistake find yourself on their timeline from about six weeks ago.

For example: I wrote a post recently about pairing, and some fine specimen of internet-based humanity felt the need to tell me that people who need to pair are an embarrassment to the profession, and that we should find another line of work. Hahaha, I know, don’t read the comments. Especially when it’s in reply to something you wrote. But seriously now, is it necessary to share your close-minded ignorance with the world?

I shouldn’t get worked up about some asshat on the internet. But it’s not just some asshat on the internet. There are hundreds of thousands of these asshats with their closed minds and dogmatic views on the world. And not just asshats spouting off on the internet, but getting paid to build the software that increasingly runs all our lives. When will we admit that we have no idea what we’re doing? The only way to get better is to learn as many tools and techniques as we can and hopefully, along the way, we’ll learn when to apply which techniques and when not to.

For example, I’ve worked with some people that don’t get TDD. Ok, fine – some people just aren’t “test infected”. And a couple of guys that really would rather gut me and fry my liver for dinner than pair with me. Do I feel the need to evangelise to them as though I’ve just found God? No. Does it offend me that they don’t follow my religion? No. Do I feel the need to suicide bomb their project? No. It’s your call. It’s your funeral. When I have proof that my way is The One True Way and yours is a sham, you can damn well bet I’ll be force feeding it to you. But given that ain’t gonna happen: I think we’re all pretty safe. If you don’t wanna pair, you put your headphones on and disappear into your silent reverie. Those of us that like pairing will pair, those of us that don’t, won’t. I’m fine with that.

The trouble is, we work in a farcical echo chamber of an industry where the lessons of 40 years ago still haven’t been learnt properly. Where we keep repeating the mistakes of 20 years ago. Of 10 years ago. Of 5 years ago. Of 2 years ago. Of last week. For Christ’s sake people, can we not just learn a little of what’s gone before? All we have is mindless opinion, presented as fact. Everyone’s out to flog you their new shiny products, or whatever bullshit service they’re offering this week. No, sorry, it’s all utter bollocks. We know less about building decent software now than we did 40 years ago. It’s just now we build a massive amount more of it. And it’s even more shit than it ever was. Only now, now we have those crazy bastards that otherwise would stand on street corners telling me that Jesus would save me if only I would let him; but now they’re selling me scrum master training or some other snake oil.

All of this is unfortunately entirely indistinguishable from reasoned debate, so youngsters entering the industry have no way to know that it’s just a bunch of wankers arguing about which colour to paint this new square wheel they invented. Until after a few years they become as jaded and cynical as the rest of us and decide to take advantage of all the other dumb fools out there. They find their little niche, their little way of making the world a little bit worse but themselves a little bit richer. And so the cycle repeats. Fashion begets fashion. Opinion begets opinion.

There aren’t any right answers in creating software. I know what I’ve found works some of the time. I get paid to put into practice what I know. I hope you do, too. But we’ve all had a different set of experiences which means we often don’t agree on what works and what doesn’t. But this is all we have. The plural of anecdote is not data.

All we have is individual judgement, borne out of individual experience. There is no grand unified theory of Correct Software Development. The best we can hope to do is learn from each other and try as many different approaches as possible. Try and fail safely and often. The more techniques you’ve tried the better the chance you can find the right technique at the right time.

Call it craftsmanship if you like. Call it art if you like. But it certainly isn’t science. And I don’t know about you, but it’s a very long time since I saw any engineering round these parts.


Categories: Programming, Testing & QA

The Virtue of Purgatory in Software Development

From the Editor of Methods & Tools - Tue, 03/03/2015 - 14:51
Having some decades of experience in software development behind me, I have had the time to accumulate a lot of mistakes. One of the recurring patterns in these failures was the ambition to solve code issues too quickly. This was especially the case when the problem was related to code that I wrote, which made me feel responsible for the situation. Naturally, I have also often thought that my code couldn’t be bad and that somebody must have changed it after I delivered it, but this is another story ;O) When you detect ...

Sending Windows logs to Papertrail with nxlog

Agile Testing - Grig Gheorghiu - Thu, 02/26/2015 - 01:04
I am revisiting Papertrail as a log aggregation tool. It's really easy to send Linux logs to Papertrail via syslog or rsyslog or syslog-ng (see this article on how to configure syslog with TLS) but to send Windows logs you need to jump through some hoops.
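
For comparison, the plain (non-TLS) rsyslog case really can be a one-liner. The snippet below is only a sketch: the destination host and port are placeholders for the ones Papertrail assigns to your account, and TLS forwarding needs the extra configuration described in the article linked above.

# /etc/rsyslog.d/95-papertrail.conf (hypothetical file name)
# '@@' forwards over TCP; a single '@' would forward over UDP
*.* @@logsN.papertrailapp.com:XXXXX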

Papertrail recommends nxlog as their Windows log management tool of choice, so that's what I used. This Papertrail article explains how to install and configure nxlog on Windows (I recommend enabling TLS).  The nxlog.conf template file provided by Papertrail will send Windows Event logs over. I also wanted to send application-specific logs, so here's what I did:

1) Add an Input section to nxlog.conf for each directory containing the files you want to send to Papertrail. For example, if one of your applications logs to C:\MyApp1\logs and your log files end with .log, you could have this input section:

# Monitor MyApp1 log files
<Input MyApp1>
 Module im_file
 File 'C:\\MyApp1\\logs\\*.log'
 Exec $Message = $raw_event;
 Exec if $Message =~ /GET \/ping/ drop();
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
 SavePos TRUE
 Recursive TRUE
</Input>

Some observations:

  • The name MyApp1 is the name of this Input section
  • The File statement points to the location and name of the log files
  • The first Exec statement saves the log line under consideration as the variable $Message
  • The second Exec statement drops messages that contain a specific regular expression, in my case just 'GET /ping' -- which happens to be health checks from the load balancer that pollute the logs; you can replace this with any regular expression that will filter out log lines you don't want sent to Papertrail
  • The next few statements were in the sample Input stanza from the template nxlog.conf file so I just left them there
2) Add more Input sections, one for each log location (i.e. multiple log files under a given directory) that you want to send to Papertrail. You need to give each Input section a unique name (e.g. MyApp1 above).
3) Add a Route section for the Input sections defined previously. If you defined 2 Input sections MyApp1 and MyApp2, your Route section would look something like:

<Route 2>
 Path MyApp1, MyApp2 => filewatcher_transformer => syslogout
</Route>

The filewatcher_transformer section was already included in the sample nxlog.conf file from Papertrail. The Route section above says that the files processed by the 2 Input paths MyApp1 and MyApp2 will be processed through the statements defined in the filewatcher_transformer section, then will be sent to Papertrail by virtue of being processed through the statements defined in the syslogout section.
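
For orientation, the syslogout section referenced above is an Output block in Papertrail's template. It looks roughly like the sketch below; the host, port and certificate bundle path are placeholders, and the authoritative values come from the template and your Papertrail account settings.

<Output syslogout>
 # Forward over TLS to Papertrail (placeholder values below; use the real template's)
 Module om_ssl
 Host logsN.papertrailapp.com
 Port XXXXX
 CAFile %ROOT%\cert\papertrail-bundle.pem
 AllowUntrusted FALSE
</Output>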
At this point, if you restart the nxlog service on your Windows box, you should start seeing log entries from your application(s) flowing into the Papertrail console.

Software Development Conferences Forecast February 2015

From the Editor of Methods & Tools - Tue, 02/24/2015 - 17:38
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine. QCon London, March 2-6 2015, London, UK. Exclusive 50 pounds Methods & Tools discount with promo code "softdevconf50". Wearables TechCon, March 9-11 2015, Santa Clara, USA. Use code WEARIT for a $200 conference discount off the 3-day ...

Diamond Kata - TDD with only Property-Based Tests

Mistaeks I Hav Made - Nat Pryce - Fri, 02/20/2015 - 00:04
The Diamond Kata is a simple exercise that Seb Rose described in a recent blog post. Seb describes the Diamond Kata as:

Given a letter, print a diamond starting with ‘A’ with the supplied letter at the widest point. For example: print-diamond ‘C’ prints

  A
 B B
C   C
 B B
  A

Seb used the exercise to illustrate how he “recycles” tests to help him work incrementally towards a full solution. Seb’s approach prompted Alastair Cockburn to write an article in response in which he argued for more thinking before programming. Alastair’s article shows how he approached the Diamond Kata with more up-front analysis. Ron Jeffries and George Dinwiddie responded to Alastair’s article, showing how they approached the Diamond Kata relying on emergent design to produce an elegant solution (“thinking all the time”, as Ron Jeffries put it). There was some discussion on Twitter, and several other people published their approaches. (I’ll list as many as I know about at the end of this article.)

The discussion sparked my interest, so I decided to have a go at the exercise myself. The problem seemed to me, at first glance, to be a good fit for property testing. So I decided to test-drive a solution using only property-based tests and see what happens. I wrote the solution in Scala and used ScalaTest to run and organise the tests and ScalaCheck for property testing.

What follows is an unexpurgated, warts-and-all walkthrough of my progress, not just the eventual complete solution. I made wrong turns and stupid mistakes along the way. The walkthrough is pretty long, so if you don’t want to follow through step by step, jump straight to the complete solution and/or my conclusions on how the exercise went and what I learned. Alternatively, if you want to follow the walkthrough in more detail, the entire history is on GitHub, with a commit per TDD step (add a failing test, commit, make the implementation pass the test, commit, refactor, commit, … and repeat).

Walkthrough

Getting Started: Testing the Test Runner

The first thing I like to do when starting a new project is make sure my development environment and test runner are set up right, that I can run tests, and that test failures are detected and reported. I use Gradle to bootstrap a new Scala project with dependencies on the latest versions of ScalaTest and ScalaCheck and import the Gradle project into IntelliJ IDEA.

ScalaTest supports several different styles of test and assertion syntax. The user guide recommends writing an abstract base class that combines traits and annotations for your preferred testing style and test runner, so that’s what I do first:

@RunWith(classOf[JUnitRunner])
abstract class UnitSpec extends FreeSpec with PropertyChecks {
}

My test class extends UnitSpec:

class DiamondSpec extends UnitSpec {
}

I add a test that explicitly fails, to check that the test framework, IDE and build hang together correctly. When I see the test failure, I’m ready to write the first real test.

The First Test

Given that I’m writing property tests, I have to start with a simple property of the diamond function, not a simple example. The simplest property I can think of is:

For all valid input characters, the diamond contains one or more lines of text.

To turn that into a property test, I must define “all valid input characters” as a generator. The description of the Diamond Kata defines valid input as a single upper case character.
ScalaCheck has a predefined generator for that:

val inputChar = Gen.alphaUpperChar

At this point, I haven’t decided how I will represent the diamond. I do know that my test will assert on the number of lines of text, so I write the property with respect to an auxiliary function, diamondLines(c: Char): Vector[String], which will generate a diamond for input character c and return the lines of the diamond in a vector.

"produces some lines" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).nonEmpty)
  }
}

I like the way that the test reads in ScalaTest/ScalaCheck. It is pretty much a direct translation of my English description of the property into code.

To make the test fail, I write diamondLines as:

def diamondLines(c : Char) : Vector[String] = {
  Vector()
}

The entire test class is:

import org.scalacheck._

class DiamondSpec extends UnitSpec {
  val inputChar = Gen.alphaUpperChar

  "produces some lines" in {
    forAll (inputChar) { c =>
      assert(diamondLines(c).nonEmpty)
    }
  }

  def diamondLines(c : Char) : Vector[String] = {
    Vector()
  }
}

The simplest implementation that will make that property pass is to return a single string:

object Diamond {
  def diamond(c: Char) : String = {
    "A"
  }
}

I make the diamondLines function in the test call the new function and split its result into lines:

def diamondLines(c : Char) = {
  Diamond.diamond(c).lines.toVector
}

The implementation can be used like this:

object DiamondApp extends App {
  import Diamond.diamond

  println(diamond(args.lift(0).getOrElse("Z").charAt(0)))
}

A Second Test, But It Is Not Very Helpful

I now need to add another property, to more tightly constrain the solution. I notice that the diamond always has an odd number of lines, and decide to test that:

For all valid input characters, the diamond has an odd number of lines.

This implies that the number of lines is greater than zero (because vectors cannot have a negative number of elements and zero is even), so I change the existing test rather than adding another one:

"produces an odd number of lines" in {
  forAll (inputChar) { c =>
    assert(isOdd(diamondLines(c).length))
  }
}

def isOdd(n : Int) = n % 2 == 1

But this new test has a problem: my existing solution already passes it. The diamond function returns a single line, and 1 is an odd number. This choice of property is not helping drive the development forwards.

A Failing Test To Drive Development, But a Silly Mistake

The next simplest property I can think of is the number of lines of the diamond. If ord(c) is the number of letters between ‘A’ and c (zero for A, 1 for B, 2 for C, etc.), then:

For all valid input characters, c, the number of lines in a diamond for c is 2*ord(c)+1.

At this point I make a silly mistake. I write my property as:

"number of lines" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).length == ord(c)+1)
  }
}

def ord(c: Char) : Int = c - 'A'

I don’t notice the mistake immediately. When I do, I decide to leave it in the code as an experiment to see if the property tests will detect the error by becoming inconsistent, and how long it will take before they do so. This kind of mistake would easily be caught by an example test. It’s a good idea to have a few examples, as well as properties, to act as smoke tests.

I make the test pass with the smallest amount of production code possible. I move the ord function from the test into the production code and use it to return the required number of lines that are all the same.
def diamond(c: Char) : String = {
  "A\n" * (ord(c)+1)
}

def ord(c: Char) : Int = c - 'A'

Despite sharing the ord function between the test and production code, there’s still some duplication. Both the production and test code calculate ord(c)+1. I want to address that before writing the next test.

Refactor: Duplicated Calculation

I replace ord(c)+1 with lineCount(c), which calculates the number of lines generated for an input letter, and inline the ord(c) function, because it’s now only used in one place.

object Diamond {
  def diamond(c: Char) : String = {
    "A\n" * lineCount(c)
  }

  def lineCount(c: Char) : Int = (c - 'A')+1
}

And I use lineCount in the test as well:

"number of lines" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).length == lineCount(c))
  }
}

On reflection, using the lineCount calculation from production code in the test feels like a mistake.

Squareness

The next property I add is:

For all valid input characters, the text containing the diamond is square.

Where “is square” means: the length of each line is equal to the total number of lines. In Scala, this is:

"squareness" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c) forall {_.length == lineCount(c)})
  }
}

I can make the test pass like this:

object Diamond {
  def diamond(c: Char) : String = {
    val side: Int = lineCount(c)
    ("A" * side + "\n") * side
  }

  def lineCount(c: Char) : Int = (c - 'A')+1
}

Refactor: Rename the lineCount Function

The lineCount function is also being used to calculate the length of each line, so I rename it to squareSide.

object Diamond {
  def diamond(c: Char) : String = {
    val side: Int = squareSide(c)
    ("A" * side + "\n") * side
  }

  def squareSide(c: Char) : Int = (c - 'A')+1
}

Refactor: Clarify the Tests

I’m now a little dissatisfied with the way the tests read:

"number of lines" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).length == squareSide(c))
  }
}

"squareness" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c) forall {_.length == squareSide(c)})
  }
}

The “squareness” property does not stand alone. It doesn’t communicate that the output is square unless combined with the “number of lines” property. I refactor the test to disentangle the two properties:

"squareness" in {
  forAll (inputChar) { c =>
    val lines = diamondLines(c)
    assert(lines forall {line => line.length == lines.length})
  }
}

"size of square" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).length == squareSide(c))
  }
}

The Letter on Each Line

The next property I write specifies which characters are printed on each line. The characters of each line should be either a letter that depends on the index of the line, or a space. Because the diamond is vertically symmetrical, I only need to consider the lines from the top to the middle of the diamond. This makes the calculation of the letter for each line much simpler. I make a note to add a property for the vertical symmetry once I have made the implementation pass this test.
"single letter per line" in { forAll (inputChar) { c => val allLines = diamondLines(c) val topHalf = allLines.slice(0, allLines.size/2 + 1) for ((line, index) <- topHalf.zipWithIndex) { val lettersInLine = line.toCharArray.toSet diff Set(' ') val expectedOnlyLetter = ('A' + index).toChar assert(lettersInLine == Set(expectedOnlyLetter), "line " + index + ": \"" + line + "\"") } } } To make this test pass, I change the diamond function to: def diamond(c: Char) : String = { val side: Int = squareSide(c) (for (lc <- 'A' to c) yield lc.toString * side) mkString "\n" } This repeats the correct letter for the top half of the diamond, but the bottom half of the diamond is wrong. This will be fixed by the property for vertical symmetry, which I’ve noted down to write next. Vertical Symmetry The property for vertical symmetry is: For all input character, c, the lines from the top to the middle of the diamond, inclusive, are equal to the reversed lines from the middle to the bottom of the diamond, inclusive. "is vertically symmetrical" in { forAll(inputChar) { c => val allLines = diamondLines(c) val topHalf = allLines.slice(0, allLines.size / 2 + 1) val bottomHalf = allLines.slice(allLines.size / 2, allLines.size) assert(topHalf == bottomHalf.reverse) } } The implementation is: def diamond(c: Char) : String = { val side: Int = squareSide(c) val topHalf = for (lc <- 'A' to c) yield lineFor(side, lc) val bottomHalf = topHalf.slice(0, topHalf.length-1).reverse (topHalf ++ bottomHalf).mkString("\n") } But this fails the “squareness” and “size of square” tests! My properties are now inconsistent. The test suite has detected the erroneous implementation of the squareSide function. The correct implementation of squareSide is: def squareSide(c: Char) : Int = 2*(c - 'A') + 1 With this change, the implementation passes all of the tests. The Position Of The Letter In Each Line Now I add a property that specifies the position and value of the letter in each line, and that all other characters in a line are spaces. Like the previous test, I can rely on symmetry in the output to simplify the arithmetic. This time, because the diamond has horizontal symmetry, I only need specify the position of the letter in the first half of the line. I add a specification for horizontal symmetry, and factor out generic functions to return the first and second half of strings and sequences. 
"is vertically symmetrical" in { forAll (inputChar) { c => val lines = diamondLines(c) assert(firstHalfOf(lines) == secondHalfOf(lines).reverse) } } "is horizontally symmetrical" in { forAll (inputChar) { c => for ((line, index) <- diamondLines(c).zipWithIndex) { assert(firstHalfOf(line) == secondHalfOf(line).reverse, "line " + index + " should be symmetrical") } } } "position of letter in line of spaces" in { forAll (inputChar) { c => for ((line, lineIndex) <- firstHalfOf(diamondLines(c)).zipWithIndex) { val firstHalf = firstHalfOf(line) val expectedLetter = ('A'+lineIndex).toChar val letterIndex = firstHalf.length - (lineIndex + 1) assert (firstHalf(letterIndex) == expectedLetter, firstHalf) assert (firstHalf.count(_==' ') == firstHalf.length-1, "number of spaces in line " + lineIndex + ": " + line) } } } def firstHalfOf[AS, A, That](v: AS)(implicit asSeq: AS => Seq[A], cbf: CanBuildFrom[AS, A, That]) = { v.slice(0, (v.length+1)/2) } def secondHalfOf[AS, A, That](v: AS)(implicit asSeq: AS => Seq[A], cbf: CanBuildFrom[AS, A, That]) = { v.slice(v.length/2, v.length) } The implementation is: object Diamond { def diamond(c: Char) : String = { val side: Int = squareSide(c) val topHalf = for (letter <- 'A' to c) yield lineFor(side, letter) (topHalf ++ topHalf.reverse.tail).mkString("\n") } def lineFor(length: Int, letter: Char): String = { val halfLength = length/2 val letterIndex = halfLength - ord(letter) val halfLine = " "*letterIndex + letter + " "*(halfLength-letterIndex) halfLine ++ halfLine.reverse.tail } def squareSide(c: Char) : Int = 2*ord(c) + 1 def ord(c: Char): Int = c - 'A' } It turns out the ord function, which I inlined into squareSide a while ago, is needed after all. The implementation is now complete. Running the DiamondApp application prints out diamonds. But there’s plenty of scope for refactoring both the production and test code. Refactoring: Delete the “Single Letter Per Line” Property The “position of letter in line of spaces” property makes the “single letter per line” property superflous, so I delete “single letter per line”. Refactoring: Simplify the Diamond Implementation I rename some parameters and simplify the implementation of the diamond function. object Diamond { def diamond(maxLetter: Char) : String = { val topHalf = for (letter <- 'A' to maxLetter) yield lineFor(maxLetter, letter) (topHalf ++ topHalf.reverse.tail).mkString("\n") } def lineFor(maxLetter: Char, letter: Char): String = { val halfLength = ord(maxLetter) val letterIndex = halfLength - ord(letter) val halfLine = " "*letterIndex + letter + " "*(halfLength-letterIndex) halfLine ++ halfLine.reverse.tail } def squareSide(c: Char) : Int = 2*ord(c) + 1 def ord(c: Char): Int = c - 'A' } The implementation no longer uses the squareSide function. It’s only used by the “size of square” property. Refactoring: Inline the squareSide function I inline the squareSide function into the test. "size of square" in { forAll (inputChar) { c => assert(diamondLines(c).length == 2*ord(c) + 1) } } I believe the erroneous calculation would have been easier to notice if I had done this from the start. Refactoring: Common Implementation of Symmetry There’s one last bit of duplication in the implementation. The expressions that create the horizontal and vertical symmetry of the diamond can be replaced with calls to a generic function. 
I’ll leave that as an exercise for the reader…

Complete Tests and Implementation

Tests:

import Diamond.ord
import org.scalacheck._
import scala.collection.generic.CanBuildFrom

class DiamondSpec extends UnitSpec {
  val inputChar = Gen.alphaUpperChar

  "squareness" in {
    forAll (inputChar) { c =>
      val lines = diamondLines(c)
      assert(lines forall {line => line.length == lines.length})
    }
  }

  "size of square" in {
    forAll (inputChar) { c =>
      assert(diamondLines(c).length == 2*ord(c) + 1)
    }
  }

  "is vertically symmetrical" in {
    forAll (inputChar) { c =>
      val lines = diamondLines(c)
      assert(firstHalfOf(lines) == secondHalfOf(lines).reverse)
    }
  }

  "is horizontally symmetrical" in {
    forAll (inputChar) { c =>
      for ((line, index) <- diamondLines(c).zipWithIndex) {
        assert(firstHalfOf(line) == secondHalfOf(line).reverse, "line " + index + " should be symmetrical")
      }
    }
  }

  "position of letter in line of spaces" in {
    forAll (inputChar) { c =>
      for ((line, lineIndex) <- firstHalfOf(diamondLines(c)).zipWithIndex) {
        val firstHalf = firstHalfOf(line)
        val expectedLetter = ('A'+lineIndex).toChar
        val letterIndex = firstHalf.length - (lineIndex + 1)

        assert (firstHalf(letterIndex) == expectedLetter, firstHalf)
        assert (firstHalf.count(_==' ') == firstHalf.length-1, "number of spaces in line " + lineIndex + ": " + line)
      }
    }
  }

  def firstHalfOf[AS, A, That](v: AS)(implicit asSeq: AS => Seq[A], cbf: CanBuildFrom[AS, A, That]) = {
    v.slice(0, (v.length+1)/2)
  }

  def secondHalfOf[AS, A, That](v: AS)(implicit asSeq: AS => Seq[A], cbf: CanBuildFrom[AS, A, That]) = {
    v.slice(v.length/2, v.length)
  }

  def diamondLines(c : Char) = {
    Diamond.diamond(c).lines.toVector
  }
}

Implementation:

object Diamond {
  def diamond(maxLetter: Char) : String = {
    val topHalf = for (letter <- 'A' to maxLetter) yield lineFor(maxLetter, letter)
    (topHalf ++ topHalf.reverse.tail).mkString("\n")
  }

  def lineFor(maxLetter: Char, letter: Char): String = {
    val halfLength = ord(maxLetter)
    val letterIndex = halfLength - ord(letter)
    val halfLine = " "*letterIndex + letter + " "*(halfLength-letterIndex)
    halfLine ++ halfLine.reverse.tail
  }

  def ord(c: Char): Int = c - 'A'
}

Conclusions

In his article, “Thinking Before Programming”, Alastair Cockburn writes:

The advantage of the Dijkstra-Gries approach is the simplicity of the solutions produced. The advantage of TDD is modern fine-grained incremental development. … Can we combine the two?

I think property-based tests in the TDD process combined the two quite successfully in this exercise. I could record my half-formed thoughts about the problem and solution as generators and properties while using “modern fine-grained incremental development” to tighten up the properties and grow the code that met them.

In Seb’s original article, he writes that when working from examples…

it’s easy enough to get [the tests for ‘A’ and ‘B’] to pass by hardcoding the result. Then we move on to the letter ‘C’. The code is now screaming for us to refactor it, but to keep all the tests passing most people try to solve the entire problem at once. That’s hard, because we’ll need to cope with multiple lines, varying indentation, and repeated characters with a varying number of spaces between them.

I didn’t encounter this problem when driving the implementation with properties. Adding a new property always required an incremental improvement to the implementation to get the tests passing again. Neither did I need to write throw-away tests for behaviour that was not actually desired of the final implementation, as Seb did with his “test recycling” approach.
Every property I added applied to the complete solution. I only deleted properties that were implied by properties I added later, and so had become unnecessary duplication.

I took the approach of starting from very generic properties and incrementally adding more specific properties as I refined the implementation. Generic properties were easy to come up with, and helped me make progress in the problem. The suite of properties reinforced one another, testing the tests, and detected the mistake I made in one property that caused it to be inconsistent with the rest.

I didn’t know Scala, ScalaTest or ScalaCheck well. Now I’ve learned them better, I wish I had written a minimisation strategy for the input character. This would have made test failure messages easier to understand.

I also didn’t address what the diamond function would do with input outside the range of ‘A’ to ‘Z’. Scala doesn’t let one define a subtype of Char, so I can’t enforce the input constraint in the type system. I guess the Scala way would be to define diamond as a PartialFunction[Char,String].

Further Thoughts

  • Thoughts on duplication and tests as documentation
  • Thoughts on property-based tests and iterative/incremental development

Other Solutions

Mark Seemann has approached the Diamond Kata with property-based tests, using F# and FsCheck. Solutions to the Diamond Kata using example-based tests include:

  • Seb Rose: Recycling Tests in TDD
  • Alastair Cockburn: Thinking Before Programming
  • Seb Rose: Diamond recycling (and painting yourself into a corner)
  • Ron Jeffries: a detailed walkthrough of his solution
  • George Dinwiddie: Another Approach to the Diamond Kata
  • Ivan Sanchez: A walkthrough of his Clojure solution
  • Jon Jagger: print “squashed-circle” diamond
  • Sandro Mancuso: A Java solution on GitHub
  • Krzysztof Jelski: A Python solution on GitHub
  • Philip Schwarz: A Clojure solution on GitHub
Categories: Programming, Testing & QA

GTAC 2014 Coming to Seattle/Kirkland in October

Google Testing Blog - Thu, 02/19/2015 - 14:22
Posted by Anthony Vallone on behalf of the GTAC Committee

If you're looking for a place to discuss the latest innovations in test automation, then charge your tablets and pack your gumboots - the eighth GTAC (Google Test Automation Conference) will be held on October 28-29, 2014 at Google Kirkland! The Kirkland office is part of the Seattle/Kirkland campus in beautiful Washington state. This campus forms our third largest engineering office in the USA.



GTAC is a periodic conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.

You can browse the presentation abstracts, slides, and videos from last year on the GTAC 2013 page.

Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!

Categories: Testing & QA

GTAC 2014: Call for Proposals & Attendance

Google Testing Blog - Thu, 02/19/2015 - 14:21
Posted by Anthony Vallone on behalf of the GTAC Committee

The application process is now open for presentation proposals and attendance for GTAC (Google Test Automation Conference) (see initial announcement) to be held at the Google Kirkland office (near Seattle, WA) on October 28 - 29th, 2014.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend, you’ll be able to watch the conference from your computer.

Speakers
Presentations are targeted at students, academics, and experienced engineers working on test automation. Full presentations and lightning talks are 45 minutes and 15 minutes respectively. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 300 applicants for the event.

Deadline
The due date for both presentation and attendance applications is July 28, 2014.

Fees
There are no registration fees, and we will send out detailed registration instructions to each invited applicant. Meals will be provided, but speakers and attendees must arrange and pay for their own travel and accommodations.

Update: Our contact email was bouncing - this is now fixed.



Categories: Testing & QA

The Deadline to Sign up for GTAC 2014 is Jul 28

Google Testing Blog - Thu, 02/19/2015 - 14:21
Posted by Anthony Vallone on behalf of the GTAC Committee

The deadline to sign up for GTAC 2014 is next Monday, July 28th, 2014. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to add yours for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to our site over the next several weeks, and you can find conference details there:
  developers.google.com/gtac

For those that have already signed up to attend or speak, we will contact you directly in mid August.

Categories: Testing & QA

Announcing the GTAC 2014 Agenda

Google Testing Blog - Thu, 02/19/2015 - 14:20
by Anthony Vallone on behalf of the GTAC Committee

We have completed selection and confirmation of all speakers and attendees for GTAC 2014. You can find the detailed agenda at:
  developers.google.com/gtac/2014/schedule

Thank you to all who submitted proposals! It was very hard to make selections from so many fantastic submissions.

There was a tremendous amount of interest in GTAC this year with over 1,500 applicants (up from 533 last year) and 194 of those for speaking (up from 88 last year). Unfortunately, our venue only seats 250. However, don’t despair if you did not receive an invitation. Just like last year, anyone can join us via YouTube live streaming. We’ll also be setting up Google Moderator, so remote attendees can get involved in Q&A after each talk. Information about live streaming, Moderator, and other details will be posted on the GTAC site soon and announced here.

Categories: Testing & QA

GTAC 2014 is this Week!

Google Testing Blog - Thu, 02/19/2015 - 14:20
by Anthony Vallone on behalf of the GTAC Committee

The eighth GTAC commences on Tuesday at the Google Kirkland office. You can find the latest details on the conference at our site, including speaker profiles.

If you are watching remotely, we'll soon be updating the live stream page with the stream link and a Google Moderator link for remote Q&A.

If you have been selected to attend or speak, be sure to note the updated parking information. Google visitors will use off-site parking and shuttles.

We look forward to connecting with the greater testing community and sharing new advances and ideas.

Categories: Testing & QA

GTAC 2014 Wrap-up

Google Testing Blog - Thu, 02/19/2015 - 14:19
by Anthony Vallone on behalf of the GTAC Committee

On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states), bringing with them a huge diversity of experiences.


Speakers from numerous companies and universities (Adobe, American Express, Comcast, Dropbox, Facebook, FINRA, Google, HP, Medidata Solutions, Mozilla, Netflix, Orange, and University of Waterloo) spoke on a variety of interesting and cutting edge test automation topics.

All of the slides and video recordings are now available on the GTAC site. Photos will be available soon as well.


This was our most popular GTAC to date, with over 1,500 applicants and almost 200 of those for speaking. About 250 people filled our venue to capacity, and the live stream had a peak of about 400 concurrent viewers with 4,700 playbacks during the event. And, there was plenty of interesting Twitter and Google+ activity during the event.


Our goal in hosting GTAC is to make the conference highly relevant and useful, not only for attendees, but for the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal:



If you have any suggestions on how we can improve, please comment on this post.

Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, subscribe to the Google Testing Blog.

Categories: Testing & QA

TDD Against the Clock

Actively Lazy - Thu, 02/19/2015 - 07:42

A couple of weeks ago I ran a “TDD Against the Clock” session. The format is simple: working in pairs and following a strict red-green-refactor TDD cycle, we complete a simple kata. However, we add one key constraint: the navigator starts a five-minute timer. As soon as the five minutes is up:

  • If the code compiles and the tests are green, commit!
  • Otherwise, revert!

Either way, the pairs swap roles when the timer goes off. If the driver completes a red-green-refactor cycle faster than five minutes, they can commit ahead of time but the roles still swap.

The kata we chose for this session was the bowling kata. This is a nice, simple problem that I expected we could get a decent way through in each of the two 45 minute sessions.

Hard Time

The five-minute time constraint sounds fiendish, doesn’t it? How can you possibly get anything done in five minutes? Well, you can, if you tackle something small enough. This exercise is designed to force you to think in small increments of functionality.

It’s amazing how little you can type in five minutes. But if you think typing speed is the barrier, you’re not thinking hard enough about the right way to tackle the problem. There comes a point in the bowling kata where you go from dealing with single frames and simple scores to spares (or strikes) for the first time. This always requires a jump, because what you had before won’t suit what you need now. How to tackle this jump incrementally is part of the challenge when working within a five-minute deadline. One of our group had an idea but knew it was tough to get it done in five minutes. He typed like a demon trying to force his solution in: he still ran out of time. Typing speed is not the problem (no matter how much it seems like it is). You need a better approach; you need to think more, not type more.
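
To make that jump concrete, here is a minimal sketch of the scoring rules that cause it, written in Scala for consistency with the other code on this page. The names and structure are my own illustration, not code from the session: an open frame scores its two rolls, a spare adds the next roll as a bonus, and a strike adds the next two.

object Bowling {
  // Scores a complete or partial game from a flat list of rolls.
  def scoreGame(rolls: List[Int]): Int = {
    def go(rs: List[Int], frame: Int): Int = rs match {
      case _ if frame == 10 => 0 // bonus rolls after the tenth frame score nothing on their own
      case 10 :: rest => 10 + rest.take(2).sum + go(rest, frame + 1) // strike: next two rolls as bonus
      case a :: b :: rest if a + b == 10 => 10 + rest.headOption.getOrElse(0) + go(rest, frame + 1) // spare: next roll as bonus
      case a :: b :: rest => a + b + go(rest, frame + 1) // open frame
      case _ => rs.sum // incomplete final frame
    }
    go(rolls, 0)
  }
}

A perfect game, scoreGame(List.fill(12)(10)), comes to 300, and twenty-one fives come to 150. The interesting part is exactly what the five-minute cycles expose: the strike and spare cases need lookahead that a naive per-frame sum doesn’t have.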

Good Behaviour

After a few cycles, we found hardly anybody hit the five-minute deadline any more. It’s fascinating how quickly everyone realised that it was better to spend a five-minute cycle discussing than to get lost half-way through a change and end up reverting. Similarly, when you find the change you wanted to make in this cycle is too hard or too time consuming, it’s better to throw away what you have, swap pairs and refactor before you try to write the failing test again.

These are all good behaviours that are useful in day-to-day life, where it’s all too easy to keep chasing down a rat hole. Learning to work in small, independent increments and making that a subconscious part of how you work will make you a better programmer.

Wrong School

The biggest trouble we found is that the bowling kata isn’t well suited to what I consider “normal”, outside-in TDD (London School TDD). Most of the time I use TDD as a design tool, to help me uncover the right roles and responsibilities. However, with the bowling kata the most elegant solution is the one Uncle Bob drives towards, which is just simple types with no object modelling.

This is fine for an algorithm like scoring a game of bowling, which has an ultimate truth and isn’t likely to change. But in the normal day-to-day world we’re designing for flexibility and constant change. This is where a good object model of the domain makes things easier to reason about and simpler to change. This is typically where outside-in TDD will help you.

A couple of the group were determined to implement an OO version of the bowling kata. It isn’t easy as it doesn’t lend itself naturally to being built incrementally towards a good object model. However, with enough stubbornness it can be done. This led to an interesting discussion of whether you can TDD algorithms and whether TDD is better suited to problems where an object model is the desired outcome.

Obviously you can TDD algorithms incrementally; whether it’s worthwhile I’m not so sure. Typically you’re implementing an algorithm because there is a set of rules to follow. Implementing each rule one at a time might help keep you focussed, but you always need to be aware of the algorithm as a whole.

Using TDD to drive an OO design is different. There can be many, similarly correct object models that vary only by subtle nuances. TDD can help guide your design and choose between the nuances. While you still need to think of the overall system design, TDD done outside-in is very deliberately trying to limit the things you need to worry about at any given stage: focus on one pair of interactions at a time. This is where TDD is strongest: providing a framework for completing a large task in small, manageable increments.

Even if the problem we chose wasn’t ideal, overall I found the TDD against the clock session a great way to practice the discipline of keeping your commits small, with constant refactoring, working incrementally towards a better design.

How do you move a mountain? Simply move it one teaspoonful at a time.

 


Categories: Programming, Testing & QA

Diamond Kata - Thoughts on Incremental Development

Mistaeks I Hav Made - Nat Pryce - Thu, 02/19/2015 - 00:35
Some more thoughts on my experience doing the Diamond Kata with property-based tests…

When test-driving development with example-based tests, an essential skill to be learned is how to pick each example to be most helpful in driving development forwards in small steps. You want to avoid picking examples that force you to take too big a step (A.K.A. “now draw the rest of the owl”). Conversely, you don’t want to get sidetracked into a boring morass of degenerate cases and error handling when you’ve not yet addressed the core of the problem to be solved. Property-based tests are similar: the skill is in picking the right next property to let you make useful progress in small steps. But the progress from nothing to solution is different.

Doing TDD with example-based tests, I’ll start with an arbitrary, specific example (arbitrary but carefully chosen to help me make useful progress), and write a specific implementation to support just that one example. I’ll add more examples to “triangulate” the property I want to implement, and generalise the implementation to pass the tests. I continue adding examples and triangulating, and generalising the implementation until I have extensive coverage and a general implementation that meets the required properties.

[Diagram: Example TDD progress]

Doing TDD with property-based tests, I’ll start with a very general property, and write a specific but arbitrary implementation that meets the property (arbitrary but carefully chosen to help me make useful progress). I’ll add more specific properties, which force me to generalise the implementation to make it meet all the properties. The properties also double-check one another (testing the tests, so to speak). I continue adding properties and generalising the implementation until I have a general implementation that meets the required properties.

[Diagram: Property TDD progress]

I find that property-based tests let me work more easily in larger steps than when testing individual examples. I am able to go longer without breaking the problem down to avoid duplication and boilerplate in my tests, because the property-based tests have less scope for duplication and boilerplate.

For example, if solving the Diamond Kata with example-based tests, my reaction to the “now draw the rest of the owl” problem that Seb identified would be to move the implementation towards a compositional design so that I could define the overall behaviour declaratively and not need to write integration tests for the whole thing. For example, when I reached the test for “C”, I might break the problem down into two parts. Firstly, a higher-order mirroredLines function that is passed a function to generate each line, with the type (in Haskell syntax):

mirroredLines :: (Char -> Char -> String) -> Char -> String

I would test-drive mirroredLines with a stub function to generate fake lines, such as:

let fakeLine ch maxCh = "line for " ++ [ch]

Then, I would write a diamondLine function, that calculates the actual lines of the diamond. And declare the diamond function by currying:

let diamond = mirroredLines diamondLine

I wouldn’t feel the need to write tests for the diamond function given adequate tests of mirroredLines and diamondLine.
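For readers more comfortable with the Scala used in the walkthrough, a rough equivalent of that decomposition might look like the sketch below; the names simply mirror the Haskell sketch above and are hypothetical, not code from the original solution.

object MirroredLines {
  // Builds the full diamond from a per-line generator: generate the top half,
  // then mirror it (dropping the middle line) to produce the bottom half.
  def mirroredLines(lineFor: (Char, Char) => String)(maxLetter: Char): String = {
    val topHalf = ('A' to maxLetter).map(letter => lineFor(letter, maxLetter))
    (topHalf ++ topHalf.reverse.tail).mkString("\n")
  }

  // Stub line generator for test-driving mirroredLines, as in the Haskell sketch.
  val fakeLine: (Char, Char) => String = (ch, _) => "line for " + ch

  // The real diamond function is then mirroredLines applied to a real line generator:
  // val diamond = mirroredLines(diamondLine) _
}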
Categories: Programming, Testing & QA

Software Development Linkopedia February 2015

From the Editor of Methods & Tools - Tue, 02/17/2015 - 16:09
Here is our monthly selection of interesting knowledge material on programming, software testing and project management. This month you will find some interesting information and opinions about software and system modeling, programmer psychology, managing priorities, improving software architecture, technical user stories, free tools for Scrum, coding culture and integrating UX in Agile approaches. Web site: Fundamental Modeling Concepts The Fundamental Modeling Concepts (FMC) primarily provide a framework for the comprehensive description of software-intensive systems. Blog: The Ten Commandments of Egoless Programming Blog: WIP and Priorities: How to Get Fast and Focused! Blog: Sacrificial Architecture Blog: ...

Diamond Kata - Some Thoughts on Tests as Documentation

Mistaeks I Hav Made - Nat Pryce - Sun, 02/15/2015 - 13:13
Comparing example-based tests and property-based tests for the Diamond Kata, I’m struck by how well property-based tests reduce duplication of test code. For example, in the solutions by Sandro Mancuso and George Dinwiddie, not only do multiple tests exercise the same property with different examples but the tests duplicate assertions. Property-based tests avoid the former by defining generators of input data, but I’m not sure why the latter occurs. Perhaps Seb’s “test recycling” approach would avoid this kind of duplication. But compared to example-based tests, property-based tests do not work so well as an explanatory overview. Examples convey an overall impression of what the functionality is, but are not good at describing precise details. When reading example-based tests, you have to infer the properties of the code from multiple examples and informal text in identifiers and comments. The property-based tests I wrote for the Diamond Kata specify precise properties of the diamond function, but nowhere is there a test that describes that the function draws a diamond! There’s a place for both examples and properties. It’s not an either/or decision. However, explanatory examples used for documentation need not be test inputs. If we’re generating inputs for property tests and generating documentation for our software, we can combine the two, and insert generated inputs and calculated outputs into generated documentation.
Categories: Programming, Testing & QA

Quote of the Month February 2015

From the Editor of Methods & Tools - Tue, 02/10/2015 - 13:47
The worrying thing about writing tests is that there are numerous accounts of people introducing tests into their development process and... ending up with even more junk code to support than they had to begin with! Why? I guess, mainly because they lacked the knowledge and understanding needed to write good tests: tests, that is, that will be an asset, not a burden. Reference: Bad Tests, Good Tests, Tomek Kaczanowski, http://practicalunittesting.com/btgt.php

Pairing Patterns

Actively Lazy - Wed, 02/04/2015 - 22:01

Pair programming is hard. When most developers start pairing it feels unnatural. After a lifetime of coding alone, headphones on, no human contact, suddenly talking about every damned line of code can seem weird. Counter-productive, even.

And yet… effective pairing is the cheapest way to improve code quality. Despite what superficially seems like a halving in productivity – after all, your team of eight developers are only working on four things now instead of eight! – it turns out that productivity doesn’t drop at all. If anything, I’ve seen the opposite.

Going it Alone

In my experience most developers are used to, and feel most comfortable, coding on their own. It seems the most natural way to write code. But it introduces all sorts of problems.

If you’re the only person that wrote this code, there’s only one person that knows it; that means at 3am in six months’ time, guess who’s getting the phone call? And what happens when you decide to leave? No, worse, what happens when that other guy decides to leave and now you’ve got a metric fuckton of code to support? And of course, he couldn’t code for shit. His code stinks. His design stinks. You question his ability, his morals, even his parentage. Because everybody codes to a different style, it’s hard to maintain any consistency. This varies from the most trivial of stylistic complaints (braces on new lines, puhleeze, what a loser) to consistency of architectural approach and standardised tools and libraries. This makes picking up other people’s code hard.

When you’re coding on your own, it’s harder to be disciplined: I don’t need to write a unit test for this class, it’s pretty trivial. I don’t need to refactor this mess, I know how it works. With nobody looking over your shoulder it takes a lot more self-discipline to write the high quality code you know you ought to.

Getting Started Pairing

The easiest way to get started is to pair with someone that’s experienced at doing it. It can feel quite strange and quickly become dysfunctional if you’re not used to it, so having an experienced hand show you what effective pairing feels like is really important.

The most important thing to realise is that pairing is incredibly social. You will spend a massive amount of time talking. It turns out that days of coding can save literally minutes of thought up front. When you’re pairing, this thinking happens out loud as you argue about the best way to approach the design, the best way to test this class, the best way to refactor it.

This can feel alien at first and incredibly wasteful. Why don’t you just shut up and let me code? Because then we’ll just have to delete your crap code and you’ll feel bad. Or worse, we’ll push it so you don’t feel bad and then we’ll come back to this mess again and again over the coming months and pay an incredibly high price instead of spending another few minutes discussing it now until we agree.

The Roles

When pairing we traditionally label the two roles “driver” and “navigator”. The driver is the person with their hands on the keyboard, typing. The navigator isn’t. So what the hell’s the navigator doing? The critical thing is that they’re not just sitting there watching. The driver is busy writing good code that compiles; the driver is focused on details. The navigator is looking at the bigger picture: making sure that what we’re doing is consistent with the overall design.

One thing I really struggle with, but that’s really important as the navigator: don’t interrupt the driver’s flow. Resist the temptation to tell the driver there’s a missing bracket or semi-colon. Resist the urge to tell them what order to fix the compile errors in. Keep track of what needs to be done; if the driver misses something small, write it down and come back to it.

The navigator should be taking copious notes, letting the driver stay hands-on-keyboard typing. If there’s a test we’ve spotted we’re missing, write it down. If there’s an obvious design smell we need to come back to, write it down. If there’s a refactoring we should do next, write it down. The navigator uses these notes to guide the coding session – ensuring details aren’t missed and that we keep heading in the right direction and come back to every detail we’ve spotted along the way.

The navigator can also keep track of the development “call stack”. You know how it goes: we started writing the shopping basket returns a price in euros test; but to do that we need to change the basket item get price method; this breaks a couple of basket item unit tests, the first of these shows we don’t have a currency conversion available for a basket item; so now we’re changing how currency conversion is constructed so we can pass it into the basket item factory. This call stack of development activities can get very deep if you’re not careful, but a disciplined navigator with a clear navigator’s pad will guide the way.

Changing Roles

Generally the person that knows the domain / code base / problem the best should spend the least time being the driver. If I don’t know this code and you’re driving, I’m just gonna sit here watching you type. I can’t really contribute any design ideas because you know the domain. I can’t ask questions because it stops you typing. But the other way round: I can be busy typing learning the code as I go; while you use your superior knowledge to guide me in the right direction. I can ask lots of questions because when I don’t know, work stops until I’m happy again.

A good approach can be ping-pong pairing: this is where one person writes a failing test, the other makes it pass then writes another failing test, back to the first to make this test pass and then write another failing test, and so on and so on… This can give a good balance to a pairing session as both developers write test and production code and gives a natural rhythm preventing any one developer from dominating the driver role.

Sometimes it’s necessary to impose a time limit; I find 25 minutes is long enough for one person to be driving. This can happen when someone has an idea about a refactoring, especially if it becomes a sprawling change. 25 minutes also puts a good upper limit on a change: if you’ve not been able to commit to source control in 25 minutes, it is definitely time to abort and do over.

At the end of the day, write up your navigator pad and email it to your partner. The following day you can swap pairs, allowing either of you to carry on from exactly where you left off today.

Conclusion

Pairing can feel strange at first, but with practice it will begin to feel normal. If you keep pairing day in, day out, you will come to rely on having a second brain alongside you. You’ll realise you can get through complex work faster because you’ve got two people working at different levels of detail. Keep pairing long enough and coding on your own will begin to feel strange, almost dangerous. Who’s watching my back?


Categories: Programming, Testing & QA

The First Annual Testing on the Toilet Awards

Google Testing Blog - Tue, 02/03/2015 - 17:51
By Andrew Trenk

The Testing on the Toilet (TotT) series was created in 2006 as a way to spread unit-testing knowledge across Google by posting flyers in bathroom stalls. It quickly became a part of Google culture and is still going strong today, with new episodes published every week and read in hundreds of bathrooms by thousands of engineers in Google offices across the world. Initially focused on content related to testing, TotT now covers a variety of technical topics, such as tips on writing cleaner code and ways to prevent security bugs.

While TotT episodes often have a big impact on many engineers across Google, until now we had never done anything to formally thank authors for their contributions. To fix that, we decided to honor the most popular TotT episodes of 2014 by establishing the Testing on the Toilet Awards. The winners were chosen through a vote that was open to all Google engineers. The Google Testing Blog is proud to present the winners that were posted on this blog (there were two additional winners that don’t appear here, since we only post testing-related TotT episodes).

And the winners are ...

Erik Kuefler: Test Behaviors, Not Methods and Don't Put Logic in Tests 
Alex Eagle: Change-Detector Tests Considered Harmful

The authors of these episodes received their very own Flushy trophy, which they can proudly display on their desks.



(The logo on the trophy is the same one we put on the printed version of each TotT episode, which you can see by looking for the “printer-friendly version” link in the TotT blog posts).

Congratulations to the winners!

Categories: Testing & QA

Setting up an OpenVPN server inside an AWS VPC

Agile Testing - Grig Gheorghiu - Sat, 01/31/2015 - 23:56
My goal in this post is to show how to set up an OpenVPN server within an AWS VPC so that clients can connect to EC2 instances via a VPN connection. It's fairly well documented how to configure a site-to-site VPN with an AWS VPC gateway, but articles talking about client-based VPN connections into a VPC are harder to find.

== OpenVPN server setup ==

Let's start with setting up the OpenVPN server. I launched a new Ubuntu 14.04 instance in our VPC and I downloaded the latest openvpn source code via:

wget http://swupdate.openvpn.org/community/releases/openvpn-2.3.6.tar.gz

In order for the 'configure' step to succeed, I also had to install the following Ubuntu packages:

apt-get install build-essential openssl libssl-dev lzop liblzo2-dev libpam-dev

I then ran the usual commands inside the openvpn-2.3.6 directory:

./configure; make; sudo make install

At this point I proceeded to set up my own Certificate Authority (CA), per the OpenVPN HOWTO guide. As it turned out, I needed the easy-rsa helper scripts on the server running openvpn, so I got them from GitHub:

git clone https://github.com/OpenVPN/easy-rsa.git

To generate the master CA certificate & key, I did the following:

cd ~/easy-rsa/easyrsa3
cp vars.example vars

- edited the vars file and set these variables to the proper values for my organization:
set_var EASYRSA_REQ_COUNTRY
set_var EASYRSA_REQ_PROVINCE
set_var EASYRSA_REQ_CITY
set_var EASYRSA_REQ_ORG
set_var EASYRSA_REQ_EMAIL
set_var EASYRSA_REQ_OU

./easyrsa init-pki
(this will create the correct directory structure)
./easyrsa build-ca
(this will use the info specified in the vars file above)

To generate the OpenVPN server certificate and key, I ran:

./easyrsa build-server-full server
(I was prompted for a password for the server key)

To generate an OpenVPN client certificate and key for user myuser, I ran:

./easyrsa build-client-full myuser
(I was prompted for a password for the client key)

The next step was to generate the Diffie-Hellman (DH) parameters for the server by running:

./easyrsa gen-dh

I was ready at this point to configure the OpenVPN server.

I created a directory called /etc/openvpn and copied the pki directory under ~/easy-rsa/easyrsa3 to /etc/openvpn. I also copied the sample server configuration file ~/openvpn-2.3.6/sample/sample-config-files/server.conf to /etc/openvpn.
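
In concrete terms, that amounts to something like this (a sketch; adjust the paths if your checkouts live elsewhere):

sudo mkdir -p /etc/openvpn
sudo cp -r ~/easy-rsa/easyrsa3/pki /etc/openvpn/
sudo cp ~/openvpn-2.3.6/sample/sample-config-files/server.conf /etc/openvpn/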

I edited /etc/openvpn/server.conf and specified the following:

ca /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/issued/server.crt
key /etc/openvpn/pki/private/server.key  # This file should be kept secret
dh /etc/openvpn/pki/dh.pem

server 10.9.0.0 255.255.255.0
ifconfig-pool-persist /etc/openvpn/ipp.txt

push "route 172.30.0.0 255.255.0.0"

The first block specifies the location of the CA certificate, the server key and certificate, and the DH parameters.

The 'server' parameter specifies a new subnet from which both the OpenVPN server and the OpenVPN clients connecting to the server will get their IP addresses. I set it to 10.9.0.0/24. The client IP allocations will be saved in the ipp.txt file, as specified in the ifconfig-pool-persist parameter.

One of the most important options, which I missed when I initially configured the server, is the 'push route' one. This makes the specified subnet (i.e. the instances in the VPC that you want to get to via the OpenVPN server) available to the clients connecting to the OpenVPN server without the need to create static routes on the clients. In my case, all the EC2 instances in the VPC are on the 172.30.0.0/16 subnet, so that's what I specified above.

Two more very important steps are needed on the OpenVPN server. It took me quite a while to find them so I hope you will be spared the pain.

The first step was to turn on IP forwarding on the server:

- uncomment the following line in /etc/sysctl.conf:
net.ipv4.ip_forward=1

- run
sysctl -p

The final step in the configuration of the OpenVPN server was to make it do NAT via iptables masquerading (thanks to rbgeek's blog post for these last two critical steps):

- run
iptables -t nat -A POSTROUTING -s 10.9.0.0/24 -o eth0 -j MASQUERADE

- also add the above line to /etc/rc.local so it gets run on reboot
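
For reference, /etc/rc.local would end up looking something like this on Ubuntu 14.04 (a sketch; keep exit 0 as the last line):

#!/bin/sh -e
# masquerade VPN client traffic so replies route back through the OpenVPN server
iptables -t nat -A POSTROUTING -s 10.9.0.0/24 -o eth0 -j MASQUERADE
exit 0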

Now all that's needed on the server is to actually run openvpn. You can run it in the foreground for troubleshooting purposes via:

openvpn /etc/openvpn/server.conf

Once everything works, run it in daemon mode via:

openvpn --daemon --config /etc/openvpn/server.conf

You will be prompted for the server key password when you start up openvpn. I haven't yet looked at how to run the server in a fully automated way.
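
Two approaches that should work, though I haven't tried them here: generate the server key without a passphrase in the first place, or point openvpn at a root-only-readable file holding the passphrase (the server.pass path is just an example):

./easyrsa build-server-full server nopass

openvpn --daemon --config /etc/openvpn/server.conf --askpass /etc/openvpn/server.pass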

I almost forgot to mention that you need to allow incoming traffic to UDP port 1194 in the AWS security group your OpenVPN server belongs to. Also allow traffic from that security group to the security groups of the EC2 instances that you actually want to reach over the OpenVPN tunnel.
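
If you drive AWS from the command line, the first rule looks something like this (sg-11111111 stands in for your OpenVPN server's security group; consider restricting the CIDR to your own network instead of 0.0.0.0/0):

aws ec2 authorize-security-group-ingress --group-id sg-11111111 --protocol udp --port 1194 --cidr 0.0.0.0/0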

== OpenVPN client setup ==

This is on a Mac OSX Mavericks client, but I'm sure it's similar for other clients.

Install tuntap
- download tuntap_20150118.tar.gz from http://tuntaposx.sourceforge.net
- untar and install tuntap_20150118.pkg

Install lzo
wget http://www.oberhumer.com/opensource/lzo/download/lzo-2.06.tar.gz
tar xvfz lzo-2.06.tar.gz
cd lzo-2.06
./configure; make; sudo make install

Install openvpn
- download openvpn-2.3.6.tar.gz from http://openvpn.net/index.php/open-source/downloads.html
tar xvfz openvpn-2.3.6.tar.gz
cd openvpn-2.3.6
./configure; make; sudo make install

At this point ‘openvpn --help’ should work.

The next step for the client setup is to copy the CA certificate ca.crt, and the client key and certificate (myuser.key and myuser.crt), from the OpenVPN server to the local client. I created an openvpn directory under my home directory on my Mac and dropped ca.crt in ~/openvpn/pki, myuser.key in ~/openvpn/pki/private and myuser.crt in ~/openvpn/pki/issued.
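
The copying might look something like this ('vpnserver' stands in for however you reach the OpenVPN server over ssh; the files are under /etc/openvpn/pki as set up earlier):

mkdir -p ~/openvpn/pki/private ~/openvpn/pki/issued
scp vpnserver:/etc/openvpn/pki/ca.crt ~/openvpn/pki/
scp vpnserver:/etc/openvpn/pki/private/myuser.key ~/openvpn/pki/private/
scp vpnserver:/etc/openvpn/pki/issued/myuser.crt ~/openvpn/pki/issued/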
I also copied the sample file ~/openvpn-2.3.6/sample/sample-config-files/client.conf to ~/openvpn and specified the following parameters in that file:

remote EXTERNAL_IP_ADDRESS_OF_OPENVPN_SERVER 1194
ca /Users/myuser/openvpn/pki/ca.crt
cert /Users/myuser/openvpn/pki/issued/myuser.crt
key /Users/myuser/openvpn/pki/private/myuser.key

Then I started up the OpenVPN client via:

sudo openvpn ~/openvpn/client.conf

(at this point I was prompted for the password for myuser.key)

To verify that the OpenVPN tunnel is up and running, I ping-ed the internal IP address of the OpenVPN server (in my case it was 10.9.0.1 on the internal subnet I specified in server.conf), as well as the internal IPs of various EC2 instances behind the OpenVPN server. Finally, I ssh-ed into those internal IP addresses and declared victory.
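
In other words, something like this (172.30.0.10 is a made-up instance address on the VPC subnet):

ping 10.9.0.1
ping 172.30.0.10
ssh ubuntu@172.30.0.10
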
That's about it. Hope this helps!

UPDATE: I discovered in the meantime a very good Mac OS X GUI tool for managing client OpenVPN connections: Tunnelblick. All it took was importing the client.conf file mentioned above.

Testing on the Toilet: Change-Detector Tests Considered Harmful

Google Testing Blog - Wed, 01/28/2015 - 01:43
by Alex Eagle

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.


You have just finished refactoring some code without modifying its behavior. Then you run the tests before committing and… a bunch of unit tests are failing. While fixing the tests, you get a sense that you are wasting time by mechanically applying the same transformation to many tests. Maybe you introduced a parameter in a method, and now must update 100 callers of that method in tests to pass an empty string.

What does it look like to write tests mechanically? Here is an absurd but obvious way:
// Production code:
def abs(i: Int)
  return (i < 0) ? i * -1 : i

// Test code:
for (line: String in File(prod_source).read_lines())
  switch (line.number)
    1: assert line.content equals "def abs(i: Int)"
    2: assert line.content equals "return (i < 0) ? i * -1 : i"

That test is clearly not useful: it contains an exact copy of the code under test and acts like a checksum. A correct or incorrect program is equally likely to pass a test that is a derivative of the code under test. No one is really writing tests like that, but how different is it from this next example?
// Production code:
def process(w: Work)
  firstPart.process(w)
  secondPart.process(w)

// Test code:
part1 = mock(FirstPart)
part2 = mock(SecondPart)
w = Work()
Processor(part1, part2).process(w)
verify_in_order
  was_called part1.process(w)
  was_called part2.process(w)

It is tempting to write a test like this because it requires little thought and will run quickly. This is a change-detector test—it is a transformation of the same information in the code under test—and it breaks in response to any change to the production code, without verifying correct behavior of either the original or modified production code.

Change detectors provide negative value, since the tests do not catch any defects, and the added maintenance cost slows down development. These tests should be re-written or deleted.

Categories: Testing & QA