
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Testing & QA

Software Development Conferences Forecast March 2015

From the Editor of Methods & Tools - Tue, 03/31/2015 - 16:00
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software […]

Quote of the Month March 2015

From the Editor of Methods & Tools - Wed, 03/25/2015 - 09:10
Competencies versus Roles: We’ve seen a positive move toward emphasizing competencies in a team rather than roles or titles. As teams make that change, we see fewer “It’s not my job” excuses and more “How can I help?” conversations. Team members will continue to have core competencies in some areas more than others, but they […]

Impact-Driven Scrum, Code Review & #NoEstimates in Methods & Tools Spring 2015 issue

From the Editor of Methods & Tools - Mon, 03/23/2015 - 16:13
Methods & Tools – the free e-magazine for software developers, testers and project managers – has just published its Spring 2015 issue that discusses Impact-Driven Scrum, Code Review, #NoEstimates, Self-Selecting Teams, Software Laws and the Kanboard and ConQAT open source tools. * Impact-Driven Scrum Delivery * Code Review: Why It Matters * #NoEstimates – Alternative to […]

Triggering Jenkins jobs remotely via git post-commit hooks

Agile Testing - Grig Gheorghiu - Wed, 03/18/2015 - 23:49
Assume you want a Jenkins job (for example a job that deploys code or a job that runs integration tests) to run automatically every time you commit code via git. One way to do this would be to configure GitHub to call a webhook exposed by Jenkins, but this is tricky to do when your Jenkins instance is not exposed to the world.

One way I found to achieve this is to trigger the Jenkins job remotely via a local git post-commit hook. There were several steps I had to take:

1) Create a Jenkins user to be used by remote curl commands -- let's call it user1 with password password1.

2) Configure a given Jenkins job -- let's call it JOB-NUMBER1 -- to allow remote builds to be triggered. If you go to the Jenkins configuration page for that job, you'll see a checkbox under the Build Triggers section called "Trigger builds remotely (e.g. from scripts)". Check that checkbox and also specify a random string as the Authentication Token -- let's say it is mytoken1.

3) Try to trigger a remote build for JOB-NUMBER1 by using a curl command similar to this one:

curl --user 'user1:password1' -X POST "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/build" --data token=mytoken1 --data delay=0sec

If the Jenkins build is parameterized, you need to specify each parameter in the curl command, even if those parameters have default values specified in the Jenkins job definition. Let's say you have 2 parameters, TARGET_HOST and TARGET_USER. Then the curl command looks something like this:

curl --user 'user1:password1' -X POST "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/build" --data token=mytoken1 --data delay=0sec --data-urlencode json='{"parameter": [{"name":"TARGET_HOST", "value":"myhost1.mycompany.com"}, {"name":"TARGET_USER", "value":"mytargetuser1"}]}'
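As an aside, recent Jenkins versions also expose a buildWithParameters endpoint, which accepts each parameter as a plain form field and can be easier to script than the json payload above (a sketch using the same hypothetical job and parameters):

curl --user 'user1:password1' -X POST "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/buildWithParameters" --data token=mytoken1 --data TARGET_HOST=myhost1.mycompany.com --data TARGET_USER=mytargetuser1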

When you run these curl commands, you should see JOB-NUMBER1 being triggered instantly in the Jenkins dashboard.

Note: if you get an error similar to "HTTP ERROR 403 No valid crumb was included in the request" it means that you have "Prevent Cross Site Request Forgery exploits" checked on the Jenkins "Configure Global Security" page. You need to uncheck that option. Since you're most probably not exposing your Jenkins instance to the world, that should be fine.
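If you would rather leave CSRF protection enabled, an alternative (assuming your Jenkins version ships the standard crumb issuer) is to request a crumb first and pass it along as a header:

# fetch a crumb for user1, then include it in the trigger request
CRUMB=$(curl -s --user 'user1:password1' 'http://jenkins.mycompany.com:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')

curl --user 'user1:password1' -X POST -H "$CRUMB" "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/build" --data token=mytoken1 --data delay=0sec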

4) Create a git post-commit hook. To do this, create a file called post-commit in the local .git/hooks directory of the repository from which you want to trigger the Jenkins job. The post-commit file is a regular bash script:

#!/bin/bash

curl --user 'user1:password1' -X POST "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/build" --data token=mytoken1 --data delay=0sec

Don't forget to make the post-commit file executable: chmod 755 post-commit

At this point, whenever you commit code in this repository, you should see the Jenkins job being triggered instantly.
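A possible refinement: since the hook fires on every commit, you may want to trigger the job only for commits on a particular branch. A small guard at the top of the post-commit script takes care of that (a sketch; adjust the branch name to your workflow):

#!/bin/bash

# only trigger Jenkins for commits on master (hypothetical policy)
branch=$(git rev-parse --abbrev-ref HEAD)
[ "$branch" = "master" ] || exit 0

curl --user 'user1:password1' -X POST "http://jenkins.mycompany.com:8080/job/JOB-NUMBER1/build" --data token=mytoken1 --data delay=0sec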

Dependency injection with PostSharp

Actively Lazy - Wed, 03/18/2015 - 22:21

I don’t really like IoC containers. Or rather, I don’t like the crappy code people write when they’re given an IoC container. Before you know it you have NounVerbers everywhere, a million dependencies and no decent domain model. Dependencies should be things genuinely external to your application: everything outside of the core domain model that your application represents.

  • A web service? That’s a dependency
  • A database? Yup
  • A message queue? Definitely a dependency
  • A scheduler or thread pool? Yup
  • Any NounVerber (PriceCalculator, StockFetcher, BasketFactory, VatCalculator)? No! Not a dependency. Stop it. They’re all part of your core business domain and are really methods on a class. If you can’t write Price.Calculate() or Stock.Fetch() or new Basket() or Vat.Calculate(), then fix your domain model first before you go hurting yourself with an IoC container

A while back I described a very simple, hand-rolled approach to dependency injection. But if we wave a little PostSharp magic we can improve on that basic idea. All the source code for this is available on GitHub.

It works like this: if we have a dependency, say an AuthService, we declare an interface that business objects can implement to request that they have the dependency injected into them. In this case, IRequireAuthService.

class User : IRequireAuthService
{
  public IAuthService AuthService { set; private get; }
}

We create a DependencyInjector that can set these properties:

public void InjectDependencies(object instance)
{
  if (instance is IRequireAuthService)
    ((IRequireAuthService)instance).AuthService = AuthService;
  // ... one "if (instance is IRequire...)" block per injectable dependency
}

This might not be the prettiest method – you’ll end up with an if…is IRequire… line for each dependency you can inject – but that is precisely the point: it provides a certain amount of friction. While it is still easy to add a new dependency, developers are discouraged from doing so casually. This small amount of friction massively limits the unchecked growth of dependencies so prevalent with IoC containers, and it is why I prefer the hand-rolled approach to off-the-shelf IoC containers.

So how do we trigger the dependency injector to do what it has to do? This is where some PostSharp magic comes in. We declare an attribute to use on the constructor:

  [InjectDependencies]
  public User(string id)

Via the magic of PostSharp aspect weaving, this attribute causes some code to be executed before the constructor. The attribute itself is a standard PostSharp OnMethodBoundaryAspect:

[Serializable]
public class InjectDependenciesAttribute : OnMethodBoundaryAspect
{
  public sealed override void OnEntry(MethodExecutionArgs args)
  {
    DependencyInjector.CurrentInjector.InjectDependencies(args.Instance);
    base.OnEntry(args);
  }
}

And that’s it – PostSharp weaves this method in before each constructor that carries the [InjectDependencies] attribute. We get the current dependency injector and pass in the object instance (i.e. the newly created User instance) to have dependencies injected into it. Just like that we have a very simple dependency injector. Even better, all this aspect weaving magic is available with the Express (free!) edition of PostSharp.

Taking it Further

There are a couple of obvious extensions to this. You can create a TestDependencyInjector so that your unit tests can provide their own (mock) implementations of dependencies. This can also include standard (stub) implementations of some dependencies. E.g. a dependency that manages cross-thread scheduling can be replaced by an immediate (synchronous) implementation for unit tests to ensure that unit tests are single-threaded and repeatable.

Secondly, the DependencyInjector uses a ThreadLocal to store the current dependency injector. If you use background threads and want dependency injection to work there, you need a way of pushing the dependency injector onto the background thread. This generally means wrapping thread spawning code (which will itself be a dependency). You’ll want to wrap any threading code anyway to make it unit-testable.
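To make those two ideas concrete, here is a minimal sketch of the injector plumbing, assuming a ThreadLocal-backed CurrentInjector and a TestDependencyInjector subclass; apart from CurrentInjector and InjectDependencies, which appear above, the names here are illustrative:

using System.Threading;

public class DependencyInjector
{
  // Each thread sees its own ambient injector; a background thread must
  // have the injector pushed onto it explicitly, as described above.
  private static readonly ThreadLocal<DependencyInjector> current =
    new ThreadLocal<DependencyInjector>(() => new DependencyInjector());

  public static DependencyInjector CurrentInjector
  {
    get { return current.Value; }
    set { current.Value = value; }
  }

  // Real dependency by default; a test subclass overrides it.
  protected virtual IAuthService AuthService
  {
    get { return new HttpAuthService(); } // hypothetical production implementation
  }

  public void InjectDependencies(object instance)
  {
    if (instance is IRequireAuthService)
      ((IRequireAuthService)instance).AuthService = AuthService;
    // ... one check per injectable dependency
  }
}

// Installed by unit tests: DependencyInjector.CurrentInjector = new TestDependencyInjector(mock);
public class TestDependencyInjector : DependencyInjector
{
  private readonly IAuthService stubAuthService;

  public TestDependencyInjector(IAuthService stubAuthService)
  {
    this.stubAuthService = stubAuthService;
  }

  protected override IAuthService AuthService
  {
    get { return stubAuthService; }
  }
}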

Compile Time Checks

Finally, the most common failure mode we encountered with this was people forgetting to put [InjectDependencies] on the constructor, which means you get nulls at runtime instead of dependencies. With a bit more PostSharp magic (this brand of magic requires the paid-for version, though) we can stop that, too. First, we mark each IRequire… interface with a new attribute that indicates it manages injection of a dependency:

[Dependency]
public interface IRequireAuthService
{
  IAuthService AuthService { set; }
}

We configure this attribute to be inherited to all implementation classes – so all business objects that require auth service get the behaviour – then we define a compile time check to verify that the constructors have [InjectDependencies] defined:

public override bool CompileTimeValidate(System.Reflection.MethodBase method)
{
  if (!method.CustomAttributes.Any(a => a.AttributeType == typeof(InjectDependenciesAttribute)))
  {
    Message.Write(SeverityType.Error, "InjectDependencies", "No [InjectDependencies] declared on " + method.DeclaringType.FullName + "." + method.Name, method);
    return false;
  }
  return base.CompileTimeValidate(method);
}

This compile time check now makes the build fail if I ever declare a class implementing IRequireAuthService without adding [InjectDependencies] to the class’ constructor.
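For orientation, one plausible shape for the enclosing [Dependency] aspect, assuming PostSharp's multicast inheritance is what carries the attribute from the interface onto the constructors of implementing classes (an assumption; the post does not show this part):

using System;
using System.Linq;
using PostSharp.Aspects;
using PostSharp.Extensibility;

// Sketch only: multicast inheritance propagates the attribute from the
// interface to the instance constructors of every implementing class.
[Serializable]
[MulticastAttributeUsage(MulticastTargets.InstanceConstructor,
                         Inheritance = MulticastInheritance.Multicast)]
public class DependencyAttribute : MethodLevelAspect
{
  public override bool CompileTimeValidate(System.Reflection.MethodBase method)
  {
    if (!method.CustomAttributes.Any(a => a.AttributeType == typeof(InjectDependenciesAttribute)))
    {
      Message.Write(SeverityType.Error, "InjectDependencies", "No [InjectDependencies] declared on " + method.DeclaringType.FullName + "." + method.Name, method);
      return false;
    }
    return base.CompileTimeValidate(method);
  }
}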

Simple, hand-rolled dependency injection with compile time validation thanks to PostSharp!

 


Categories: Programming, Testing & QA

Exploratory Testing 3.0

James Bach’s Blog - Mon, 03/16/2015 - 17:50

[Authors’ note: Others have already made the point we make here: that exploratory testing ought to be called testing. In fact, Michael said that about tests in 2009, and James wrote a blog post in 2010 that seems to say that about testers. Aaron Hodder said it quite directly in 2011, and so did Paul Gerrard. While we have long understood and taught that all testing is exploratory (here’s an example of what James told one student, last year), we have not been ready to make the rhetorical leap away from pushing the term “exploratory testing.” Even now, we are not claiming you should NOT use the term, only that it’s time to begin assuming that testing means exploratory testing, instead of assuming that it means scripted testing that also has exploration in it to some degree.]

By James Bach and Michael Bolton

In the beginning, there was testing. No one distinguished between exploratory and scripted testing. Jerry Weinberg’s 1961 chapter about testing in his book, Computer Programming Fundamentals, depicted testing as inherently exploratory and expressed caution about formalizing it. He wrote, “It is, of course, difficult to have the machine check how well the program matches the intent of the programmer without giving a great deal of information about that intent. If we had some simple way of presenting that kind of information to the machine for checking, we might just as well have the machine do the coding. Let us not forget that complex logical operations occur through a combination of simple instructions executed by the computer and not by the computer logically deducing or inferring what is desired.”

Jerry understood the division between human work and machine work. But, then the formalizers came and confused everyone. The formalizers—starting officially in 1972 with the publication of the first testing book, Program Test Methods—focused on the forms of testing, rather than its essences. By forms, we mean words, pictures, strings of bits, data files, tables, flowcharts and other explicit forms of modeling. These are things that we can see, read, point to, move from place to place, count, store, retrieve, etc. It is tempting to look at these artifacts and say “Lo! There be testing!” But testing is not in any artifact. Testing, at the intersection of human thought processes and activities, makes use of artifacts. Artifacts of testing without the humans are like state of the art medical clinics without doctors or nurses: at best nearly useless, at worst, a danger to the innocents who try to make use of them.

We don’t blame the innovators. At that time, they were dealing with shiny new conjectures. The sky was their oyster! But formalization and mechanization soon escaped the lab. Reckless talk about “test factories” and poorly designed IEEE standards followed. Soon all “respectable” talk about testing was script-oriented. Informal testing was equated to unprofessional testing. The role of thinking, feeling, communicating humans became displaced.

James joined the fray in 1987 and tried to make sense of all this. He discovered, just by watching testing in progress, that “ad hoc” testing worked well for finding bugs and highly scripted testing did not. (Note: We don’t mean to make this discovery sound easy. It wasn’t. We do mean to say that the non-obvious truths about testing are in evidence all around us, when we put aside folklore and look carefully at how people work each day.) He began writing and speaking about his experiences. A few years into his work as a test manager, mostly while testing compilers and other developer tools, he discovered that Cem Kaner had coined a term—”exploratory testing”—to represent the opposite of scripted testing. In that original passage, just a few pages long, Cem didn’t define the term and barely described it, but he was the first to talk directly about designing tests while performing them.

Thus emerged what we, here, call ET 1.0.

(See The History of Definitions of ET for a chronological guide to our terminology.)

ET 1.0: Rebellion

Testing with and without a script are different experiences. At first, we were mostly drawn to the quality of ideas that emerged from unscripted testing. When we did ET, we found more bugs and better bugs. It just felt like better testing. We hadn’t yet discovered why this was so. Thus, the first iteration of exploratory testing (ET) as rhetoric and theory focused on escaping the straitjacket of the script and making space for that “better testing”. We were facing the attitude that “Ad hoc testing is uncontrolled and unmanageable; something you shouldn’t do.” We were pushing against that idea, and in that context ET was a special activity. So, the crusaders for ET treated it as a technique and advocated using that technique. “Put aside your scripts and look at the product! Interact with it! Find bugs!”

Most of the world still thinks of ET in this way: as a technique and a distinct activity. But we were wrong about characterizing it that way. Doing so, we now realize, marginalizes and misrepresents it. It was okay as a start, but thinking that way leads to a dead end. Many people today, even people who have written books about ET, seem to be happy with that view.

This era of ET 1.0 began to fade in 1995. At that time, there were just a handful of people in the industry actively trying to develop exploratory testing into a discipline, despite the fact that all testers unconsciously or informally pursued it, and always have. For these few people, it was not enough to leave ET in the darkness.

ET 1.5: Explication

Through the late ‘90s, a small community of testers beginning in North America (who eventually grew into the worldwide Context-Driven community, with some jumping over into the Agile testing community) was also struggling with understanding the skills and thought processes that constitute testing work in general. To do that, they pursued two major threads of investigation. One was Jerry Weinberg’s humanist approach to software engineering, combining systems thinking with family psychology. The other was Cem Kaner’s advocacy of cognitive science and Popperian critical rationalism. This work would soon cause us to refactor our notions of scripted and exploratory testing. Why? Because our understanding of the deep structures of testing itself was evolving fast.

When James joined ST Labs in 1995, he was for the first time fully engaged in developing a vision and methodology for software testing. This was when he and Cem began their fifteen-year collaboration. This was when Rapid Software Testing methodology first formed. One of the first big innovations on that path was the introduction of guideword heuristics as one practical way of joining real-time tester thinking with a comprehensive underlying model of the testing process. Lists of test techniques or documentation templates had been around for a long time, but as we developed vocabulary and cognitive models for skilled software testing in general, we started to see exploratory testing in a new light. We began to compare and contrast the important structures of scripted and exploratory testing and the relationships between them, instead of seeing them as activities that merely felt different.

In 1996, James created the first testing class called “Exploratory Testing.”  He had been exposed to design patterns thinking and had tried to incorporate that into the class. He identified testing competencies.

Note: During this period, James distinguished between exploratory and ad hoc testing—a distinction we no longer make. ET is an ad hoc process, in the dictionary sense: ad hoc means “to this; to the purpose”. He was really trying to distinguish between skilled and unskilled testing, and today we know better ways to do that. We now recognize unskilled ad hoc testing as ET, just as unskilled cooking is cooking, and unskilled dancing is dancing. The value of the label “exploratory testing” is simply that it is more descriptive of an activity that is, among other things, ad hoc.

In 1999, James was commissioned to define a formalized process of ET for Microsoft. The idea of a “formal ad hoc process” seemed paradoxical, however, and this set up a conflict which would be resolved via a series of constructive debates between James and Cem. Those debates would lead to what we here call ET 2.0.

There was also progress on making ET more friendly to project management. In 2000, inspired by the work for Microsoft, James and Jon Bach developed “Session-Based Test Management” for a group at Hewlett-Packard. In a sense this was a generalized form of the Microsoft process, with the goal of creating a higher level of accountability around informal exploratory work. SBTM was intended to help defend exploratory work from compulsive formalizers who were used to modeling testing in terms of test cases. In one sense, SBTM was quite successful in helping people to recognize that exploratory work was entirely manageable. SBTM helped to transform attitudes from “don’t do that” to “okay, blocks of ET time are things just like test cases are things.”

By 2000, most of the testing world seemed to have heard something about exploratory testing. We were beginning to make the world safe for better testing.

ET 2.0: Integration

The era of ET 2.0 has been a long one, based on a key insight: the exploratory-scripted continuum. This is a sliding bar on which testing ranges from completely exploratory to completely scripted. All testing work falls somewhere on this scale. Having recognized this, we stopped speaking of exploratory testing as a technique, but rather as an approach that applies to techniques (or as Cem likes to say, a “style” of testing).

We could think of testing that way because, unlike ten years earlier, we now had a rich idea of the skills and elements of testing. It was no longer some “creative and mystical” act that some people are born knowing how to do “intuitively”. We saw testing as involving specific structures, models, and cognitive processes other than exploring, so we felt we could separate exploring from testing in a useful way. Much of what we had called exploratory testing in the early ’90s we now began to call “freestyle exploratory testing.”

By 2006, we settled into a simple definition of ET: simultaneous learning, test design, and test execution. To help push the field forward, James and Cem convened a meeting called the Exploratory Testing Research Summit in January 2006. (The participants were James Bach, Jonathan Bach, Scott Barber, Michael Bolton, Elisabeth Hendrickson, Cem Kaner, Mike Kelly, Jonathan Kohl, James Lyndsay, and Rob Sabourin.) As we prepared for that, we made a disturbing discovery: every single participant in the summit agreed with the definition of ET, but few of us agreed on what the definition actually meant. This is a phenomenon we had no name for at the time, but which is now called shallow agreement in the CDT community. To combat shallow agreement and promote better understanding of ET, some of us decided to adopt a more evocative and descriptive definition of it, proposed originally by Cem and later edited by several others: “a style of testing that emphasizes the freedom and responsibility of the individual tester to continually optimize the quality of his work by treating test design, test execution, test result interpretation, and learning as mutually supporting activities that continue in parallel throughout the course of the project.” Independently of each other, Jon Bach and Michael had suggested the “freedom and responsibility” part to that definition.

And so we had come to a specific and nuanced idea of exploration and its role in testing. Exploration can mean many things: searching a space, being creative, working without a map, doing things no one has done before, confronting complexity, acting spontaneously, etc. With the advent of the continuum concept (which James’ brother Jon actually called the “tester freedom scale”) and the discussions at the ExTRS peer conference, we realized most of those different notions of exploration are already central to testing, in general. What the adjective “exploratory” added, and how it contrasted with “scripted,” was the dimension of agency. In other words: self-directedness.

The full implications of the new definition became clear in the years that followed, as James and Michael taught and consulted in the Rapid Software Testing methodology. We now recognize that by “exploratory testing”, we had been trying to refer to rich, competent testing that is self-directed. In other words, in all respects other than agency, skilled exploratory testing is not distinguishable from skilled scripted testing. Only agency matters, not documentation, nor deliberation, nor elapsed time, nor tools, nor conscious intent. You can be doing scripted testing without any scrap of paper nearby (scripted testing does not require that you follow a literal script). You can be doing scripted testing that has not been in any way pre-planned (someone else may be telling you what to do in real-time as they think of ideas). You can be doing scripted testing at a moment’s notice (someone might have just handed you a script, or you might have just developed one yourself). You can be doing scripted testing with or without tools (tools make testing different, but not necessarily more scripted). You can be doing scripted testing even unconsciously (perhaps you feel you are making free choices, but your models and habits have made an invisible prison for you). The essence of scripted testing is that the tester is not in control, but rather is being controlled by some other agent or process. This one simple, vital idea took us years to apprehend!

In those years we worked further on our notions of the special skills of exploratory testing. James and Jon Bach created the Exploratory Skills and Tactics reference sheet to bring specificity and detail to answer the question “what specifically is exploratory about exploratory testing?”

In 2007, another big slow leap was about to happen. It started small: inspired in part by a book called The Shape of Actions, James began distinguishing between processes that required human judgment and wisdom and those which did not. He called them “sapient” vs. “non-sapient.” This represented a new frontier for us: systematic study and development of tacit knowledge.

In 2009, Michael followed that up by distinguishing between testing and checking. Testing cannot be automated, but checking can be completely automated. Checking is embedded within testing. At first, James objected that, since there was already a concept of sapient testing, the distinction was unnecessary. To him, checking was simply non-sapient testing. But after a few years of applying these ideas in our consulting and training, we came to realize (as neither of us did at first) that checking and testing was a better way to think and speak than sapience and non-sapience. This is because “non-sapience” sounds like “stupid” and therefore it sounded like we were condemning checking by calling it non-sapient.

Do you notice how fine distinctions of language and thought can take years to work out? These ideas are the tools we need to sort out our practical decisions. Yet much like new drugs on the market, it can sometimes take a lot of experience to understand not only benefits, but also potentially harmful side effects of our ideas and terms. That may explain why those of us who’ve been working in the craft a long time are not always patient with colleagues or clients who shrug and tell us that “it’s just semantics.” It is our experience that semantics like these mean the difference between clear communication that motivates action and discipline, and fragile folklore that gets displaced by the next swarm of buzzwords to capture the fancy of management.

ET 3.0: Normalization

In 2011, sociologist Harry Collins began to change everything for us. It started when Michael read Tacit and Explicit Knowledge. We were quickly hooked on Harry’s clear writing and brilliant insight. He had spent many years studying scientists in action, and his ideas about the way science works fit perfectly with what we see in the testing field.

By studying the work of Harry and his colleagues, we learned how to talk about the difference between tacit and explicit knowledge, which allows us to recognize what can and cannot be encoded in a script or other artifacts. He distinguished between behaviour (the observable, describable aspects of an activity) and actions (behaviours with intention), a distinction which had inspired James’ distinction between sapient and non-sapient testing. He untangled the differences between mimeomorphic actions (actions that we want to copy and to perform in the same way every time) and polimorphic actions (actions that we must vary in order to deal with social conditions); in doing that, he helped to identify the extents and limits of automation’s power. He wrote a book (with Trevor Pinch) about how scientific knowledge is constructed; another (with Rob Evans) about expertise; yet another about how scientists decide to evaluate a specific experimental result.

Harry’s work helped lend structure to other ideas that we had gathered along the way.

  • McLuhan’s ideas about media and tools
  • Karl Weick’s work on sensemaking
  • Venkatesh Rao’s notions of tempo which in turn pointed us towards James C. Scott’s notion of legibility
  • The realization (brought to our attention by an innocent question from a tester at Barclays Bank) that the “exploratory-scripted continuum” is actually the “formality continuum.” In other words, to formalize an activity means to make it more scripted.
  • The realization of the important difference between spontaneous and deliberative testing, which is the degree of reflection that the tester is exercising. (This is not the same as exploratory vs. scripted, which is about the degree of agency.)
  • The concept of “responsible tester” (defined as a tester who takes full, personal, responsibility for the quality of his work).
  • The advent of the vital distinction between checking and testing, which replaced the need to talk about “sapience” in our rhetoric of testing.
  • The subsequent redefinition of the term “testing” within the Rapid Software Testing namespace to make these things more explicit (see below).

About That Last Bullet Point

ET 3.0 as a term is a bit paradoxical because what we are working toward, within the Rapid Software Testing methodology, is nothing less than the deprecation of the term “exploratory testing.”

Yes, we are retiring that term, after 22 years. Why?

Because we now define all testing as exploratory.  Our definition of testing is now this:

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.”

Where does scripted testing fit, then?  By “script” we are speaking of any control system or factor that influences your testing and lies outside of your realm of choice (even temporarily). This does not refer only to specific instructions you are given and that you must follow. Your biases script you. Your ignorance scripts you. Your organization’s culture scripts you. The choices you make and never revisit script you.

By defining testing to be exploratory, scripting becomes a guest in the house of our craft; a potentially useful but foreign element to testing, one that is interesting to talk about and apply as a tactic in specific situations. An excellent tester should not be complacent or dismissive about scripting, any more than a lumberjack can be complacent or dismissive about heavy equipment. This stuff can help you or ruin you, but no serious professional can ignore it.

Are you doing testing? Then you are already doing exploratory testing. Are you doing scripted testing? If you’re doing it responsibly, you are doing exploratory testing with scripting (and perhaps with checking).  If you’re only doing “scripted testing,” then you are just doing unmotivated checking, and we would say that you are not really testing. You are trying to behave like a machine, not a responsible tester.

ET 3.0, in a sentence, is the demotion of scripting to a technique, and the promotion of exploratory testing to, simply, testing.

Categories: Testing & QA

Software Architecture versus Code

From the Editor of Methods & Tools - Mon, 03/16/2015 - 17:35
Software architecture and coding are often seen as mutually exclusive disciplines, despite us referring to higher level abstractions when we talk about our software. You’ve probably heard others on your team talking about components, services and layers rather than objects when they’re having discussions. Take a look at the codebase though. Can you clearly see […]

History of Definitions of ET

James Bach’s Blog - Mon, 03/16/2015 - 16:20

History of the term “Exploratory Testing” as applied to software testing within the Rapid Software Testing methodology space.

For a discussion of some of the social and philosophical issues surrounding this chronology, see Exploratory Testing 3.0.

1988: First known use of the term, defined variously as “quick tests”; “whatever comes to mind”; “guerrilla raids” – Cem Kaner, Testing Computer Software. (There is explanatory text for different styles of ET in the 1988 edition of Testing Computer Software. Cem says that some of the text was actually written in 1983.)

1990: “Organic Quality Assurance”, James Bach’s first talk on agile testing, filmed by Apple Computer, which discussed exploratory testing without using the words agile or exploratory.

1993: June: “Persistence of Ad Hoc Testing” talk given at the ICST conference by James Bach. Beginning of James’ abortive attempt to rehabilitate the term “ad hoc.”

1995: February: First appearance of “exploratory testing” on Usenet, in a message by Cem Kaner.

1995: “Exploratory testing means learning, planning, and testing all at the same time.” – James Bach (Market Driven Software Testing class)

1996: “Simultaneous exploring, planning, and testing.” – James Bach (Exploratory Testing class v1.0)

1999: “An interactive process of concurrent product exploration, test design, and test execution.” – James Bach (Exploratory Testing class v2.0)

2001 (post WHET #1): The Bach View

Any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

The Kaner View

Any testing to the extent that the tester actively controls the design of the tests as those tests are performed, uses information gained while testing to design new and better tests, and where the following conditions apply:

  • The tester is not required to use or follow any particular test materials or procedures.
  • The tester is not required to produce materials or procedures that enable test re-use by another tester or management review of the details of the work done.

– Resolution between Bach and Kaner following WHET #1 and BBST class at Satisfice Tech Center.

(To account for both of these views, James started speaking of the “scripted/exploratory continuum,” which has greatly helped in explaining ET to factory-style testers.)

2003-2006: “Simultaneous learning, test design, and test execution.” – Bach, Kaner

2006-2015: “An approach to software testing that emphasizes the personal freedom and responsibility of each tester to continually optimize the value of his work by treating learning, test design and test execution as mutually supportive activities that run in parallel throughout the project.” – (Bach/Bolton edit of Kaner suggestion)

2015: Exploratory testing is now a deprecated term within Rapid Software Testing methodology. See testing, instead. (In other words, all testing is exploratory to some degree. The definition of testing in the RST space is now: “Evaluating a product by learning about it through exploration and experimentation, including to some degree: questioning, study, modeling, observation, inference, etc.”)

 

Categories: Testing & QA

Software Development Linkopedia March 2015

From the Editor of Methods & Tools - Wed, 03/11/2015 - 15:46
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about software development culture, project estimation, checking the health of your project team, the costs and benefits of unit testing, product backlog management, mobile architecture, test coverage and user experience. Blog: […]

Dogma Driven Development

Actively Lazy - Wed, 03/04/2015 - 21:24

We really are an arrogant, opinionated bunch, aren’t we? We work in an industry where there aren’t any right answers. We pretend what we do is computer “science”, when in reality it’s more art than science. It certainly isn’t engineering. Engineering suggests an underlying physics, mathematical models of how the world works. Is there a mathematical model of how to build software at scale? No. Do we understand the difference between what makes good software and bad software? No. Are there papers with published proofs of whether this idea or that idea has any observable effect on the software written by companies the world over? No. It turns out this is a difficult field: software is weird stuff. And yet we work in an industry full of close-minded people, convinced that their way is The One True Way. It’s not science, it’s basically art. Our industry is dominated by fashion.

Which language we work in is fashion: should we use Ruby, or Node.js, or maybe Clojure? Hey, Go seems pretty cool. By which I mean “I read about it on the internet, and I’d quite like to put it on my CV so can I please f*** up your million pound project in a big experiment of whether I can figure out all the nuances of the language faster than the project can de-rail?”

If it’s not the language we’re using, it’s architectural patterns. The dogma attached to REST. Jesus H Christ. It’s just a bunch of HTTP requests, no need to get so picky! For a while it was SOA. Then that became the old legacy thing, so now it’s all micro-services, which are totally different. Definitely. I read it on the internet, it must be true.

Everyone has their opinions. Christ, we’ve got our opinions. Thousands of blogs and wankers on Twitter telling you what they think about the world (exactly like this one). As if one person’s observations are useful for anything more than being able to replicate their past success, should you ever by mistake find yourself on their timeline from about six weeks ago.

For example: I wrote a post recently about pairing, and some fine specimen of internet based humanity felt the need to tell me that people who need to pair are an embarrassment to the profession, that we should find another line of work. Hahaha I know, don’t read the comments. Especially when it’s in reply to something you wrote. But seriously now, is it necessary to share your close minded ignorance with the world?

I shouldn’t get worked up about some asshat on the internet. But it’s not just some asshat on the internet. There are hundreds of thousands of these asshats with their closed minds and dogmatic views on the world. And not just asshats spouting off on the internet, but asshats getting paid to build the software that increasingly runs all our lives. When will we admit that we have no idea what we’re doing? The only way to get better is to learn as many tools and techniques as we can and hopefully, along the way, we’ll learn when to apply which techniques and when not to.

For example, I’ve worked with some people that don’t get TDD. Ok, fine – some people just aren’t “test infected”. And a couple of guys that really would rather gut me and fry my liver for dinner than pair with me. Do I feel the need to evangelise to them as though I’ve just found God? No. Does it offend me that they don’t follow my religion? No. Do I feel the need to suicide bomb their project? No. It’s your call. It’s your funeral. When I have proof that my way is The One True Way and yours is a sham, you can damn well bet I’ll be force feeding it to you. But given that ain’t gonna happen: I think we’re all pretty safe. If you don’t wanna pair, you put your headphones on and disappear into your silent reverie. Those of us that like pairing will pair, those of us that don’t, won’t. I’m fine with that.

The trouble is that we work in a farcical echo chamber of an industry, where the lessons of 40 years ago still haven’t been learnt properly. Where we keep repeating the mistakes of 20 years ago. Of 10 years ago. Of 5 years ago. Of 2 years ago. Of last week. For Christ’s sake people, can we not just learn a little of what’s gone before? All we have is mindless opinion, presented as fact. Everyone’s out to flog you their new shiny products, or whatever bullshit service they’re offering this week. No, sorry, it’s all utter bollocks. We know less about building decent software now than we did 40 years ago. It’s just that now we build a massive amount more of it. And it’s even more shit than it ever was. Only now we have those crazy bastards who would otherwise stand on street corners telling me that Jesus would save me if only I would let him; instead they’re selling me scrum master training or some other snake oil.

All of this is unfortunately entirely indistinguishable from reasoned debate, so youngsters entering the industry have no way to know that it’s just a bunch of wankers arguing about which colour to paint this new square wheel they invented. Until, after a few years, they become as jaded and cynical as the rest of us and decide to take advantage of all the other dumb fools out there. They find their little niche, their little way of making the world a little bit worse but themselves a little bit richer. And so the cycle repeats. Fashion begets fashion. Opinion begets opinion.

There aren’t any right answers in creating software. I know what I’ve found works some of the time. I get paid to put into practice what I know. I hope you do, too. But we’ve all had a different set of experiences which means we often don’t agree on what works and what doesn’t. But this is all we have. The plural of anecdote is not data.

All we have is individual judgement, borne out of individual experience. There is no grand unified theory of Correct Software Development. The best we can hope to do is learn from each other and try as many different approaches as possible. Try and fail safely and often. The more techniques you’ve tried the better the chance you can find the right technique at the right time.

Call it craftsmanship if you like. Call it art if you like. But it certainly isn’t science. And I don’t know about you, but it’s a very long time since I saw any engineering round these parts.


Categories: Programming, Testing & QA

The Virtue of Purgatory in Software Development

From the Editor of Methods & Tools - Tue, 03/03/2015 - 14:51
Having a decade or so of experience in software development behind me, I had the time to accumulate a lot of mistakes. One of the recurring patterns in these failures was the ambition to solve code issues too quickly. This was especially the case when the problem was related to code that I wrote, which made me […]

Sending Windows logs to Papertrail with nxlog

Agile Testing - Grig Gheorghiu - Thu, 02/26/2015 - 01:04
I am revisiting Papertrail as a log aggregation tool. It's really easy to send Linux logs to Papertrail via syslog or rsyslog or syslog-ng (see this article on how to configure syslog with TLS) but to send Windows logs you need to jump through some hoops.

Papertrail recommends nxlog as their Windows log management tool of choice, so that's what I used. This Papertrail article explains how to install and configure nxlog on Windows (I recommend enabling TLS).  The nxlog.conf template file provided by Papertrail will send Windows Event logs over. I also wanted to send application-specific logs, so here's what I did:

1) Add an Input section to nxlog.conf for each directory containing the files you want to send to Papertrail. For example, if one of your applications logs to C:\MyApp1\logs and your log files end with .log, you could have this input section:

# Monitor MyApp1 log files
<Input MyApp1>
 Module im_file
 File 'C:\\MyApp1\\logs\\*.log'
 Exec $Message = $raw_event;
 Exec if $Message =~ /GET \/ping/ drop();
 Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
 SavePos TRUE
 Recursive TRUE
</Input>

Some observations:

  • MyApp1 is the name of this Input section
  • The File statement points to the location and name of the log files
  • The first Exec statement saves the log line under consideration as the variable $Message
  • The second Exec statement drops messages that contain a specific regular expression, in my case just 'GET /ping' -- which happens to be health checks from the load balancer that pollute the logs; you can replace this with any regular expression that will filter out log lines you don't want sent to Papertrail
  • The next few statements were in the sample Input stanza from the template nxlog.conf file so I just left them there
2) Add more Input sections, one for each log location (i.e. multiple log files under a given directory) that you want to send to Papertrail. You need to give each Input section a unique name (e.g. MyApp1 above).
3) Add a Route section for the Input sections defined previously. If you defined 2 Input sections MyApp1 and MyApp2, your Route section would look something like:

<Route 2>
 Path MyApp1, MyApp2 => filewatcher_transformer => syslogout
</Route>

The filewatcher_transformer section was already included in the sample nxlog.conf file from Papertrail. The Route section above says that the files processed by the 2 Input paths MyApp1 and MyApp2 will be processed through the statements defined in the filewatcher_transformer section, then will be sent to Papertrail by virtue of being processed through the statements defined in the syslogout section.
At this point, if you restart the nxlog service on your Windows box, you should start seeing log entries from your application(s) flowing into the Papertrail console.
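For reference, restarting the service from an elevated command prompt (assuming the default service name used by the nxlog installer) is just:

net stop nxlog
net start nxlog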

Software Development Conferences Forecast February 2015

From the Editor of Methods & Tools - Tue, 02/24/2015 - 17:38
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software […]

Diamond Kata - TDD with only Property-Based Tests

Mistaeks I Hav Made - Nat Pryce - Fri, 02/20/2015 - 00:04
The Diamond Kata is a simple exercise that Seb Rose described in a recent blog post. Seb describes the Diamond Kata as: given a letter, print a diamond starting with ‘A’ with the supplied letter at the widest point. For example, print-diamond ‘C’ prints:

  A
 B B
C   C
 B B
  A

Seb used the exercise to illustrate how he “recycles” tests to help him work incrementally towards a full solution. Seb’s approach prompted Alastair Cockburn to write an article in response in which he argued for more thinking before programming. Alastair’s article shows how he approached the Diamond Kata with more up-front analysis. Ron Jeffries and George Dinwiddie responded to Alastair’s article, showing how they approached the Diamond Kata relying on emergent design to produce an elegant solution (“thinking all the time”, as Ron Jeffries put it). There was some discussion on Twitter, and several other people published their approaches. (I’ll list as many as I know about at the end of this article.)

The discussion sparked my interest, so I decided to have a go at the exercise myself. The problem seemed to me, at first glance, to be a good fit for property testing, so I decided to test-drive a solution using only property-based tests and see what happens. I wrote the solution in Scala and used ScalaTest to run and organise the tests and ScalaCheck for property testing.

What follows is an unexpurgated, warts-and-all walkthrough of my progress, not just the eventual complete solution. I made wrong turns and stupid mistakes along the way. The walkthrough is pretty long, so if you don’t want to follow through step by step, jump straight to the complete solution and/or my conclusions on how the exercise went and what I learned. Alternatively, if you want to follow the walkthrough in more detail, the entire history is on GitHub, with a commit per TDD step (add a failing test, commit, make the implementation pass the test, commit, refactor, commit, … and repeat).

Walkthrough

Getting Started: Testing the Test Runner

The first thing I like to do when starting a new project is make sure my development environment and test runner are set up right, that I can run tests, and that test failures are detected and reported. I use Gradle to bootstrap a new Scala project with dependencies on the latest versions of ScalaTest and ScalaCheck, and import the Gradle project into IntelliJ IDEA.

ScalaTest supports several different styles of test and assertion syntax. The user guide recommends writing an abstract base class that combines traits and annotations for your preferred testing style and test runner, so that’s what I do first:

@RunWith(classOf[JUnitRunner])
abstract class UnitSpec extends FreeSpec with PropertyChecks {
}

My test class extends UnitSpec:

class DiamondSpec extends UnitSpec {
}

I add a test that explicitly fails, to check that the test framework, IDE and build hang together correctly. When I see the test failure, I’m ready to write the first real test.

The First Test

Given that I’m writing property tests, I have to start with a simple property of the diamond function, not a simple example. The simplest property I can think of is: for all valid input characters, the diamond contains one or more lines of text.

To turn that into a property test, I must define “all valid input characters” as a generator. The description of the Diamond Kata defines valid input as a single upper case character.
ScalaCheck has a predefined generator for that:

val inputChar = Gen.alphaUpperChar

At this point, I haven’t decided how I will represent the diamond. I do know that my test will assert on the number of lines of text, so I write the property with respect to an auxiliary function, diamondLines(c: Char): Vector[String], which will generate a diamond for input character c and return the lines of the diamond in a vector.

"produces some lines" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).nonEmpty)
  }
}

I like the way that the test reads in ScalaTest/ScalaCheck. It is pretty much a direct translation of my English description of the property into code.

To make the test fail, I write diamondLines as:

def diamondLines(c : Char) : Vector[String] = {
  Vector()
}

The entire test class is:

import org.scalacheck._

class DiamondSpec extends UnitSpec {
  val inputChar = Gen.alphaUpperChar

  "produces some lines" in {
    forAll (inputChar) { c =>
      assert(diamondLines(c).nonEmpty)
    }
  }

  def diamondLines(c : Char) : Vector[String] = {
    Vector()
  }
}

The simplest implementation that will make that property pass is to return a single string:

object Diamond {
  def diamond(c: Char) : String = {
    "A"
  }
}

I make the diamondLines function in the test call the new function and split its result into lines:

def diamondLines(c : Char) = {
  Diamond.diamond(c).lines.toVector
}

The implementation can be used like this:

object DiamondApp extends App {
  import Diamond.diamond

  println(diamond(args.lift(0).getOrElse("Z").charAt(0)))
}

A Second Test, But It Is Not Very Helpful

I now need to add another property, to more tightly constrain the solution. I notice that the diamond always has an odd number of lines, and decide to test that: for all valid input characters, the diamond has an odd number of lines.

This implies that the number of lines is greater than zero (because vectors cannot have a negative number of elements and zero is even), so I change the existing test rather than adding another one:

"produces an odd number of lines" in {
  forAll (inputChar) { c =>
    assert(isOdd(diamondLines(c).length))
  }
}

def isOdd(n : Int) = n % 2 == 1

But this new test has a problem: my existing solution already passes it. The diamond function returns a single line, and 1 is an odd number. This choice of property is not helping drive the development forwards.

A Failing Test To Drive Development, But a Silly Mistake

The next simplest property I can think of is the number of lines of the diamond. If ord(c) is the number of letters between ‘A’ and c (zero for A, 1 for B, 2 for C, etc.), then: for all valid input characters, c, the number of lines in a diamond for c is 2*ord(c)+1.

At this point I make a silly mistake. I write my property as:

"number of lines" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).length == ord(c)+1)
  }
}

def ord(c: Char) : Int = c - 'A'

I don’t notice the mistake immediately. When I do, I decide to leave it in the code as an experiment to see if the property tests will detect the error by becoming inconsistent, and how long it will take before they do so. This kind of mistake would easily be caught by an example test. It’s a good idea to have a few examples, as well as properties, to act as smoke tests.

I make the test pass with the smallest amount of production code possible. I move the ord function from the test into the production code and use it to return the required number of lines that are all the same.
def diamond(c: Char) : String = {
  "A\n" * (ord(c)+1)
}

def ord(c: Char) : Int = c - 'A'

Despite sharing the ord function between the test and production code, there’s still some duplication. Both the production and test code calculate ord(c)+1. I want to address that before writing the next test.

Refactor: Duplicated Calculation

I replace ord(c)+1 with lineCount(c), which calculates the number of lines generated for an input letter, and inline the ord(c) function, because it’s now only used in one place.

object Diamond {
  def diamond(c: Char) : String = {
    "A\n" * lineCount(c)
  }

  def lineCount(c: Char) : Int = (c - 'A')+1
}

And I use lineCount in the test as well:

"number of lines" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).length == lineCount(c))
  }
}

On reflection, using the lineCount calculation from production code in the test feels like a mistake.

Squareness

The next property I add is: for all valid input characters, the text containing the diamond is square, where “is square” means the length of each line is equal to the total number of lines. In Scala, this is:

"squareness" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c) forall {_.length == lineCount(c)})
  }
}

I can make the test pass like this:

object Diamond {
  def diamond(c: Char) : String = {
    val side: Int = lineCount(c)
    ("A" * side + "\n") * side
  }

  def lineCount(c: Char) : Int = (c - 'A')+1
}

Refactor: Rename the lineCount Function

The lineCount is also being used to calculate the length of each line, so I rename it to squareSide.

object Diamond {
  def diamond(c: Char) : String = {
    val side: Int = squareSide(c)
    ("A" * side + "\n") * side
  }

  def squareSide(c: Char) : Int = (c - 'A')+1
}

Refactor: Clarify the Tests

I’m now a little dissatisfied with the way the tests read:

"number of lines" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).length == squareSide(c))
  }
}

"squareness" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c) forall {_.length == squareSide(c)})
  }
}

The “squareness” property does not stand alone. It doesn’t communicate that the output is square unless combined with the “number of lines” property. I refactor the test to disentangle the two properties:

"squareness" in {
  forAll (inputChar) { c =>
    val lines = diamondLines(c)
    assert(lines forall {line => line.length == lines.length})
  }
}

"size of square" in {
  forAll (inputChar) { c =>
    assert(diamondLines(c).length == squareSide(c))
  }
}

The Letter on Each Line

The next property I write specifies which characters are printed on each line. The characters of each line should be either a letter that depends on the index of the line, or a space. Because the diamond is vertically symmetrical, I only need to consider the lines from the top to the middle of the diamond. This makes the calculation of the letter for each line much simpler. I make a note to add a property for the vertical symmetry once I have made the implementation pass this test.
"single letter per line" in { forAll (inputChar) { c => val allLines = diamondLines(c) val topHalf = allLines.slice(0, allLines.size/2 + 1) for ((line, index) <- topHalf.zipWithIndex) { val lettersInLine = line.toCharArray.toSet diff Set(' ') val expectedOnlyLetter = ('A' + index).toChar assert(lettersInLine == Set(expectedOnlyLetter), "line " + index + ": \"" + line + "\"") } } } To make this test pass, I change the diamond function to: def diamond(c: Char) : String = { val side: Int = squareSide(c) (for (lc <- 'A' to c) yield lc.toString * side) mkString "\n" } This repeats the correct letter for the top half of the diamond, but the bottom half of the diamond is wrong. This will be fixed by the property for vertical symmetry, which I’ve noted down to write next. Vertical Symmetry The property for vertical symmetry is: For all input character, c, the lines from the top to the middle of the diamond, inclusive, are equal to the reversed lines from the middle to the bottom of the diamond, inclusive. "is vertically symmetrical" in { forAll(inputChar) { c => val allLines = diamondLines(c) val topHalf = allLines.slice(0, allLines.size / 2 + 1) val bottomHalf = allLines.slice(allLines.size / 2, allLines.size) assert(topHalf == bottomHalf.reverse) } } The implementation is: def diamond(c: Char) : String = { val side: Int = squareSide(c) val topHalf = for (lc <- 'A' to c) yield lineFor(side, lc) val bottomHalf = topHalf.slice(0, topHalf.length-1).reverse (topHalf ++ bottomHalf).mkString("\n") } But this fails the “squareness” and “size of square” tests! My properties are now inconsistent. The test suite has detected the erroneous implementation of the squareSide function. The correct implementation of squareSide is: def squareSide(c: Char) : Int = 2*(c - 'A') + 1 With this change, the implementation passes all of the tests. The Position Of The Letter In Each Line Now I add a property that specifies the position and value of the letter in each line, and that all other characters in a line are spaces. Like the previous test, I can rely on symmetry in the output to simplify the arithmetic. This time, because the diamond has horizontal symmetry, I only need specify the position of the letter in the first half of the line. I add a specification for horizontal symmetry, and factor out generic functions to return the first and second half of strings and sequences. 
"is vertically symmetrical" in { forAll (inputChar) { c => val lines = diamondLines(c) assert(firstHalfOf(lines) == secondHalfOf(lines).reverse) } } "is horizontally symmetrical" in { forAll (inputChar) { c => for ((line, index) <- diamondLines(c).zipWithIndex) { assert(firstHalfOf(line) == secondHalfOf(line).reverse, "line " + index + " should be symmetrical") } } } "position of letter in line of spaces" in { forAll (inputChar) { c => for ((line, lineIndex) <- firstHalfOf(diamondLines(c)).zipWithIndex) { val firstHalf = firstHalfOf(line) val expectedLetter = ('A'+lineIndex).toChar val letterIndex = firstHalf.length - (lineIndex + 1) assert (firstHalf(letterIndex) == expectedLetter, firstHalf) assert (firstHalf.count(_==' ') == firstHalf.length-1, "number of spaces in line " + lineIndex + ": " + line) } } } def firstHalfOf[AS, A, That](v: AS)(implicit asSeq: AS => Seq[A], cbf: CanBuildFrom[AS, A, That]) = { v.slice(0, (v.length+1)/2) } def secondHalfOf[AS, A, That](v: AS)(implicit asSeq: AS => Seq[A], cbf: CanBuildFrom[AS, A, That]) = { v.slice(v.length/2, v.length) } The implementation is: object Diamond { def diamond(c: Char) : String = { val side: Int = squareSide(c) val topHalf = for (letter <- 'A' to c) yield lineFor(side, letter) (topHalf ++ topHalf.reverse.tail).mkString("\n") } def lineFor(length: Int, letter: Char): String = { val halfLength = length/2 val letterIndex = halfLength - ord(letter) val halfLine = " "*letterIndex + letter + " "*(halfLength-letterIndex) halfLine ++ halfLine.reverse.tail } def squareSide(c: Char) : Int = 2*ord(c) + 1 def ord(c: Char): Int = c - 'A' } It turns out the ord function, which I inlined into squareSide a while ago, is needed after all. The implementation is now complete. Running the DiamondApp application prints out diamonds. But there’s plenty of scope for refactoring both the production and test code. Refactoring: Delete the “Single Letter Per Line” Property The “position of letter in line of spaces” property makes the “single letter per line” property superflous, so I delete “single letter per line”. Refactoring: Simplify the Diamond Implementation I rename some parameters and simplify the implementation of the diamond function. object Diamond { def diamond(maxLetter: Char) : String = { val topHalf = for (letter <- 'A' to maxLetter) yield lineFor(maxLetter, letter) (topHalf ++ topHalf.reverse.tail).mkString("\n") } def lineFor(maxLetter: Char, letter: Char): String = { val halfLength = ord(maxLetter) val letterIndex = halfLength - ord(letter) val halfLine = " "*letterIndex + letter + " "*(halfLength-letterIndex) halfLine ++ halfLine.reverse.tail } def squareSide(c: Char) : Int = 2*ord(c) + 1 def ord(c: Char): Int = c - 'A' } The implementation no longer uses the squareSide function. It’s only used by the “size of square” property. Refactoring: Inline the squareSide function I inline the squareSide function into the test. "size of square" in { forAll (inputChar) { c => assert(diamondLines(c).length == 2*ord(c) + 1) } } I believe the erroneous calculation would have been easier to notice if I had done this from the start. Refactoring: Common Implementation of Symmetry There’s one last bit of duplication in the implementation. The expressions that create the horizontal and vertical symmetry of the diamond can be replaced with calls to a generic function. 
Wiring that into the implementation, I’ll leave as an exercise for the reader…

Complete Tests and Implementation

Tests:

import Diamond.ord
import org.scalacheck._
import scala.collection.generic.CanBuildFrom

class DiamondSpec extends UnitSpec {
  val inputChar = Gen.alphaUpperChar

  "squareness" in {
    forAll (inputChar) { c =>
      val lines = diamondLines(c)
      assert(lines forall {line => line.length == lines.length})
    }
  }

  "size of square" in {
    forAll (inputChar) { c =>
      assert(diamondLines(c).length == 2*ord(c) + 1)
    }
  }

  "is vertically symmetrical" in {
    forAll (inputChar) { c =>
      val lines = diamondLines(c)
      assert(firstHalfOf(lines) == secondHalfOf(lines).reverse)
    }
  }

  "is horizontally symmetrical" in {
    forAll (inputChar) { c =>
      for ((line, index) <- diamondLines(c).zipWithIndex) {
        assert(firstHalfOf(line) == secondHalfOf(line).reverse,
               "line " + index + " should be symmetrical")
      }
    }
  }

  "position of letter in line of spaces" in {
    forAll (inputChar) { c =>
      for ((line, lineIndex) <- firstHalfOf(diamondLines(c)).zipWithIndex) {
        val firstHalf = firstHalfOf(line)
        val expectedLetter = ('A'+lineIndex).toChar
        val letterIndex = firstHalf.length - (lineIndex + 1)

        assert(firstHalf(letterIndex) == expectedLetter, firstHalf)
        assert(firstHalf.count(_==' ') == firstHalf.length-1,
               "number of spaces in line " + lineIndex + ": " + line)
      }
    }
  }

  def firstHalfOf[AS, A, That](v: AS)(implicit asSeq: AS => Seq[A], cbf: CanBuildFrom[AS, A, That]) = {
    v.slice(0, (v.length+1)/2)
  }

  def secondHalfOf[AS, A, That](v: AS)(implicit asSeq: AS => Seq[A], cbf: CanBuildFrom[AS, A, That]) = {
    v.slice(v.length/2, v.length)
  }

  def diamondLines(c: Char) = {
    Diamond.diamond(c).lines.toVector
  }
}

Implementation:

object Diamond {
  def diamond(maxLetter: Char) : String = {
    val topHalf = for (letter <- 'A' to maxLetter) yield lineFor(maxLetter, letter)
    (topHalf ++ topHalf.reverse.tail).mkString("\n")
  }

  def lineFor(maxLetter: Char, letter: Char): String = {
    val halfLength = ord(maxLetter)
    val letterIndex = halfLength - ord(letter)
    val halfLine = " "*letterIndex + letter + " "*(halfLength-letterIndex)
    halfLine ++ halfLine.reverse.tail
  }

  def ord(c: Char): Int = c - 'A'
}

Conclusions

In his article, “Thinking Before Programming”, Alistair Cockburn writes:

    The advantage of the Dijkstra-Gries approach is the simplicity of the solutions produced. The advantage of TDD is modern fine-grained incremental development. … Can we combine the two?

I think property-based tests in the TDD process combined the two quite successfully in this exercise. I could record my half-formed thoughts about the problem and solution as generators and properties, while using “modern fine-grained incremental development” to tighten up the properties and grow the code that met them.

In Seb’s original article, he writes that when working from examples…

    it’s easy enough to get [the tests for ‘A’ and ‘B’] to pass by hardcoding the result. Then we move on to the letter ‘C’. The code is now screaming for us to refactor it, but to keep all the tests passing most people try to solve the entire problem at once. That’s hard, because we’ll need to cope with multiple lines, varying indentation, and repeated characters with a varying number of spaces between them.

I didn’t encounter this problem when driving the implementation with properties. Adding a new property always required an incremental improvement to the implementation to get the tests passing again. Neither did I need to write throw-away tests for behaviour that was not actually desired of the final implementation, as Seb did with his “test recycling” approach.
Every property I added applied to the complete solution. I only deleted properties that were implied by properties I added later, and so had become unnecessary duplication.

I took the approach of starting from very generic properties and incrementally adding more specific ones as I refined the implementation. Generic properties were easy to come up with and helped me make progress on the problem. The suite of properties reinforced one another, testing the tests, and detected the mistake I made in one property that caused it to be inconsistent with the rest.

I didn’t know Scala, ScalaTest or ScalaCheck well. Now that I’ve learned them better, I wish I had written a minimisation strategy for the input character; this would have made test failure messages easier to understand.

I also didn’t address what the diamond function would do with input outside the range of ‘A’ to ‘Z’. Scala doesn’t let one define a subtype of Char, so I can’t enforce the input constraint in the type system. I guess the Scala way would be to define diamond as a PartialFunction[Char,String] (a sketch of that idea follows the list of solutions below).

Further Thoughts

Thoughts on duplication and tests as documentation
Thoughts on property-based tests and iterative/incremental development

Other Solutions

Mark Seemann has approached the Diamond Kata with property-based tests, using F# and FsCheck.

Solutions to the Diamond Kata using example-based tests include:

Seb Rose: Recycling Tests in TDD
Alistair Cockburn: Thinking Before Programming
Seb Rose: Diamond recycling (and painting yourself into a corner)
Ron Jeffries: a detailed walkthrough of his solution
George Dinwiddie: Another Approach to the Diamond Kata
Ivan Sanchez: A walkthrough of his Clojure solution
Jon Jagger: print “squashed-circle” diamond
Sandro Mancuso: A Java solution on GitHub
Krzysztof Jelski: A Python solution on GitHub
Philip Schwarz: A Clojure solution on GitHub
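As promised, a sketch of the PartialFunction idea. It is purely illustrative and not part of the implementation above; the diamondFor name is hypothetical.

// Hypothetical: make the 'A'..'Z' constraint explicit by leaving the
// function undefined outside that range.
val diamondFor: PartialFunction[Char, String] = {
  case c if c >= 'A' && c <= 'Z' => Diamond.diamond(c)
}

// Callers can then check definedness, or lift to an Option:
//   diamondFor.isDefinedAt('q')  // false
//   diamondFor.lift('E')         // Some(diamond text for 'E')
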
Categories: Programming, Testing & QA

GTAC 2014 Coming to Seattle/Kirkland in October

Google Testing Blog - Thu, 02/19/2015 - 14:22
Posted by Anthony Vallone on behalf of the GTAC Committee

If you're looking for a place to discuss the latest innovations in test automation, then charge your tablets and pack your gumboots - the eighth GTAC (Google Test Automation Conference) will be held on October 28-29, 2014 at Google Kirkland! The Kirkland office is part of the Seattle/Kirkland campus in beautiful Washington state. This campus forms our third largest engineering office in the USA.



GTAC is a periodic conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.

You can browse the presentation abstracts, slides, and videos from last year on the GTAC 2013 page.

Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!

Categories: Testing & QA

GTAC 2014: Call for Proposals & Attendance

Google Testing Blog - Thu, 02/19/2015 - 14:21
Posted by Anthony Vallone on behalf of the GTAC Committee

The application process is now open for presentation proposals and attendance for GTAC (Google Test Automation Conference) (see initial announcement) to be held at the Google Kirkland office (near Seattle, WA) on October 28-29, 2014.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend, you’ll be able to watch the conference from your computer.

Speakers
Presentations are targeted at student, academic, and experienced engineers working on test automation. Full presentations and lightning talks are 45 minutes and 15 minutes respectively. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 300 applicants for the event.

Deadline
The due date for both presentation and attendance applications is July 28, 2014.

Fees
There are no registration fees, and we will send out detailed registration instructions to each invited applicant. Meals will be provided, but speakers and attendees must arrange and pay for their own travel and accommodations.

Update: Our contact email was bouncing - this is now fixed.



Categories: Testing & QA

The Deadline to Sign up for GTAC 2014 is Jul 28

Google Testing Blog - Thu, 02/19/2015 - 14:21
Posted by Anthony Vallone on behalf of the GTAC Committee

The deadline to sign up for GTAC 2014 is next Monday, July 28th, 2014. There has been a great deal of interest in both attending and speaking, and we’ve received many outstanding proposals. However, it’s not too late to add yours for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to our site over the next several weeks, and you can find conference details there:
  developers.google.com/gtac

For those that have already signed up to attend or speak, we will contact you directly in mid August.

Categories: Testing & QA

Announcing the GTAC 2014 Agenda

Google Testing Blog - Thu, 02/19/2015 - 14:20
by Anthony Vallone on behalf of the GTAC Committee

We have completed selection and confirmation of all speakers and attendees for GTAC 2014. You can find the detailed agenda at:
  developers.google.com/gtac/2014/schedule

Thank you to all who submitted proposals! It was very hard to make selections from so many fantastic submissions.

There was a tremendous amount of interest in GTAC this year with over 1,500 applicants (up from 533 last year) and 194 of those for speaking (up from 88 last year). Unfortunately, our venue only seats 250. However, don’t despair if you did not receive an invitation. Just like last year, anyone can join us via YouTube live streaming. We’ll also be setting up Google Moderator, so remote attendees can get involved in Q&A after each talk. Information about live streaming, Moderator, and other details will be posted on the GTAC site soon and announced here.

Categories: Testing & QA

GTAC 2014 is this Week!

Google Testing Blog - Thu, 02/19/2015 - 14:20
by Anthony Vallone on behalf of the GTAC Committee

The eighth GTAC commences on Tuesday at the Google Kirkland office. You can find the latest details on the conference at our site, including speaker profiles.

If you are watching remotely, we'll soon be updating the live stream page with the stream link and a Google Moderator link for remote Q&A.

If you have been selected to attend or speak, be sure to note the updated parking information. Google visitors will use off-site parking and shuttles.

We look forward to connecting with the greater testing community and sharing new advances and ideas.

Categories: Testing & QA

GTAC 2014 Wrap-up

Google Testing Blog - Thu, 02/19/2015 - 14:19
by Anthony Vallone on behalf of the GTAC Committee

On October 28th and 29th, GTAC 2014, the eighth GTAC (Google Test Automation Conference), was held at the beautiful Google Kirkland office. The conference was completely packed with presenters and attendees from all over the world (Argentina, Australia, Canada, China, many European countries, India, Israel, Korea, New Zealand, Puerto Rico, Russia, Taiwan, and many US states), bringing with them a huge diversity of experiences.


Speakers from numerous companies and universities (Adobe, American Express, Comcast, Dropbox, Facebook, FINRA, Google, HP, Medidata Solutions, Mozilla, Netflix, Orange, and University of Waterloo) spoke on a variety of interesting and cutting edge test automation topics.

All of the slides and video recordings are now available on the GTAC site. Photos will be available soon as well.


This was our most popular GTAC to date, with over 1,500 applicants and almost 200 of those for speaking. About 250 people filled our venue to capacity, and the live stream had a peak of about 400 concurrent viewers with 4,700 playbacks during the event. And there was plenty of interesting Twitter and Google+ activity throughout.


Our goal in hosting GTAC is to make the conference highly relevant and useful not only for attendees, but for the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal.



If you have any suggestions on how we can improve, please comment on this post.

Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, subscribe to the Google Testing Blog.

Categories: Testing & QA