
Software Development Blogs: Programming, Software Testing, Agile Project Management

Methods & Tools


Testing & QA

Testing on the Toilet: Effective Testing

Google Testing Blog - Wed, 05/07/2014 - 18:10
by Rich Martin, Zurich

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.


Whether we are writing an individual unit test or designing a product’s entire testing process, it is important to take a step back and think about how effective our tests are at detecting and reporting bugs in our code. To be effective, there are three important qualities that every test should try to maximize:

Fidelity

When the code under test is broken, the test fails. A high-fidelity test is one which is very sensitive to defects in the code under test, helping to prevent bugs from creeping into the code.

Maximize fidelity by ensuring that your tests cover all the paths through your code and include all relevant assertions on the expected state.
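
To make that concrete, here is a minimal JUnit sketch. The Account class and its rules are invented for illustration; the point is that both the success path and the failure path are exercised, and each test asserts on the resulting state, not just the return value:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AccountTest {
  // Hypothetical class under test: an account that refuses overdrafts.
  static class Account {
    private int balanceCents;
    Account(int balanceCents) { this.balanceCents = balanceCents; }
    boolean withdraw(int amountCents) {
      if (amountCents > balanceCents) return false; // refuse overdrafts
      balanceCents -= amountCents;
      return true;
    }
    int balanceCents() { return balanceCents; }
  }

  @Test public void withdraw_debitsBalance_whenFundsSuffice() {
    Account account = new Account(500);
    assertTrue(account.withdraw(200));
    assertEquals(300, account.balanceCents()); // assert the resulting state too
  }

  @Test public void withdraw_leavesBalanceUnchanged_whenFundsInsufficient() {
    Account account = new Account(100);
    assertFalse(account.withdraw(200));
    assertEquals(100, account.balanceCents()); // the failure path is also covered
  }
}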

Resilience

A test shouldn’t fail if the code under test isn’t defective. A resilient test is one that only fails when a breaking change is made to the code under test. Refactorings and other non-breaking changes to the code under test can be made without needing to modify the test, reducing the cost of maintaining the tests.

Maximize resilience by only testing the exposed API of the code under test; avoid reaching into internals. Favor stubs and fakes over mocks; don't verify interactions with dependencies unless it is that interaction that you are explicitly validating. A flaky test obviously has very low resilience.
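
As one illustration of the stub/fake advice, here is a sketch (all names invented for this example) where the test hands the code under test a small fake and asserts on the observable outcome, rather than verifying a scripted sequence of mock interactions:

import static org.junit.Assert.assertTrue;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

public class LowBalanceNotifierTest {
  interface MailSender {
    void send(String recipient, String subject);
  }

  // A fake: a tiny working implementation the test can inspect afterwards.
  static class FakeMailSender implements MailSender {
    final List<String> subjectsSent = new ArrayList<>();
    @Override public void send(String recipient, String subject) {
      subjectsSent.add(subject);
    }
  }

  // Hypothetical class under test.
  static class LowBalanceNotifier {
    private static final int THRESHOLD_CENTS = 100;
    private final MailSender mailSender;
    LowBalanceNotifier(MailSender mailSender) { this.mailSender = mailSender; }
    void balanceChanged(String recipient, int newBalanceCents) {
      if (newBalanceCents < THRESHOLD_CENTS) {
        mailSender.send(recipient, "Your balance is low");
      }
    }
  }

  @Test public void sendsEmail_whenBalanceDropsBelowThreshold() {
    FakeMailSender mailSender = new FakeMailSender();
    new LowBalanceNotifier(mailSender).balanceChanged("user@example.com", 50);
    // Assert on the outcome (an email went out), not on the exact calls the
    // implementation happened to make; a refactoring that sends the same
    // email a different way still passes.
    assertTrue(mailSender.subjectsSent.contains("Your balance is low"));
  }
}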

Precision

When a test fails, a high-precision test tells you exactly where the defect lies. A well-written unit test can tell you exactly which line of code is at fault. Poorly written tests (especially large end-to-end tests) often exhibit very low precision, telling you that something is broken but not where.

Maximize precision by keeping your tests small and tightly focused. Choose descriptive method names that convey exactly what the test is validating. For system integration tests, validate state at every boundary.

These three qualities are often in tension with each other. It's easy to write a highly resilient test (the empty test, for example), but writing a test that is both highly resilient and high-fidelity is hard. As you design and write tests, use these qualities as a framework to guide your implementation.

Categories: Testing & QA

Does Experience Help in User Experience?

From the Editor of Methods & Tools - Mon, 05/05/2014 - 15:48
Some of the older readers might remember software development in the 20th century, when end-user interaction with the computer happened through 80×24 character terminal screens and the definition of user experience was making sure that your F(unction) keys were standardized consistently across screens. Does anyone still press F1 for help today? At least the “Fx” keys are still on the keyboard. Those were the days when one of the last things you wanted was somebody putting an (obviously greasy) finger on the screen ;O) Today’s user interface and user experience ...

Software Development Conferences Forecast April 2014

From the Editor of Methods & Tools - Mon, 04/28/2014 - 11:11
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine: STAREAST, May 4–9 2014, Orlando, USA GOTO Amsterdam 2014, June 18-20, 2014, Amsterdam, The Netherlands The Reliable Software Developers Conferences, May 20 – June 5, UK The Android Developer Conference, May 27-30, Boston, USA Better Software & Agile Development ...

Dashboards are important!

Agile Testing - Grig Gheorghiu - Fri, 04/25/2014 - 22:21
In this case, they were a factor in my having a discussion with Eric Garcetti, the mayor of Los Angeles, who was visiting our office. He was intrigued by the Graphite dashboards we have on 8 monitors around the Ops area and I explained to him a little bit of what's going on in terms of what we're graphing. I'll let you guess who is the mayor in this photo:


Slides from my remote presentation on "Modern Web development and operations practices" at MSU

Agile Testing - Grig Gheorghiu - Fri, 04/25/2014 - 22:06
Titus Brown was kind enough to invite me to present to his students in the CSE 491 "Web development" class at MSU. I presented remotely, via Google Hangouts, on "Modern Web development and operations practices" and it was a lot of fun. Perhaps unsurprisingly, most of the questions at the end were on how to get a job in this field and be able to play with some of these cool technologies. My answer was to become active in Open Source, beef up your portfolio on GitHub, go to conferences and network (this was actually Titus's idea, but I wholeheartedly agree), and in general be curious and passionate about your field, and success will follow. I posted my slides on Slideshare if you are curious to take a look. Thanks to Dr. Brown for inviting me! :-)

Software Development Linkopedia April 2014

From the Editor of Methods & Tools - Tue, 04/22/2014 - 15:58
Here is our monthly selection of interesting knowledge material on programming, software testing and project management. This month you will find some interesting information and opinions about professional software testing, managing agile requirements with features or job stories instead of user stories, integrating quality assurance and Scrum, security architecture, load testing, database normalization and UX design. Blog: Heuristics for recognizing professional testers Blog: Feature Boards Blog: Testing at Airbnb Blog: Replacing The User Story With The Job Story Article: Four Tips for Integrating Quality Assurance with Scrum Article: Security Architecture Approaches Article: Preparing for Load Testing Best ...

Testing on the Toilet: Test Behaviors, Not Methods

Google Testing Blog - Mon, 04/14/2014 - 22:25
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

After writing a method, it's easy to write just one test that verifies everything the method does. But it can be harmful to think that tests and public methods should have a 1:1 relationship. What we really want to test are behaviors, where a single method can exhibit many behaviors, and a single behavior sometimes spans across multiple methods.

Let's take a look at a bad test that verifies an entire method:

@Test public void testProcessTransaction() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(
      user,
      new Transaction("Pile of Beanie Babies", dollars(3)));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Displaying the name of the purchased item and sending an email about the balance being low are two separate behaviors, but this test looks at both of those behaviors together just because they happen to be triggered by the same method. Tests like this very often become massive and difficult to maintain over time as additional behaviors keep getting added in—eventually it will be very hard to tell which parts of the input are responsible for which assertions. The fact that the test's name is a direct mirror of the method's name is a bad sign.

It's a much better idea to use separate tests to verify separate behaviors:

@Test public void testProcessTransaction_displaysNotification() {
  transactionProcessor.processTransaction(
      new User(), new Transaction("Pile of Beanie Babies"));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
}

@Test public void testProcessTransaction_sendsEmailWhenBalanceIsLow() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(
      user,
      new Transaction(dollars(3)));
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Now, when someone adds a new behavior, they will write a new test for that behavior. Each test will remain focused and easy to understand, no matter how many behaviors are added. This will make your tests more resilient since adding new behaviors is unlikely to break the existing tests, and clearer since each test contains code to exercise only one behavior.

Categories: Testing & QA

Variable Testers

James Bach’s Blog - Sun, 04/13/2014 - 22:55

I once heard a vice president of software engineering tell his people that they needed to formalize their work. That day, I was an unpaid consultant in the building to give a free seminar, so I had even less restraint than normal about arguing with the guy. I raised my hand, “I don’t think you can mean that, sir. Formality is about sameness. Are you really concerned that your people are working in different ways? It seems to me that what you ought to be concerned about is effectiveness. In other words, get the job done. If the work is done a different way every time, but each time done well, would you really have a problem with that? For that matter, do you actually know how your folks work?”

This was years ago. I’m wracking my brain, but I can’t remember specifically how the executive responded. All I remember is that he didn’t reply with anything very specific and did not seem pleased to be corrected by some stranger who came to give a talk.

Oh well, it had to be done.

I have occasionally heard the concern from managers that testers are variable in their work; that some testers are better than others; and that this variability is a problem. But variability is not a problem in and of itself. When you drive a car, there are different cars on the road each day, and you have to make different patterns of turning the wheel and pushing the brake. So what?

The weird thing is how utterly obvious this is. Think about managers, designers, programmers, product owners… think about ANYONE in engineering. We are all variable. Complaining about testers being variable – as if that were a special case – seems bizarre to me… unless…

I suppose there are two things that come to mind which might explain it:

1) Maybe they mean “testers vary between satisfying me and not satisfying me, unlike other people, who always satisfy me.” To examine this we would discover what their expectations are. Maybe they are reasonable or maybe they are not. Maybe a better system for training and leading testers is needed.

2) Maybe they mean “testing is a strictly formal process that by its nature should not vary.” This is a typical belief held by people who know nothing about testing. What they need is to have testing explained or demonstrated to them by someone who knows what he’s doing.

Categories: Testing & QA

Why does it work in staging but not in production?

Agile Testing - Grig Gheorghiu - Mon, 04/07/2014 - 23:07
This is a question that I am sure every developer and operations engineer out there has faced. There can be multiple answers to it, and I'll try to offer some of the ones we arrived at. They have to do mainly with our Chef workflow, but I think they can be applied to any other configuration management tool.

1) A Chef cookbook version in staging is different from the version in production

This is a common scenario, and it's supposed to work this way. You do want to test out new versions of your cookbooks in staging first, then update the version of the cookbook in production.

2) A feature flag is turned on in staging but turned off in production

We have Chef attributes defined in attributes/default.rb that serve as feature flags. If a certain attribute is true, some recipe code or template section gets included which wouldn't be included if the attribute were false. The situation can occur where a certain attribute is set to true in the staging environment but is set to false in the production environment, at which point things can get out of sync. Again, this is expected, as you do want to test new features out in staging first, but don't forget to turn them on in production at some point.

3) A block of code or template is included in staging but not in production

We had this situation very recently. Instead of using attributes as feature flags, we were directly comparing the environment against 'stg' or 'prod' inside an if block in a template, and only including that template section if the environment was 'stg'. So things were working perfectly in staging, but mysteriously the template section wasn't even there in production. An added difficulty was that the template in question was peppered with non-indented if blocks, so it took us a while to figure out what was going on.

Two lessons here:

a) Make your templates readable by indenting code blocks.

b) Use attributes as feature flags, and don't compare directly against the current environment. This way, it's easier to always look at the default attribute file and see if a given feature flag is true or false.

4) A modification is made to the cookbook version in production directly on the Chef server

I blogged about this issue in the past. Suppose you have an environments file that pins a given cookbook (let's designate it as cookbook C) to 1.0.1 in staging and to 1.0.0 in production. You want to upgrade production to 1.0.1, because it was tested in staging and it worked fine. However, instead of i) modifying the environments/prod.rb file and pinning the cookbook C to 1.0.1, ii) updating the Chef server via "knife environment from file environments/prod.rb" and iii) committing your changes in git, you modify the version of the cookbook C directly on the Chef server with "knife environment edit prod".

Then, the next time you or somebody else modifies environments/prod.rb to bump up another cookbook to the next version, the version of cookbook C in that file is still 1.0.0, so when you upload environments/prod.rb to the Chef server, it will downgrade cookbook C from 1.0.1 to 1.0.0. Chaos will ensue the next time chef-client runs on the nodes that have recipes from cookbook C. Production will be broken, while staging will still happily work.

Here are two other scenarios, not related directly to staging vs. production, but with the potential to break production altogether.

You forget to upload the new version of the cookbook to the Chef server

You make all of your modifications to the cookbook, you commit your code to git, but for some reason you forget to upload the cookbook to the Chef server. Particularly if you keep the same version of the cookbook that is in staging (and possibly in production), your modifications won't take effect and you may spend some quality time pulling your hair out.

You upload a cookbook to the Chef server without bumping its version

There is another, even worse, scenario though: you do upload your cookbook to the Chef server, but you realize that you didn't bump up the version number compared to what is currently pinned to production. As a consequence, all the nodes in production that have recipes from that cookbook will be updated the next time they run chef-client. That's a nasty one. It does happen. So make sure you pay attention to your cookbook versioning process and stick to it!

Software Development Mantra

From the Editor of Methods & Tools - Mon, 04/07/2014 - 14:51
We believe developers should have a particular attitude when writing code. There are actually several we’ve come up with over time – all somewhat consistent with each other but saying things in different ways. The following are the ones we’ve held to date: Avoid over- and under-design. Minimize complexity and rework. Never make your code worse (the Hippocratic Oath of Coding). Only degrade your code intentionally. Keep your code easy to change, robust, and safe to change. Source: “Essential Skills for the Agile Developer – A Guide to Better Programming and Design”, Alan Shalloway, Scott ...

Being a Great Software Development Manager

From the Editor of Methods & Tools - Thu, 03/27/2014 - 15:33
Any manager, when goals are not being met, identifies the impediments and whenever possible removes them. A good manager goes further by identifying and removing potential impediments before they lead to the goals not being met. A great manager makes this seem easy. Tony Bowden, Socialtext Source: Managing the Unmanageable, Mickey W. Mantle & Ron Lichty, Addison-Wesley

Are integration tests worth the hassle?

Actively Lazy - Tue, 03/25/2014 - 22:19

Whether or not you write integration tests can be a religious argument: either you believe in them or you don’t. What we even mean by integration tests can lead to an endless semantic argument.

What do you mean?

Unit tests are easy to define: they test a single unit (a single class, a single method) and make a single assertion about the behaviour of that method. You probably need mocks (again, depending on your religious views on mocking).

Integration tests, as far as I’m concerned, mean they test a deployed (or at least deployable) version of your code, outside in, as close to what your “user” will do as possible. If you’re building a website, use Selenium WebDriver. If you’re writing a web service, write a test client and make requests to a running instance of your service. Get as far outside your code as you reasonably can to mimic what your user will do, and do that. Test that your code, when integrated, actually works.
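
As a sketch of what that can look like for a website, here is a minimal WebDriver test in Java. The URL and the element locators are placeholders invented for this example, not a real application; the shape of the test is the point – drive a real browser against a running deployment and assert on what the user would see:

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginJourneyTest {
  private WebDriver driver;

  @Before public void startBrowser() {
    driver = new ChromeDriver(); // requires a local chromedriver binary
  }

  @Test public void userCanLogInFromTheLoginPage() {
    // Placeholder URL and element names for your deployed application.
    driver.get("https://staging.example.com/login");
    driver.findElement(By.name("username")).sendKeys("demo-user");
    driver.findElement(By.name("password")).sendKeys("demo-password");
    driver.findElement(By.id("login-button")).click();
    // Assert on what the user would actually see after logging in.
    assertTrue(driver.getPageSource().contains("Welcome, demo-user"));
  }

  @After public void stopBrowser() {
    driver.quit();
  }
}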

In between these two extremes exist varying degrees of mess, which some people call integration testing. E.g. testing a web service by instantiating your request handler class and passing a request to it programmatically, letting it run through to the database. This is definitely not unit testing, as it’s hitting the database. But it’s not a complete integration test either, as it misses a layer: if HTTP requests to your service never get routed to your handler, how would you know?

What’s the problem then?

Integration tests are slow. By definition, you’re interacting with a running application which you have to spin up, setup, interact with, tear down and clean up afterwards. You’re never going to get the speed you do with unit tests. I’ve just started playing with NCrunch, a background test runner for Visual Studio – which is great, but you can’t get it running your slow, expensive integration tests all the time. If your unit tests take 30 seconds to run, I’ll bet you run them before every checkin. If your integration tests take 20 minutes to run, I bet you don’t run them.

You can end up duplicating lower level tests. If you’re following a typical two level approach of writing a failing integration test, then writing unit tests that fail then pass until eventually your integration test passes – there is an inevitable overlap between the integration test and what the unit tests cover. This is expected and by design, but can seem like repetition. When your functionality changes, you’ll have at least two tests to change.

They aren’t always easy to write. If you have a specific case to test, you’ll need to setup the environment exactly right. If your application interacts with other services / systems you’ll have to stub them so you can provide canned data. This may be non-trivial. The biggest cost, in most environments I’ve worked in, with setting up good integration tests is doing all the necessary evil of setting up test infrastructure: faking out web services, third parties, messaging systems, databases blah blah. It all takes time and maintenance and slows down your development process.

Finally, integration tests can end up covering uninteresting parts of the application repeatedly, meaning some changes are spectacularly expensive in terms of updating the tests. For example, if your application has a central menu system and you change it, how many test cases need to change? If your website has a login form and you massively change the process, how many test cases require a logged-in user?

Using patterns like the page object pattern you can code your tests to minimize this, but it’s not always easy to avoid this class of failure entirely. I’ve worked in too many companies where, even with the best of intentions, the integration tests end up locking in a certain way of working that you either stick with or declare bankruptcy and just delete the failing tests.
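
For readers unfamiliar with the pattern, here is a minimal page-object sketch in Java (again with invented locators): the knowledge of how the login page is built lives in one class, so a redesigned login form means changing one class rather than every test that needs a logged-in user.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page objects: the only classes that know how each page is structured.
// If the login form changes, LoginPage changes; the test cases do not.
class LoginPage {
  private final WebDriver driver;
  LoginPage(WebDriver driver) { this.driver = driver; }

  LoginPage open(String baseUrl) {
    driver.get(baseUrl + "/login"); // path is a placeholder
    return this;
  }

  HomePage logInAs(String username, String password) {
    driver.findElement(By.name("username")).sendKeys(username);
    driver.findElement(By.name("password")).sendKeys(password);
    driver.findElement(By.id("login-button")).click();
    return new HomePage(driver); // hand the test the next page in the journey
  }
}

class HomePage {
  private final WebDriver driver;
  HomePage(WebDriver driver) { this.driver = driver; }

  boolean greets(String username) {
    return driver.findElement(By.id("greeting")).getText().contains(username);
  }
}

A test then reads as a user journey – new LoginPage(driver).open(baseUrl).logInAs("demo", "secret").greets("demo") – and survives cosmetic changes to the pages it walks through.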

What are the advantages then?

Integration tests give you confidence that your application actually works from your user’s perspective. I’d never recommend covering every possible edge case with integration tests – but a happy-path test for a piece of functionality and a failure-case gives you good confidence that the most basic aspects of any given feature work. The complex edge cases you can unit test, but an overall integration test helps you ensure that the feature is basically integrated and you haven’t missed something obvious that unit tests wouldn’t cover.

Your integration tests can be pretty close to acceptance tests. If you’re using a BDD-type approach, you should end up with quite readable test definitions that sufficiently technical users could understand. This helps you validate that the basic functionality is what the user expects, not just that it works the way you expected.

What goes wrong?

The trouble is if integration tests are hard to write you won’t write them. You’ll find another piece of test infrastructure you need to invest in, decide it isn’t worth it this time and skip it. If your approach relies on integration tests to get decent coverage of parts of your application – especially true for the UI layer – then skipping them means you can end up with a lot less coverage than you’d like.

Some time ago I was working on a WPF desktop application – I wanted to write integration tests for it. The different libraries for testing WPF applications are basically all crap. Each one of them failed to meet my needs in some annoying, critical way. What I wanted was WebDriver for WPF. So I started writing one. The trouble is, the vagaries of the Windows UI eventing system mean this is hard. After a lot of time spent investing in test infrastructure instead of writing integration tests, I still had a barely usable testing framework that left all sorts of common cases untestable.

Because I couldn’t write integration tests and unit testing WPF UI code can be hard, I’d only unit test the most core internal functionality – this left vast sections of the WPF UI layer untested. Eventually, it became clear this wasn’t acceptable and we returned to the old-school approach of writing unit tests (and unit tests alone) to get as close to 100% coverage as is practical when some of your source code is written in XML.

This brings us back full circle: we have good unit test coverage for a feature, but no integration tests to verify that all the different units hang together correctly and work in a deployed application. But where the trade-off is between little test coverage and decent test coverage with systematic blind spots, what’s the best alternative?

Conclusion

Should you write integration tests? If you can, easily: yes! If you’re writing a web service, it’s much easier to write integration tests for than almost any other type of application. If you’re writing a relatively traditional, not too JavaScript-heavy website, WebDriver is awesome (and the only practical way to get some decent cross-browser confidence). If you’re writing very complex UI code (WPF or JavaScript) it might be very hard to write decent integration tests.

This is where your test approach blurs with architecture: as much as possible, your architecture needs to make testing easy. Subtle changes to how you structure your application might make it easier to get decent test coverage: you can design the application to make it easy to test different elements in isolation (e.g. separating the UI layer from a business logic service); you don’t get quite fully integrated tests, but you minimize the opportunity for bugs to slip through the cracks.
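
A minimal sketch of that kind of separation (all names hypothetical): the business rule lives behind a plain interface that fast unit tests can cover exhaustively, while the UI layer shrinks to a thin shell that needs only a handful of integration tests.

// Business logic behind an interface: unit-testable without any UI framework.
interface DiscountCalculator {
  int discountPercentFor(int orderTotalCents);
}

class StandardDiscountCalculator implements DiscountCalculator {
  @Override public int discountPercentFor(int orderTotalCents) {
    return orderTotalCents >= 100_00 ? 10 : 0; // hypothetical rule: 10% over $100
  }
}

// The UI layer stays a thin shell around the service, so the logic that
// matters is covered by fast unit tests, and only the shell needs the
// slower, fully integrated tests.
class CheckoutScreen {
  private final DiscountCalculator calculator;
  CheckoutScreen(DiscountCalculator calculator) { this.calculator = calculator; }

  String discountLabel(int orderTotalCents) {
    return "Discount: " + calculator.discountPercentFor(orderTotalCents) + "%";
  }
}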

Whether or not you write integration tests is fundamentally a question of what tests your architectural choices require you to write to get confidence in your code.


Categories: Programming, Testing & QA

Software Development Conferences Forecast March 2014

From the Editor of Methods & Tools - Tue, 03/25/2014 - 14:04
Here is a list of software development related conferences and events on Agile (Scrum, Lean, Kanban), software testing, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & Tools software development magazine: Big Data TechCon, March 31-April 2, Boston, USA Software Testing Analysis & Review Canada Conference, April 5-9 2014, Toronto, Canada Agile Adria Conference, April 7-8 2014, Tuheljske Toplice, Croatia Optional Conference, April 8-9 2014, Budapest, Hungary GOTO Amsterdam 2014, ...

Done & Retrospectives in Scrum, Software Architecture in Methods & Tools Spring 2014 issue

From the Editor of Methods & Tools - Mon, 03/24/2014 - 16:05
Methods & Tools – the free e-magazine for software developers, testers and project managers – has just published its Spring 2014 issue that discusses how to be done in Scrum; why, when and how to perform successful retrospectives; why it is better to chop onions instead of layers for an improved software architecture; and how to use Ranorex to test your desktop, mobile or web application. Methods & Tools Spring 2014 contains the following articles: * The Principles of Done * Doing Valuable Agile Retrospectives * Chop Onions Instead of Layers in Software Architecture * Ranorex – Automated ...

NoSQL Databases Adoption Survey Results

From the Editor of Methods & Tools - Tue, 03/18/2014 - 14:20
NoSQL databases (MongoDB, Couchbase, Riak, Cassandra, etc.) are getting more and more visibility in software development discussions and conferences. The last Methods & Tools survey wanted to know if organizations were using NoSQL databases. The results are mixed. On one side, 60% of the participants answered that their organization was not using this technology at all, with 20% even unaware of its existence. On the other side, more than 25% of the organizations had NoSQL-based applications in production, which is not so bad for a relatively new technology largely ...

Software Development Linkopedia March 2014

From the Editor of Methods & Tools - Wed, 03/12/2014 - 17:06
Here is our monthly selection of interesting knowledge material on programming, software testing and project management. This month you will find some interesting information and opinions about Agile values, roles in Scrum, unit testing mistakes, managing projects, Agile testing, using the MongoDB NoSQL database, software metrics, using programmer anarchy for organizing projects and specifications by example. Web site: Agile Coach Camps Blog: Agile Is Dead (Long Live Agility) Blog: Beyond Roles in Scrum Blog: 5 Unit Testing Mistakes Blog: Are comments always wrong? Article: Cutting a Monster Project Down to a Manageable Size Article: When Scrum Uncovers Stinky ...

Affordance Open-Space at XP Day

Mistaeks I Hav Made - Nat Pryce - Tue, 12/03/2013 - 00:20
Giovanni Asproni and I facilitated an open-space session at XP Day on the topic of affordance in software design. Affordance is "a quality of an object, or an environment, which allows an individual to perform an action". Don Norman distinguishes between actual affordance and perceived affordance, the latter being "those action possibilities that are readily perceivable by an actor". I think this distinction applies as well to software programming interfaces. Unless running in a sandboxed execution environment, software has great actual affordance, as long as we're willing to do the wrong thing when necessary: break encapsulation or rely on implementation-dependent data structures. What makes programming difficult is determining how and why to work with the intentions of those who designed the software that we are building upon: the perceived affordance of those lower software layers.

Here are the notes I took during the session, cleaned up a little and categorised into things that help affordance, hinder affordance, or can do both.

Helping

  • Repeated exposure to language you understand results in fluency.
  • Short functions.
  • Common, consistent language. Domain types. Good error messages.
  • Things that behave differently should look different, e.g. the convention of a ! suffix in Scheme and Ruby for functions that mutate data.
  • Being able to explore software: able to put "probes" into/around the software to experiment with its behaviour, and knowing where to put those probes.
  • TDD, done properly, is likely to make the most important affordances visible: simpler APIs, and the ability to experiment with and probe software running in isolation.

Hindering

  • Two ways software has bad affordance: you can't work out what it does (poor perceived affordance), or it behaves unexpectedly (poor actual affordance). (Are these two viewpoints on the same thing?)
  • Null pointer exceptions detected far from their origin.
  • Classes named after design patterns incorrectly, e.g. the programmer did not understand the pattern: factories that do not create objects, visitors that are really internal iterators.
  • Inconsistencies, which can be caused by unfinished refactoring. Which code style to use? Agree in the team; spread agreement by frequent pair rotation.
  • Names that have subtly different meanings in natural language and in the code, e.g. OO design patterns. Compare with FP design patterns, which have strict definitions and meaningless names (in natural language): you have to learn what the names really mean instead of relying on (wrong) intuition.

Both Helping and Hindering

  • Conventions, e.g. Ruby on Rails. An alternative to metaphor? But: don't be magical! Convention-heavy code can be hard to test in isolation. If you break a convention, draw attention to where it has been broken.
  • Naming types after patterns. Is it better to use domain terms than pattern names? E.g. a CustomerService that mostly does stuff with addresses is better named AddressBook; then you can see it also has unrelated behaviour that belongs elsewhere. But now you need to communicate the system metaphors and domain terminology (is that ever a bad thing?).
  • Bad code style you are familiar with does offer perceived affordance.
  • Bad design might provide good actual affordance, e.g. lots of global variables makes it easy to access data, but provides poor perceived affordance: you cannot understand how state is being mutated by other parts of the system.
  • Method aliasing makes code expressive. Common in Ruby, but Python takes the opposite stance – only one way to do anything.
  • Frameworks: new jargon to learn, new conventions. They provide specific affordances, but if your context changes, you need different affordances and the framework may get in the way.
  • Documentation vs tests: the actual affordance of physical components is described in spec documents and is not obvious from appearance (e.g. different grades of building brick). Is this possible for software? Docs are more readable; tests are more trustworthy. Best of both worlds – doctests? Used in Go and Python.
  • How do we know what has changed between versions of a software component? Open source libraries usually have very good release notes; why is it so hard in enterprise development?
Categories: Programming, Testing & QA

Multimethods

Mistaeks I Hav Made - Nat Pryce - Thu, 10/10/2013 - 00:30
I have observed that... Unless the language already provides them, any sufficiently complicated program written in a dynamically typed language contains multiple, ad hoc, incompatible, informally-specified, bug-ridden, slow implementations of multimethods. (To steal a turn of phrase from Philip Greenspun).
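
For anyone who hasn't met the term: a multimethod dispatches on the runtime types of more than one argument. A hand-rolled Java version of the kind the quip describes might look like the sketch below (the types and collision rules are invented for illustration; languages with real multimethods, such as CLOS, Clojure or Julia, select the implementation for you):

public class Collisions {
  interface Body {}
  static class Asteroid implements Body {}
  static class Spaceship implements Body {}

  // An ad hoc, informally-specified multimethod: dispatch on the runtime
  // types of BOTH arguments via an instanceof chain.
  static String collide(Body a, Body b) {
    if (a instanceof Asteroid && b instanceof Asteroid) return "asteroids shatter";
    if (a instanceof Spaceship && b instanceof Spaceship) return "ships bounce";
    if (a instanceof Asteroid || b instanceof Asteroid) return "the ship is destroyed";
    throw new IllegalArgumentException("no method for " + a + " and " + b);
  }

  public static void main(String[] args) {
    System.out.println(collide(new Asteroid(), new Spaceship())); // the ship is destroyed
  }
}
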
Categories: Programming, Testing & QA

Thought after Test Automation Day 2013

Henry Ford said “Obstacles are those frightful things you see when you take your eyes off the goal.” After attending Test Automation Day last month, I have been figuring out why industrializing testing sometimes doesn’t work. I put it in this negative perspective deliberately, because I think it does work! But when is it successful? A lot of the time, Ford’s remark is indeed the problem. People tend to see obstacles: obstacles born of the thought that it isn’t feasible to change something. They need to change, but that’s not an easy change.

After attending #TAD2013 (as it was known on Twitter), I saw a huge interest in better testing, faster testing, even cheaper testing by using tools to industrialize. Test automation has long been seen as an interesting option that enables faster testing. It wasn’t always cheaper, especially the first time, but at least it was faster. As I see it, it also enables better testing. “Better?” you may ask. Test automation by itself doesn’t enable better testing, but by automating regression tests and other simple work, it lets the tester focus on other areas of quality.


And isn’t that the goal? In the end, everybody involved in a project wants to deliver a high-quality product, not one full of bugs. But they also tend to see the obstacles. I see them less and less. New tools are so advanced, and automation testers are becoming so much smarter, that they enable us to look beyond the obstacles. I would rather say: look over the obstacles.

At the Test Automation Day I learned some new things, but it also confirmed something I already knew: test automation is here to stay. We don’t need to focus on the obstacles; we should focus on the goal.

Categories: Testing & QA

The Toyota Way: The need for doing it right the first time

After WWII, Toyota started developing its Toyota Production System (TPS), which was identified as ‘Lean’ in the 1990s. Taiichi Ohno, Shigeo Shingo and Eiji Toyoda developed the system between 1948 and 1975. According to the myth surrounding the system, it was inspired not by the American automotive industry but by a visit to American supermarkets: Ohno saw the supermarket as a model for what he was trying to accomplish in the factory and for perfecting the Just-in-Time (JIT) production system. Low inventory levels were a key outcome of the TPS, and an important element of the philosophy behind the system is to work intelligently and eliminate waste so that only minimal inventory is needed.

TPS and Lean have their own principles, as outlined by Toyota:

  • Long-term Philosophy
  • Right process will produce the right results
  • Value to organization by developing people
  • Solving root problems drives organizational learning

These principles were summed up and published by Toyota in 2001 under the name “The Toyota Way 2001”. It groups the above principles into two key areas: Continuous Improvement and Respect for People.


The principles for continuous improvement include establishing a long-term vision, working on challenges, continual innovation, and going to the source of the issue or problem. The principles relating to respect for people include ways of building respect and teamwork. When looking at the ALM, all these principles come together in the ‘first time right’ approach already mentioned. From Toyota’s view, they were outlined as follows:

  • The right process will produce the right results
    • Create continuous process flow to bring problems to the surface.
    • Use the ‘pull’ system to avoid overproduction (kanban).
    • Level out the workload (heijunka).
    • Build a culture of stopping to fix problems, to get quality right the first time (jidoka).
  • Continuously solving root problems drives organizational learning
    • Go and see for yourself to thoroughly understand the situation (genchi genbutsu).
    • Make decisions slowly by consensus, thoroughly considering all options (nemawashi); implement decisions rapidly.
    • Become a learning organization through relentless reflection (hansei) and continuous improvement (kaizen).

Let’s do it right now!

As the economy changes and IT becomes more pervasive throughout our everyday life, the need for good-quality software products has never been higher. Software issues create bigger and bigger problems in our lives. Think about trains that cannot run due to software issues, bank clients who have no access to their bank accounts, and people oversleeping because the alarm app on their iPhone didn’t work. As Capers Jones [Jones, 2011] states in his 2011 study, “software is blamed for more major business problems than any other man-made product” and “poor quality has become one of the most expensive topics in human history”. The improvement of software quality is a key topic for all industries.

Right the first time vs. jidoka

In both TPS and Lean, autonomation (jidoka) is used. Autonomation can be described as ‘intelligent automation’: when an abnormal situation arises, the ‘machine’ stops and the abnormality is fixed. Autonomation prevents the production of defective products, eliminates overproduction, and focuses attention on understanding the problem and ensuring that it never recurs. It is a quality control process that applies the following four principles:

  • Detect the abnormality.
  • Stop.
  • Fix or correct the immediate condition.
  • Investigate the root cause and install a countermeasure.

Find defects as early as possible

In other words, autonomation helps to get quality perfectly right the first time. With IT projects being different from the Toyota car production line, ‘perfectly’ may be a bit too much, but the process around quality assurance should be the same:

  • Find the defect.
  • Stop.
  • Fix or correct the error.
  • Investigate the root cause and take countermeasures.

The defect should be found as early as possible so that it can be fixed as early as possible. As with Lean and TPS, the reason behind this is to make it possible to identify and correct defects immediately, within the process.

Categories: Testing & QA