Software Development Blogs: Programming, Software Testing, Agile Project Management


Testing & QA

Your DI framework is killing your code

Actively Lazy - Fri, 09/25/2015 - 06:45

I read a really interesting post recently looking at the difference between typical OO code and a more functional style. There’s a lot to be said for the functional style of coding, even in OO languages like Java and C#. The biggest downside I find is always one of code organisation: OO gives you a discoverable way of organising large amounts of code. In a functional style you might have less code, but it’s harder to organise it clearly.

It’s not Mike’s conclusion that I want to challenge, it’s his starting point: he starts out with what he describes as “typical object oriented C# code”. Unfortunately I think he’s bang on: even in this toy example, it is exactly like almost all so-called “object oriented” code I see in the wild. My issue with code like this is that it isn’t in the least bit object oriented. It’s procedural code haphazardly organised into classes. Just because you’ve got classes doesn’t make it OO.

What do I mean by procedural code? The typical pattern I see is a codebase made up of two types of classes:

  1. Value objects, holding all the data with no business logic
    Extra credit here if it’s an object from your persistence layer, NHibernate or the like.
  2. Classes with one or two public functions – they act on the value objects with no state of their own
    These are almost always “noun-verbers”

A noun-verber is the sure-fire smell of poor OO code: OrderProcessor, AccountManager, PriceCalculator. No, calling it an OrderService doesn’t make it any better. It’s still a noun-verber you’re hiding behind the meaningless word “service”. A noun-verber means it’s all function and no state; it’s acting on somebody else’s state. It’s almost certainly a sign of feature envy.
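
As a concrete illustration of the shape being described – a hedged sketch in Java with made-up names, not code from Mike’s repo – the pair typically looks something like this:

    import java.math.BigDecimal;

    // 1. The value object: all state, no behaviour.
    class Order {
        private String customerEmail;
        private BigDecimal total;

        public String getCustomerEmail() { return customerEmail; }
        public void setCustomerEmail(String customerEmail) { this.customerEmail = customerEmail; }
        public BigDecimal getTotal() { return total; }
        public void setTotal(BigDecimal total) { this.total = total; }
    }

    // 2. The noun-verber: all function, no state of its own. It reaches into
    // Order's getters to do its work - classic feature envy. (The 20% tax rate
    // is arbitrary, purely for illustration.)
    class OrderProcessor {
        public BigDecimal calculateTotalWithTax(Order order) {
            return order.getTotal().multiply(new BigDecimal("1.20"));
        }
    }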

The other design smell with these noun-verbers is that they’re almost always singletons. Oh, you might not realise they’re singletons, because you’ve cleverly hidden that behind your dependency injection framework: but it’s still a singleton. If there’s no state on it, it might as well be a singleton. It’s a static method in all but name. Oh sure, it’s more testable than if you’d actually used the word “static”. But it’s still a static method. If you’d not lied to yourself with your DI framework and written this as a static method, you’d recoil in horror. But because you’ve tarted it up and called it a “dependency”, you think it’s ok. Well it isn’t. It’s still crap code. What you’ve got is procedures, arbitrarily grouped into classes you laughably call “dependencies”. It sure as hell isn’t OO.
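
To make the hidden-singleton point concrete, here is a hedged sketch of the registration that usually accompanies a class like the OrderProcessor above. Guice is used purely for illustration; any container with a singleton scope behaves the same way:

    import com.google.inject.AbstractModule;
    import com.google.inject.Singleton;

    class ServiceModule extends AbstractModule {
        @Override
        protected void configure() {
            // One stateless instance shared by every caller: a singleton in all
            // but name, so orderProcessor.calculateTotalWithTax(order) is a
            // static method call wearing a "dependency" costume.
            bind(OrderProcessor.class).in(Singleton.class);
        }
    }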

One of the properties of good OO design is that code that operates on data is located close to the data. The way the data is actually represented can be hidden behind behaviours. You focus on the behaviour of objects, not the representation of the data. This allows you to model the domain in terms of nouns that have behaviours. A good OO design should include classes that correspond to nouns in the domain, with behaviours that are verbs acting on those nouns: Order.SubmitForPicking(), UserAccount.UpdatePostalAddress(), Basket.CalculatePriceIncludingTaxes().
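
For instance, a minimal sketch (illustrative names, not taken from the post) of behaviour living next to the data it uses:

    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;

    class Basket {
        // The representation stays private...
        private final List<BigDecimal> itemPrices = new ArrayList<>();
        private final BigDecimal taxRate;

        Basket(BigDecimal taxRate) {
            this.taxRate = taxRate;
        }

        void add(BigDecimal price) {
            itemPrices.add(price);
        }

        // ...and callers only see the domain behaviour.
        BigDecimal calculatePriceIncludingTaxes() {
            BigDecimal subtotal = itemPrices.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
            return subtotal.add(subtotal.multiply(taxRate));
        }
    }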

These are words that someone familiar with the domain but not with software would still understand. Does your customer know what an OrderPriceStrategyFactory is for? No? Then it’s not a real thing. It’s some bullshit you made up.

The unloved value objects are, ironically, where the real answer to the design problem lies. These are almost always actual nouns in the domain. They just lack any behaviours which would make them useful. Back to Mike’s example: he has a Customer class whose public interface is just an email address property – a classic value object: all state and no behaviour [if you want to play along at home, I’ve copied Mike’s code into a git repo, as well as my refactored version].

Customer sounds like a good noun in this domain. I bet the customer would know what a Customer was. If only there were some behaviours this domain object could have. What do Customers do in Mike’s world? Well, this example is all about creating and sending a report. A report is made for a single customer, so I think we could ask the Customer to create its own report. By moving the method from ReportBuilder onto Customer, it is right next to the data it operates on. Suddenly the public email property can be hidden – this seems a useful change: if we change how we contact customers, then only the Customer needs to change, not the ReportBuilder as well. It’s almost like a single change in the design should be contained within a single class. Maybe someone should write a principle about this single responsibility malarkey…

By following a pattern like this, moving methods from noun-verbers onto the nouns on which they operate, we end up with a clearer design. A Customer can CreateReport(), a Report can RenderAsEmail(), and an Email can Send(). In a domain like this, these seem like obvious nouns and obvious behaviours for these nouns to have. They are all meaningful outside of the made up world of software. The design models the domain and it should be clear how the domain model must change in response to each new requirement – since each will represent a change in our understanding of the domain.
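
A hedged sketch of the resulting shape – the type and method names follow the description above, while the report contents and the Emailer interface are made up for illustration (how the Emailer gets passed around is discussed next):

    interface Emailer {
        void send(String to, String body);
    }

    class Customer {
        private final String name;
        private final String emailAddress;   // no longer needs to be public

        Customer(String name, String emailAddress) {
            this.name = name;
            this.emailAddress = emailAddress;
        }

        Report createReport() {
            return new Report("Monthly report for " + name, emailAddress);
        }
    }

    class Report {
        private final String body;
        private final String recipient;

        Report(String body, String recipient) {
            this.body = body;
            this.recipient = recipient;
        }

        Email renderAsEmail() {
            return new Email(recipient, body);
        }
    }

    class Email {
        private final String to;
        private final String body;

        Email(String to, String body) {
            this.to = to;
            this.body = body;
        }

        void send(Emailer emailer) {
            // The infrastructure dependency arrives at the point of use.
            emailer.send(to, body);
        }
    }

With this shape, customer.createReport().renderAsEmail().send(emailer) reads like the domain it models.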

So why is this pattern so uncommon? I blame the IoC frameworks. No seriously, they’re completely evil. The first thing I hit when refactoring Mike’s example, even using poor man’s DI, was that my domain objects needed dependencies. Because I’ve now moved the functionality to email a report from ReportingService onto the Report domain object, my domain object now needs to know about Emailer. In the original design it could be injected in, but with an object that’s new’d up, how can I inject the dependency? I either need to pass Emailer into my Report at construction time, or when sending it as an email. When refactoring this I opted to pass in the dependency when it was used, but only because passing it in at construction time is cumbersome without support.
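
For comparison, a hedged sketch of the other option mentioned here – taking the dependency at construction time – applied to the illustrative Report from the earlier sketch; without framework support, every piece of code that news up a Report now has to have an Emailer in hand:

    // Same illustrative interface as in the previous sketch.
    interface Emailer {
        void send(String to, String body);
    }

    class Report {
        private final String body;
        private final String recipient;
        private final Emailer emailer;   // dependency captured at construction time

        Report(String body, String recipient, Emailer emailer) {
            this.body = body;
            this.recipient = recipient;
            this.emailer = emailer;
        }

        void sendAsEmail() {
            emailer.send(recipient, body);
        }
    }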

Almost all DI frameworks make this a royal pain. If I want to inject dependencies into a class that also has state (like the details of the customer’s report), it’s basically impossible, so nobody does it. It’s better, simpler, quicker to just pull report creation onto a ReportBuilder and leave Customer alone. But it’s wrong. Customer deserves to have some functionality. He wants to be useful. He’s tired of just being a repository for values. If only there were a way of injecting dependencies into your nouns that wasn’t completely bullshit.

For example, using NInject – pretty typical of DI frameworks – you can create a domain object that requires both injected dependencies and state, but only by string typing. Seriously? In the 21st century, you want me to abandon type safety and put the names of parameters into strings? No. Just say no.

No wonder that, when faced with these compromises, people settle for noun-verbers. The alternatives are absolutely horrific. The only DI framework I’ve seen which lets you do this properly is Guice’s assisted injection. Everybody else, as far as I can see, is just plain wrong.
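
For reference, a hedged sketch of what Guice assisted injection looks like for the illustrative Report above. The domain types are made up; only the @Inject/@Assisted/FactoryModuleBuilder wiring is the actual Guice API, and details may differ between versions:

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;
    import com.google.inject.assistedinject.Assisted;
    import com.google.inject.assistedinject.FactoryModuleBuilder;

    interface Emailer {
        void send(String to, String body);
    }

    class ConsoleEmailer implements Emailer {
        public void send(String to, String body) {
            System.out.println("To " + to + ": " + body);
        }
    }

    // A domain object mixing per-instance state (the recipient) with an
    // injected dependency (the Emailer).
    class Report {
        private final Emailer emailer;    // supplied by the container
        private final String recipient;   // supplied by the caller

        @Inject
        Report(Emailer emailer, @Assisted String recipient) {
            this.emailer = emailer;
            this.recipient = recipient;
        }

        void sendAsEmail(String body) {
            emailer.send(recipient, body);
        }
    }

    // Guice generates the implementation of this factory: callers pass the
    // state, the container fills in the dependencies.
    interface ReportFactory {
        Report create(String recipient);
    }

    class ReportModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(Emailer.class).to(ConsoleEmailer.class);
            install(new FactoryModuleBuilder().build(ReportFactory.class));
        }
    }

    class Demo {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new ReportModule());
            ReportFactory reports = injector.getInstance(ReportFactory.class);
            reports.create("customer@example.com").sendAsEmail("Your monthly report");
        }
    }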

Is your DI framework killing your code? Almost certainly: if you’ve got value objects and noun-verbers, your design is rubbish. And it’s your DI framework’s fault for making a better design too hard.

Categories: Programming, Testing & QA

Ethnography, Emotional Testing, Lean-UX, Enterprise BDD in Methods & Tools Fall 2015 issue

From the Editor of Methods & Tools - Tue, 09/22/2015 - 14:26
Methods & Tools – the free e-magazine for software developers, testers and project managers – has just published its Fall 2015 issue that discusses Ethnographic Approach to Software, Emotional Testing, Lean UX in public sector and Enterprise-Scale BDD. Article in the Fall 2015 issue of Methods & Tools: * An Ethnographic Approach to Software * […]

Quote of the Month September 2015

From the Editor of Methods & Tools - Mon, 09/14/2015 - 14:48
Competencies versus Roles: We’ve seen a positive move toward emphasizing competencies in a team rather than roles or titles. As teams make that change, we see fewer “It’s not my job” excuses and more “How can I help?” conversations. Team members will continue to have core competencies in some areas more than others, but they […]

Software Development Linkopedia September 2015

From the Editor of Methods & Tools - Thu, 09/10/2015 - 13:38
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about self-organization, software quality, software development management, software architecture from a DevOps perspective, Scrum maturity, product owners’ skills and web performance. Web site: Week-end Testing Blog: Why Self-Organizing is So Hard […]

Money is Back for Software Tools… Or Maybe Not

From the Editor of Methods & Tools - Mon, 09/07/2015 - 16:04
The beginning of 2015 has seen a wave of new financing for software development tools companies. Mulesoft, Docker, Sauce Labs, Neo Technology, MongoDB, EnterpriseDB or CloudBees are some of the organizations that have received up to $130 million to develop their activities. Even if not all of these companies will earn money in the long term, this […]

GTAC 2015 Coming to Cambridge (Greater Boston) in November

Google Testing Blog - Sun, 09/06/2015 - 16:27
Posted by Anthony Vallone on behalf of the GTAC Committee

We are pleased to announce that the ninth GTAC (Google Test Automation Conference) will be held in Cambridge (Greatah Boston, USA) on November 10th and 11th (Toozdee and Wenzdee), 2015. So, tell everyone to save the date for this wicked good event.

GTAC is an annual conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.

You can browse presentation abstracts, slides, and videos from previous years on the GTAC site.

Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!

Categories: Testing & QA

GTAC 2015: Call for Proposals & Attendance

Google Testing Blog - Sun, 09/06/2015 - 16:27
Posted by Anthony Vallone on behalf of the GTAC Committee

The GTAC (Google Test Automation Conference) 2015 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Cambridge office (near Boston, Massachusetts, USA) on November 10th - 11th, 2015.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend in person, you’ll be able to watch the conference remotely. We will post the live stream information as we get closer to the event, and recordings will be posted afterward.

Presentations are targeted at student, academic, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.

For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 200 attendees for the event. The selection process is not first come first serve (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds (academic, industry expert, junior engineer, etc).

The due date for both presentation and attendance applications is August 10th, 2015.

There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.

More information
You can find more details at developers.google.com/gtac.

Categories: Testing & QA

The Deadline to Apply for GTAC 2015 is Monday Aug 10

Google Testing Blog - Sun, 09/06/2015 - 16:27
Posted by Anthony Vallone on behalf of the GTAC Committee

The deadline to apply for GTAC 2015 is this Monday, August 10th, 2015. There is a great deal of interest to both attend and speak, and we’ve received many outstanding proposals. However, it’s not too late to submit your proposal for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to the GTAC site (developers.google.com/gtac/2015/) over the next several weeks, and you can find conference details there.

For those that have already signed up to attend or speak, we will contact you directly by mid-September.

Categories: Testing & QA

TestInsane’s Mindmaps Are Crazy Cool

James Bach’s Blog - Wed, 09/02/2015 - 11:09

Most testing companies offer nothing to the community or the field of testing. They all seem to say they hire only the best experts, but only a very few of them are willing to back that up with evidence. Testing companies, by and large, are all the same, and the sameness is one of mediocrity and mendacity.

But there are a few exceptions. One of them is TestInsane, founded by ex-Moolyan co-founder Santosh Tuppad. This is a company to watch.

The wonderful thing about TestInsane is their mindmaps. More than 100 of them. What lovelies! Check them out. They are a fantastic public contribution! Each mindmap tackles some testing-related subject and lists many useful ideas that will help you test in that area.

I am working on a guide to bug reporting, and I found three maps on their site that are helping me cover all the issues that matter. Thank you TestInsane!

I challenge other testing companies to contribute to the craft, as well.

Note: Santosh offered me money to help promote his company. That is a reasonable request, but I don’t do those kinds of deals. If I did that even once I would lose vital credibility. I tell everyone the same thing: I am happy to work for you if you pay me, but I cannot promote you unless I believe in you, and if I believe in you I will promote you for free. As of this writing, I have not done any work for TestInsane, paid or otherwise, but it could happen in the future.

I have done paid work for Moolya and Per Scholas, both of which I gush about on a regular basis. I believe in those guys. Neither of them pays me to say good things about them, but remember, anyone who works for a company will never say bad things. There are some other testing companies I have worked for that I don’t feel comfortable endorsing, but neither will I complain about them in public (usually… mostly).

Categories: Testing & QA

Software Development Conferences Forecast August 2015

From the Editor of Methods & Tools - Mon, 08/31/2015 - 14:53
Here is a list of software development related conferences and events on Agile project management (Scrum, Lean, Kanban), software testing and software quality, software architecture, programming (Java, .NET, JavaScript, Ruby, Python, PHP) and databases (NoSQL, MySQL, etc.) that will take place in the coming weeks and that have media partnerships with the Methods & […]

Software Development Linkopedia August 2015

From the Editor of Methods & Tools - Thu, 08/27/2015 - 14:44
Here is our monthly selection of knowledge on programming, software testing and project management. This month you will find some interesting information and opinions about commitment & estimation, better software testing, the dark side of metrics, scrum retrospectives, databases and Agile adoption. Blog: Against Estimate-Commitment Blog: Write Better Tests in 5 Steps Blog: Story Point […]

Quote of the Month August 2015

From the Editor of Methods & Tools - Wed, 08/19/2015 - 09:39
Acknowledging that something isn’t working takes courage. Many organizations encourage people to spin things in the most positive light rather than being honest. This is counterproductive. Telling people what they want to hear just defers the inevitable realization that they won’t get what they expected. It also takes from them the opportunity to react to […]

How to Predict the Release of Your Project Without Estimating

From the Editor of Methods & Tools - Mon, 08/10/2015 - 15:41
We often hear that estimating is a must in project management. “We can’t make decisions without them” we hear often. This video shows examples of how you can predict a release date of a project without any estimates, relying only on easily available data. Learn how you can follow progress on a project at all […]

Refactoring JavaScript from Sync to Async in Safe Baby-Steps

Mistaeks I Hav Made - Nat Pryce - Sat, 08/08/2015 - 17:10
Consider some JavaScript code that gets and uses a value from a synchronous call or built-in data structure:

    function to_be_refactored() {
        var x;
        ...
        x = get_x();
        ...use x...
    }

Suppose we want to replace this synchronous call with a call to a service that has an asynchronous API (an HTTP fetch, for example). How can we refactor the code from synchronous to asynchronous style in small safe steps?

First, wrap the remainder of the function after the line that gets the value in a “continuation” function that takes the value as a parameter and closes over any other variables in its environment. Pass the value to the continuation function:

    function to_be_refactored() {
        var x, cont;
        ...
        x = get_x();
        cont = function(x) {
            ...use x...
        };
        cont(x);
    }

Then pull the definition of the continuation function before the code that gets the value:

    function to_be_refactored() {
        var x, cont;
        cont = function(x) {
            ...use x...
        };
        ...
        x = get_x();
        cont(x);
    }

Now extract the last two lines that get the value and call the continuation into a single function that takes the continuation as a parameter, and pass the continuation to it:

    function to_be_refactored() {
        ...
        get_x_and(function(x) {
            ...use x...
        });
    }

    function get_x_and(cont) {
        cont(get_x());
    }

If you have calls to get_x in many places in your code base, move get_x_and into a common module so that it can be called everywhere that get_x is called. Transform the remaining uses of get_x to “continuation passing style”, replacing the calls to get_x with calls to get_x_and.

Finally, replace the implementation of get_x_and with a call to the async service and delete the get_x function.

Wouldn’t it be nice if IDEs could do this refactoring automatically?

The Trouble With Shared Mutable State

Dale Hagglund asked via Twitter: “What if cont assumes that some [shared mutable] property remains constant across the async invocation? I’ve always found these very hard to unmake.”

In that case, you’ll have to copy the current value of the shared, mutable property into a local variable that is then closed over by the continuation. E.g.

    function to_be_refactored() {
        var x;
        ...
        x = get_x();
        ...use x and shared_mutable_y()...
    }

would have to become:

    function to_be_refactored() {
        var y;
        ...
        y = shared_mutable_y();
        get_x_and(function(x) {
            ...use x and y...
        });
    }
Categories: Programming, Testing & QA

Thought after Test Automation Day 2013

Henry Ford said “Obstacles are those frightful things you see when you take your eyes off the goal.” Since attending Test Automation Day last month, I have been figuring out why industrializing testing doesn’t work. I deliberately put it in this negative perspective, because I think it does work! But when is it successful? A lot of the time, Ford’s remark is exactly the problem. People tend to see obstacles – obstacles rooted in the thought that it isn’t feasible to change something. They need to change, but that’s not an easy change.

After attending #TAD2013, as it was known on Twitter, I saw a huge interest in better, faster, and even cheaper testing through tools and industrialization. Test automation has long been seen as an interesting option that can enable faster testing. It wasn’t always cheaper, especially the first time, but it was at least faster. As I see it, it also enables better testing. “Better?” you may ask. Test automation itself doesn’t enable better testing, but by automating regression tests and simple checks, the tester can focus on other areas of quality.


And isn’t that the goal? In the end, everyone involved in a project wants to deliver a high-quality product, not one full of bugs. But they also tend to see the obstacles. I see them less and less. New tools are so advanced, and automation testers are becoming so much smarter, that they enable us to look beyond the obstacles – I would rather say look over the obstacles.

At the Test Automation Day I learned some new things, but it also proved something I already knew: test automation is here to stay. We don’t need to focus on the obstacles, but should focus on the goal.

Categories: Testing & QA

The Toyota Way: The need for doing it right the first time

After WWII Toyota started developing its Toyota Production System (TPS), which was identified as ‘Lean’ in the 1990s. Taiichi Ohno, Shigeo Shingo and Eiji Toyoda developed the system between 1948 and 1975. According to the story surrounding the system, it was inspired not by the American automotive industry but by a visit to American supermarkets: Ohno saw the supermarket as a model for what he was trying to accomplish in the factory and for perfecting the Just-in-Time (JIT) production system. Low inventory levels were a key outcome of the TPS, and an important element of the philosophy behind the system is to work intelligently and eliminate waste so that only minimal inventory is needed.

TPS and Lean have their own principles, as outlined by Toyota:

  • Long-term Philosophy
  • Right process will produce the right results
  • Value to organization by developing people
  • Solving root problems drives organizational learning

These principles were summed up and published by Toyota in 2001 under the name “The Toyota Way 2001”. It groups the principles named above into two key areas: Continuous Improvement and Respect for People.


The principles for continuous improvement include establishing a long-term vision, working on challenges, continual innovation, and going to the source of the issue or problem. The principles relating to respect for people include ways of building respect and teamwork. When looking at ALM, all these principles come together in the ‘first time right’ approach already mentioned. From Toyota’s view they are outlined as follows:

  • The right process will produce the right results
    • Create continuous process flow to bring problems to the surface.
    • Use the ‘pull’ system to avoid overproduction (kanban).
    • Level out the workload (heijunka).
    • Build a culture of stopping to fix problems, to get quality right the first time (jidoka).
  • Continuously solving root problems drives organizational learning
    • Go and see for yourself to thoroughly understand the situation (Genchi Genbutsu).
    • Make decisions slowly by consensus, thoroughly considering all options (nemawashi); implement decisions rapidly.
    • Become a learning organization through relentless reflection (hansei) and continuous improvement (kaizen).
Let’s do it right now!

As the economy changes and IT becomes more and more pervasive in our everyday lives, the need for good-quality software products has never been higher. Software issues create bigger and bigger problems in our lives. Think about trains that cannot run due to software issues, bank clients who have no access to their accounts, and people oversleeping because the alarm app on their iPhone didn’t work. As Capers Jones [Jones, 2011] states in his 2011 study, “software is blamed for more major business problems than any other man-made product” and “poor quality has become one of the most expensive topics in human history”. Improving software quality is a key topic for all industries.

Right the first time vs. jidoka

Both TPS and Lean use autonomation, or jidoka. Autonomation can be described as ‘intelligent automation’: when an abnormal situation arises, the ‘machine’ stops and the abnormality is fixed. Autonomation prevents the production of defective products, eliminates overproduction, and focuses attention on understanding the problem and ensuring that it never recurs. It is a quality control process that applies the following four principles:

  • Detect the abnormality.
  • Stop.
  • Fix or correct the immediate condition.
  • Investigate the root cause and install a countermeasure.
Find defects as early as possible

In other words, autonomation helps to get quality right the first time, perfectly. Since IT projects are different from the Toyota car production line, ‘perfectly’ may be a bit too much, but the process around quality assurance should be the same:

  • Find the defect.
  • Stop.
  • Fix or correct the error.
  • Investigate the root cause and take countermeasures.

The defect should be found as early as possible so that it can be fixed as early as possible. As with Lean and TPS, the reason behind this is to make it possible to identify and correct defects immediately, as part of the process.

Categories: Testing & QA

When will you start with test automation?

I just came back from vacation, and when I started work again I noticed a slight change in the resource requests I now see coming by: almost all of them include a statement about test automation. In the last two days I had two separate sessions about test automation tools. Has test automation suddenly become more important? Did people follow up on my last post, where I stated that tools are a prerequisite in testing today – or actually: yesterday!

If you missed the latest cycle in new tools for test automation, you’re either an ostrich with your head in the sand (“sorry, vacation was in Southern Africa”), or you’re simply still afraid. Afraid of a change in which test automation would cannibalize your manual test execution.

Test automation is no longer about taking over test execution in a very complex and unmanageable way. No, it offers higher efficiency in test design and test execution, but also more options for testing certain non-functional aspects of applications – which could not be done without those tools – like security and performance, and virtual environments for end-to-end tests that don’t depend on test environments that are down all the time. Tools now also offer more support for testing mobile solutions. Tools are everywhere!


Test automation offers us testers the opportunity to do more, faster, with less risk, and cheaper. I put those words deliberately in that order. Test automation is often seen as a way to test more cheaply. You can, but you can also do more, for instance:

  • Let the tool do the checks while you explore the application further;
  • Set up a virtual test environment that doesn’t go down after an hour of use, and test more in the same time;
  • Create and execute more test cases by generating and executing them automatically;
  • Get higher quality by really doing a thorough regression test, instead of a simple check, to find integration errors.

There are enough reasons to work on test automation, and I don’t see why you wouldn’t. I think it is now even time for the next step in test automation. What that is, time will tell, but I look forward to hearing about it at the Test Automation Day in June, where Bryan Bakker will tell us more about this next step in his presentation “Design for Testability – the next step in Test Automation”. After the conference I’ll post my ideas here.

Categories: Testing & QA

Tools are a prerequisite for efficient and effective QA

We now live in a world where testing and quality are becoming more and more important. Last month I had a meeting with senior management in my company and I made the statement that “quality is user experience”; in other words, “without the right amount of quality the user experience will always be low”. I think most people in QA and testing will agree with me on that. Even organizations agree on that. Why, then, do we still see so many failures in the software around us? Why do we still create software without the needed quality?

For one, because it’s not possible to test 100%! A known issue in QA, but that’s not the answer we’re looking for. I think the answer is that we still rely too much on old-fashioned manual (functional) testing. As I explained in an earlier blog, we need to go past that and move forward. Testing is part of IT and needs to showcase itself as a highly versatile profession. We need to be able to save money, deliver higher quality, shorten time to market, and go live with as few bugs as possible…

How can we do that? There are multiple ways to answer that, but one thing will always be among the answers: test automation, or industrialization. Tools should be a prerequisite for efficient and effective QA. The question should not be whether to use them, but why not to use them.

Why not use test tools?

The need for test automation has never been as high as it is now, with Agile approaches within the software development lifecycle. New-generation test tools are easy to use, low cost, or both. Examples I favor are the new Tricentis TOSCA™ Testsuite, Worksoft Sertify©, the SOASTA® Platform, but also the open source tool Selenium. And QA, and IT as a whole, needs to go further: not only using tools to automate test execution, performance testing and security testing, but even more so test specification.

The upcoming Modelization of IT extends the usefulness of tools even further. We can create models and specify test cases with them (with the use of special tools), create requirements, create code, and more. IT can benefit from this Modelization to help the business go further and achieve its goals. I’ve written about a good example of this in this blog on fully automated testing.

The tools are the prerequisite, but how can you learn more about them? Well, if you are in the Netherlands at the end of June, you could go to the Test Automation Day. They just published the program on their site so you can learn more about test automation.

Categories: Testing & QA
