I read a really interesting post recently looking at the difference between typical OO code and a more functional style. There’s a lot to be said for the functional style of coding, even in OO languages like Java and C#. The biggest downside I find is always one of code organisation: OO gives you a discoverable way of organising large amounts of code, while in a functional style you might have less code, but it’s harder to organise it clearly.
It’s not Mike’s conclusion that I want to challenge, it’s his starting point: he starts out with what he describes as “typical object oriented C# code”. Unfortunately I think he’s bang on: even in this toy example, it is exactly like almost all so-called “object oriented” code I see in the wild. My issue with code like this is that it isn’t in the least bit object oriented. It’s procedural code haphazardly organised into classes. Just because you’ve got classes doesn’t make it OO.
What do I mean by procedural code? The typical pattern I see is a codebase made up of two types of classes: noun-verbers, which are all function and no state, and value objects, which are all state and no behaviour.
A noun-verber is the sure-fire smell of poor OO code: OrderProcessor, AccountManager, PriceCalculator. No, calling it an OrderService doesn’t make it any better; it’s still a noun-verber you’re hiding behind the meaningless word “service”. A noun-verber means it’s all function and no state: it’s acting on somebody else’s state. It’s almost certainly a sign of feature envy.
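To make the smell concrete, here is a minimal sketch of the two kinds of classes (all names are invented for illustration, not taken from Mike’s code):

```csharp
using System.Collections.Generic;
using System.Linq;

// The anaemic value object: all state, no behaviour.
public class Order
{
    public List<OrderLine> Lines { get; } = new List<OrderLine>();
}

public class OrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }
}

// The noun-verber: all function, no state, acting on somebody else's state.
public class PriceCalculator
{
    public decimal CalculatePrice(Order order)
    {
        // Classic feature envy: digging through Order's data instead of
        // asking Order to answer the question itself.
        return order.Lines.Sum(line => line.UnitPrice * line.Quantity);
    }
}
```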
The other design smell with these noun-verbers is that they’re almost always singletons. Oh, you might not realise they’re singletons, because you’ve cleverly hidden that behind your dependency injection framework: but it’s still a singleton. If there’s no state on it, it might as well be a singleton. It’s a static method in all but name. Sure, it’s more testable than if you’d actually used the word “static”, but it’s still a static method. If you’d not lied to yourself with your DI framework and written this as a static method, you’d recoil in horror. But because you’ve tarted it up and called it a “dependency”, you think it’s ok. Well it isn’t. It’s still crap code. What you’ve got is procedures, arbitrarily grouped into classes you laughably call “dependencies”. It sure as hell isn’t OO.
One of the properties of good OO design is that code that operates on data is located close to the data. The way the data is actually represented can be hidden behind behaviours. You focus on the behaviour of objects, not the representation of the data. This allows you to model the domain in terms of nouns that have behaviours. A good OO design should include classes that correspond to nouns in the domain, with behaviours that are verbs acting on those nouns: Order.SubmitForPicking(), UserAccount.UpdatePostalAddress(), Basket.CalculatePriceIncludingTaxes().
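As a sketch of the difference, here is the same kind of calculation as behaviour on the noun itself (the tax rate handling is an assumed detail):

```csharp
using System.Collections.Generic;
using System.Linq;

// Callers see Basket.CalculatePriceIncludingTaxes(), never the raw data.
public class Basket
{
    private readonly List<decimal> linePrices = new List<decimal>();
    private readonly decimal taxRate; // e.g. 0.20m for 20% tax

    public Basket(decimal taxRate)
    {
        this.taxRate = taxRate;
    }

    public void Add(decimal linePrice)
    {
        linePrices.Add(linePrice);
    }

    public decimal CalculatePriceIncludingTaxes()
    {
        // The representation (a list of prices) stays hidden behind behaviour.
        return linePrices.Sum() * (1 + taxRate);
    }
}
```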
These are words that someone familiar with the domain but not software would still understand. Does your customer know what an OrderPriceStrategyFactory is for? No? Then it’s not a real thing. It’s some bullshit you made up.
The unloved value objects are, ironically, where the real answer to the design problem lies. These are almost always actual nouns in the domain; they just lack any behaviours which would make them useful. Back to Mike’s example: he has a Customer class whose public interface is just an email address property, a classic value object: all state and no behaviour. (If you want to play along at home, I’ve copied Mike’s code into a git repo, along with my refactored version.)
Customer sounds like a good noun in this domain. I bet the customer would know what a Customer was. If only there were some behaviours this domain object could have. What do Customers do in Mike’s world? Well, this example is all about creating and sending a report. A report is made for a single customer, so I think we could ask the Customer to create its own report. By moving the method from ReportBuilder onto Customer, it sits right next to the data it operates on. Suddenly the public email property can be hidden. This seems a useful change: if we change how we contact customers, then only Customer needs to change, not ReportBuilder as well. It’s almost like a single change in the design should be contained within a single class. Maybe someone should write a principle about this single responsibility malarkey…
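A sketch of the move (simplified; the real before-and-after is in the repo linked above, and the report body here is a placeholder):

```csharp
// Report creation now sits right next to the data it operates on.
public class Customer
{
    private readonly string emailAddress; // no longer needs to be public

    public Customer(string emailAddress)
    {
        this.emailAddress = emailAddress;
    }

    public Report CreateReport()
    {
        // Only Customer knows how customers are contacted.
        return new Report(emailAddress);
    }
}

public class Report
{
    private readonly string recipient;

    public Report(string recipient)
    {
        this.recipient = recipient;
    }

    // Rendering as an email is the Report's own behaviour (Email below).
    public Email RenderAsEmail()
    {
        return new Email(recipient, "report body goes here");
    }
}
```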
By following a pattern like this, moving methods from noun-verbers onto the nouns on which they operate, we end up with a clearer design. A Customer can CreateReport(), a Report can RenderAsEmail(), and an Email can Send(). In a domain like this, these seem like obvious nouns and obvious behaviours for those nouns to have. They are all meaningful outside of the made-up world of software. The design models the domain, and it should be clear how the domain model must change in response to each new requirement, since each will represent a change in our understanding of the domain.
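Putting the sketches together, the calling code reads like the domain; the emailer parameter here anticipates the dependency question discussed next:

```csharp
public class ReportingExample
{
    public void SendReportTo(Customer customer, IEmailer emailer)
    {
        // Nouns with behaviours, not noun-verbers:
        Report report = customer.CreateReport();
        Email email = report.RenderAsEmail();
        email.Send(emailer); // the collaborator is passed at the point of use
    }
}
```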
So why is this pattern so uncommon? I blame the IoC frameworks. No, seriously, they’re completely evil. The first thing I hit when refactoring Mike’s example, even using poor man’s DI, was that my domain objects needed dependencies. Because I’ve now moved the functionality to email a report from ReportingService onto the Report domain object, my domain object now needs to know about Emailer. In the original design it could be injected, but with an object that’s new’d up, how can I inject the dependency? I either need to pass Emailer into my Report at construction time or when sending the email. When refactoring this I opted to pass in the dependency when it was used, but only because passing it in at construction time is cumbersome without framework support.
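In sketch form, passing the dependency at the point of use looks like this (IEmailer is my stand-in for the emailer in Mike’s code):

```csharp
public interface IEmailer
{
    void Send(string to, string body);
}

public class Email
{
    private readonly string to;
    private readonly string body;

    public Email(string to, string body)
    {
        this.to = to;
        this.body = body;
    }

    // The dependency arrives at the point of use, not at construction:
    // no container is needed to mix state and collaborators this way.
    public void Send(IEmailer emailer)
    {
        emailer.Send(to, body);
    }
}
```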
Almost all DI frameworks make this a royal pain. If I want to inject dependencies into a class that also has state (like the details of the customer’s report), it’s basically impossible, so nobody does it. It’s better, simpler, quicker to just pull report creation onto a ReportBuilder and leave Customer alone. But it’s wrong. Customer deserves to have some functionality. He wants to be useful. He’s tired of just being a repository for values. If only there were a way of injecting dependencies into your nouns that wasn’t completely bullshit.
For example, NInject (pretty typical of DI frameworks) lets you create a domain object that requires both injected dependencies and state, but only by string typing. Seriously? In the 21st century, you want me to abandon type safety and put the names of parameters into strings? No. Just say no.
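Something like this, if I recall NInject’s API correctly (a Report constructor taking both a service and per-instance state is assumed here):

```csharp
using Ninject;
using Ninject.Parameters;

// Suppose Report's constructor took an injected service plus state, e.g.
//   public Report(IEmailer emailer, Customer customer) { ... }
public class ReportFactory
{
    private readonly IKernel kernel = new StandardKernel();

    public Report CreateFor(Customer customer)
    {
        // The state is supplied by parameter *name*, as a string: rename
        // the constructor parameter and this only breaks at runtime.
        return kernel.Get<Report>(new ConstructorArgument("customer", customer));
    }
}
```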
No wonder that, when faced with these compromises, people settle for noun-verbers. The alternatives are absolutely horrific. The only DI framework I’ve seen which lets you do this properly is Guice’s assisted injection. Everybody else, as far as I can see, is just plain wrong.
Is your DI framework killing your code? Almost certainly: if you’ve got value objects and noun-verbers, your design is rubbish. And it’s your DI framework’s fault for making a better design too hard.
Most testing companies offer nothing to the community or the field of testing. They all seem to say they hire only the best experts, but only a very few of them are willing to back that up with evidence. Testing companies, by and large, are all the same, and the sameness is one of mediocrity and mendacity.
The wonderful thing about TestInsane is their mindmaps. More than 100 of them. What lovelies! Check them out. They are a fantastic public contribution! Each mindmap tackles some testing-related subject and lists many useful ideas that will help you test in that area.
I am working on a guide to bug reporting, and I found three maps on their site that are helping me cover all the issues that matter. Thank you TestInsane!
I challenge other testing companies to contribute to the craft, as well.
Note: Santosh offered me money to help promote his company. That is a reasonable request, but I don’t do those kinds of deals. If I did that even once I would lose vital credibility. I tell everyone the same thing: I am happy to work for you if you pay me, but I cannot promote you unless I believe in you, and if I believe in you I will promote you for free. As of this writing, I have not done any work for TestInsane, paid or otherwise, but it could happen in the future.
I have done paid work for Moolya and Per Scholas, both of which I gush about on a regular basis. I believe in those guys. Neither of them pays me to say good things about them, but remember, anyone who works for a company will never say bad things about it. There are some other testing companies I have worked for that I don’t feel comfortable endorsing, but neither will I complain about them in public (usually… mostly).
After attending #TAD2013 (as it was tagged on Twitter), I saw a huge interest in better testing, faster testing, even cheaper testing, by using tools to industrialize. Test automation has long been seen as an interesting option that can enable faster testing. It wasn’t always cheaper, especially the first time, but at least faster. As I see it, it’ll enable better testing. “Better?” you may ask. Test automation itself doesn’t enable better testing, but by automating regression tests and simple work, the tester can focus on other areas of quality.
And isn’t that the goal? In the end, all people involved in a project want to deliver a high-quality product, not one full of bugs. But they also tend to see the obstacles. I see them less and less. The new tools are so advanced, and automation testers are becoming smarter and smarter, that together they enable us to look beyond the obstacles. I would even say look over the obstacles.
At the Test Automation Day I learned some new things, but it also confirmed something I already knew: test automation is here to stay. We don’t need to focus on the obstacles; we should focus on the goal.
After WWII, Toyota started developing its Toyota Production System (TPS), which was identified as “Lean” in the 1990s. Taiichi Ohno, Shigeo Shingo and Eiji Toyoda developed the system between 1948 and 1975. According to the story surrounding the system, it was inspired not by the American automotive industry but by a visit to American supermarkets: Ohno saw the supermarket as a model for what he was trying to accomplish in the factory, and used it to perfect the Just-in-Time (JIT) production system. Low inventory levels were a key outcome of the TPS, and an important element of the philosophy behind the system is to work intelligently and eliminate waste so that only minimal inventory is needed.
TPS and Lean have their own principles, as outlined by Toyota: Challenge, Kaizen (continuous improvement), Genchi Genbutsu (go and see for yourself), Respect, and Teamwork. These principles were summed up and published by Toyota in 2001 under the name “The Toyota Way 2001”, which groups them into two key areas: Continuous Improvement and Respect for People.
The principles for Continuous Improvement include establishing a long-term vision, working on challenges, continual innovation, and going to the source of the issue or problem. The principles relating to Respect for People include ways of building respect and teamwork. When looking at ALM, all these principles come together in the “first time right” approach already mentioned.
As the economy changes and IT becomes ever more commonplace in our everyday lives, the need for good quality software products has never been higher. Software issues create bigger and bigger problems in our lives. Think about trains that cannot run due to software issues, bank clients that have no access to their accounts, and people oversleeping because the alarm app on their iPhone didn’t work. As Capers Jones [Jones, 2011] states in his 2011 study, “software is blamed for more major business problems than any other man-made product” and “poor quality has become one of the most expensive topics in human history”. The improvement of software quality is a key topic for all industries.

Right the first time vs jidoka
In both TPS and Lean, autonomation, or jidoka, is used. Autonomation can be described as “intelligent automation”: when an abnormal situation arises, the “machine” stops so the abnormality can be fixed. Autonomation prevents the production of defective products, eliminates overproduction, and focuses attention on understanding the problem and ensuring that it never recurs. It is a quality control process that applies four principles: detect the abnormality, stop, fix or correct the immediate condition, and investigate the root cause to install a countermeasure.
In other words, autonomation helps to get quality right the first time, perfectly. IT projects being different from the Toyota car production line, “perfectly” may be a bit too much, but the process around quality assurance should be the same:
The defect should be found as early as possible so it can be fixed as early as possible. As with Lean and TPS, the reason behind this is to make it possible to identify and correct defects immediately, as part of the process.
I just came back from vacation, and when I started again I noticed a slight change in the resource requests I see coming by: almost all requests now include a statement about test automation. In the last two days I had two separate sessions about test automation tools. Has test automation all of a sudden become more important? Did people follow up on my last post, where I stated that tools are a prerequisite in testing today, or actually yesterday?
If you missed the latest cycle of new tools for test automation, you’re either an ostrich with your head in the sand (sorry, my vacation was in Southern Africa), or you’re simply still afraid: afraid of the change, that test automation will cannibalize your manual test execution.
Test automation is no longer about taking over test execution in a very complex and unmanageable way. No, it offers higher efficiency in test design and test execution, but also more options: testing certain non-functional parts of applications that could not be tested without those tools, like security and performance, and virtual environments to do end-to-end tests without depending on test environments that are down all the time. Tools now also offer more support for testing mobile solutions. Tools are everywhere!
Test automation offers us testers the opportunity to do more, faster, less risky, and cheaper. I set these words specifically in that order. Test automation is often seen as a way to do testing cheaper. You can, but you can also do more.
There are enough reasons to work on test automation, and I don’t see why you wouldn’t. I think it is now even time for the next step in test automation. What that is, time will tell, but I look forward to hearing about it at the Test Automation Day in June, where Bryan Bakker will tell us more about this next step in his presentation “Design for Testability – the next step in Test Automation”. After the conference I’ll post my ideas here.
We now live in a world where testing and quality are becoming more and more important. Last month I had a meeting with senior management in my company and I made the statement that “quality is user experience”, in other words, “without the right amount of quality the user experience will always be low”. I think most people in QA and testing will agree with me on that. Even organizations agree on that. But then why do we still see so many failures in the software around us? Why do we still create software without the needed quality?
For one, because it’s not possible to test 100%! A known issue in QA, but that’s not the answer we’re looking for. I think the answer is that we still rely too much on old-fashioned manual (functional) testing. As I explained in an earlier blog, we need to go past that and move forward. Testing is part of IT and needs to showcase itself as a highly versatile profession. We need to be able to save money, deliver higher quality, shorten time to market, and go live with as few bugs as possible…
How can we do that? There are multiple ways to answer that, but one thing will always be among the answers: test automation, or industrialization. Tools should be a prerequisite for efficient and effective QA. The question should not be whether to use them, but why not to.
The need for test automation has never been as high as now, with Agile approaches within the software development lifecycle. New-generation test tools are easy to use, low cost, or both. Examples I favor are the new Tricentis TOSCA™ Testsuite, Worksoft Certify©, and the SOASTA® Platform, but also the open source tool Selenium. And QA, and IT as a whole, needs to go further: not only using tools to automate test execution, performance testing, and security testing, but even more for test specification.
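To give a feel for how little ceremony these tools need, here is a minimal Selenium WebDriver check in C# (the URL and element names are invented for illustration):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class LoginSmokeTest
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/login");
            driver.FindElement(By.Name("username")).SendKeys("demo");
            driver.FindElement(By.Name("password")).SendKeys("secret");
            driver.FindElement(By.Id("login-button")).Click();

            // One automated check that used to be a manual test step.
            if (!driver.Title.Contains("Dashboard"))
                throw new Exception("Login smoke test failed");
        }
    }
}
```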
The upcoming Modelization of IT enables the use of tools even further. We can create models and use them (with special tools) to specify test cases, create requirements, generate code, and more. IT can benefit from this Modelization to help the business go further and achieve its goals. I’ve written about a good example of this in this blog post on fully automated testing.
The tools are the prerequisite, but how can you learn more about them? Well, if you are in the Netherlands at the end of June, you can go to the Test Automation Day. They have just published the program on their site, so you can learn more about test automation there.