
I have worked in software development since the mid-90s, after a few years in the aerospace industry. Initially a database specialist, always a programmer, currently working with Java. My focus is now on helping organisations to go faster with higher quality; Scrum and Agile are often part of that change.

5 Tips to Reduce Unit Test Defect Rates

07.13.2011

Five quick points about unit tests that will reduce your defect rates.

  1. Always stop to add that simple unit test. You will be amazed how often this discovers a bug or unexplored corner case.
  2. Never develop from a main method; find a way to turn it into a test. Main-driven development is horrible: main methods are not part of the continuous development cycle, so once the work is finished they get forgotten. (A minimal sketch of such a conversion appears after this list.)
  3. Make all tests run in the continuous integration environment. A test that does not run is a dark test.  Tests that are not continually run are probably broken.  Same goes for main method development.
  4. Expend effort getting the time from commit to all project tests passing as short as possible. If I could check in and see the results one second later, there would be little chance of me holding anyone else up. Aim for less than 15 minutes with current technology. Parallelism is the key: separate the build into many small suites that run in parallel. Why not build a cloud just for running parallel test suites?
  5. Do as much testing as possible using simple unit tests. Unit tests are cheap to write, run fast and are easier to maintain. Test as much as possible with them, resort to higher level tests only to test integration.  Good unit level coverage will also mean the higher levels have less to do.
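As a hedged illustration of point 2, here is roughly what turning a throwaway main method into a test can look like. The PriceCalculator class and its totalWithTax method are invented for the example, and the test uses plain JUnit 4; the point is only that the same check now runs on every build instead of being eyeballed once by hand.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Before: a throwaway main method, run once by hand and then forgotten.
    //
    //   public static void main(String[] args) {
    //       PriceCalculator calc = new PriceCalculator();
    //       System.out.println(calc.totalWithTax(100.0)); // eyeball the output
    //   }

    // After: the same check as a unit test, picked up by every CI build.
    @Test
    public void totalWithTaxAddsTwentyPercent() {
        PriceCalculator calc = new PriceCalculator(); // hypothetical class under test
        assertEquals(120.0, calc.totalWithTax(100.0), 0.001);
    }
}
```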


If you just did the above you would be doing very well indeed.

Some other test related thoughts I had today:

Why would you run your tests on an hourly schedule?  Always hook unit tests into the commit.  If your unit coverage is high, it might even be possible to treat the unit tests as a pass-or-fail threshold for a commit.

If you can’t fix a break quickly, revert before the breaks pile up.  Choose a source control system that makes this easy to do.  The threat of a revert means people will go out of their way to test the build before checking in.

Integration tests should also be run continuously.  There may be challenges: they are often slower and more fragile.  Break them into batches, but not just by functional area.  Consider batches split by test categorisation, with buckets like the ones below (a rough JUnit sketch follows the list):

Slow running: Schedule infrequent runs.

Fragile but fast: Trigger on commit.

Performance: Run when system is not under load.
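One way to express buckets like these in Java, assuming JUnit 4.8 or later, is the @Category mechanism; the marker interface and class names below are illustrative only. The CI server can then point each job (infrequent, per-commit, off-peak) at the matching suite.

```java
// Each type would normally live in its own file; they are shown together
// only to keep the sketch short. Assumes JUnit 4.8+.
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Marker interface for one of the buckets described above.
public interface SlowTests {}

// A test class (or an individual method) tags itself with a bucket.
public class NightlyImportTest {
    @Test
    @Category(SlowTests.class)
    public void importsFullCustomerFile() {
        // ... slow, end-to-end style check ...
    }
}

// A suite the CI server runs on its own, infrequent schedule.
@RunWith(Categories.class)
@IncludeCategory(SlowTests.class)
@SuiteClasses({ NightlyImportTest.class })
public class SlowTestSuite {}
```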

The general rule is to be inventive and find ways to run them as frequently as possible.

Finally, ask yourself this the next time you realise the need for another test but decide not to write it: why do I feel it's OK not to write this test?

Too hard: Perhaps it's time to find better ways to crack the problem.  An opportunity to be creative.

Too trivial: It should not take too long to write it then.

You're under pressure to deliver: Local optimization; delivering untested code ultimately slows down delivery.


Published at DZone with permission of Martin Harris, author and DZone MVB. (source)


Comments

Stefan Lecho replied on Wed, 2011/07/13 - 8:08am

Keep an eye on code coverage to be sure that the most important parts of your code are tested. It is better to have 20 tests that test 80% of your code than 80 tests that test 20% of your code.

Mladen Girazovski replied on Wed, 2011/07/13 - 8:37am in response to: Stefan Lecho

I disagree with that statement.

Having only 20 tests that test 80% of the system usually means that your unit tests are not testing isolated units (with mocks) but are also testing collaborators of the subject under test, meaning they "test too much". That makes them complex, hard to read, fragile and obfuscated, and gives them only poor defect localization.
In many cases these kinds of tests are really integration tests that cross system borders (filesystem, database, etc.).

While neither situation is ideal, the 80 isolated unit tests that test 20% of the system are usually the better starting point: just add more tests.
The 20 tests that test almost the whole system, on the other hand, would need a lot of refactoring to clean up.
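To make the distinction concrete, here is a rough sketch of an isolated unit test with a mocked collaborator, using Mockito; the OrderService and PaymentGateway types are invented for the example. Because the collaborator is a mock, the test touches no filesystem or database, and a failure points straight at the subject under test.

```java
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class OrderServiceTest {

    @Test
    public void chargesTheGatewayWhenAnOrderIsPlaced() {
        // The collaborator is mocked, so no real payment system is involved.
        PaymentGateway gateway = mock(PaymentGateway.class);  // hypothetical interface
        when(gateway.charge("cust-42", 99.95)).thenReturn(true);

        OrderService service = new OrderService(gateway);     // subject under test
        boolean placed = service.placeOrder("cust-42", 99.95);

        assertTrue(placed);
        verify(gateway).charge("cust-42", 99.95); // a failure here localises the defect
    }
}
```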

Stefan Lecho replied on Wed, 2011/07/13 - 9:26am

I would replace "80% of the code" with "80% of the most important parts of the application". The most important parts typically consist (without being exhaustive) of the services exposed to, for instance, the GUI layer, the actions you're using in the GUI layer, the web services exposed to other applications, etc.

My experience tells me that it is not very useful to test all the setters and getters of the domain model, or to test all the JPA entities. It is far more interesting to test the EJB/Web/Spring services that use the domain model and the related JPA entities, to be sure that the client that uses these services does not have to play the role of tester.

Josh Marotti replied on Wed, 2011/07/13 - 10:33am

Test interfaces/abstractions heavily, not so much concrete methods.  Otherwise refactoring your code can end up forcing a complete rewrite of the unit tests instead of leaving you with an already-built set of regression unit tests.

Loren Kratzke replied on Thu, 2011/07/14 - 9:11pm in response to: Josh Marotti

I strongly agree. In fact I would say test only interface methods. Test the function, not the implementation.

Mladen Girazovski replied on Fri, 2011/07/15 - 6:35am

I'm really wondering how anyone would "test" an interface, since there is nothing to test, at least with a unit test.

You cannot test interfaces with unit tests, only implementations.

Loren Kratzke replied on Fri, 2011/07/15 - 1:03pm in response to: Mladen Girazovski

Well, no shit Sherlock. I am saying to only test methods defined by interfaces and nothing else. If you test only methods defined by interfaces then it is likely that you can refactor without breaking tests. Test the function, not the implementation. Most test freaks don't get it. They just want to test everything. Then when you go change one line of code, BANG! Tests start breaking. But guess what, the application didn't.

If you can't refactor without breaking tests, then you are probably a test freak that would benefit from my advice - test only the methods defined by interfaces. If it is an important service or may have multiple implementations, then it needs an interface. If it doesn't need an interface, then it probably doesn't need a test either.

Mladen Girazovski replied on Mon, 2011/07/18 - 3:36am

> Well, no shit Sherlock. I am saying to only test methods defined by interfaces and nothing else. If you test only methods defined by interfaces then it is likely that you can refactor without breaking tests.


So far this tip sounds to me like "test less code, so you'll have fewer tests to worry about".

> Test the function, not the implementation. Most test freaks don't get it. They just want to test everything. Then when you go change one line of code, BANG! Tests start breaking. But guess what, the application didn't.

Well, maybe there is something that the "test freaks" know that you don't.

> If you can't refactor without breaking tests, then you are probably a test freak that would benefit from my advice - test only the methods defined by interfaces. If it is an important service or may have multiple implementations, then it needs an interface. If it doesn't need an interface, then it probably doesn't need a test either.

What's so different about the code that is not part of the interface? What makes it so "magic" that it doesn't need to be tested? I think the original problem is the dependencies between test code and production code; reducing the code that is tested will result in fewer tests and therefore fewer dependencies.

In my experience there are better ways to deal with the dependency problem, ways that do not imply that less code should be tested.

Let's take an example: with good code, if you change the constructor of a class that is tested, ideally only two other parts of the codebase need to change: the factory method in the production code that uses this constructor, and the factory method in the test code that uses it.

But if the constructor is called from 15 test methods, all 15 of those test methods need to be changed. It doesn't matter how you arrived at the tests, by the way, test driven or written after the production code: once there are many tests (hundreds or even thousands), the test code had better be structured so that it does not become the biggest problem when refactoring production code, otherwise people will disable or delete tests instead of fixing them.
The same goes for assertions: use custom assertions to avoid redundancy and make the intent of the test clearer. (A short sketch of both ideas follows.)
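A minimal sketch of both ideas, with invented names: the test-side creation method is the single place that calls the constructor, and the custom assertion removes duplication while stating the intent of the check.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InvoiceTest {

    // The only place in the test code that calls the constructor. If the
    // constructor changes, this method changes -- not fifteen test methods.
    private static Invoice newPaidInvoice(double amount) {   // Invoice is hypothetical
        return new Invoice("INV-1", "ACME Ltd", amount, /* paid = */ true);
    }

    // Custom assertion: avoids redundancy and makes the intent clear.
    private static void assertFullyPaid(Invoice invoice) {
        assertEquals(0.0, invoice.outstandingBalance(), 0.001);
    }

    @Test
    public void paidInvoiceHasNoOutstandingBalance() {
        assertFullyPaid(newPaidInvoice(250.0));
    }
}
```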

Yes, that's right, it's that simple: the same rules that apply to production code also apply to tests. Refactor out redundancy, keep your dependencies clean.

Here is a tip for you: go and read a book about test refactoring and structuring, you'll certainly benefit from it. It's called "xUnit Test Patterns: Refactoring Test Code", by Gerard Meszaros.

Loren Kratzke replied on Mon, 2011/07/18 - 2:28pm in response to: Mladen Girazovski

It's not just about testing less code, it's about testing the right code. For example, directly testing setters and getters is silly. They can be tested indirectly by simply setting up your object.

Next you have an interface, let us say for a large and important service. The implementation should produce predictable output for a given input. THAT is what you need to test. If the implementation uses utilities and scratch objects in the background, you should not care. You should not write tests for these utility classes. As long as your interface is working, you should not care how the implementation arrived at the result from a unit testing perspective. This enables a developer to come in later and optimize the code without breaking every single test under the sun. In the event that the interface breaks, then you have a genuine problem that needs attention.
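A hedged sketch of that idea, with invented names: the test talks only to the interface and checks output against input, so the implementation behind it can be rewritten or optimised without the test changing.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TaxCalculatorTest {

    // Hypothetical service interface; in a real codebase it would live in
    // production code, with one or more implementations behind it.
    interface TaxCalculator {
        double taxFor(double netAmount);
    }

    @Test
    public void standardRateIsTwentyPercent() {
        // Only the interface method is exercised: given input, expected output.
        // Whatever utilities or scratch objects the implementation uses
        // internally are free to change without breaking this test.
        TaxCalculator calculator = new StandardRateTaxCalculator(); // hypothetical impl
        assertEquals(20.0, calculator.taxFor(100.0), 0.001);
    }
}
```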

I have seen extremes where people actually cause an exception and then validate the text of the error message. When I changed the error message, the damn test broke. That was a very stupid test that proved nothing of value and caused me the extra work of DELETING that test.

Furthermore, it was a prime example of a unit test that passed even though the product was crap (useless error message). The energy spent writing the test would have been better spent writing a decent error message. But this type of waste is not limited to fruitless tests such as for an error message. I have also seen detailed tests that pass on code that is entirely wrong to begin with. Just because unit tests pass doesn't mean your implementation is correct. Unit tests prove nothing in this regard.

When you try to test everything, you are simply locking down code in a painfully expensive and worthless way, and you are effectively testing whether or not "code" has changed as opposed to functionality changing. If I want to know if code has been touched, I use svn. I don't need a broken unit test to tell me that code has been touched.

And if somebody touches code that breaks a pile of unit tests, it tells me nothing except that a pile of unit tests just broke because somebody touched code. It doesn't tell me that my application is broken because a unit test is slightly dumber than a rock when it comes to being aware of the larger picture.

So I am opposed to testing every method in an application for the sake of doing so. It creates technical debt and offers no real return on investment (in fact quite the opposite). Focus your testing on the complex/fragile areas, and the primary services (all of which should be behind interfaces).
