DevOps Zone is brought to you in partnership with:

Dror Helper is an experienced software developer who has written and designed software in various fields, including video streaming, eCommerce, performance optimization and unit testing tools. He is passionate about programming best practices and all things software development, and has been a guest presenter at several user group meetings and ALT.NET events. On his blog, Dror writes about unit testing, agile methodologies, development tools, programming languages and anything else he finds interesting. Dror is a DZone MVB and is not an employee of DZone; he has posted 58 posts at DZone.

Don’t fix invisible bugs


Last week I conducted yet another code review. While looking at the code I noticed many lines similar to if (x != null) and if (y < 0) all over the code. These lines were not "business related" but were put in the code to make sure that proper inputs were given to the methods. Usually this is a good practice, but further review showed that some of the code was checking for conditions that could not possibly happen. It meant that a lot of effort had been spent creating code that checks for things that would never happen.

I asked the developer why he had written that code, and he answered that it was put there to make sure that a certain scenario was handled properly. I asked him when such a condition would actually happen and, after some thought, he answered: never!

"In that case," I said, "please remove this code." At first he insisted that the code should stay because "you can never know" and because it was already written. I didn't even dignify the second remark with an answer; instead I told him to write a unit test showing that the code was important and could save us from disaster. After a few tries he was convinced that there was no way to make his methods fail and that this code was not needed.
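To make the exercise concrete, here is a hypothetical sketch of the situation (the class and method names are mine, not the code from the review): the only way to write a test that reaches the guard is to manufacture an input that no real caller can produce.

```java
class OrderFormatter {
    // Hypothetical: itemCount always comes from a collection's size(),
    // which can never be negative.
    static String describe(int itemCount) {
        if (itemCount < 0) {               // the "invisible bug" guard
            return "invalid order";
        }
        return itemCount + " item(s)";
    }

    public static void main(String[] args) {
        // A test that exercises the guard must invent an impossible
        // input -- a strong hint that the branch is dead code.
        System.out.println(describe(3));   // 3 item(s)
        System.out.println(describe(-1));  // invalid order (unreachable in production)
    }
}
```

If no test can drive the guard through the system's real call paths, the branch is untestable in practice, which is exactly the argument for deleting it.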

This is not a new thing: when I see a fellow developer overthinking some method or class, I suggest that he first write a test. I try to make sure that all of my teammates practice TDD (Test Driven Development); among its other benefits, writing the tests beforehand guarantees that every line written is needed. But even if you do not write your tests first and you find yourself asking whether you need to write "that code", try writing a test first to show that there is a problem that needs solving.

It seems that code coverage goes both ways: it's important to make sure that your code is well tested, but if a piece of code is never exercised, it may be because it can never run, and as such it can be removed.

Published at DZone with permission of Dror Helper, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Liam Knox replied on Sun, 2011/05/15 - 12:12am

I really don't get this article. There is nothing wrong with guards against unexpected conditions that should never happen, when these guards lead to better detection or handling, e.g. Assert.isTrue(x > 0). If the guard introduces more concerns/complexity then that is a different case, but clearly with an Assert or @NotNull you are not at that point.
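A minimal sketch of the kind of guard Liam means, in the spirit of Spring's Assert.isTrue but written in plain Java so the example stands alone (the Guard and Account classes are illustrative, not from any library):

```java
final class Guard {
    private Guard() {}

    // Fails fast with a descriptive message instead of letting a bad
    // value propagate deeper into the system.
    static void isTrue(boolean condition, String message) {
        if (!condition) {
            throw new IllegalArgumentException(message);
        }
    }
}

class Account {
    private int balance;

    void deposit(int amount) {
        Guard.isTrue(amount > 0, "deposit must be positive, got " + amount);
        balance += amount;
    }

    int balance() {
        return balance;
    }
}
```

The guard is one line at the call site and documents the precondition; the debate in this thread is about guards for conditions that, unlike a negative deposit, genuinely cannot occur.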

The word 'never' seems to be bandied about here with no real context. The fact that the variable is an int means that, yes, it can be negative, and yes, you should expect and handle this case.

As for the developer's second remark, he is absolutely correct in one area. If you have production code and you have question marks around its reason or need, you still need to take care and regression test fully. Maybe the check doesn't do anything at all, but it takes computing cycles and can potentially alter execution behaviour based on threading, etc.

For an effective example of how such naivety in test coverage can lead to total chaos, see

Tom Gardner replied on Sun, 2011/05/15 - 12:59am

Clearly the author has never experienced rare unexpected behaviour in production code, nor had the "pleasure" of fault-finding in large bodies of code.

Maybe he thinks that because his code passes his tests, the code is actually correct.

Such unpleasant behaviour can be caused by any or all of:

  • compiler faults; you don't actually think your compiler always produces correct code do you?
  • runtime faults in a virtual machine; go and look at the buglists for your favourite language
  • library faults; you don't actually think everybody else's code is correct, do you?
  • hardware faults, due to either
  • misuse of library; you don't actually think the library writer anticipated how you would want to (mis) use their code, do you?
  • education faults; you don't actually think there is a strong connection between computer arithmetic and the arithmetic you (ought to have) learned at school, do you? e.g. a+b+c == c+b+a
But I will presume that the code from you and your team is perfect and could never lead to any of those faults, or that none of those problems can silently re-occur when the compiler/runtime/library is upgraded...
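Tom's a+b+c example is easy to demonstrate: with doubles, addition is not associative, because each intermediate result is rounded (the numbers below are chosen to make the effect visible).

```java
public class NotAssociative {
    public static void main(String[] args) {
        double a = 1e16, b = -1e16, c = 1.0;
        // Grouping changes the answer: in a + (b + c), the 1.0 is lost
        // to rounding, because doubles near 1e16 are spaced 2 apart.
        System.out.println((a + b) + c); // 1.0
        System.out.println(a + (b + c)); // 0.0
    }
}
```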

Michał Minicki replied on Sun, 2011/05/15 - 3:02am

Code tends to evolve and change through time, so the guards may prove to be useful in some unpredictable way. But I do agree it's better to spend time on writing proper unit tests instead of the unnecessary guards.

Ricky Clarkson replied on Sun, 2011/05/15 - 7:13am

I agree with the sentiment of the article, though it's pretty badly written.

With the exception of null, you can generally use a type to restrict the values. E.g., if you would otherwise write lots of if (x > 0), write a PositiveInt type that has the check once and use that everywhere you need a positive int.
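A sketch of what such a type could look like (the name PositiveInt is Ricky's; the implementation details are my assumption): the check lives in exactly one place, and every method that accepts a PositiveInt gets the invariant for free.

```java
final class PositiveInt {
    private final int value;

    private PositiveInt(int value) {
        this.value = value;
    }

    // The single place where the "> 0" check lives.
    static PositiveInt of(int value) {
        if (value <= 0) {
            throw new IllegalArgumentException("expected a positive int, got " + value);
        }
        return new PositiveInt(value);
    }

    int value() {
        return value;
    }
}
```

Methods then declare PositiveInt parameters instead of raw ints, and the type system, rather than a scattering of runtime ifs, rules out the invalid values.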

if (x != null) checks in code make me wonder why x might be null, i.e., whether it's in response to an actual bug or just random defensive slop, and sometimes it's hard to tell. Just say no to nulls.

I can't believe we're still writing HTML in article comments, DZone. Fix that crap.

Balázs Bessenyei replied on Sun, 2011/05/15 - 7:50am in response to: Liam Knox

"Maybe the check doesn't do anything at all but it will take computing cycles potentially alter execution behaviour based on threading etc. etc."

If code needs such "workarounds" to get around threading problems, then it is fundamentally flawed and error prone. Any change in the production environment can render those "workarounds" ineffective.
In these cases the original cause needs to be located and fixed, and the "workarounds" removed.
If removal is not possible, then they need to be documented, both to avoid further removal attempts and to make it possible to recognize when they are no longer needed.

Converting them to Assert or @NotNull should be an adequate solution as well.

Liam Knox replied on Sun, 2011/05/15 - 8:28am in response to: Balázs Bessenyei

My point, as with this whole article, is that it sounds like it's coming from a hacker. The 'oh, just rip that out, it doesn't do anything, chuck it back out into prod' approach.

Even an apparently passive change needs to be treated with respect in testing. That the removed code may be masking other issues is the whole point. The last thing you want is for this seemingly passive change to bomb out the production system. Yes, if you remove it and detect other concerns, you fix those. But I have the impression that in the author's case this would not have happened.

Take that number format bug example. Clearly they didn't have adequate unit tests for this for the past 15 years. Whether that is even possible, given the scale of the test set, is moot. It purely proves how apparently simple functions are riddled with complexity. Something the author seems not to appreciate.

Ricky Clarkson replied on Sun, 2011/05/15 - 8:57am

Liam, code can still be worth ripping out, though you might like to make sure you do it a little at a time to minimise impact, and/or make sure you have a good release process, so that you get feedback from early adopters before the whole userbase gets the software. If you leave code in because you don't know what it does, even though it's problematic, you make routine maintenance harder and you're saving up problems for later.

darryl west replied on Sun, 2011/05/15 - 9:38am

Good article and great advice. Teaching inexperienced coders the value of TDD and "testing with a purpose" is an uphill effort (even harder with so-called experienced coders). Hopefully your young developer will use this lesson the next time he/she thinks it's necessary to bulk up code with unnecessary clutter.

Balázs Bessenyei replied on Sun, 2011/05/15 - 10:09am in response to: Liam Knox

That would be even more stupid: to put it into production without verification.

Even if the developer is unable to find the error, the automated and regression tests should. If not, then let the production system burn.

By this logic we should add checks like 1+1 == 2 and a few similar ones to every part of the code, just to be certain that the universe is working right.

Don't misunderstand me, I'm not a fan of blindly removing code that appears unnecessary, without proper reason and verification.

Liam Knox replied on Sun, 2011/05/15 - 5:49pm in response to: Ricky Clarkson

I don't disagree with pruning code. Code evolves; this is all natural. But I disagree with your article and your reasoning on guards. Guards have their use; there is no question about this. Yes, if you are dealing with unit tests and something like an int, you should handle both the positive and negative cases, and often an invalid negative condition is more clearly expressed by the Assert and the test than by 'assumption'.

Put another way, the code is more expressive with the Assert than without, and no doubt the exception will be too. Yes, don't scatter if conditions everywhere, but do use a proper assert mechanism.

I will defer to Bob Martin or the SpringSource code base for better explanation and reasoning on code, expressiveness and assert examples.

Liam Knox replied on Sun, 2011/05/15 - 5:58pm in response to: Balázs Bessenyei

No. You should test your system and the classes you create. You need to rely, with some confidence, on what you build on being tested as well. So no, we don't test 1+1=2; someone else must. Though if I have my own Money object, I must ensure it rounds correctly with respect to Currency semantics, etc.

On the Universe point: I don't think anyone knows it's working correctly or how exactly it works, and your simple add-1 example will often work differently from the expected 'Universe' behaviour in the computer science world, i.e. overflow, precision errors, etc.; int, double and BigDecimal will all behave differently. Nice start ;)
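Liam's point that int, double and BigDecimal all behave differently on even trivial arithmetic is easy to show:

```java
import java.math.BigDecimal;

public class ArithmeticPitfalls {
    public static void main(String[] args) {
        // int: silently wraps around on overflow
        int big = Integer.MAX_VALUE;
        System.out.println(big + 1); // -2147483648

        // double: 0.1 has no exact binary representation
        System.out.println(0.1 + 0.2 == 0.3); // false

        // BigDecimal: exact decimal arithmetic, at a performance cost
        BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(sum.compareTo(new BigDecimal("0.3")) == 0); // true
    }
}
```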

What I am saying is that no matter how trivial something appears, there is often massive complexity and there are design decisions to make.

(not verified) replied on Mon, 2011/05/16 - 2:26am


You certainly did that after making sure you have a production-ready system:

- there are no unmet requirements/change requests

- your team has solved all the registered bugs in the issue tracker

- integration and acceptance tests have failed to find bugs for months

Once these are met, it's natural to perfect your code, whatever perfect code means for your team. Design by contract is usually not the first thing to prune from the code base on your way to perfect code.

Tom Gardner replied on Mon, 2011/05/16 - 2:40am in response to: Balázs Bessenyei

Even if the developer unable to find error, the automated and regression test should. If not, then let the production system burn. With this logic we should add checks like 1+1 = 2 and a few similar to every part of the code, just to be certain that the universe working right.
Be careful and have some humility: there are a multitude of pitfalls and traps in computer arithmetic. These have been known for decades, and the common ones are summarised by the University of Cambridge here:
especially the "Presentation" link.

Speedread the presentation, and become aware of how many things you don't know :)

Dror Helper replied on Mon, 2011/05/16 - 5:07am

I'd like to thank everyone who commented on my post: all of you are right, not only those who agreed with me but those who disagreed as well. Let me explain. I did not mean to say that defensive programming is wrong, and I do think that developers should check the input of their functions for invalid arguments. Unfortunately, when I wrote this post I had a specific piece of code in mind that I could not post (for understandable reasons: it is production code). Instead I fell back on the common "check for null" ground, which upset so many of you. In the code I wrote about, more than 75% of the lines could never run! When I made the developer try to unit test it, he agreed that it was useless. I'm sorry that I failed to explain the case properly; I can only attribute the lack of clarity to the late hour at which I wrote this post. I appreciate your comments and I'm glad this misguided post has caused an interesting debate.

Alex(JAlexoid) ... replied on Mon, 2011/05/16 - 7:41am in response to: Dror Helper

There are 2 points here that make your whole post absolutely true in all cases and absolutely incorrect in all cases.

  1. Unnecessary code - unnecessary code is dead weight for any project. Each line of code you write, you are responsible for. And each line of code is a potential bug. Therefore, fewer lines of code result in fewer bugs.
  2. Code reuse - when a project grows to a particular size, code reuse is an everyday activity. Those just-in-case checks become the only thing that keeps systems stable, because when a project is big enough, you simply lose track of where exactly that code was intended to be used and where it ends up being used. Even in these cases, a developer might not be able to tell what it is guarding against.
  3. And as always, when there are only 2 points there is a third one - systems can have a hybrid thing going on, mixing the two points above. Have isolation between components: components that are allowed to be used by anyone fall under point #2; components that are created to perform specific tasks fall under point #1.

Nicolas Bousquet replied on Mon, 2011/05/16 - 10:35am

>> Don't fix invisible bugs.

Apply this mantra to yourself. Overchecking is not a bug. On the other hand, any change to production code will introduce new bugs; this is not a matter of how trivial the change is. In your example, you are going to remove some check you shouldn't have, and it will introduce bugs. It will cost the time to find and remove the checks, the time to verify you can safely remove them, the time to handle the bugs introduced in production, the cost of a potential downtime, the time to fall back to the previous version, the time to put some of the checks back, and the time to redeploy.

The whole refactoring thing is nice, but it fails the reality check. Old "ugly" code is in general of good quality, even if you really dislike it. Why? Because all the bugs have been removed over the years.

When you refactor, you should have a very good reason, because you restart the clock at zero. You'll need to wait a few more years for the modified code to become nearly bug free.

I agree with you that overchecking is bad. Why check for null, for example? I wonder if the guy fears the JVM would fail to throw a NullPointerException. If the JVM does it, why check again in code? If null is a normal value, check for it; otherwise, the JVM will do the job for you.

But I'm not for refactoring everything by default.

Alessandro Santini replied on Mon, 2011/05/16 - 10:53am in response to: Alex(JAlexoid) Panzin

"... you simply lose track of where exactly that code was intended to be used and where it ends up being used. Even in these cases, a developer might not be able to tell what it is guarding against."

I would rather rephrase this as

"you simply lose track of how exactly that code was intended to be used, irrespective of where".

In other words, correct component interface specifications should replace irrelevant checks.

My bottom line is that useless checks are normally quick'n'dirty fixes to interface specification flaws.

Balázs Bessenyei replied on Mon, 2011/05/16 - 2:21pm in response to: Tom Gardner

True, I should have specified them to be integers in decimal radix. Probably 1 == 1 would have been better.

Besides, you and Liam Knox have managed to miss my point completely.

P.S.: Thanks for the link, it will do some good in the future.

Liam Knox replied on Mon, 2011/05/16 - 6:58pm in response to: Dror Helper

I would like to thank you for the post because discussion on this point is very important. Personally I agree with guards, at least for public method preconditions.

Expression of these guards is now the most important advance we need to make. Annotations look like the likely candidate, i.e. @NotNull etc. They express intent clearly and can even be compile-checked to eliminate a number of misuses.

In the Java world I never really understood the assert keyword. Yes, what it does is clear, but to have something like this that is switchable at runtime is madness. You cannot do that. It's like having optional brakes in a car. This is why you see Spring etc. with their own Assert methods.

Basically, the assert keyword's implementation was a regression in language design. I think annotations can now bring back the idea that precondition checks are good.
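A small illustration of the complaint: Java assert statements are only checked when the JVM runs with -ea, so a precondition written that way silently vanishes in a default runtime, while an explicit guard is always enforced (the halving methods below are made up for the example).

```java
public class Preconditions {
    // Disabled by default: a plain "java Preconditions" run skips this check.
    static int half(int n) {
        assert n % 2 == 0 : "n must be even, got " + n;
        return n / 2;
    }

    // Always enforced, regardless of JVM flags.
    static int halfChecked(int n) {
        if (n % 2 != 0) {
            throw new IllegalArgumentException("n must be even, got " + n);
        }
        return n / 2;
    }

    public static void main(String[] args) {
        System.out.println(halfChecked(8)); // 4
        // Without -ea, half(3) quietly returns 1 instead of failing.
    }
}
```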
