
Kirk is a software developer who has filled most roles on a software development team. He is the author of Java Design: Objects, UML, and Process (Addison-Wesley, 2002) and a contributor to the No Fluff Just Stuff 2006 Anthology (Pragmatic Bookshelf, 2006). His most recent book, Java Application Architecture: Modularity Patterns with Examples Using OSGi, was published in 2012. Kirk is a DZone Zone Leader.

Test Infecting Your Team

06.12.2009

Recently, a simple question was posed to the Yahoo Test Driven Development (TDD) group.

I've seen situations where the person on a team you would single out to infect produces a high level of code quality without unit testing. In this case, you have to show the leader that their team could produce more value if they were to adopt unit testing, even when the difference it makes for them is negligible.

How would you do this?

Initially, it's tempting to argue that TDD isn't necessary. If a developer is already producing high-quality code that does what it's supposed to, it's easy to justify skipping TDD. There's a cost to TDD, and many of us have heard the argument that it slows a developer down.

Thankfully, Joshua Kerievsky chimed in with sensible words of wisdom that shed light on the importance of TDD...even in situations where developers are able to produce high-quality code without it.

A "high level of quality code" is great, yet most software lives on and on and people expect to add/modify features in that software or debug it. How much will the maintenance costs be without unit tests? How much more risk does that add?

Most folks just look at today, not tomorrow. It's important to consider the full cost of not writing unit tests.

Of course, the point here is that TDD provides benefits beyond testing. While TDD is just one way to prove that our code works today, TDD also provides the courage to refactor the code tomorrow. It gives developers important feedback on the system-wide impact of their changes, and it serves as a form of executable documentation for the developers responsible for maintaining the code going forward. While there is a short-term cost to investing in TDD, the real benefit is long-term and well worth the investment.
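
To make "executable documentation" concrete, here is a minimal, hypothetical JUnit sketch. The Invoice class and its API are invented for illustration; the point is that the test states a business rule any future maintainer can read and run.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InvoiceTest {

    // Invented example class: a tiny invoice with a flat tax rate.
    static class Invoice {
        private final double taxRate;
        private double subtotal;
        Invoice(double taxRate) { this.taxRate = taxRate; }
        void addLineItem(String name, double price) { subtotal += price; }
        double total() { return subtotal * (1 + taxRate); }
    }

    @Test
    public void totalIncludesLineItemsPlusTax() {
        Invoice invoice = new Invoice(0.10); // 10% tax rate
        invoice.addLineItem("widget", 100.00);
        invoice.addLineItem("gadget", 50.00);

        // (100 + 50) * 1.10 = 165.00: the business rule reads directly
        // from the test, and the test guards any later refactoring.
        assertEquals(165.00, invoice.total(), 0.001);
    }
}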

For reference, here's the Yahoo thread.

Published at DZone with permission of its author, Kirk Knoernschild.

Comments

Fabrizio Giudici replied on Fri, 2009/06/12 - 11:08am

TDD also provides the courage to refactor the code tomorrow

Exactly, and "courage" is the magic word. If you don't have tests for good code, you'll lock yourself into the habit of "don't touch it, since it works". As time goes on, there will be cases where the code works a bit less: some minor bugs appear, and some RFEs come in. Since you don't want to touch that code, you'll put fixes/enhancements/workarounds in other parts of the code, slightly but constantly degrading the quality of your design. You won't even upgrade a dependency, since you can't easily run regression tests against it. In a shorter time than you expect, that well-designed and well-implemented project will turn into a nightmare.

Walter Bogaardt replied on Fri, 2009/06/12 - 1:42pm

Yet even today TDD is an afterthought in the web realm. Try functional testing of a JSF webapp. Adobe Flex, by contrast, actually implemented functional testing of Flex components, which is refreshing.

Mohamed El-beltagy replied on Fri, 2009/06/12 - 3:57pm

Excuse me folks, and excuse my limited knowledge of TDD, but as far as I know, TDD is not only for code quality and functionality tests.

It's also about business tests: verifying that the code covers and executes the business required of it and gives the expected results. It's not only about code quality and bug-free software.

At least, that's what I once read about TDD. And if that's right, then even the best code quality will still not cover the entire purpose of TDD.

Thanks all.

Jess Holle replied on Sat, 2009/06/13 - 8:53am

It seems to me that tests at the wrong level decrease the courage to refactor.

I see a lot of tests written at the lowest possible level of the code -- down to individual inner classes, even.

Such tests discourage refactoring, because they all need to be rewritten: they test details of the current implementation rather than the behavior of a software component or a published/stable API.

Testing at the component or stable-API level is great -- it facilitates refactoring. Testing below this level is bad -- it misses the critical overall component behavior and locks in implementation detail.

A lot of software does not draw a clear distinction between components/stable APIs and implementation detail, which then leads to bad tests. The drive to unit test everything makes this worse: developers make things public and add lots of injection APIs to make it easier to unit test implementation internals that shouldn't be separately tested anyway, removing important distinctions between internals and externally visible APIs.
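
For example, a hypothetical JUnit sketch (all names invented here): the test exercises only the component's published API, so parsers, helpers, and inner classes behind it can be refactored freely without rewriting the test.

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class OrderServiceTest {

    // Invented stand-in for a real inventory backend.
    static class InMemoryInventory {
        final Map<String, Integer> stock = new HashMap<String, Integer>();
        InMemoryInventory(String sku, int count) { stock.put(sku, count); }
    }

    // Invented component; only placeOrder/remainingStock are "published".
    static class OrderService {
        private final InMemoryInventory inventory;
        OrderService(InMemoryInventory inventory) { this.inventory = inventory; }
        void placeOrder(String sku, int quantity) {
            inventory.stock.put(sku, inventory.stock.get(sku) - quantity);
        }
        int remainingStock(String sku) { return inventory.stock.get(sku); }
    }

    @Test
    public void placingAnOrderReservesInventory() {
        OrderService service = new OrderService(new InMemoryInventory("sku-42", 5));

        service.placeOrder("sku-42", 2);

        // Assert observable behavior at the component boundary, not
        // which internal collaborators were touched or how.
        assertEquals(3, service.remainingStock("sku-42"));
    }
}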

Jose Maria Arranz replied on Sun, 2009/06/14 - 11:42am in response to: Jess Holle

I couldn't agree with you more, Jess!

The uncomfortable truth is that low-level unit testing is useless: testing isolated classes is a waste of time. The hell is at the integration level -- the "overall component behavior", as Jess says.

In complex software, many components cannot be isolated because they were designed to fit with others; smart functional tests focused on these "service components" are enough. Only in very simple cases can you easily isolate a component using some kind of mock components.

Of course, I'm not talking about low-level APIs that are fully isolated from the rest (for instance, a HashMap).

Mladen Girazovski replied on Mon, 2009/06/15 - 2:38am

In complex software, many components cannot be isolated because they were designed to fit with others; smart functional tests focused on these "service components" are enough. Only in very simple cases can you easily isolate a component using some kind of mock components.

I think the problem is not new; Gerard Meszaros calls it "Overspecified Software". The extensive use of (very specific) mocks can lead to it, which in turn leads to high maintenance costs.

The reason, IMHO, is that with mocks we are no longer doing black-box testing but white-box testing, meaning that we are testing/expecting the internals of the subject under test to behave in a certain way, rather than just checking the outcome.

IMHO you should try to keep your mocks/fakes as ignorant as possible, so they won't be the fragile parts of your tests that break as soon as some internals are changed.
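
Here is a hypothetical JUnit sketch of such an "ignorant" fake (all names invented): it records what was saved but sets no expectations about call order or counts, so the test asserts the outcome rather than the internals.

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class RegistrationTest {

    // Invented collaborator interface.
    interface UserStore {
        void save(String name, String email);
    }

    // Invented subject under test: registration delegates storage.
    static class RegistrationService {
        private final UserStore store;
        RegistrationService(UserStore store) { this.store = store; }
        void register(String name, String email) { store.save(name, email); }
    }

    // The fake is deliberately "ignorant": it remembers what was saved,
    // but has no opinion about call order or call counts.
    static class FakeUserStore implements UserStore {
        final Map<String, String> saved = new HashMap<String, String>();
        public void save(String name, String email) { saved.put(name, email); }
    }

    @Test
    public void registrationStoresTheUsersEmail() {
        FakeUserStore store = new FakeUserStore();
        new RegistrationService(store).register("kirk", "kirk@example.com");

        // Assert on the outcome only -- not on which methods were
        // invoked, in what order, or how many times.
        assertEquals("kirk@example.com", store.saved.get("kirk"));
    }
}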

Mike Bria replied on Fri, 2009/07/10 - 8:12am

Fabrizio, well put. It really is all about increasing the confidence to move quickly and freely, over the long haul.

Regarding "over-specification". This is why one of my top rules of thumb is to "always test objects, never methods". It's another way of thinking about keeping your tests as "behavior-centric" as possible, and keeping them out of the nits of the implementation details.

When you're using fakes, then yes, as mgira says, keep them simple/stupid. More specifically, if you find that your test is doing too much work managing mocks, you probably have a design problem with the object it's testing -- e.g., the object is doing too much, not doing enough, maybe suffering "Feature Envy", etc.
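
For instance, a minimal hypothetical sketch of "test objects, never methods" (the ShoppingCart is invented for illustration): one scenario drives the object's behavior instead of pinning down each method in isolation.

import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.Test;

public class ShoppingCartTest {

    // Invented example object.
    static class ShoppingCart {
        private final Map<String, Double> items = new HashMap<String, Double>();
        void add(String name, double price) { items.put(name, price); }
        void remove(String name) { items.remove(name); }
        int itemCount() { return items.size(); }
        double total() {
            double sum = 0;
            for (double price : items.values()) sum += price;
            return sum;
        }
    }

    @Test
    public void cartTracksItsItemsAndTotal() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 20.00);
        cart.add("pen", 2.50);
        cart.remove("pen");

        // One scenario exercises add/remove/itemCount/total together:
        // the test reads as the object's behavior, not as a checklist
        // of individual method signatures.
        assertEquals(1, cart.itemCount());
        assertEquals(20.00, cart.total(), 0.001);
    }
}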

Anyway, regarding the original question of the post: not too long ago I wrote up my quick hit-list of the reasons a team needs TDD here; feel free to check it out and comment.

Cheers,
MB
