

Test Driven Development without Tears

03.06.2012

Every company I've worked for has had its own method of testing, and I've gained a lot of experience in what works and what doesn't. At last, the stack of conflicting confidentiality agreements that I collected as a co-op student has expired, so I can talk about it. (I never signed them anyway.)

Warning: My recollection of events may be different from what actually occurred. Do you remember what you were doing 10 years ago?

Don't make testing a pain in the ass.

Two places I worked at had test systems that were painful to use: Microsoft and Soma.

Microsoft's was the worse of the two. The test system on my particular team in 2000 was so bad that developers couldn't even use it. Only dedicated Software Test Engineers had access to it and could run the full suite. Setting it up on a new PC could not be done in a day. At the end of the summer I finally managed to get a copy running, and it found lots of bugs in my module, but by then I had run out of time to fix them. Yes, this was all my fault, because I should have been running the tests earlier. I didn't run them because they were so darn hard to get hold of and set up.

Testing must be done continuously and during development, by developers.

Soma had the right idea. They were working on a software phone that ran on Linux. Before checking in a change, we were required to write a test for it and run the full regression suite without any failures.

The software and tests were all written in Java, so to create a test, you'd write a Java object with a procedure that called the high-level APIs to start a phone call and then exercised the feature you were working on.

Some features needed a complex set of steps to invoke. If you wanted to test the user hitting a #-code to add a participant during a five-way conference call, you first had to set up five fake calls using the Java API. There were functions for dialing a number and so on. Whenever an API changed, all the test objects had to be updated.

The test system was downright painful to use. Early versions ran in real time, with real timeouts. To test that the dial tone only played for 30 seconds before the off-hook beeps started, you had to sit there and wait while the system did nothing until the timer expired. There was a sped-up mode where timers would expire immediately, but even in that mode, the Java system was so bogged down that it would take several hours to run everything. While I was there, they were in the process of building a cluster of 100 PCs just to run all the tests.
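
For what it's worth, the cleanest way I know to build that kind of speed mode is to never let the code under test touch real time at all: inject a clock, and make the test clock return instantly. Here's a minimal sketch in Java; the Clock interface and the class names are my own invention, not Soma's actual design.

import java.util.concurrent.TimeUnit;

// Sketch of an injectable clock. Production code asks the Clock to wait;
// the test implementation just records the time instead of sleeping, so a
// 30-second dial-tone timeout costs nothing during a test run.
interface Clock {
    void sleep(long millis) throws InterruptedException;
}

class RealClock implements Clock {
    public void sleep(long millis) throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(millis); // really wait, in production
    }
}

class TestClock implements Clock {
    long elapsedMillis = 0;

    public void sleep(long millis) {
        elapsedMillis += millis; // advance virtual time; never actually wait
    }
}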

The problem was that all of the tests were system level. Programming in Java eventually leads to a system that I call object-soup: more abstraction seems better, but it means more classes, and more classes mean more references to each other. The system grew so complex that there was no way to reset the state to the middle of a five-way call without actually performing all of the call setup. Each one had to be set up for every test. Unit testing of an individual class was meaningless, and it was impossible to separate a feature from the rest of the system.

You can't test GUIs

In 1999, Corel's flagship product was its DRAW! vector graphics suite. As I remember it, testing was completely manual. Of course, developers were expected to test their changes as much as possible, but CorelDRAW is a huge program with thousands of features and modes of operation. After we'd done some cursory testing, we'd check a change into Visual SourceSafe and wait for bugs to come in from beta testers. When a bug was submitted, the test specialist for my team, Mona, would reproduce it and create an issue report.

The system was clearly broken. We had about 100,000 open bugs, and even some of the more serious ones (copying text with certain bullet styles caused a crash) had been unfixed for several versions.

The problem was the lack of any regression testing. We relied too much on developers manually going through all the code paths, and on beta testers to submit reports. The engines were physically separated into libraries, but they were tightly coupled to the UI, so automated testing was impossible. If you broke something in a little-used feature, it might be years until someone noticed.

Regression testing is only feasible if the process is automated, but to this day, I still can't think of a good way to automatically test GUIs. If your program has a graphical user interface, you are stuck with laborious manual testing. The best thing you can do is to separate your program into two parts: a user interface part, and an engine part that does the real work. The user interface part will have to be tested by moving the mouse and going through all the options. The engine part has to have a well defined API, with inputs and outputs that you can test automatically.
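
To make that concrete, here's a minimal sketch in Java of what the split can look like. The names (SpellChecker and so on) are hypothetical, just to show the shape: all the logic lives behind a plain API, and the UI layer is a shell too thin to need automated tests.

import java.util.List;

// The engine: a well-defined API with plain inputs and outputs,
// so a test can drive it without any windows or mouse events.
interface SpellChecker {
    List<String> findMisspelledWords(String text);
}

// The UI: a thin shell that only forwards events to the engine.
class SpellCheckButton {
    private final SpellChecker checker;

    SpellCheckButton(SpellChecker checker) {
        this.checker = checker;
    }

    void onClick(String editorText) {
        // Nothing here worth testing automatically; all the interesting
        // behavior is in the engine behind the API.
        List<String> errors = checker.findMisspelledWords(editorText);
        // ...highlight the errors in the editor...
    }
}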

Note: Most people stop reading at this point.


Three rules for Test Driven Development

In a modern software company, developers should be running some kind of regression tests with every change they make. If you make it painful, then your developers will be unproductive. If you make it painful and mandatory, then your developers will be unproductive and unhappy. For effective test driven development, you need three things:

  • The test system should support the developer, not the other way around.
  • A developer must be able to create a new test in under 10 minutes.
  • The test suite should be capable of running hundreds of tests in under 5 minutes.

If the test system is painful to use and painful to create tests for, then developers will not create tests. There has to be some payoff for the effort of creating a test, in terms of finishing and going home early. Otherwise, the tests will be put off until later, or they won't get done, or you will have to have a separate team whose only job is to create tests, and then you are no longer doing test driven development.

The test system must support the easy creation of tests. You should be able to take a bug report or a log from the field, run it through a tool, and out pops a test that is ready to run and reproduces the issue. If the tests are written in Java, that is quite hard to do. Ideally, instead of writing code, you will have some other kind of input, like a list of events that occurred since system startup. You can run the events through your system to exactly reproduce its state. These tests take no development effort to create, and the best part is that they don't depend on function names and classes. You could rewrite your system from scratch, and as long as it takes the same input, your tests will still work.
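
As a rough sketch of what I mean, here's a hypothetical event-replay test in Java. The event format and the PhoneSystem stub are made up for illustration; the point is that the test input is data, not code.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReplayTest {

    // Stand-in for the real system under test.
    static class PhoneSystem {
        private final StringBuilder state = new StringBuilder();

        void handleEvent(String event) {
            state.append(event).append('\n'); // the real system would update call state here
        }

        String dumpState() {
            return state.toString();
        }
    }

    public static void main(String[] args) throws IOException {
        PhoneSystem system = new PhoneSystem();
        try (BufferedReader log = new BufferedReader(new FileReader(args[0]))) {
            String line;
            while ((line = log.readLine()) != null) {
                system.handleEvent(line); // one event per line, e.g. "OFFHOOK 1"
            }
        }
        // After replay, the system is in exactly the state from the field,
        // no matter how its classes have been renamed or refactored.
        System.out.println(system.dumpState());
    }
}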

Finally, the test suite should be fast. You're going to want your automated build system to run the tests after every few changes. If you think about it, a developer might make, on average, one or two changes a day. If you have 100 developers, you will have 100 to 200 changes a day. Developers will need to be able to run the tests at their desks, too. If passing the regression tests is mandatory before committing a change, then the suite should take only a few minutes to run, so you don't have developers leaving for a three-hour lunch, checking their stocks, or having swordfights.

 

File importer example

Let's say we are responsible for writing file importers for a word processor, and we are fixing a bug in the importer for the Microsoft DOC format. The input is a .DOC file, and the output is a series of function calls that modify a document. So when we read some text, we call document.addText(), and when we get a new font, we call document.setFont(), etc.

But instead of maintaining a real document and displaying it on the screen, we have a generic test document. When we call document.addText(), we just record that fact to a text file. So after our importer runs, we might have something like this:

called addText("This document is copyright")
called setFont("Symbol", 10)
called addText("c")
called setFont("Times New Roman", 10 )
called addText("1995")

Suppose we had hundreds of .doc files, taken from actual bug reports by actual users. We put them into a folder called "input" and check them into our source control system. We run them through the test document. This only takes a few seconds, because all it's doing is creating text files. We then take these output .txt files and check them into our source control system.

If, one year later, I'm making a change and the output of the test suite changes in any way, then it is either a bug, an improvement, or irrelevant. I'll revise the change to fix the problem, or update the checked-in text files with the new results.
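
The comparison step can be as simple as a diff against the checked-in files. Here's a hedged sketch of such a runner in Java; the directory layout and the runImporter() hook are assumptions for illustration.

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RegressionRunner {
    public static void main(String[] args) throws IOException {
        int failures = 0;
        try (DirectoryStream<Path> docs =
                 Files.newDirectoryStream(Paths.get("input"), "*.doc")) {
            for (Path doc : docs) {
                String actual = runImporter(doc); // the recorded call log
                Path expected = Paths.get("expected",
                        doc.getFileName().toString().replace(".doc", ".txt"));
                if (!actual.equals(Files.readString(expected))) {
                    // Either a bug, an improvement, or irrelevant -- a human decides.
                    System.out.println("FAIL: " + doc);
                    failures++;
                }
            }
        }
        System.exit(failures == 0 ? 0 : 1);
    }

    static String runImporter(Path doc) {
        // Placeholder: the real version would run the importer against a
        // RecordingDocument and return the captured output.
        return "";
    }
}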

We now have a regression test system that is easy to use and quick to run. The tests are not Java files. They take absolutely no effort to write -- we just save an attachment. In fact, it's easier to fix an issue after creating a test for it, because you can run it over and over again. Developers will naturally create tests as part of doing their job, without even being asked.

Now we can re-write stuff without fear. We can re-write the file importers from scratch, and make sure they work on all the same documents in exactly the same way. We can also re-write the rest of the system, as long as the interface to addText() and setFont() still works the same way.

Sure, there are some bad parts. If you change document.setFont() so that it needs a font encoding parameter, you will need to update all the test scripts. But these changes aren't difficult to manage, and the benefits far outweigh the inconvenience.

In Conclusion

If you are setting up a regression test system, it should be effortless to create a new test, and it should be able to run hundreds of tests in five minutes. Most importantly, the test system should make it easier to fix bugs, so developers will naturally want to create new tests.

Source:  http://stevehanov.ca/blog/index.php?id=66

Published at DZone with permission of Steve Hanov, author and DZone MVB.



Comments

Michael Remijan replied on Tue, 2012/03/06 - 10:50am

Maven has been the cause of a lot of "tears" when it comes to testing. Maven's build lifecycle includes both unit tests and integration tests, with the added headache that there is no clear separation between code for unit tests and code for integration tests. This results in unit tests doing a lot of things they are not supposed to be doing, and in developer machines needing (sometimes complex) environment setup in order to get integration testing to run. If Maven did the following, it would help out tremendously: one, remove integration testing from the default build lifecycle; two, provide a completely separate directory structure for unit tests. If this were done, then (in theory) developers would be able to create fast unit tests with no external dependencies, and the organization could have two continuous integration environments: one running unit tests all the time, and another configured with the environment the integration tests need, which might only run on a nightly basis. In general, tests are OK, but if ANY work is required to get them running (setting a system environment variable, creating a database, making sure a server is available, etc.), then every developer will click the checkbox to ignore the tests, which defeats the purpose of having them.

Ken Dombeck replied on Tue, 2012/03/06 - 2:18pm

Michael,

We had a similar situation where we wanted to run the fast unit tests on our local machines prior to committing code and let the CI server run all of the unit and slow integration tests.

TestNG allowed us to accomplish this by using groups; I am sure JUnit has something similar. By adding a group to the integration tests, we were able to have the Maven Surefire plugin exclude those tests from our local machine builds.

import org.testng.annotations.Test;

// A plain unit test: fast, no external dependencies, runs everywhere.
@Test
public class MyUnitTest {
...
}

// An integration test, tagged with a group so local builds can skip it.
@Test(groups = "integration")
public class MyIntegrationTest {
...
}

Then in the POM, exclude that group from the default build:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <excludedGroups>integration</excludedGroups>
  </configuration>
</plugin>

You can then use a Maven profile to change the value of the excludedGroups for your CI server.
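
For example, a profile along these lines (an untested sketch) would clear the exclusion on the CI server, so running "mvn test -Pci" executes the integration group as well:

<profiles>
  <profile>
    <id>ci</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <!-- override the default: exclude nothing on CI -->
            <excludedGroups></excludedGroups>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>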

 

Curtis Yanko replied on Fri, 2013/01/11 - 8:29pm

I second 'groups'. Alternatively, you can put the integration tests in a separate project, or run them in Maven's integration-test phase.
