
Mr. Lott has been involved in over 70 software development projects in a career that spans 30 years. He has worked as an internet strategist, software architect, project leader, DBA, and programmer. Since 1993 he has focused on data warehousing and the associated e-business architectures that make the right data available to the right people to support their business decision-making.

Unit Test Case, Subject Matter Experts and Requirements

02.09.2011
Here's a typical "I don't like TDD" question: the topic is "Does TDD really work for complex projects?"


Part of the question focused on the difficulty of preparing test cases that cover the requirements. In particular, there was some hand-wringing over conflicting and contradictory requirements.


Here's what's worked for me.

Preparation

The users provide the test cases as a spreadsheet showing the business rules. The columns are attributes of some business document or case. The rows are specific test cases. Users can (and often will) do this at the drop of a hat. Often, the complex narrative requirements written by business analysts are based on just such a spreadsheet.


This is remarkably easy for most users to produce. It's just a spreadsheet (or multiple spreadsheets) with concrete examples. It's often easier for users to make concrete examples than it is for them to write more general business rules.
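As a hedged illustration (the column names and values here are invented for this example, not taken from the article), one such spreadsheet might look like this, with one row per concrete case and the expected result as the last column:

case_id   customer_type   order_total   discount_code   expected_discount
T001      retail          100.00        NONE            0.00
T002      retail          100.00        SPRING10        10.00
T003      wholesale       100.00        NONE            5.00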


Automated Test Case Construction


Here's what can easily happen next.


Write a Python script to parse the spreadsheet and extract the cases. There will be some ad hoc rules, inconsistent test cases, and small technical problems. The spreadsheets will be formatted poorly or inconsistently.
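A minimal sketch of such a parser, assuming the xlrd library for reading .xls workbooks and a first row of column titles (both are assumptions for illustration; the article doesn't prescribe a library). It yields one dict per row, keyed by the column titles, matching the testCaseParser used in the outline below:

import xlrd  # assumed library for reading .xls workbooks

def testCaseParser(filename):
    """Yield one dict per spreadsheet row, keyed by the column titles in row 0."""
    book = xlrd.open_workbook(filename)
    sheet = book.sheet_by_index(0)
    titles = [str(cell.value).strip() for cell in sheet.row(0)]
    for row_num in range(1, sheet.nrows):
        values = [cell.value for cell in sheet.row(row_num)]
        yield dict(zip(titles, values))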


Once the cases are parsed, it's easy to create a unittest.TestCase template of some kind. Use Jinja2 or even Python's string.Template class to rough out the template for the test case. The specifics get filled into the unit test template.
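A hedged sketch of what such a template could look like with Jinja2; the field names (case_id, customer_type, and so on) and the module under test are hypothetical, carried over from the invented spreadsheet above. The outline below wraps the same idea behind a SomeTemplate() constructor.

from jinja2 import Template  # string.Template is the lighter-weight alternative

# Each rendered file becomes one small unittest module.
case_template = Template('''\
import unittest
from discounts import compute_discount  # hypothetical module under test

class Test_{{ case_id }}(unittest.TestCase):
    def test_discount(self):
        result = compute_discount("{{ customer_type }}",
                                  {{ order_total }}, "{{ discount_code }}")
        self.assertAlmostEqual({{ expected_discount }}, result)
''')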


The outline of test case construction is something like this. Details vary with target language, test case design, and overall test case packaging approach.

# Render one unit-test module per test case taken from the spreadsheet.
t = SomeTemplate()
for case_dict in testCaseParser("some.xls"):
    code = t.render(**case_dict)
    with open(testcaseName(**case_dict), 'w') as result:
        result.write(code)


You now have a fully-populated tree of unit test classes, modules and packages built from the end-user source documents.

You have your tests. You can start doing TDD.

Scenarios

One of the earliest problems you'll have is test case spreadsheets that are broken. Wrong column titles, wrong formatting, something wrong. Go meet with the user or expert who built the spreadsheet and get the thing straightened out.

Perhaps there's some business subtlety to this. Or perhaps they're just careless. What's important is that the spreadsheets have to be parsed by simple scripts to create simple unit tests. If you can't arrive at a workable solution, you have Big Issues, and it's better to resolve them now than to press on to implementation with a user or SME who's uncooperative.
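One cheap way to surface these problems before the meeting is to validate the column titles up front and report exactly what's wrong. A minimal sketch, using the hypothetical column names from the earlier example:

import xlrd  # assumed library, as in the parser sketch above

EXPECTED_COLUMNS = {"case_id", "customer_type", "order_total",
                    "discount_code", "expected_discount"}

def check_spreadsheet(filename):
    """Report missing or unexpected columns so the SME can fix the sheet."""
    sheet = xlrd.open_workbook(filename).sheet_by_index(0)
    titles = {str(cell.value).strip() for cell in sheet.row(0)}
    missing = EXPECTED_COLUMNS - titles
    extra = titles - EXPECTED_COLUMNS
    if missing or extra:
        raise ValueError("{0}: missing columns {1}, unexpected columns {2}".format(
            filename, sorted(missing), sorted(extra)))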

Another problem you'll have is that tests will be inconsistent. This will be confusing at first because you've got code that passes one test and fails another, and you can't tell what the difference between the tests is. You have to go meet with the users or SMEs and resolve the issue. Why are the tests inconsistent? Often, attributes are missing from the spreadsheet -- attributes each user assumed, but that you didn't have explicitly written down anywhere. Other times there's confusion that needs to be resolved before any programming should begin.

The Big Payoff

When the tests all pass, you're ready for performance and final acceptance testing. Here's where TDD (and having the users own the test cases) pays out well.

Let's say we're running the final acceptance test cases and the users balk at some result. "Can't be right," they say.

What do we do?

Actually, almost nothing. Get the correct answer into a spreadsheet somewhere. The test cases were incomplete. This always happens. Outside TDD, it's called a "requirements problem" or "scope creep" or something else. Inside TDD, it's called "test coverage," and some more test cases are required. Either way, test cases are always incomplete.

It may be that they're actually changing an earlier test case. Users get examples wrong, too. Either way (omission or error) we're just fixing the spreadsheets, regenerating the test cases, and starting up the TDD process with the revised suite of test cases.

Bug Fixing

Interestingly, a bug fix after production roll-out is no different from an acceptance test problem. Indeed it's no different from anything that's happened so far.

A user spots a bug. They report it. We ask for the concrete example that exemplifies the correct answer.

We regenerate the test cases from the spreadsheets and start doing development. 80% of the time, the new example is actually a change to an existing example. And since the users built the example spreadsheets with the test data, they can maintain those spreadsheets to clarify the bugs. 20% of the time it's a new requirement. Either way, the test cases are as complete and consistent as the users are capable of producing.
Published at DZone with permission of Steven Lott, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Comments

Martin Groenhof replied on Wed, 2011/02/09 - 10:50am

TDD is a religion not a methodology

Tero Kadenius replied on Wed, 2011/02/09 - 1:39pm in response to: Martin Groenhof

TDD is not a solution to every single problem. It's not a silver bullet, that's for sure. There isn't such a thing! However, what I've found is that being test-driven is just the best general approach to software development problems. Definitely better than anything else I've come across so far.

Loren Kratzke replied on Wed, 2011/02/09 - 2:54pm

I agree that TDD is a religion. I think that tripling the amount of work during initial dev is not worth the payout of theoretically attaining better testability. Tests do not guarantee (or even imply) good code.

I find good interface design and proper service stratification to be far more valuable than TDD when it comes to writing great code that does great things for great people. Unit tests are good for locking code down. TDD is good for tying your hands and feet when you need agility the most.

Jilles Van Gurp replied on Wed, 2011/02/09 - 6:48pm

There are two things to consider here:

1) Most software you depend on in your daily life (IDE, OS, embedded software powering your elevator, car, private jet, etc.) was written in a way that can in no way be sold to anyone as TDD with a straight face. Whether that's good or bad I'll leave to the reader. Most of it is actually pretty damn good and in fact, you are unlikely to depend on much software that was written properly TDD style. It's like UML, MDA, or any of the other fads of the past: you won't find any of it near any software you actually care about.

2) If it is that well specified that you can easily write tests for it, are you really dealing with the type of software project that plagues the rest of the industry where the requirements are a big uncertainty, the customer changes his/her opinion on an hourly basis, and the engineers lack both the domain knowledge and the technical skills to address them properly? Most likely not. I'd argue TDD is predominantly found in projects that any skilled engineer can do on automatic pilot.

Depending on your answers to 1 & 2 you may be delusional or simply wrong. I happen to think writing tests when you can is a good thing, and TDD is a great way to quickly (in)validate requirements up to a certain scope. If you get your requirements in manageable chunks it actually works fairly well. However, if you need to prototype a very vague concept in a hurry, it's a colossal waste of time that just causes you to spend more time on what was probably a misinterpretation of the requirements to begin with. The reality is that as soon as you have tests, they resist challenges to the very assumptions they represent, exactly when those assumptions still need to be challenged. This fundamental contradiction is what makes TDD sound like a great idea, just like waterfall sounds like a great idea until you experience it first hand.
