
Mark is a graph advocate and field engineer for Neo Technology, the company behind the Neo4j graph database. As a field engineer, Mark helps customers embrace graph data and Neo4j, building sophisticated solutions to challenging data problems. When he's not with customers, Mark is a developer on Neo4j and writes about his experiences as a graphista on a popular blog at http://markhneedham.com/blog. He tweets at @markhneedham.

Kent Beck's Test Driven Development Screencasts

07.29.2010

Following the recommendations of Corey Haines, Michael Guterl, James Martin and Michael Hunger, I decided to get Kent Beck's screencasts on Test Driven Development, which have been published by the Pragmatic Programmers.

I read Kent's 'Test Driven Development By Example' book a couple of years ago and remember enjoying it, so I was intrigued to see some of those ideas put into practice in real time.

As I expected, a lot of Kent's approach was familiar to me, but a few things stood out:

  • Kent wrote the code inside the first test and didn't pull that out into its own class until the first test case was working. I've only used this approach in coding dojos when we followed Keith Braithwaite's 'TDD as if you meant it' idea. Kent wasn't as stringent about writing all the code inside the test though – he only did this when he was getting started with the problem.

    The goal seemed to be to keep the feedback loop as tight as possible, and this approach was the easiest way to achieve that when starting out.

  • He reminded me of the 'calling the shots' technique when test driving a piece of code. We should predict what's going to happen when we run the test rather than just blindly running it. Kent pointed out that this is a good way for us to learn something – if the test doesn't fail/pass the way that we expect it to then we have a gap in our understanding of how the code works. We can then do something about closing that gap.
  • I was quite surprised that Kent copied and pasted part of an existing test almost every time he created a new one – I thought that was just something that we did because we're immensely lazy!

    I'm still unsure about this practice: although Ian Cartwright points out the dangers of doing this, it does seem to make for better pairing sessions. The navigator doesn't have to wait, twiddling their thumbs, while their pair types out what is probably a fairly similar test to one of the others in the same file. Having said that, it could be argued that if your tests are that similar then perhaps there's a better way to write them.

    For me the main benefit of not copy/pasting is that it puts us in a mindset where we have to think about the next test that we're going to write. I got the impression that Kent was doing that anyway so it's probably not such a big deal.

  • Kent used the 'present tense' in his test names rather than prefixing each test with 'should'. This is an approach I came across when working with Raph at the end of last year.

    To use Esko Luontola's lingo I think the tests follow the specification style as each of them seems to describe a particular behaviour for part of the API.

    I found it interesting that he includes the method name as part of the test name. For some reason I've tried to avoid doing this and often end up with really verbose test names when a more concise name with the method name included would have been way more readable.

    A couple of examples are 'getRetrievesWhatWasPut' and 'getReturnsNullIfKeyNotFound' which both describe the intent of their test clearly and concisely. The code and tests are available to download from the Prag Prog website.

  • One thing which I don't think I quite yet grasp is something Kent pointed out in his summary at the end of the 4th screencast. To paraphrase, he suggested that the order in which we write our tests/code can have quite a big impact on the way that the code evolves.

    He described the following algorithm to help find the best order:

    • Write some code.
    • Erase it.
    • Write it in a different order.

    And repeat.

    I'm not sure if Kent intended for that cycle to be followed just when practicing or if it's something he'd do with real code too. It's an interesting idea either way, and since I've never used that technique I'm intrigued to see how it would affect the way the code evolves.

  • There were also a few good reminders across all the episodes:
    • Don't parameterise code until you actually need to.
    • Follow the Test – Code – Cleanup cycle.
    • Keep a list of tests to write and cross them off as you go.
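
To make the first of those points concrete, here is a rough sketch of what 'writing the code inside the test' can look like before any extraction happens. The example and all of its names are mine, not Kent's: the "production" logic sits inline in the test body, and only once it passes would it move out into its own class (say, a RomanNumerals.convert method).

```java
// Hypothetical sketch of 'TDD as if you meant it': the first test
// with the implementation still inlined in the test body.
public class InlineFirstTest {

    // First pass: the conversion logic lives inside the test itself.
    // Once green, it would be extracted into its own class.
    static String convertsOneToRomanI() {
        int arabic = 1;
        String roman = (arabic == 1) ? "I" : "";  // inlined "production" code
        if (!"I".equals(roman)) {
            throw new AssertionError("expected I but got " + roman);
        }
        return roman;
    }

    public static void main(String[] args) {
        convertsOneToRomanI();
        System.out.println("test passed");
    }
}
```

The feedback loop stays tight because there is only one place to look while the first test goes from red to green.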
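The naming style from the screencasts can be sketched in plain Java. Only the two test names come from Kent's code; the table-backed implementation below is my own stand-in for whatever the real code under test was:

```java
import java.util.HashMap;
import java.util.Map;

// Specification-style tests: present-tense names that lead with the
// method under test, so each name reads as a statement about the API.
public class SpecStyleNames {

    static final Map<String, String> table = new HashMap<>();

    static void getRetrievesWhatWasPut() {
        table.put("key", "value");
        if (!"value".equals(table.get("key"))) throw new AssertionError();
    }

    static void getReturnsNullIfKeyNotFound() {
        if (table.get("missing") != null) throw new AssertionError();
    }

    public static void main(String[] args) {
        getRetrievesWhatWasPut();
        getReturnsNullIfKeyNotFound();
        System.out.println("2 tests passed");
    }
}
```

Leading with the method name keeps each name short while still describing one behaviour per test.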
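And for the first of those reminders, a tiny hypothetical example (names are mine) of holding off on a parameter until a test actually demands it:

```java
// 'Don't parameterise until you need to': the greeting stays
// hard-coded until a second test forces a parameter into existence.
public class Greeter {

    // A premature version would be greet(String greeting, String name);
    // no test needs another greeting yet, so "Hello" stays fixed.
    static String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        System.out.println(greet("Kent"));  // prints "Hello, Kent"
    }
}
```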

Overall it was an interesting series of videos to watch and there were certainly some good reminders and ideas for doing TDD more effectively.

 

From http://www.markhneedham.com/blog/2010/07/28/kent-becks-test-driven-development-screencasts

Published at DZone with permission of Mark Needham, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Rehman Khan replied on Sat, 2012/02/25 - 3:40am

Regarding the TDD game that Kent describes at the end of Episode 4 (write code, erase it, write it in a different order, repeat), I do believe he intends to practice it each time. He describes the goal of TDD as "clean code that works". To achieve that, he keeps refactoring until the code is at its best in terms of design and usability.
