
Richard Perfect is a Solution Architect for Fronde in Wellington, New Zealand. In his spare time, when the commitments of family and two small, dearly loved children permit, he is an avid gamer and can often be found within the virtual worlds of World of Warcraft. He has also recently launched a blog and web application at www.the-decision-wall.com.

Building Better Burn-Down Charts

09.22.2009

As a developer you have just finished your latest bit of code and have released it to the test team. It took a little longer than you thought it would, but finally you can say to your project manager that this task is now 100% complete. The test team receives the new build including your code, but after only a few hours of testing they find a problem, and now it looks like you have at least three days of work to go.

Even after you solve the problem and send a new build back to the test team, the testers find yet another problem. Your project manager is becoming concerned and is putting pressure on you to complete the task. You feel bad because it seems like you can never make reliable estimates. The result of this scenario is that instead of being 100% complete you have unwittingly entered the seemingly never-ending state of 90% complete.

Has this happened to you?

The traditional approach to producing burn-down charts is based on estimating "time to completion" for a collection of tasks, where tasks might represent things like user stories, use cases, components or other activities. Typically the x-axis shows a timeline, with the expected completion date at some point in the future, and the y-axis shows the remaining effort.


Figure 1: Traditional Burn-Down Chart

One of the challenges of using this approach to estimate progress is that the estimated time to completion will be compared against something (the original estimate) that already has a level of uncertainty.

In the example above, a developer never truly knows how long it will take to code something because they never truly know when they have finished. This issue could be addressed by having a policy that says construction is complete when a developer declares something ready-for-test, but as the example illustrates, there is considerable uncertainty and ambiguity about what "complete" means in this context.

Developers are traditionally optimists when it comes to estimating because they lack a framework of risk and uncertainty for evaluating their work in the context of the overall project. Developers also continually struggle with estimating because they rarely do exactly the same thing twice: either the business problem itself is new, or they are using at least one new technology. Conversely, testers tend to be reasonably accurate at estimating their work because their methodologies and toolsets remain largely the same from project to project.

A New Approach

The proposal put forward in this article is that the status of test scripts is a more realistic basis for measuring the completion of a project than asking developers to estimate whether or not they are complete. In general, testers are better at estimating the number of planned test cases than developers are at estimating the number of hours it will take to complete coding. So we suggest that counting the number of planned versus passing test scripts provides a much more reliable metric as the basis for a burn-down chart.
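
To make the counting concrete, here is a minimal sketch in Java (the platform of the project described below). The names BuildSnapshot and percentComplete are our own illustrative choices, not part of any particular tool; the only real inputs are the planned total and the passing count for each build.

    import java.util.List;

    public class TestCaseBurnUp {

        // One snapshot per build: how many test cases are planned at that
        // point in time, and how many are currently passing.
        record BuildSnapshot(int build, int planned, int passing) {}

        // Completion is simply passing / planned for a given snapshot.
        static double percentComplete(BuildSnapshot s) {
            return s.planned() == 0 ? 0.0 : 100.0 * s.passing() / s.planned();
        }

        public static void main(String[] args) {
            List<BuildSnapshot> history = List.of(
                    new BuildSnapshot(30, 400, 310),
                    new BuildSnapshot(31, 400, 325),
                    new BuildSnapshot(32, 470, 330)); // scope change raises the plan
            for (BuildSnapshot s : history) {
                System.out.printf("Build %d: %.1f%% complete (%d/%d passing)%n",
                        s.build(), percentComplete(s), s.passing(), s.planned());
            }
        }
    }

Plotting the planned and passing counts per build is all it takes to produce the chart.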

The following chart shows data from a real-world Java/J2EE project, built using Oracle Application Server and JSF.


Figure 2: Burn-Down Chart based on passing Test Cases

This chart is more like a “burn-up” chart than a “burn-down” chart: the completed work accumulates up towards the target rather than burning down towards zero. Drawing the chart this way shows more clearly that when the client changes the scope of the project, the work effort involved changes with it. See link for more discussion about this.

The somewhat linear rate of progress over builds 1-20 is deceptive, as full functional testing had not yet commenced at that stage. From approximately January 2007 true functional testing was under way and the data was reported weekly. The sharp changes in the test case metric occurred at Build 32, Build 47 and Build 56, where significant quantities of new test cases were discovered due to scope changes. In the last case there was a crash program involving extra resources to get the new work done in the shortest amount of time possible.
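
As a rough illustration of the burn-up presentation itself, the sketch below prints a bar of '#' for passing cases against a '|' marking the planned total, so a scope change visibly moves the target. The build numbers and counts are hypothetical, loosely shaped like the jump at Build 32:

    public class AsciiBurnUp {
        public static void main(String[] args) {
            int[][] snapshots = { // {build, planned, passing}
                    {31, 400, 325},
                    {32, 470, 330},  // scope change: the target moves right
                    {33, 470, 349},
            };
            int scale = 10; // one character per ten test cases
            for (int[] s : snapshots) {
                StringBuilder bar = new StringBuilder();
                for (int i = 0; i < s[1] / scale; i++) {
                    bar.append(i < s[2] / scale ? '#' : '.');
                }
                System.out.printf("Build %d %s| %d/%d%n", s[0], bar, s[2], s[1]);
            }
        }
    }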

Tangible Metrics

The strength of this approach is that you are measuring the completion of a project with something that truly reflects whether or not the project is finished. Instinctively people know that they can't test an application 100%, but we still effectively delegate the authority to finish a project to the test team: we are finished when the test team says we are finished.

Testers generally start a project with a rough estimate of how many test cases a use case will need for adequate coverage. Since in most cases they write their test cases in parallel with the construction of the code, they quickly convert that estimate into a tangible suite of test cases. Testers will know the exact set of test cases to be used before developers have completed construction of the code.

A developer will start a project with a rough idea of how long a use case will take to develop. Estimates can vary considerably depending on many factors, such as knowledge, experience and the technologies used. For many developers, the estimate of how long it will take to complete a task becomes proportionally less accurate the closer they get to finishing it, because it is very difficult for a developer to predict how many bugs a tester will find.

The tester's metric of planned test cases becomes proportionally more accurate over time, but a developer's metric of estimated time to completion becomes less accurate just when you need it the most.



Figure 3: Developer/Tester Estimation Accuracy

Not all test cases/scripts are the same size; some may take longer to execute and some may be more problematic, but overall they do tend to average out. It's not the absolute size of a test case that's important; it's the rate (velocity) at which test cases are passing. It's the same as if you were measuring use cases, stories or "planning points".
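
Because it's the velocity that matters, a simple completion forecast falls out of the same counts. The sketch below is an assumption-laden illustration (a fixed weekly reporting window and made-up numbers), not a prescribed formula:

    public class TestCaseVelocity {

        // Average increase in passing test cases per week over the window.
        // Assumes at least two reports and some progress within the window.
        static double velocity(int[] passingByWeek) {
            int n = passingByWeek.length;
            return (double) (passingByWeek[n - 1] - passingByWeek[0]) / (n - 1);
        }

        public static void main(String[] args) {
            int[] recent = {310, 325, 338, 352}; // weekly passing counts
            int planned = 470;                   // current planned total
            double v = velocity(recent);
            double weeksLeft = (planned - recent[recent.length - 1]) / v;
            System.out.printf("Velocity %.1f cases/week, about %.1f weeks to go%n",
                    v, weeksLeft);
        }
    }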

Unit Testing

But wait, there's more! You can also apply this technique to unit tests: if the developers estimate the number of planned unit tests and you count the number of passing unit tests, you get a completion metric for code construction.
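
For automated unit tests the passing count can be harvested straight from the build. As one hedged example, the sketch below tallies Maven Surefire-style XML reports (the testsuite element's tests, failures and errors attributes); the report directory and the planned figure of 120 are illustrative assumptions:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Element;

    public class UnitTestBurnUp {
        public static void main(String[] args) throws Exception {
            int plannedUnitTests = 120; // the developers' estimate, kept by hand
            int total = 0, failed = 0;
            DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
            File[] reports = new File("target/surefire-reports")
                    .listFiles((dir, name) -> name.endsWith(".xml"));
            if (reports != null) {
                for (File report : reports) {
                    // Report roots look like:
                    // <testsuite tests="12" failures="1" errors="0" ...>
                    Element suite = builder.parse(report).getDocumentElement();
                    total += Integer.parseInt(suite.getAttribute("tests"));
                    failed += Integer.parseInt(suite.getAttribute("failures"))
                            + Integer.parseInt(suite.getAttribute("errors"));
                }
            }
            int passing = total - failed;
            System.out.printf("Construction: %d of %d planned unit tests passing%n",
                    passing, plannedUnitTests);
        }
    }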

Developers are not used to estimating test cases and it may take them a little bit of practice to get used to the process. There's also a very useful technique that we have used in the past for this - but that's the subject of a different article :-)

Summary

Producing a burn-down chart by tracking planned versus passing test cases as a measure of completion:

  • Is more accurate than using a developer's estimate of how much time it will take to complete a task.
  • Becomes more accurate faster, and just when you need it the most.
  • Applies to both automated and manual (unit and functional) test cases.
  • Rests on passing test cases, which ultimately decide when something is finished.

We liked this idea so much we even built a product around it. You can find out more at www.traceanalyst.com.


 
Published at DZone with permission of its author, Richard Perfect.
