One of the biggest problems with the traditional waterfall model is simply its length. It's much more difficult to understand how things went off the rails when your project lasts for months or years. By the time you realize there's a problem with one part or one team, the root cause could've occurred last year, which makes it difficult to locate and fix. The alternative seems like a lot more work, but it has several distinct advantages.
First, let me ask: how long does it take you to drive to the beach? Got a number? Great. Now tell me how long it takes you to drive to work. Unless you're one of the lucky bums we all love to hate, your beach trip is probably measured in hours and your work drive in minutes. Which of these two estimates is more accurate? The shorter one is always the safer bet. The same is true of software estimates.
When a developer tells me that a task will take weeks or months, I know that the work hasn't been completely broken down, and the odds are good that there are hidden problems lurking in the work yet to be done. These landmines tend to blow up and take your project's timeline with them. The only way to get great estimates from developers is to break long features down into shorter tasks. I've found that nothing less than half a day, and nothing longer than a week, is a great rule of thumb.
Less than half a day is, generally speaking, impossible to complete. Microsoft releases a service patch, your hard drive fails, or Apple releases a product you've got to read about all afternoon. (You know who you are!) These days I suggest that people use one-day tasks. Anything shorter is probably overly optimistic (that's code for wrong).
But we also cap the amount of work we let someone estimate at one week. I really prefer three days, and find it's the magic spot where people can get their head around a problem. Anything longer than a three-day estimate and you run a strong risk of glossing over details that aren't completely understood. It's those poorly understood details that blow up in your face and take days to untangle and complete.
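The rule of thumb above is mechanical enough to sketch in code. This is a minimal, hypothetical illustration (the task names and the `needs_breakdown` helper are mine, not from any real planning tool): flag any task whose estimate falls outside the one-day-to-three-day sweet spot, so it gets broken down or padded before it enters an iteration.

```python
# Hypothetical sketch of the estimate rule of thumb from the post:
# estimates shorter than a day are probably optimistic, and estimates
# longer than three days probably hide poorly understood details.

MIN_DAYS = 1.0   # anything shorter is probably overly optimistic
MAX_DAYS = 3.0   # anything longer risks glossing over details

def needs_breakdown(estimate_days):
    """Return True if the task should be rethought before planning."""
    return not (MIN_DAYS <= estimate_days <= MAX_DAYS)

# Example backlog (made-up tasks and numbers).
tasks = {"login form": 0.5, "report export": 2.0, "search rewrite": 10.0}

# "login form" and "search rewrite" fall outside the sweet spot.
flagged = [name for name, days in tasks.items() if needs_breakdown(days)]
```

The point isn't the code itself but that the bounds are a cheap, automatic check you could run over a backlog before a planning meeting.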
In a few days I'll present a few techniques you can use to drive down your estimate lengths. Let me know (in the comments) if this interests you. If there's enough interest I'll prioritize it.
So what does the hard-stop iteration have to do with estimates? A shorter iteration forces you to consistently create shorter estimates. I really like one-week iterations. A week seems too short to do anything useful, but it's not. It creates a very regular rhythm for the team. Each week work is examined, accepted (or rejected for lack of detail), and tackled. If there's a problem, you find out at the end of the week.
That's another benefit... lots of finish lines! Have you ever worked on a project that seemed to never end? It dragged out for months or even years? Bugs, feature bloat, changes in direction... who cares why, you just knew you could never finish it.
With a hard-stop, one-week iteration, you get more finish lines. The team gets to finish something every week. It seems trivial, but the morale boost can be significant for a team that's given up.
At the end of the iteration, you look at each task you've tackled (see my post on 3x5 cards), and decide whether the feature is Done or Not Done. Each week you find out whether you're keeping up or falling behind. It won't tell you why a team is falling behind, but you'll know that it is. That's good information to have.
At the end of the iteration, never automatically roll a feature into the next iteration. It was selected based on your assurance that the feature would only cost N days. We've now blown that estimate, so the Responsible Person needs to reevaluate the feature based on its new (and more expensive!) cost. Most items do roll into the next iteration, but not always.
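That rule can be sketched too. This is a hypothetical illustration, not a real tool: the `reevaluate` helper, the dict shape, and the assumption that the new cost is the original estimate plus the days already spent are all mine. The one thing it encodes faithfully is that a Not Done task never rolls forward automatically; someone has to re-accept it at its new, more expensive price.

```python
# Hypothetical sketch of the end-of-iteration rule: Not Done work is
# re-priced, and a responsible person must re-accept it before it rolls
# into the next iteration. (The cost model here is an assumption.)

def reevaluate(task, accept):
    """Return the task for the next iteration only if re-accepted."""
    if task["done"]:
        return None  # finished; nothing to roll forward
    # Assumed cost model: original estimate plus days already sunk.
    new_cost = task["estimate_days"] + task["days_spent"]
    repriced = {**task, "estimate_days": new_cost}
    # Never roll automatically -- the accept callback is the human call.
    return repriced if accept(repriced) else None

task = {"name": "search rewrite", "done": False,
        "estimate_days": 3, "days_spent": 3}
rolled = reevaluate(task, accept=lambda t: t["estimate_days"] <= 8)
```

Here the feature originally cost 3 days, burned 3 days without finishing, and now costs 6; it rolls forward only because the (stand-in) Responsible Person still finds that price acceptable.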
Risk mitigation is another great side effect of the time boxed iteration. How quickly do you want to know that team X or team Y can't complete their commitments? Or how quickly do you want to know they can?
We've touched on many topics here that deserve a much deeper treatment, but I'll save that for another post! Topics that tie in include test automation, smaller feature estimates, planning meetings, demos, and retrospectives. Which ones do you want to hear about first?