Managing Schedule Flaws using Agile Methods
Software projects rarely come in on time and on budget, leading to dissatisfied end users. DeMarco and Lister, authors of “Waltzing with Bears: Managing Risk on Software Projects,” identify five core risks of software project management, schedule flaws among them.
In this article, we’ll discuss several symptoms and causes of schedule flaws, present metrics and diagrams that can be used to track your team’s progress against its schedule, and describe Agile ways to address these risks.
ABOUT THE AUTHOR
Brian Button is the VP of Engineering and Director of Agile Methods at Asynchrony Solutions, Inc. (www.asolutions.com), a leader in Agile software development. Brian instituted the Asynchrony Center of Excellence, leading a group of agile trainers and mentors that train, innovate, and evangelize agile to internal staff and project teams at outside corporations. For more information, visit http://blog.asolutions.com.
Schedule flaws can be caused by the unpredictability of the environment around a project or by the inherent difficulty in predicting the time significant pieces of software will take to implement.
Estimation difficulties are simply a fact of software development. We rarely build the same system twice, so each new project is truly new, with its own problems, solutions, and complexities. No software plan survives first contact with the customer, so your plan must evolve to match changing reality. This is the risk we’ll focus on below.
Teams that suffer from schedule flaws often exhibit one or more of the following five symptoms:
1. Frequent change requests from customers and stakeholders Customers and stakeholders learn more about the system being built as it takes shape. Based on their new knowledge, stakeholders often change existing requirements or add new ones.
2. Unreliable estimates Every interesting piece of software that gets built is inherently something new. Because of this, the time to build individual pieces is difficult to accurately estimate.
3. Large amount of “off the books” work All teams have “extra” work to do that is never written down, be it testing, documentation, or just finishing the “edge cases” of features. This work has to happen, but it appears on no one’s schedule.
4. Uncertain quality Inadequate or late testing leaves the quality of the system in doubt, allowing defects to be found in final testing. Since these defects are unknown until the last round of testing, their effect on the schedule is unknown.
5. Matrixed team members Team members that are shared among multiple teams can become bottlenecks. Waiting for them to become available can cause delays in deliverables.
It is important to have a good set of historical metrics to understand the effects the above causes of schedule flaws have on your project. The most basic metrics used to track a project’s progress and to illustrate schedule flaws are two variations of burn charts. The first, a burndown chart, graphs work remaining versus time, sometimes with both actual and planned burn lines shown. A burnup chart shows the same progress as work completed over time, alongside the total scope, which makes work added to the project visible. A project is on track as long as the actual progress and planned progress shown on either type of chart match. A solid metric describing progress against your desired delivery date is the most critical measurement a project can keep, since it is the leading indicator of whether you have a problem.
In Figure 1, Example Burn Down Chart, we can see a project that spent several weeks tracking the ideal curve down its burndown chart. Suddenly, though, the project went off track: a large amount of work was added to the release, as can be inferred from the upward slope of the burndown line. Scope had to be cut or time added to bring the project in successfully.
In Figure 2, Example Burn Up Chart, the total height of each bar represents the total amount of work in the project: the area in green is work completed, and the area in red is work left to do. Changes in total height show the scope changing. Here, you can see that work is being added as quickly as it is being finished, so the finish line is constantly moving to the right.
These two graphs show the same backlog for the same project, but illustrate the different information available from each graph.
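The bookkeeping behind both charts can be sketched in a few lines. The following is a minimal illustration, not taken from the article: given a starting scope and per-iteration records of points completed and points added, it produces the burndown (remaining work) and burnup (completed work, total scope) series that the two figures plot.

```python
# Sketch of computing burn-chart series from per-iteration records.
# The data shape (completed, added) is illustrative, not from the article.

def burn_series(total_scope, iterations):
    """Given starting scope in points and a list of
    (points_completed, points_added) tuples per iteration,
    return the burndown (remaining) and burnup (done, total) series."""
    remaining, done, total = total_scope, 0, total_scope
    burndown = [remaining]
    burnup = [(done, total)]
    for completed, added in iterations:
        total += added                        # scope creep moves the finish line
        remaining = remaining - completed + added
        done += completed
        burndown.append(remaining)
        burnup.append((done, total))
    return burndown, burnup

down, up = burn_series(100, [(10, 0), (10, 5), (8, 12)])
# In the last iteration, 12 added points outpace 8 completed points,
# so the burndown line slopes upward -- the Figure 1 warning sign.
```

Note that the burndown series alone hides *why* it went up; the burnup series keeps completed work and total scope separate, which is exactly the extra information Figure 2 provides.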
METRICS TO UNDERSTAND CAUSES
Once it is determined that the project is not keeping to its schedule, more investigation must be done to determine why that is. Below are several metrics that can be used to learn about underlying causes of schedule flaws.
1. Changing Capacity Teams that do not have a stable amount of working hours can see their productivity rise and fall as their capacity changes. Measuring the number of available hours across the entire team will provide insight as to whether this is the cause of the schedule flaw. If it is, you’ll see velocity vary directly with capacity.
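One quick way to test for this cause is to normalize velocity by capacity. The sketch below uses invented numbers: if points delivered per available hour stays roughly flat while raw velocity swings, capacity changes, not productivity, explain the variation.

```python
# Hypothetical check: does velocity simply track team capacity?
# All numbers here are invented for illustration.

def points_per_hour(velocities, capacities):
    """velocities: points delivered per iteration;
    capacities: available team hours in that iteration."""
    return [v / h for v, h in zip(velocities, capacities)]

rates = points_per_hour([30, 18, 27], [200, 120, 180])
spread = max(rates) - min(rates)
# A narrow spread suggests the velocity dips are a capacity problem,
# not an estimation or productivity problem.
```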
2. Poor Estimation Accuracy The most important part of examining estimation accuracy is identifying stories whose actuals are outliers from the main body of the estimates and understanding why they were off by so much. Once outliers are found, some level of root cause analysis can determine whether there was a special reason for the variance or whether something systemic made a group of estimates drastically wrong.
For example, on a recent project I managed, we tracked estimates versus actuals. Every story was estimated in “points,” where a single point corresponded to a half-day of work. When graphed, we saw how rapidly the distribution of results changed as the estimated size of each story increased.
In Figure 3, Estimates versus Actuals for 1 Point Stories, we can see a graph of our results. The X axis represents the actual number of hours a story took, while the Y axis shows how many stories had that actual duration. The majority of stories were 6 hours or fewer, with many shorter than 4 hours – we didn’t have fractional points, so 1 point was as low as we could go. There were a few outliers, though, and we talked about their special causes. In some cases there were defects or poorly written code in the inherited codebase; in others the requirements were unclear early on and grew as they were better understood.
As the size of the story estimates increased, even to just 2 points, the quality of the estimates began to drop. In Figure 4, Estimates versus Actuals for 2 Point Stories, you can see a larger spread of actuals resulting from the same types of issues as in the one-point case, with misunderstood requirements playing a larger part in the inaccuracies as estimates grew.
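The analysis behind Figures 3 and 4 can be reproduced with very little code. This is a hedged sketch with invented numbers: it buckets actual hours for stories of one estimated size into a histogram and flags candidates for root cause analysis, here defined (as an assumption, not the article's rule) as stories that took more than twice their estimate.

```python
from collections import Counter

# Illustrative sketch; the story data and the 2x outlier threshold
# are assumptions for demonstration, not taken from the project.

def actuals_histogram(actual_hours):
    """Bucket stories by actual duration -- the shape of Figures 3 and 4."""
    return Counter(actual_hours)

def outliers(actual_hours, expected_hours, tolerance=2.0):
    """Flag stories whose actuals exceeded the estimate by tolerance x.
    A 1-point story corresponds to a half day (4 hours) here."""
    return [h for h in actual_hours if h > tolerance * expected_hours]

one_pointers = [3, 4, 4, 5, 6, 3, 4, 14]     # actual hours; 14 is suspicious
hist = actuals_histogram(one_pointers)
bad = outliers(one_pointers, expected_hours=4)
```

Each flagged story then gets the conversation described above: was there a special cause (inherited defects, unclear requirements), or is something systemically wrong with a whole class of estimates?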
3. Uncertain Quality and “Off the Books” Work On teams I’ve been associated with, these two causes are known by everyone on the team but acknowledged by no one. The best way to understand their effects is for a manager to work closely enough with the team to feel the undercurrent of tension that people are surely experiencing. Faced with this undercurrent, the manager must start conversations about quality, completeness, and readiness. The longer the team waits to have these conversations, the more unpleasant the surprise at the project’s end.
AGILE PLANNING & ROADMAPS
Each of these causes of schedule flaws represents a risk to a project. Agile teams manage this risk by changing the role and method of planning compared with more traditionally run projects.
Agile teams plan differently. They absolutely have a plan and a schedule, but the plan is encouraged to change over time as learning happens. Planning becomes a commonplace activity, performed at different levels and at different rhythms throughout a project. These different levels of planning serve to address each of the issues described above in specific ways.
At the highest levels, Agile teams plan for delivering capabilities to customers on some agreed-upon schedule. These capabilities are defined just completely enough to describe each feature while leaving as much wiggle room as possible in how it will be realized. That wiggle room sounds absurd on the surface, but it is actually a key ingredient of what makes this style of planning so successful.
The output of this planning is a roadmap of capabilities that will be delivered at specified times in the future, with some amount of detail about what each capability will provide. That should be enough for long-range planning, marketing, and sales: they have a rough roadmap and a near-certain guarantee of delivery.
By keeping this long-range planning at a very high level, people are free to make changes in the plan at this point, whether from changing market forces or in reaction to a schedule flaw, with little cost and with little risk. This level of planning happens several times a year.
PLANNING & EXECUTION
One level down from roadmap/portfolio-level planning is Release Planning. This is when teams solidify the features they are going to deliver, typically 4-12 weeks out. Capabilities from the roadmap are selected and broken down into smaller, more understandable pieces, called epics. Epics represent coherent, releasable features in an application that are more defined than capabilities but still too large to implement directly. The epics selected first tend to be the ones that provide the greatest business value, risk reduction, or learning for an organization. Lower-valued features are pushed later in the project schedule, or perhaps fall off completely if their value never becomes high enough to justify the cost of developing them.
The epics are estimated by the practitioners who are going to implement them, and are prioritized according to their importance to the release. This level of planning happens once per release: 4-12 times a year.
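The selection step above amounts to filling a release with the most valuable epics that fit. A minimal sketch, with invented epic data and a simple greedy rule standing in for the richer conversations a real team has:

```python
# Hedged sketch of release planning: pick the highest-value epics that
# fit the release capacity. Epic names, values, and sizes are invented.

def plan_release(epics, capacity_points):
    """epics: list of (name, business_value, estimated_points).
    Greedily select by value until capacity would be exceeded."""
    selected, used = [], 0
    for name, value, points in sorted(epics, key=lambda e: -e[1]):
        if used + points <= capacity_points:
            selected.append(name)
            used += points
    return selected

epics = [("reporting", 8, 20), ("sso-login", 13, 30), ("theming", 3, 15)]
release = plan_release(epics, capacity_points=50)
# "theming" falls off the release -- exactly the low-value trimming
# described above.
```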
The most frequent form of planning, iteration planning, happens once every week or two and is where the rubber meets the road. A small number of epics is brought to the team, where they are broken into “user stories,” small bits of functionality that provide some portion of the epics’ features.
During iteration planning, the team discusses the low-level business details of the work and builds a plan for how it will implement this set of user stories. Each story is defined as concretely as possible, including a set of acceptance criteria that detail what it means for that story to be done. At this point, these fine-grained units of work generally represent a day or less of effort; as described above, smaller stories are estimated more accurately.
As part of the capacity planning used during iteration planning, historical values for the capacity of the team are tracked and used to limit the amount of work promised for the 1-2 week time box. This regular rhythm of planning, committing, executing, and delivering gives the project a heartbeat that allows its progress to be measured and tracked.
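This use of history to cap the commitment is often called “yesterday’s weather.” A minimal sketch, with illustrative numbers: average the last few iterations’ completed points and commit no more than that for the next time box.

```python
# Sketch of "yesterday's weather" capacity planning: limit the next
# iteration's commitment to recent history. Velocities are invented.

def iteration_commitment(recent_velocities, window=3):
    """Average the last `window` iterations' completed points to set
    an upper bound on what the team promises next."""
    history = recent_velocities[-window:]
    return sum(history) / len(history)

limit = iteration_commitment([21, 18, 24, 20, 22])
# The team would commit to roughly this many points for the next
# 1-2 week time box, keeping the project's heartbeat measurable.
```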
The final execution stage is where quality is monitored and created every day. Quality is never uncertain on a team like this. Each move that a team member makes is done with an eye on producing quality. There are automated tests around security, load, scalability, and performance. Most tests are run dozens of times a day and at least once per night. The system is continuously built, deployed, and tested.
Obviously, effort is expended to reach these quality levels, but the benefit is that the team can be ready to ship code at any time. Every feature that is done is coded, tested at the feature and system level, documented, and ready to go. This lets progress be tracked in terms of completed value, and allows for early and incremental delivery of working functionality.
By focusing on the agile practices and metrics detailed in this article, teams can identify and manage the risks that cause schedule flaws. The metrics give visibility into the risks, while the practices give teams tools to manage them. Together, they let teams deliver great value to their stakeholders quickly, effectively, and with high quality.
Figure 1 - Example Burn Down Chart
Figure 2 - Example Burn Up Chart
Figure 3 - Estimates versus Actuals for 1 Point Stories
Figure 4 - Estimates versus Actuals for 2 Point Stories