
I am a software engineer, database developer, web developer, social media user, programming geek, avid reader, sports fan, data geek, and statistics geek. I do not care if something is new and shiny. I want to know if it works, if it is better than what I use now, and whether it makes my job or my life easier. I am also the author of RegularGeek.com and founder of YackTrack.com, a social media monitoring and tracking tool. Robert is a DZone MVB, not an employee of DZone, and has posted 109 posts at DZone.

Defining Project Failure

11.10.2010

Recently, I wrote about how software development processes do not fail; the people involved with the projects fail. The idea in that post was that processes are rarely followed the way they are written. The failures come from people: people adding scope without following the process, people underestimating the complexity of a task, or people shortening the project duration due to external factors. The problem is that these things happen on every project. Agile processes are a great benefit, but sometimes they need to be adapted to your environment.

Most people try to avoid failure at any cost. However, the real question is: how do we define project failure? In most cases, you will hear people talk about the project going over budget, being late, or being buggy. These are probably the most commonly cited failures. These definitions come from the early days of software development, when the Waterfall model was typically used. Our software processes have become more agile, but our definition of failure has not. In my people-failure post referenced above, I talk about how people get blamed for failure:

One question that needs to be answered in your company is what is your definition of project failure? This is the topic of a longer post by itself. A few simple guides are whether a project finished by a planned deadline, whether the project finished within a planned budget or whether the number of production defects, or even late QA defects, is within some threshold. Once your definition is set, then you know who to blame, right? If the project is late, then the project manager should be blamed. If the project is over budget, then the customer is to blame because they requested too many features. If the number of defects is too high, then the developers are to be blamed.

[Image: Good, Fast, Cheap - Pick Any Two]

Obviously, this simple pattern of blaming is wrong, but it is common. If we move past this simplistic view of software development, we can get a better idea of what failure should look like. First, look at what you can control in a project. Many people have seen the project management triangle, where you have three variables to work with: cost, scope, and schedule. Sometimes scope or schedule is replaced by quality when the triangle is talked about in terms of good, fast, and cheap (pick any two). The problem with these models is that only three variables are considered. It is easy for a project to be considered a failure when those are the only measurements.

In PMBOK 4.0 there are now six variables being monitored: scope, schedule, budget, risk, resources, and quality. This may be better than the original three, but it is more of an attempt to avoid failure than to achieve success. Also, monitoring variables does not tell you whether the project was a failure unless each variable has a threshold that should not be exceeded. Even this simple definition of failure, exceeding the threshold for at least one variable, is not reasonable. If you exceed the threshold for the number of resources but none of the others, is that a failed project? Most likely not. What if you exceed all six thresholds, but the users love the new system? Can we really define project failure as a set of measurements? This is where the boundary between success and failure gets fuzzy.
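To make the threshold idea concrete, here is a minimal sketch (with entirely hypothetical variable names, planned values, and tolerances) of what monitoring thresholds on project variables might look like. Note that it deliberately reports which thresholds were exceeded rather than declaring the project a failure, for exactly the reasons above.

```python
# Hypothetical plan: each monitored variable gets a planned value and an
# allowed overrun expressed as a fraction of the plan. These numbers are
# illustrative, not a recommendation.
PLAN = {
    "schedule_months": (6, 0.10),       # 6 months planned, 10% tolerance
    "budget_dollars": (500_000, 0.15),  # $500k planned, 15% tolerance
    "defect_count": (25, 0.20),         # 25 defects allowed, 20% tolerance
}

def exceeded_thresholds(actuals: dict) -> list[str]:
    """Return the variables whose actual value overran plan plus tolerance."""
    over = []
    for name, (planned, tolerance) in PLAN.items():
        if actuals[name] > planned * (1 + tolerance):
            over.append(name)
    return over

actuals = {"schedule_months": 7, "budget_dollars": 520_000, "defect_count": 20}
print(exceeded_thresholds(actuals))  # -> ['schedule_months']
```

Here the schedule overran its tolerance while budget and defects did not; whether that single miss constitutes failure is a judgment call the measurement itself cannot make.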

Defining Success

Before I try to define failure, I want to define some level of success. First, users must like using the system. I do not mean that they need to take joy from using it, but they cannot hate using it, and it must give them the functionality they currently need. Note that I said functionality they need, not functionality they want. Wants are the features that help define future releases. Of the six variables from PMBOK 4.0, the only one I like for defining success is quality. If you have a low-defect system, then the project did something right. Defect rates are difficult to control because defects can be defined in many ways. A low defect rate means that the system does not have a lot of traditional bugs and that users find the system tends to work as they expect.

Defining Failure

Defining failure is not as simple. Generally, the definition of success above can be used for almost any business. If your project meets those criteria, it can be seen as a success. However, the real world is not quite as forgiving, and your company may have various constraints on your project. Your company probably uses the six variables mentioned above to define thresholds. So, let's look at each one:

  • Scope – For any given project, scope can be used to determine whether a project is complete. However, feature completeness should not be the criterion for success. Functional completeness, which is whether the users can complete their work using the system, is a better measurement, and it is also harder to define. Functional completeness is not known until many users go through their typical workflow in the system several times.
  • Schedule – If a project goes past its deadline, many companies consider that a failure. I do not believe you can measure the schedule miss without looking at other aspects of the project. You also need to look at the reason for the deadline. If the deadline is only the date when all of the work was estimated to be complete, then it was not really a deadline. If there was a time-to-market concern, or the users have some other schedule constraint and need the system by a particular date, then the schedule does become very important. Missing an estimated end date and missing a constrained deadline are two very different things.
  • Budget – Money is an issue for most companies and is one of the few measures that can be a big indicator, or cause, of failure. The budget for a project is typically a function of the number of resources over the life of the schedule, unless there are capital expenditures like new hardware. For smaller companies, the budget can be of extreme importance because funding is limited, especially compared to large corporations. In the most extreme cases, the budget can be the reason a project gets shut down, which obviously means the project is a failure.
  • Risk – At this point, I ask that all project managers skip to the next bullet. Risk, and the implications of that risk, can be managed but should not be evaluated in terms of the success or failure of a project. There are very few cases where risk should matter, and those are projects whose defining feature is to reduce risk in the business itself. Risk is a good measurement for determining the possibility of failure or other bad situations, so from the management perspective it is a good window into the health of a project.
  • Resources – Staffing of a project is typically not used as a measurement of failure, but it can be a very good indicator of impending doom. For example, if a project was estimated to require 5 software engineers for 6 months, and after 2 months another 5 engineers are added, that is a significant indicator that something is wrong. It may mean that a large amount of scope was added to the project, or that the project or its complexity was severely underestimated. Additional resources will also affect the budget, probably add more risk due to communication difficulties and likely impact the quality of the software delivered.
  • Quality – As I mentioned before, quality should be a huge determining factor in the success of a project, but it should not be the defining factor. A high defect rate is always a bad thing and will have long-lasting effects on the application. Thankfully, agile practices like test-driven development and basic automated unit testing have helped developers ensure some level of code quality. The only problem with quality is that you can never rid a system of all defects; you can only be rid of known defects. It can also become fairly expensive to ensure the highest levels of quality.
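The staffing scenario in the Resources bullet also has a direct budget consequence, since budget is roughly resources over the life of the schedule. A back-of-the-envelope sketch, using an assumed (hypothetical) fully loaded monthly cost per engineer:

```python
# Illustrative numbers only: a 5-engineer, 6-month plan, with 5 more
# engineers added after month 2, and the schedule assumed to hold.
MONTHLY_COST_PER_ENGINEER = 15_000  # assumed fully loaded cost, hypothetical

planned_budget = 5 * 6 * MONTHLY_COST_PER_ENGINEER            # 30 person-months
actual_budget = (5 * 2 + 10 * 4) * MONTHLY_COST_PER_ENGINEER  # 50 person-months

print(planned_budget)  # -> 450000
print(actual_budget)   # -> 750000, a roughly 67% overrun with no schedule slip
```

Even in the optimistic case where doubling the team does not slip the schedule at all, the budget overrun alone would blow through any reasonable threshold, which is why a staffing jump is such a strong early indicator.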


At this point, you are probably asking where the definition of failure is. Technically speaking, failure is very specific to your environment. I have defined success only in a limited way, and you could say that failure is not meeting or exceeding the success criteria. The most important thing is defining success and failure in the context of your business. If you know what these definitions are, then you can actually determine whether your project was successful and how you can drive toward success in your projects.

Do you have some unique way to determine project success? Am I completely missing something important in defining failure? Let me know in the comments.

References
Published at DZone with permission of Robert Diana, author and DZone MVB. (source)
