

Technical Debt - Part 2: Identification

01.14.2011

We discussed the process of defining technical debt in a previous article, which outlined some important items to consider before setting out to identify it within an organization. Only after a clear vision of what technical debt means to the organization has been established should one go about trying to find it. Some sources of debt will be easier to find than others; unfortunately, the most difficult sources to find sometimes present the most risk. The process of identifying technical debt should be evolutionary, with each iteration incorporating feedback from the previous one.

Sometimes just looking at how a project team communicates provides clues that technical debt may exist. In a talk on the subject, Andy Lester provides a list of signs that can indicate the existence of debt. If you hear a project team make any of the following statements, it can be cause for concern.

• Don’t we have documentation on the file layouts?
• I thought we had a test for that!
• If I change X it is going to break Y… I think.
• Don’t touch that code. The last time we did it took weeks to fix.
• The server is down. Where are the backups?
• Where is the email about that bug?
• We can’t upgrade. No one understands the code.

While a lot of technical folks will get a good laugh reading that list, they are laughing because they have likely heard each of those statements several times. Each one points to a potential source of technical debt: insufficient documentation, inadequate QA processes, a code base in need of refactoring, the lack of a disaster recovery plan, or the absence of a formal bug tracking system. These statements also point to the interest being paid on that debt: drained productivity, reworking broken code, the inability to migrate to current platforms, and potentially lost data.

Unfortunately, it isn’t always possible to tap into team communications to listen for suspicious language. It is often possible, however, to define measures that help zero in on where debt may exist. These measures can be both qualitative and quantitative, and they act as leading and lagging indicators of the inevitable interest payments. Such metrics make it possible to create technical debt KRIs (key risk indicators), which can then be assembled into dashboards and/or heat maps to highlight potential areas of concern. Let’s take a look at a few examples.
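To make the heat map idea concrete, here is a minimal sketch (in Python, purely for illustration) of rolling raw KRI readings up into red/amber/green statuses per system. The KRI names, thresholds, and readings are all hypothetical placeholders; a real dashboard would pull them from the metric sources discussed below.

```python
# A minimal sketch of rolling KRI readings up into a red/amber/green
# heat map. KRI names, thresholds, and readings are hypothetical.

# (warn, alarm) thresholds per KRI; higher readings mean more risk
THRESHOLDS = {
    "code_complexity": (10, 20),
    "uncovered_code_pct": (30, 50),
    "open_data_defects": (25, 100),
}

def rag_status(kri, reading):
    """Map a raw reading to a red/amber/green status."""
    warn, alarm = THRESHOLDS[kri]
    if reading >= alarm:
        return "RED"
    if reading >= warn:
        return "AMBER"
    return "GREEN"

# Hypothetical readings collected for two systems
readings = {
    "billing":   {"code_complexity": 24, "uncovered_code_pct": 55, "open_data_defects": 12},
    "reporting": {"code_complexity": 8,  "uncovered_code_pct": 35, "open_data_defects": 3},
}

for system, metrics in readings.items():
    row = {kri: rag_status(kri, value) for kri, value in metrics.items()}
    print(system, row)
```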

Leading Indicators

KRI: Poor code quality
Reason: Poor code quality is probably the number one source of technical debt. It can slow new development and maintenance to the point of paralysis.
Metrics: Uncommented, overly complex, and duplicated code
Source: A number of automated tools can generate code quality metrics; NDepend is a good example.
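For a rough sense of what such tools measure, here is a minimal Python sketch that computes two crude proxies, comment ratio and duplicated lines, over source files. It is not how NDepend works; it only shows the shape of the metric.

```python
# A rough sketch of two code quality proxies: comment ratio and
# duplicated lines. Real tools compute far richer metrics.
import sys
from collections import Counter
from pathlib import Path

def quality_metrics(path):
    lines = [l.strip() for l in Path(path).read_text().splitlines() if l.strip()]
    comments = sum(1 for l in lines if l.startswith("#"))
    # Count non-comment lines that appear more than once in the file
    counts = Counter(l for l in lines if not l.startswith("#"))
    duplicated = sum(n for n in counts.values() if n > 1)
    return {
        "comment_ratio": round(comments / max(len(lines), 1), 2),
        "duplicated_lines": duplicated,
    }

for path in sys.argv[1:]:
    print(path, quality_metrics(path))
```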
KRI: Inadequate code coverage
Reason: The percentage of code that has automated tests associated with it is usually inversely proportional to the number of defects that make their way to the end user. You’re usually not aiming for 100% coverage, but the higher the risk, the higher the coverage should be.
Metrics: Percent code coverage
Source: A number of automated tools can generate code coverage metrics; NCover is a good example.
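As one way to make this metric actionable, the sketch below reads a Cobertura-style XML coverage report (the format produced by coverage.py’s `coverage xml` command, among others) and fails the build when coverage drops below a threshold. The 80% bar and file name are arbitrary examples.

```python
# A minimal sketch of turning a coverage report into a pass/fail gate.
# Assumes a Cobertura-style XML report; the threshold is arbitrary.
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 80.0  # required percent line coverage

root = ET.parse("coverage.xml").getroot()
percent = float(root.get("line-rate", 0)) * 100  # Cobertura stores a 0-1 rate

print(f"line coverage: {percent:.1f}% (required: {THRESHOLD}%)")
if percent < THRESHOLD:
    sys.exit(1)  # fail the build so the debt is visible immediately
```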
KRI: Technology non-compliance
Reason: When applications are not compliant with the standards defined by the organization, a variety of challenges arise, including data loss, system failure, and the inability to support new technologies.
Metrics: Number of systems using unapproved technologies, number of unpatched servers, age of disaster recovery tests
Source: Utilities exist for detecting unpatched servers, but reporting mechanisms may need to be built to generate the other metrics.
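As a sketch of one such reporting mechanism, the example below counts systems built on unapproved technologies from a hypothetical inventory export; the file name, column names, and approved list are placeholders.

```python
# A sketch of one compliance metric: counting systems built on
# technologies outside the approved list. The inventory file and its
# columns are hypothetical; substitute whatever your CMDB exports.
import csv

APPROVED = {"SQL Server 2008", ".NET 4.0", "Windows Server 2008"}

with open("system_inventory.csv", newline="") as f:
    rows = list(csv.DictReader(f))  # expected columns: system, technology

noncompliant = [r["system"] for r in rows if r["technology"] not in APPROVED]
print(f"{len(noncompliant)} of {len(rows)} systems use unapproved technologies")
for name in noncompliant:
    print(" -", name)
```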

Lagging Indicators

KRI: SLA failures
Reason: The inability to meet service level agreements can be a good indicator that teams are contending with ineffective architecture or poorly implemented code.
Metrics: Enhancement request aging, frequency/length of outages
Source: Monitoring tools can report on outages, but reporting mechanisms will likely need to be put in place to provide visibility into request aging.
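Here is a minimal sketch of the request-aging half of this metric: bucketing open enhancement requests by how long they have been waiting. The ticket data and bucket boundaries are hypothetical; a real version would query the tracking system.

```python
# A sketch of the "enhancement request aging" metric: bucket open
# requests by how long they have been waiting. Data is hypothetical.
from datetime import date

open_requests = [
    ("REQ-101", date(2010, 11, 2)),
    ("REQ-134", date(2010, 12, 20)),
    ("REQ-150", date(2011, 1, 10)),
]

today = date(2011, 1, 14)
buckets = {"<30 days": 0, "30-90 days": 0, ">90 days": 0}
for _, opened in open_requests:
    age = (today - opened).days
    if age < 30:
        buckets["<30 days"] += 1
    elif age <= 90:
        buckets["30-90 days"] += 1
    else:
        buckets[">90 days"] += 1

print(buckets)
```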
KRI: Audit failures
Reason: As systems and processes are evaluated by internal and external auditors, they are examined for gaps and deficiencies that represent technical debt.
Metrics: Number of “Needs Improvement” and “Unsatisfactory” IT audits
Source: Reporting mechanisms will likely need to be put in place to provide visibility into audit results.
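A simple tally is often enough to get this metric started. The sketch below counts audits by rating so failures can be trended over time; the audit records are hypothetical.

```python
# A sketch of the audit-failure metric: tally audits by rating so
# deficiency counts can be trended. The records are hypothetical.
from collections import Counter

audits = [
    ("billing",   "Satisfactory"),
    ("reporting", "Needs Improvement"),
    ("payments",  "Unsatisfactory"),
    ("payments",  "Needs Improvement"),
]

ratings = Counter(rating for _, rating in audits)
failures = ratings["Needs Improvement"] + ratings["Unsatisfactory"]
print(f"{failures} of {len(audits)} IT audits flagged deficiencies: {dict(ratings)}")
```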
KRI: Poor data quality
Reason: A large percentage of data quality issues are the result of gaps in transformation processes and in front- and back-end validation controls.
Metrics: Number of data quality defects
Source: Automated tools exist that allow the creation of data integrity rules and provide consolidated reporting; Informatica Data Quality is a good example.
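In the spirit of such tools, though far simpler, here is a sketch that expresses integrity rules as predicates over records and counts violations. The rules and sample rows are hypothetical.

```python
# A sketch of data integrity rules as predicates over records, with a
# consolidated defect count. Rules and sample rows are hypothetical.
RULES = {
    "email present":   lambda row: bool(row.get("email")),
    "amount positive": lambda row: row.get("amount", 0) > 0,
}

rows = [
    {"email": "a@example.com", "amount": 125.00},
    {"email": "",              "amount": 80.00},
    {"email": "b@example.com", "amount": -5.00},
]

defects = 0
for i, row in enumerate(rows):
    for name, check in RULES.items():
        if not check(row):
            defects += 1
            print(f"row {i}: failed rule '{name}'")

print(f"{defects} data quality defects across {len(rows)} rows")
```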

Challenges

As the size of the organization and the breadth of the search for debt grow, so do the challenges. In larger organizations, broad initiatives that identify issues and implicate individuals will meet resistance; no one likes to be told that their group or department is not meeting expectations. Larger initiatives need to be championed by the organization’s senior leaders to head off pushback. Additionally, when technical debt is discovered, it needs to be handled with care. If managers are punished or looked upon unfavorably, the resulting friction could short-circuit the entire process.

Managers should be encouraged to self-identify technical debt and be rewarded for doing so by having the remediation prioritized appropriately. The carrot works better than the stick in these situations because there are always ways to fudge metrics. Good code coverage is only useful if effective tests are written to evaluate the code; if developers write ineffective tests just to keep their metrics green, quality will suffer. If managers are rewarded with the resources necessary to make their teams more efficient, there is an incentive to make the process work. Conversely, if they are punished, they will try to game the system to avoid the negative consequences.

Conclusion

There are a number of challenges associated with identifying technical debt, and they compound if the wrong approach is used. As we discussed, the right approach involves evolutionary identification and incentives for getting technical debt onto the corporate radar. A number of frameworks (e.g., COBIT, ITIL, PMBOK) can also help identify potential gaps and outline IT best practices. Making the management of technical debt part of the corporate culture is the key to long-term success. Consider establishing a central repository to track it, using project management tools that facilitate that tracking, and creating an awareness initiative on why managing technical debt is in everyone’s best interest.

From http://blog.acrowire.com/technical-debt/technical-debt-part-2-identification/
Published at DZone with permission of its author, Ted Theodoropoulos.



Comments

Mark Anthony replied on Fri, 2012/04/13 - 11:31am

I’m glad you liked the talk. Yes, that bulleted list always brings laughs from conference attendees, but only the laughs of the pain of acknowledgment of having been there.
