
Jurgen Appelo calls himself a creative networker. But sometimes he's a writer, speaker, trainer, entrepreneur, illustrator, manager, blogger, reader, dreamer, leader, freethinker, or… Dutch guy. Since 2008, Jurgen has written a popular blog covering the creative economy, agile management, and personal development. He is the author of the book Management 3.0, which describes the role of the manager in agile organizations, and he wrote the little book How to Change the World, which describes a supermodel for change management. Jurgen is CEO of the business network Happy Melly, and co-founder of the Agile Lean Europe network and the Stoos Network. He is also a speaker who is regularly invited to talk at business seminars and conferences around the world. After studying Software Engineering at the Delft University of Technology and earning his Master's degree in 1994, Jurgen Appelo has busied himself starting up and leading a variety of Dutch businesses, always in the position of team leader, manager, or executive. Jurgen has experience in leading a horde of 100 software developers, development managers, project managers, business consultants, service managers, and kangaroos, some of which he hired accidentally. Nowadays he works full-time managing the Happy Melly ecosystem, developing innovative courseware, books, and other types of original content. But sometimes Jurgen puts it all aside to spend time on his ever-growing collection of science fiction and fantasy literature, which he stacks in a self-designed bookcase. It is 4 meters high. Jurgen lives in Rotterdam (The Netherlands) -- and in Brussels (Belgium) -- with his partner Raoul. He has two kids, and an imaginary hamster called George. Jurgen has posted 145 posts at DZone.

8 Tips for Performance Metrics

Performance metrics are important. At school, in sports, and in the arts, people want to know how well they are doing. They get grades for their knowledge of math, languages, and geography, rankings for their performances in football, basketball, and tennis, and ratings for their books, plays, or TV shows. If you don't know how you're doing, you cannot verify whether you're doing better next time. That's why people want to know their score on a Microsoft certification exam. It's why they hook up their Nike shoes to their iPods, tracking their running achievements. And it's why I'm looking forward to your Amazon ratings for my book. :-)

One responsibility of a manager is to make sure that employees get to know and understand how well they are doing their jobs. And whether you are producing metrics for individuals or groups, there are a number of tips you may want to keep in mind when measuring their performance:

1) Distinguish skill from discipline
In a previous blog post I discussed two rankings for maturity: skill and discipline. You may wish to evaluate people and teams separately for both. This helps skilled people (who may think that they’re too good to fail) not to forget about discipline. It also helps to avoid overconfidence in disciplined people (who may think they’re good just because they follow procedures). Some examples of measuring discipline: task board is up-to-date, meetings start on time, code coverage always > 95%. Some examples of measuring skill: no build failures, few bugs reported, and customer demos always accepted.
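The idea of rating skill and discipline separately can be sketched in a few lines. This is only an illustration: the check names and the pass/fail values come from the examples above, and the 0-100 scoring scheme is my own assumption, not something the article prescribes.

```python
# Minimal sketch (scoring scheme is an assumption): rate discipline and
# skill as two separate scores, never one blended number.
def score(checks):
    """Fraction of checks passed, scaled to a 0-100 rating."""
    return 100 * sum(checks.values()) / len(checks)

discipline = {
    "task board up to date": True,
    "meetings start on time": True,
    "code coverage > 95%": False,
}
skill = {
    "no build failures": True,
    "few bugs reported": True,
    "customer demos accepted": True,
}

print(f"discipline: {score(discipline):.0f}/100, skill: {score(skill):.0f}/100")
```

Keeping the two scores apart makes it visible when a highly skilled team is slipping on discipline, or when a disciplined team is coasting on process.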

2) Do not rate knowledge or experience
I see knowledge and experience as prerequisites for skill and discipline, but I believe measuring people’s knowledge and experience doesn’t make much sense. Knowledge and experience are about being. Skill and discipline are about delivering. As a writer I don’t get ratings for being a writer. I get ratings for delivering a book. Nobody in your organization should be earning ratings for knowledge and experience, while wasting their time playing Tetris.

3) Rate multiple activities
Each of us has some things we are good at, and some things we are not. You can accept the humiliation of a bad rating for one activity when there is another one on which you've scored well. Similarly, employees can accept criticism more easily when it is compensated with compliments in other areas. Having multiple ratings also makes it easier to be honest and fair to a person. Rate people and teams for the quality of a software release and its timeliness, for customer satisfaction and cost effectiveness, for official standards adhered to and team flexibility.

4) Rate multiple performances
One of my high school teachers had a system where he organized at least ten test scores a year per person, and he promised not to count the lowest one, because “we all have a bad day sometimes.” People in general prefer to be rated multiple times for similar activities. They want a chance to do better next time. Rate them for each project that they do, and each new release that goes into production.

5) Use relative ratings where possible
Compare the performance of a team against their previous performances over time (“you’re now doing 15% better than last time”); against other teams in the organization (“you’re doing 20% worse than the guys in project X”); or against external businesses (“we’re doing 32% better than company B”). With relative metrics teams can strive to do better every time, instead of trying to meet one target and then staying there.
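The arithmetic behind a relative rating is simple; a sketch like the following (with illustrative numbers, for a metric where higher is better) is all it takes:

```python
# A relative rating compares the current result against a baseline:
# the team's own previous release, another team, or an external benchmark.
def relative_change(current, baseline):
    """Percent change of `current` against `baseline`."""
    return (current - baseline) / baseline * 100.0

# e.g. velocity of 46 points this release vs. 40 last release
print(f"{relative_change(46.0, 40.0):+.0f}% vs. our last release")  # +15%
```

The same function covers all three comparisons in the text; only the baseline changes.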

6) Keep the feedback loop as short as possible
There should be as little delay as possible between the time of an activity and the feedback from the metrics. It is one of the reasons I started writing a blog before writing a book. I needed the immediate feedback from readers on my blog to know how to write better. Only a year and a half later did I feel confident enough to start writing a book, which has a much longer feedback cycle.

7) Use both leading and lagging indicators
Leading indicators are metrics that, when they change, will indicate that you might be on the right track in achieving your goal. (Example: increased code coverage of unit tests might indicate higher quality in a product.) Lagging indicators are metrics that verify whether or not you have achieved a goal, after completing the work. (Example: reduced defects reported by customers verifies quality after the product’s release.) In general it is advised to use both leading and lagging indicators.
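One way to keep both kinds of indicator in view is to pair them per goal. The sketch below is hypothetical: the metric names, values, and targets are assumptions chosen to match the code-coverage and customer-defect examples above.

```python
# Pair one leading indicator (moves before the goal is reached) with one
# lagging indicator (confirms it afterwards). All numbers are illustrative.
quality = {
    "leading": {"metric": "unit-test coverage (%)", "value": 88, "target": 85},
    "lagging": {"metric": "customer-reported defects", "value": 4, "target": 10},
}

def on_track(indicator, lower_is_better=False):
    v, t = indicator["value"], indicator["target"]
    return v <= t if lower_is_better else v >= t

print("leading indicator on track:", on_track(quality["leading"]))        # True
print("lagging indicator confirms:", on_track(quality["lagging"], True))  # True
```

A leading indicator that looks good while the lagging one never confirms is a signal that you picked the wrong leading metric.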

8) Never create the ratings yourself
The value of your opinion as a manager about the performance of a person or team is very, very, very small. Make sure that all ratings, whether qualitative or quantitative, are produced by the environment. Not by you. You may be the messenger sometimes, but not the assessor. Be the judge, not the prosecutor.

Talking about judges… Yes, I plead guilty (again). Like many other naïve managers in the world I have personally ranked and rated employees, once per year, using one single value on a 5-level scale. But I regret that now. I believe that people should be rated with multiple ratings, multiple times, as soon as possible. And not by me. Let the world know I’m sorry. It won’t happen again.

Published at DZone with permission of its author, Jurgen Appelo. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)


Michael Norton replied on Wed, 2010/04/14 - 6:45am

Keep your metrics balanced. Not only do we want to measure different activities, we want to measure different objectives. We want to ensure that the attention to one area does not inadvertently create undesirable behaviors. If we only focus on speed, for example, quality is likely to take a hit. If we are recording and reporting on story points completed per week, we should also record and report average cyclomatic complexity or defects per story point.
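Michael Norton's pairing of a speed metric with counter-metrics could look like the rough sketch below; every metric name and number here is assumed for illustration, not taken from a real team.

```python
# Report speed together with counter-metrics, so optimizing one number
# cannot silently degrade the others.
def balanced_report(points_done, defects, complexities):
    return {
        "story points / week": points_done,
        "defects per point": defects / points_done,
        "avg cyclomatic complexity": sum(complexities) / len(complexities),
    }

report = balanced_report(points_done=30, defects=6, complexities=[4, 7, 10, 3])
print(report)
```

If velocity climbs while defects per point or average complexity climbs with it, the "improvement" is likely borrowed from quality.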

Sindy Loreal replied on Sat, 2012/02/25 - 9:01am

I was looking for such a post. I would like to read more about this topic.
