

CannotMeasureProductivity

09.03.2013

We see so much emotional discussion about software process, design practices and the like. Many of these arguments are impossible to resolve because the software industry lacks the ability to measure some of the basic elements of the effectiveness of software development. In particular, we have no way of reasonably measuring productivity.

Productivity, of course, is something you determine by looking at the input of an activity and its output. So to measure software productivity you have to measure the output of software development. The reason we can't measure productivity is that we can't measure output.

This doesn't mean people don't try. One of my biggest irritations is studies of productivity based on lines of code. For a start, there's all the stuff about differences between languages, different counting styles, and differences due to formatting conventions. But even if you use a consistent counting standard on programs in the same language, all auto-formatted to a single style, lines of code still don't measure output properly.

Any good developer knows that they can code the same stuff with huge variations in lines of code; furthermore, code that's well designed and factored will be shorter because it eliminates duplication. Copy-and-paste programming leads to high LOC counts and poor design because it breeds duplication. You can prove this to yourself by going at a program with a refactoring tool that supports Inline Method: just using that on common routines should allow you to easily double the LOC count.
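
As a contrived illustration, the two Python snippets below produce exactly the same report, yet the copy-and-paste version carries roughly twice the lines of code of the factored one; running Inline Method on the shared helper would "add" LOC without adding any output.

    # Factored version: the common totalling logic lives in one helper.
    def total(prices, tax_rate):
        subtotal = sum(prices)
        return subtotal + subtotal * tax_rate

    def report(domestic_prices, export_prices):
        return {
            "domestic": total(domestic_prices, 0.20),
            "export": total(export_prices, 0.05),
        }

    # Copy-and-paste version: same behaviour, roughly double the LOC,
    # because the totalling logic is duplicated inline for each case.
    def report_duplicated(domestic_prices, export_prices):
        domestic_subtotal = sum(domestic_prices)
        domestic_total = domestic_subtotal + domestic_subtotal * 0.20
        export_subtotal = sum(export_prices)
        export_total = export_subtotal + export_subtotal * 0.05
        return {
            "domestic": domestic_total,
            "export": export_total,
        }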

You would think that lines of code were dead as a measure, but it seems that every month I see productivity studies based on them, even in respected journals such as IEEE Software, which should know better.

Now this doesn't mean that LOC is a completely useless measure; it's pretty good at suggesting the size of a system. I can be pretty confident that a 100 KLOC system is bigger than a 10 KLOC system. But if I've written the 100 KLOC system in a year, and Joe writes the same system in 10 KLOC during the same time, that doesn't make me more productive. Indeed, I would conclude that our productivities are about the same, but that my system is much more poorly designed.

Another approach that's often talked about for measuring output is Function Points. I have a little more sympathy for them, but am still unconvinced. It hasn't helped that I've heard stories of a single system getting counts that varied by a factor of three from different function point counters examining the same system.

Even if we did find an accurate way for function points to determine functionality, I still think we would be missing the point of productivity. I might say that measuring functionality is a way to look at the direct output of software development, but true output is something else. Assuming an accurate FP counting system, if I spend a year delivering a 100 FP system and Joe spends the same year delivering a 50 FP system, can we assume that I'm more productive? I would say not. It may be that only 30 of my 100 FP represent functionality that's actually useful to my customer, while all of Joe's are useful. I would thus argue that while my direct productivity is higher, Joe's true productivity is higher.

Jeff Grigg pointed out to me that there are internal factors that affect delivering function points. "My 100 function points are remarkably similar functions, and it took me a year to do them because I failed to properly leverage reuse. Joe's 50 functions are (bad news for him) all remarkably different. Almost no reuse is possible. But in spite of having to implement 50 remarkably different function points, for which almost no reuse leverage is possible, Joe is an amazing guy, so he did it all in only a year."

But all of this ignores the point that even useful functionality isn't the true measure. As I get better, I produce 30 useful FP of functionality, while Joe produces only 15. But someone figures out that Joe's 15 lead to $10 million in extra profit for our customer, while my work leads to only $5 million. I would again argue that Joe's true productivity is higher because he has delivered more business value, and I assert that any true measure of software development productivity must be based on delivered business value.

This thinking also feeds into success rates. Common statements about software success are bogus because people don't understand WhatIsFailure. I might argue that a successful project is one that delivers more business value than the cost of the project. So if Joe and I run five projects each, and I succeed on four while Joe succeeds on one, have I finally done a better job than Joe? Not necessarily. If my four successes yield $1 million in profit each, but Joe's one success yields $10 million more than the cost of all his projects combined, then he's the one who should get the promotion.

Some people say "if you can't measure it, you can't manage it". That's a cop-out. Businesses manage things they can't really measure the value of all the time. How do you measure the productivity of a company's lawyers, its marketing department, or an educational institution? You can't, but you still need to manage them (see Robert Austin for more).

If team productivity is hard to figure out, it's even harder to measure the contribution of individuals on that team. You can get a rough sense of a team's output by looking at how many features they deliver per iteration. It's a crude measure, but it can tell you whether a team is speeding up, or give a rough comparison of one team against another. Individual contributions are much harder to assess. While some people may be responsible for implementing features, others may play a supporting role, helping others implement theirs. Their contribution is that they raise the whole team's productivity, but it's very hard to get a sense of their individual output unless you are a developer on that team.
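
A contrived sketch of how crude that per-iteration signal is (the numbers are made up): it can show whether a team is trending up, but it says nothing about the value of the features delivered or about who contributed what.

    # Hypothetical feature counts per iteration for two teams.
    team_a = [4, 5, 5, 7, 8]
    team_b = [7, 6, 6, 5, 6]

    def speeding_up(counts):
        # Compare the average of the last two iterations with the first two.
        early = sum(counts[:2]) / 2
        late = sum(counts[-2:]) / 2
        return late > early

    print("Team A speeding up?", speeding_up(team_a))  # True
    print("Team B speeding up?", speeding_up(team_b))  # False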

If all this isn't complicated enough, the Economist (Sep 13-19, 2003) had an article on productivity trends. It seems that economists are now seeing productivity increases in businesses as a result of the computer investments of the nineties. The point is that the improvements lag the investments: "Investing in computers does not automatically boost productivity growth; firms need to reorganize their business practices as well". A similar lag occurred with the introduction of electric power.

So, not only is business value hard to measure, there's a time lag too. So maybe you can't measure the productivity of a team until a few years after a release of the software they were building.

I can see why measuring productivity is so seductive. If we could do it we could assess software much more easily and objectively than we can now. But false measures only make things worse. This is somewhere I think we have to admit to our ignorance.



Published at DZone with permission of Martin Fowler, author and DZone MVB. (source)

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Ian Mitchell replied on Tue, 2013/09/03 - 2:55am

In an agile way of working, the best measure of productivity is arguably the incremental delivery of potentially releasable software. However, there is an industry bias towards the more immediately measurable, such as function points, SLOC, or whatever.

Perhaps the most controversial metrics of all are not those of "perceived productivity", but of "estimated productivity". For example, there's been quite a bit of debate recently concerning the viability of story point estimates as a metric that allows work to be bid across teams. This has seen advocates of frameworks that support such usage pitted against others who view these techniques as unagile and dangerous:

http://www.jrothman.com/blog/mpd/2013/08/comparing-teams-is-not-useful-exposing-another-management-myth.html

http://kenschwaber.wordpress.com/2013/08/06/unsafe-at-any-speed/

http://agile.dzone.com/articles/method-wars-scrum-vs-safe


Christophe Blin replied on Tue, 2013/09/03 - 4:58am

Quote : I might argue that a successful project is one that delivers more business value than the cost of the project.

=> This is the pure gem of this article. 

If managers had only this one measure, it would be good enough. The REAL challenge is to define the measure before the project starts.
For example: "As a manager, I'll invest X team members for X days in this project, and if after X-n days I see XXX, then I'll continue the project and define a new success measure."

The problem is that far too often managers start a project with NO vision, or with a very distant one (five years out, for example).
That may be OK for an airplane or a space shuttle program, but not for enterprise software: in five years the business will have changed and the forecast will be completely wrong.

Russell Pannone replied on Thu, 2013/09/05 - 11:35am

Hi Martin,

I have admired and respected your thought leadership for many years.

I would like to comment on what you stated in your article CannotMeasureProductivity: "So, not only is business value hard to measure, there's a time lag too."

I often hear people say we cannot measure business value because it is hard, so we will not do it. This is a shame. If something is worthwhile to do, the fact that it is hard should not be used as an excuse for not doing it.

For me, simple measures and metrics for Customer Satisfaction, Business Value, Employee Satisfaction, and Product Quality are elements of productivity.


Doug Shelton replied on Fri, 2013/09/06 - 1:57am

Martin:

I get your points (I think), but I assume you do understand why it is that management wants to measure productivity [I can provide a boatload of example reasons if desired].

So perhaps you are right, certainly considering the examples you've given. But that said, perhaps the appropriate question to consider is: "How can we appropriately measure productivity?" I do not think you've made a solid case for why productivity simply cannot be measured; rather, you provided some examples of the wrong way to measure productivity, which does not at all mean productivity cannot be measured (albeit I agree that figuring out how to measure it may indeed be quite difficult). Also, I think you obfuscated the issue by talking about the value of software delivered. I think you can definitely separate the value of software delivered from productivity, for the purpose of zeroing in on whether a programmer simply isn't performing up to snuff, which is one of the key reasons I'd give for why management wants to measure productivity (regardless of what it is that might be limiting said programmer's productivity).

Your thoughts on this assertion?

Wade Chandler replied on Thu, 2013/09/12 - 6:24am in response to: Doug Shelton

Often, part of the issue with "is a programmer performing up to snuff" as it relates to productivity is that management, project managers, owners, and other stakeholders are not doing their jobs as well as they could. Most of the time in software, even when using tools and libraries the team has used many times, the systems being built are not cookie-cutter systems.

Requirements, the quality of those requirements, priorities, and deadlines (whether arbitrarily set or based on fact) all play a role. Nearly always, including in organizations that say they are agile, the blame for perceived performance problems falls on developers, when it is the planning and plan execution that are failing.

Egos then enter the situation, and instead of thinking about those facts in a way that really takes the budget and time to market into account, most projects wind up trying to thread a needle. The input needed to understand such issues isn't accepted and processed in a way conducive to moving forward more sanely, or to starting that way. Too often it is ignored, or people get upset and continue to slog along, unless they get lucky and run out of requirements, until inevitably they wind up wondering why things are taking longer than they thought they should or why they are over budget.

Of the two, only being over budget is a quantifiable fact. Of course one can have a good idea of whether one is missing a time-to-market window, but that is undeniably a separate set of information from the plan and schedule, even though their executions are linked.

If a developer exercises what the industry considers best practices, or can easily be taught to leverage them, and isn't stubborn or burdened with attitude problems, and thus can take input and adjust to the needs at hand, then it is hard to say they are not up to snuff. Anyone with decent experience in development and in working with people can recognize a decent developer, as well as gauge their skill level, aptitude, and emotional stability.

So yes, there are bad developers, but if you have developers matching the good traits above, then even ones who are slow at typing can execute. The ones who don't match, you need to drop, unless of course they are interns or juniors, you budgeted for that, and you think they are teachable.

But you cannot discount the plan and its execution. More stress on top of a bad plan or arbitrary dates won't help you get more work done or meet your goal, but it can drive off good help or burn people out. That in turn has the opposite effect, and to me it points to the reason for much project failure. You can dig a ditch quicker up to a point, but you can't short-change the best practices of work and quality control.

If you get a ditch that doesn't drain, you spend more time fixing it than if you dug it right the first time. The schedule, plan, and budget determine your ditch. Those with managers who know how to execute in that realm will be the successful ones.  

Alexander Von Z... replied on Thu, 2013/09/12 - 12:07pm

Great article. I came to the same conclusion. It is basically impossible to measure the productivity of developers or teams in a way that is fair and balanced. All of the metrics used for this purpose have serious flaws.

But I think there is still a way to achieve metric-based performance improvements if we look at the concept of "technical debt". This concept has gained a lot of traction in the last couple of years, and when I look at my own practical experience with using it, I would say the results are quite promising. By measuring certain aspects of technical debt it is possible to limit or even reverse its accumulation. That leads to better software that is easier to understand and therefore easier to maintain. At our company we are focusing on structural aspects of technical debt (broken architecture, cyclic dependencies, etc.) and that has helped us tremendously. That experience is confirmed by other companies that also measure technical debt on at least a daily basis.
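
A hypothetical sketch of one such structural measurement, assuming the module dependency graph has already been extracted from the codebase: cyclic dependency groups can be found with a standard strongly-connected-components pass (Tarjan's algorithm), and their count tracked over time.

    # Minimal sketch: deps maps each module to the modules it imports.
    def cyclic_groups(deps):
        index, low, stack, on_stack, groups = {}, {}, [], set(), []
        counter = [0]

        def strongconnect(v):
            index[v] = low[v] = counter[0]
            counter[0] += 1
            stack.append(v)
            on_stack.add(v)
            for w in deps.get(v, []):
                if w not in index:
                    strongconnect(w)
                    low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:
                component = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    component.append(w)
                    if w == v:
                        break
                if len(component) > 1:  # more than one module -> a dependency cycle
                    groups.append(component)

        for v in list(deps):
            if v not in index:
                strongconnect(v)
        return groups

    example = {
        "billing": ["orders"],
        "orders": ["customers", "billing"],  # billing <-> orders form a cycle
        "customers": [],
    }
    print(cyclic_groups(example))  # [['orders', 'billing']]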

Edward Villanueva replied on Mon, 2013/10/28 - 6:18pm

Impressive information  you have here. I'm glad to find this page, I will surely check this site regularly. 
