Episteme

Mike's random thoughts and ramblings

Metrics and Incentives

There's a great discussion going on in the security blogosphere about the importance of metrics - Mogull, Amrit and Rothman are talking about them.

I'm going to fall on the same side as Amrit on this one, as metrics have always been one of my favorite things to play with. The problem, as Mike points out, is that the wrong metric is worse than no metric at all. Anybody who has known me for a while has heard me rant about the overly simplistic metrics we use in technology, and about the importance of finding metrics that actually help solve problems. And, as is evident in the articles that started this whole discussion over at Joel on Software (here and here), metrics have been given a really bad name in technology. From the articles:

"Software organizations tend to reward programmers who (a) write lots of code and (b) fix lots of bugs. The best way to get ahead in an organization like this is to check in lots of buggy code and fix it all, rather than taking the extra time to get it right in the first place."

And:

"The whole fraud is only possible because performance metrics in knowledge organizations are completely trivial to game."

Joel seems to think this is the case because there's some inherent flaw in what you're trying to measure. I would argue that the measurement isn't the real problem - the incentives the metric creates are. Each thing you measure and reward (or discourage) creates a behavioral incentive. In the case above, measuring only the volume of code written (without regard to quality) and the remediation of bad code (without rewarding avoiding bad code in the first place) creates a situation where the metric is, as Joel points out, "trivial to game".
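
To make the perverse incentive concrete, here's a minimal sketch in Python of that "lots of code, lots of fixes" scoring scheme. The names and numbers are entirely hypothetical - they're my illustration, not anything from Joel's articles:

```python
from dataclasses import dataclass

@dataclass
class Programmer:
    name: str
    lines_written: int
    bugs_fixed: int

def naive_score(p: Programmer) -> int:
    """Reward volume of code and volume of fixes - nothing else."""
    return p.lines_written + p.bugs_fixed

# A careful developer who gets it right the first time...
careful = Programmer("careful", lines_written=500, bugs_fixed=2)
# ...versus one who checks in buggy code and then fixes their own bugs.
sloppy = Programmer("sloppy", lines_written=2000, bugs_fixed=80)

print(naive_score(careful))  # 502
print(naive_score(sloppy))   # 2080 - the gamed behavior "wins"
```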

This is why metric development has to be an iterative process. What most companies do when they realize they've incentivized bad behavior is exactly what Joel suggests: dump the metrics altogether. Unfortunately, that's the wrong approach - the point of creating metrics is to measure and incentivize, and the fact that people are gaming the system demonstrates something quite simple:

Measuring things works to modify behavior. If it didn't, people wouldn't be gaming the system.

The task at this point is to ask yourself the hard questions: what do I want to incent my people to do? Where are we now? And how can we alter the metrics to incentivize the correct behavior - how can we correct the metric?

Let's take the rather simple example Joel gave us: what if we took metrics (a) and (b) above, added (c) create few bugs, and built a composite metric something like the following:

Performance = (amount of code + bugs fixed) / (bugs created)
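
As a minimal sketch, here's what computing that composite metric might look like - assuming you can actually count code written, bugs fixed, and bugs introduced per developer, which is itself a non-trivial assumption (the function and counters here are hypothetical):

```python
def performance(amount_of_code: int, bugs_fixed: int, bugs_created: int) -> float:
    """Composite metric: (amount of code + bugs fixed) / (bugs created)."""
    # A developer who has introduced no bugs makes the denominator zero,
    # so that edge case has to be handled deliberately.
    if bugs_created == 0:
        return float("inf")
    return (amount_of_code + bugs_fixed) / bugs_created

print(performance(2000, 80, 75))  # busy but sloppy: ~27.7
print(performance(500, 2, 3))     # careful: ~167.3
```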

How would you game that? Let's iterate through this process in the comments - let me know how you'd game it, and I'll modify the metric. (In case it's not obvious, I've done this iteration before.)


About the author

Michael Murray