When measuring the performance, or productivity, of a team, choose your metrics with great care. By choosing a metric, you create an implicit incentive for the team to optimize against it.
Incentives can be a useful tool for a leader; the trick is to understand which incentives are effective, and which incentives your team is actually optimizing for.
A classic problem, especially in software development teams, is that leaders and managers unknowingly create incentives that are counterproductive. A few examples to watch out for:
Test Coverage
If you expect 100% test coverage, then there is a good chance your team will provide you with just that. The problem is that there is a world of difference between a test and a good test. Would you rather have 50% coverage using high-quality tests? Or 100% coverage with mediocre or poor tests?
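To make this concrete, here is a small sketch in Python. The function and tests are hypothetical, but they show how a test can achieve full line coverage while verifying nothing at all:

```python
# A hypothetical function under test (illustrative, not from any real codebase).
def apply_discount(price, percent):
    """Return price reduced by percent (e.g. 20 -> 20% off)."""
    return price * (1 - percent / 100)

def test_mediocre():
    # Executes every line, so a coverage tool reports apply_discount as
    # fully covered -- but nothing is asserted. A wrong formula would
    # still pass this test.
    apply_discount(100, 20)

def test_good():
    # Checks actual behavior, including a boundary case.
    assert apply_discount(100, 20) == 80
    assert apply_discount(100, 0) == 100
```

Both tests contribute identically to a coverage number; only one of them would ever catch a bug.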
Lines of Code
Lines of code is often used as a measure of productivity. There are numerous problems with this approach, and by no means am I the first to make these observations.
First, a claim: in any given software system, there is a correlation between lines of code and number of defects. That is, the greater the line count, the more bugs you have. Why would you want to incentivize your team to create more defects? If anything, you should incentivize your team to decrease the rate at which code is added, or even to decrease the total line count in your application.
However, correlation does not imply causation, and not all lines of code are created equal, so any metric based on lines of code is fraught with risk. Incentivizing the net negative production of code can lead to widespread use of clever tricks, obscure hacks, and generally unreadable and unmaintainable code.
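A toy Python example of the kind of "clever trick" a line-count incentive invites. Both versions below (the names are made up for illustration) behave identically, but the second exists only to game the metric:

```python
# Readable version: more lines, but easy to understand and modify.
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

# "Optimized" for a line-count metric: same behavior collapsed into one
# line, which is harder to read, debug, and extend.
classify_terse = lambda n: "negative" if n < 0 else ("zero" if n == 0 else "positive")
```

A metric that rewards deleting lines treats the second version as a six-line improvement, even though it made the codebase worse.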
Using work estimates to measure delivered value
You are estimating your work, right? Many teams estimate future work in hours or days. Agile teams often have a concept of story points (a story point is simply a unit of effort used to estimate future work). Estimates allow for planning (aka guessing), hiring, organizing, and any number of useful activities. However, an estimate is just that: an estimate. It captures neither the total effort actually invested to deliver a given feature nor the value that was actually created.
I'll argue that there exists a relationship between effort and value, but that it is impossible to derive one from the other. Effort results from things like expertise and complexity, not from the delivered value. Anyone who has toiled away, spending long hours on a useless deliverable can attest to this. Likewise, value is determined by things such as how much revenue can be generated, or how happy you make your users. The relationship exists between value and effort because not everything is worth the cost required to attain it.
So perhaps it is just plain wrong to use effort metrics to measure delivered value. Beyond that, you may be implicitly incentivizing the inflation of estimates, decreasing their usefulness. Even worse, you may be disincentivizing the delivery of actual value!
Software development is hard, and I have a tremendous amount of respect for teams that are successful and make the world a better place. If anything you've read here suggests that I'm trivializing the hard work, dedication, and focus it takes to create software products, that was not intentional. Just the opposite, actually: I'm saying software development is probably harder than you think it is.
Can you think of any other implicit incentives? Please share in the comments section.