James Newkirk points out that the absolute measurement of code coverage is not a very interesting number by itself, but must be viewed as part of the overall trend of coverage. He also points out that it is acceptable for some categories of code to have lower, or even zero, test coverage.
These are examples of one of my fundamental rules of metrics.
Rule #1: Metrics Must Be Interpreted In Context
A few examples of context are:
- Within a time series. A code coverage of 20% could be very good if the team is just beginning to add automated tests to a legacy system. On the other hand, a code coverage of 85% could be poor if the project was at 90% coverage at the last release.
- By category of code. James uses the example of web service wrappers generated by Visual Studio and views within a Model-View-Presenter pattern. I agree with James that generated code does not need unit tests. However, you should be particularly thorough in testing any code generators you write.
- Correlations with other metrics. If pressed to come up with an arbitrary threshold for a metric, I like to do it within the context of another metric. For example, I might say that the unit test line coverage must be greater than 80% for all methods with a cyclomatic complexity greater than one (see the sketch after this list). A key metric to correlate with is defects: code that has proven to be buggy in production should get closer attention.
- Compared to other projects in your organization. If every other software project at your company has 90% code coverage, you had better have a good reason for only having 70%.
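To make the correlation idea concrete, here is a minimal sketch in Java. The MethodMetrics record is hypothetical; real tools report coverage and complexity in their own formats, so you would adapt the data loading to whatever your toolchain produces. It simply flags methods that break the rule stated above: more than trivial complexity but no more than 80% line coverage.

```java
import java.util.List;

public class CoverageComplexityCheck {

    // Hypothetical per-method metrics; substitute your tool's actual output.
    record MethodMetrics(String name, double lineCoverage, int cyclomaticComplexity) {}

    // Flags methods that violate the rule: line coverage must exceed 80%
    // for any method with cyclomatic complexity greater than one.
    static List<MethodMetrics> findViolations(List<MethodMetrics> methods) {
        return methods.stream()
                .filter(m -> m.cyclomaticComplexity() > 1 && m.lineCoverage() <= 0.80)
                .toList();
    }

    public static void main(String[] args) {
        List<MethodMetrics> metrics = List.of(
                new MethodMetrics("parseOrder", 0.95, 6),   // complex but well covered
                new MethodMetrics("getId", 0.00, 1),        // trivial accessor, exempt
                new MethodMetrics("applyDiscount", 0.55, 4) // complex and under-covered
        );
        findViolations(metrics).forEach(m ->
                System.out.println("Under-tested: " + m.name()));
    }
}
```

The point of the sketch is that the threshold only applies where the second metric says it matters: trivial accessors with complexity one are exempt, which is exactly the kind of context a raw coverage number hides.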
There is a lot more to say on this subject. Stay tuned.