So continuous integration is an important part of your development process, and you're eager to include a code-coverage check as part of your automated builds. But which coverage-rate targets should your development team set? Seasoned supporters of code coverage suggest that you aim for 75 percent, 85 percent, or perhaps even 100 percent coverage.
After measuring the baseline coverage rates for a project at my organization, I realized our development team needed to set its sights much lower than those targets. I didn't want to become the team's Colonel Cathcart, the character from Joseph Heller's novel Catch-22 who arbitrarily raised his expectations as soon as it appeared his squadron would achieve the current ones:
Yossarian slumped with disappointment. "Then I really have to fly the fifty missions, don't I?" he grieved.
"The fifty-five," Doc Daneeka corrected him.
"The fifty-five missions the colonel now wants all of you to fly."
Excerpt from Catch-22 by Joseph Heller
Rather than setting arbitrary targets, I chose the strategy of incremental improvement. To successfully execute this strategy, each build must have equal or better coverage than the previous successful build. By taking many small steps I hoped to achieve a giant leap in quality.
This article describes how you can apply the incremental improvement strategy to your code coverage using Cobertura and Apache Ant.
Unit Testing, Code Coverage, and Continuous Integration
Unit testing, code coverage, and continuous integration are all widely accepted best practices. In fact, most developers know they should unit test religiously. If you're not already one of the converted, let me paraphrase Google Director of Research Peter Norvig:
If you think you don't need to write unit tests for your code, write down all the reasons on a piece of paper. Study the paper carefully. Then throw away the paper and write tests anyway.
But who tests the tester? That is, how do you verify that you are writing enough tests? That information is valuable because the code your tests aren't exercising is exactly where you should focus your energy. One solution is to use a code-coverage tool, which tells you the percentage of code your tests exercise, and then to incorporate a coverage check into your normal integration process. If the coverage check fails, your build should also fail.
For my incremental improvement strategy, I chose the code-coverage tool Cobertura because of its simple, well-defined interface of four Ant tasks. One of these tasks, cobertura-check, causes a build to fail when code doesn't achieve the required coverage rate. For example, this Ant target will cause the build to fail if the coverage rate slips below 80 percent:
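A minimal sketch of such a target follows. It assumes the Cobertura tasks have already been loaded with a taskdef elsewhere in the build file, and that cobertura.ser is the data file produced during instrumentation:

```xml
<target name="coverage-check" depends="coverage-report">
  <!-- Fail the build if total line coverage drops below 80 percent.
       datafile points at the coverage data written when the classes
       were instrumented; haltonfailure defaults to true. -->
  <cobertura-check datafile="cobertura.ser" totallinerate="80"/>
</target>
```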
But instead of hard-coding the line rate, you should use the result of an earlier build as the target rate for the current check. You can achieve this by chaining a couple of the Cobertura tasks with two core Ant tasks. And don't worry about whether to measure line rate, branch rate, or some other coverage metric: your goal is a marked improvement, not an absolute target or a debate over which metric to choose.
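Here is one possible sketch of that chain. It assumes the previous successful build archived its Cobertura XML report as previous/coverage.xml, and uses Ant's xmlproperty task to read the report's line-rate attribute (a decimal between 0 and 1) plus a script task to scale it to the whole-number percentage that cobertura-check expects. The file names and property names are illustrative, not anything Cobertura requires:

```xml
<target name="coverage-check" depends="coverage-report">
  <!-- Read line-rate off the root <coverage> element of the previous
       build's report; it becomes the property coverage(line-rate),
       holding a value such as "0.83". -->
  <xmlproperty file="previous/coverage.xml"/>

  <!-- Convert the decimal rate into the integer percentage that
       cobertura-check expects. Requires a JavaScript engine on
       Ant's classpath. -->
  <script language="javascript"><![CDATA[
    var rate = parseFloat(project.getProperty("coverage(line-rate)"));
    project.setProperty("target.linerate", "" + Math.floor(rate * 100));
  ]]></script>

  <!-- Fail the build if total line coverage slips below the rate
       achieved by the previous successful build. -->
  <cobertura-check datafile="cobertura.ser"
                   totallinerate="${target.linerate}"/>
</target>
```

Because the check always uses the last successful build's rate as its floor, coverage can only ratchet upward over time.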
(Read an introduction to using Cobertura here. If you use a tool that doesn't have an equivalent to the cobertura-check task, check out the Ant fail task, but be prepared to write your own custom condition.)
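If you do go the fail-task route, the sketch below shows the shape it might take. CoverageAtLeast is a hypothetical condition class you would write yourself, implementing org.apache.tools.ant.taskdefs.condition.Condition, because Ant's core conditions don't do numeric comparison; the property names are likewise illustrative:

```xml
<!-- Register the hypothetical custom condition so it can be used
     inside <condition> blocks. -->
<typedef name="coverageatleast" classname="com.example.CoverageAtLeast"/>

<!-- Fail the build when the current rate falls short of the rate
     achieved by the previous successful build. -->
<fail message="Coverage fell below the previous build's rate">
  <condition>
    <not>
      <coverageatleast actual="${current.linerate}"
                       required="${previous.linerate}"/>
    </not>
  </condition>
</fail>
```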