Continuous integration is the practice of conducting code builds at least once a day to catch small integration problems before they grow into showstoppers. The more often programmers test a fully integrated system, the more opportunities they have to check the functional integrity of their applications, which leads to better products. Development teams all over the world are adopting this best practice.
Continuous integration also has a hidden benefit: it sets teams up to find and fix performance problems. The trouble with performance is nobody cares about it until there's a problem, and by the time a performance issue makes its presence known it's often very difficult to find and fix. Programming teams who practice continuous integration have the advantage of being able to test performance on a regular and automated basis.
Consider the seven key steps in the typical development process:
- Requirements gathering
- Design
- Coding
- Unit testing
- Acceptance testing
- System testing
- Performance testing
Long-running queries, unnecessary executions, excessive result sets, and other performance issues usually don't surface until the acceptance testing phase (the fifth stage of development) or later. Making matters worse, teams usually don't see those problems as a priority and don't deal with them until much later, when the application moves out of QA with a punch list of lag times that must be corrected. As a result, programmers can spend weeks finding and fixing performance problems, sometimes spending as much as 20 percent of the development process in an abysmal bug hunt (based on statistical feedback from a survey of the IronGrid user base). Sluggish performance can also lead to design changes, an increasingly difficult prospect as the investment in code grows. Quite often all this leads to late delivery or, if that's not an option, shipping poorly performing code.
To address performance problems earlier in the development process, teams can implement a new practice called continuous performance, which borrows from the continuous integration concept to establish a specific and timesaving process for detecting and fixing performance problems.
For developers who practice continuous integration, some of the fundamental aspects of continuous performance will sound familiar:
- Test performance at least nightly, just as unit tests confirm functionality. This "seize the build" approach ensures that programmers have frequent opportunities to check performance as the application grows.
- Reduce the amount of code that requires testing and tuning, which cuts down performance-tuning time. As with unit tests, performance tests are easier to apply to smaller bits of code. Testing with every build therefore makes sense both for functionality and for performance. Continuous performance, like other forms of automated testing, drives code that's cleanly factored and easier to read and maintain.
- Minimize variances in performance specifications. The more closely an application adheres to performance specs as lines of code mount, the easier it is to manage performance throughout the development process.
- Improve performance of the final product. Performance problems compound as the code base grows, so keeping performance in check with every build results in faster code when it's time to ship.
- Sustain competitiveness with better-performing applications. The output of every development team, whether an in-house app or a major commercial software release, directly impacts the relative success of its organization.
- Get more efficient. Continuous performance lets you capture requirements in automated test cases. Instead of writing extra code that might improve performance, programmers can focus on performance only when it's needed and where it's most appropriate.
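A performance requirement can be captured as an automated test in much the same way a unit test captures a functional requirement: time a fixed workload and fail the build if it exceeds a budget. The sketch below is illustrative only; the `lookup()` workload, the 500 ms budget, and the class name are assumptions for the example, not figures from this article or the API of any particular tool.

```java
import java.util.HashMap;
import java.util.Map;

public class PerformanceBudgetTest {

    // Hypothetical unit under test: an in-memory lookup.
    static String lookup(Map<Integer, String> table, int key) {
        return table.getOrDefault(key, "miss");
    }

    public static void main(String[] args) {
        Map<Integer, String> table = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            table.put(i, "value-" + i);
        }

        // Time a fixed batch of lookups.
        long start = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            lookup(table, i);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // The performance spec, expressed as a pass/fail check the nightly
        // build can run: an over-budget run fails just like a broken unit test.
        long budgetMs = 500; // illustrative budget, not a real spec
        if (elapsedMs > budgetMs) {
            throw new AssertionError("Performance regression: "
                    + elapsedMs + " ms exceeds " + budgetMs + " ms budget");
        }
        System.out.println("within budget: " + elapsedMs + " ms");
    }
}
```

Run nightly alongside the unit tests, a check like this turns a performance spec into something the build itself enforces, so a regression shows up the morning after the commit that caused it rather than during acceptance testing.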
Any process change sounds like a lot of work at first, but here's the good news: implementing continuous performance doesn't have to be painful, and it can be just as beneficial as performing unit tests with every build.