Continuous integration is the practice of conducting code builds at least once a day to catch small integration problems before they grow into showstoppers. The more often programmers test a fully integrated system, the more opportunities they have to check the functional integrity of their applications, which leads to better products. Development teams all over the world are adopting this best practice.
Continuous integration also has a hidden benefit: it sets teams up to find and fix performance problems. The trouble with performance is that nobody cares about it until there's a problem, and by the time a performance issue makes its presence known, it's often very difficult to find and fix. Programming teams who practice continuous integration have the advantage of being able to test performance on a regular, automated basis.
Consider the seven key steps in the typical development process:
- Requirements definition
- Design
- Coding
- Unit testing
- Acceptance testing
- System testing
- Performance testing
Long-running queries, unnecessary executions, excessive result sets, and other performance issues usually don't surface until the acceptance testing phase, the fifth stage of development, or later. Making matters worse, teams usually don't treat those problems as a priority and don't deal with them until much later, when the application moves out of QA with a punch list of lag times that must be corrected. As a result, programmers can spend weeks finding and fixing performance problems, sometimes spending as much as 20 percent of the development process in an abysmal bug hunt (based on statistical feedback from a survey of the IronGrid user base). Sluggish performance can also lead to design changes, an increasingly difficult prospect as the investment in code grows. Quite often all this leads to late delivery or, if that's not an option, to shipping poorly performing code.
For developers who practice continuous integration, some of the fundamental aspects of continuous performance will sound familiar:
- Test performance at least nightly, just as unit tests confirm functionality. This “seize the build” approach ensures that programmers have frequent opportunities to check performance as the application grows.
- Reduce the amount of code that requires testing and tuning, which cuts down performance-tuning time. As with unit tests, performance tests are easier to apply to smaller bits of code. Testing with every build therefore makes sense both for functionality and for performance. Continuous performance, like other forms of automated testing, drives code that’s cleanly factored and easier to read and maintain.
- Minimize variances in performance specifications. The more closely an application adheres to performance specs as lines of code mount, the easier it is to manage performance throughout the development process.
- Improve performance of the final product. Performance degradation worsens over time, so keeping performance in check with every build results in faster code on the back end.
- Sustain competitiveness with better-performing applications. The output of every development team, whether an in-house app or a major commercial software release, directly impacts the relative success of their organization.
- Get more efficient. Continuous performance lets you capture requirements in automated test cases. Instead of writing extra code that might improve performance, programmers can focus on performance only when it’s needed and where it’s most appropriate.
Any process change sounds like a lot of work at first, but here’s the good news: implementing continuous performance doesn’t have to be painful, and it can be just as beneficial as performing unit tests with every build.
Establishing a Process
For a continuous performance strategy to take hold, performance must be a priority from the earliest stages of the project. It can't be left until acceptance testing or later. True, some problems don't manifest themselves until the code runs in an integrated environment, such as when a database application is finally tested with a genuine, live load or when an application makes its first calls to a mainframe. So while you can't catch every performance error, you can, and should, catch some early.
Development teams who already conduct daily builds and unit tests can easily enhance their environments with continuous performance techniques. The following seven steps provide a roadmap to help developers carve out the necessary time within their existing programming environments while building on the proven discipline of their existing continuous integration efforts.
Step One: Include performance in the requirements definition.
If performance is addressed up front, extending the scope of the project to include specific parameters or overall goals is easy. Also, the earlier a problem takes root in a process, the more costly and difficult it is to mitigate it down the road.
Step Two: Work performance testing into the development timeline.
Product development and delivery schedules are based on competitive issues, the scope of the feature set, and the resources applied to product development. Because these factors determine the timeline over all else, development schedules are often unrealistic. By scheduling performance tests throughout development, ideally with every build, teams can actually buy more time because they are spared long and arduous tuning sessions at the project's end. The less time required to address bottlenecks at the end of the process, the less pressure developers feel to maintain a breakneck coding pace.
Step Three: Conduct performance tests on a regular basis, at least nightly.
A functional integrity test is a golden opportunity to also test performance, which can be a seamless addition to your continuous build process. It’s important to define a quantifiable standard for performance, to establish a pass/fail point. This makes it easier to keep coding until the software passes, just as many programmers write to pass unit tests.
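As an illustration, a quantifiable pass/fail point can be as simple as a wall-clock budget enforced by the nightly build. The sketch below uses only the standard library; the `reportGenerator` stand-in and the 50-millisecond budget are hypothetical, not part of any particular tool.

```java
// Minimal sketch of a pass/fail performance gate (hypothetical names and budget).
public class PerfGate {

    // Times a task and reports whether it beat the budget, in milliseconds.
    public static boolean withinBudget(Runnable task, long budgetMillis) {
        long start = System.nanoTime();
        task.run();
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis <= budgetMillis;
    }

    public static void main(String[] args) {
        // Stand-in for the code under test; a real build would exercise the application.
        Runnable reportGenerator = () -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        };

        boolean passed = withinBudget(reportGenerator, 50);
        System.out.println(passed ? "PERF PASS" : "PERF FAIL");
        // The nightly build script can fail the build on a nonzero exit code.
        if (!passed) System.exit(1);
    }
}
```

Because the check is a simple boolean, it slots into a build exactly like a unit test: the build is red until the code meets the budget.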
Step Four: Constantly monitor performance activity.
Performance monitoring involves a series of tests, some of which may be conducted nightly. Others, however, might occur only when you have enough code to see how different units interact with one another. Examples include monitoring CPU consumption, capturing stack traces and call history, identifying how many times a function is called and by which other functions, profiling memory, tracking threads, identifying thread contention or starvation, and measuring EJB execution times to spot long-running EJBs. Plenty of tools are available to measure these things, but developers need to make a record of their findings to create a baseline of code performance (see Step Five).
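Several of these measurements are available directly from the JDK's `java.lang.management` API, with no third-party tooling. The one-line snapshot format below is a hypothetical sketch of the kind of record worth keeping from build to build.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Sketch: capture a few of the metrics named above using the standard JMX beans.
public class PerfSnapshot {

    public static String capture() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        long heapUsed = memory.getHeapMemoryUsage().getUsed();      // bytes of heap in use
        int liveThreads = threads.getThreadCount();                 // current live threads
        long[] deadlocked = threads.findMonitorDeadlockedThreads(); // null when none

        return "heapUsed=" + heapUsed
             + " threads=" + liveThreads
             + " deadlocked=" + (deadlocked == null ? 0 : deadlocked.length);
    }

    public static void main(String[] args) {
        // Record one line per build so the numbers can be tracked over time.
        System.out.println(capture());
    }
}
```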
Step Five: Track the performance history as the application grows.
Establishing a performance history for every application is crucial for a truly effective continuous performance environment. By tracking how the code performs, you can establish a performance baseline, or signature, for your application. This signature allows you to see exactly what happened when new code was introduced to different parts of the application. For instance, two units might perform fine separately. But when you integrate them, you may notice a significant bottleneck. Knowing the history of each unit will help you identify the root cause of that problem, so you can fix it before it is compounded with the next build or integration.
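One minimal way to keep such a history is a flat file of per-build timings, with each new number compared against the running average. The file name, the 20 percent tolerance, and the hard-coded timing below are all hypothetical; a real build would substitute a measured value.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

// Sketch: track a performance baseline, or signature, across builds.
public class PerfHistory {

    // Mean of all recorded timings: the application's current signature.
    public static double baseline(List<Long> timingsMillis) {
        long total = 0;
        for (long t : timingsMillis) total += t;
        return timingsMillis.isEmpty() ? 0 : (double) total / timingsMillis.size();
    }

    // True when tonight's number exceeds the baseline by more than the tolerance.
    public static boolean regressed(long currentMillis, double baselineMillis, double tolerance) {
        return currentMillis > baselineMillis * (1 + tolerance);
    }

    public static void main(String[] args) throws IOException {
        Path history = Paths.get("perf-history.csv"); // hypothetical history file
        long tonight = 130; // stand-in for a measured timing, in milliseconds

        List<Long> past = new ArrayList<>();
        if (Files.exists(history)) {
            for (String line : Files.readAllLines(history)) past.add(Long.parseLong(line.trim()));
        }

        if (!past.isEmpty() && regressed(tonight, baseline(past), 0.20)) {
            System.out.println("PERF REGRESSION vs baseline " + baseline(past));
        }
        // Append tonight's number so the signature grows with the application.
        Files.write(history, (tonight + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```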
Step Six: Use that data to get a fix on the code responsible for bottlenecks.
By comparing previous snapshots of your application and noting performance degradations, or even improvements, you can identify which code is causing that result. The history becomes invaluable here, because programmers will forget when they introduced which pieces of code. The best approach is to leverage tracking tools that can record test results, even though the market is woefully underserved here. Most tools limit themselves to code versioning and test case monitoring.
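As a sketch of this comparison, suppose each build records per-method timings; diffing two builds' numbers immediately names the methods that slowed down. The method names and the 50-millisecond threshold in the example are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: given per-method timings (milliseconds) from two builds, list the
// methods that got slower by more than a threshold, pointing straight at the
// code behind a new bottleneck.
public class PerfDiff {

    public static Map<String, Long> slowdowns(Map<String, Long> previous,
                                              Map<String, Long> current,
                                              long thresholdMillis) {
        Map<String, Long> result = new LinkedHashMap<>();
        for (Map.Entry<String, Long> entry : current.entrySet()) {
            Long before = previous.get(entry.getKey());
            if (before != null && entry.getValue() - before > thresholdMillis) {
                result.put(entry.getKey(), entry.getValue() - before); // slowdown amount
            }
        }
        return result;
    }
}
```

For example, if `OrderDao.findAll` went from 100 ms in last night's build to 250 ms tonight while everything else held steady, the diff reports only that method, so tuning effort lands exactly where the regression was introduced.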
Step Seven: Use your knowledge to efficiently tune only the code that caused the performance problem.
This is the step where the continuous performance strategy pays off generously. For the first time, developers will have a way to pinpoint exactly where they introduced performance-sapping code. They can then fix the problem before it is buried beneath dozens of new builds. Because intuition, even in the best of developers, is a poor way to identify problem sources, each programmer can significantly reduce total effort by optimizing only broken code.
These seven steps provide developers with a process for creating an ongoing history of the performance of their applications: a performance signature that can be consulted any time a bottleneck emerges that must be tuned. In the end, the weeks of tuning before the product ships can be dramatically reduced or even eliminated. And no longer will software companies and development teams wait until the final stages of development, or even later, to address performance problems.
Tools: The Final Step
Acquiring tools and technologies usually is easier than changing the way we work. Yet while nearly any development team can establish a continuous performance environment without organizational changes or major procedural upheavals, few know where to find tools that can help them set a performance baseline and test against it.
Some open source tools have made initial progress. For example, JUnitPerf is a simple extension of JUnit that lets every programmer time-box a given JUnit test case, or even test it under load. It does require some extra effort to set up and tear down test cases, and its load testing is relatively simplistic. But it’s a good start.
In addition to open source software, a new generation of commercial tools is emerging. Some companies like Borland are moving their performance suites into the areas of acceptance testing with some level of automation. Most larger companies, though, focus on lucrative development environments and post-production tools. For the most part, continuous performance is the domain of small companies whose software is solely devoted to enabling these strategies.
The Benefits of Continuous Performance
More and more, companies large and small are recognizing the need for ongoing performance testing, and they are working to introduce products to meet that need. The products bound to succeed will address all the key aspects of a continuous performance implementation, including the ability to monitor and track performance over time. Meanwhile, server-based solutions that allow entire development groups to automate their unit tests will also be crucial for full-fledged solutions.
With the right guidelines and effective tools, continuous performance devotees will benefit from much more than simply saving a couple of weeks of code tuning. Development teams will preserve product schedules, avoid needless debugging after an application has been deployed, deliver better quality code to internal users and external customers, create happier and more productive programming teams, streamline post-development steps including QA and beta programs, and reduce the costs of supporting products.
These benefits should make the easy transition to a continuous performance environment an even easier sell to management. The financial costs are minimal, while the potential benefits can shoot straight to the bottom line.