Continuous Performance: A Best Practice to Ensure Faster Code: Page 2

Continuous performance is a new practice for detecting and fixing performance problems from the early stages of software development. Learn how to implement this process, which drives code that's cleanly factored and easier to read and maintain.


Establishing a Process
For a continuous performance strategy to take hold, performance must be a priority from the earliest stages of the project. It can't be left until acceptance testing or later. True, some problems don't manifest themselves until the code runs in an integrated environment, such as when a database application is finally tested under a genuine, live load or when an application makes its first calls to a mainframe. So while you can't catch every performance error, you can—and should—catch some early.

Development teams that already conduct daily builds and unit tests can easily enhance their environments with continuous performance techniques. The following seven steps provide a roadmap for carving the necessary time out of an existing development schedule while building on the discipline those teams have already proven in their continuous integration efforts.

Step One: Include performance in the requirements definition.
If performance is addressed up front, extending the scope of the project to include specific parameters or overall goals is easy. Also, the earlier a problem takes root in the process, the more costly and difficult it is to mitigate down the road.



Naturally, applying a tremendous level of granularity to those metrics may not be realistic early on, but some overall performance specs for the application (such as 1.5 seconds to generate a Web page that makes three calls to a SQL database) are useful in setting the bar for more specific goals that can be established as the application takes shape. (Product managers should work in tandem with the development team during this step to set realistic and meaningful performance specs.)

Step Two: Work performance testing into the development timeline.
Product development and delivery schedules are based on competitive pressures, the scope of the feature set, and the resources applied to product development. Because these factors determine the timeline above all else, development schedules are often unrealistic. By scheduling performance tests throughout development (and ideally with every build), teams can actually buy more time because they are spared long and arduous tuning sessions at the project's end. The less time required to address bottlenecks at the end of the process, the less pressure developers feel to maintain a breakneck coding pace.

Step Three: Conduct performance tests on a regular basis—at least nightly.
A functional integrity test is a golden opportunity to also test performance, which can be a seamless addition to your continuous build process. It's important to define a quantifiable standard for performance, to establish a pass/fail point. This makes it easier to keep coding until the software passes, just as many programmers write to pass unit tests.
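As a rough illustration, that pass/fail point can live right in the nightly build as a test. The sketch below is a minimal, framework-free Java example; renderProductPage() is a hypothetical stand-in for the page-generation code from Step One, and the 1,500 ms budget simply restates that spec. A team's existing test framework (JUnit, for instance) would serve just as well.

public class PageRenderPerfTest {

    // Budget taken from the Step One spec: 1.5 seconds to generate the page.
    private static final long BUDGET_MILLIS = 1500;

    public static void main(String[] args) {
        long start = System.nanoTime();

        renderProductPage(); // hypothetical unit under test

        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMillis > BUDGET_MILLIS) {
            // Failing here breaks the nightly build, just like a failed unit test.
            throw new AssertionError("Page render took " + elapsedMillis
                    + " ms; budget is " + BUDGET_MILLIS + " ms");
        }
        System.out.println("PASS: page rendered in " + elapsedMillis + " ms");
    }

    // Stand-in for the real page-generation code; replace with your own call.
    private static void renderProductPage() {
        try {
            Thread.sleep(200); // simulate work so the sketch runs on its own
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}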

Step Four: Constantly monitor performance activity.
Performance monitoring involves a series of tests, some of which may be conducted nightly. Others, however, might occur only when you have enough code to see how different units interact with one another. Examples include monitoring CPU consumption, capturing stack traces and call history, identifying how many times a function is called and by which other functions, profiling memory, tracking threads, identifying thread contention or starvation, and measuring EJB execution time to spot long-running EJBs. Plenty of tools are available to measure these things, but developers need to make a record of their findings to create a baseline of code performance (see Step Five).
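On the Java platform, several of these measurements are available from the standard java.lang.management API with no third-party tooling. The snapshot routine below is only a sketch of what a nightly monitoring step might record; the output format is illustrative, and getCurrentThreadCpuTime() may return -1 on JVMs that don't support CPU-time measurement.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class PerfSnapshot {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        // Heap usage gives a rough memory profile for the current run.
        long heapUsedMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);

        // Thread count and deadlock detection help spot contention or starvation.
        int liveThreads = threads.getThreadCount();
        long[] deadlocked = threads.findDeadlockedThreads();

        // CPU time consumed by the calling thread, in milliseconds (-1 if unsupported).
        long cpuMillis = threads.getCurrentThreadCpuTime() / 1_000_000;

        System.out.printf("heapUsedMb=%d liveThreads=%d deadlockedThreads=%s cpuMillis=%d%n",
                heapUsedMb, liveThreads,
                deadlocked == null ? "none" : deadlocked.length, cpuMillis);
    }
}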

Step Five: Track the performance history as the application grows.
Establishing a performance history for every application is crucial for a truly effective continuous performance environment. By tracking how the code performs, you can establish a performance baseline, or signature, for your application. This signature allows you to see exactly what happened when new code was introduced to different parts of the application. For instance, two units might perform fine separately. But when you integrate them, you may notice a significant bottleneck. Knowing the history of each unit will help you identify the root cause of that problem, so you can fix it before it is compounded with the next build or integration.
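The history itself can start out as something as simple as a flat file that each nightly run appends to. In the sketch below, the file name perf-history.csv and the record() helper are assumptions made for illustration, not features of any particular tool; a real team might instead feed the same data into a database or a reporting dashboard.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.time.LocalDate;

public class PerfHistory {

    // Appends one CSV row per nightly run: date, test name, elapsed milliseconds.
    public static void record(String testName, long elapsedMillis) throws IOException {
        Path history = Paths.get("perf-history.csv");
        String row = LocalDate.now() + "," + testName + "," + elapsedMillis
                + System.lineSeparator();
        Files.write(history, row.getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        // In practice the elapsed time comes from the nightly performance test;
        // the value here just makes the sketch runnable on its own.
        record("renderProductPage", 1120);
    }
}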

Step Six: Use that data to get a fix on the code responsible for bottlenecks.
By comparing previous snapshots of your application's performance and noting degradations or even improvements, you can identify which code caused the change. The history becomes invaluable here, because programmers will forget when they introduced which pieces of code. The best approach is to leverage tracking tools that can record test results, even though the market is woefully underserved here; most tools limit themselves to code versioning and test-case monitoring.
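Until better tooling arrives, even a small comparison step against the recorded history can flag degradations automatically. This sketch hard-codes the baseline and current figures so it stays self-contained; in practice both would come from the perf-history.csv file assumed in Step Five, and the 20 percent tolerance is an arbitrary example rather than a recommended threshold.

import java.util.HashMap;
import java.util.Map;

public class RegressionCheck {

    // Flag any test that runs more than 20 percent slower than its recorded baseline.
    static final double TOLERANCE = 0.20;

    public static void main(String[] args) {
        // Baselines would normally be loaded from the Step Five history;
        // they are hard-coded here to keep the sketch self-contained.
        Map<String, Long> baselineMillis = new HashMap<>();
        baselineMillis.put("renderProductPage", 1120L);
        baselineMillis.put("checkoutOrder", 800L);

        Map<String, Long> currentMillis = new HashMap<>();
        currentMillis.put("renderProductPage", 1150L); // within tolerance
        currentMillis.put("checkoutOrder", 1400L);     // regression from a new build

        for (Map.Entry<String, Long> entry : currentMillis.entrySet()) {
            long baseline = baselineMillis.getOrDefault(entry.getKey(), entry.getValue());
            double change = (entry.getValue() - baseline) / (double) baseline;
            if (change > TOLERANCE) {
                System.out.printf("REGRESSION %s: %d ms vs baseline %d ms (+%.0f%%)%n",
                        entry.getKey(), entry.getValue(), baseline, change * 100);
            }
        }
    }
}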

Step Seven: Use your knowledge to efficiently tune only the code that caused the performance problem.
This is the step where the continuous performance strategy pays off generously. For the first time, developers have a way to pinpoint exactly where they introduced performance-sapping code, and they can fix the problem before it is buried beneath dozens of new builds. Because intuition, even in the best developers, is a poor way to identify the source of a problem, each programmer can significantly reduce total effort by optimizing only the code that is actually broken.

These seven steps give developers a process for creating an ongoing history of their applications' performance—a performance signature that can be referred to any time a bottleneck emerges that must be tuned. In the end, the weeks of tuning before the product ships can be dramatically reduced or even eliminated. And no longer will software companies and development teams wait until the final stages of development—or even later—to address performance problems.


