Gathering the Data Elements
Managing all the network elements properly requires a great deal of element-specific knowledge. For example, a network storage element must be configured specifically to handle certain service requirements, such as database transaction performance. Often, fragments of this domain-specific knowledge reside with a few developers or IT administrators. Tools can gather these requirements together and automatically configure devices and software during service deployment.
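As a sketch of that idea, the fragment below maps declared service requirements onto storage-element settings. The profile names and configuration keys are assumptions for illustration; a real element manager exposes far richer configuration interfaces.

```python
# Hypothetical mapping from service requirements to storage-element
# settings; the requirement names and keys here are illustrative only.
REQUIREMENT_PROFILES = {
    "database_transactions": {"raid_level": 10, "cache_policy": "write-back"},
    "bulk_archive": {"raid_level": 5, "cache_policy": "write-through"},
}

def storage_config(service_requirements):
    """Assemble a storage-element configuration from declared needs."""
    config = {}
    for req in service_requirements:
        profile = REQUIREMENT_PROFILES.get(req)
        if profile is None:
            raise KeyError("no profile for requirement: " + req)
        config.update(profile)
    return config
```

Capturing such mappings in one place means the domain knowledge no longer lives only in a few administrators' heads, and deployment tooling can apply it automatically.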
For example, HP OpenView Internet Services, familiar to many developers, simulates and monitors business-application transactions in a Web environment. By monitoring both the transactions and the Internet services that complete them, this product helps developers and operations staff test, measure, and predict points of failure in complex systems.
Other complementary development and deployment monitoring tools exist, such as the HP OpenView Transaction Analyzer and the Web Transaction Observer, along with competing tools in this category. The point is that such monitoring tools let developers measure and analyze runtime behavior and feed that information back into creating the software. When developers treat runtime measurement as part of a designed maintenance process, they are better equipped to write software that achieves business and operational excellence.
Tools of this sort will be necessary to automate the collection and analysis of runtime application data. This won't happen for free. Developers need to design applications with the concept of intelligent feedback and control in mind.
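One way to design with intelligent feedback in mind is to instrument every service entry point from the start. The sketch below uses an in-process dictionary as a stand-in metrics store; in a real deployment these counters would be exported to a monitoring framework, and the function and field names are assumptions, not any particular product's API.

```python
import functools
import time

# Stand-in metrics store; a real application would export these values
# to a runtime measurement framework rather than keep them in a dict.
METRICS = {"calls": 0, "errors": 0, "total_seconds": 0.0}

def instrumented(func):
    """Wrap a service entry point so every call feeds runtime metrics."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        METRICS["calls"] += 1
        try:
            return func(*args, **kwargs)
        except Exception:
            METRICS["errors"] += 1
            raise
        finally:
            METRICS["total_seconds"] += time.perf_counter() - start
    return wrapper

@instrumented
def handle_request(order_id):
    # Stand-in for real business logic.
    if order_id < 0:
        raise ValueError("bad order id")
    return {"order": order_id, "status": "ok"}
```

The extra effort is small because the instrumentation is written once and applied uniformly, which is exactly the kind of up-front design decision the text describes.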
Typically, the biggest impact of intelligent feedback falls on the implementation phase of a project. Certainly, implementing applications that integrate into a runtime measurement framework requires some extra time and development effort. The reward for this effort is greater agility in responding to changes in customer requirements, because runtime analysis information is always visible to developers and business analysts.
The key to deciding whether the development effort is justified for your organization lies in reviewing your application deployment efficiencies and quality goals:
- Do your applications meet the quality goals you have set for them?
- Can you tell your customers that you achieve your stated service-level agreements?
- How long does it take your development organization to study, define, and create enhancements for currently deployed services?
Based on your answers, you can determine how much additional implementation time is warranted to achieve these goals. Often a small development effort introduces a meaningful business improvement, such as simply knowing how many times per day a service crashes and fails to deliver results to a customer.
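Even that simple crashes-per-day metric can be extracted from logs the service already produces. The sketch below assumes an illustrative log format; the line layout and crash marker are made up for the example.

```python
from collections import Counter

# Illustrative log lines; in practice these would come from the
# service's own log files (the exact format here is an assumption).
log_lines = [
    "2003-04-01 09:12:03 ERROR service crashed",
    "2003-04-01 14:55:41 ERROR service crashed",
    "2003-04-02 08:02:19 INFO request served",
    "2003-04-02 16:30:00 ERROR service crashed",
]

def crashes_per_day(lines):
    """Count crash entries per calendar day -- the low-cost, high-value
    metric described in the text."""
    counts = Counter()
    for line in lines:
        if "ERROR service crashed" in line:
            day = line.split(" ", 1)[0]  # leading date field
            counts[day] += 1
    return dict(counts)

# crashes_per_day(log_lines) -> {"2003-04-01": 2, "2003-04-02": 1}
```

A daily report built from such a count gives business analysts a concrete failure number with almost no development investment.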
Runtime management tools provide critical measurement and analysis information that can flow directly back into business decision making and application development processes. The resulting operational monitoring benefits Web service development and maintenance alike. At HP we call this the IT Service Management Lifecycle. Whatever you call it, constrained staff and budgets make it even more important to evaluate every mission-critical Web service from a macro perspective. It is arguably the most direct route to successful, high-performance, job-specific solutions that exactly meet customer needs.
So here it is in two parts: focusing on customer satisfaction through measurement and monitoring techniques ultimately produces better solutions, and integrating Web service metrics into business processes not only raises application quality but also helps reduce deployment costs by automating the flow of runtime performance and error information back to the development team.