Previous articles in this series discussed ways of making development of Web services more efficient and the applications themselves highly manageable once deployed. But there is more that developers can do to facilitate a successful migration and commitment to the Web services business model.
Building and leveraging intelligent feedback and control mechanisms can help create an end-to-end view of the Web services development and management lifecycle. This means incorporating processes that provide timely indicators about the operation and performance of the Web services. A hallmark of this approach is that it also enables IT administrators to integrate business-performance analysis into the deployment process. By doing so, developers can make a significant contribution to the ongoing alignment of IT operations with business goals.
Calling on a New Breed of IT Developer
Web services are a standardized technology for building applications that connect with internal and external users. Still, it remains a serious challenge for developers to deliver cost-effective, efficient services that make internal users and external partners highly productive, and which please customers. Network availability and access, security, performance, and functionality are prominent design concerns. It is simply not enough to design secure, highly available services. Over time, services will have to be adapted to reflect changing business requirements and goals.
For example, consider a company that intends to provide detailed product information, including pricing data (which changes with market fluctuations), vendor supply levels, and commodity prices. The Web service must be made to cope with multiple vendor Web sites, handle network outages, and respond to the user in a timely manner even when certain components are sluggish or unavailable.
This application requires a high degree of resiliency, such that every possible error and performance bottleneck is managed, and that is a very difficult design challenge. To succeed, development and deployment personnel need to work closely together to understand manageability issues. Constraints on staff and budgets can make this difficult, but a little applied creativity makes it possible to test and manage deployed applications even as you create new services, or extend and enhance existing applications.
Built to Manage
Intelligent feedback and control loops enable development teams to continuously analyze business-software performance and modify the Web services quickly, efficiently, and even automatically. These loops are programmatic mechanisms that push runtime instrumentation data back into the development process. Just as developers test applications to ensure runtime correctness, intelligent feedback provides ongoing testing and analysis of application performance and behavior.
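One way to picture such a loop is a lightweight instrumentation wrapper that counts calls, errors, and elapsed time for each service operation and exposes the totals for later analysis. The sketch below is illustrative only; the `FeedbackCollector` class and its method names are our own invention, not part of any product:

```python
import time
from collections import defaultdict

class FeedbackCollector:
    """Accumulates runtime instrumentation data that can be fed back
    into the development and business-analysis processes."""

    def __init__(self):
        self.calls = defaultdict(int)
        self.errors = defaultdict(int)
        self.total_time = defaultdict(float)

    def instrument(self, func):
        """Wrap a service operation so each invocation is timed and counted."""
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            except Exception:
                self.errors[func.__name__] += 1
                raise
            finally:
                self.calls[func.__name__] += 1
                self.total_time[func.__name__] += time.perf_counter() - start
        return wrapper

    def report(self):
        """Summarize runtime behavior for analysts and developers."""
        return {name: {"calls": self.calls[name],
                       "errors": self.errors[name],
                       "avg_sec": self.total_time[name] / self.calls[name]}
                for name in self.calls}
```

In practice the `report` output would be shipped off to a monitoring console rather than read in-process, but the shape of the loop is the same: instrument, accumulate, feed back.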
This kind of testing must be planned and executed with business analysis in mind. Hosted testing environments are needed to monitor services under high-stress simulations. It is important for developers to understand the dependencies between the software development lifecycle (plan-design-develop-test) and the IT service management lifecycle (deploy-deliver-manage-operate). Figure 1 shows where the development and service-management process models overlap.
Consider this detailed example: Suppose a business creates a Web site to sell long-distance phone services. To help customers understand the pricing benefits of the services, the company’s developers create an online rate-plan calculator. They design the Web service application to gather pricing data from multiple long-distance carriers, compare this data to current rate plans, and then display for the customer a lower-cost rate plan for a setup fee. In this Web service-based business, revenue and profitability are direct factors of the company’s ability to rapidly design, develop, deploy, manage, extend, and enhance its online application. If customers cannot quickly and easily obtain a rate quote, then sales are hindered and customer satisfaction falls.
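The core comparison inside such a calculator is simple to sketch. The following is a hypothetical illustration — the carrier names, the cent-based rates, and the choice to amortize the setup fee over the first month are all our assumptions, not details from any real rate plan:

```python
def best_rate_plan(current_rate, carrier_quotes, setup_fee, monthly_minutes):
    """Compare carrier per-minute quotes (in cents) against the customer's
    current rate and return the cheapest option, charging the one-time
    setup fee against the first month's bill."""
    best = ("current", current_rate * monthly_minutes)
    for carrier, per_minute in carrier_quotes.items():
        cost = per_minute * monthly_minutes + setup_fee
        if cost < best[1]:
            best = (carrier, cost)
    return best
```

The real service would gather `carrier_quotes` live from multiple long-distance carriers, which is exactly where the manageability concerns discussed below come in.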
This example is intuitive to customers, business planners, and software developers. But what is harder to see is the impact of the software-development loops that affect this business. Analysis information collected while the application is running might identify performance bottlenecks, design changes that could improve scalability, or issues that directly affect a service-level agreement. This runtime-analysis data must flow back to both the business planner and the software developer for the business to operate smoothly.
Loop into Developer Productivity
To significantly boost developer productivity, tools must automate the collection and dissemination of runtime information. The trouble is, most application development tools today focus on the forward direction of the development process, i.e., helping developers move from requirements to design, or from design to construction. With control loops, however, it is the backward flow of information that affects the business’s bottom line. Data from the runtime use of Web service applications must be collected, filtered, analyzed, and sent back into the business analysis and development processes.
To accomplish process reversal, instrumentation must be added to runtime services and their associated components. For example, in the long-distance rate-plan Web service, the Web service component will need to execute a set of transactions to corporate database systems to compute current rates. These transactions are a critical element of the processing for the Web service. Although they may look like a black-box component from a source-code perspective, they must be viewed as critical infrastructure from the testing perspective. If a transaction were to block for a considerable amount of time, perhaps because the database was locked, then the entire Web service could become inoperable and useless to a customer.
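A common defensive pattern for this case is to put a deadline on the blocking transaction and fall back to cached data rather than let the whole service hang. A minimal sketch in Python, assuming the query can be handed off to a worker thread (function names and the fallback strategy are our assumptions):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

_executor = ThreadPoolExecutor(max_workers=4)

def rate_query_with_timeout(query_fn, timeout_sec=2.0, fallback=None):
    """Run a potentially blocking database transaction with a deadline.
    If the database is locked and the call exceeds the deadline, return
    a fallback (e.g., cached rates) instead of stalling the service."""
    future = _executor.submit(query_fn)
    try:
        return future.result(timeout=timeout_sec)
    except TimeoutError:
        future.cancel()  # best effort; a running query cannot be cancelled
        return fallback
```

The instrumentation point here is the `TimeoutError` branch: counting how often the fallback fires is precisely the kind of runtime data that should flow back to developers.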
When a developer can quickly find a problem in a complex distributed service, he or she can then focus on how to replicate it. Errors certainly occur when customers use a service differently than developers expected, or when unexpected circumstances arise. Our experience at HP suggests that developers generally simulate error-prone uses of their application rather than waiting for errors to occur naturally; part of the reason, of course, is to design test plans. For example, developers might propose testing whether a system can handle hundreds of thousands of transactions at once. Simulating such error-causing circumstances is much harder than letting them occur on their own, and it forces developers to write additional simulation code, which has a dramatic impact on productivity.
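Such a simulation harness need not be large. The sketch below replays requests against a service stub and injects failures at a chosen rate; the function names and the simple random-failure model are hypothetical, standing in for whatever error-prone conditions a real test plan would target:

```python
import random

def simulate_load(service_fn, n_requests, failure_rate, seed=0):
    """Replay n_requests against a service stub, injecting simulated
    transport failures at the given rate, and tally the outcomes --
    the kind of simulation developers write instead of waiting for
    real customers to exercise the error paths."""
    rng = random.Random(seed)  # seeded so test runs are reproducible
    results = {"ok": 0, "failed": 0}
    for _ in range(n_requests):
        if rng.random() < failure_rate:
            results["failed"] += 1  # simulated network or vendor outage
        else:
            service_fn()
            results["ok"] += 1
    return results
```

Seeding the random generator matters: a failure scenario that cannot be replayed identically is of little use when verifying a fix.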
With these simulation and diagnostic capabilities in place, developers can design a complete range of tests for situations where the Web service infrastructure might fail or become intolerably slow. Along with simulation comes the need to monitor and record infrastructure activity during both testing and production use, to identify the source of problems. For example, it is useful to know which types of customers are having trouble, such as those in certain geographies or those using a particular connection type or ISP. Another useful measurement compares Web site performance to Web server performance; a large gap can indicate that the server components are not properly distributed across the network infrastructure.
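That site-versus-server comparison reduces to a simple rule of thumb: subtract server processing time from end-to-end response time and see where the larger share of latency lies. A deliberately simplified sketch (the labels and the equal-share threshold are our assumptions; a real analysis would use distributions, not single samples):

```python
def locate_latency(site_ms, server_ms):
    """Attribute observed latency: if more time is spent outside the
    Web server than inside it, suspect the network path or a poor
    distribution of components across the infrastructure."""
    network_ms = site_ms - server_ms  # time not accounted for by the server
    if network_ms > server_ms:
        return "infrastructure"
    return "server"
```

Fed with monitoring data segmented by geography or ISP, even a crude attribution like this can tell developers where to look first.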
Operational monitoring also bridges the gap between development and operations in Web service creation itself. A mechanism to match service-level agreements against deployed services would reduce errors and customer complaints. For example, in the long-distance rate-plan service, many network-infrastructure elements are required to execute the entire service, such as network storage for a database, gateways to internal and external networks for multi-vendor rate calculations, and internal Web servers for adequate performance.
Gathering the Data Elements
To manage all the network elements properly requires a great deal of element-specific knowledge. For example, a network storage element must be configured specifically to handle certain service requirements, such as database transaction performance. Often, fragments of domain-specific knowledge reside with either a few developers or IT administrators. Tools can gather these requirements together and automatically configure devices and software during service deployment.
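One way to structure such a tool is a registry that maps each element type to the routine holding its domain-specific configuration knowledge, then walks a service's requirements at deployment time. A minimal sketch — the registry, decorator, and function names are hypothetical, not drawn from any deployment product:

```python
CONFIGURATORS = {}

def configurator(element_type):
    """Register a routine that captures the domain-specific knowledge
    needed to configure one kind of network element."""
    def register(fn):
        CONFIGURATORS[element_type] = fn
        return fn
    return register

def deploy(service_requirements):
    """Walk a service's element-specific requirements and apply the
    appropriate configuration routine to each element."""
    results = {}
    for element_type, settings in service_requirements.items():
        results[element_type] = CONFIGURATORS[element_type](settings)
    return results
```

The point of the registry is that the storage expert writes the storage configurator once; thereafter, deployment no longer depends on that knowledge living in one person's head.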
For example, HP OpenView Internet Services, familiar to many developers, simulates and monitors business-application transactions in a Web environment. By monitoring both the transactions and the Internet services that complete them, the product helps developers and operations staff test, measure, and predict points of failure in complex systems.
There are other complementary development and deployment monitoring tools, such as the HP OpenView Transaction Analyzer and the Web Transaction Observer. There are, obviously, competing tools in this category. The point is that being equipped with such monitoring tools enables developers to measure and analyze runtime behavior and make that information available for creating the software. When developers utilize runtime capabilities as part of a designed maintenance process, they are better equipped to write software that achieves business and operational excellence.
Tools of this sort will be necessary to automate the collection and analysis of runtime application data. This won’t happen for free. Developers need to design applications with the concept of intelligent feedback and control in mind.
Typically, the biggest impact of intelligent feedback is felt in the implementation phase of a project. Some extra time and development effort is certainly required to implement applications that integrate into a runtime measurement framework. The reward for this effort is greater agility in responding to changes in customer requirements, because runtime analysis information is always visible to developers and business analysts.
A key to understanding whether the development effort is justified for your organization lies in reviewing your application deployment efficiencies and quality goals.
- Do your applications meet the quality goals you have set for them?
- Are you able to tell your customers that you can achieve stated service-level agreements?
- How long does it take for your development organization to study, define, and create enhancements for currently deployed services?
Based on your answers, you can then determine how much additional implementation time is warranted to achieve these goals. Often, small additions requiring minimal development effort can introduce meaningful business improvements, such as simply knowing how many times per day a service crashes and fails to deliver results to a customer.
Runtime management tools provide critical measurement and analysis information that can easily flow back into business decision-making and application development processes. The resulting operational monitoring benefits Web service development and maintenance. At HP we call this the IT Service Management Lifecycle. Whatever you call it, constrained staff and budgets make it even more important to evaluate every mission-critical Web service from a macro perspective. It is arguably the most direct route to successful high-performance, job-specific solutions that exactly meet customer needs.
So here it is in two parts: Focusing on customer satisfaction through measurement and monitoring techniques ultimately produces better deployed solutions. And integrating Web service metrics into business processes not only raises the quality of the application, but also helps reduce deployment costs by automating the flow of runtime performance and error information back to the development team.