While watching application development practices evolve over the years, I have noticed a monumental shift in the way companies approach cost reductions throughout the application lifecycle. The bottom line is, of course, return on investment, or ROI. Whatever they call it, organizations are developing and maintaining applications at an unprecedented rate, hoping to make gains in employee productivity, increase company revenue, and improve customer satisfaction. The ultimate goal is to achieve operational efficiencies and reduce costs. However, companies always face a trade-off: the business value of the application versus its development and maintenance costs.
The arduous task of maintaining and managing in-production applications accounts for up to two-thirds of an application’s total cost of ownership (TCO). In other words, for every dollar spent developing an application, an organization will spend two dollars maintaining it in production.
That's a big consideration—especially in today’s economy. Therefore, architects, developers, IT professionals, and technology executives are now also considering the future costs and manageability of applications when making decisions throughout the application lifecycle. Best practices are evolving quickly, and some of the most progressive and cost-conscious organizations in the world are implementing the strategies outlined here.
Developing on .NET
Microsoft’s .NET Framework provides a compelling platform for building robust distributed applications, offering benefits to developers, businesses, and users. Organizations are turning to the .NET Framework both to replace legacy applications and to develop new applications, because .NET-based development and maintenance costs can be 20 to 25 percent less than those for J2EE-built applications (from the Giga/Forrester study "The Total Economic Impact of Developing and Deploying Applications on Microsoft and J2EE/Linux Platforms").
The framework consists of namespaces containing classes whose methods cover a broad range of common programming needs, including user interfaces, data access and database connectivity, cryptography, web application development, numeric algorithms, and network communications. The framework is built on open standards and embraces a wide range of programming languages, including not only Microsoft languages such as C#, VB.NET, and Managed C++, but also third-party languages such as Ruby, Python, Lisp, and Smalltalk. See the sidebar "Why Use .NET?" for more information.
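As a brief illustration of that built-in coverage, the cryptography classes let a few lines of C# compute a secure hash with no third-party code. This is a minimal sketch; the string being hashed is arbitrary:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class HashDemo
{
    static void Main()
    {
        // Compute a SHA-256 digest of a string using only framework classes
        // from the System.Security.Cryptography namespace.
        byte[] input = Encoding.UTF8.GetBytes("hello");
        using (SHA256 sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(input);
            // Print the digest as lowercase hex.
            Console.WriteLine(BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant());
        }
    }
}
```

The same pattern applies across the framework: find the namespace for the task at hand, and the classes there usually remove the need for hand-rolled or third-party utility code.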
Other platforms offer similar, though often less integrated, capabilities. Still, simply choosing one platform over another is not enough. Organizations must consider several strategies that can reduce costs throughout the application lifecycle, from planning and development to testing, deployment, and support. The common thread: the ability to monitor and measure application behavior and health throughout the application lifecycle.
The first consideration when developing or migrating an application to a new framework is to determine the business rules being automated. These rules may be as simple as specifications for extracting and displaying data, but more likely involve added complexity: reading, querying, manipulating, and storing data; interacting with users and other systems and services; and providing management and monitoring functionality. It is therefore imperative to ensure not only that business rules are implemented correctly, but also that they meet current business and end-user requirements. Extracting and verifying these business rules provides the basis for the application architecture, letting organizations correctly match the architecture with the hardware and programming techniques required to successfully implement and deploy the application.
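To make the idea concrete, here is a hypothetical business rule, a volume discount, extracted into a single method where it can be verified against current requirements. All names and thresholds are invented for illustration:

```csharp
using System;

// Hypothetical business rule: orders at or above a threshold earn a
// volume discount. Keeping the rule in one place makes it easy to
// verify against current business requirements and to change without
// touching calling code.
class OrderRules
{
    const decimal VolumeThreshold = 1000m;
    const decimal VolumeDiscount = 0.05m;   // 5 percent

    public static decimal FinalPrice(decimal subtotal)
    {
        if (subtotal < 0)
            throw new ArgumentOutOfRangeException("subtotal");
        return subtotal >= VolumeThreshold
            ? subtotal * (1 - VolumeDiscount)
            : subtotal;
    }

    static void Main()
    {
        Console.WriteLine(FinalPrice(1200m));   // discount applied
        Console.WriteLine(FinalPrice(500m));    // below threshold, unchanged
    }
}
```

Rules captured this way can be reviewed by business stakeholders and unit-tested directly, which is where much of the verification effort described above actually happens.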
An Apple a Day
In addition to examining business rules, modern principles and best practices dictate that architects create a health model, or blueprint for application behavior. As one of the initial considerations during development, the health model defines a process for individual services and application components to change states, typically using simple indicators such as "working normally," "performance degraded," or "failed." The health model enables proactive problem resolution by configuring a monitoring solution that adheres to the blueprint, avoiding the problem resolution costs associated with manually detecting and diagnosing problems.
Using a health model is a best practice in application development because it offers the potential to dramatically lower the application’s TCO later in the application lifecycle. It does that by enabling designers to understand the relationships and interactions between application components and the impact of individual component failures on the health of the entire system. A health model also allows developers to write appropriate instrumentation (or appropriately configure a monitoring solution), and helps operations staff to better deploy and manage the application.
Further, the health model helps determine what information needs to be collected to troubleshoot the degradation or failure. Because such information is specific to each individual component and service, collecting it properly is essential to ensuring optimal performance and reducing the problem resolution cycle. Increased uptime and improved data collection when applications behave unexpectedly equate to lower TCO.
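The blueprint described above might be sketched in C# as follows. The three state names come from the health model itself; the component names, the diagnostics field, and the worst-component aggregation rule are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;

// Sketch of a health model: each monitored component reports one of the
// three states the blueprint defines.
enum HealthState { Healthy, Degraded, Failed }

class HealthEvent
{
    public string Component;
    public HealthState State;
    // Component-specific data to collect on degradation or failure,
    // so operators can diagnose without manual investigation.
    public string Diagnostics;
}

class HealthModel
{
    // Aggregation rule (an assumption for this sketch): the system is
    // only as healthy as its worst component.
    public static HealthState SystemHealth(IEnumerable<HealthEvent> reports)
    {
        HealthState worst = HealthState.Healthy;
        foreach (HealthEvent e in reports)
        {
            if (e.State > worst) worst = e.State;
        }
        return worst;
    }

    static void Main()
    {
        List<HealthEvent> reports = new List<HealthEvent>
        {
            new HealthEvent { Component = "OrderService", State = HealthState.Healthy },
            new HealthEvent { Component = "Database", State = HealthState.Degraded,
                              Diagnostics = "connection pool 95% utilized" }
        };
        Console.WriteLine(SystemHealth(reports));
    }
}
```

A monitoring solution configured against such a model can raise an alert, with the attached diagnostics, the moment any component leaves the healthy state, rather than waiting for users to report a failure.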