As enterprise architecture (EA) matures into an established discipline in most moderate- to large-scale IT organizations, EA is increasingly being challenged to demonstrate objective evidence of its contribution to the corporation. Even before the current economic crisis, senior business management had intensified its scrutiny of IT in relation to measurable benefits. Increasingly, IT divisions are required to produce detailed metrics that provide quantitative evidence of their contributions.
This presents a particular challenge to EA. Other IT divisions tend to have well-established criteria for measuring their performance: percent uptime for systems, error rates for applications, number of calls resolved on first contact for help centers, and so on. EA, on the other hand, is rarely directly responsible for any obvious quantifiable measure. Its impact is more subtle. EA governance, for example, may significantly enhance the quality of development efforts, but it is rarely held directly responsible for the success or failure of a specific project. Similarly, while EA clearly has an important oversight role with regard to infrastructure and the time to recover from a planned drill or an actual disaster, measurable results cannot reasonably be attributed to the quality of EA efforts alone. An effective EA organization, while touching on a vast array of IT activities, is solely responsible for relatively few projects. Its value is far more often incremental: strengthening system development and maintenance and averting unnecessary risk rather than carrying primary responsibility.
The development of common services for which some EA organizations assume responsibility is an exception to this rule, but the measurement of the value of these services is not simple either. Several measures are applicable:
- The number of services used by multiple applications
- Error rates
- Transaction throughput
However, even this last metric, transaction throughput, may not provide an indication of EA's contribution that is meaningful to senior management. CEOs and CFOs are often unimpressed by these numbers. They are aware that the real question is whether centralizing services is cost-effective relative to building the capability into each application. They do not accept on faith that common services, which often need to be "tweaked" for effective reuse, save the corporation money. While upper management may take an interest in these statistics, the numbers do not, in themselves, give executives a means to weigh EA's value against its cost.
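For concreteness, the three service-level measures above could be derived from ordinary call logs. The sketch below assumes a hypothetical log schema (service, consuming application, success flag, latency); the records and field names are illustrative, not a prescribed format:

```python
from collections import defaultdict

# Hypothetical call-log records: (service, application, success, latency_ms)
calls = [
    ("auth",   "billing", True,  42),
    ("auth",   "crm",     True,  55),
    ("auth",   "crm",     False, 900),
    ("lookup", "billing", True,  12),
]

consumers = defaultdict(set)  # applications using each service (reuse)
errors = defaultdict(int)
totals = defaultdict(int)     # call volume per service (throughput)

for service, app, ok, _latency in calls:
    consumers[service].add(app)
    totals[service] += 1
    if not ok:
        errors[service] += 1

for service in sorted(totals):
    reuse = len(consumers[service])
    error_rate = errors[service] / totals[service]
    print(f"{service}: reused by {reuse} apps, "
          f"error rate {error_rate:.1%}, {totals[service]} calls")
```

Even a simple roll-up like this makes the limitation plain: it counts reuse and failures, but says nothing about whether centralizing the service was cost-effective.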
Beyond the limited number of metrics directly attributable to common services, EA's metric waters are muddy. Its success at setting, publishing, and enforcing standards may have an effect on, say, system uptime, but how much of an effect? If a program that met corporate coding standards but contained buried errors crashed a system, is EA at fault? Possibly, but it certainly would not be the first area investigated in the search for root causes. And if EA is not to blame when server capacity is maxed out, can it claim credit when all systems perform perfectly? Similarly, quantifying EA's role in project-efficiency statistics, such as on-time delivery or the number of errors detected and resolved before user acceptance testing, requires assumptions and interpretations that defeat the goal of providing purely objective measures.
Assigning a dollar value to EA efforts is equally fraught with ambiguity. If EA's governance panel requires a division to rework its security architecture, cost is added that could conceivably be balanced by the risk averted. Can such numbers be meaningfully calculated? Possibly. But no CFO will take EA's word for its calculations; only the business can assign a value to a given effort. While that might at least be possible for a project as a whole, or for a functional requirement within a project, it is far harder, and far more debatable, to assign a particular figure to the EA portion alone.
If, despite these reservations, senior management insists on quantitative metrics, it is essential that the criteria be clearly established and agreed upon before a metrics program begins. If, for example, the manager or team evaluating EA's performance agrees that the number of projects reviewed for compliance with standards, the turnaround time for such reviews, or the number of services re-used is meaningful, then EA should certainly provide those numbers. Accumulating metrics is itself a time-consuming and therefore costly process. Gathering metrics without a clear understanding that management considers them meaningful is wasted effort.
With nearly all quantifiable metrics, trends are more useful than absolute numbers. Even so, trends are unlikely to answer the fundamental question of whether EA adds enough value to justify its existence. If, however, senior management reaches an accord with EA on which metrics matter most, then regular quarterly reviews of statistical trends can indicate whether EA is progressively meeting its mandate.
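Assuming EA and management have agreed on a handful of metrics, a quarterly trend review might be sketched as quarter-over-quarter deltas. The metric names and figures below are purely illustrative:

```python
# Agreed-upon metrics by quarter (illustrative figures)
quarters = ["Q1", "Q2", "Q3", "Q4"]
reviews_completed = [12, 15, 18, 21]
avg_turnaround_days = [10.0, 8.5, 7.0, 6.5]

def trend(series):
    """Quarter-over-quarter deltas; the sign shows direction of movement."""
    return [round(later - earlier, 2)
            for earlier, later in zip(series, series[1:])]

print("Reviews trend:", trend(reviews_completed))       # rising is good
print("Turnaround trend:", trend(avg_turnaround_days))  # falling is good
```

The deltas, not the absolute counts, are what a quarterly review would discuss: whether review capacity is growing and turnaround shrinking, period over period.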
If EA cannot easily quantify its contribution, does it have other means to clarify its value proposition and its success in achieving that proposition? By clearly classifying the nature of EA's impact (cost savings, risk avoidance, reduced implementation time, or improved quality) and identifying exactly what difference EA's input has made, EA can give senior management the evidence it needs to determine whether EA is achieving corporate goals cost-effectively. Rather than producing questionable numeric figures, EA spells out its specific role in key corporate initiatives.
If EA is doing its job, a quarterly report of its contributions should yield a lengthy list of specific, significant items. For example, rather than stating generically that EA enforces standards through its governance function, EA can show how it intervened in a specific project to require compliance with a corporate standard, and how that intervention avoided the proliferation of unnecessary software diversity and the attendant licensing and maintenance costs. If EA provides a common service that is easily applied to a new project, it can document how much faster it was to re-use the existing service than to build a new component from scratch. If EA provides a corporate lab and testing facilities, it can specify how its involvement in a particular project revealed weaknesses in the design early in the development cycle, which led to reduced cost and a more reliable solution.
Transparency is a better goal than narrowly defined quantification. In the end, the sheer number of rows in a spreadsheet documenting EA's contributions for a given quarter should show management exactly what EA is doing and how it adds value.
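Such a contribution register could be sketched as a simple list of records grouped by the impact categories named earlier (cost savings, risk avoidance, reduced implementation time, improved quality). The projects and notes below are hypothetical examples, not a prescribed schema:

```python
# Hypothetical quarterly register of EA interventions
contributions = [
    {"project": "CRM upgrade", "impact": "cost savings",
     "note": "enforced standard DBMS; avoided new licensing"},
    {"project": "Payments", "impact": "risk avoidance",
     "note": "required rework of security architecture"},
    {"project": "Portal", "impact": "reduced implementation time",
     "note": "re-used common authentication service"},
    {"project": "Data hub", "impact": "improved quality",
     "note": "lab testing exposed design weakness early"},
]

# Group interventions by impact category for the quarterly report
by_impact = {}
for c in contributions:
    by_impact.setdefault(c["impact"], []).append(c)

for impact, items in by_impact.items():
    print(f"{impact} ({len(items)})")
    for c in items:
        print(f"  {c['project']}: {c['note']}")
```

Each row names the project, the category of impact, and the specific difference EA made, which is exactly the kind of itemized evidence the quarterly report calls for.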
EA cannot bury its head in the sand and assume the executives recognize its value. Instead, the best way to communicate EA’s value to those who sign the paychecks is to provide specific, well-articulated, and detailed benefits for each individual EA intervention.