SOA: Refactoring Mainframe Applications into Dynamic Web Applications, Part 1

A large number of developers (and managers) have heard of service-oriented architecture (SOA) and are familiar with the idea in principle, but have little idea how to go about applying it to a particular system. This article begins a two-part series discussing how to refactor mainframe routines into deployable services using service-oriented techniques, and how to make the resulting framework available via the Web.

First, you need to understand some fundamental concepts surrounding SOA.

Introducing SOA
You should follow several SOA principles to gain the benefits of heterogeneous message handling, service reuse, etc. Here’s a description of the main SOA principles:

  • Stateless Interactions: Services should be independent, self-contained modules, which do not store state from one request to another. In addition, services should not depend on the context or state of other services. You should define any state dependencies using business processes and data models rather than context objects or session keys.
  • Coarse-Grained Interfaces: SOA optimization relies on services being constructed and exposed with coarse-grained interfaces. While each service may be implemented as an abstraction of a group of finer-grained objects, the objects themselves should be hidden from public access. You implement each service by grouping objects, components, and fine-grained services, and exposing them as a single unit using a service façade.
  • Loose Coupling: Coupling generally refers to the act of joining two things together, such as the links in a chain. In software engineering, coupling typically refers to the degree to which software components/modules depend upon each other. The degree to which components are linked determines whether they operate in a tightly coupled or a loosely coupled relationship.
    SOA guidelines recommend that you construct services with loose coupling in mind. Loosely coupled components locate and communicate with each other dynamically at runtime rather than through a static compile-time binding, a technique often referred to as late binding. This lets you deploy your applications as desired, making deployment decisions at deployment time rather than design time.
  • Service Discovery and Registration: Services should be registered with some form of public or private registry, such as a database, a directory service, a UDDI registry, or an XML file. After registration, components that want to call the service first use the registry to locate the service and then call the service.
  • Location Transparency: Location transparency deals with the ability to access information objects without advance knowledge of their location. To achieve this, you specify service locations in configuration mediums such as UDDI registries. In addition, service calls should be targeted at common location endpoints such as URLs and URIs.
  • Protocol Independence: SOA depends on a minimum of interdependencies between services. This loose coupling must propagate all the way to the protocol layer of the architecture. The design of the communication infrastructure used within an SOA should be independent of the underlying protocol layer. Some well-known implementations for protocol independence are:
    • Business Delegates: A Business Delegate hides underlying implementation details of a business service, such as lookup and access details.
    • Remote Proxies or Surrogates: A Remote Proxy acts as a stand-in for objects that exist in a different tier.

      Figure 1. The Service-oriented Framework: Each tier shown in the diagram supports different functionality within the overall framework.

    • Adapters: An Adapter provides transparent access to disparate objects or services by converting requests and possibly responses from one interaction interface to another.
    • Brokers: A Broker decouples business tier objects from objects and services in other tiers.
    • Factories: A Factory instantiates objects at runtime based on dynamic configurations. Factories often instantiate objects designed around the Strategy pattern or Bridge pattern.
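To make the Factory idea above concrete, here is a minimal sketch of a factory that selects a Strategy implementation at runtime from a registry. The names (TaxStrategy, FlatTaxStrategy, and so on) are hypothetical, invented for illustration; in a real system the registry would be populated from a configuration source rather than hard-coded.

```java
import java.util.HashMap;
import java.util.Map;

// The Strategy interface: interchangeable behavior behind one contract.
interface TaxStrategy {
    double tax(double amount);
}

class FlatTaxStrategy implements TaxStrategy {
    public double tax(double amount) { return amount * 0.25; }
}

class ExemptTaxStrategy implements TaxStrategy {
    public double tax(double amount) { return 0.0; }
}

class TaxStrategyFactory {
    // In practice this mapping would come from a configuration file or
    // registry, which is what makes the instantiation "dynamic."
    private static final Map<String, TaxStrategy> strategies = new HashMap<>();
    static {
        strategies.put("flat", new FlatTaxStrategy());
        strategies.put("exempt", new ExemptTaxStrategy());
    }

    public static TaxStrategy getStrategy(String key) {
        TaxStrategy s = strategies.get(key);
        if (s == null) {
            throw new IllegalArgumentException("Unknown strategy: " + key);
        }
        return s;
    }
}
```

Calling code asks the factory for a strategy by logical name and never depends on a concrete class, which is exactly the decoupling the principle above describes.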

Building the Framework from the Mainframe Out
For this article, the application framework follows a four-tier design, including a client tier, an application tier, a business tier, and an integration tier. Each tier serves to support different functionality within the overall framework, as shown in Figure 1.

The Integration Tier
The integration tier defines objects, components, and adapters that allow easy integration with enterprise information systems. The J2EE Connector Architecture lets mainframe system vendors and other Enterprise Information Systems (EIS) vendors develop resource adapters that can be deployed in any application server or framework supporting the J2EE Platform Specification, Version 1.3 and above.

The J2EE Connector Architecture is centered on resource adapters. A resource adapter serves as the intermediary between the calling component and an EIS. Resource adapters expose native EIS components, routines, applications, etc. to the Java environment, letting an EIS plug into any framework or platform that supports the connector architecture. Resource adapters must adhere to guidelines defined by the connector architecture, called system-level contracts.

System-level contracts define standard functions that the J2EE application server or platform will handle. There are three major categories of system-level contracts:

 
Figure 2. The J2EE Connector Architecture Relationships: The figure shows the relationships between components in the J2EE Connector Architecture.
  • Connection management: Allows applications to connect to an EIS.
  • Transaction management: Allows the environment to manage and perform transactional access across EIS resource managers.
  • Security: Provides support for secure access to the EIS.

The diagram in Figure 2 illustrates the relationships between components in the J2EE Connector Architecture.

Author’s Note: You can find more information about resource adapters at Sun.

Integrating with Legacy Mainframe Systems
Typical mainframe systems expose information using a two-tier architecture consisting of dumb terminals that present the results of business logic executed by the mainframe. Most conventional integration efforts revolve around screen-scraping. Screen-scraping software electronically “reads” the information from a mainframe terminal screen, letting businesses replace hard-wired terminal interfaces with Web-enabled thin clients. However, that form of integration refactors only the presentation; it doesn’t help expose business logic in a form that can be easily reused or represented in different ways. A better way is to let the presentation tier handle application-specific UI generation and the business tier factor the legacy routines into logical business services.

 
Figure 3. Adapter and Mainframe Routine Relationships: As the figure shows, each adapter may encapsulate multiple routines.

The first step in refactoring a legacy mainframe system is to separate the presentation logic from the business logic. You can refactor a COBOL system into special kinds of modules known as “service programs” to separate presentation logic from business logic.

The next step is to build resource adapters defining coarse-grained interfaces that encapsulate the desired business logic residing within the service programs or other accessible routines.

As shown in Figure 3, a typical adapter implementation encapsulates a number of routines, depending on the desired functionality for the adapter.

Typically, you’d refactor a legacy mainframe system in several steps, as the need arises, allowing the system to evolve and improve naturally, and avoiding many potential mistakes that often arise in a wholesale refactoring effort.
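The adapter relationship in Figure 3 can be sketched as follows. This is illustrative only: the routine names (GETACCT, LISTACCT) and the stubbed connection are invented, and a real implementation would delegate to a J2EE Connector resource adapter rather than to an in-memory stub.

```java
// Stand-in for the low-level link to the mainframe. A real version would
// call through a deployed resource adapter; this stub just fabricates
// responses so the shape of the adapter is visible.
class LegacyConnection {
    String invoke(String routine, String args) {
        if ("GETACCT".equals(routine)) return "ACCT:" + args;
        if ("LISTACCT".equals(routine)) return "ACCT1,ACCT2";
        throw new UnsupportedOperationException(routine);
    }
}

// The coarse-grained interface exposed to the business tier: one adapter
// encapsulating several fine-grained legacy routines.
class UserAccountAdapter {
    private final LegacyConnection conn = new LegacyConnection();

    public String getAccount(String id) {
        return conn.invoke("GETACCT", id);
    }

    public String[] listAccounts() {
        return conn.invoke("LISTACCT", "").split(",");
    }
}
```

Callers see only getAccount and listAccounts; which legacy routines back them, and how many, is a private detail of the adapter, so routines can be consolidated or replaced without touching the business tier.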

The Business Tier

 
Figure 4. Service Implementation Relationships: The figure illustrates the relationships between a typical service, its embedded components, and resource adapters.

The business tier defines objects, components, and services that encapsulate business logic for a given enterprise. You expose this business logic publicly as services to take advantage of SOA’s benefits.

With a system of services defined, an enterprise can reuse and re-orchestrate the services to form other services, processes, and applications.

The next section illustrates the implementation of a simple service that can act as a boilerplate for more complex services.

The Service Implementation
Remember, each service should expose a coarse-grained instance of business logic for a given enterprise. Typically, you’d abstract and aggregate existing objects and components to form each service. You can build services that interact with legacy systems around components that interact with resource adapters, or the service code itself can interact with the resource adapter.

Figure 4 illustrates the relationships between a typical service, its embedded components, and resource adapters.

For this article series, the goal is to refactor an application that tracks users and user accounts from a simple mainframe green-screen terminal program, as illustrated in Figure 5, to a multi-tiered, Web-enabled framework.

 
Figure 5. Original Green Screen System: The figure shows the original mainframe application running in terminal mode.

As you can see from Figure 5, the application stores user account information. The UserAccountService class in Listing 1 defines a service that returns a list of user-account names. The service uses the J2EE Connector Architecture API to interact with a fictional EIS system.

The Service Locator
Locating a location-transparent service can prove to be a complex process, but you can use the Service Locator pattern to hide all location-centric details. A service locator implementation is typically a specific kind of service that interacts with some form of service registry to provide location-transparent service lookup.

The ServiceLocator class shown in Listing 2 illustrates a service locator which uses fully-qualified class names to construct services. This form of lookup is sometimes conceptually referred to as intra-VM or classpath lookup. More robust implementations would abstract directory service access, UDDI registry access, etc.
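Since Listing 2 is not reproduced here, the following is a minimal sketch of the intra-VM (classpath) lookup idea: a registry maps logical service names to fully-qualified class names, and the locator instantiates the class reflectively. The registry contents and class names are illustrative assumptions, not the article's actual ServiceLocator.

```java
import java.util.HashMap;
import java.util.Map;

class ServiceLocatorSketch {
    // Maps logical service names to fully-qualified class names. In
    // practice this would be read from an XML file, a directory service,
    // or a UDDI registry rather than hard-coded.
    private static final Map<String, String> registry = new HashMap<>();
    static {
        registry.put("list", "java.util.ArrayList");
    }

    public static Object locate(String serviceName) {
        String className = registry.get(serviceName);
        if (className == null) {
            throw new IllegalArgumentException("Unknown service: " + serviceName);
        }
        try {
            // Classpath lookup: construct the service by class name.
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Could not instantiate " + className, e);
        }
    }
}
```

Because callers pass only a logical name, swapping the registry for a directory-service or UDDI-backed implementation changes nothing in the calling code, which is the location transparency the pattern exists to provide.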

The Protocol Abstraction Layer
A protocol-independent framework relies on a protocol abstraction layer to hide the details of each protocol from calling code. Among other benefits, a protocol abstraction layer allows access to services from multiple simultaneous protocols, hides protocol details from the service developer, and facilitates transparent protocol replacement. The diagram in Figure 6 illustrates this concept:

 
Figure 6. The Protocol Abstraction Layer: The protocol abstraction layer hides protocol details from calling code, providing service access from multiple simultaneous protocols, and facilitating transparent protocol replacement.

As the diagram shows, the service-request interfaces and the service-invocation interfaces do not need to change, regardless of the underlying protocol used. This allows for easy migration without runtime disturbances or changes to the calling code.
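The stable-interface idea in Figure 6 can be sketched with a common invocation interface whose protocol-specific implementations are chosen at runtime. All names here are illustrative, not part of the article's framework, and only the in-VM invoker is functional; an HTTP variant is indicated by a comment.

```java
// The service-invocation interface calling code programs against.
// It never changes, regardless of the underlying protocol.
interface ServiceInvoker {
    String invoke(String serviceName, String payload);
}

// An in-VM implementation: executes the "service" in the same JVM.
// Here the service logic is faked as a simple transformation.
class InVmInvoker implements ServiceInvoker {
    public String invoke(String serviceName, String payload) {
        return serviceName + "->" + payload.toUpperCase();
    }
}

class InvokerFactory {
    public static ServiceInvoker getInvoker(String protocol) {
        if ("invm".equals(protocol)) return new InVmInvoker();
        // An "http" branch would return an HTTP-based ServiceInvoker;
        // adding it requires no change to any calling code.
        throw new IllegalArgumentException("Unsupported protocol: " + protocol);
    }
}
```

Migrating from in-VM calls to HTTP (or any other transport) then amounts to changing the protocol key handed to the factory, typically via configuration.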

The ProtocolProxy class encapsulates a very simple protocol-layer proxy behind a factory method. ProtocolProxy is defined as follows:

   public class ProtocolProxy
   {
      private static Map instances =
         Collections.synchronizedMap(new HashMap());

      public static final String HTTP = "HTTP";

      public static ProtocolProxy getInstance(String protocol)
      {
         ProtocolProxy instance =
            (ProtocolProxy)instances.get(protocol);
         if (instance == null)
         {
            // Each protocol type should be represented
            // by a subclass
            instance = new ProtocolProxy(protocol);
            instances.put(protocol, instance);
         }
         return instance;
      }

      private String protocolType = "";

      // Each protocol type should be represented
      // by a subclass
      private ProtocolProxy(String protocolType)
      {
         this.protocolType = protocolType;
      }

      public Object call(BusinessService service,
         String[] paramValues) throws ServiceException
      {
         Object retVal = service.execute(paramValues);
         if (retVal != null)
         {
            return new ServiceModel(retVal);
         }
         return null;
      }
   }

The RequestProcessor class (see Listing 3) dispatches incoming requests using the ProtocolProxy class.

The Application Tier
Multitier application interactions usually consist of an HTTP-based request passed from a client to an HTTP server that executes business-domain logic. The server formats the response from the business-logic object into some type of markup language (HTML, WML, XML, etc.) and passes that back to the client. The model-view-controller (MVC) pattern often embodies the interaction.

Using the MVC pattern to encapsulate client/server interactions helps to delineate software-pattern roles as well as developer roles by separating objects, components, and services into tiers with well-defined boundaries. Other benefits of this pattern include easier maintenance, management, and extension of the system; simpler support for multiple client devices; simplified testing procedures; and reduced code duplication.

The Client Tier
At this point, you can assume that the client tier handles only standard HTTP requests from a Web browser. Therefore, the FrontController class handles each request/response in a simple model-view-controller manner. However, the framework will eventually be modified to support a push-styled, event-driven model, which will make the framework more responsive and dynamic.

The FrontController class is defined as follows:

   public class FrontController extends HttpServlet
   {
      public void init(ServletConfig servletConfig)
         throws ServletException
      {
         super.init(servletConfig);
      }

      protected void doPost(HttpServletRequest req,
                            HttpServletResponse res)
         throws ServletException, IOException
      {
         doGet(req, res);
      }

      protected void doGet(HttpServletRequest req,
                           HttpServletResponse res)
         throws ServletException, IOException
      {
         try
         {
            Model model = RequestProcessor.getInstance().
               processRequest(req);
            // view type should be matched to request
            // type using config file
            String formatted =
               XSLViewFactory.getView(View.DEFAULT_VIEW).
               format(model.toXML());
            PrintWriter out = res.getWriter();
            out.println(formatted);
            out.flush();
            out.close();
         }
         catch (RequestException e)
         {
            PrintWriter out = res.getWriter();
            out.println("Error: " + e.toString());
            out.flush();
            out.close();
         }
      }
   }

This article is the first of a two-part series discussing how to refactor a mainframe application using service-oriented techniques into deployable Web services within a standard Java servlet framework. This first part discussed how to apply SOA to a mainframe-dependent system. The second part discusses how to extend the framework so that a browser can invoke the Web services through background XMLHttpRequest calls, delivering data directly to the client application rather than screen-scraping and reformatting mainframe output for a desktop application.
