The Architecting Magic Behind Taking Mash Ups Offline

As the line between the desktop and the web blurs, more applications are taking advantage of the best of both worlds. Adobe AIR and JavaFX are prime examples of frameworks that benefit from this union. However, as services deliver more of an application’s functions, the offline problem starts to creep in. Simply stated, the offline problem prompts this fundamental question: who supplies the online data when the application goes offline?

The answer to this question may seem quite obvious: just keep a local copy of the data. That solution, however, is deceptively simple. Introducing multiple copies of the same data reveals the real crux of the offline problem: synchronization.

Related to the blurring of desktop and web is the growing interest in service-oriented architectures (SOAs). This phenomenon is partially driving the recent interest in enterprise mash-ups, which leverage the features offered by exposed services to quickly integrate heterogeneous data sources into a single, integrated view. This data may originate from data sources both internal (human resources, sales, and so on) and external (map services, stock tickers, and so forth) to the company. By their nature, service-based applications draw a large amount of their processing power and logic from the systems that host the services.

However, this dependence means that service-based applications inherently suffer from the offline problem and can work only while connected. For applications that make sense only while online, such as a live stock-ticker application, synchronization is not a problem. The offline problem becomes a true roadblock only when an application attempts to mobilize its data, or to allow its users to work while disconnected.

The solution, it would seem, is that the offline application should store the data it needs from all the services locally, and synchronize everything upon reconnection. Unfortunately, synchronization is anything but simple. Even trivial synchronization between two identical databases with identical schema is a very complex undertaking. This simple case still introduces all the complexities of change tracking, managing deletes, and a host of other issues. These problems become even worse when the data is spread across many different sources, all with different protocols, and potentially not under your control (in the case of external services). The question then becomes, how do you architect an occasionally connected application that mashes up data from multiple, heterogeneous data sources?

Criteria for Synchronization Architecture
Along with its inherent complexity, synchronization introduces enterprise data-management concerns, such as scalability, that a connected application can often partially ignore. These areas include:

  • Security
  • Robustness
  • Data transfer volume
  • Application maintainability

It is against these criteria that you will be able to evaluate the synchronization architectures discussed later.

Security. When enterprise data is involved, security is always a concern. Often, in the online case, no data is stored persistently on the device. To allow offline access, however, the data must be stored on the device. Additionally, the device itself is likely mobile (a smart phone, laptop, and so on) and can be easily lost or stolen. The offline data must be protected from would-be data thieves through passwords and encryption.
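
What that protection looks like depends on your platform, but the core pattern is to derive a key from a user-supplied password and encrypt records before they reach disk. Here is a minimal Java sketch using the standard javax.crypto APIs; the class name and the choice of PBKDF2 with AES-GCM are illustrative assumptions, not a prescription:

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    public class OfflineStoreCrypto {
        private static final SecureRandom RANDOM = new SecureRandom();

        // Derive an AES key from the user's password so the key itself
        // never has to be stored on the (easily lost) device.
        static SecretKeySpec deriveKey(char[] password, byte[] salt) throws Exception {
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            byte[] keyBytes = factory.generateSecret(
                    new PBEKeySpec(password, salt, 100_000, 256)).getEncoded();
            return new SecretKeySpec(keyBytes, "AES");
        }

        // Encrypt a record before writing it to the local data store.
        // AES-GCM provides both confidentiality and tamper detection.
        static byte[] encrypt(byte[] plaintext, SecretKeySpec key) throws Exception {
            byte[] iv = new byte[12];
            RANDOM.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(plaintext);
            // Prepend the IV so the record can be decrypted later.
            byte[] out = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return out;
        }
    }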

Similarly, the data may need to be synchronized to the enterprise data center over public communication channels such as the cellular phone networks or public Internet access points. This synchronization requires that communications be encrypted and protected from network snoopers.

Robustness. Mobile data also elevates concerns about the robustness of the data. Validity of enterprise data is crucial for decision making, so a synchronization operation must leave the data in a consistent state. Although network reliability has improved, your synchronization strategy must gracefully and, above all, consistently handle data and network errors at every stage of the synchronization. In most cases, a half-completed synchronization of your data is far worse than no synchronization at all. This problem becomes worse when the synchronization spans multiple data sources, including some that are external to your company.

Data transfer volume. Both for security and to reduce data transfer costs, the offline application should keep its network interactions during synchronization to a minimum. The application should try to send and receive only data that has changed since the last synchronization. The offline application can be designed to support simple change tracking; unfortunately, this tracking will not be so simple with large, back-end systems that were designed only for central access.
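
For data sources you do control, even minimal change tracking pays off. One common technique is a last-modified timestamp column that lets the application request only rows changed since the previous synchronization. A sketch in Java with JDBC, assuming a hypothetical inventory table with a last_modified column:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Timestamp;
    import java.util.ArrayList;
    import java.util.List;

    public class ChangeTracker {
        // Fetch only the rows modified since the last successful sync,
        // instead of pulling the whole table every time.
        static List<String> changedSince(Connection db, Timestamp lastSync) throws SQLException {
            String sql = "SELECT id, name, price FROM inventory WHERE last_modified > ?";
            List<String> changed = new ArrayList<>();
            try (PreparedStatement ps = db.prepareStatement(sql)) {
                ps.setTimestamp(1, lastSync);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        changed.add(rs.getLong("id") + ": " + rs.getString("name"));
                    }
                }
            }
            return changed;
        }
    }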

Because of the limited space available on mobile devices, it is unreasonable to expect all the enterprise data to be available offline. Unfortunately, many large enterprise systems designed only for central access require the application itself to do any data filtering or partitioning. This requirement is fine for online applications, but it makes for very large data transfers in the offline case.

Application maintainability. Most online applications can be hosted as web applications and delivered from a single source. As a proof point, most enterprise mash-up tools are web based, which makes maintenance easy because updating the application for all users requires making a change in only one place. In the offline case, just as the data has to be available offline, so too does the application. Creating installed applications introduces all the problems of application versioning and distribution that would have been ignored previously.

Offline Architectures
With some of the high-level synchronization concerns identified, you can evaluate some advanced synchronization architectures. Measured against the criteria just discussed, two general architectures emerge: direct remote synchronization and staged synchronization. They are certainly not the only ways to do synchronization; instead, you can think of them as representing two extreme approaches, with a whole spectrum of hybrids in between.

However, before diving into the architectures, it is important to note that the offline application will often end up quite dissimilar from its online version. While the data from the web services can be stored offline, the logic behind the web services usually cannot be taken offline without being duplicated entirely in the offline version. Additionally, you should ask, “Does this application make sense when it is offline?” An application that mashes together a contact’s address with an interactive mapping service, for example, simply would not work without access to the mapping service. However, a feature-limited version of that same application (such as displaying a single, static map tile) may turn out to be very feasible while offline. You must carefully plan an offline application to decide which features are both sensible and feasible.

Figure 1 shows a connected application accessing its data sources through Java Message Service (JMS), SOAP, and XML-RPC.

Direct Remote Synchronization
The most obvious approach would be to try to use the same application, but have the offline version access the data from its local data store. Upon reconnection, the application performs its synchronization with each data source (see Figure 2).

Figure 1. Connected Architecture: JMS, SOAP, and XML-RPC can connect an application to its data sources.

Figure 2. Direct Remote Synchronization Architecture: One approach to synchronization is using the same application locally. The offline version accesses data from its local data store, and when reconnected it performs its synchronization with each data source.

The most obvious security question regarding this model: how can the application connect to the back-end data sources when synchronizing over a public network? One solution would be to expose all the services to the Internet directly. This approach is a non-starter, though, because of the huge security implications of exposing data sources directly to the Internet. Implemented this way, each system would be individually responsible for its own encryption, decryption, and authentication. Granted, you can solve most of these problems by requiring synchronization through a VPN connection, but a VPN solves the problem at the expense of adding a (sometimes frustrating) step before every synchronization.

In terms of robustness, the fewer communication sessions there are, the fewer opportunities for problems. Because the application interacts directly with each service (often multiple times), there are far more chances for communication errors to occur, and therefore many more ways to leave the synchronization in an incomplete state that must be rolled back.

This architectural approach will also tend to require large volumes of data transfer. Consider a situation where your business’s inventory is exposed through a series of simple services: getInventoryList(), getInventoryByCategory(), getItemById(), and so on. If your inventory contains 5,000 items and fewer than 10 items change each day, how do you go about synchronizing just the items that change? You can’t.

Because the inventory system was only designed for central access, it has no concept of change tracking. The only way to get the updated inventory list is to request the entire list again. With this list, the items that have changed can be determined through the application by comparing the old and new lists, item by item.
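
In code, that item-by-item comparison amounts to indexing both lists by ID and diffing them. The following is a minimal sketch, with a hypothetical Item record standing in for whatever getInventoryList() actually returns:

    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public class InventoryDiff {
        // A record gives us value-based equals(), which the diff relies on.
        record Item(long id, String name, BigDecimal price) {}

        // Compare the previously synchronized list against a fresh download
        // and keep only the items that actually changed.
        // (Detecting deletes requires the reverse pass: old IDs missing from fresh.)
        static List<Item> changedItems(List<Item> old, List<Item> fresh) {
            Map<Long, Item> oldById = old.stream()
                    .collect(Collectors.toMap(Item::id, Function.identity()));
            List<Item> changed = new ArrayList<>();
            for (Item item : fresh) {
                Item previous = oldById.get(item.id());
                if (previous == null || !previous.equals(item)) {
                    changed.add(item); // new or updated item
                }
            }
            return changed;
        }
    }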

Furthermore, assume that the online version of the application lets the user view an item’s price in two alternate currencies. When the user requests the price in another currency, a request is sent to an externally hosted currency-conversion service. If this same functionality is required offline (because it is unknown which items the user will want to see, and in which currencies), all conversion prices must be stored for each item as part of the synchronization. The application must therefore call the conversion service twice for each inventory item that has changed, and because an average of 10 items change between synchronizations, the application will make 20 requests to the conversion service. Additionally, every application instance must repeat this process. If your business has 100 workers who synchronize daily, that volume translates to 2,000 daily calls to the conversion service, 1,980 of which are redundant.
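
The arithmetic behind that figure:

    2 currency requests/item x 10 changed items = 20 requests per worker per sync
    20 requests x 100 workers                   = 2,000 requests per day
    2,000 total - 20 unique conversions needed  = 1,980 redundant requests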

Wasteful Filtering
Services that were designed for central access, as in the inventory example, will not likely contain any provisions for directly partitioning and filtering data. In the centrally accessed service model, the exposed services are more likely to provide generic data access. Any custom data partitioning and filtering (apart from security and permission filtering) is done by the application. In the inventory example, a given user’s application may need to show only the few hundred inventory items that match that user’s job role. The online application may request the entire inventory and then filter it based on the user’s role. Forcing the application to do this filtering in the offline case wastes both bandwidth and processing power.
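
That wasteful pattern looks something like the following sketch, in which the InventoryService facade and the role-to-category filter rule are hypothetical. The point is that the entire inventory crosses the network before most of it is discarded:

    import java.util.List;

    public class RoleFilteredView {
        record Item(long id, String category, String name) {}

        // Hypothetical stand-in for the centrally designed service facade.
        interface InventoryService {
            List<Item> getInventoryList();
        }

        // The whole inventory is downloaded; the filtering that throws most
        // of it away happens only after the transfer has been paid for.
        static List<Item> itemsForRole(InventoryService svc, String roleCategory) {
            return svc.getInventoryList().stream()
                    .filter(item -> item.category().equals(roleCategory))
                    .toList();
        }
    }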

For the application-maintainability criterion: in addition to there being many installed copies of the offline application, the application is also very tightly coupled to all the services it accesses. If any of the service interfaces or addresses change, every instance of the application will break. Making this situation potentially worse, it may not be obvious to the application that one of the interfaces has changed. The application may get partway through its synchronization and then fail, unable to complete until a new version of the application is installed.

Staged Synchronization
All the issues highlighted so far seem to suggest the need for some level of consolidation in the synchronization process. To accomplish consolidation, a staging database is inserted between the data sources and the offline application. While this new staged synchronization architecture may seem to add an extra level of complexity, it instead solves most of these problems (see Figure 3).

In this architecture, the staging database acts as a buffer between the data sources and the offline application. The staging database’s only job is to support the offline application’s synchronization by consolidating the offline data in a single place: a structure designed for synchronization.


Figure 3. Staged Synchronization Architecture: A staging database between the data sources and the offline application may appear to add an extra level of complexity, but it instead solves most synchronization problems.

The staged synchronization architecture still requires that something be exposed to the Internet. In this case, however, it doesn’t matter how many heterogeneous data sources the data is spread across; only a single point needs to be exposed, which makes it much simpler to provide secure encryption and authentication without requiring a VPN connection.

A properly designed synchronization should involve only a single, bi-directional exchange of data over the public, unreliable network. At synchronization time, the offline application submits all the changes it made offline to the staging database. In return, the staging database responds with all the changes made to the data it stores locally. While the synchronization process should still be designed to handle all possible errors, reducing the number of network interactions to two makes the whole process far more robust.
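
Over HTTP, that single exchange can literally be one request and one response. Here is a sketch using Java’s built-in HttpClient; the endpoint URL and the JSON payload format are assumptions for illustration only:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SyncClient {
        // One round trip: upload local changes, receive server-side changes.
        static String synchronize(String localChangesJson) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://sync.example.com/sync")) // hypothetical endpoint
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(localChangesJson))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // The response body carries every server-side change since the
            // last sync; applying it locally completes the exchange.
            return response.body();
        }
    }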

Note that simply getting the changes into the staging database does not mean that synchronization is complete. The staging database still has to evaluate the changes it received and begin the process of integrating them into the back-end data stores. At this point, you may be thinking that this model has simply offloaded the real synchronization problems to the staging database. While that observation is true, there are benefits to having the data-integration work done at the server. The process of integrating the changes back will always be complex and highly specific to your application. However, after robustly passing the changes to the staging database, the application can rely on the staging database to patiently integrate them using the same methods used by the online application. This integration allows the offline application to be as robust as its online counterpart.
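
One plausible way to structure that server-side work is a background worker that drains the queue of uploaded changes and applies them to the back ends at its own pace. The interfaces below are hypothetical placeholders for your staging database and back-end systems:

    import java.util.List;

    public class IntegrationWorker {
        record Change(long id, String payload) {}

        // Hypothetical staging-database and back-end facades.
        interface StagingDb {
            List<Change> fetchPending(int max);
            void markApplied(Change c);
            void markFailed(Change c, Exception reason);
        }
        interface BackEnd {
            void apply(Change c) throws Exception;
        }

        // Drain uploaded changes and integrate them into the back end.
        // The offline client has already finished its sync by this point,
        // so a slow or failed back-end call never blocks the device.
        static void drain(StagingDb staging, BackEnd backEnd) {
            for (Change change : staging.fetchPending(100)) {
                try {
                    backEnd.apply(change);
                    staging.markApplied(change);
                } catch (Exception e) {
                    staging.markFailed(change, e); // retried on a later pass
                }
            }
        }
    }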

After the application passes the data to the staging database, the synchronization (from the application’s perspective) is complete, even though the integration into the back-end systems hasn’t happened. Therefore, the application cannot participate in the data integration. Depending on your application, this approach can be either a benefit or a drawback. It is a benefit because the application’s synchronization to the staging database can be fast, small, and usually successful. However, the data may have to be modified (because of business rules, conflict resolution, and so on) by the staging database to integrate it back into the data stores. Because the application already has completed its synchronization, it cannot get those changes until its next synchronization.

A staging database can substantially reduce the amount of data sent during synchronization. Returning to the inventory example, the only way to tell what has changed is to request the entire inventory and compare. In the direct remote synchronization architecture, change tracking was carried out by each individual application. In this architecture, the staging database can do all that work on behalf of the offline applications. For example, the staging database could query the inventory list every 15 minutes and update its own inventory table. When any offline application synchronizes, the staging database can use its own change tracking to deliver only those items that have in fact changed.
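
A 15-minute polling loop of that kind is straightforward to sketch with Java’s ScheduledExecutorService; the InventoryFeed and SnapshotStore interfaces below are hypothetical stand-ins for the back-end service and the staging database’s own tables:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class StagingPoller {
        // Hypothetical facades for the back-end service and the staging
        // database's snapshot and change tables.
        interface InventoryFeed { List<String> getInventoryList(); }
        interface SnapshotStore {
            List<String> loadSnapshot();
            void recordChanges(List<String> changed);
            void saveSnapshot(List<String> latest);
        }

        // Poll the back end every 15 minutes, diff against the last snapshot,
        // and record only the changes for later delivery to syncing clients.
        static void start(InventoryFeed feed, SnapshotStore store) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                List<String> latest = feed.getInventoryList();
                List<String> changed = new ArrayList<>(latest);
                changed.removeAll(store.loadSnapshot()); // crude diff: keeps new and updated entries
                store.recordChanges(changed);
                store.saveSnapshot(latest);
            }, 0, 15, TimeUnit.MINUTES);
        }
    }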

However, this approach introduces the possibility of synchronizing stale data to the offline application. In the direct remote synchronization architecture, the inventory services were called directly during synchronization. In this approach, the data synchronized to the application will only be as recent as the last time the staging database updated itself.

It’s All in the Staging
Similarly, in the direct remote synchronization architecture, each offline application had to query a currency conversion service after receiving each change. In this architecture, after the staging database has noted that inventory has changed, it can make these service requests and store the result. This method eliminates the 1,980 redundant requests needed in the direct remote synchronization architecture and effectively “pre-mashes” the data that will be synched to the offline applications.

Just as the staging database can support change tracking, it can also support data partitioning and filtering. Unlike generic data services that were not designed for synchronizing applications, the staging database can be designed for exactly this task. It may still need to perform the same partitioning steps as in the direct remote synchronization architecture, but it can do them once, on the server side, for all offline clients.

Although the application still must be installed on all devices, the staging database totally decouples the application from the back-end data stores. In an extreme case, entire systems can be renamed or protocols changed without the installed offline application ever being updated. As long as the staging database is modified to access the new data, the offline application will remain oblivious.

In the previous architecture, the application had interfaces to SOAP, XML-RPC, and JMS. In this architecture, the offline application has to support only the single protocol that the staging database decides to expose. This approach helps reduce both the application’s size and the number of support issues raised against the application.

The architectures discussed here are very general architectures for supporting offline applications that access data across many heterogeneous systems. The benefits of stronger security, increased robustness, reduced traffic, and simplified application maintenance tip the scales in favor of the staging database solution. The staging database solution allows for the greatest flexibility because the bulk of the synchronization effort is done on the server side. However, as mentioned previously, these architectures represent the two extremes. Your particular setup may be best served by a hybrid approach that combines elements of both.

The approach discussed here really only scratches the surface of the challenge of synchronization to heterogeneous data sources. Most of the complex synchronization problems you encounter will be unique to your application and your infrastructure.

In many ways synchronization is a lot like security. Security is at its best when it’s a core feature of each component. VPN servers act as buffers to provide security to components that lack it as a core feature. Similarly, synchronization works best when it’s designed as a core feature in each synchronizable data store. The staging database, working as a synchronization server, can provide synchronization for data stores that cannot provide it themselves.
