The Architecting Magic Behind Taking Mash Ups Offline

While most of the complexity you encounter in synchronization efforts will be unique to your application and infrastructure, synchronization works best when it's designed to be a core feature in each data store.

As the line between the desktop and the web blurs, more applications are taking advantage of the best of both worlds. Adobe AIR and JavaFX are prime examples of frameworks that benefit from this union. However, as services deliver more of an application's functions, the offline problem starts to creep in. Simply stated, the offline problem prompts this fundamental question: Who supplies the online data when the application goes offline?

The answer to this question may seem obvious: just keep a local copy of the data. That solution is deceptively simple, however, because introducing multiple copies of the same data reveals the real crux of the offline problem: synchronization.
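The "keep a local copy" approach can be sketched in a few lines. The class below is a hypothetical illustration, not any particular framework's API: writes always land in the local copy first and are queued while disconnected, then replayed against the server on reconnect.

```python
import time

class OfflineStore:
    """Minimal sketch of an offline-capable local copy: writes go to
    the local cache and are queued, then replayed on reconnect."""

    def __init__(self):
        self.local = {}    # the application's local copy of the data
        self.pending = []  # queued (op, key, value, timestamp) tuples

    def write(self, key, value):
        # While offline, the local copy is the only copy we can touch.
        self.local[key] = value
        self.pending.append(("put", key, value, time.time()))

    def sync(self, server):
        # On reconnect, push queued changes to the server,
        # then pull down whatever the server has that we lack.
        for op, key, value, ts in self.pending:
            if op == "put":
                server[key] = value
        self.pending.clear()
        self.local.update(server)
        return self.local
```

Even this toy version hints at the trouble ahead: it silently overwrites server-side changes and has no notion of deletes, which is exactly where real synchronization gets hard.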

Related to the blur between desktop and web is the interest in service-oriented architectures (SOAs). This phenomenon is partially driving the recent interest in enterprise mash-ups, which leverage all the features offered by exposed services to quickly integrate heterogeneous data sources into a single, integrated view. This data may originate from data sources that are both internal (human resources, sales, and so on) and external (map services, stock tickers, and so forth) to the company. By their nature, service-based applications draw much of their processing power and logic from the systems that host the services.

However, this leverage means that service-based applications inherently suffer from the offline problem and can work only while connected. For applications that make sense only while online, such as a live stock-ticker application, synchronization is not a problem. The offline problem becomes a true roadblock only when an application attempts to mobilize its data, or allow its users to work while disconnected.

The solution, it would seem, is that the offline application should store the data it needs from all the services locally, and synchronize everything upon reconnection. Unfortunately, synchronization is anything but simple. Even trivial synchronization between two identical databases with identical schema is a very complex undertaking. This simple case still introduces all the complexities of change tracking, managing deletes, and a host of other issues. These problems become even worse when the data is spread across many different sources, all with different protocols, and potentially not under your control (in the case of external services). The question then becomes, how do you architect an occasionally connected application that mashes up data from multiple, heterogeneous data sources?
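The change-tracking and delete-management problems mentioned above can be made concrete with a small sketch. The design below is an assumption for illustration (per-row timestamps, last-write-wins conflict resolution, and tombstones for deletes), not a prescription: the key point is that a delete must leave a marker behind, or the next sync will "resurrect" the row from the peer that never heard about the delete.

```python
import time

class Replica:
    """Sketch of per-row change tracking between two copies of the
    same table. Each row carries a last-modified timestamp; deletes
    leave a tombstone; conflicts resolve last-write-wins."""

    def __init__(self):
        self.rows = {}  # key -> (value, last_modified, is_tombstone)

    def put(self, key, value, ts=None):
        self.rows[key] = (value, ts if ts is not None else time.time(), False)

    def delete(self, key, ts=None):
        # Keep a tombstone rather than removing the row outright;
        # without it, merging with a peer would re-create the row.
        self.rows[key] = (None, ts if ts is not None else time.time(), True)

    def merge(self, other):
        # Last-write-wins: for each key, the newer timestamp prevails.
        for key, (val, ts, dead) in other.rows.items():
            mine = self.rows.get(key)
            if mine is None or ts > mine[1]:
                self.rows[key] = (val, ts, dead)

    def visible(self):
        # The application sees only rows that are not tombstoned.
        return {k: v for k, (v, ts, dead) in self.rows.items() if not dead}
```

Even in this identical-schema, two-replica case, every row needs bookkeeping metadata and a conflict policy; spreading the same logic across many heterogeneous services, each with its own protocol and some outside your control, is where the real architectural work lies.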
