Sometimes you find yourself thinking, "I wonder if I can make my application do this...?" The target of your thoughts might not immediately seem to be a useful, or even sensible, feature; but most developers tend to be attracted to unusual problems that offer a challenge.
For me, that process started when I was working with the Caching Application Block in Enterprise Library 2.0, trying to demonstrate how easy it is to create your own custom providers. I work mainly in ASP.NET, where the requirements for caching differ from most Windows Forms applications. For example, using the default Isolated Storage provider is probably not a realistic option in a web site, and even less so in a multiple-server web farm.
An earlier article of mine described the process for interfacing a custom provider with the Caching Application Block and adding support for the Enterprise Library configuration tools. The custom provider described in that article simply writes the cached data to disk in a configurable folder. This provides opportunities for caching in a way more suited to ASP.NET, although it still does not provide a truly "shared" caching mechanism due to the way that the Caching Application Block works internally.
To achieve a flexible and shared caching mechanism, you need a central cache store. One possible approach is to use the Database Caching provider supplied with Enterprise Library. However, all web caching approaches face the problem of connectivity between the application and the cache repository. Another approach is to use a web service. As it wasn't obvious whether that would even work, let alone be fast enough to be useful, the approach warranted a full test implementation.
Why Use a Web Service?
The Caching Application Block's provider mechanism lets you create a custom provider that stores cached data anywhere you want. It was this that made me wonder if it was possible to cache data within or through a web service, which would allow the provider to cache its data almost anywhere—remotely or locally—without having to write specific code that is directly integrated within Enterprise Library.
The principle is simple enough. Instead of having the backing store provider within the Caching Application Block interact directly with the backing store (the usual approach, as implemented in the Isolated Storage provider and Database provider), the backing store provider simply packages up the data and sends it to a web service.
The web service can then cache or manipulate the data in any way you need. And, if the backing store provider is sufficiently configurable, you can change the URL of the target web service any time you like. In addition, you can add more than one Cache Manager and backing store provider to an application, allowing it to cache the data through multiple web services. Finally, adding support for "partitions" within the provider means that you can implement multiple separate caches within the target backing store.
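The actual provider is written in C# against the Enterprise Library APIs, but the delegation idea can be sketched in a few lines. The sketch below (in Python, purely for illustration) shows a backing store provider that never touches storage itself: it packages each operation, tags it with a partition name, and hands it to a configurable transport pointed at a web service URL. All names here (`WebServiceBackingStore`, the message fields, the example URL) are hypothetical, not the names used in the real provider.

```python
import json
from typing import Any, Callable, Dict

class WebServiceBackingStore:
    """Illustrative sketch only: a backing store provider that forwards
    cache operations to a web service instead of writing them itself."""

    def __init__(self, service_url: str, partition: str,
                 transport: Callable[[str, Dict[str, Any]], Any]):
        self.service_url = service_url  # target web service; can be changed any time
        self.partition = partition      # keeps multiple caches separate in one store
        self.transport = transport      # sends (url, message) and returns a response

    def add(self, key: str, value: Any) -> None:
        # Package the item's data and send it to the service.
        self.transport(self.service_url, {
            "operation": "Add",
            "partition": self.partition,
            "key": key,
            "data": json.dumps(value),
        })

    def remove(self, key: str) -> None:
        # Only a small message travels over the wire: no item data.
        self.transport(self.service_url, {
            "operation": "Remove",
            "partition": self.partition,
            "key": key,
        })
```

Because the transport and URL are injected, the same provider can target any service that understands the messages, and an application can create several providers, each pointed at a different service or partition.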
The Design of the Web Service Caching Provider
Figure 1 shows a high-level view of the approach. As you can see, the core principle is simple enough, although the implementation proved trickier than first expected. Despite some limitations, the result does provide the features I initially wanted to achieve.
|Figure 1. Web Service Caching Provider: The figure shows a high-level view of the web service Caching Provider and associated mechanism for the Caching Application Block.|
Issues that you may want to revisit are:
- Transmitted Data Format—I chose to simplify the types and capabilities somewhat to allow the use of the widest possible range of target web services (including non-.NET platforms).
- Cache Duration Expiration Types—The example described here supports only the SlidingTime expiration type.
- Expiration Callbacks—You would only need this if you intend to use the mechanism within a Windows Forms application—this feature is not really useful in ASP.NET applications.
- Combining Local Caching with Remote Caching through the Web Service—This could improve performance and reduce network load, for example by using the remote service only for archiving data.
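Of the expiration types the Caching Application Block defines, the example supports only SlidingTime, whose rule is simple: an item expires once it has gone unaccessed for longer than its sliding window, and each access resets the clock. The following sketch (illustrative Python, with a hypothetical class name, not the Block's actual `SlidingTime` type) captures that rule:

```python
from datetime import datetime, timedelta, timezone

class SlidingTimeExpiration:
    """Sketch of the SlidingTime rule: an item expires when it has not
    been accessed for longer than its sliding window."""

    def __init__(self, window: timedelta):
        self.window = window
        self.last_accessed = datetime.now(timezone.utc)

    def touch(self, now: datetime = None) -> None:
        # Each read of the item slides the window forward.
        self.last_accessed = now or datetime.now(timezone.utc)

    def has_expired(self, now: datetime = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return (now - self.last_accessed) > self.window
```

This is also why the provider must send an updated "last accessed time" to the web service: without it, the remote store could not apply the sliding rule consistently.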
Despite the limitations, remote caching works reasonably well with small or medium-sized data items over a reasonably fast network link. The Caching Application Block uses an in-memory cache to provide fast local performance when reading cached data, and uses the backing store provider only to implement a persistent store. Therefore, the only interactions with the backing store are:
- Loading the cached items when the application starts up
- Adding new items to the cache when the user adds them to the in-memory store
- Removing existing items from the cache when the user removes them from the in-memory store
- Flushing the cache to remove all items
- Providing a count of the number of cached items
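The five operations above define the entire surface the web service needs to expose. As a sketch of how small that surface is, here is an in-memory stand-in for the target service (illustrative Python; the class and method names are hypothetical, not the real service contract), with partitions keeping separate caches apart:

```python
class CacheService:
    """Illustrative in-memory stand-in for the target web service,
    exposing only the five operations the provider relies on."""

    def __init__(self):
        self._partitions = {}

    def _store(self, partition: str) -> dict:
        return self._partitions.setdefault(partition, {})

    def load_all(self, partition: str) -> dict:
        # Called once at application startup to fill the in-memory cache.
        return dict(self._store(partition))

    def add(self, partition: str, key: str, data: str) -> None:
        self._store(partition)[key] = data

    def remove(self, partition: str, key: str) -> None:
        self._store(partition).pop(key, None)

    def flush(self, partition: str) -> None:
        self._store(partition).clear()

    def count(self, partition: str) -> int:
        return len(self._store(partition))
```

Note that only `load_all` returns bulk data; every other call moves at most one item's data, and `remove`, `flush`, and `count` exchange nothing but a key or an integer.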
As you can see from the list, large volumes of data move over the wire to the web service only at application startup. In contrast, adding an item to the cache involves moving only the data for that item over the wire, and only one way: into the backing store. Other operations just send or receive small SOAP packets that include items such as the cache key, an updated "last accessed time," or an integer value.