Calling Virtual Services
One of the most important parts of a RESTful application is to ensure that the architecture deals with resources rather than services. However, the resources in question may not actually be the ones that are stored. In this particular case, the "real" (i.e., internally stored) resources are the tweets received by a given user (here, the author of this article). Figure 5 shows the "real" and virtual collections.
|Figure 5. Virtual and "Real" Collections: The "real" resources are the tweets received by a given user.|
However, the application actually has three virtual collections:
- The listing of all tweets in the order that they are received, which is contained under the virtual collection /twitter
- The listing of tweets by the screen name of the sender, which is given as /twitter//screen_name, where screen_name is the sender of the message. Thus, all messages from @myfriend would be given as /twitter//myfriend.
- The collection of hash tags that are used to identify specific topics, which would be given as /twitter/search//hashtag (such as /twitter/search//sxsw for the #sxsw hashtag of the South by Southwest conference in Austin)
The double slash indicates that the item following it is either an ID or a query parameter (typically the latter). A defining characteristic of virtual collections (or services) is that each has a distinct query that identifies matching members. Thus, /twitter/search has a query that looks for matching hashtags in entries, while /twitter has a query that looks for matching screen names. When no query parameter is supplied, the service returns the whole set, segmented into pages.
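The path-to-query mapping might be sketched in XQuery along the following lines. This is a hand-waved illustration only: the function name, element names, and collection layout are assumptions, not taken from the actual dispatch.xq.

```xquery
(: Hypothetical sketch of virtual-collection dispatch.
   All names here are illustrative, not the project's actual code. :)
declare function local:resolve($path as xs:string, $param as xs:string?) {
    let $tweets := collection("/db/twitter/statuses")//tweet
    return
        if (starts-with($path, "/twitter/search")) then
            (: /twitter/search//hashtag - match hashtags in the tweet text :)
            if ($param) then $tweets[contains(text, concat("#", $param))]
            else $tweets
        else if ($param) then
            (: /twitter//screen_name - match the sender's screen name :)
            $tweets[screen_name = $param]
        else
            (: no query parameter: the whole set, to be paged downstream :)
            $tweets
};
```

The key point is that each virtual collection is nothing more than a stored query over the same underlying "real" collection.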
The dispatch.xq file performs this differentiation by using a special services.xml file, contained in the database at /db/services/services.xml. A sample services document is shown in Listing 8.
The structure of the services file is generally set up as shown in Figure 6.
|Figure 6. Services Structure: The structure of the services file is generally set up like this.|
The service in this case is the name of the virtual collection, such as /twitter or /twitter/search. In most cases, it has an associated @base, which is the path in the database to the underlying real resources (in this case /db/twitter/statuses/kurt_cagle).
The method is the HTTP method used to invoke the service: GET, POST, PUT, DELETE, or HEAD. Each service has at most one entry per method. If a method is missing, a call using that method does nothing (for instance, if you invoked the service with DELETE but DELETE wasn't listed for that service, then nothing would happen).
For each method, however, there may be any number of distinct faces. A face is roughly analogous to a file format extension, but it should be regarded more as an input or output descriptor. For instance, in the example above, /twitter.xml will output the results as an XML file, while /twitter.list will generate an XHTML list and /twitter.table will generate an XHTML table. With a PUT or POST method, the face describes the expected input formats, such as /twitter.atom accepting only an Atom feed or Atom entry, /twitter.tweetschema accepting a tweet of the appropriate schema, and /twitter.json accepting a JSON input.
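Putting the service, method, and face pieces together, a skeletal services document might look something like the following. The element and attribute names here are inferred from the description above, not copied from the actual services.xml in Listing 8.

```xml
<!-- Hypothetical sketch of a services document; names are assumptions. -->
<services>
   <service name="/twitter" base="/db/twitter/statuses/kurt_cagle">
      <method name="GET">
         <face name="xml"><action><!-- pipeline here --></action></face>
         <face name="list"><action><!-- pipeline here --></action></face>
         <face name="table"><action><!-- pipeline here --></action></face>
      </method>
      <method name="POST">
         <face name="atom"><action><!-- pipeline here --></action></face>
         <face name="json"><action><!-- pipeline here --></action></face>
      </method>
   </service>
   <service name="/twitter/search" base="/db/twitter/statuses/kurt_cagle">
      <method name="GET">
         <face name="table"><action><!-- pipeline here --></action></face>
      </method>
   </service>
</services>
```

Each service/method/face triple resolves to exactly one action, which is what makes the dispatcher's lookup unambiguous.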
Using XProc Pipelines
A face currently has one action, though that is likely to change. An action is a set of one or more instructions that are invoked when the service, method, and face all match. In the example code, the logic for virtual collections with query parameters is the same as for those without, but these are likely to become differentiating factors in the actions.
Within each action is an XProc pipeline. XProc provides a way of creating modular divisions of content, which enables processing in a way that promotes cleaner organization and code reuse. Each step in the pipeline accepts a primary input collection (typically starting with the @base collection of the overarching service), performs processing on that collection, and then passes the result to the next step. The pipeline also maintains a running set of parameters between steps, so that if one step needs to count the number of items and store that tally temporarily for another step to use (as is the case with paging and partitioning), it can record the count there. The final output is then sent to the client.
The XProc implementation in this example is very crude, having only a handful of step types. However, one of the goals of this project is eventually to build up a full XQuery implementation of XProc in eXist. More complete versions of XProc are found in the Dynamic Delivery Services (DDS) of Documentum's xDB 9 database.
A good sample XProc is given in twitter-table.xml, which is contained in /db/twitter/ rather than /db/services (see Listing 9).
This pipeline consists of four distinct steps:
- The first step is expanded because it is needed specifically for one operation: retrieving items either from the general list or from a specific author.
- The second step counts the resulting collection, using an XPath expression to perform the count on the $primary stream and storing it in the $query-count variable.
- The third step partitions the result into pages and retrieves the page indicated by the page parameter (passed via the URL as $page=n, where n starts at 1).
- The final step takes the sorted records and applies a transformation that renders the output as a table.
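In the spirit of the example's simplified pipeline vocabulary, those four steps might be sketched as follows. This is an illustrative reconstruction, not the contents of twitter-table.xml; the step and attribute names are assumptions.

```xml
<!-- Hypothetical sketch of the four-step pipeline; names are assumed. -->
<pipeline base="/db/twitter/statuses/kurt_cagle">
   <!-- 1. Filter: the whole list, or just one author's tweets -->
   <step name="filter">
      <query>//tweet[not($screen_name) or screen_name = $screen_name]</query>
   </step>
   <!-- 2. Count: tally the $primary stream into $query-count -->
   <step name="count" variable="query-count">
      <query>count($primary)</query>
   </step>
   <!-- 3. Page: keep only the page requested via $page=n -->
   <step name="page" size="20"/>
   <!-- 4. Transform: render the records as an XHTML table -->
   <step name="transform" stylesheet="tweet-table.xsl"/>
</pipeline>
```

Each step reads the $primary stream left by its predecessor, and the running parameter set ($query-count, $page) is what lets the paging step know how much there is to page through.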
In a full XProc implementation, the third and fourth steps would likely be redefined as formal steps, perhaps of this form:
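In standard XProc 1.0 vocabulary, a formal pagination step might be declared along these lines. The type name and option names here are invented for illustration; only the p: elements themselves come from the XProc specification.

```xml
<!-- Sketch of a formal step declaration; tw:paginate is hypothetical. -->
<p:declare-step type="tw:paginate"
                xmlns:p="http://www.w3.org/ns/xproc"
                xmlns:tw="http://example.com/twitter">
   <p:input port="source"/>
   <p:output port="result"/>
   <p:option name="page" select="1"/>
   <p:option name="page-size" select="20"/>
</p:declare-step>
```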
|Author's Note: This capability is not implemented yet.|
The advantage of using XProc here is modularity: the ability to reuse common steps easily while still providing flexibility as necessary.