Read-Only Web Services Scenarios
If this parameterization seems naggingly familiar, it is in fact the same issue that arises any time you use web services. For the moment, consider only read-only web services based on HTTP GET. There are effectively two distinct scenarios where read-only web services are likely to occur.
The first is a situation where, for one reason or another, it is not feasible for the model in question to fit readily within the client environment directly (the data is hosted in a database, security permissions are involved, and so forth). You can label such services "convenience services." In theory, you could host the information locally, but it wouldn't be efficient to do so. In this particular case, the data environment is fundamentally static; the same call made at two different times, but with otherwise identical parameters, would retrieve the same content. You could theoretically create a schema for such a call, which would consist of a specific (albeit potentially large) set of values. A good example of this approach would be a postal code registry that lets you map postal codes to given townships in an area. While it is possible this registry may change, the change would be so seldom as to be insignificant to the modeler.
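The defining property of a convenience service is that identical parameters always yield identical results, which makes aggressive client-side caching safe. The sketch below illustrates this with a hypothetical township lookup; the registry dictionary and the `lookup_township` function are illustrative stand-ins for what would, in practice, be a remote HTTP GET call.

```python
from functools import lru_cache
from typing import Optional

# Hypothetical static registry; in a real deployment this data would
# live behind the remote convenience service, not in client memory.
_POSTAL_REGISTRY = {
    "10001": "Manhattan",
    "60601": "Chicago",
    "94103": "San Francisco",
}

@lru_cache(maxsize=None)
def lookup_township(postal_code: str) -> Optional[str]:
    """Resolve a postal code to its township.

    Because the underlying data set is effectively static, the same
    argument always yields the same result, so caching responses
    indefinitely is safe.
    """
    # A real client would issue an HTTP GET here, e.g.:
    #   GET /townships?postal_code=10001
    return _POSTAL_REGISTRY.get(postal_code)
```

Note that the unbounded `lru_cache` is only appropriate because the data is static; for the dynamic services discussed next, such caching would serve stale results.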
The second situation, however, is considerably more interesting. This case is one where the service itself is working with a dynamic environment. For instance, take the archetypal web service, which retrieves the changes in an equity stock from the beginning of the day to the current time (plus or minus some reporting delta). To keep the example focused, say the service provides a listing of a given set of stocks that have increased in value since the last reporting period. The taxonomy in this case is both functional and dynamic; if it were rendered as a set of radio buttons, one for each stock, the number of buttons and the content of those buttons would change every time the web service refreshed.
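A minimal sketch of such a dynamic service follows. The price snapshots and the `gainers` function are hypothetical; the point is that the result set, which is exactly the "taxonomy" a UI would render as radio buttons, is recomputed on every call and can differ from one refresh to the next.

```python
# Hypothetical price snapshots; a real service would pull live quotes.
opening_prices = {"AAPL": 190.0, "MSFT": 410.0, "GOOG": 150.0}
current_prices = {"AAPL": 192.5, "MSFT": 405.0, "GOOG": 151.0}

def gainers(opening, current):
    """Return the symbols that have increased since the opening bell.

    This list is the dynamic taxonomy: its size and contents change
    with every price refresh, so no static enumeration can capture it.
    """
    return sorted(
        sym for sym in current
        if sym in opening and current[sym] > opening[sym]
    )

print(gainers(opening_prices, current_prices))  # ['AAPL', 'GOOG']
```

A static schema could describe the *shape* of this response (a list of symbols), but never its permissible *values* at a given moment.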
|One of the more insidious problems inherent in AJAX services in general is that it becomes increasingly difficult to validate an instance of a data model as that model becomes more diffuse and distributed.|
Now, this issue raises an important question: is validation necessary? Even in a completely trusted network, the answer is likely, "yes, some form of validation is necessary." In such a network, XML (or related serialized content) still needs to be created at some point, and there is a possibility that the creation process for that XML is flawed; however, the validation involved there is more in line with comprehensive unit testing. After you seal the box and determine that the content in such a closed system is valid and consistent, the only source of potential errors would come from flaws in your model itself—something that, by definition, validation cannot solve (as such validation is part of the model).
However, the moment that you introduce the possibility of XML content coming from outside the environment, validation becomes crucial. And because one of the principal roles of XML is as a messaging format among heterogeneous systems, it is likely that you will need some way to determine whether content entering the system is both internally consistent and legitimate.
A static schema language, however, can at best provide structural or base-type validation, and even there, as models become more complex, the likelihood that such a schema can properly validate content becomes something of a game of chance. It cannot validate taxonomic information that exists outside the model, especially in a dynamic context. Moreover, it cannot validate the authenticity of a message.
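The distinction between what a static schema can and cannot catch can be sketched in two phases: a structural check (roughly what a schema grammar provides) followed by a check against a dynamic taxonomy the schema has no way of knowing. The element names, the `valid_symbols_now` set, and the `validate` function below are all illustrative assumptions, not a real validation API.

```python
import xml.etree.ElementTree as ET

# Dynamic taxonomy: the set of symbols valid as of *this* reporting
# period. A static schema, frozen at design time, cannot enumerate it.
valid_symbols_now = {"AAPL", "GOOG"}

def validate(doc: str):
    """Two-phase validation: static structure first, then dynamic taxonomy."""
    try:
        root = ET.fromstring(doc)              # must be well-formed XML
    except ET.ParseError:
        return (False, "structure")
    if root.tag != "quote" or root.find("symbol") is None:
        return (False, "structure")            # what a schema could catch
    if root.findtext("symbol") not in valid_symbols_now:
        return (False, "taxonomy")             # beyond any static schema
    return (True, None)

print(validate("<quote><symbol>AAPL</symbol></quote>"))  # (True, None)
print(validate("<quote><symbol>ENRN</symbol></quote>"))  # (False, 'taxonomy')
```

Both documents above would pass a typical structural schema; only the second phase, consulting live data, can reject the stale symbol. Neither phase, of course, says anything about who sent the message, which is the authenticity gap the next approach tries to close.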
One potential solution is to set up a complex infrastructure of web services tied specifically into SOAP/WSDL interchange, establish a federation system for identity management, wrap everything in encrypted bundles, and essentially build a full handshaking mechanism across all the systems involved to turn the fundamentally unreliable network of the Internet into a closed, private, and totally reliable network. To a great extent this approach drove the creation of most of the WS-* initiatives.