In the dark ages of distributed computing, all we had were fixed application programming interfaces (APIs). Want to call a function or procedure on a server? Read the documentation for the API, which specifies the procedure, its parameters, and the values they can take. Better follow the rules precisely, or all you'll get for your trouble is an error message. And what do you do when the functionality of the procedure changes? New version of the API. New version of the documentation. And new version of all clients that call the API. In other words, such traditional APIs were tightly coupled.
Enter Web Services. Loosely coupled, right? The client (now called the consumer) looks up the WSDL Service contract in a registry/repository. The WSDL tells the consumer what it needs to know to access the Service. Change the Service, change the WSDL. Now the consumer's request is always valid, as long as it conforms to the WSDL du jour.
Only Web Services never quite worked that way. Looking up WSDL files for every request was far too slow and awkward. WSDL never seemed to tell the consumer everything it needed to know. Vendor implementations of the WSDL and other standards varied -- and still vary to this day. And WSDL files specified the operations for the Web Service. Change the operations, change the consumer. So much for loose coupling.
Enter REST. Gone are the problematic operations of Web Services, replaced with a uniform interface: GET, POST, PUT, and DELETE are all you get. Pick an operation and a URI and never have to worry about tight coupling again.
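That uniform interface is easiest to see in code. Here is a minimal sketch of a resource that answers only the four methods above; the `OrderResource` class and its field names are invented for illustration, not part of any real framework:

```python
# Hedged sketch: every resource answers the same four operations.
# What varies between resources is the data, not the interface.
# All names here are illustrative assumptions.

class OrderResource:
    def __init__(self):
        self.orders = {}
        self.next_id = 1

    def post(self, data):   # POST: create a new order, return its id
        oid = self.next_id
        self.next_id += 1
        self.orders[oid] = data
        return oid

    def get(self, oid):     # GET: read an existing order
        return self.orders[oid]

    def put(self, oid, data):  # PUT: replace an order's representation
        self.orders[oid] = data

    def delete(self, oid):  # DELETE: remove the order
        del self.orders[oid]

res = OrderResource()
oid = res.post({"item": "book"})
print(res.get(oid))  # {'item': 'book'}
```

Swap in a customer, an invoice, or anything else, and the four methods stay the same; only the representations change.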
Except for that damn URI. Construct a URI like http://www.example.com/processID?ID=12345 and you're back in the nightmare of tightly coupled APIs, with your procedure (processID) and its parameter (ID). Instead, say the REST purists, use a URI like http://www.example.com/processID/12345. Leave it up to the server to figure out what you want when you GET or PUT to that URI. Problem solved.
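The two styles are easy to contrast side by side. A small sketch, reusing the URIs from the text (example.com is of course a placeholder, and both helper functions are invented for illustration):

```python
# RPC-style vs. resource-style URI construction.
# Both helpers are hypothetical; only the URI shapes matter.

def rpc_style_uri(proc: str, **params) -> str:
    """RPC-style: a procedure name plus named parameters in the query string."""
    query = "&".join(f"{k}={v}" for k, v in params.items())
    return f"http://www.example.com/{proc}?{query}"

def resource_style_uri(*segments) -> str:
    """Resource-style: identifiers live in the path; the verb comes from HTTP."""
    return "http://www.example.com/" + "/".join(str(s) for s in segments)

print(rpc_style_uri("processID", ID=12345))
# http://www.example.com/processID?ID=12345
print(resource_style_uri("processID", 12345))
# http://www.example.com/processID/12345
```

Note that the client still has to know how to build the second URI, which is exactly the problem the next paragraph raises.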
Or is it? How does the client know that the second URI above is a valid choice? By referring once again to the API documentation, which you keep in a repository somewhere, just as we kept WSDL files in our registry/repository. Change the way the server works, update the documentation, and expect clients to fall in line.
Didn't work with Web Services, doesn't work with REST.
The answer: hypermedia-based discovery. REST isn't an API standard at all -- it's an architectural style for building distributed hypermedia applications. Any time the client needs information about a resource, it simply follows a hyperlink to get it.
All a client needs is the starting point, which we call the bookmark. That never changes. In the example above, that would be http://www.example.com. Perform a GET on that, get back a representation with hyperlinks that indicate what the client can do next. Repeat as necessary. Want to change the resource behavior? No problem, simply change the links.
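The bookmark-and-follow loop above can be sketched in a few lines. This is a toy, assuming a HAL-like JSON shape with a `_links` member; the URIs, link relations, and in-memory "server" are all invented for illustration:

```python
# Hypermedia-driven discovery, sketched without a network.
# REPRESENTATIONS stands in for the server; each representation
# carries the links that say what the client can do next.
# URIs and relation names ("orders", "latest") are assumptions.

BOOKMARK = "http://www.example.com/"

REPRESENTATIONS = {
    "http://www.example.com/": {
        "_links": {"orders": {"href": "http://www.example.com/orders"}},
    },
    "http://www.example.com/orders": {
        "_links": {"latest": {"href": "http://www.example.com/orders/12345"}},
    },
    "http://www.example.com/orders/12345": {
        "status": "shipped",
        "_links": {},
    },
}

def get(uri: str) -> dict:
    """Stand-in for an HTTP GET returning a parsed representation."""
    return REPRESENTATIONS[uri]

def follow(start: str, *rels: str) -> dict:
    """Start at the bookmark and walk link relations, never hard-coding URIs."""
    doc = get(start)
    for rel in rels:
        doc = get(doc["_links"][rel]["href"])
    return doc

order = follow(BOOKMARK, "orders", "latest")
print(order["status"])  # shipped
```

If the server moves the order resource to a new URI, only the link changes; the client still asks for "orders" and then "latest", and nothing on its side breaks.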
Simple when the client is a browser, trickier when the user agent is an arbitrary program, but the principle is always the same. Any client can determine what any resource can do for it simply by following hyperlinks. The API is the bookmark, which never changes.
Finally, loose coupling in action.