Client-server is an established architectural mainstay in software development. Clients connect to a central server, make requests, and the server fulfills the request and responds accordingly. This architecture generally assumes that the definition of a server is a physical machine. P2P technology has the ability to change this assumption and provide a much more dynamic and robust server entity.
Imagine that the definition of a server is not a single machine, but the entire collective of a specific, defined P2P network. P2P provides a viable foundation for a compelling server architecture, offering both redundancy and combined computing power. In other words, clients see the P2P network as a single, unified server entity. Clients can dynamically speak to any node in the network without explicit knowledge that they may be referencing different nodes, and can reconnect to any other node should the node they are working with fail.
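To make the idea concrete, here is a minimal sketch in Java of a client that treats a set of peers as one logical server, transparently failing over when a node is down. All names here (FailoverClient, PeerUnavailableException, the use of plain functions as stand-ins for network calls) are hypothetical illustrations, not part of any real P2P library.

```java
import java.util.List;
import java.util.function.Function;

public class FailoverClient {
    // Stand-in for a network-level failure when contacting a peer.
    static class PeerUnavailableException extends RuntimeException {}

    // Each peer is modeled as a function from request payload to response.
    private final List<Function<String, String>> peers;

    FailoverClient(List<Function<String, String>> peers) {
        this.peers = peers;
    }

    // Try each known peer in turn; the caller never learns which node answered.
    String request(String payload) {
        for (Function<String, String> peer : peers) {
            try {
                return peer.apply(payload);
            } catch (PeerUnavailableException e) {
                // Node failed: transparently fall through to the next peer.
            }
        }
        throw new IllegalStateException("no peer available");
    }

    public static void main(String[] args) {
        Function<String, String> dead = p -> { throw new PeerUnavailableException(); };
        Function<String, String> live = p -> "ok:" + p;
        FailoverClient client = new FailoverClient(List.of(dead, live));
        System.out.println(client.request("ping")); // prints "ok:ping"
    }
}
```

The point of the sketch is that failover lives entirely below the `request` call: the client code neither knows nor cares which node, out of potentially many, fulfilled the request.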
Some may consider this akin to clustering, but I see a difference. Most of the clustering implementations I have seen are hampered by one or more of the following characteristics:
- A single point of management, such as a master node, that must regulate all operations on the cluster
- Deliberate, explicit, and potentially rigid configuration
- Inability to grow or shrink the cluster dynamically, at run time
- Deployment restrictions, such as locality to other nodes
- Limitations on node failure conditions
I believe P2P provides a complete alternative that avoids these problems. A completely decentralized network, acting as a single, unified entity, offers a fresh approach to client-server, with better redundancy, greater ability to scale hardware, and a higher level of fault tolerance than current clustering technology.
A Living, Breathing Server
Not too long ago, I worked on a J2EE project that was to be deployed at hundreds of remote offices around the country, with application servers centralized in a single city. I did a fair amount of research into the underpinnings of the J2EE specification and discovered that WAN deployments are very problematic not only for J2EE, but for distributed component technologies in general: they scale poorly on a WAN, as most are better suited to LANs.
In addition, the fact that the remote offices would have rich GUI clients (rather than Web clients) presented a deployment problem as well. Giving the matter some thought, I began to envision a solution in which the "server" in a widely distributed environment was not a machine or a static farm, but a dynamic network of peers, with peers migrating to different parts of the network as clients demanded service. As in the reinvention of client-server I just proposed, the server is once again defined as a network of peers. In a WAN environment, however, needed server services and cache would migrate to a point of locality with their active clients (i.e., to the clients' remote location), rather than existing statically in a centralized locality.
In essence, the makeup of the server at any point in time would depend upon the demands of the clients in any one area of the network. Potential server nodes would sit dormant but available in all parts of the network, and would spring into action, deploying themselves accordingly, as clients requested service from the server. This changes the notion of "multi-tier" to something more like "dynamic-tier" or "demand-tier." The server adjusts its shape with demand.
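A demand-tier server of this kind could be sketched as follows. This is an illustrative Java toy, with all names (DemandTier, ACTIVATION_THRESHOLD, the per-region counters) invented for the example: a dormant peer in a region springs into action once client demand there crosses a threshold, so the shape of the "server" follows the load.

```java
import java.util.HashMap;
import java.util.Map;

public class DemandTier {
    // Hypothetical policy: a dormant node activates after this many local requests.
    private static final int ACTIVATION_THRESHOLD = 3;

    private final Map<String, Integer> demandByRegion = new HashMap<>();
    private final Map<String, Boolean> activeByRegion = new HashMap<>();

    // Record a client request from a region; wake the local peer if demand warrants.
    void clientRequest(String region) {
        int demand = demandByRegion.merge(region, 1, Integer::sum);
        if (demand >= ACTIVATION_THRESHOLD) {
            activeByRegion.put(region, true); // dormant node springs into action
        }
    }

    boolean isActive(String region) {
        return activeByRegion.getOrDefault(region, false);
    }

    public static void main(String[] args) {
        DemandTier net = new DemandTier();
        net.clientRequest("boston");
        System.out.println(net.isActive("boston")); // false: still dormant
        net.clientRequest("boston");
        net.clientRequest("boston");
        System.out.println(net.isActive("boston")); // true: activated on demand
    }
}
```

A real implementation would also deactivate idle peers and migrate cached state along with them, but the activation-on-demand loop is the essential mechanism.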
Architecture-Independent Software Development
I believe a key shift in software development will be the elimination of architectural and deployment concerns in developing distributed software components. Building on the idea of a dynamic, self-adjusting server, software components deployed on these servers should be architecture agnostic. Distributed component architectures tend to expose the local or remote nature of objects, so that developers are faced with accommodating the expected locality or remoteness of an object with the appropriate implementation.
P2P, with its ability to grow and shrink server and client functionality, can provide the plumbing that will free developers from such concerns. In fact, the lines between client and server should be erased, so that each computer is effectively both client and server. In addition, every object will be both local and remote, depending on its run-time context. No more explicit interfaces that dictate whether an object is local or remote. This should all become a non-issue in software development, where deployment and scaling become platform issues, with no design implications for an application's primary problem domain. Imagine a single, unified programming model, with a single assumption: every object is part of a greater collective.
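One way to picture such location transparency is Java's dynamic proxies, where the caller programs against a single interface and the platform decides at run time how the call is routed. The sketch below is purely illustrative (Greeter, LocalGreeter, and deploy are invented names, and the "remote" path is simulated in-process rather than actually crossing the network):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class TransparentLocality {
    interface Greeter { String greet(String name); }

    static class LocalGreeter implements Greeter {
        public String greet(String name) { return "hello, " + name; }
    }

    // Wrap a target so the call could be routed anywhere; caller code is unchanged.
    static Greeter deploy(Greeter target, boolean remote) {
        if (!remote) return target; // in-process: plain local call
        InvocationHandler h = (proxy, method, args) -> {
            // A real P2P platform would marshal the call to a peer here;
            // this sketch simply forwards it to the local target.
            return method.invoke(target, args);
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, h);
    }

    public static void main(String[] args) {
        Greeter local = deploy(new LocalGreeter(), false);
        Greeter remote = deploy(new LocalGreeter(), true);
        // Same interface, same call, regardless of locality:
        System.out.println(local.greet("p2p"));  // hello, p2p
        System.out.println(remote.greet("p2p")); // hello, p2p
    }
}
```

Because locality is decided inside `deploy` rather than in the interface, the application code carries no trace of whether it is talking to an in-process object or a peer elsewhere on the network, which is exactly the non-issue I am arguing deployment should become.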