
Applying Some Peer Pressure

As a big fan of atmosphere and thematic environment, I queued up the “Blade Runner” soundtrack on my computer. Vangelis’ ghostly ambience, like the subject of this editorial, is futuristic in nature, but more importantly, the source of my audio bliss is the MP3 file format (produced from my own purchased copy of the CD, if anyone from the Recording Industry Association of America (RIAA) is reading).

Peer-to-peer (P2P) technology, the advancement that vilified MP3s and the concept of digital file sharing, has created a hubbub over the past couple of years, but the legal thunderstorm that ensued rained on the technology parade. Bring up the phrase “peer-to-peer,” and the conversation undoubtedly will turn to copyright law, the Digital Millennium Copyright Act (DMCA), and the demise of Napster. But this isn’t the fault of the RIAA, the Motion Picture Association of America (MPAA), or any specific lawsuit or legislation. The culpability is ours, the tech community’s, for not seeing the forest for the trees.

Trade magazines and mainstream technical media have obsessed over file sharing and its legally questionable future. Over the past couple of years, we have witnessed the market potential of a truly disruptive technology, but have halted its momentum by not seizing the real pearl in the Napster oyster: a change of mindset. Put simply, file sharing is not the most exciting and compelling use for P2P. File sharing has some valuable uses, but it has served primarily as a prototype and proof of concept, one that has opened a doorway to something greater. P2P will turn computing upside down, becoming the next major shift in software architecture, as soon as we open our eyes to its real value and potential.

True Hardware Resource Utilization
Depending on the nature of the project I am working on, my home network may comprise up to seven computers. More than once, while working through the quiet of the late-night hours, I have listened as one of my computers’ hard drives is flogged with data, its processor light pegged solid as if painted on. Meanwhile, my other six computers enjoy a state of restful hibernation.

It occurred to me one night, as I warmed my hands by the one slaving computer, how absurd our accepted norms for hardware purchase and operation really are. Every time I buy a computer, I get a new processor, a new chunk of memory, and a new hard drive, all of which are imprisoned in the cell of that computer case. Then, when I run some resource-intensive application, that machine gets crushed under the processing load while several other machines in close vicinity sit idle.

Likewise, if I want to save a large file, I have to be sure that particular machine’s hard drive is not full, even if every other machine has 30GB of available space. Or I find my computer running out of memory while the other machines have plenty to spare. Each scenario is absolutely ridiculous. It is analogous to inviting your friends over to your house to help you move the grand piano in your living room, then giving each of them a crack at it, one at a time. The strength lies in the cooperative: the whole is greater than the sum of the parts.

Computers have three primary resources: processing power, memory, and storage space. Presently, we buy new hardware because a particular processing instance (computer) falls short, not because we are necessarily short on resources.

I should be able to leverage the entire union of resources at my disposal. And that’s just me, and my puny processing requirements. Picture the enterprise, with 10,000+ computers, all of which sit idle 90 percent of the time. It is shockingly wasteful, yet no one seems shocked. I propose a simple change in the way individuals and enterprises think about hardware acquisitions: Every new computer purchase should add its resources to the existing heap.

P2P technology can make this possible, erasing physical boundaries and combining computing resources into one virtual resource heap that can be utilized by any application on any computer within the collective. This would allow low-cost, older technology to be leveraged indefinitely, and would drive computing costs down considerably. Imagine never retiring a computer, even when the resource requirements of the applications it runs surpass the maximum amount of memory or processing power the computer can support.
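To make the idea concrete, here is a minimal sketch of such a resource heap, written in Java. The Peer and ResourcePool classes are entirely hypothetical illustrations (no existing P2P framework is implied): each machine advertises its idle capacity to the collective, and work or storage is directed to wherever headroom exists across the whole pool rather than to one physical box.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: each peer advertises its idle resources to a shared pool,
// and work is dispatched to whichever peer currently has the most free capacity.
class Peer {
    final String name;
    int freeCpuPercent;   // idle processor capacity
    int freeMemoryMb;     // unused memory
    int freeDiskGb;       // unused storage

    Peer(String name, int freeCpuPercent, int freeMemoryMb, int freeDiskGb) {
        this.name = name;
        this.freeCpuPercent = freeCpuPercent;
        this.freeMemoryMb = freeMemoryMb;
        this.freeDiskGb = freeDiskGb;
    }
}

class ResourcePool {
    private final List<Peer> peers = new ArrayList<>();

    // Every new computer purchase simply joins the heap.
    void join(Peer peer) {
        peers.add(peer);
    }

    // Pick the peer with the most idle CPU for a compute-bound task.
    Peer leastLoaded() {
        return peers.stream()
                .max(Comparator.comparingInt(p -> p.freeCpuPercent))
                .orElseThrow(() -> new IllegalStateException("empty pool"));
    }

    // Storage is counted across the whole collective, not per machine.
    int totalFreeDiskGb() {
        return peers.stream().mapToInt(p -> p.freeDiskGb).sum();
    }
}

public class ResourceHeapDemo {
    public static void main(String[] args) {
        ResourcePool pool = new ResourcePool();
        pool.join(new Peer("workstation", 10, 512, 5));     // the busy machine
        pool.join(new Peer("old-laptop", 95, 1024, 30));    // mostly idle
        pool.join(new Peer("file-server", 80, 2048, 120));  // mostly idle

        System.out.println("Dispatch work to: " + pool.leastLoaded().name);
        System.out.println("Collective free disk: " + pool.totalFreeDiskGb() + " GB");
    }
}

In this toy version the decision logic lives in one place; a real peer network would have to reach the same decisions without a central coordinator, which is exactly where the interesting engineering lies.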

Re-inventing Client-Server
Client-server is an established architectural mainstay in software development. Clients connect to a central server, make requests, and the server fulfills the request and responds accordingly. This architecture generally assumes that the definition of a server is a physical machine. P2P technology has the ability to change this assumption and provide a much more dynamic and robust server entity.

Imagine that the definition of a server is not a single machine, but the entire collective of a specific, defined P2P network. P2P provides a viable foundation for a compelling server architecture, offering both redundancy and combined computing power. In other words, clients essentially see the P2P network as a single, unified server entity. Clients can dynamically speak to any node in the network without explicit knowledge that they may be referencing different nodes, and can reconnect to any other node should the node they are working with fail.
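As a rough sketch of what this looks like from the client’s side, consider the following Java fragment. The PeerNode and VirtualServer classes are hypothetical stand-ins, not any particular product’s API: the client issues a request against the collective as if it were one server, and failover to another node happens silently inside the “server” abstraction.

import java.util.List;

// Hypothetical sketch: the "server" the client talks to is really a list of peer
// nodes. The client tries one node and transparently fails over to another if
// that node is down; it never needs to know which physical machine answered.
class PeerNode {
    final String address;
    final boolean alive;   // stand-in for a real health check

    PeerNode(String address, boolean alive) {
        this.address = address;
        this.alive = alive;
    }

    String handle(String request) {
        if (!alive) throw new IllegalStateException(address + " is down");
        return "response from " + address + " for: " + request;
    }
}

class VirtualServer {
    private final List<PeerNode> nodes;

    VirtualServer(List<PeerNode> nodes) {
        this.nodes = nodes;
    }

    // Clients call this as if it were a single server; failover is internal.
    String request(String payload) {
        for (PeerNode node : nodes) {
            try {
                return node.handle(payload);
            } catch (IllegalStateException downNode) {
                // This node failed: quietly move on to the next peer.
            }
        }
        throw new IllegalStateException("no peer in the collective could respond");
    }
}

public class VirtualServerDemo {
    public static void main(String[] args) {
        VirtualServer server = new VirtualServer(List.of(
                new PeerNode("peer-a:9000", false),   // this node has failed
                new PeerNode("peer-b:9000", true),
                new PeerNode("peer-c:9000", true)));
        System.out.println(server.request("GET /report"));
    }
}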

Some may consider this akin to clustering, but I see a difference. Most of the clustering implementations I have seen are hampered by one or more of the following characteristics:

  • A single point of management, such as a master node, that must regulate all operations on the cluster
  • Deliberate, explicit, and potentially rigid configuration
  • An inability to grow or shrink the cluster dynamically at run time
  • Deployment restrictions, such as locality to other nodes
  • Limitations on node failure conditions

I believe P2P provides a complete alternative that avoids these problems. A completely decentralized network, acting as a single, unified entity, offers a fresh approach to client-server, with better redundancy, greater ability to scale hardware, and a higher level of fault tolerance than current clustering technology.

A Living, Breathing Server
Not too long ago, I worked on a J2EE project that was to be deployed at hundreds of remote offices around the country, with application servers sitting centralized in a single city. I did a fair amount of research into the underpinnings of the J2EE specification and discovered that WAN deployments are very problematic not only for J2EE, but for distributed component technologies in general. Distributed component technologies scale poorly across a WAN; most are far better suited to LANs.

In addition, the fact that the remote offices would have rich GUI clients (rather than Web clients) presented a deployment problem as well. Giving the matter some thought, I began to envision a solution to all of this, where the “server” in a widely distributed environment was not a machine, or a static farm, but a dynamic network of peers that would migrate to different parts of the network as clients demanded service. Similar to the reinvention of client-server I just proposed, the server is once again defined as a network of peers. However, in a WAN environment, needed server services and cache would migrate to a point of locality with their active clients (i.e., to the clients’ remote location, rather than remaining statically in a centralized locale).

In essence, the makeup of the server at any point in time would depend upon the demands of clients in any one area of the network. Potential server nodes would sit dormant but available in all parts of the network, and would spring into action, deploying themselves accordingly, as clients requested service. This changes the notion of “multi-tier” to something more like “dynamic-tier” or “demand-tier.” The server adjusts its shape with demand.
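Here is a toy sketch of that “demand-tier” behavior, again in Java with hypothetical classes (RegionalPool is an invention for illustration, not a real API): server capacity sits dormant in every region, and a peer is activated only in the region where clients are actually asking for service, so the shape of the server follows demand rather than a fixed deployment plan.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a "demand-tier" server: dormant peers exist everywhere,
// but they spring into action only where clients are requesting service.
class RegionalPool {
    private final Map<String, Integer> dormantPeers = new HashMap<>();
    private final Map<String, Integer> activePeers = new HashMap<>();

    void addDormant(String region, int count) {
        dormantPeers.merge(region, count, Integer::sum);
    }

    // A client in the given region requests service: wake a local peer if available.
    String serve(String region) {
        int dormant = dormantPeers.getOrDefault(region, 0);
        if (dormant > 0) {
            dormantPeers.put(region, dormant - 1);
            activePeers.merge(region, 1, Integer::sum);
            return "activated a peer in " + region;
        }
        return "no local peer available; falling back to the wider network";
    }

    Map<String, Integer> activeByRegion() {
        return activePeers;
    }
}

public class DemandTierDemo {
    public static void main(String[] args) {
        RegionalPool pool = new RegionalPool();
        pool.addDormant("chicago-office", 2);
        pool.addDormant("denver-office", 1);

        // Demand arrives from Chicago, so the server grows there, not centrally.
        System.out.println(pool.serve("chicago-office"));
        System.out.println(pool.serve("chicago-office"));
        System.out.println("Active peers by region: " + pool.activeByRegion());
    }
}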

Architecture-Independent Software Development
I believe a key shift in software development will be the elimination of architectural and deployment concerns in developing distributed software components. Building on the idea of a dynamic, self-adjusting server, software components deployed on these servers should be architecture agnostic. Distributed component architectures tend to expose the local or remote nature of objects, so that developers are faced with accommodating the expected locality or remoteness of an object with the appropriate implementation.

P2P, with its ability to grow and shrink both server and client functionality, can provide the plumbing that will free developers from such concerns. In fact, the lines between client and server should be erased, so that each computer is effectively both client and server. In addition, every object will be both local and remote, depending on its run-time context. No more explicit interfaces that dictate whether an object is distributed or remote. This should all become a non-issue in software development, where deployment and scaling become platform issues with no design implications for an application’s primary problem domain. Imagine a single, unified programming model, with a single assumption that every object is part of a greater collective.
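A minimal illustration of that unified model follows; the OrderService interface and both implementations are hypothetical. The point is that the calling code cannot tell, and does not care, whether the object it invokes lives in the local process or is a stub forwarding to a remote peer.

// Hypothetical sketch: application code programs against one interface and never
// learns whether the object it calls is in-process or hosted on another peer.
interface OrderService {
    double total(String orderId);
}

// Runs inside the local JVM.
class LocalOrderService implements OrderService {
    public double total(String orderId) {
        return 42.0;
    }
}

// Stand-in for a stub that would forward the call to a remote peer.
class RemoteOrderService implements OrderService {
    private final String peerAddress;

    RemoteOrderService(String peerAddress) {
        this.peerAddress = peerAddress;
    }

    public double total(String orderId) {
        // A real implementation would marshal the call over the network here.
        return 42.0;
    }
}

public class LocationTransparencyDemo {
    // The caller's code is identical in both cases: locality becomes a deployment
    // decision, not a design decision baked into the problem domain.
    static void printTotal(OrderService service) {
        System.out.println("Order total: " + service.total("order-17"));
    }

    public static void main(String[] args) {
        printTotal(new LocalOrderService());
        printTotal(new RemoteOrderService("peer-b:9000"));
    }
}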

New Models of Security
P2P can have a profound benefit in security as well. I am not going to open the “security through obscurity” debate here, but perhaps P2P provides a different perspective on security that is worth thinking about. In a decentralized peer network, as opposed to a centralized network, the hacker’s bounty can be distributed across many nodes, and perhaps moved dynamically as a compromise occurs. It is much harder to rob ten banks than it is to rob only one bank, and the attractiveness of any specific node in the network can be significantly diminished if it holds only a small take. In addition, a hacker’s risk of getting caught increases greatly as the time exposed increases; requiring several levels of compromise in a peer network nearly guarantees a longer exposure time.

In addition, peer networks have the unique characteristic of being able to drop nodes entirely without necessarily crippling the collective network’s capabilities. P2P introduces a wonderful new option: simply shutting a node down on a hacker. That is easier said than done, but P2P nevertheless holds significant promise for innovation in security.
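As one small, admittedly speculative example of what “distributing the bounty” could mean in practice, the Java sketch below splits a secret into fragments using simple XOR-based n-of-n secret sharing, so that any single compromised peer holds only random-looking bytes. Storing and moving the fragments among peers is omitted; this illustrates the principle, not a complete security design.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Hypothetical sketch: split a secret into fragments so that no single peer
// holds anything useful on its own. All fragments are needed to reconstruct it.
public class SplitSecretDemo {

    static byte[][] split(byte[] secret, int shares) {
        SecureRandom random = new SecureRandom();
        byte[][] parts = new byte[shares][secret.length];
        byte[] last = secret.clone();
        for (int i = 0; i < shares - 1; i++) {
            random.nextBytes(parts[i]);          // a purely random fragment
            for (int j = 0; j < secret.length; j++) {
                last[j] ^= parts[i][j];          // fold it into the remainder
            }
        }
        parts[shares - 1] = last;                // final fragment completes the XOR
        return parts;
    }

    static byte[] combine(byte[][] parts) {
        byte[] secret = new byte[parts[0].length];
        for (byte[] part : parts) {
            for (int j = 0; j < secret.length; j++) {
                secret[j] ^= part[j];
            }
        }
        return secret;
    }

    public static void main(String[] args) {
        byte[] secret = "customer database key".getBytes(StandardCharsets.UTF_8);
        byte[][] fragments = split(secret, 3);   // e.g., one fragment per peer
        System.out.println("Recovered: "
                + new String(combine(fragments), StandardCharsets.UTF_8));
    }
}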

Eliminating the OS
Operating systems desperately need to be “widgetized” and eliminated as a dominating influence in development. Sure, there should be OS choice, but a choice of implementation from a set of base, low-level functionality is a completely different idea from the operating system decisions we face today. The choice of operating system affects nearly every part of a software development company: it affects costs, the skill-sets and employees needed, the software you can run, the component architecture, administration, the hardware you can use, etc. In addition, it is a never-ending, politically charged issue.

The subservience of applications to the underlying operating system is a barrier impeding technological progress in the software arena. P2P can also play a big role in alleviating this problem. If the context of software development shifts away from the underlying operating system to distributed objects in a peer network of nodes running any operating system, the OS becomes a widget, a relatively unimportant and swappable part of the picture. If this happened, software architecture would be free to pursue what is best for technological progress, not what is best for the operating system.

Expecting Industry Resistance
A couple of the benefits described above alluded to probable resistance to P2P. P2P has been aptly called a disruptive technology, and as with anything disruptive, a few apple carts are bound to be tipped over. For instance, any technology that erases physical hardware boundaries, allowing businesses to leverage existing computing resources rather than purchase new ones, won’t be popular with hardware manufacturers. Technology that erases operating system influence over application architecture will certainly be fought tooth and nail by companies whose operating system controls their revenue streams.

A reinvention of server architecture likely presents a painful shift to middleware vendors as well. But I would offer this thought: file sharing is not only a prototype of P2P technology, but a prototype of the likely industry response to nearly any use of P2P. There are basically two responses to a disruptive technology: protect the status quo at the expense of technological progress, withholding the benefits from developers and consumers in the process; or embrace progress and, through ingenuity, create successful new revenue models. It should not be surprising if those unwilling to reinvent themselves as technological progress demands resist further advances in P2P technology.

P2P holds much promise, but we developers must open our minds to recognize where the real value of P2P lies. And so as my “Blade Runner” CD ends, the haunting voice of a dying Roy Batty wearily proclaims, “I’ve seen things you people wouldn’t believe.” Perhaps as an industry we also will choose to see changes we wouldn’t have otherwise believed; and perhaps these changes lie in the promise of P2P.
