Poking Holes in the Grid

In the future described by William Gibson’s cyberpunk novels, we will interface with computers through a neural connection that allows us to surf a vast realm of interconnected computers, represented as holographic images of various shapes in a virtual world.

While we have quite a way to go until we reach Gibson’s future, the first small steps have been taken with the advent of grid computing. It is now possible to connect thousands of small computers over a network so that they act together as one very powerful supercomputer. Universities and other scientific institutions have long used this technique to tackle problems otherwise too large and complex to solve.

Oracle would like to take this same concept, bring it out of the academic and scientific realm, and apply it to company database applications. In its literature, Oracle cites several trends that are enabling grid computing.

  • Fast networks. The advent of gigabit network technologies means that we can link computer resources together using extremely fast networks.
  • Networked storage. Instead of storage being dedicated to a particular computer, we now have storage area networks that allocate storage to any computer on the network.
  • More computing power. Standard servers are becoming increasingly powerful without having to resort to specialized high-end machines.
  • Ease of configuration. Servers have become easier than ever to configure and plug in to the network.

In Oracle’s view, these trends result in low-cost resources connected through very fast networks that will be dynamically allocated where needed. No longer should database systems consist of islands of servers dedicated to specific applications. Instead, resources such as computing power and disk space can be dynamically allocated as needed and connected via a highway of high-speed networks.

What are the touted benefits of this vision? First, by dynamically allocating resources, companies will use their computing power more efficiently, allowing them to shrink the size (and cost) of their data centers. Second, dynamic allocation of resources provides more fault tolerance, as a failure in one resource can be covered seamlessly by others.

Dubbed “utility computing,” this vision evokes the image of companies plugging in to computing resources with the same ease that one plugs a lamp into an outlet to obtain electricity. And with Oracle 10g, you can partake in utility computing yourself.

While there is much truth to the trends cited above that have made utility computing possible, it’s the very comparison to the electric grid that raises questions for me about the wisdom of the architecture. Nearly a year ago, in August of 2003, the U.S. suffered the largest blackout in North American history, leaving some 50 million people without electricity. A single failure in the electric grid caused a ripple effect that knocked out power across huge swathes of the Northeast. Companies in the affected areas were forced to suspend their operations, unless they had backup generators.

Which makes me wonder: Is dynamically allocating your resources across a grid the best way to create a stable system? For my money, if I want to ensure my systems are up, I’d like the equivalent of my own electric generator: computer resources dedicated to a single system only.

But what about the cost savings associated with a grid? Granted, with electrical power it would be impractical and far too costly for everyone to run a private generator, but the same is not true of computer systems. In my home, we have three computers: one for me, one for my wife, and one for the kids to play their games on. Grid computing advocates would point to the tremendous waste of resources in this scheme, and it’s true that a huge share of those resources goes unused; the computer I’m using to type this article is barely hitting 5 percent CPU utilization. But that doesn’t make me want to chase cost efficiencies by consolidating them. I’d rather have three computers at home that each of us can use whenever we want than try to make one computer efficient enough to serve all of our needs.

Similarly, when it comes to databases and the applications that depend on them, companies would gladly throw more hardware at systems for the sake of improving performance and stability. What stops them is not the cost of the hardware, but the price of the database software.

The Wrong Culprit
Consider this: A souped-up Dell server with four Xeon processors and 16 GB of RAM lists for around $30,000, but with the licensing cost for a major database engine at around $20,000 per CPU, the database license for that machine would come to roughly $80,000, more than two and a half times the price of the hardware. In fact, it’s just that high cost that has boosted the popularity of MySQL. At $500 per server, regardless of the number of CPUs, major Web sites that use it can afford to throw as many servers at the problem as they need.
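To make that arithmetic concrete, here is a quick back-of-the-envelope calculation in Python; the dollar figures are simply the approximate list prices cited above, not vendor quotes:

    # Back-of-the-envelope comparison of hardware vs. database licensing
    # costs, using the approximate list prices cited in this article.
    hardware_cost = 30_000      # four-CPU Xeon server with 16 GB of RAM
    license_per_cpu = 20_000    # per-CPU license for a major database engine
    cpus = 4

    license_cost = license_per_cpu * cpus        # $80,000
    ratio = license_cost / hardware_cost         # about 2.7

    print(f"Database license: ${license_cost:,}")       # $80,000
    print(f"License-to-hardware ratio: {ratio:.1f}x")   # 2.7x

    # MySQL's flat per-server price makes scaling out cheap by comparison.
    mysql_per_server = 500
    print(f"One proprietary license buys "
          f"{license_cost // mysql_per_server} MySQL server licenses")  # 160

That last line is the economics behind the scale-out strategy: at these prices, one proprietary license costs as much as 160 MySQL-licensed servers.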

If database vendors really wanted to offer us a future of greater stability and power, they would simply lower the price of their software so that we could equip each of our systems with its own dedicated database engine. But, as in Gibson’s novels, the world never works as it should.
