Virtual Machines: A Logical Evolution of Operating Systems

Although microcomputer operating systems have made huge strides since the early CP/M and DOS days, they originally evolved under the basic idea that each computer would be loaded with a single operating system that was tied tightly to the hardware. When you bought an IBM PC or clone, you expected to run DOS or a DOS clone. Similarly, if you bought a Mac, you wouldn’t have expected it to run anything but a Mac OS. Machines and OS’s were tightly interwoven not only because of the performance constraints imposed by limited memory, bus speed, storage space, and transfer rates, but also because the early microcomputer was seen as a “personal” computer, a standalone machine that one person would use in isolation.

All these original design assumptions have changed. Today, essentially all machines are connected via LANs, the Internet, or wireless technologies, and they are often shared by many people. In fact, many of these machines aren’t “personal” at all any more; they do duty as servers, handling hundreds or thousands of different users per day. All the while, operating systems have proliferated: DOS, multiple Windows versions, Linux, FreeBSD, Unix, Sun Solaris, the classic Mac OS (Systems 4 through 9), NextStep, OS X, et cetera. These OS changes parallel fundamental changes in use and expectations: developers want to build applications for multiple OS versions and multiple OS’s; organizations want to make full use of their servers without needing to manually redeploy them on different physical servers; and users want to be able to run any applications they need, not just those available for the OS installed on their machines.

Individual products have arisen to meet these needs by evolving the concept of virtualization, which IBM pioneered in the 1960’s when its researchers began partitioning the mainframe so that it appeared to run many individual operating systems, or virtual machines (VMs), simultaneously. However, while VMs that run on full OS’s, virtual execution engines such as those used by Java and .NET, boot managers, and multitasking OS’s are all useful solutions, they simply don’t go far enough to meet current needs.

New Uses, New Expectations

The changes in use and expectations demand equally fundamental changes to microcomputer operating systems. So far those aren’t forthcoming, but VM products available today can solve many of the problems. These products run on top of a full host OS installation. The market leaders are VMware’s family of products, which run on both Windows and Linux, and Microsoft’s Virtual PC, which runs only on Windows (but can host Linux). Others, notably the open source Xen hypervisor project, are not only catching up rapidly, but also migrating into various distributions of the Linux OS itself, meaning that VM capability becomes an integral part of the OS, not an add-on product.

Why do I believe current solutions aren’t enough? Primarily because of performance, convenience, security, and management concerns.

For example, the Java virtual machine runs everywhere because it abstracts all OS operations, meaning that the Java VM must often execute many instructions where the same functionality, if requested directly from the host OS, would require far fewer. That abstraction scheme enables Write Once, Run Anywhere (WORA), but Java application performance suffers as a result when the application makes many calls to the underlying OS. Running a boot manager lets you run multiple OS’s, but switching between them requires a reboot, which is rarely convenient. Letting users share machines is useful, but finding out that all users are affected when one user makes changes or gets a virus is not.

Modern hypervisors, adjuncts to an OS that don’t actually control the hardware themselves but act solely as hosts for other operating systems, are the best of the current solutions. They’re extremely efficient, often reducing performance compared to the bare OS by only a few percent. You can run multiple OS’s simultaneously and switch between them without rebooting, and changes made by individual users to their specific VM copies don’t affect the VMs of other users sharing the same machine.

At the hardware level, adding support for virtualization has the potential to provide significant VM performance enhancements. Both AMD (with its Pacifica technology, later renamed AMD-V) and Intel (with its VT line, formerly codenamed Vanderpool) have added support to make the x86 architecture more VM-friendly.
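On Linux you can check whether a CPU advertises these extensions by looking for the relevant flags in /proc/cpuinfo. A minimal sketch (the `vt_support` helper name is my own; Intel VT-x shows up as the `vmx` flag, AMD-V as `svm`):

```shell
# Sketch: detect hardware virtualization support from a Linux cpuinfo
# listing. Intel VT shows the "vmx" CPU flag; AMD's Pacifica/AMD-V shows
# "svm". The vt_support helper name is illustrative, not a standard tool.
vt_support() {
  cpuinfo="${1:-/proc/cpuinfo}"   # default to the live kernel view
  if grep -qw vmx "$cpuinfo"; then
    echo "Intel VT-x"
  elif grep -qw svm "$cpuinfo"; then
    echo "AMD-V"
  else
    echo "none"
  fi
}
```

Hypervisors such as Xen use these extensions, when present, to trap privileged guest instructions in hardware rather than rewriting or emulating them in software.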

Do You Even Need a Full OS?

As a modern computer owner, you own a powerful machine capable of billions of operations per second, with more power on your desktop than the 1960’s-era IBM mainframes could muster. Therefore, your machine is theoretically capable of running any modern operating system. But because many commercial OS producers still collaborate closely with hardware vendors to produce combinations that run only on specific machines, and because modern OS’s are still built with the expectation that they’ll be “close to the metal,” you can’t buy a bare-bones, bootable hypervisor VM. Instead, I’m afraid you’re stuck with loading a full copy of some OS, then a VM product, and then running guest OS’s on top of that.

The question is, why is this still true? Why isn’t there a minimal hypervisor OS available? Perhaps you need to ask the major OS companies’ marketing departments. As a computer owner, do you really want to have a full-blown, close-to-the-metal OS that you’ll have to rebuild whenever there’s a problem? The first company to make a successful bootable and efficient hypervisor for PCs will make a fortune.


The last time you installed an operating system from scratch (not from a stored image) or bought a new computer, how long did it take you to get that machine set up exactly the way you want? New machines usually arrive with the OS installed, but without any of the programs you use or the settings you prefer. Perhaps you’re more organized than I am, but it usually takes me several weeks to get a machine into the exact configuration I like. Sure, I can get it into the ninety percent range fairly quickly: a couple of evenings of installing, uninstalling, and reinstalling software, changing settings, and copying files from my old machine or backup CDs. But for those first few weeks I constantly find little niggling things missing: a Word macro from a template I forgot to copy, file manager settings, missing email or address archives, connection settings, utilities I’ve written or downloaded, etc. I find I need to keep my old machine around and running for a while just to ensure that I have access to all the clutter of files and applications that make my computer my computer.

Wouldn’t it be easier if you could simply copy a file and run your old setup directly on your new computer, knowing that all your settings came along with it? You could, if you didn’t have to muck about with a fresh OS copy each time; in other words, you could if you were running a virtual machine.

Why Can’t an OS Behave More Like a File?

As you’ll see in the in-depth articles in this special report, VMs simplify OS’s by reducing them to single files that you can copy, back up, clone, and deploy quickly and easily, and to which you can accept or deny changes. VMs have already become indispensable for consolidating servers to take full advantage of their power and resources, testing, maintaining large banks of unique machines, and duplicating an environment to ferret out problems. They’re rapidly making inroads onto developers’ desktops as well, because they let you keep your tried-and-true production setup intact while testing new software and beta releases safely, or running multiple OS’s for development purposes, all without the hassle and expense of maintaining and changing hardware, or dealing with multiple reboots. Managing change is an increasingly onerous task, but managing files is a well-known and well-understood process.
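In practice, treating a VM as a file means ordinary file tools do the work of backup, cloning, and deployment. A minimal sketch, assuming a hypothetical guest image named dev-box.img (the names and directory are illustrative; a real image would come from a product such as VMware or Virtual PC):

```shell
# Sketch: once a guest OS is a single disk-image file, routine management
# reduces to file operations. dev-box.img is a hypothetical image name;
# here an empty placeholder file stands in for a real guest disk image.
VM_DIR="${VM_DIR:-$(mktemp -d)}"

: > "$VM_DIR/dev-box.img"   # stand-in for a real guest disk image

# Back up and clone with ordinary file tools -- no OS reinstall involved.
cp "$VM_DIR/dev-box.img" "$VM_DIR/dev-box-backup.img"
cp "$VM_DIR/dev-box.img" "$VM_DIR/dev-box-clone.img"

# Deploying to another machine is just another copy (scp/rsync in practice).
ls "$VM_DIR"
```

Copy-on-write image formats take this further: tools such as qemu-img can create a clone that shares unchanged blocks with a base image, so accepting or rejecting a session’s changes amounts to keeping or deleting a small overlay file.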

Today, we’re only part-way there, but far enough along that you should seriously consider running on a VM all the time. Microsoft has been passing out Virtual PC images to early testers for some time, and they’re so convenient that it’s hard to imagine going back to a time when you would have to set up a full system simply to test drive some new software. To see the full effect, you’d need to be able to get and install full VM images, already set up with the software you need, instead of building your VM from scratch the way you currently install an OS on your machine. That’s not possible with for-profit OS versions (yet), but you can already download pre-configured images from VMware’s Virtual Machine Center, if you’re satisfied with running Linux VMs. In the future, people will feel the same way about all OS’s: Can you imagine having to reinstall an OS from scratch? How quaint!
