Virtual Machines: A Logical Evolution of Operating Systems

The same properties that made virtual machines (VMs) on IBM mainframes indispensable in the 1960s will eventually make VMs indispensable on today's servers and desktop machines. But as of today, they still have some evolving to do.

Although microcomputer operating systems have made huge strides since the early CP/M and DOS days, they originally evolved under the basic assumption that each computer would be loaded with a single operating system tied tightly to the hardware. When you bought an IBM PC or clone, you expected to run DOS or a DOS clone. Similarly, if you bought a Mac, you wouldn't have expected it to run anything but a Mac OS. Machines and OS's were tightly interwoven not only because of the performance constraints imposed by limited memory, bus speed, storage space, and transfer rates, but also because the early microcomputer was seen as a "personal" computer, a standalone machine that one person would use in isolation.

All these original design assumptions have changed. Today, essentially all machines are connected via LANs, the Internet, or wireless technologies, and they are often shared by many people. In fact, many of these machines aren't "personal" at all anymore; they do duty as servers, serving hundreds or thousands of different users per day. All the while, operating systems have proliferated: DOS, multiple Windows versions, Linux, FreeBSD, Unix, Sun Solaris, Mac OS Systems 4-9, NextStep, OS X, et cetera. These OS changes parallel fundamental changes in use and expectations: developers want to build applications for multiple OS versions and multiple OS's; organizations want to make full use of their servers without needing to manually redeploy them on different physical servers; and users want to be able to run any applications they need, not just those available for the OS installed on their machines.

Individual products have arisen to help meet these needs by evolving the concept of virtualization, which IBM pioneered in the 1960s when its researchers began partitioning the mainframe so that it appeared to be running many individual operating systems, or virtual machines (VMs), simultaneously. However, while VMs that run on full OS's and virtual execution engines such as those used by Java and .NET, as well as boot managers and multitasking OS's, are certainly useful solutions, they simply don't go far enough to meet current needs.

New Uses, New Expectations

The changes in use and expectations demand equally fundamental changes to microcomputer operating systems. So far those changes aren't forthcoming, but VM products available today can solve many of the problems. These products run on top of a full host OS installation. The market leaders are VMware's family of products, which run on both Windows and Linux, and Microsoft's Virtual PC, which runs only on Windows (but can host Linux). Others, notably the open source Xen hypervisor project, are not only catching up rapidly, but also migrating into various distributions of the Linux OS itself, meaning that VM capability becomes an integral part of the OS, not an add-on product.
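To give a flavor of how lightweight defining a guest can be, here is a sketch of a guest definition for the Xen hypervisor. The file name, kernel path, disk image, and guest name are all illustrative, and the exact option syntax varies between Xen releases:

```
# /etc/xen/guest1 -- a minimal paravirtualized guest definition (illustrative)
kernel = "/boot/vmlinuz-2.6-xen"              # Xen-enabled guest kernel
memory = 256                                  # MB of RAM for the guest
name   = "guest1"
disk   = ['file:/var/xen/guest1.img,xvda,w']  # file-backed virtual disk
vif    = ['']                                 # one network interface, defaults
```

On a typical Xen 3.x installation, a command such as `xm create guest1` then boots the guest, and `xm list` shows the running domains.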

So why do I believe current solutions aren't enough? Primarily because of performance, convenience, security, and management concerns. For example, the Java virtual machine runs everywhere because it abstracts all OS operations, meaning that the Java VM must often execute many instructions where the same functionality, if requested directly from the host OS, would require far fewer. That abstraction scheme enables Write Once, Run Anywhere (WORA), but Java application performance suffers as a result when the application makes many calls to the underlying OS. Running a boot manager lets you run multiple OS's, but switching between them requires a reboot, which is rarely convenient. Letting users share machines is useful, but discovering that all users are affected when one user makes changes or gets a virus is not.

Modern hypervisors, software layers that don't do the work of a full OS themselves but act solely as hosts for other operating systems, are the best of the current solutions. They're extremely efficient, often costing only a few percent in performance compared to running the OS on bare hardware. You can run multiple OS's simultaneously and switch between them without rebooting, and changes individual users make to their own VM copies don't affect the VMs of other users sharing the same machine.

At the hardware level, adding support for virtualization has the potential to provide significant VM performance enhancements. Both AMD (with its Pacifica technology) and Intel (with its VT technology, formerly code-named Vanderpool) have added support to make the x86 architecture more VM-friendly.
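On a Linux system, you can already check whether a given processor advertises these extensions: the `vmx` CPU flag indicates Intel VT and `svm` indicates AMD's Pacifica. A small sketch, assuming `/proc` is mounted:

```shell
# Report whether the CPU advertises hardware virtualization extensions.
# vmx = Intel VT, svm = AMD Pacifica. Reads the Linux /proc/cpuinfo file.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "hardware virtualization: supported"
else
    echo "hardware virtualization: not reported"
fi
```

Note that on many machines the BIOS can disable the feature, in which case the flag may not appear even on a VT- or Pacifica-capable chip.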
