Step 4: Configure and Customize Networking
The VMware Workstation installation process automatically configures two new (virtual) network adapters on your machine: VMNet1 and VMNet8. The
VMNet1 adapter is used for the private networking between the host and the virtual machines. VMNet8 is used for NAT networking, which enables sharing of the host's external network access with the virtual machines.
These adapters are essential for proper network operation between the virtual machines and the host, and for the virtual machines' access to the Internet.
Through these adapters, VMware provides DHCP services to the virtual machines, as well as NAT access to the Internet.
VMware Network Configuration
It is time now to look at some basics of VMware network configuration and how they pertain to the cluster configuration.
VMware Workstation supports three network modes:
- Bridged networking: Virtual machines have full access to the host's network. However, to gain access to that network they must be assigned their own IP addresses.
- NAT: With a NAT configuration, guest machines do not have their own IP addresses on the external network; they are assigned addresses on the private network within the virtual environment. Virtual machines reach the external network via the host machine's VMNet8 adapter, and the host translates traffic between the virtual machines and the outside network.
- Host Only networking: This mode enables connections only between the host machine and the virtual machines. Virtual machines have no access to the external network.
The ability to create the virtual network adapters and configure network options as described above is essential to the cluster prototyping process.
For the example cluster, the best approach is to start with Host Only networking to establish the interconnectivity between the machines in the cluster, and then to test it from the host machine.
For the cluster configuration, you need to create a subnet for the machines in the cluster and assign static IP addresses to them. Assign the addresses manually from the class C network range <net>.3 to <net>.192. (Addresses outside this range are reserved.)
On Linux, you can configure the IP address, netmask, and broadcast address for each machine. One approach is to add a command like the following (using each machine's IP address as specified in the MySQL tutorial):
/sbin/ifconfig eth0 192.168.0.10 netmask 255.255.255.0 broadcast 192.168.0.255
and execute it on startup.
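On many older Linux distributions, a simple way to run such a command at startup is to append it to /etc/rc.d/rc.local. As a minimal sketch (the subnet and host numbers are illustrative, and the rc.local path varies by distribution), a small helper keeps each node's command consistent with the addressing plan above:

```shell
#!/bin/sh
# Build the static-IP assignment command for a cluster node.
# The 192.168.0.0/24 subnet matches the example above; host
# numbers within the .3-.192 static range are illustrative.
SUBNET="192.168.0"

node_ifconfig_cmd() {
    # $1 = host number for this node
    echo "/sbin/ifconfig eth0 ${SUBNET}.$1 netmask 255.255.255.0 broadcast ${SUBNET}.255"
}

# On each virtual machine, append its command to rc.local so it
# runs at boot (requires root; path varies by distribution):
#   node_ifconfig_cmd 10 >> /etc/rc.d/rc.local
node_ifconfig_cmd 10
```

Generating the command per node rather than hand-typing it on each machine avoids the typo-in-the-netmask class of errors that are hard to diagnose later.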
Since the VMware host does not automatically provide internal DNS service for the virtual machines, you must either configure one of the machines to act as a DNS server or edit the hosts files on each machine (the details of which are outside the scope of this article).
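For a small cluster, the hosts-file route is the simpler of the two options: the same block of entries is appended to /etc/hosts on every machine. A sketch follows, writing to a temporary file for illustration; the hostnames and addresses are hypothetical and should match your own addressing plan:

```shell
# Sketch: shared name resolution for the cluster via hosts entries.
# Written to a temp file here for illustration; on each VM these
# lines would be appended to /etc/hosts. Names are hypothetical.
cat > /tmp/cluster-hosts <<'EOF'
192.168.0.10   mgmt.cluster.local    mgmt
192.168.0.20   sql.cluster.local     sql
192.168.0.30   data1.cluster.local   data1
192.168.0.40   data2.cluster.local   data2
EOF
```

Keeping one canonical copy of this block and copying it to every machine avoids the subtle failures that arise when two nodes disagree about a hostname.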
The simplest cluster configuration has no firewalls between the machines, enabling software components to interact with each other based on your configuration preferences. In a more sophisticated configuration, you could configure special-purpose machines to serve as the routers/firewalls. (More on this option in Step 6).
Step 5: Customize Cluster-Aware Software Components
Once you have established the networking between the virtual machines, the cluster-aware software components running inside a virtual machine "see" it exactly as they would a physical one: the network and surrounding software look the same. From this point on, the example simply follows the cluster setup steps for the components it prototypes: the MySQL Management Server on machine 18.104.22.168, the MySQL Server on 22.214.171.124, and the data servers on 126.96.36.199 and 188.8.131.52 (see Figure 3). You can follow the MySQL 5.1 clustering instructions directly; virtualization requires nothing extra beyond enough RAM to run all the virtual machines concurrently.
|Figure 3. MySQL 5 Virtual Cluster Topology|
Note: Red Hat-based systems require a few extra steps to enable multicast-based clustering.
To enable multicast on your primary network card (likely eth0), use the following command:
ifconfig eth0 multicast
Then add a route for the multicast address range:
route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
Step 6 (Optional): Configure a Firewall
To simulate a firewall, you could install a very small Linux virtual machine with two (virtual) network adapters (eth0, eth1), configuring Bridged networking on one adapter to allow incoming traffic from the external network into that machine, and making the other adapter a member of your virtual cluster's subnet. Using iptables (or the older ipchains) on Linux, you can then define the rules for traffic allowed between the external network (through the bridged adapter) and the private subnet.
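As a minimal sketch of what such a dual-adapter firewall VM's rules might look like, the following writes a ruleset in iptables-restore format. The interface roles, the cluster subnet, and the single allowed port (MySQL's 3306) are assumptions; a real configuration would be tailored to your topology and loaded as root:

```shell
# Sketch: forwarding rules for a two-adapter firewall VM.
# Assumptions: eth0 = bridged (external), eth1 = cluster side,
# and MySQL (TCP 3306) is the only service exposed externally.
cat > /tmp/fw-rules <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Let established sessions continue in both directions
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow external clients to reach MySQL inside the cluster
-A FORWARD -i eth0 -o eth1 -p tcp --dport 3306 -j ACCEPT
# Allow the cluster machines to initiate outbound connections
-A FORWARD -i eth1 -o eth0 -j ACCEPT
COMMIT
EOF
# Loading (on the firewall VM, as root):
#   iptables-restore < /tmp/fw-rules
```

The default-DROP policies mean anything not explicitly allowed is blocked, which mirrors how you would want a real cluster firewall to behave.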
Now that you have configured the virtual machines to represent the cluster of physical machines, getting the clustered application up and running is entirely a matter of following the directions laid out in the MySQL documentation; from this point on, nothing is specific to virtual machine operations. Note that any configuration error you encounter during setup will most likely stem from improper network settings on Linux, so it is essential that you understand the intricacies of the network configuration before embarking on cluster prototyping.
Once the virtual cluster is established, you can proceed with the testing and experimentation typical for this type of architecture: examining load-balancing behavior by generating load and watching the switching between servers, suddenly bringing down (powering off) one of the servers, and so on.
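The failover experiments in particular benefit from a little automation: after powering off a node, you want to poll it (or its replacement) until it responds or you give up. A tiny helper along these lines, with the probe command (for example a mysqladmin ping or an ICMP ping, both hypothetical choices here) supplied by the caller:

```shell
#!/bin/sh
# Sketch: poll a node with a caller-supplied probe command until
# it succeeds or the attempts run out. Example probes (hypothetical):
#   poll_node 5 mysqladmin -h 192.168.0.30 ping
#   poll_node 5 ping -c1 192.168.0.30
poll_node() {
    attempts=$1; shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@"; then
            return 0    # node responded
        fi
        i=$((i + 1))
        sleep 1
    done
    return 1            # node never responded
}
```

Running such a probe in one terminal while you power a server off in another makes the cluster's switchover time directly observable.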
Keep in mind that virtual machines in this configuration will not exhibit the same performance characteristics as their physical counterparts; they will run slower. However, the relative performance ratios, failures, and successes observed during experimentation on the virtual machines should carry over to the physical counterparts. If you experience performance issues with data replication between two virtual data servers, you will likely see the same issues in the physical environment, and the same applies to any positive results you observe during testing.
In my own practice, I was able to successfully configure and prototype a very large enterprise application cluster (a web server cluster, an application server cluster, and a database) entirely in the virtual environment of the desktop. By following the configuration steps from the virtual environment and applying the lessons learned there, I was able to build and configure the production-class physical environment based on my virtual prototype in record time.
Furthermore, I was able to replicate, diagnose, research, and resolve, with a high degree of fidelity, issues on the virtual cluster that originally appeared in the physical environment, saving hours of lengthy research and investigation on the less accessible physical machines.