Understanding Networking in a Virtual Machine (VM) Environment

[Diagram: Networking in a Virtual Machine environment]

Server Virtualization packages applications as Virtual Machines (VMs) and runs multiple applications on a single server. So how is networking handled when there are multiple VMs on each server, and multiple servers?

The above diagram represents networking in one such Virtual Machine environment (Citrix XenServer). I am using their model to explain networking in Virtual Machines in general; this is only to give you an idea and may not apply to all scenarios.

In the above diagram, the area enclosed within dotted lines represents a server (host). There are multiple Virtual Machines (VMs) within the host (top), and each of them has a Virtual Adapter/Virtual Interface Card. These Virtual Interface Cards in turn connect to a Virtual Switch (vSwitch), shown below the VMs. The Virtual Switch is a virtualization-aware software switch running on each server that uses the same networking protocols as a physical switch.
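To make that learn-and-forward behaviour concrete, here is a minimal Python sketch (not XenServer code; the class names and interface names are invented for illustration) of what a vSwitch does: it learns which virtual interface a source MAC address sits behind and forwards frames accordingly, flooding when the destination is unknown – just like a physical Layer 2 switch.

```python
# Toy model of a virtualization-aware software switch (vSwitch).
from dataclasses import dataclass, field


@dataclass
class VirtualInterface:
    name: str                # e.g. "vm1-vif0" (hypothetical name)
    mac: str                 # virtual MAC assigned to the VM's adapter
    received: list = field(default_factory=list)

    def deliver(self, frame):
        self.received.append(frame)


class VirtualSwitch:
    """Learns source MACs and forwards like a physical L2 switch."""

    def __init__(self):
        self.ports = []       # VIFs plugged into this vSwitch
        self.mac_table = {}   # learned MAC -> VIF mapping

    def plug(self, vif):
        self.ports.append(vif)

    def send(self, src_mac, dst_mac, payload):
        # Learn which port the source MAC lives behind
        self.mac_table[src_mac] = next(p for p in self.ports if p.mac == src_mac)
        frame = (src_mac, dst_mac, payload)
        if dst_mac in self.mac_table:
            self.mac_table[dst_mac].deliver(frame)   # known destination: forward
        else:
            for port in self.ports:                  # unknown destination: flood
                if port.mac != src_mac:
                    port.deliver(frame)


vswitch = VirtualSwitch()
vm1 = VirtualInterface("vm1-vif0", "02:00:00:00:00:01")
vm2 = VirtualInterface("vm2-vif0", "02:00:00:00:00:02")
vswitch.plug(vm1)
vswitch.plug(vm2)
vswitch.send("02:00:00:00:00:01", "02:00:00:00:00:02", "hello")
print(vm2.received)   # the frame reaches vm2 through the vSwitch
```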

The Virtual Switch forwards the traffic to the Physical Interface Cards/Physical Adapters (NICs) on the server. Citrix suggests dividing the traffic from the vSwitch into three groups – Primary Management Interface traffic, Virtual Machine traffic & Storage traffic – and dedicating (at least) one Physical Interface Card (NIC) to each. But this is not mandatory, and all the traffic can flow through a single server adapter too.
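As a small illustration, here is one way to model that split in Python (the NIC names eth0/eth1/eth2 are hypothetical): a mapping from traffic class to the physical adapter that carries it, with a single-NIC plan shown as the equally valid alternative.

```python
# Recommended layout: one dedicated physical NIC per traffic class.
DEDICATED_NICS = {
    "management": "eth0",
    "virtual_machine": "eth1",
    "storage": "eth2",
}

# Not mandatory: everything can also share a single server adapter.
SINGLE_NIC = {traffic_class: "eth0" for traffic_class in DEDICATED_NICS}


def uplink_for(traffic_class, plan=DEDICATED_NICS):
    """Return the physical adapter that carries this traffic class."""
    return plan[traffic_class]


print(uplink_for("storage"))              # eth2 (dedicated NIC)
print(uplink_for("storage", SINGLE_NIC))  # eth0 (everything on one adapter)
```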

Below that, you’ll find the physical server adapters connecting to the appropriate ports on the physical switch. For Citrix XenServer, they suggest that each traffic type connect to switch ports that are in the same physical network (sub-network). This enables VMs to retain their configuration and settings even if they migrate across servers.
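A toy check of why this matters, using hypothetical subnet values: a VM can keep its existing IP settings after migrating only if the uplinks of both the source and destination hosts sit in the same sub-network.

```python
import ipaddress


def can_keep_ip(vm_ip, src_host_subnet, dst_host_subnet):
    """The VM's existing address stays valid only if both host uplinks share the subnet."""
    ip = ipaddress.ip_address(vm_ip)
    return (ip in ipaddress.ip_network(src_host_subnet)
            and ip in ipaddress.ip_network(dst_host_subnet))


print(can_keep_ip("192.168.10.25", "192.168.10.0/24", "192.168.10.0/24"))  # True
print(can_keep_ip("192.168.10.25", "192.168.10.0/24", "192.168.20.0/24"))  # False
```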

One virtual interface card on a VM is required for each physical network you want to connect it to. Each virtual interface card is assigned a ‘virtual’ MAC address (either manually or automatically).
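For the automatic case, here is a sketch of how a virtual MAC could be generated. Setting the unicast and locally-administered bits is standard practice, but the scheme below is an assumption for illustration, not XenServer's actual generator.

```python
import random


def generate_virtual_mac():
    """Return a random unicast, locally administered MAC address."""
    octets = [random.randint(0, 255) for _ in range(6)]
    # Clear the multicast bit (bit 0) and set the locally-administered bit (bit 1)
    octets[0] = (octets[0] & 0b11111100) | 0b00000010
    return ":".join(f"{o:02x}" for o in octets)


print(generate_virtual_mac())   # e.g. '7a:4f:1c:09:b3:5d'
```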

A separate virtual switch is required for each VLAN. Private virtual switches can also exist – VMs connected to them do not connect to the server NICs/outside network.
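The object model below is a hypothetical Python sketch of both points: one virtual switch per VLAN, each tied to a physical uplink NIC, plus a private switch with no uplink, whose VMs cannot reach the outside network.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class VSwitch:
    name: str
    vlan: Optional[int] = None        # VLAN tag carried towards the uplink, if any
    uplink_nic: Optional[str] = None  # physical NIC; None means private/internal-only

    @property
    def is_private(self) -> bool:
        return self.uplink_nic is None


switches = [
    VSwitch("vlan10-switch", vlan=10, uplink_nic="eth1"),
    VSwitch("vlan20-switch", vlan=20, uplink_nic="eth1"),  # separate switch per VLAN
    VSwitch("internal-only"),                              # private: no external reach
]

for sw in switches:
    print(sw.name, "private" if sw.is_private else f"VLAN {sw.vlan} via {sw.uplink_nic}")
```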

In Citrix XenServer, they suggest grouping similar servers into a pool (Pool 1, in this case) and keeping the network configuration constant across all servers within a pool. That way, configuration changes made on one server (the pool master) are automatically replicated across all servers in the pool. They also suggest dedicating a server NIC for management interface traffic on each server.
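A simplified sketch of that behaviour (plain Python objects, not the XenAPI): a change applied on the pool master is mirrored to every member host, so the network configuration stays identical across the pool.

```python
class Host:
    def __init__(self, name):
        self.name = name
        self.network_config = {}


class Pool:
    def __init__(self, master, members):
        self.master = master
        self.members = members

    def configure_network(self, key, value):
        """Apply a change on the pool master, then mirror it to every member."""
        self.master.network_config[key] = value
        for host in self.members:
            host.network_config[key] = value


pool1 = Pool(Host("server-1"), [Host("server-2"), Host("server-3")])
pool1.configure_network("management_nic", "eth0")   # hypothetical setting name
print([h.network_config for h in [pool1.master] + pool1.members])
```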

Each physical management interface (NIC) on the server has its own vSwitch (not shown in the above diagram), and all vSwitches can be centrally controlled through a vSwitch Controller, which makes all the virtual switches look like one large switch. Many physical switch features are supported on the virtual switches too – you can configure ACLs, QoS, traffic monitoring, fine-grained security policies, etc. The management console/central user interface provides detailed visibility into the virtual switch connectivity.
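Here is a rough sketch, with an invented API rather than the real vSwitch Controller interface, of what central control means in practice: one controller pushes the same ACL and QoS settings to the per-host virtual switches, so they behave like a single large switch.

```python
class ManagedVSwitch:
    def __init__(self, host):
        self.host = host
        self.acl_rules = []
        self.qos_limit_mbps = None


class VSwitchController:
    """Central point that applies the same policy to every per-host vSwitch."""

    def __init__(self, switches):
        self.switches = switches

    def apply_acl(self, rule):
        for sw in self.switches:
            sw.acl_rules.append(rule)      # same rule pushed everywhere

    def set_qos(self, limit_mbps):
        for sw in self.switches:
            sw.qos_limit_mbps = limit_mbps


controller = VSwitchController([ManagedVSwitch("server-1"), ManagedVSwitch("server-2")])
controller.apply_acl({"action": "deny", "dst_port": 23})   # e.g. block telnet
controller.set_qos(100)
print([(sw.host, sw.acl_rules, sw.qos_limit_mbps) for sw in controller.switches])
```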

Reference/Further Information:  Citrix XenServer Design: Designing XenServer Network Configurations (pdf).

Note: Physical Switches offer dedicated modules to connect and manage traffic from VMs directly, and that will be dealt with in a separate blog post in the future.


Diagram/Photo credit: By Destination8infinity (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons.