Before discussing SplitRXmode, a quick recap on some networking basics and how packet forwarding is done on an IP network.
First there’s the method most folks are familiar with, which is Unicast. Unicast transmission sends messages to a single network destination identified by a unique IP address, enabling straightforward one-to-one packet delivery. The Broadcast method, on the other hand, transmits a packet to every device on the network that is within the broadcast domain.
The Unicast method is not suitable for information that needs to be simultaneously sent to multiple recipients
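To make the distinction concrete, here’s a minimal Python sketch of both delivery methods using plain UDP sockets. The addresses and port are placeholder values for illustration only.

```python
import socket

# Unicast: one sender, one explicitly addressed receiver.
# 192.168.1.50 and port 5000 are placeholder values.
ucast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ucast.sendto(b"unicast: delivered to exactly one host", ("192.168.1.50", 5000))
ucast.close()

# Broadcast: the same datagram reaches every host in the broadcast
# domain that is listening on the port.
bcast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bcast.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
bcast.sendto(b"broadcast: delivered to the whole subnet", ("192.168.1.255", 5000))
bcast.close()
```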
Finally there’s the multicasting method, where packets are delivered to a group of destinations denoted by a multicast IP address. Multicasting is typically used in applications that need to send information simultaneously to multiple destinations, such as distance learning, financial stock exchanges, video conferencing and digital video libraries. Multicast sends only one copy of the information along the network, with any duplication happening at points close to the recipients, consequently minimizing network bandwidth requirements.
The Multicast method sends only one copy, minimizing bandwidth requirements
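As a rough illustration of that “one copy on the wire” behaviour, the sketch below sends a single UDP datagram to an example multicast group; 239.1.1.1 and port 5007 are arbitrary placeholders. The network, not the sender, fans the packet out to every subscribed receiver.

```python
import socket

GROUP = "239.1.1.1"   # example group from the administratively scoped range
PORT = 5007           # arbitrary example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# TTL = 1 keeps the datagram on the local segment; a larger TTL lets
# multicast routers (e.g. running PIM) forward it further.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

# A single send: replication towards the subscribed receivers is
# handled by the network, not by the sender.
sock.sendto(b"market data tick", (GROUP, PORT))
sock.close()
```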
For multicast, the Internet Group Management Protocol (IGMP) is used to establish and coordinate membership of the multicast group. This allows single copies of information to be sent from the multicast sources over the network, so it’s the network that takes responsibility for replicating and forwarding the information to multiple recipients. IGMP operates between the client and a local multicast router, while layer 2 switches with IGMP snooping listen in on these IGMP transactions to learn which of their ports need the group’s traffic. Between the local and remote multicast routers, multicast routing protocols such as PIM are then used to direct the traffic from the multicast source to the many multicast clients.
A typical IGMP architecture layout
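On the receiving side, the IGMP membership report is triggered simply by joining the group. The Python sketch below (same placeholder group and port as above) shows a receiver joining and later leaving a multicast group via standard socket options.

```python
import socket
import struct

GROUP = "239.1.1.1"   # must match the group the sender transmits to
PORT = 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is what makes the host's IP stack send an IGMP
# membership report; snooping switches and the local multicast router
# learn from it which ports need the group's traffic.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)
print(f"received {data!r} from {sender}")

# Dropping membership triggers the corresponding IGMP leave.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
sock.close()
```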
In the context of VMware and virtual switches, there’s no need for the vSwitches to perform IGMP snooping in order to recognise which VMs have IP multicast enabled. This is because the ESX server has authoritative knowledge of the vNICs: whenever a VM’s vNIC is configured for multicast, the vSwitch automatically learns the multicast Ethernet group addresses associated with that VM. With the VMs using IGMP to join and leave multicast groups, the multicast routers send periodic membership queries, which the ESX server allows to pass through to the VMs. The VMs that hold multicast subscriptions then respond to the multicast router with their subscribed groups via IGMP membership reports. IGMP snooping in this case is done by the usual physical layer 2 switches in the network, so that they can learn which interfaces require forwarding of multicast group traffic. So when the vSwitch receives multicast traffic, it forwards copies to the subscribed VMs in much the same way as unicast, i.e. based on destination MAC addresses. With the vSwitch responsible for tracking which vNIC is associated with which multicast group, packets are only delivered to the relevant VMs.
Multicasting in a VMware context prior to SplitRXMode
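Since that forwarding decision is made on destination MAC addresses, it’s worth recalling how an IPv4 multicast group maps onto an Ethernet multicast MAC: the fixed 01:00:5e prefix followed by the low 23 bits of the group address. The small helper below is purely illustrative.

```python
import ipaddress

def multicast_ip_to_mac(group_ip: str) -> str:
    """Map an IPv4 multicast group to its Ethernet multicast MAC address.

    The MAC is the fixed OUI 01:00:5e followed by the low 23 bits of the
    group address; this is the destination MAC a vSwitch (or any L2
    switch) sees on the multicast frames.
    """
    ip = int(ipaddress.IPv4Address(group_ip))
    low23 = ip & 0x7FFFFF
    mac_bytes = [0x01, 0x00, 0x5E,
                 (low23 >> 16) & 0x7F,
                 (low23 >> 8) & 0xFF,
                 low23 & 0xFF]
    return ":".join(f"{b:02x}" for b in mac_bytes)

print(multicast_ip_to_mac("239.1.1.1"))   # -> 01:00:5e:01:01:01
```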
While this method worked fine for some multicast applications, it wasn’t sufficient for the more demanding ones and hence stalled their virtualisation. The reason is that the packet replication for the receiving VMs was processed in a single shared context, which became a constraint: with a high VM to ESX ratio, the aggregate packet rate was high and often caused large packet losses and bottlenecks. So with the release of vSphere 5, the new SplitRXMode was introduced not only to address this problem but also to enable the virtualisation of demanding multicast applications.
With SplitRXMode, the processing of received packets is now split across multiple, separate contexts, with the packet replication carried out by the hypervisor instead. Because the multiple receivers sit on the same ESX server, this also eliminates the need for the physical network to carry multiple copies of the same packet. With the only caveat being that it requires a VMXNET3 virtual NIC, SplitRXMode uses multiple physical CPUs to process the network packets received in a single network queue, which can noticeably improve network performance for certain workloads. Instead of a single shared network queue context, SplitRXMode lets you specify which vNICs have their packets processed in a separate context, consequently improving throughput and maximum packet rates for multicast workloads. While there may be some concern that this adds significant CPU overhead, those running Intel’s powerful new E5 processors should have little or no concern.
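As a rough sketch of how you might switch this on for a particular vNIC, the snippet below uses pyVmomi to set the ethernet0.emuRxMode advanced setting to 1 (ethernet0 being the VM’s first, VMXNET3, adapter). The vCenter hostname, credentials and VM name are placeholders, the change is typically applied with the VM powered off, and you should check the key name against the VMware documentation for your vSphere version before relying on it.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; in practice you would also handle SSL
# certificate verification appropriately for your environment.
si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()

# Locate the VM by name (simple linear search, for illustration only).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "multicast-receiver-vm")
view.Destroy()

# SplitRXMode is controlled per vNIC through an advanced setting;
# "ethernet0.emuRxMode" = "1" targets the VM's first adapter.
spec = vim.vm.ConfigSpec(
    extraConfig=[vim.option.OptionValue(key="ethernet0.emuRxMode", value="1")]
)
task = vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```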
So if you’re considering multicast workloads with multiple simultaneous network connections for your VMware environment (e.g. multiple VMs on the same ESX server receiving multicast traffic from the same source), then take a closer look at SplitRXMode. If not, you might, like everybody else I’ve spoken to, completely forget about it.