Link aggregation for specific VxRail networks is supported starting with VxRail version 7.0.130. NIC teaming in VxRail is the foundation for link aggregation, which bundles two physical network links into a single logical channel. Link aggregation allows ports on a VxRail node to peer with a matching pair of ports on the top-of-rack switches to support load balancing and optimize network traffic distribution across the physical ports. VxRail networks with heavy resource requirements, such as vSAN and vMotion, benefit most from this traffic optimization.
Each VxRail network is assigned two uplinks during initial implementation. Even if the virtual distributed switch port group is assigned a teaming and failover policy for better distribution across the two uplinks, true load balancing is not achievable without link aggregation. Enabling link aggregation allows the VxRail network to better use the bandwidth of both uplinks, with the traffic flow coordinated by the load balancing hash algorithm configured on the virtual distributed switch and the top-of-rack switches.
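The hash algorithm must be configured consistently on both ends of the link aggregation group. As a minimal sketch, assuming Cisco NX-OS-style syntax (the exact command and available hash options vary by switch vendor and platform), the switch-side hash could be set globally as follows, with a matching load balancing mode selected in the LACP policy on the virtual distributed switch:

  ! Hypothetical NX-OS-style example; verify the syntax for your platform.
  ! Distributes traffic across port-channel member links by source and
  ! destination IP address.
  port-channel load-balance src-dst ip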
Figure 48. Dependent link aggregation features
The following functionality dependencies must be understood when considering link aggregation with VxRail:
The following guidelines apply to the use of link aggregation with VxRail:
The following VxRail network guidelines apply to enabling link aggregation:
Figure 49. Sample link aggregation options for VxRail networking
The following guidelines must be followed to enable link aggregation on a customer-managed virtual distributed switch:
The following guidelines apply to enabling link aggregation on a VxRail-managed virtual distributed switch:
Support for LACP, the selection of load balancing hashing algorithms, and the formation of link aggregation on the physical switches depend on the switch vendor and operating system. These features are usually branded by the vendor, with names such as “Ether-Channel,” “Ethernet trunk,” or “Multi-Link Trunking.” Consult your switch vendor to verify that the switch models planned for the VxRail cluster support these features.
If you plan to deploy a pair of switches to support the VxRail cluster and want to enable load balancing across both switches, the switches must allow a link aggregation group to logically span both switches. The switch operating system must support a multichassis link aggregation feature, such as Cisco’s Virtual Port Channel (vPC). See the guides provided by your switch vendor for the steps to complete this task.
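As an illustration, a minimal Cisco NX-OS-style vPC sketch follows; the domain ID, port-channel numbers, interface names, and peer-keepalive addresses are placeholders, and the equivalent configuration must be applied on both top-of-rack switches:

  feature lacp
  feature vpc

  ! vPC domain and keepalive link between the two top-of-rack switches
  vpc domain 1
    peer-keepalive destination 192.168.1.2 source 192.168.1.1

  ! Peer link carrying vPC control and failover traffic between the switches
  interface port-channel 1
    switchport mode trunk
    vpc peer-link

  ! Port channel facing a VxRail node; "vpc 10" pairs it with the matching
  ! port channel on the peer switch so the LAG spans both switches
  interface port-channel 10
    switchport mode trunk
    vpc 10

  ! Node-facing physical port joins the port channel in LACP active mode
  interface Ethernet1/1
    switchport mode trunk
    channel-group 10 mode active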
If you plan to deploy VxRail with a VxRail-managed virtual distributed switch, the switch must support the “lacp individual” or a compatible feature. By default, a switch port configured for link aggregation is set to an inactive state until it has exchanged LACP Protocol Data Units (PDUs) with a link aggregation group on an adjacent switch. Once these PDUs are exchanged, the two link aggregation groups can synchronize into an active partnership.
Figure 50. Enabling connectivity for VxRail initial implementation with LAG
An individual VxRail node connects to the adjacent top-of-rack switches with a standard virtual switch at power-on, and standard virtual switches do not support link aggregation. The standard virtual switch cannot exchange LACP PDUs with the adjacent top-of-rack switches and therefore cannot establish a peering relationship. The peering relationship does not occur until later in the initial implementation process, when the VxRail-managed virtual distributed switch is configured on the cluster and an LACP policy becomes active on that switch. At that point, LACP PDUs can be sent and received at both ends.
The “lacp individual” or compatible feature enables a switch port configured for link aggregation to remain in an open and active state, which does not disrupt VxRail network connectivity. Because the ports remain open, VxRail initial implementation can proceed, and the peering relationship can be established during that process.
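On Cisco NX-OS, for example, this behavior is controlled by the suspend-individual setting on the port channel. A minimal sketch, assuming the node-facing port channel from the earlier example:

  ! Hypothetical NX-OS-style example; feature names vary by vendor.
  ! Disabling suspend-individual keeps member ports up and forwarding as
  ! individual links until LACP PDUs are exchanged during VxRail initial
  ! implementation.
  interface port-channel 10
    no lacp suspend-individual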
Enabling load balancing for the nonmanagement VxRail networks requires peering the pair of ports on each VxRail node with a pair of ports on the top-of-rack switches.
Figure 51. Plugging into equivalent switch ports for link aggregation
If you are enabling link aggregation across a pair of switches and have matching open ports on both switches, the best practice is to plug the cables into equivalent port numbers on both switches. We recommend creating a table that maps each VxRail node port to a corresponding switch port, and then identifying which ports on each VxRail node will be enabled for link aggregation.
For example, suppose you want to deploy a VxRail cluster with four nodes, reserving two ports on the NDC/OCP and two ports on a PCIe adapter card on each node for the VxRail cluster, and using the first eight ports on a pair of top-of-rack switches to connect the cables. You could then use the resulting table to identify the switch ports to be configured for link aggregation.
Figure 52. Sample port-mapping table
Assuming that you are using the second port on the NDC/OCP and the second port on the PCIe adapter card for the nonmanagement VxRail networks, you can identify the switch port pairs to be configured for link aggregation, as shown in the columns shaded green.
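For illustration, a hypothetical mapping along the lines of Figure 52 might look like the following; the node names and port numbers are placeholders, and your cabling plan determines the actual values.

  Node     NDC/OCP port 1    NDC/OCP port 2    PCIe port 1       PCIe port 2
  node01   switch A, port 1  switch A, port 2  switch B, port 1  switch B, port 2
  node02   switch A, port 3  switch A, port 4  switch B, port 3  switch B, port 4
  node03   switch A, port 5  switch A, port 6  switch B, port 5  switch B, port 6
  node04   switch A, port 7  switch A, port 8  switch B, port 7  switch B, port 8

In this layout, the link aggregation pair for each node is NDC/OCP port 2 and PCIe port 2, which land on equivalent port numbers on the two switches (switch A port 2 and switch B port 2 for node01, and so on).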
Dell recommends creating a table mapping the VxRail ports to the switch ports as part of the planning and design phase.