The following sections detail the configuration settings that are required for VxRail networking.
Note: If you do not plan to use the auto-discovery method because of multicast restrictions, and will instead use the manual method to select nodes for the cluster build operation, you can skip this task.
VxRail clusters have no backplane, so communication between the nodes is carried over the network switch. For device discovery, the nodes use VMware’s Loudmouth service, which is based on the RFC-recognized Zero Configuration Networking (Zeroconf) protocol. New VxRail nodes advertise themselves on the network through the Loudmouth service, and VxRail Manager, which also runs the Loudmouth service, discovers them.
VMware’s Loudmouth service depends on IPv6 multicast, which is why multicast is required on the VxRail internal management network. The network switch ports that connect to VxRail nodes must allow pass-through of multicast traffic on the VxRail Internal Management VLAN. Multicast is not required across your entire network, only on the ports connected to VxRail nodes.
VxRail generates very little multicast traffic for auto-discovery and device management, and the Internal Management traffic is confined to its VLAN. To limit this multicast traffic further, you can enable MLD Snooping and MLD Querier on the VLAN if your switches support them.
If MLD Snooping is enabled, MLD Querier must be enabled. If MLD Snooping is disabled, MLD Querier must be disabled.
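As a hypothetical illustration, the following Cisco IOS-style sketch enables both features together; command names and availability vary by switch vendor and platform, and VLAN ID 3939 is only a placeholder for your VxRail Internal Management VLAN.
    ! Illustrative only -- verify the syntax against your switch documentation.
    configure terminal
     vlan 3939
      name VxRail-Internal-Mgmt
     exit
     ipv6 mld snooping              ! enable MLD Snooping globally
     ipv6 mld snooping querier      ! enable MLD Querier to pair with Snooping
    end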
Note: If you do not plan to use vSAN as the primary storage resource on the VxRail cluster, you can skip this task.
For early versions of VxRail, multicast was required on the vSAN VLAN: the network switches that connected to VxRail had to allow pass-through of multicast traffic on that VLAN. Starting with VxRail v4.5, all vSAN traffic uses unicast instead of multicast, which reduces network configuration complexity and simplifies switch configuration. Unicast is standard forwarding behavior that is enabled by default on enterprise Ethernet switches.
If you must configure multicast (for example, for a VxRail version earlier than v4.5), note that VxRail multicast traffic for vSAN is limited to the broadcast domain of each vSAN VLAN. The impact on network overhead is minimal because the management traffic is nominal. You can limit multicast traffic further by enabling IGMP Snooping and IGMP Querier; we recommend enabling both if your switch supports them.
IGMP Snooping examines IGMP protocol messages within a VLAN to discover which interfaces are connected to hosts or other devices that want to receive multicast traffic. Using this interface information, IGMP Snooping reduces bandwidth consumption in a multi-access LAN environment by avoiding flooding an entire VLAN. It also tracks ports that are attached to multicast-capable routers to help manage the forwarding of IGMP membership reports, and it responds to topology change notifications.
IGMP Querier sends out IGMP group membership queries at a timed interval, retrieves IGMP membership reports from active members, and allows updates to the group membership tables. By default, most switches enable IGMP Snooping but disable IGMP Querier; if that is the case for your switches, change the settings.
If IGMP Snooping is enabled, IGMP Querier must be enabled. If IGMP Snooping is disabled, IGMP Querier must be disabled.
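As a sketch, assuming a Cisco IOS-style switch and a placeholder vSAN VLAN ID of 100, the two features can be enabled together as follows; syntax varies by vendor.
    ! Illustrative only -- verify the syntax against your switch documentation.
    configure terminal
     ip igmp snooping               ! IGMP Snooping is enabled by default on many switches
     ip igmp snooping querier       ! IGMP Querier is often disabled by default
     ip igmp snooping vlan 100      ! confirm snooping is active on the vSAN VLAN
    end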
Configure the VLANs on the switches depending on the VxRail version being deployed. The VLANs are assigned to the switch ports in a later task.
Configure the following VLANs:
Figure 69. VxRail logical networks: VxRail cluster with vSAN
Figure 70. VxRail logical networks: 2-node cluster with Witness
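As an illustration only, a Cisco IOS-style sketch for defining the VLANs might look like the following. Every VLAN ID and name here is a placeholder assumption; substitute the values recorded in your VxRail Network Configuration Table.
    ! Placeholder IDs and names -- use your VxRail Network Configuration Table values.
    configure terminal
     vlan 3939
      name VxRail-Internal-Mgmt
     vlan 110
      name VxRail-External-Mgmt
     vlan 120
      name vSAN
     vlan 130
      name vMotion
    end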
Using the VxRail Network Configuration Table, perform the following steps:
If more than one top-of-rack switch is being deployed to support the VxRail cluster, configure interswitch links between the switches and allow all VLANs to pass through them.
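For example, a hypothetical Cisco IOS-style configuration of an interswitch link on assumed ports Te1/0/47-48 could be:
    ! Illustrative only -- port names are assumptions.
    configure terminal
     interface range TenGigabitEthernet1/0/47 - 48
      description Interswitch-link-to-TOR-2
      switchport mode trunk
      switchport trunk allowed vlan all    ! pass all VLANs between the switches
    end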
Configure the port mode on your switch based on the plan for the VxRail logical networks, and whether VLANs will be used to segment VxRail network traffic. Ports on a switch operate in one of the following modes:
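In access mode, a port carries untagged traffic for a single VLAN; in trunk mode, it carries tagged traffic for multiple VLANs. A hypothetical Cisco IOS-style sketch of both follows; the port names are assumptions, and the VLAN IDs reuse the placeholders from the earlier example.
    ! Illustrative only -- ports and VLAN IDs are assumptions.
    configure terminal
     interface TenGigabitEthernet1/0/1
      switchport mode trunk                           ! tagged, multiple VLANs
      switchport trunk allowed vlan 110,120,130,3939
     interface TenGigabitEthernet1/0/2
      switchport mode access                          ! untagged, single VLAN
      switchport access vlan 110
    end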
Link aggregation is supported for VxRail initial implementation on a customer-managed virtual distributed switch only under the following conditions:
Link aggregation is supported for VxRail initial implementation on a VxRail-managed virtual distributed switch under the following conditions:
If these conditions do not apply to your plans, do not enable link aggregation, including protocols such as LACP and EtherChannel, on any switch ports that are connected to VxRail node ports before initial implementation. Doing so causes VxRail initial implementation to fail. When the initial implementation process is complete, you can configure link aggregation on the operational VxRail cluster, as described in Complete link aggregation after implementation.
This section describes the following additional tasks to be undertaken if the VxRail cluster was implemented against switches that were preconfigured for link aggregation:
During the VxRail cluster initial implementation process, a link aggregation group with default settings, including a default load-balancing mode, was configured on the virtual distributed switch.
Once link aggregation is active on the Ethernet switches, a best practice is to disable the feature that kept the switch ports in a port channel or EtherChannel open while they waited for synchronization with their partner ports. The open switch ports otherwise continue to consume switch resources that are no longer needed once the link aggregation pairing is complete.
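As one example of how this looks, Cisco NX-OS exposes this behavior through the LACP suspend-individual setting; the sketch below is illustrative, the port-channel number is an assumption, and other vendors name the feature differently (for example, LACP fallback).
    ! Illustrative only -- the feature name and syntax vary by vendor.
    configure terminal
     interface port-channel 10
      lacp suspend-individual    ! suspend, rather than keep open, member ports that fail LACP negotiation
    end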
If your plans meet the conditions for supporting link aggregation during the VxRail initial implementation process, perform these actions before starting:
Network traffic must be allowed uninterrupted passage between the physical switch ports and the VxRail nodes. Certain Spanning Tree states can restrict network traffic and can force a port into an unexpected timeout mode. These Spanning Tree conditions can disrupt normal VxRail operations and impact performance.
If Spanning Tree is enabled in your network, ensure that the physical switch ports that are connected to VxRail nodes are configured with a setting such as Portfast, or are set as edge ports. These settings place the port immediately into the forwarding state, so no disruption occurs. Because vSphere virtual switches do not support STP (and do not forward traffic between their uplinks, so they cannot create loops in the physical switch network), physical switch ports that are connected to an ESXi host should have a setting such as Portfast configured whenever Spanning Tree is enabled.
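For instance, on a Cisco IOS-style switch the setting for a trunked VxRail node port might be the following; the port name is an assumption, and other vendors use an equivalent edge-port setting.
    ! Illustrative only -- verify the edge-port syntax for your platform.
    configure terminal
     interface TenGigabitEthernet1/0/1
      spanning-tree portfast trunk    ! transition straight to the forwarding state
    end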
Network instability or congestion degrades VxRail performance and has a negative effect on vSAN datastore I/O operations. VxRail recommends enabling flow control on the switch to assure reliability on a congested network. Flow control is a switch feature that helps manage the rate of data transfer to avoid buffer overruns. During periods of high congestion and bandwidth consumption, the receiving device sends pause frames back to the sender for a period of time, slowing transmission to avoid buffer overruns. The absence of flow control on a congested network can result in increased error rates and force network bandwidth to be consumed for error recovery. The flow control settings can be adjusted depending on network conditions, but VxRail recommends setting flow control to receive on and transmit off.
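As an illustrative Cisco IOS-style sketch on an assumed VxRail node port (transmit flow control is off by default on many platforms and may not be configurable):
    ! Illustrative only -- flow control options vary by platform.
    configure terminal
     interface TenGigabitEthernet1/0/1
      flowcontrol receive on    ! honor pause frames from the connected node
    end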