Follow the steps in this section for the configuration settings required for VxRail networking.
Note: If you do not plan to use the auto-discovery method because of multicast restrictions, and will instead select nodes manually for the cluster build operation, you can skip this task.
VxRail clusters have no backplane, so communication between the nodes is carried over the network switch. Device discovery between the nodes uses VMware's Loudmouth capabilities, which are based on the RFC-recognized "Zero Configuration Networking" protocol. New VxRail nodes advertise themselves on the network using the VMware Loudmouth service, and VxRail Manager uses the same service to discover them.
VMware’s Loudmouth service depends on multicasting, which is required for the VxRail internal management network. The network switch ports that connect to VxRail nodes must allow for pass-through of multicast traffic on the VxRail Internal Management VLAN. Multicast is not required on your entire network, just on the ports connected to VxRail nodes.
VxRail generates very little multicast traffic for auto-discovery and device management, and the network traffic for the Internal Management network is confined to its VLAN. If your switches support them, you can enable MLD Snooping and MLD Querier on that VLAN.
If MLD Snooping is enabled, MLD Querier must be enabled. If MLD Snooping is disabled, MLD Querier must be disabled.
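As an illustration only, on a Cisco IOS-style switch the MLD settings might be applied as follows. The VLAN ID 3939 is a placeholder for your Internal Management VLAN, and MLD command syntax varies considerably by vendor and platform, so consult your switch documentation before applying anything.

```
! Hypothetical Cisco IOS-style example; syntax varies by platform.
! VLAN 3939 stands in for your VxRail Internal Management VLAN.
ipv6 mld snooping             ! enable MLD Snooping globally
ipv6 mld snooping vlan 3939   ! enable MLD Snooping on the management VLAN
ipv6 mld snooping querier     ! enable the MLD Querier function to pair with snooping
```

Remember the pairing rule above: enable both MLD Snooping and MLD Querier, or neither.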
For early versions of VxRail, multicast was also required on the vSAN VLAN: any network switch that connected to VxRail had to pass multicast traffic on that VLAN. Starting with VxRail v4.5, all vSAN traffic uses unicast instead of multicast. This change reduces network configuration complexity and simplifies switch configuration. Unicast is a common protocol enabled by default on most enterprise Ethernet switches.
If you are required to configure multicast, note that VxRail multicast traffic for vSAN is limited to the broadcast domain of each vSAN VLAN. The impact on network overhead is minimal because management traffic is nominal. You can limit multicast flooding by enabling IGMP Snooping and IGMP Querier; if your switches support both, we recommend enabling both.
IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which interfaces are connected to hosts or other devices that are interested in receiving this traffic. Using the interface information, IGMP Snooping can reduce bandwidth consumption in a multi-access LAN environment to avoid flooding an entire VLAN. IGMP Snooping tracks ports that are attached to multicast-capable routers to help manage IGMP membership report forwarding. It also responds to topology change notifications.
IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP membership reports from active members, and allows group membership tables to be updated. By default, most switches enable IGMP Snooping but disable IGMP Querier; if that is the case on your switches, change the settings.
If IGMP Snooping is enabled, IGMP Querier must be enabled. If IGMP Snooping is disabled, IGMP Querier must be disabled.
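As a sketch, on a Cisco IOS-style switch the IGMP settings might look like the following. VLAN 3939 is a placeholder for your vSAN VLAN, and exact commands vary by vendor and platform.

```
! Hypothetical Cisco IOS-style example; adjust for your platform.
ip igmp snooping             ! IGMP Snooping (enabled by default on most switches)
ip igmp snooping vlan 3939   ! confirm snooping on the vSAN VLAN
ip igmp snooping querier     ! enable IGMP Querier (often disabled by default)
```

As with MLD, enable IGMP Snooping and IGMP Querier together, or disable them together.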
Configure the VLANs on the switches depending on the VxRail version being deployed and the type of cluster being deployed. The VLANs are assigned to the switch ports as a later task.
For VxRail clusters using version 4.7 or later:
For VxRail clusters using versions earlier than 4.7:
For all VxRail clusters:
An additional VxRail Witness traffic separation VLAN to manage traffic between the VxRail cluster and the witness. This VLAN is needed only if you deploy a VxRail stretched cluster or a 2-node cluster.
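For illustration, creating the VLANs on a Cisco IOS-style switch might look like the following. The VLAN IDs shown are placeholders; use the values recorded in your VxRail Network Configuration Table, and create only the VLANs that apply to your VxRail version and cluster type.

```
! Hypothetical VLAN IDs for illustration only; take actual IDs from the
! VxRail Network Configuration Table.
vlan 101
 name VxRail-Management
vlan 102
 name VxRail-vSAN
vlan 103
 name VxRail-vMotion
vlan 104
 name VxRail-Witness   ! only for stretched cluster or 2-node cluster deployments
```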
Using the VxRail Network Configuration Table, perform the following steps:
If more than one top-of-rack switch is being deployed to support the VxRail cluster, configure inter-switch links between the switches. Configure the inter-switch links to allow all VxRail VLANs to pass through.
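A minimal sketch of an inter-switch link on a Cisco IOS-style switch, assuming a two-port port channel and example VLAN IDs; your interface names, channel number, and VLAN list will differ.

```
! Hypothetical inter-switch link carrying all VxRail VLANs between two
! top-of-rack switches. Interface names and VLAN IDs are placeholders.
interface range TenGigabitEthernet1/0/47 - 48
 channel-group 10 mode active            ! LACP toward the peer switch
interface Port-channel10
 switchport mode trunk
 switchport trunk allowed vlan 101-104   ! pass all VxRail VLANs between switches
```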
Configure the port mode on your switch based on the plan for the VxRail logical networks, and whether VLANs will be used to segment VxRail network traffic. Ports on a switch operate in one of the following modes:
Access mode: The port accepts only untagged packets and distributes them on the single VLAN assigned to the port.
Trunk mode: The port accepts tagged packets and passes each packet to the VLAN specified in its tag. To pass untagged packets on a trunk port, assign a native VLAN to that port.
Link aggregation is supported during the VxRail initial implementation process only if the VxRail version on the nodes is 7.0.130, and you correctly follow the guidance to deploy the virtual distributed switches on your external vCenter with the proper link aggregation settings. If either of these conditions is not met, do not enable link aggregation, including protocols such as LACP and EtherChannel, on any switch ports connected to VxRail node ports before initial implementation.
During the VxRail initial build process, either two or four ports will be selected on each node to support the VxRail management networks and any guest networks configured at that time. The VxRail initial build process configures a virtual distributed switch on the cluster, and then configures a portgroup on that virtual distributed switch for each VxRail management network.
When the initial implementation process completes, you can configure link aggregation on the operational VxRail cluster, as described in Configure link aggregation on VxRail networks. If your requirements include using spare network ports on the VxRail nodes that were not configured for VxRail network traffic, link aggregation can be configured to support that traffic. These ports can include any unused ports on the NDC or on the optional PCIe adapter cards. You can update the virtual distributed switch that was deployed during the VxRail initial build to support the new networks, or configure a new virtual distributed switch. Because the initial virtual distributed switch is under VxRail management and control, the best practice is to configure a separate virtual distributed switch on the vCenter instance to support these networking use cases.
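Once the cluster is operational and the matching link aggregation settings exist on the virtual distributed switch, the switch side of a link aggregation group for spare node ports might be sketched as follows on a Cisco IOS-style switch. The interface names, channel number, and VLAN ID are assumptions for illustration.

```
! Hypothetical example: LACP group for spare VxRail node ports used by
! other workloads. Configure this only AFTER initial implementation completes
! and the matching LAG exists on the virtual distributed switch.
interface range TenGigabitEthernet1/0/13 - 14
 channel-group 20 mode active            ! LACP, to match the vDS LAG settings
interface Port-channel20
 switchport mode trunk
 switchport trunk allowed vlan 200       ! guest network VLAN (placeholder)
```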
Network traffic must be allowed uninterrupted passage between the physical switch ports and the VxRail nodes. Certain Spanning Tree states can restrict network traffic and force a port into an unexpected timeout mode. These Spanning Tree conditions can disrupt normal VxRail operations and impact performance.
If Spanning Tree is enabled in your network, ensure that the physical switch ports that are connected to VxRail nodes are configured with a setting such as 'Portfast', or set as an edge port. These settings place the port in a forwarding state immediately, so no disruption occurs. Because vSphere virtual switches do not support STP, physical switch ports connected to an ESXi host must have a setting such as 'Portfast' configured when Spanning Tree is enabled, to avoid loops within the physical switch network.
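On a Cisco IOS-style switch, the edge-port setting for a VxRail-facing port might be applied as in the following sketch; the interface name is a placeholder, and other vendors typically expose the equivalent as an edge-port setting.

```
! Hypothetical VxRail node-facing port; Spanning Tree stays enabled on the switch,
! but the port moves to the forwarding state immediately.
interface TenGigabitEthernet1/0/1
 spanning-tree portfast trunk   ! edge-port behavior on a trunk port
```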
Network instability or congestion contributes to low performance in VxRail and has a negative effect on vSAN datastore I/O operations. VxRail recommends enabling flow control on the switch to help ensure reliability on a congested network. Flow control is a switch feature that helps manage the rate of data transfer to avoid buffer overrun. During periods of high congestion and bandwidth consumption, the receiving device sends pause frames to the sending device for a period of time to slow transmission and avoid buffer overrun. The absence of flow control on a congested network can result in increased error rates and force network bandwidth to be consumed for error recovery. The flow control settings can be adjusted depending on network conditions, but VxRail recommends that flow control be set to 'receive on' and 'transmit off'.
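As an example, the recommended flow control settings might be applied as follows on a Cisco IOS-style switch port. The interface name is a placeholder, and many platforms only allow configuring the receive direction (with transmit off by default), so check your switch documentation.

```
! Hypothetical VxRail node-facing port with the recommended flow control settings.
interface TenGigabitEthernet1/0/1
 flowcontrol receive on   ! honor pause frames sent by the node
 flowcontrol send off     ! do not send pause frames (the default on many platforms)
```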
Now that the switch base settings are complete, the next step is to configure the switch ports. Perform the following steps for each switch port that will be connected to a VxRail node:
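Pulling the per-port settings together, a single VxRail node-facing port on a Cisco IOS-style switch might be configured as in the following sketch. The interface name and VLAN IDs are placeholders, and the choice of trunk versus access mode depends on your port-mode plan from the earlier step.

```
! Hypothetical configuration for one VxRail node-facing port (trunk mode shown).
! Interface name and VLAN IDs are placeholders; use your Network Configuration Table.
interface TenGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 101-104   ! the VxRail VLANs for this deployment
 spanning-tree portfast trunk            ! immediate forwarding state (edge port)
 flowcontrol receive on                  ! recommended flow control setting
 no shutdown
```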