Follow the steps in this section for the configuration settings required for VxRail networking.
Note: You can skip this task if you do not plan to use the auto-discovery method because of multicast restrictions, and instead plan to use the manual method for selecting nodes for the cluster build operation.
VxRail clusters have no backplane, so communication between the nodes is facilitated through the network switch. This communication between the nodes for device discovery purposes uses the VMware Loudmouth capabilities, which are based on the RFC-recognized "Zero Network Configuration" (Zeroconf) protocol. New VxRail nodes advertise themselves on the network using the VMware Loudmouth service, and VxRail Manager discovers them through the same service.
The VMware Loudmouth service depends on multicasting, which is required for the VxRail internal management network. The network switch ports that connect to VxRail nodes must allow for pass-through of multicast traffic on the VxRail Internal Management VLAN. Multicast is not required on your entire network. It is only required on the ports that are connected to VxRail nodes.
VxRail generates only a small amount of multicast traffic for auto-discovery and device management. Furthermore, network traffic for the Internal Management network is restricted to its VLAN. If MLD Snooping and MLD Querier are supported on your switches, you can enable them on this VLAN.
If MLD Snooping is enabled, MLD Querier must be enabled. If MLD Snooping is disabled, MLD Querier must be disabled.
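As an illustrative sketch only, the MLD Snooping and Querier settings might be applied as follows on a Cisco IOS-style switch. The exact commands vary by switch vendor and operating system, and the VLAN ID shown is an assumed example for the VxRail Internal Management VLAN; substitute your own.

```
! Enable MLD Snooping globally (IPv6 multicast used by Loudmouth discovery).
ipv6 mld snooping
! Enable an MLD Querier on the Internal Management VLAN (3939 is an example ID).
vlan configuration 3939
 ipv6 mld snooping querier
```

Consult your switch documentation for the equivalent commands, and remember the pairing rule above: enable both MLD Snooping and MLD Querier, or neither.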
Note: You can skip this task if you do not plan to use vSAN as the primary storage resource on the VxRail cluster.
For early versions of VxRail, multicast was required on the vSAN VLAN: the network switches that connected to VxRail had to allow pass-through of multicast traffic on that VLAN. Starting with VxRail v4.5, all vSAN traffic uses unicast instead of multicast. This change reduces network configuration complexity and simplifies switch configuration. Unicast is a common protocol that is enabled by default on most enterprise Ethernet switches.
If you are required to configure multicast, VxRail multicast traffic for vSAN is limited to the broadcast domain of each vSAN VLAN. There is minimal impact on network overhead because management traffic is nominal. You can limit multicast traffic by enabling Internet Group Management Protocol (IGMP) Snooping and IGMP Querier. If your switch supports both IGMP Snooping and IGMP Querier, Dell Technologies recommends enabling them.
IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which interfaces are connected to hosts or other devices that are interested in receiving this traffic. Using the interface information, IGMP Snooping can reduce bandwidth consumption in a multi-access LAN environment to avoid flooding an entire VLAN. IGMP Snooping tracks ports that are attached to multicast-capable routers to help manage IGMP membership report forwarding. It also responds to topology change notifications.
IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP membership reports from active members, and allows updates to group membership tables. By default, most switches enable IGMP Snooping but disable IGMP Querier. If that is the case on your switch, change the settings.
If IGMP Snooping is enabled, IGMP Querier must be enabled. If IGMP Snooping is disabled, IGMP Querier must be disabled.
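As an illustrative sketch, on a Cisco IOS-style switch the IGMP settings for the vSAN VLAN might look like the following. The commands and the VLAN ID (100 is an assumed example) differ between switch vendors.

```
! Enable IGMP Snooping globally.
ip igmp snooping
! Enable an IGMP Querier on the vSAN VLAN (100 is an example ID).
vlan configuration 100
 ip igmp snooping querier
```

As with MLD, keep the two settings paired: both enabled or both disabled.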
Configure the VLANs on the switches depending on the VxRail version being deployed and the type of cluster being deployed. The VLANs are assigned to the switch ports in a later task.
For VxRail clusters using version 4.7 or later:
For VxRail clusters using versions earlier than 4.7:
For all VxRail clusters:
An additional VxRail Witness traffic separation VLAN to manage traffic between the VxRail cluster and the witness. This VLAN is only needed when deploying a VxRail stretched cluster or a 2-node cluster.
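A minimal sketch of the VLAN definitions on a Cisco IOS-style switch follows. All VLAN IDs and names here are assumptions for illustration; use the IDs from your own network plan, and omit the witness VLAN unless you are deploying a stretched cluster or 2-node cluster.

```
! Example VLAN definitions; all IDs and names are illustrative.
vlan 3939
 name VxRail-Internal-Mgmt
vlan 100
 name VxRail-vSAN
vlan 101
 name VxRail-vMotion
vlan 102
 name VxRail-Witness
```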
If more than one top-of-rack switch is being deployed to support the VxRail cluster, configure interswitch links between the switches. Configure the interswitch links to allow all VLANs to pass through.
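As a hedged example of such an interswitch link on a Cisco IOS-style switch, a trunk carrying all VLANs might be configured as follows; the interface and port-channel numbering are assumptions.

```
! Example interswitch link: a trunk that passes all VLANs.
interface port-channel 1
 switchport mode trunk
 switchport trunk allowed vlan all
```

On other switch operating systems the trunk and allowed-VLAN syntax differs; the intent is simply that every VxRail VLAN can traverse the link between the two top-of-rack switches.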
Perform the steps in this section to configure the switch ports.
Configure the port mode on your switch based on the plan for the VxRail logical networks, and whether VLANs will be used to segment VxRail network traffic. Ports on a switch operate in one of the following modes:
Link aggregation is supported for the VxRail initial implementation process only if:
If either of these conditions is not met, do not enable link aggregation on any switch ports that are connected to VxRail node ports before initial implementation. This limitation includes protocols such as Link Aggregation Control Protocol (LACP) and EtherChannel.
During the VxRail initial build process, either two or four ports are selected on each node. These ports support the VxRail management networks and any guest networks that are configured at that time. The VxRail initial build process configures a virtual-distributed switch in the cluster, and then configures a port group on that virtual-distributed switch for each VxRail management network.
When the initial implementation process completes, you can configure link aggregation on the operational VxRail cluster, as described in Configure link aggregation on VxRail networks. Your requirements may include using any spare network ports on the VxRail nodes that were not configured for VxRail network traffic for other use cases. You can configure link aggregation to support that network traffic. These ports can include any unused ports on the NDC-OCP or on the optional PCIe adapter cards.
To support the new networks, you can either configure updates on the virtual-distributed switch that is deployed during VxRail initial build, or configure a new virtual-distributed switch. Because the initial virtual-distributed switch is under the management and control of VxRail, the best practice is to configure a separate virtual-distributed switch in the vCenter instance to support these networking use cases.
Network traffic must be allowed uninterrupted passage between the physical switch ports and the VxRail nodes. Certain Spanning Tree states can place restrictions on network traffic and can force the port into an unexpected timeout mode. These Spanning Tree conditions can disrupt normal VxRail operations and impact performance.
If Spanning Tree is enabled in your network, ensure that the physical switch ports that are connected to VxRail nodes are configured with a setting such as “Portfast.” Or you can configure the port as an edge port. These settings set the port to forwarding state, so no disruption occurs. vSphere virtual switches do not support STP. You can enable Spanning Tree to avoid loops within the physical switch network. If you do, you must configure physical switch ports that are connected to an ESXi host with a setting such as “Portfast.”
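As an illustrative sketch on a Cisco IOS-style switch, a Portfast/edge setting on a trunk port connected to a VxRail node might look like the following; the interface numbering is an assumption, and other vendors use different edge-port commands.

```
! Example: treat the VxRail-facing trunk port as an edge port so it
! transitions directly to the forwarding state.
interface GigabitEthernet1/0/10
 spanning-tree portfast trunk
```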
Network instability or congestion contributes to low performance in VxRail and has a negative effect on vSAN I/O operations on the datastore. Dell Technologies recommends enabling flow control on the switch to assure reliability on a congested network. Flow control is a switch feature that helps manage the rate of data transfer to avoid buffer overrun. During periods of high congestion and bandwidth consumption, the receiving device temporarily sends pause frames to the sender to slow transmission and avoid buffer overrun.
The absence of flow control on a congested network can result in increased error rates and force network bandwidth to be consumed for error recovery. The flow control settings can be adjusted depending on network conditions, but Dell Technologies recommends setting flow control to "receive on" and "transmit off."
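A hedged sketch of that recommended setting on a Cisco IOS-style switch follows; the interface numbering is an assumption, and some platforms expose only the receive direction or use different keywords (for example, `send` versus `transmit`).

```
! Example: honor pause frames from the VxRail node (receive on)
! but do not generate them (send off).
interface GigabitEthernet1/0/10
 flowcontrol receive on
 flowcontrol send off
```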
Now that the switch base settings are complete, the next step is to configure the switch ports. Perform the following steps for each switch port that is connected to a VxRail node: