Note: You can skip this section if you plan to enable Dell EMC SmartFabric Services and extend VxRail automation to the TOR switch layer.
For the VxRail initialization process to pass validation and build the cluster, you must configure the ports that VxRail will connect to on your switch before you plug in the VxRail nodes and power them on.
Follow these steps to set up your switch:
Note: This section provides guidance for preparing and setting up your switch for VxRail. Be sure to follow your vendor’s documentation for specific switch configuration activities and for best practices for performance and availability.
VxRail Appliances have no backplane, so communication between the nodes is facilitated through the network switch. This node-to-node communication uses VMware's Loudmouth auto-discovery capabilities, based on the RFC-recognized "Zero Network Configuration" protocol. New VxRail nodes advertise themselves on the network using the Loudmouth service, and VxRail Manager, which also runs the Loudmouth service, discovers them. Because the Loudmouth service depends on multicasting, multicast is required for the VxRail internal management network.
The network switch ports that connect to VxRail nodes must allow for pass-through of multicast traffic on the VxRail Internal Management VLAN. Multicast is not required on your entire network, just on the ports connected to VxRail nodes.
VxRail creates very little traffic through multicasting for auto-discovery and device management. Furthermore, the network traffic for the Internal Management network is restricted through a VLAN. You can choose to enable MLD Snooping and MLD Querier on the VLAN if supported on your switches.
If MLD Snooping is enabled, MLD Querier must be enabled. If MLD Snooping is disabled, MLD Querier must be disabled.
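As an illustrative sketch only, MLD Snooping and its querier might be enabled together on the Internal Management VLAN as shown below. The Cisco IOS-style syntax and the example VLAN ID 3939 are assumptions; exact commands vary by vendor and platform, so use your switch vendor's documented syntax.

```
! Enable MLD Snooping globally and on the VxRail Internal Management VLAN
! (VLAN 3939 is an example ID; Cisco IOS-style syntax shown)
ipv6 mld snooping
ipv6 mld snooping vlan 3939
! Because MLD Snooping is enabled on this VLAN, enable the querier as well
ipv6 mld snooping vlan 3939 querier
```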
Starting in VxRail v4.5.0, all vSAN traffic uses unicast instead of multicast. This change reduces network configuration complexity and simplifies switch configuration.
For VxRail versions earlier than v4.5.0, multicast is required on the vSAN VLAN. One or more network switches that connect to VxRail must allow for pass-through of multicast traffic on the vSAN VLAN. Multicast is not required on your entire network, just on the ports connected to VxRail.
VxRail multicast traffic for vSAN is limited to the broadcast domain of the vSAN VLAN. There is minimal impact on network overhead because management traffic is nominal. You can further limit multicast traffic by enabling IGMP Snooping and IGMP Querier. We recommend enabling both IGMP Snooping and IGMP Querier if your switches support them.
IGMP Snooping software examines IGMP protocol messages within a VLAN to discover which interfaces are connected to hosts or other devices that are interested in receiving this traffic. Using the interface information, IGMP Snooping can reduce bandwidth consumption in a multi-access LAN environment to avoid flooding an entire VLAN. IGMP Snooping tracks ports that are attached to multicast-capable routers to help manage IGMP membership report forwarding. It also responds to topology change notifications.
IGMP Querier sends out IGMP group membership queries on a timed interval, retrieves IGMP membership reports from active members, and allows updates to group membership tables. By default, most switches enable IGMP Snooping but disable IGMP Querier. You will need to change the settings if this is the case.
If IGMP Snooping is enabled, IGMP Querier must be enabled. If IGMP Snooping is disabled, IGMP Querier must be disabled.
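As a hedged example, the paired IGMP settings might look like the following on a Cisco IOS-style switch. The vSAN VLAN ID of 100 is an assumption for illustration; substitute the vSAN VLAN from your configuration table, and confirm the exact commands in your vendor's documentation.

```
! Enable IGMP Snooping globally and on the vSAN VLAN (VLAN 100 is an example)
ip igmp snooping
ip igmp snooping vlan 100
! IGMP Snooping is enabled on the VLAN, so the IGMP Querier must be enabled too
ip igmp snooping vlan 100 querier
```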
For questions about how your switch handles multicast traffic, contact your switch vendor.
The uplinks on the switches must be configured to allow passage of external network traffic to administrators and end users. This includes the VxRail external management network (or the combined VxRail management network for versions earlier than 4.7) and virtual machine network traffic. The VLANs representing these networks need to be passed upstream through the uplinks. For VxRail clusters running version 4.7 or later, the VxRail internal management network must be blocked from outbound upstream passage.
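A minimal uplink sketch, assuming Cisco IOS-style syntax and illustrative VLAN IDs (10 for external management, 20 and 30 for virtual machine networks, 3939 for internal management):

```
! Example uplink trunk; pass external management and VM VLANs upstream
interface TenGigabitEthernet1/0/49
 description Uplink to upstream network
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
! VLAN 3939 (VxRail internal management) is deliberately excluded from the
! allowed list, blocking it from outbound upstream passage
```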
If the VxRail vMotion network will be routable outside of the top-of-rack switches, include the VLAN for this network in the uplink configuration. This supports the use case where virtual machine mobility is desired beyond the VxRail cluster.
If you plan to expand the VxRail cluster beyond a single rack, configure the VxRail network VLANs for either stretched Layer 2 networks across racks, or pass upstream to terminate at Layer 3 routing services if new subnets will be assigned in expansion racks.
In a dual-switch environment, configure the ports that are used for inter-switch communication to allow passage for all the VxRail virtual networks.

Plan switch port configuration
Configure the port mode on your switch based on the plan for the VxRail logical networks, and whether VLANs will be used to segment VxRail network traffic. Ports on a switch operate in one of the following modes:
Do not enable link aggregation, including protocols such as LACP and EtherChannel, on any switch ports connected to VxRail node ports that support VxRail management network traffic. During the VxRail initial build process, either two or four ports are selected to support the VxRail management networks and any guest networks configured at that time. The initial build process configures a virtual distributed switch on the cluster, and then configures a portgroup on that virtual distributed switch for each VxRail management network.
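An illustrative node-facing port configuration, assuming Cisco IOS-style syntax and example VLAN IDs; note the absence of any channel-group or LACP statements:

```
! Example switch port connected to a VxRail node port
interface TenGigabitEthernet1/0/1
 description VxRail node 1, NDC port 1
 switchport mode trunk
 switchport trunk allowed vlan 10,100,200,3939
! No link aggregation (channel-group / LACP) is configured on ports that
! carry VxRail management network traffic
```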
Figure 27. Unused VxRail node ports configured for non-VxRail network traffic
If your requirements include using spare network ports on the VxRail nodes (ports that were not configured for VxRail network traffic) for other use cases, then link aggregation can be configured to support that traffic. These can include any unused ports on the network daughter card (NDC) or on the optional PCIe adapter cards. The virtual distributed switch deployed during the VxRail initial build can be updated to support the new networks, or a new virtual distributed switch can be configured. Because the initial virtual distributed switch is under the management and control of VxRail, the best practice is to configure a separate virtual distributed switch on the vCenter instance to support these networking use cases.
Network traffic must be allowed uninterrupted passage between the physical switch ports and the VxRail nodes. Certain Spanning Tree states can place restrictions on network traffic and can force the port into an unexpected timeout mode. These Spanning Tree conditions can disrupt normal VxRail operations and impact performance.
If Spanning Tree is enabled in your network, ensure that the physical switch ports that are connected to VxRail nodes are configured with a setting such as ‘Portfast’, or set as an edge port. These settings set the port to forwarding state, so no disruption occurs. Because vSphere virtual switches do not support STP, physical switch ports that are connected to an ESXi host must have a setting such as ‘Portfast’ configured if spanning tree is enabled to avoid loops within the physical switch network.
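On a Cisco IOS-style switch, for example, a node-facing trunk port might be set as an edge port as follows; other vendors use keywords such as 'edge-port'. This is a sketch under that assumption, not a definitive configuration:

```
! Place the VxRail node-facing trunk port directly into forwarding state
! so Spanning Tree convergence does not disrupt VxRail traffic
interface TenGigabitEthernet1/0/1
 spanning-tree portfast trunk
```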
Network instability or congestion contributes to low performance in VxRail and has a negative effect on vSAN datastore I/O operations. VxRail recommends enabling flow control on the switch to assure reliability on a congested network. Flow control is a switch feature that helps manage the rate of data transfer to avoid buffer overrun. During periods of high congestion and bandwidth consumption, the receiver injects pause frames to the sender for a period of time to slow transmission and avoid buffer overrun. The absence of flow control on a congested network can result in increased error rates and force network bandwidth to be consumed for error recovery. Flow control settings can be adjusted depending on network conditions, but VxRail recommends setting flow control to 'receive on' and 'transmit off'.
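For instance, on switches whose CLI exposes per-direction flow control (the exact keywords vary by vendor and platform, and some platforms support only the receive direction), the recommended setting might be expressed as:

```
! Example flow control setting on a VxRail node-facing port
interface TenGigabitEthernet1/0/1
 flowcontrol receive on
! On platforms that accept it, transmit can be explicitly disabled;
! on others, transmit flow control is off by default
 flowcontrol transmit off
```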
Now that you understand the switch requirements, it is time to configure your switches. The VxRail network can be configured with or without VLANs. For performance and scalability, we highly recommend configuring VxRail with VLANs. As listed in Appendix C: VxRail Setup Checklist, you will be configuring the following VLANs:
For VxRail clusters using version 4.7 or later:
For VxRail clusters using versions earlier than 4.7:
For VxRail clusters using version 4.5 or later:
For VxRail clusters using versions earlier than 4.5:
For all VxRail clusters:
Figure 28. VxRail Logical Networks: Version earlier than 4.7 (left) and 4.7 or later (right)
Figure 29. VxRail Logical Networks: 2-Node Cluster with Witness
Using Appendix A: VxRail Network Configuration Table, perform the following steps: