While the networking requirements for VxRail and Cloud Foundation differ, they overlap in that Cloud Foundation domains depend on the networking resources that VxRail enables for connectivity. The supporting physical network must therefore be properly designed and configured to support both VxRail cluster network traffic and the additional Cloud Foundation requirements.
Figure 51. VxRail and NSX Overlay Networks
A leaf switch is at the lowest tier in a multi-tier architecture and is often referred to as a 'top-of-rack' switch. The VxRail nodes connect only to the leaf switches in a single rack; the upper-tier switches, known as spine switches, enable multi-rack interconnectivity.
The number of Ethernet ports on each VxRail node that you reserve for Cloud Foundation on VxRail networking drives the configuration of each switch port connected to a VxRail node port. If the VxRail network traffic and Cloud Foundation network traffic are physically separated between the nodes and the leaf switches, the VxRail and Cloud Foundation VLANs need to be assigned only to the required switch ports.
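As an illustration of traffic separation, the trunk allowed-VLAN list can differ per port group. The following is a hypothetical sketch using Cisco NX-OS-style syntax; all interface names and VLAN IDs are placeholder values, and the exact commands vary by switch operating system:

```
! Ports connected to VxRail node NICs reserved for VxRail system traffic
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 3939,101,102,103

! Ports connected to VxRail node NICs reserved for Cloud Foundation (NSX) traffic
interface Ethernet1/2
  switchport mode trunk
  switchport trunk allowed vlan 1644,2711,2712
```

With this separation, the NSX host overlay and edge VLANs never need to be trunked to the ports carrying VxRail system traffic, and vice versa.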
The following tasks must be performed on the top-of-rack switches to prepare for a VxRail cluster deployment and to support NSX:
- Select switches with sufficient open port capacity to connect all the VxRail nodes, the inter-switch links between the leaf switches, and the uplinks to the adjacent network layer.
- Configure a minimum MTU of 1600 (9000 preferred) on the leaf switches to support host overlay network traffic.
- Ensure that the port types on the switches (RJ45, SFP+) match the port types on the VxRail nodes.
- Configure each of the VLANs required for the VxRail clusters on the switches.
- Configure VLAN for the NSX host overlay network on each switch. If you plan to use DHCP to supply IP addresses for the host overlay network, configure this network so that it can reach the DHCP server.
- Configure two VLANs for the NSX edge uplinks. These networks will enable BGP peering between the NSX edge devices and the upstream physical network.
- Configure the VLAN for the NSX edge overlay network.
- Do not configure any link aggregation constructs such as port channels, LACP, or vPC on the switch ports connecting to VxRail nodes. vSphere manages the teaming and failover policies.
- For each switch port to be directly connected to the VxRail nodes:
- Configure each port as a Layer 2 trunk port.
- Configure the VLANs required for VxRail networking on each port supporting VxRail networking.
- Configure the VLAN for the NSX host overlay network on each port supporting NSX network traffic.
- Configure Spanning Tree on the switch ports to be directly connected to the VxRail nodes as edge ports, or in ‘portfast’ mode.
- Configure unicast on the VLAN representing the vSAN network.
- If opting for VxRail automatic device discovery, configure IPv6 multicast on the VLAN representing the VxRail Internal Management network.
- Configure MLD snooping and MLD querier on the VLAN representing the VxRail Internal Management Network (recommended).
- Configure the inter-switch links to allow network traffic to pass between the two switches.
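The switch-port tasks above can be sketched in one example configuration. This is an illustrative fragment in Cisco NX-OS-style syntax, not a prescribed configuration: the VLAN IDs, MTU value, interface name, and querier address are placeholder assumptions, and the commands for trunking, edge ports, and MLD snooping differ between switch operating systems:

```
! Port connected directly to a VxRail node: trunk mode, required VLANs,
! jumbo MTU, edge (portfast) spanning-tree behavior, no link aggregation
interface Ethernet1/1
  description VxRail-node-01
  switchport mode trunk
  switchport trunk allowed vlan 3939,101,102,103,1644
  mtu 9216
  spanning-tree port type edge trunk

! MLD snooping and querier on the VxRail Internal Management VLAN,
! supporting IPv6 multicast for VxRail automatic device discovery
ipv6 mld snooping
vlan configuration 3939
  ipv6 mld snooping querier fe80::1
```

Note that no channel-group or LACP configuration is applied to the node-facing port, since vSphere handles NIC teaming and failover on the host side.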
Each VxRail node has a separate Ethernet port for out-of-band server management through the Integrated Dell Remote Access Controller (iDRAC). A separate Ethernet switch is recommended to provide connectivity for server maintenance, although this traffic can also be carried on the existing network infrastructure. For complete details about VxRail cluster network requirements, see the Dell VxRail Network Planning Guide.
The table in Appendix C: Cloud Foundation on VxRail Networks lists the individual VLANs that must be configured on the top-of-rack switches. The example switch configuration syntax displayed in Appendix I: Sample switch configuration settings offers guidance on how to configure an Ethernet switch with sample VLANs and a sample switch port configuration.