There are options and design decisions to consider when integrating the VxRail cluster physical and virtual networks with your data center networks. Decisions made regarding VxRail networking cannot be easily modified after the cluster is deployed and supporting Cloud Foundation, so they should be finalized before the actual VxRail cluster deployment.
Each VxRail node has an integrated network daughter card (NDC). The NDC can provide either 2 x 10 GbE, 2 x 25 GbE, or 4 x 10 GbE ports. You can choose to support your Cloud Foundation on VxRail workload using only the NDC ports. If NIC-level redundancy is a business requirement, optional PCIe network cards can be installed in each VxRail node. The PCIe adapter cards support both 10 GbE and 25 GbE expansion ports.
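Before committing to a port design, it can help to confirm the physical port inventory that each node actually presents to vSphere. The following is a minimal sketch, assuming the pyVmomi SDK and access to the vCenter Server instance that manages the nodes; the vCenter hostname and credentials are placeholders. It lists each host's physical NICs and link speeds so the count of 10 GbE and 25 GbE ports (NDC plus any PCIe) per node can be verified.

```python
# Minimal sketch, assuming the pyVmomi SDK and access to the vCenter Server
# that manages the VxRail nodes; the hostname and credentials below are
# placeholders. It prints each host's physical NICs and link speeds so the
# available 10 GbE / 25 GbE port count per node (NDC plus any PCIe) can be
# confirmed before the port design is finalized.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab use only
si = SmartConnect(host="vcenter.example.local",  # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for pnic in host.config.network.pnic:
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else None
            state = "link down" if speed is None else f"{speed} Mb/s"
            print(f"{host.name}: {pnic.device} ({state})")
finally:
    Disconnect(si)
```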
VxRail supports both predefined and custom network profiles when deploying the cluster to support Cloud Foundation VI workload domains. The best practice is to select the network profile that matches the number of NDC-based and PCIe-based ports selected per node for Cloud Foundation on VxRail networking. This ensures that VxRail and CloudBuilder configure the supporting virtual networks according to the guidance designed into these network profiles.
If VCF on VxRail networking will be configured on two node ports, a decision must be made whether to add a PCIe expansion card to each VxRail node to eliminate the NDC as a single point of failure. If only the NDC is present, the first two ports on each VxRail node are reserved to support VCF on VxRail networking using a predefined network profile. If both NDC and PCIe ports are present, a custom network profile can be configured, with the option to reserve one port on the NDC and one port on the PCIe adapter card for VCF on VxRail networking.
In both two-port configurations, the NSX and VxRail networks share the bandwidth capacity of the two Ethernet ports.
Figure 19. Two ports reserved for VCF on VxRail networking
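The two-port decision described above can be summarized in a short illustrative sketch. The function name and the NDC/PCIe groupings below are assumptions made for this example only; the actual port reservation is performed by VxRail and CloudBuilder during initial deployment, not by user code.

```python
# Hypothetical helper illustrating the two-port decision described above.
# The function name and the NDC/PCIe groupings are assumptions made for this
# example only; the actual port reservation is performed by VxRail and
# CloudBuilder during initial deployment, not by user code.
from typing import List, Tuple

def pick_two_ports(ndc_ports: List[str], pcie_ports: List[str]) -> Tuple[str, str]:
    """Return the two vmnics to reserve for VCF on VxRail networking."""
    if pcie_ports:
        # NDC + PCIe present: a custom profile with one port from each
        # device removes the NDC as a single point of failure.
        return ndc_ports[0], pcie_ports[0]
    # NDC only: the predefined profile reserves the first two NDC ports.
    return ndc_ports[0], ndc_ports[1]

# Example: a node with a 2 x 25 GbE NDC and a 2 x 25 GbE PCIe adapter
print(pick_two_ports(["vmnic0", "vmnic1"], ["vmnic2", "vmnic3"]))
# -> ('vmnic0', 'vmnic2')
```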
If VCF on VxRail networking will be configured with four ports, either all four ports on the NDC can be used (provided the NDC is based on 10 GbE connectivity), or the workload can be spread across ports on the NDC and PCIe adapter cards. The latter option is preferred because it provides resiliency across the node devices and across the pair of switches.
The option to use only NDC ports uses a predefined network profile, with automatic assignment of the VMnics to the uplinks. If deploying the VxRail cluster using two NDC-based ports and two PCIe-based ports, the best practice is to use a profile that maps the uplinks from the virtual distributed switch to the VMnics as shown in the following graphic. Then, to enable network resiliency, plug the NDC-based ports into separate switches, and plug the PCIe-based ports into separate switches. This profile permits the NSX and VxRail networks to optimize the sharing of bandwidth capacity across the four Ethernet ports, and best supports the default teaming and failover policies configured for each Cloud Foundation on VxRail network.
Figure 20. Four ports reserved for VCF on VxRail networking
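The cabling guidance above (NDC-based ports on separate switches, PCIe-based ports on separate switches) can be verified against the CDP or LLDP hints that ESXi collects on each physical NIC. The following is a minimal sketch, assuming pyVmomi and placeholder vCenter credentials; the candidate vmnic names are placeholders for this example.

```python
# Minimal sketch, assuming pyVmomi and placeholder vCenter credentials. It
# queries the CDP hints that ESXi collects on each physical NIC and prints
# the upstream switch identity, so you can confirm that the NDC-based and
# PCIe-based ports chosen for VCF land on separate top-of-rack switches.
# The candidate vmnic names are placeholders for this example.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

CANDIDATE_VMNICS = {"vmnic0", "vmnic1", "vmnic2", "vmnic3"}

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        present = [p.device for p in host.config.network.pnic
                   if p.device in CANDIDATE_VMNICS]
        hints = host.configManager.networkSystem.QueryNetworkHint(device=present)
        for hint in hints:
            cdp = hint.connectedSwitchPort
            switch = cdp.devId if cdp else "unknown (no CDP data)"
            print(f"{host.name}: {hint.device} -> {switch}")
finally:
    Disconnect(si)
```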
Cloud Foundation on VxRail supports the physical segmentation of VxRail and Cloud Foundation network traffic onto dedicated network ports, and onto separate, dedicated virtual distributed switches.
Note: Support for more than a single virtual distributed switch requires VMware Cloud Foundation 4.0.1 or later.
If two node ports are selected for Cloud Foundation on VxRail networking, all the VxRail network traffic and Cloud Foundation/NSX traffic is consolidated onto those two ports. A second virtual distributed switch is not supported for the two-port connectivity option, so all VxRail and Cloud Foundation/NSX traffic flows through a single virtual distributed switch.
Figure 21. Connectivity options for VxRail nodes with two 10GbE NDC ports or two 25GbE NDC ports
With the four-port option using only NDC ports, the vMotion and vSAN network traffic supporting VxRail is placed on the second port of the network daughter card, and the Cloud Foundation/NSX traffic is assigned to the last two ports. With this network profile, a second virtual distributed switch can be deployed to isolate the VxRail network traffic on the first virtual distributed switch and the Cloud Foundation/NSX traffic on the second virtual distributed switch.
Figure 22. Connectivity options for VxRail nodes with four 10GbE NDC ports
For the four-port option using both NDC-based and PCIe-based ports on each VxRail node, the uplinks can be assigned to a single virtual distributed switch to support all Cloud Foundation on VxRail network traffic, or a second virtual distributed switch can be deployed to segment the VxRail network traffic from the Cloud Foundation/NSX network traffic.
Figure 23. Connectivity options for VxRail nodes with two 25GbE NDC ports and two 25GbE PCIe ports
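After deployment, the single-VDS or dual-VDS layout can be confirmed by listing which vmnics back each virtual distributed switch on every host. The following is a minimal sketch, again assuming pyVmomi and placeholder vCenter credentials.

```python
# Minimal sketch, assuming pyVmomi and placeholder vCenter credentials. It
# lists which vmnics back each virtual distributed switch on every host,
# which makes it easy to verify whether the deployed layout uses a single
# VDS or segments VxRail and Cloud Foundation/NSX traffic onto two.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for proxy in host.config.network.proxySwitch:
            pnic_specs = proxy.spec.backing.pnicSpec or []
            vmnics = ", ".join(spec.pnicDevice for spec in pnic_specs)
            print(f"{host.name}: {proxy.dvsName} -> {vmnics or 'no uplinks'}")
finally:
    Disconnect(si)
```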
For planned workloads that have very high bandwidth requirements, up to eight Ethernet ports can be used across the NDC and PCIe cards. The VxRail network traffic is spread across four ports, and Cloud Foundation/NSX network traffic is spread across the other four ports.
Figure 24. Sample connectivity option for VxRail nodes with two NDC ports and six PCIe ports
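A simple capacity check can help validate whether a given port layout meets planned bandwidth requirements. The sketch below uses example port counts and speeds, and the raw line-rate totals ignore protocol overhead; the figures are illustrative only.

```python
# Illustrative capacity check; the port counts and speeds below are example
# values, and the raw line-rate totals ignore protocol overhead.
def aggregate_gbps(port_count: int, speed_gbps: int) -> int:
    """Raw aggregate bandwidth for a group of ports, in Gb/s."""
    return port_count * speed_gbps

# Eight-port example: four 25 GbE ports per traffic group
vxrail_gbps = aggregate_gbps(4, 25)   # VxRail networks (vSAN, vMotion, etc.)
nsx_gbps = aggregate_gbps(4, 25)      # Cloud Foundation / NSX traffic
print(f"VxRail traffic: {vxrail_gbps} Gb/s, NSX traffic: {nsx_gbps} Gb/s")
```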
The reservation and assignment of the physical ports on the VxRail nodes to support Cloud Foundation on VxRail networking is performed during the initial deployment of the VxRail cluster. Dell Technologies recommends careful consideration to ensure that sufficient network capacity is built into the overall design to support planned workloads. If possible, Dell Technologies recommends an overcapacity of physical networking resources to support future workload growth.