Several options and design decisions must be considered when integrating the VxRail cluster physical and virtual networks with your data center networks. The decisions made regarding VxRail networking cannot be easily modified after the cluster is deployed and supporting Cloud Foundation, so they should be finalized before the VxRail cluster is actually deployed.
Each VxRail node has an integrated network daughter card (NDC). The NDC can support either 2x10 GbE or 2x25 GbE ports, or 4x10 GbE ports. You can choose to support your Cloud Foundation on VxRail workload using only the ports on the NDC. If NIC-level redundancy is a business requirement, optional PCIe cards can be installed into each VxRail node. The PCIe adapter cards support both 10 GbE and 25 GbE expansion ports.
VxRail supports both predefined network profiles and custom network profiles when deploying the cluster to support Cloud Foundation VI workload domains. The best practice is to select the network profile that matches the number of NDC-based ports and PCIe-based ports selected per node to support Cloud Foundation on VxRail networking. This ensures that VxRail and CloudBuilder configure the supporting virtual networks by following the guidance designed into these network profiles.
If VCF on VxRail networking will be configured on two node ports, a decision must be made whether to add a PCIe expansion card into each VxRail node to eliminate the NDC as a single point of failure. If only the NDC is present, the first two ports on each VxRail node will be reserved to support VCF on VxRail networking using a predefined network profile. If both NDC and PCIe ports are present, a custom network profile can be configured, with the option to select a port on the NDC and a port on the PCIe adapter card to reserve for VCF on VxRail networking.
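The two-port decision above can be summarized in a short sketch. This is an illustration only, not a VxRail API: the vmnic names and the device labels are assumptions, and the actual port reservation is performed by the VxRail deployment tooling.

```python
# Illustrative sketch of the two-port options described above.
# Port names (vmnic0, vmnic4) and device labels are assumptions.
def reserve_two_ports(pcie_present: bool) -> list[str]:
    """Return the two node ports reserved for VCF on VxRail networking.

    With only the NDC present, the predefined network profile reserves
    the first two NDC ports. With a PCIe card also present, a custom
    network profile can select one port on each device, removing the
    NDC as a single point of failure.
    """
    if pcie_present:
        return ["vmnic0 (NDC)", "vmnic4 (PCIe)"]   # custom network profile
    return ["vmnic0 (NDC)", "vmnic1 (NDC)"]        # predefined network profile
```

Either way, only two physical ports carry all VCF on VxRail traffic; the PCIe option changes which devices those ports live on, not how many ports are used.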
In both two-port instances, the NSX and VxRail networks are configured to share the bandwidth capacity of the two Ethernet ports.
If VCF on VxRail networking will be configured with four ports, either all four ports on the NDC can be used (provided it is a 4x10 GbE NDC), or the workload can be spread across ports on the NDC and PCIe adapter cards. The option to use only NDC ports uses a predefined network profile, with automatic assignment of the VMnics to the uplinks. Configuring both NDC ports and PCIe ports is preferred because it enables resiliency across the node devices and across the pair of switches.
If deploying the VxRail cluster using NDC-based ports and PCIe-based ports, the best practice is to enable network resiliency by plugging the NDC-based ports into one switch, and then plugging the PCIe-based ports into a second switch. Then, use a custom profile that maps the uplinks from the virtual distributed switch to the VMnics, as shown in the following figure. This profile permits the NSX and VxRail networks to optimize the sharing of bandwidth capacity across the four Ethernet ports on each node. It offers the best support for the default teaming and failover policies configured for each Cloud Foundation on VxRail network.
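The cabling and mapping practice above can be sketched as a simple data structure. This is a hypothetical illustration, not the actual profile format: the uplink labels, vmnic numbering, and switch names are all assumptions made for the example.

```python
# Hypothetical four-port custom network profile: uplinks alternate
# between NDC-based ports (cabled to switch A) and PCIe-based ports
# (cabled to switch B). All names below are illustrative assumptions.
CUSTOM_PROFILE = {
    "uplink1": {"vmnic": "vmnic0", "device": "NDC",  "switch": "switch-A"},
    "uplink2": {"vmnic": "vmnic4", "device": "PCIe", "switch": "switch-B"},
    "uplink3": {"vmnic": "vmnic1", "device": "NDC",  "switch": "switch-A"},
    "uplink4": {"vmnic": "vmnic5", "device": "PCIe", "switch": "switch-B"},
}

def is_resilient(profile: dict) -> bool:
    """A mapping is resilient when the uplinks span more than one
    physical device and more than one top-of-rack switch, so no single
    NDC, PCIe card, or switch failure takes down all uplinks."""
    devices = {p["device"] for p in profile.values()}
    switches = {p["switch"] for p in profile.values()}
    return len(devices) > 1 and len(switches) > 1
```

The point of the sketch is the failure-domain check: because each teaming pair spans both a device and a switch, the default teaming and failover policies can always fail over to a surviving path.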
Cloud Foundation on VxRail supports the physical segmentation of VxRail and Cloud Foundation network traffic onto dedicated network ports, and onto separate, dedicated virtual distributed switches.
If the VxRail nodes are configured with two Ethernet ports, all the VxRail network traffic and Cloud Foundation/NSX-T traffic is consolidated onto the two ports. A second virtual distributed switch is not supported for the two-port connectivity option, so all VxRail and Cloud Foundation/NSX-T traffic flows through a single virtual distributed switch.
With the four-port node option without the optional PCIe card, the vMotion and vSAN network traffic supporting VxRail are positioned on the second port on the network daughter card, and the Cloud Foundation/NSX-T traffic is assigned to the last two ports. With this network profile, a second virtual distributed switch can be deployed to isolate the VxRail network traffic on the first virtual distributed switch, and the Cloud Foundation/NSX-T traffic on the second virtual distributed switch.
For the four-port option using both NDC-based ports and PCIe-based ports from each VxRail node, a decision can be made to direct all Cloud Foundation on VxRail traffic onto a single virtual distributed switch, or to redirect the NSX-T network traffic onto a separate virtual distributed switch.
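The single-switch versus two-switch decision described above can be sketched as follows. The network names and virtual distributed switch labels are assumptions for illustration; the actual portgroup names are set during deployment.

```python
# Illustrative sketch of the traffic-separation decision. The network
# names and the "vds1"/"vds2" labels are assumptions, not product names.
VXRAIL_NETWORKS = ["management", "vmotion", "vsan", "vm-network"]
NSX_NETWORKS = ["nsx-overlay", "nsx-uplink"]

def place_traffic(use_second_vds: bool) -> dict:
    """Map each traffic type to a virtual distributed switch: either
    everything on one switch, or VxRail system traffic on the first
    switch and Cloud Foundation/NSX-T traffic on the second."""
    if not use_second_vds:
        return {net: "vds1" for net in VXRAIL_NETWORKS + NSX_NETWORKS}
    placement = {net: "vds1" for net in VXRAIL_NETWORKS}
    placement.update({net: "vds2" for net in NSX_NETWORKS})
    return placement
```

Isolating the NSX-T traffic on a second virtual distributed switch keeps the VxRail system networks and the workload overlay traffic in separate virtual switching domains, at the cost of managing an additional switch.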
For planned workloads that have very high bandwidth requirements, up to eight Ethernet ports can be used across the NDC and PCIe cards. The VxRail network traffic is spread across four ports, and Cloud Foundation/NSX-T network traffic is spread across the other four ports.
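A back-of-envelope capacity calculation makes the eight-port motivation concrete. The 25 GbE port speed below is an assumption; with 10 GbE ports the aggregates would be 40 Gbps per traffic class instead.

```python
# Rough aggregate bandwidth for the eight-port layout described above,
# assuming 25 GbE ports (an assumption; substitute 10 for 10 GbE ports).
PORT_SPEED_GBPS = 25
VXRAIL_PORT_COUNT = 4   # ports dedicated to VxRail system traffic
NSX_PORT_COUNT = 4      # ports dedicated to Cloud Foundation/NSX-T traffic

vxrail_capacity = VXRAIL_PORT_COUNT * PORT_SPEED_GBPS  # 100 Gbps aggregate
nsx_capacity = NSX_PORT_COUNT * PORT_SPEED_GBPS        # 100 Gbps aggregate
```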
The reservation and assignment of the physical ports on the VxRail nodes to support Cloud Foundation on VxRail networking is performed during the initial deployment of the VxRail cluster. Dell Technologies recommends careful consideration to ensure that sufficient network capacity is built into the overall design to support planned workloads. If possible, Dell Technologies recommends an overcapacity of physical networking resources to support future workload growth.