There are several options and design decisions to consider when integrating the VxRail cluster's physical and virtual networks with your data center networks. The decisions made regarding VxRail networking cannot be easily modified after the cluster is deployed and supporting Cloud Foundation, so they should be finalized before the actual VxRail cluster deployment.
Each VxRail node has an on-board, integrated network card. Depending on the VxRail models selected and the supported options, the on-board Ethernet ports can be configured as 2x10Gb, 4x10Gb, 2x25Gb, or 4x25Gb. You can choose to support your Cloud Foundation on VxRail workload using only the on-board Ethernet ports, or deploy with both on-board Ethernet ports and Ethernet ports from expansion adapter cards. If NIC-level redundancy is a business requirement, optional Ethernet adapter cards can be installed into each VxRail node for this purpose. Depending on the VxRail nodes selected for the cluster, the adapter cards can support 10 Gb, 25 Gb, and 100 Gb expansion ports.
VxRail supports both predefined network profiles and custom network profiles when deploying the cluster to support Cloud Foundation VI workload domains. The best practice is to select the network profile that aligns with the number of on-board ports and expansion ports being selected per node to support Cloud Foundation on VxRail networking. This ensures that VxRail and CloudBuilder will configure the supporting virtual networks by following the guidance designed into these network profiles.
If VCF on VxRail networking will be configured on two node ports, decide whether to add an expansion card to each VxRail node to eliminate the on-board card as a single point of failure. If only the on-board ports are present, the first two ports on each VxRail node are reserved to support VCF on VxRail networking using a predefined network profile. If both on-board and expansion ports are present, a custom network profile can be configured, with the option to select one on-board port and one expansion port to reserve for VCF on VxRail networking.
In either two-port configuration, the NSX and VxRail networks share the bandwidth capacity of the two Ethernet ports.
Figure 20. 2 ports reserved for VCF on VxRail networking
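The two options can be summarized as uplink-to-VMnic mappings. Below is a minimal Python sketch of both mappings; the vmnic names are assumptions based on typical ESXi device enumeration (on-board ports first, expansion ports after), not values fixed by VxRail.

```python
# Illustrative uplink-to-vmnic mappings for the two-port options described
# above. The vmnic names are assumptions based on typical ESXi enumeration
# (vmnic0/vmnic1 on the on-board card, vmnic2 on the expansion card).

TWO_PORT_OPTIONS = {
    # Predefined profile: first two on-board ports are reserved automatically.
    "predefined_onboard_only": {"uplink1": "vmnic0", "uplink2": "vmnic1"},
    # Custom profile: one on-board port plus one expansion port, so the
    # on-board card is no longer a single point of failure.
    "custom_onboard_plus_expansion": {"uplink1": "vmnic0", "uplink2": "vmnic2"},
}
```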
If VCF on VxRail networking will be configured with four ports, either all four on-board ports can be used, or the workload can be spread across on-board and expansion ports. The option using only on-board ports relies on a predefined network profile, with automatic assignment of the VMnics to the uplinks. Configuring with both on-board and expansion ports is preferred because it enables resiliency across the network devices in each node and across the pair of switches.
Figure 21. 4 ports reserved for VCF on VxRail networking in pre-defined network profile
Deploying the VxRail cluster using NDC-based ports and PCIe/OCP-based ports with a custom network profile offers flexibility for network assignments. To enable network resiliency with a custom network profile, the best practice is to plug the NDC-based ports into one switch and the PCIe/OCP-based ports into the second switch. Then use the custom network profile feature to map the uplinks from the virtual distributed switch to the VMnics, spreading the workload across both switches and protecting against a single point of failure.
Figure 22. 4 ports and 1 VDS for VCF on VxRail networking using a custom network profile
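The resiliency goal behind this best practice can be expressed as a simple check: every uplink mapping should span both the NDC/OCP device and the PCIe device, and therefore both switches. The following is a hypothetical Python sketch; the device inventory, cabling assumption (NDC ports to switch 1, PCIe ports to switch 2), and uplink mapping are illustrative.

```python
# Hypothetical helper that checks a custom-profile uplink mapping for the
# resiliency goal described above: the uplinks should span both the NDC/OCP
# card and the PCIe card, and therefore both switches. Device names and the
# cabling assumption (NDC -> switch 1, PCIe -> switch 2) are illustrative.

NIC_DEVICE = {"vmnic0": "ndc", "vmnic1": "ndc", "vmnic2": "pcie", "vmnic3": "pcie"}

uplink_map = {
    "uplink1": "vmnic0",  # NDC port, cabled to switch 1
    "uplink2": "vmnic2",  # PCIe port, cabled to switch 2
    "uplink3": "vmnic1",  # NDC port, cabled to switch 1
    "uplink4": "vmnic3",  # PCIe port, cabled to switch 2
}

def spans_both_devices(mapping: dict) -> bool:
    """Return True if the mapping uses ports from both network devices."""
    devices = {NIC_DEVICE[vmnic] for vmnic in mapping.values()}
    return devices == {"ndc", "pcie"}

assert spans_both_devices(uplink_map)
```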
Using a custom network profile also eases the assignment of uplinks if the Cloud Foundation on VxRail instance is deployed with separate virtual distributed switches to segment VxRail network traffic and VCF/NSX network traffic.
Figure 23. 4 ports and 2 VDS for VCF on VxRail networking using a custom network profile
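As an illustration of this layout, the sketch below models one possible two-VDS assignment in Python. The VDS names, traffic groupings, and vmnic assignments are assumptions for the sketch, not mandated values.

```python
# Illustrative two-VDS layout for the four-port custom profile: VxRail
# system traffic on one VDS, VCF/NSX traffic on the other. Each VDS keeps
# one NDC port and one PCIe port for resiliency across devices and switches.

VDS_LAYOUT = {
    "vds01-vxrail": {
        "uplinks": {"uplink1": "vmnic0", "uplink2": "vmnic2"},  # one NDC + one PCIe port
        "traffic": ["management", "vmotion", "vsan"],
    },
    "vds02-nsx": {
        "uplinks": {"uplink1": "vmnic1", "uplink2": "vmnic3"},  # one NDC + one PCIe port
        "traffic": ["nsx-overlay", "nsx-uplink"],
    },
}
```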
Cloud Foundation on VxRail supports the physical segmentation of VxRail and Cloud Foundation network traffic onto dedicated network ports, and onto separate, dedicated virtual distributed switches.
If the VxRail nodes are configured with two Ethernet ports, all the VxRail network traffic and Cloud Foundation/NSX-T traffic is consolidated onto the two ports. A second virtual distributed switch is not supported for the two-port connectivity option, so all VxRail and Cloud Foundation/NSX-T traffic flows through a single virtual distributed switch.
Figure 24. Connectivity options for VxRail nodes with two on-board RJ45 ports or two on-board SFP+ ports
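This constraint can be captured as a simple rule, sketched below in Python based on the statements in this section: two-port connectivity supports a single virtual distributed switch, while four or more reserved ports allow either one or two.

```python
# Sketch of the VDS-count rule described in this section: a second virtual
# distributed switch is not supported with two-port connectivity, while
# four or more reserved ports allow either one or two VDS.
def supported_vds_counts(ports_reserved: int) -> set[int]:
    return {1} if ports_reserved == 2 else {1, 2}

assert supported_vds_counts(2) == {1}
assert supported_vds_counts(4) == {1, 2}
```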
With the option of deploying four on-board ports, the vMotion and vSAN network traffic supporting VxRail is positioned on the second port, and the Cloud Foundation/NSX-T traffic is assigned to the last two ports. With this network profile, a second virtual distributed switch can be deployed to isolate the VxRail network traffic on the first virtual distributed switch and the Cloud Foundation/NSX-T traffic on the second virtual distributed switch.
Figure 25. Connectivity options for VxRail nodes with four on-board ports
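A sketch of this traffic placement follows, assuming the optional second virtual distributed switch is deployed. Placing management traffic on the first port is an assumption for illustration; the text above only fixes the vMotion/vSAN and NSX-T placements.

```python
# Illustrative placement for four on-board ports with the optional second
# VDS. Management on port 1 is an assumption; vMotion/vSAN on port 2 and
# NSX-T on ports 3-4 follow the profile described above.
FOUR_ONBOARD_PORT_LAYOUT = {
    "vds01-vxrail": {
        "port1": ["management"],        # assumed placement
        "port2": ["vmotion", "vsan"],
    },
    "vds02-nsx": {
        "port3": ["nsx-overlay"],
        "port4": ["nsx-overlay"],
    },
}
```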
For the four-port option using both on-board and expansion ports from each VxRail node, all Cloud Foundation on VxRail traffic can be directed onto a single virtual distributed switch, or the NSX-T network traffic can be redirected onto a separate virtual distributed switch.
Figure 26. Connectivity options for VxRail nodes with two on-board ports and two expansion ports
For planned workloads that have very high bandwidth requirements, up to eight Ethernet ports can be used across the on-board and expansion cards. The VxRail network traffic is spread across four ports, and Cloud Foundation/NSX-T network traffic is spread across the other four ports.
Figure 27. Sample connectivity option for VxRail nodes with two on-board ports and six expansion ports
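A back-of-the-envelope view of the aggregate capacity in this eight-port layout, assuming 25 GbE ports (an assumption; actual speeds depend on the adapters selected):

```python
# Aggregate per-node capacity for the eight-port example above, assuming
# 25 GbE ports; substitute the actual port speed for your adapters.
PORT_SPEED_GBPS = 25
vxrail_ports = 4  # VxRail system traffic (management, vMotion, vSAN)
nsx_ports = 4     # Cloud Foundation/NSX-T workload traffic

print(f"VxRail traffic capacity: {vxrail_ports * PORT_SPEED_GBPS} Gbps per node")
print(f"NSX-T traffic capacity:  {nsx_ports * PORT_SPEED_GBPS} Gbps per node")
```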
The reservation and assignment of the physical ports on the VxRail nodes to support Cloud Foundation on VxRail networking is performed during the initial deployment of the VxRail cluster. Dell Technologies recommends careful consideration to ensure that sufficient network capacity is built into the overall design to support planned workloads. If possible, Dell Technologies recommends overprovisioning physical networking resources to support future workload growth.
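As a closing illustration of the overcapacity recommendation, the sketch below compares a placeholder workload estimate against provisioned capacity; all figures are hypothetical, not Dell Technologies sizing guidance.

```python
# Hypothetical headroom check for the overcapacity recommendation above.
# All values are placeholders, not sizing guidance.
planned_workload_gbps = 40   # estimated steady-state demand per node
growth_factor = 1.5          # desired headroom for future workload growth
provisioned_gbps = 2 * 25    # e.g., two 25 GbE ports reserved for workloads

required_gbps = planned_workload_gbps * growth_factor
if provisioned_gbps >= required_gbps:
    print(f"OK: {provisioned_gbps} Gbps provisioned >= {required_gbps} Gbps required")
else:
    print(f"Shortfall of {required_gbps - provisioned_gbps} Gbps: consider more or faster ports")
```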