There are several options and design decisions to consider when integrating the VxRail cluster physical and virtual networks with your data center networks. Decisions made about VxRail networking cannot be easily modified after the cluster is deployed and is supporting Cloud Foundation, so they should be finalized before the VxRail cluster is deployed.
Each VxRail node has an integrated, on-board network card. Depending on the VxRail models selected and the supported options, the on-board Ethernet ports can be configured as 2x10Gb, 4x10Gb, 2x25Gb, or 4x25Gb. You can choose to support your Cloud Foundation on VxRail workload using only the on-board Ethernet ports, or deploy with both on-board Ethernet ports and Ethernet ports from PCIe adapter cards. If NIC-level redundancy is a business requirement, you can install optional Ethernet adapter cards into each VxRail node for this purpose. Depending on the VxRail nodes selected for the cluster, the adapter cards can support 10Gb, 25Gb, and 100Gb ports.
VxRail supports both predefined and custom network profiles when deploying the cluster to support Cloud Foundation VI workload domains. The best practice is to select the network profile that aligns with the number of on-board ports and expansion ports selected per node to support Cloud Foundation on VxRail networking. This ensures that VxRail and Cloud Builder configure the supporting virtual networks by following the guidance designed into these network profiles.
If VCF on VxRail networking will be configured on two node ports, you must decide whether to add an expansion card to each VxRail node to eliminate the on-board network card as a single point of failure. If only the on-board ports are present, the first two ports on each VxRail node are reserved to support VCF on VxRail networking, using a predefined network profile. If both on-board and expansion ports are present, you can configure a custom network profile, with the option to select one on-board port and one expansion port to reserve for VCF on VxRail networking.
In either two-port configuration, the NSX-T and VxRail networks share the bandwidth capacity of the two Ethernet ports.
Figure 38. Two ports reserved for VCF on VxRail networking
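To make the two-port decision concrete, the following Python sketch models a hypothetical custom profile that reserves one on-board port and one expansion port. The device, vmnic, and uplink names are illustrative assumptions, not values generated by VxRail or Cloud Builder.

```python
# Hypothetical model of a two-port custom network profile. The device,
# vmnic, and uplink names below are illustrative only; actual assignments
# are made by VxRail during cluster deployment.
TWO_PORT_CUSTOM_PROFILE = {
    "uplink1": {"device": "NDC/OCP", "vmnic": "vmnic0", "speed_gbps": 25},
    "uplink2": {"device": "PCIe slot 1", "vmnic": "vmnic2", "speed_gbps": 25},
}


def has_nic_level_redundancy(profile: dict) -> bool:
    """True when the uplinks span more than one physical network device,
    removing the on-board card as a single point of failure."""
    devices = {port["device"] for port in profile.values()}
    return len(devices) > 1


if __name__ == "__main__":
    # With one NDC/OCP port and one PCIe port selected, the check passes.
    print(has_nic_level_redundancy(TWO_PORT_CUSTOM_PROFILE))  # True
```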
If VCF on VxRail networking will be configured with four ports, either all four on-board ports can be used, or the workload can be spread across on-board and expansion ports. Using only on-board ports relies on a predefined network profile, with automatic assignment of the vmnics to the uplinks. Configuring with both on-board and expansion ports is preferred because it provides resiliency across the node devices and across the pair of switches.
Figure 39. Four ports reserved for VCF on VxRail networking in pre-defined network profile
A custom network profile offers more flexibility and choice than a predefined network profile, and is the preferred approach for Cloud Foundation on VxRail deployments. The table below summarizes the differences, followed by a short example of the custom settings.
| Network profile capability | Predefined | Custom |
|---|---|---|
| Assign any NDC/OCP/PCIe NIC to any VxRail network | No | Yes |
| Custom MTU | No | Yes |
| Custom teaming and failover policy | No | Yes |
| Configure more than one virtual distributed switch | No | Yes |
| Assign 2, 4, 6, or 8 NICs to support VxRail networking | No | Yes |
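As a rough illustration of the additional settings that a custom network profile exposes, the following Python sketch models them as a simple data structure. The field names and default values are assumptions for this example, not VxRail or SDDC Manager parameters.

```python
from dataclasses import dataclass


# Illustrative model of custom network profile settings. Field names and
# defaults are assumptions for this sketch, not product parameters.
@dataclass
class CustomNetworkProfile:
    nics: int                           # 2, 4, 6, or 8 NICs per node
    mtu: int = 9000                     # custom MTU, for example jumbo frames
    teaming_policy: str = "load-based"  # example teaming and failover policy
    vds_count: int = 1                  # more than one VDS can be configured

    def validate(self) -> None:
        if self.nics not in (2, 4, 6, 8):
            raise ValueError("VxRail networking supports 2, 4, 6, or 8 NICs per node")
        if self.nics == 2 and self.vds_count > 1:
            raise ValueError("additional VDSs are not supported with two-port connectivity")


if __name__ == "__main__":
    profile = CustomNetworkProfile(nics=4, vds_count=2)
    profile.validate()  # passes: four NICs can back two virtual distributed switches
```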
Cloud Foundation on VxRail supports not only selecting the Ethernet ports on each node for physical network connectivity, but also assigning those Ethernet ports as uplinks for the supporting virtual distributed switches in the virtual infrastructure. This enables more efficient use of available bandwidth. It also enables physical segmentation of VxRail and Cloud Foundation network traffic onto dedicated Ethernet ports, as well as segmentation onto separate virtual distributed switches.
If the VxRail nodes are configured with two Ethernet ports for networking purposes, all the VxRail network traffic and Cloud Foundation/NSX-T traffic is consolidated onto those two ports. Additional virtual distributed switches are not supported for the two-port connectivity option, so all VxRail and Cloud Foundation/NSX-T traffic flows through a single virtual distributed switch.
Figure 40. Two connectivity options for VxRail nodes with two NICs
With the option of deploying four NICs to a single VDS, either all four ports on the NDC/OCP can be used, or resiliency can be achieved with a custom network profile that selects NICs from both the NDC/OCP and PCIe devices. With this option, two uplinks can be reserved to support NSX-T traffic, or the NSX-T traffic can share uplinks with the other networks.
Figure 41. Two connectivity options for VxRail nodes with four NICs and a single VDS
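The following sketch illustrates, in Python, how the four uplinks on a single VDS might be shared or dedicated among the traffic classes described above. The network names and active/standby ordering are illustrative assumptions rather than the exact policies VxRail configures.

```python
# Hypothetical teaming layout for a four-NIC, single-VDS configuration.
# Here uplinks 3 and 4 are reserved for NSX-T, while the VxRail networks
# share uplinks 1 and 2. The ordering is illustrative only.
SINGLE_VDS_FOUR_NIC = {
    "VxRail management": {"active": ["uplink1"], "standby": ["uplink2"]},
    "vMotion":           {"active": ["uplink2"], "standby": ["uplink1"]},
    "vSAN":              {"active": ["uplink2"], "standby": ["uplink1"]},
    "NSX-T":             {"active": ["uplink3", "uplink4"], "standby": []},
}


def has_dedicated_uplinks(network: str, layout: dict) -> bool:
    """True if no other network uses this network's active uplinks."""
    mine = set(layout[network]["active"])
    others = {
        up
        for name, policy in layout.items()
        if name != network
        for up in policy["active"] + policy["standby"]
    }
    return mine.isdisjoint(others)


if __name__ == "__main__":
    print(has_dedicated_uplinks("NSX-T", SINGLE_VDS_FOUR_NIC))  # True
```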
Deploying Cloud Foundation on VxRail with four NICs enables the option to deploy a second virtual distributed switch to isolate network traffic. One option is to separate non-management VxRail networks, such as vSAN and vMotion, away from the VxRail management network with two virtual distributed switches.
Another option is to deploy NSX-T on a second virtual distributed switch to separate it from all VxRail network traffic. This virtual distributed switch supports only NSX-T and is not used for VxRail networking. If this option is selected, two unused NICs must be reserved as uplinks for this switch.
Figure 42. Two connectivity options for VxRail nodes with four NICs and two VDSs
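The sketch below models the second of these options, where NSX-T is isolated on its own virtual distributed switch backed by two NICs that VxRail networking leaves unused. The switch and vmnic names are illustrative assumptions.

```python
# Hypothetical two-VDS layout on four NICs. Switch and vmnic names are
# illustrative; VxRail assigns the actual names at deployment time.
TWO_VDS_FOUR_NIC = {
    "vds01": {
        "uplinks": ["vmnic0", "vmnic1"],
        "networks": ["VxRail management", "vMotion", "vSAN"],
    },
    "vds02": {
        # These two NICs are left unused by VxRail networking and are
        # reserved as uplinks for NSX-T only.
        "uplinks": ["vmnic2", "vmnic3"],
        "networks": ["NSX-T"],
    },
}


def nsx_only_uplinks(layout: dict) -> list[str]:
    """Return the uplinks that carry NSX-T traffic and nothing else."""
    return [
        uplink
        for vds in layout.values()
        if vds["networks"] == ["NSX-T"]
        for uplink in vds["uplinks"]
    ]


if __name__ == "__main__":
    print(nsx_only_uplinks(TWO_VDS_FOUR_NIC))  # ['vmnic2', 'vmnic3']
```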
For planned workloads with high network demands or strict network separation requirements, six or eight NICs can be used across the NDC/OCP and PCIe adapter cards to support Cloud Foundation on VxRail.
With the six-NIC option, you can deploy up to three virtual distributed switches to support the Cloud Foundation on VxRail environment. If the desired outcome is to separate the non-management VxRail networks from the VxRail management networks, a second virtual distributed switch can be deployed when the VxRail cluster is built. With this option, two uplinks can be reserved to support NSX-T traffic, or the NSX-T traffic can share uplinks with the other networks.
Figure 43. Connectivity option for VxRail nodes with six NICs and two VDSs
If the desired outcome is to further separate the non-management VxRail networks, the VxRail management networks, and the NSX-T network traffic, you can deploy a third virtual distributed switch. This third virtual distributed switch carries only NSX-T traffic, so the underlying VxRail cluster is built on four NICs, leaving the remaining two NICs reserved and unused for NSX-T.
Figure 44. Connectivity option for VxRail nodes with six NICs and three VDSs
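The NIC arithmetic behind the six-NIC, three-VDS option can be summarized in a short Python sketch: four NICs back the two VxRail switches, and the remaining two are held unused for the NSX-T switch. The per-switch counts are illustrative assumptions.

```python
# Illustrative NIC allocation for the six-NIC, three-VDS option.
TOTAL_NICS_PER_NODE = 6

# Assumed split: two NICs per virtual distributed switch.
ALLOCATION = {
    "vds01 (VxRail management networks)": 2,
    "vds02 (VxRail non-management networks, such as vSAN and vMotion)": 2,
    "vds03 (NSX-T only, NICs left unused by VxRail)": 2,
}

# The VxRail cluster itself is built on the four NICs behind vds01/vds02;
# the last two NICs stay unused so they can be reserved for NSX-T.
vxrail_nics = sum(n for name, n in ALLOCATION.items() if "NSX-T" not in name)
nsx_reserved = TOTAL_NICS_PER_NODE - vxrail_nics

assert sum(ALLOCATION.values()) == TOTAL_NICS_PER_NODE
print(f"VxRail networking uses {vxrail_nics} NICs, {nsx_reserved} reserved for NSX-T")
# VxRail networking uses 4 NICs, 2 reserved for NSX-T
```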
For the most demanding cases, where you prefer dedicated uplinks for the vMotion and vSAN networks, you can deploy Cloud Foundation on VxRail with eight NICs. The eight-NIC option supports up to three virtual distributed switches for VxRail and NSX-T networking. If two virtual distributed switches are desired, the NSX-T networking shares a virtual distributed switch with the VxRail management traffic.
Figure 45. Connectivity option for VxRail nodes with eight NICs and two VDSs
If you prefer further segmentation of the VxRail management networks, the VxRail non-management networks, and NSX-T traffic, you can deploy a third virtual distributed switch to support NSX-T. With this option, two unused NICs are required as uplinks for the third virtual distributed switch, which is dedicated to NSX-T network traffic.
Figure 46. Connectivity options for VxRail nodes with eight NICs and three VDSs
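As a sketch of the eight-NIC, three-VDS layout just described, the following Python structure shows how dedicated uplink pairs could be set aside for vMotion and vSAN alongside separate switches for management and NSX-T traffic. All names and pairings are illustrative assumptions.

```python
# Hypothetical eight-NIC, three-VDS layout with dedicated uplinks for the
# vMotion and vSAN networks. Names and pairings are illustrative only.
EIGHT_NIC_THREE_VDS = {
    "vds01": {  # VxRail management networks
        "VxRail management": ["vmnic0", "vmnic1"],
    },
    "vds02": {  # VxRail non-management networks, each with its own uplink pair
        "vMotion": ["vmnic2", "vmnic3"],
        "vSAN":    ["vmnic4", "vmnic5"],
    },
    "vds03": {  # NSX-T only, on two NICs unused by VxRail networking
        "NSX-T": ["vmnic6", "vmnic7"],
    },
}

# Sanity check: every NIC is assigned exactly once across all switches.
all_uplinks = [
    nic
    for vds in EIGHT_NIC_THREE_VDS.values()
    for uplinks in vds.values()
    for nic in uplinks
]
assert len(all_uplinks) == len(set(all_uplinks)) == 8
```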
Cloud Foundation on VxRail does not support adding another virtual distributed switch for customer networks during the automated deployment process. If there are other networks that you want to integrate into the Cloud Foundation on VxRail instance post-deployment, the best practice is to reserve a minimum of two NICs for this purpose.
Figure 47. Customer-supplied VDS with 4 NICs reserved for Cloud Foundation on VxRail networking
Figure 48. Customer-supplied VDS with 6 NICs reserved for Cloud Foundation on VxRail networking
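A simple way to reason about this reservation is sketched below in Python: given the total ports per node and the ports consumed by Cloud Foundation on VxRail networking, at least two NICs should remain free for a customer-supplied VDS added post-deployment. The numbers shown are illustrative.

```python
# Illustrative check that enough NICs remain for a customer-supplied VDS
# that is added after Cloud Foundation on VxRail is deployed.
def nics_available_for_customer_vds(total_nics: int, vcf_nics: int) -> int:
    """Return the ports left over after VCF on VxRail networking is assigned."""
    remaining = total_nics - vcf_nics
    if remaining < 2:
        raise ValueError("reserve at least two NICs for customer networks")
    return remaining


if __name__ == "__main__":
    # Example: nodes with six ports, four reserved for VCF on VxRail networking.
    print(nics_available_for_customer_vds(total_nics=6, vcf_nics=4))  # 2
```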
The physical ports on the VxRail nodes that support Cloud Foundation on VxRail networking are reserved and assigned during the initial deployment of the VxRail cluster. Dell Technologies recommends careful consideration to ensure that sufficient network capacity is built into the overall design to support planned workloads. If possible, Dell Technologies recommends overprovisioning physical networking resources to support future workload growth.
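As a final sizing aid, the short sketch below adds up the per-node uplink bandwidth reserved for Cloud Foundation on VxRail networking and compares it with a planned workload estimate plus growth headroom. The port speeds, demand figure, and headroom factor are illustrative assumptions, not sizing guidance.

```python
# Illustrative per-node bandwidth check. Port speeds, planned demand, and
# the growth headroom factor below are example values only.
RESERVED_PORT_SPEEDS_GBPS = [25, 25, 25, 25]   # four 25Gb ports per node
PLANNED_DEMAND_GBPS = 60                        # estimated peak per-node traffic
GROWTH_HEADROOM = 1.3                           # 30% allowance for future growth

capacity = sum(RESERVED_PORT_SPEEDS_GBPS)
required = PLANNED_DEMAND_GBPS * GROWTH_HEADROOM

print(f"Capacity {capacity} Gbps, required {required:.0f} Gbps with headroom")
if capacity < required:
    print("Consider reserving additional or faster ports per node.")
```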