This section illustrates the physical host network connectivity options for the different VxRail profiles when a second vDS is used exclusively for NSX traffic and the VxRail vDS carries only system traffic.
Figure 38 illustrates a VxRail deployed with the 2x10 profile on the 4-port NDC. The remaining two ports are used for the second vDS that carries the NSX traffic.
The second option is VxRail deployed with the 4x10 profile, consuming all four ports of the NDC; this places vSAN and vMotion onto their own dedicated physical NICs on the NDC. The NSX-T traffic uses a second vDS with uplinks connecting to pNICs on the PCIe 10 GbE card.
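As a sanity check on the two 10 GbE options above, the port allocation can be sketched as a small table in Python. The port and NIC names here (NDC-1, PCIe-1, and so on) are illustrative labels, not identifiers from any VxRail or vSphere tooling:

```python
# Illustrative sketch of the two 10 GbE options described above.
# Each option maps a vDS to the physical ports that back its uplinks.
ten_gbe_options = {
    "2x10 profile": {
        "vxrail_vds_ports": ["NDC-1", "NDC-2"],   # system traffic (mgmt, vSAN, vMotion)
        "nsx_vds_ports": ["NDC-3", "NDC-4"],      # remaining NDC ports for NSX
    },
    "4x10 profile": {
        # all four NDC ports consumed; vSAN and vMotion get dedicated pNICs
        "vxrail_vds_ports": ["NDC-1", "NDC-2", "NDC-3", "NDC-4"],
        "nsx_vds_ports": ["PCIe-1", "PCIe-2"],    # second vDS uplinks on the PCIe 10 GbE card
    },
}

for name, opt in ten_gbe_options.items():
    total = len(opt["vxrail_vds_ports"]) + len(opt["nsx_vds_ports"])
    print(f"{name}: {total} ports in use")
```

The 2x10 option fits entirely on the 4-port NDC, while the 4x10 option needs the PCIe card for the NSX vDS uplinks.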
The first 25 GbE option shown here uses the 2-port 25 GbE NDC for the VxRail vDS, and a second vDS is created for the NSX-T traffic using the two ports of the PCIe card.
As with the previous option, additional PCIe cards can be added to the node for other traffic, for example, backup, replication, and so on.
The second option requires a total of six 25 GbE ports: the VxRail is deployed with the 4x25 profile using the two-port NDC and the PCIe card, and the second vDS for NSX-T traffic requires an additional 2x25 GbE card.
The last option provides full NIC-level redundancy for both VxRail system traffic and NSX-T traffic, using four NICs with six interfaces in total connected to the TOR switches. The VxRail is deployed with the 4x25 profile using the two-port NDC and the PCIe card, and the second vDS for NSX-T traffic requires two additional 2x25 GbE cards, with one interface from each of those NICs connected to each TOR.
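The three 25 GbE options can be tallied the same way. This sketch only confirms the interface and NIC counts stated above; the NIC labels are illustrative, not from any VxRail tooling:

```python
# Illustrative interface tally for the three 25 GbE options described above.
# Each entry maps a physical NIC to the number of its interfaces in use.
options_25gbe = {
    # VxRail vDS on the 2-port NDC, NSX-T vDS on the PCIe card
    "2x25 NDC + 2x25 PCIe": {"NDC": 2, "PCIe-1": 2},
    # 4x25 profile (NDC + PCIe) plus an extra 2x25 card for the NSX-T vDS
    "4x25 profile + extra 2x25 card": {"NDC": 2, "PCIe-1": 2, "PCIe-2": 2},
    # 4x25 profile plus two extra 2x25 cards, one interface used per extra card,
    # giving full NIC-level redundancy for both vDSes
    "full NIC-level redundancy": {"NDC": 2, "PCIe-1": 2, "PCIe-2": 1, "PCIe-3": 1},
}

for name, nics in options_25gbe.items():
    print(f"{name}: {sum(nics.values())} interfaces across {len(nics)} NICs")
```

Note that the last two options both consume six 25 GbE interfaces; the redundancy option spreads them across four NICs instead of three, so the loss of any single NIC leaves both vDSes with a surviving uplink.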