A VxRail cluster can be deployed with a 2x10 GbE, 2x25 GbE, 4x10 GbE, or 4x25 GbE network profile, and the necessary network hardware must be in place to support the initial deployment. The following diagrams illustrate the host connectivity options for the different VxRail deployment types for either the Mgmt WLD or a VI WLD. For the Mgmt WLD, the Edge Overlay and Edge Uplink networks are deployed if AVN is enabled. For a VI WLD, these networks are deployed if NSX-T Edges are deployed using edge automation in SDDC Manager.
This section illustrates the physical host network connectivity options for the different VxRail profiles when only the single VxRail vDS is used.
Figure 34 illustrates a VxRail deployed with the 2x10 profile on the 4-port NDC. The remaining two ports are left unused by VxRail and can be repurposed for other traffic if required.
The next option is a VxRail deployed with the 4x10 profile. This places vSAN and vMotion on their own dedicated physical NICs on the NDC, while NSX-T traffic uses vmnic0 and vmnic1, shared with management traffic. An additional PCIe card can be installed and used for other traffic if required.
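The 4x10 layout described above can be sketched as a simple traffic-to-uplink map. This is a hedged illustration only: the `PROFILE_4X10` dictionary, the `shared_uplinks` helper, and the exact vmnic numbering for the dedicated vSAN and vMotion uplinks are assumptions for the sketch, not output from VxRail Manager.

```python
# Hypothetical model of the 4x10 profile uplink assignment described above.
# vmnic numbering for vMotion/vSAN is an illustrative assumption.
PROFILE_4X10 = {
    "management": ["vmnic0", "vmnic1"],
    "nsx":        ["vmnic0", "vmnic1"],  # NSX-T shares these with management
    "vmotion":    ["vmnic2"],            # dedicated NDC port (assumed numbering)
    "vsan":       ["vmnic3"],            # dedicated NDC port (assumed numbering)
}

def shared_uplinks(profile, traffic_a, traffic_b):
    """Return the physical uplinks two traffic types have in common."""
    return sorted(set(profile[traffic_a]) & set(profile[traffic_b]))

# Management and NSX-T contend for the same two uplinks;
# vSAN and vMotion are isolated from each other.
print(shared_uplinks(PROFILE_4X10, "management", "nsx"))  # ['vmnic0', 'vmnic1']
print(shared_uplinks(PROFILE_4X10, "vsan", "vmotion"))    # []
```

A map like this makes it easy to reason about which traffic types contend for bandwidth on the same physical port.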
The first 25 GbE option with a single VxRail vDS is the 2x25 profile using the 25 GbE NDC. The VxRail system traffic shares the two NDC ports with the NSX-T traffic.
As with the previous option, additional PCIe cards can be added to the node for other traffic, such as backup or replication.
The second option is to deploy the VxRail with the 4x25 profile, which provides NIC-level redundancy for the VxRail system traffic: the system port groups use one uplink on the NDC and one on the PCIe card. The NSX-T traffic does not have NIC-level redundancy because it uses both ports on the NDC by default.
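The redundancy distinction above can be stated precisely: a traffic type survives the failure of a whole NIC device only if its uplinks span more than one physical adapter. The following is a minimal sketch under that definition; the `PROFILE_4X25` map, device labels, and vmnic numbering are illustrative assumptions, not VxRail-reported values.

```python
# Hypothetical model of the 4x25 profile: system traffic spans the NDC and
# the PCIe card, while NSX-T uses both NDC ports by default (as described above).
PROFILE_4X25 = {
    # traffic type -> list of (uplink, physical device) pairs (assumed numbering)
    "system": [("vmnic0", "NDC"), ("vmnic2", "PCIe")],
    "nsx":    [("vmnic0", "NDC"), ("vmnic1", "NDC")],
}

def has_nic_level_redundancy(uplinks):
    """True only if the uplinks span more than one physical network device,
    so traffic survives the loss of an entire NIC (not just one port)."""
    return len({device for _, device in uplinks}) > 1

for traffic, uplinks in PROFILE_4X25.items():
    print(traffic, has_nic_level_redundancy(uplinks))
# system True   (NDC + PCIe)
# nsx False     (both uplinks on the NDC)
```

This is why the 4x25 profile protects system traffic against a full NDC failure, while NSX-T traffic remains exposed to it unless its uplinks are moved to span both adapters.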