This option is an example of a cabling setup for two NDC-OCP ports and two PCIe ports that are connected to a pair of 10 GbE or 25 GbE switches. If the ports are of the same type and run at the same speed, any NDC-OCP port and any PCIe port can be selected.
Figure 64: Any NDC-OCP ports paired with PCIe ports shows VxRail nodes with any two 10-25 GbE NDC-OCP ports and two 10-25 GbE PCIe ports that are connected to two TOR switches and an optional connection to an iDRAC management switch.
With this custom option, the ports that are selected for VxRail networking are not required to reside on the PCIe adapter card in the first slot.
Figure 65: Two NDC-OCP ports paired with PCIe ports other than the first slot shows VxRail nodes with any two 10-25 GbE NDC-OCP ports and any two 10-25 GbE PCIe ports that are connected to two TOR switches and an optional connection to the iDRAC management switch.
For an outlier use case with a specific business or operational requirement, VxRail can be deployed using only the ports on PCIe adapter cards. The ports must be of the same type and run at the same speed. This option supports spreading the VxRail networking across ports on more than one PCIe adapter card.
VxRail is most commonly deployed with two or four ports. For more network-intensive workload requirements, VxRail can be deployed with six or even eight network ports. This option supports spreading the VxRail networking between NDC-OCP ports and PCIe ports, and between ports on two different PCIe adapter cards.
In this topology, resource-intensive traffic such as vSAN and vMotion can each be assigned a dedicated Ethernet port instead of sharing ports. This topology eliminates the risk of saturating a shared Ethernet port.
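The dedicated-port principle can be sketched as a simple mapping check. This is an illustrative model only: the uplink names (`vmnic0` through `vmnic5`) and network labels are assumptions for the example, not VxRail-defined settings.

```python
# Hypothetical six-port layout in which vSAN and vMotion each get
# dedicated uplinks while management and VM traffic share ports.
uplink_map = {
    "vmnic0": ["Management", "VM Network"],  # shared NDC-OCP port
    "vmnic1": ["Management", "VM Network"],  # redundant shared port
    "vmnic2": ["vSAN"],                      # dedicated PCIe port
    "vmnic3": ["vSAN"],                      # redundant dedicated port
    "vmnic4": ["vMotion"],                   # dedicated PCIe port
    "vmnic5": ["vMotion"],                   # redundant dedicated port
}

def dedicated(network: str, mapping: dict) -> bool:
    """True if every uplink carrying `network` carries nothing else."""
    ports = [p for p, nets in mapping.items() if network in nets]
    return bool(ports) and all(mapping[p] == [network] for p in ports)

print(dedicated("vSAN", uplink_map))        # vSAN never shares an uplink
print(dedicated("Management", uplink_map))  # management shares with VM Network
```

Because `dedicated` returns true for vSAN and vMotion, those traffic types cannot be starved by bursts on the management or VM networks.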
Figure 67: Four NDC-OCP ports and two PCIe ports configuration shows VxRail nodes with four NDC-OCP ports and two PCIe ports that are connected to two TOR switches and an optional connection to the iDRAC management switch.
With the six-port option, you can use more of the PCIe ports instead of the NDC-OCP ports. If your nodes have two PCIe slots that are occupied with NICs, this option offers the flexibility to spread the VxRail networking workload across three NICs.
Figure 68: Two NDC-OCP ports and ports from two PCIe adapters configuration shows VxRail nodes with two NDC-OCP ports, and ports from two PCIe adapter cards, that are connected to two TOR switches and an optional connection to the iDRAC management switch.
For workload use cases with extreme availability, scalability, and performance requirements, four TOR switches can be positioned to support VxRail networking. In this example, each Ethernet port is connected to a single TOR switch, and each pair of TOR switches is logically connected using interswitch links.
This topology also addresses the use case of physical network separation to meet specific security policies or governance requirements. For instance, the networks that are required for VxRail management and operations can be isolated on one pair of switches, while network traffic for guest user and application access is directed to the other pair of switches.
This option offers more flexibility in that it can support adapters running at different speeds. The less resource-intensive networks can run at lower speeds, while the more resource-intensive networks run at a higher speed. For example, the two NDC-OCP ports can be connected to 10 GbE switches to support the management networks, and the two ports on the PCIe adapter card can connect to a pair of 25 GbE switches to support the nonmanagement networks.
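The mixed-speed layout can be sanity-checked with a small model. The switch names, port names, and per-network speed requirements below are assumptions for illustration; actual requirements depend on the workload and the VxRail release.

```python
# Hypothetical four-switch layout: management on a 10 GbE pair,
# vSAN and vMotion on a 25 GbE pair.
switch_speed_gbe = {"tor-a": 10, "tor-b": 10, "tor-c": 25, "tor-d": 25}
port_to_switch = {
    "vmnic0": "tor-a", "vmnic1": "tor-b",  # NDC-OCP ports, management
    "vmnic2": "tor-c", "vmnic3": "tor-d",  # PCIe ports, vSAN/vMotion
}
required_speed = {"Management": 10, "vSAN": 25, "vMotion": 25}
network_ports = {
    "Management": ["vmnic0", "vmnic1"],
    "vSAN": ["vmnic2", "vmnic3"],
    "vMotion": ["vmnic2", "vmnic3"],
}

def speed_ok(network: str) -> bool:
    """True if every uplink for `network` lands on a fast-enough switch."""
    return all(
        switch_speed_gbe[port_to_switch[p]] >= required_speed[network]
        for p in network_ports[network]
    )

for net in required_speed:
    print(net, speed_ok(net))
```

A check like this makes the cabling intent explicit: changing any port-to-switch assignment that drops a high-demand network onto the slower pair would cause `speed_ok` to fail for that network.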
Figure 69: Four TOR switches to support VxRail cluster networking shows VxRail nodes with four ports that are connected to four TOR switches, and an optional connection to the iDRAC management switch.