The diagrams in this appendix show various options for physical wiring between VxRail nodes and the adjacent, top-of-rack switches. They are provided as illustrative examples to help with the planning and design process for physical network connectivity.
All VxRail nodes are manufactured with Ethernet ports built into the chassis. Depending on the VxRail model, these built-in Ethernet ports are provided by either a Network Daughter Card (NDC) or an adapter that follows the Open Compute Project (OCP) specifications. Optional PCIe adapter cards can be installed in the VxRail nodes to provide additional Ethernet ports for redundancy and increased bandwidth. The examples in this section display options starting with connectivity from a single NDC/OCP adapter and progressing to network topologies that address requirements for failure protection and optimal bandwidth distribution.
If additional Ethernet connectivity is required to support other use cases outside of VxRail networking, such as additional guest networks or external storage, additional slots on the VxRail nodes should be reserved for PCIe adapter cards. If this is a current requirement or potential future requirement, be sure to select a VxRail node model with sufficient PCIe slots to accommodate the additional adapter cards.
The examples include topologies for predefined network profiles and custom network profiles. With a predefined network profile, VxRail Manager automatically selects the Ethernet ports for assignment to VxRail networks. With a custom network profile, the network topology is manually configured using VxRail Manager.
Custom network profiles are supported with VxRail version 7.0.130 or later.
Network profiles using 6 ports and 8 ports for VxRail networking are supported with VxRail version 7.0.400 or later.
Figure 71. VxRail nodes with two 10 GbE or 25 GbE NDC/OCP ports connected to two TOR switches, and one optional connection to management switch for iDRAC
With this predefined profile, VxRail selects the two ports on the NDC/OCP to support VxRail networking. If the NDC/OCP adapter on the VxRail nodes is shipped with four Ethernet ports, the two leftmost ports are selected. If you choose to use only two Ethernet ports, the remaining ports can be used for other use cases. This connectivity option is the simplest to deploy. It is suitable for smaller, less demanding workloads that can tolerate the loss of the NDC/OCP adapter as a single point of failure.
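The port-selection rule described above can be sketched as follows. This is a hypothetical model for planning purposes only, not a VxRail Manager API; the `vmnic` names and the helper function are illustrative assumptions.

```python
# Hypothetical sketch of the predefined two-port selection rule: given the
# NDC/OCP ports in slot order, the two leftmost ports are reserved for
# VxRail networking, and any remaining ports stay free for other use cases.
def select_predefined_ports(ndc_ocp_ports: list[str]) -> tuple[list[str], list[str]]:
    """Return (vxrail_ports, remaining_ports) for the two-port profile."""
    if len(ndc_ocp_ports) < 2:
        raise ValueError("the two-port profile needs at least two NDC/OCP ports")
    return ndc_ocp_ports[:2], ndc_ocp_ports[2:]

# Example with a four-port NDC/OCP adapter (port names are illustrative):
vxrail, free = select_predefined_ports(["vmnic0", "vmnic1", "vmnic2", "vmnic3"])
# vxrail -> ["vmnic0", "vmnic1"]; free -> ["vmnic2", "vmnic3"]
```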
Figure 72. VxRail nodes with four 10 GbE NDC/OCP ports connected to two TOR switches, and one optional connection to management switch for iDRAC
In this predefined network profile, VxRail selects all four ports on the NDC/OCP to support VxRail networking instead of two. The same number of cable connections should be made to each switch. This topology provides additional bandwidth over the two-port option, but offers no protection against a failure of the network adapter card.
Figure 73. VxRail nodes with two 10/25 GbE NDC/OCP ports and two 10/25 GbE PCIe ports connected to two TOR switches, and one optional connection to management switch for iDRAC
In this predefined network profile option, two NDC/OCP ports and two ports on the PCIe card in the first slot are selected for VxRail networking. The network profile splits the NDC/OCP ports between the two switches, and likewise splits the PCIe-based ports between the two switches. This option protects against loss of service from a failure at the switch level, as well as from a failure of either the NDC/OCP or PCIe adapter card.
Figure 74. VxRail nodes with any two 10/25 GbE NDC/OCP ports and two 10/25 GbE PCIe ports connected to two TOR switches, and one optional connection to management switch for iDRAC
This is an example of a custom cabling setup with two NDC/OCP ports and two PCIe ports connected to a pair of 10 GbE or 25 GbE switches. Any NDC/OCP port and any PCIe port can be selected to support VxRail networking. However, the two NICs assigned to support a specific VxRail network must be of the same type and running at the same speed.
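The type-and-speed constraint above can be expressed as a simple compatibility check. This is a planning-time sketch under assumed data structures; the `Nic` record, its fields, and the example port types are hypothetical, not part of any VxRail interface.

```python
# Hypothetical check for the custom-profile rule: the two NICs assigned
# to a given VxRail network must match in port type and link speed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Nic:
    name: str        # e.g. "vmnic0" (illustrative)
    port_type: str   # e.g. "SFP28" or "RJ45" (illustrative)
    speed_gbe: int   # e.g. 10 or 25

def valid_vxrail_pair(a: Nic, b: Nic) -> bool:
    """True when both NICs are the same type and run at the same speed."""
    return a.port_type == b.port_type and a.speed_gbe == b.speed_gbe

# A 25 GbE NDC/OCP port paired with a 25 GbE PCIe port of the same type
# passes; pairing it with a 10 GbE port does not.
ndc = Nic("vmnic0", "SFP28", 25)
pcie_25 = Nic("vmnic4", "SFP28", 25)
pcie_10 = Nic("vmnic5", "SFP28", 10)
# valid_vxrail_pair(ndc, pcie_25) -> True
# valid_vxrail_pair(ndc, pcie_10) -> False
```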
Figure 75. VxRail nodes with any two 10/25 GbE NDC/OCP ports and any two 10/25 GbE PCIe ports connected to two TOR switches, and one optional connection to management switch for iDRAC
With the custom option, there is no restriction that the ports selected for VxRail networking reside on the PCIe adapter card in the first slot. If there is more than one PCIe adapter card, ports can be selected from either card.
For the outlier use case in which there is a specific business or operational requirement for this topology, VxRail can be deployed using only the ports on PCIe adapter cards. The ports must be of the same type and running at the same speed.
Figure 76. VxRail nodes with two or four PCIe ports connected to a pair of TOR switches, and one optional connection to management switch for iDRAC
This option supports spreading the VxRail networking across ports on more than one PCIe adapter card.
VxRail is most commonly deployed with two or four ports. For more network-intensive workload requirements, VxRail can be deployed with six or even eight network ports. This option supports spreading the VxRail networking between NDC/OCP ports and PCIe ports, and between ports on two different PCIe adapter cards.
In this topology, resource-intense workloads such as vSAN and vMotion can each have a dedicated Ethernet port instead of shared Ethernet ports. This prevents the possibility of saturation of shared Ethernet port resources.
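The dedicated-port idea above can be sketched as an uplink-teaming map for a six-port node. All names here are hypothetical assumptions for illustration: the network labels, `vmnic` names, and the helper that identifies which networks own an unshared port pair are not VxRail Manager constructs.

```python
# Illustrative six-port teaming plan: management and VM traffic share one
# NDC/OCP uplink pair, while vSAN and vMotion each get a dedicated PCIe
# pair, avoiding saturation of shared ports. Names are hypothetical.
from collections import Counter

TEAMING = {
    "management": ("vmnic0", "vmnic1"),  # shared NDC/OCP pair
    "vm_network": ("vmnic0", "vmnic1"),  # shares the same pair
    "vsan":       ("vmnic2", "vmnic3"),  # dedicated PCIe pair
    "vmotion":    ("vmnic4", "vmnic5"),  # dedicated PCIe pair
}

def dedicated_networks(teaming: dict[str, tuple[str, str]]) -> set[str]:
    """Networks whose uplink pair is not shared with any other network."""
    counts = Counter(teaming.values())
    return {net for net, pair in teaming.items() if counts[pair] == 1}

# dedicated_networks(TEAMING) -> {"vsan", "vmotion"}
```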
Figure 77. VxRail nodes with four NDC/OCP ports and a pair of PCIe ports connected to a pair of TOR switches, and one optional connection to management switch for iDRAC
With the six-port option, you can use more PCIe ports than NDC/OCP ports. If your nodes have two PCIe slots occupied with network adapter cards, this option offers the flexibility to spread the VxRail networking workload across three network adapter cards.
Figure 78. VxRail nodes with two NDC/OCP ports and ports from a pair of PCIe adapter cards connected to a pair of TOR switches, and one optional connection to management switch for iDRAC
For workload use cases with extreme availability, scalability, and performance requirements, up to eight ports can be selected to support VxRail networking. This option is advantageous when resource-intensive networks such as vSAN or vMotion should have dedicated Ethernet ports. It can also be useful if you want VxRail to configure a guest network with dedicated Ethernet ports as part of the initial build process.
Figure 79. VxRail nodes with four NDC/OCP ports and two ports from a pair of PCIe adapter cards connected to a pair of TOR switches, and one optional connection to management switch for iDRAC
This topology also addresses the use case of physical network separation to meet specific security policies or governance requirements, with four Ethernet switches positioned to support VxRail networking. For instance, the networks required for VxRail management and operations can be isolated on one pair of switches, while network traffic for guest user and application access can be directed to the other pair of switches.
This topology is also applicable for workload use cases with extreme availability, scalability, and performance requirements. For instance, each VxRail network can be configured for redundancy at the switch level and network adapter level if the nodes are installed with four adapter cards containing two ports each.
Figure 80. VxRail nodes with eight ports connected to four Ethernet switches, and one optional connection to management switch for iDRAC