The VxRail system is a self-contained environment with the compute, storage, server virtualization, and management services that make up a hyperconverged infrastructure (HCI). The distributed cluster architecture allows independent nodes to work together as a single system. Each node contributes to and consumes system resources. This close coupling between nodes is accomplished through IP networking connectivity. IP networking also provides access to virtual machines and the services they provide.
While VxRail is a self-contained infrastructure, it is not a stand-alone environment. It is intended to connect and integrate with the customer’s existing data center network. A typical implementation uses one or more customer-provided 10 GbE top-of-rack (ToR) switches to connect each node in the VxRail cluster. For smaller environments, an option to use 1 GbE switches is available, but these lower-bandwidth networks limit performance and scale. While the customer typically provides the network switches, Dell Technologies offers Ethernet switches that can be included with the system.
The following figure shows typical network connectivity using two switches for redundancy. Single-switch implementations are also supported.
The number of Ethernet switch ports required depends on the VxRail model. Most current-generation models require 2-port or 4-port 10 GbE connectivity for VxRail system traffic. Additional options of 2-port 25 GbE SFP28 and 4-port 1 GbE are available for some models. Additional network connectivity can be accomplished by adding PCIe NIC cards. VxRail management can configure an additional PCIe NIC card for network redundancy of the VxRail system traffic. Customers configure PCIe NIC ports intended for traffic other than VxRail system traffic, primarily VM traffic, separately through vCenter.
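As a planning aid, the per-switch port count can be estimated from the node count and the NIC configuration. The following is a minimal sketch with a hypothetical helper name, assuming node uplinks are split evenly across redundant ToR switches; it is not a Dell-provided tool.

```python
def switch_ports_needed(nodes: int, ports_per_node: int, switches: int = 2) -> int:
    """Estimate the switch ports needed per ToR switch for VxRail system traffic.

    Assumes each node's uplinks are spread evenly across the switches for
    redundancy (e.g., a 4-port node connects 2 ports to each of 2 switches).
    Hypothetical helper for planning illustration only.
    """
    if ports_per_node % switches:
        raise ValueError("ports per node must divide evenly across the switches")
    return nodes * ports_per_node // switches

# Example: a 4-node cluster with 4-port 10 GbE connectivity and two ToR
# switches needs 8 ports on each switch.
print(switch_ports_needed(4, 4))
```

A single-switch implementation is the `switches=1` case, where every node port lands on the same switch.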
Network traffic is separated using switch-based VLAN technology and vSphere Network I/O Control (NIOC). A VxRail cluster has the following types of system network traffic:
- Management
- vSAN
- vSphere vMotion
- Virtual machine (VM)
Preinstallation planning includes verifying that enough physical switch ports are available and that the ports are configured for the appropriate VLANs. The VLANs, along with IP addresses and other network configuration information, are used when the system is configured during installation. For detailed planning and configuration information, see the
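The preinstallation values can be captured and sanity-checked before the installation begins. The sketch below models a hypothetical worksheet mapping each traffic type to a VLAN ID and subnet; the VLAN IDs and subnets shown are placeholders, not VxRail defaults.

```python
import ipaddress

# Hypothetical pre-installation worksheet: traffic type -> (VLAN ID, subnet).
# All VLAN IDs and subnets below are placeholder values for illustration.
network_plan = {
    "management": (101, "192.168.101.0/24"),
    "vsan":       (102, "192.168.102.0/24"),
    "vmotion":    (103, "192.168.103.0/24"),
    "vm":         (104, "192.168.104.0/24"),
}

def validate_plan(plan: dict) -> bool:
    """Basic sanity checks on the worksheet before installation."""
    vlans = [vlan for vlan, _ in plan.values()]
    if len(set(vlans)) != len(vlans):
        raise ValueError("each traffic type needs a unique VLAN ID")
    for name, (vlan, subnet) in plan.items():
        if not 1 <= vlan <= 4094:
            raise ValueError(f"{name}: VLAN {vlan} is outside the valid range 1-4094")
        ipaddress.ip_network(subnet)  # raises ValueError if the subnet is malformed
    return True

validate_plan(network_plan)
```

Checks like these catch duplicate VLAN IDs or mistyped subnets before they reach the configuration wizard.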
When the system is initialized during installation, the configuration wizard automatically configures the required uplinks following VxRail standards and best practices. The wizard asks for the NIC configuration:
Table 2. NIC configuration options

- 2 x 10 GbE: Management, vSAN, vMotion, and VM traffic are associated with these ports with the appropriate network teaming policy and NIOC settings.
- 4 x 10 GbE
- 2 x 25 GbE
- 4 x 25 GbE
- 4 x 1 GbE: This option is valid only for systems with a hybrid storage configuration and a single processor. The four 10 GbE ports auto-negotiate down to 1 GbE. Management, vSAN, vMotion, and VM traffic are associated with the four ports with the appropriate network teaming policy and NIOC settings.
During installation, port redundancy is available through active/standby and active/active NIC teaming policies. Active/active teaming with a link-aggregation network connection provides increased network bandwidth. Redundancy at the network card level can also be configured for VxRail system traffic by using ports from both the network daughter card and a PCIe NIC card: if one card fails, traffic continues to flow through the other. If nodes have additional physical NIC ports for traffic other than VxRail system traffic, those ports can be configured using standard vSphere procedures after installation.
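The bandwidth difference between the two teaming policies can be illustrated with a small sketch. The function name and simplified model are assumptions for illustration; real throughput also depends on the switch configuration and traffic hashing.

```python
def usable_bandwidth_gbps(port_speeds: list, policy: str) -> int:
    """Illustrative usable uplink bandwidth under a NIC teaming policy.

    Simplified model: active/standby holds one link in reserve, so only
    one link forwards traffic at a time; active/active (with a
    link-aggregation connection on the switch side) can forward traffic
    on all links. Hypothetical helper, not a VxRail or vSphere API.
    """
    if policy == "active/standby":
        return max(port_speeds)   # the standby link carries no traffic
    if policy == "active/active":
        return sum(port_speeds)   # all links forward traffic
    raise ValueError(f"unknown teaming policy: {policy}")

# Two 10 GbE ports: 10 Gbps usable with active/standby, 20 Gbps with active/active.
print(usable_bandwidth_gbps([10, 10], "active/standby"))
print(usable_bandwidth_gbps([10, 10], "active/active"))
```

The trade-off mirrors the text above: active/standby prioritizes simple failover, while active/active trades configuration effort on the switch for higher aggregate bandwidth.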