The VxRail system is a self-contained environment with compute, storage, server virtualization, and management services that make up a hyperconverged infrastructure. The distributed cluster architecture allows independent nodes to work together as a single system. Each node contributes to and consumes system resources. This close coupling between nodes is accomplished through IP networking connectivity. IP networking also provides access to virtual machines and the services they provide.
While VxRail is a self-contained infrastructure, it is not a standalone environment. It is intended to connect and integrate with the customer's existing data center network. A typical implementation uses one or more customer-provided 10 GbE Top of Rack (ToR) switches to connect each node in the VxRail cluster. For smaller environments, an option to use 1 GbE switches is available, but these lower-bandwidth networks limit performance and scale. While the network switches are typically customer provided, Dell EMC offers an Ethernet switch, the S4148, which can be included with the system.
The figure below shows typical network connectivity using two switches for redundancy. Single-switch implementations are also supported.
Figure 19. Typical VxRail physical network connectivity for 10 GbE configurations
The number of Ethernet switch ports required depends on the VxRail model. Most current-generation models require two-port or four-port 10 GbE connectivity for VxRail system traffic, with two-port 25 GbE SFP28 and four-port 1 GbE options available for some models. Additional network connectivity can be provided by adding NIC cards. VxRail management can configure an additional PCIe NIC card for network redundancy of VxRail system traffic. Customers must configure PCIe NIC card ports used for non-VxRail system traffic, primarily VM traffic, separately through vCenter.
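The arithmetic behind port planning is simple enough to sketch. The following Python fragment estimates node-facing switch-port counts for a cluster; the connectivity profiles and node counts are illustrative assumptions, not a definitive list of per-model requirements, so confirm actual values against the documentation for the specific VxRail model.

    # Sketch: estimating node-facing switch ports for a VxRail cluster.
    # Profiles and counts below are illustrative assumptions, not an
    # authoritative list of per-model requirements.
    PORTS_PER_NODE = {
        "2x10GbE": 2,   # two-port 10 GbE system traffic
        "4x10GbE": 4,   # four-port 10 GbE system traffic
        "2x25GbE": 2,   # two-port 25 GbE SFP28 option
        "4x1GbE": 4,    # four-port 1 GbE option (smaller environments)
    }

    def switch_ports_needed(nodes, profile, switches=2):
        """Total node-facing ports and the per-switch share, assuming
        each node's ports are split evenly across the ToR switches."""
        total = nodes * PORTS_PER_NODE[profile]
        return {"total_node_ports": total,
                "ports_per_switch": total // switches}

    # Example: a 4-node cluster with four-port 10 GbE connectivity needs
    # 16 node-facing ports, 8 on each of two redundant ToR switches.
    print(switch_ports_needed(nodes=4, profile="4x10GbE"))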
Network traffic is separated using switch-based VLAN technology and vSphere Network I/O Control (NIOC). Four types of system network traffic exist in a VxRail cluster; a sample VLAN plan is sketched after the list:
Management. Management traffic is used for connecting to the VxRail Manager plug-in in vCenter, for other management interfaces, and for communications between the management components and the ESXi nodes in the cluster. Either the default VLAN or a specific management VLAN is used for management traffic.
vSAN. Data access for read and write activity as well as for optimization and data rebuild is performed over the vSAN network. Low network latency is critical for this traffic, and a specific VLAN isolates this traffic.
vMotion. VMware vMotion™ allows virtual machine mobility between nodes. A separate VLAN is used to isolate this traffic.
Virtual Machine. Users access virtual machines and the services they provide over the VM network(s). At least one VM VLAN is configured when the system is initially configured, and others may be defined as required.
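To make the traffic separation concrete, the following Python sketch models a hypothetical VLAN plan and applies two basic sanity checks. The VLAN IDs are placeholder assumptions; actual values come from the customer's network design, and management traffic may instead use the default VLAN as noted above.

    # Sketch: a hypothetical VLAN plan for the four system traffic types.
    # VLAN IDs are placeholders; real values come from the network design.
    VLAN_PLAN = {
        "management": 10,   # or the default VLAN, per the design
        "vsan": 20,         # latency-sensitive storage traffic, isolated
        "vmotion": 30,      # VM mobility between nodes
        "vm_network": 40,   # at least one VM VLAN; more may be added later
    }

    def validate_vlan_plan(plan):
        """Check that every VLAN ID is in the 802.1Q range and that the
        traffic types kept isolated by design use distinct VLANs."""
        for name, vlan in plan.items():
            if not 1 <= vlan <= 4094:
                raise ValueError(f"{name}: VLAN {vlan} outside 802.1Q range")
        if len(set(plan.values())) != len(plan):
            raise ValueError("system traffic types must use distinct VLANs")

    validate_vlan_plan(VLAN_PLAN)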
Pre-installation planning includes verifying that enough physical switch ports are available and that the ports are configured for the appropriate VLANs. VLANs, along with IP addresses and other network configuration details, are supplied when the system is configured during installation. Detailed planning and configuration information is in the VxRail Network Guide.
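One planning check that is easy to automate is confirming that the subnets assigned to the different traffic types do not overlap. The sketch below uses Python's standard ipaddress module; the subnet values are hypothetical examples, not recommendations from the VxRail Network Guide.

    # Sketch: verify that planned subnets for each traffic type do not
    # overlap. Subnet values are hypothetical examples.
    import ipaddress
    from itertools import combinations

    PLANNED_SUBNETS = {
        "management": ipaddress.ip_network("192.168.10.0/24"),
        "vsan": ipaddress.ip_network("192.168.20.0/24"),
        "vmotion": ipaddress.ip_network("192.168.30.0/24"),
    }

    for (a, net_a), (b, net_b) in combinations(PLANNED_SUBNETS.items(), 2):
        if net_a.overlaps(net_b):
            raise ValueError(f"planned subnets for {a} and {b} overlap")
    print("no overlapping subnets in the plan")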
When the system is initialized during installation, the configuration wizard automatically configures the required uplinks following VxRail standards and best practices. The wizard prompts for the NIC configuration.
During installation, port redundancy is available with active/standby and active/active NIC teaming policies. Customers can benefit from increased network bandwidth with an active/active connection. Additionally, network-card-level redundancy can be configured for VxRail system traffic using ports from both the NDC and a PCIe NIC card; if one network card fails, traffic continues to flow through the other. If nodes have additional physical NIC ports for non-VxRail system traffic, those ports can be configured after installation using standard vSphere procedures.
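The bandwidth trade-off between the two teaming policies can be illustrated with a small model. The Python sketch below is a simplified view under assumed link speeds: active/active spreads traffic across all live links, while active/standby keeps one link idle until the active link or card fails.

    # Sketch: usable bandwidth under the two NIC teaming policies.
    # Link speeds are assumptions for illustration.
    def usable_bandwidth_gbps(links_gbps, policy, failed=frozenset()):
        """Aggregate usable bandwidth given per-link speeds in Gb/s, a
        teaming policy, and the set of failed link indexes."""
        alive = [s for i, s in enumerate(links_gbps) if i not in failed]
        if not alive:
            return 0
        if policy == "active/active":
            return sum(alive)   # traffic spreads across all live links
        if policy == "active/standby":
            return alive[0]     # a single live link carries the traffic
        raise ValueError(f"unknown policy: {policy}")

    # Two 10 GbE ports, one on the NDC and one on a PCIe NIC card.
    links = [10, 10]
    print(usable_bandwidth_gbps(links, "active/active"))              # 20
    print(usable_bandwidth_gbps(links, "active/standby"))             # 10
    print(usable_bandwidth_gbps(links, "active/active", failed={0}))  # 10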