The physical connections between the ports on your network switches and the NICs on the VxRail nodes enable communications for the virtual infrastructure within the VxRail cluster. That virtual infrastructure uses the virtual distributed switch to enable communication within the cluster and outward to IT management and the application user community.
VxRail has predefined logical networks to manage and control traffic within the cluster and outside of the cluster. Certain VxRail logical networks must be made accessible to the outside community. For instance, IT management requires connectivity to the VxRail management system, and VxRail networks must be configured for end users and application owners who need to access their applications and virtual machines running in the VxRail cluster. In addition, a network supporting I/O to the vSAN datastore is required, as is a network to support vMotion, which dynamically migrates virtual machines between VxRail nodes to balance workload. Finally, VxRail requires an internal management network for device discovery.
Figure 30. VxRail Logical Network Topology
All the Dell PowerEdge servers that serve as the foundation for VxRail nodes include a separate Ethernet port that provides connectivity to the platform for hardware-based maintenance and troubleshooting tasks. A separate network to support management access to the Dell PowerEdge servers is recommended, but not required.
IP addresses must be assigned to the VxRail external management network, vSAN network, vMotion network, and any guest networks that you want to configure on the VxRail cluster. Decide on the IP address ranges to reserve for each VxRail network:
Figure 31. VxRail Network IP Requirements
Virtual LANs (VLANs) define the VxRail logical networks within the cluster and control the paths that each logical network can pass through. A VLAN, represented as a numeric ID, is assigned to a VxRail logical network. The same VLAN ID is also configured on the individual ports on your top-of-rack switches, and on the virtual ports in the virtual distributed switch during the automated implementation process. When an application or service in the VxRail cluster sends a network packet on the virtual distributed switch, the VLAN ID for the logical network is attached to the packet. The packet can only pass through the ports on the top-of-rack switch and the virtual distributed switch where the VLAN IDs match. Isolating the VxRail logical network traffic using separate VLANs is highly recommended, but not required. A ‘flat’ network is recommended only for test, non-production purposes.
As a first step, the network team and virtualization team should meet in advance to plan VxRail’s network architecture.
VxRail groups the logical networks in the following categories: External Management, Internal Management, vSAN, vSphere vMotion, and Virtual Machine. VxRail assigns the settings that you specify for each of these logical networks during the initialization process.
Before VxRail version 4.7, both external and internal management traffic shared the external management network. Starting with VxRail version 4.7, the external and internal management networks are broken out into separate networks.
External Management traffic includes all VxRail Manager, vCenter Server, and ESXi communications, and in certain cases, vRealize Log Insight. All VxRail external management traffic is untagged by default and should be able to pass over the Native VLAN on your top-of-rack switches.
A tagged VLAN can be configured instead to support the VxRail external management network. This option is considered a best practice, and is especially applicable in environments where multiple VxRail clusters will be deployed on a single set of top-of-rack switches. To support using a tagged VLAN for the VxRail external management network, configure the VLAN on the top-of-rack switches, and then configure trunking for every switch port that is connected to a VxRail node to tag the external management traffic.
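As a minimal sketch, the switch-side configuration for a tagged external management VLAN might look like the following. Cisco IOS-style syntax is used for illustration only; VLAN ID 100 and the interface name are hypothetical placeholders, so adapt the commands to your switch operating system.

```
! Define a tagged VLAN for the VxRail external management network
! (VLAN ID 100 is an example value)
vlan 100
 name VxRail-Ext-Mgmt
!
! Trunk the VLAN on every switch port connected to a VxRail node
interface GigabitEthernet1/0/10
 description VxRail-node-1
 switchport mode trunk
 switchport trunk allowed vlan 100
 ! If you instead leave management traffic untagged, it rides the
 ! native VLAN of the trunk (VLAN 1 by default on most switches)
 switchport trunk native vlan 1
```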
The Internal Management network is used solely for device discovery by VxRail Manager during initial implementation and node expansion. This network traffic is non-routable and is isolated to the top-of-rack switches connected to the VxRail nodes. Powered-on VxRail nodes advertise themselves on the Internal Management network using multicast, and are discovered by VxRail Manager. The default VLAN of 3939 is configured on each VxRail node that is shipped from the factory. This VLAN must be configured on the switches, and included on the trunked switch ports that are connected to VxRail nodes.
If a different VLAN value is used for the Internal Management network, it must not only be configured on the switches, but also applied to each VxRail node on-site. Device discovery on this network by VxRail Manager will fail if these steps are not followed.
Device discovery requires multicast to be configured on this network. If your data center restricts multicast support on your switches, you can bypass configuring this network and instead use a manual process to select and assign the nodes that form a VxRail cluster.
Using the manual node assignment method instead of node discovery for VxRail initial implementation requires version 7.0.130 or later.
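Where multicast-based discovery is used, a minimal sketch of the switch-side settings for the internal management network follows, again in Cisco IOS-style syntax with a placeholder interface name; check your switch documentation for the equivalent commands. Because this VLAN is non-routable and typically has no multicast router, a switch that performs IGMP snooping needs either an IGMP querier or snooping disabled on the VLAN so that the discovery multicast is delivered.

```
! Define the default internal management VLAN
vlan 3939
 name VxRail-Internal-Mgmt
!
! Add it to the trunk on each node-facing port
interface GigabitEthernet1/0/10
 switchport trunk allowed vlan add 3939
!
! Choose ONE of the following multicast-handling options:
! Option A: enable an IGMP querier so snooping can learn group members
ip igmp snooping querier
! Option B: disable IGMP snooping on this VLAN so multicast floods within it
no ip igmp snooping vlan 3939
```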
It is a best practice to configure a VLAN for the vSphere vMotion and vSAN networks. For these networks, configure a VLAN for each network on the top-of-rack switches, and then include the VLANs on the trunked switch ports that are connected to VxRail nodes.
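Continuing the hypothetical example above, the vMotion and vSAN VLANs (IDs 101 and 102 are placeholders) would be defined and added to the same node-facing trunks:

```
! Define VLANs for vSphere vMotion and vSAN (example IDs)
vlan 101
 name VxRail-vMotion
vlan 102
 name VxRail-vSAN
!
interface GigabitEthernet1/0/10
 switchport trunk allowed vlan add 101,102
```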
The Virtual Machine networks are for the virtual machines running your applications and services. These networks can be created by VxRail during the initial build process, or afterward using the vSphere Client once initial configuration is complete. Dedicated VLANs are preferred to divide Virtual Machine traffic, based on business and operational objectives. VxRail creates one or more VM Networks for you, based on the name and VLAN ID pairs that you specify. Then, when you create virtual machines in the vSphere Client to run your applications and services, you can easily assign them to the VM Networks of your choice. For example, you could have one VLAN for Development, one for Production, and one for Staging.
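For instance, the Development, Production, and Staging networks mentioned above might map to three VLANs, again with placeholder IDs; the Name and VLAN ID pairs that you supply to VxRail would match what is configured on the switches:

```
! Define one VLAN per VM guest network (example IDs)
vlan 201
 name Development
vlan 202
 name Production
vlan 203
 name Staging
!
interface GigabitEthernet1/0/10
 switchport trunk allowed vlan add 201-203
```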
| Network Configuration Table |
| --- |
| Enter the external management VLAN ID for the VxRail management network (VxRail Manager, ESXi, vCenter Server/PSC, Log Insight). If you do not plan to have a dedicated management VLAN and will accept this traffic as untagged, enter "0" or "Native VLAN". |
| Enter the internal management VLAN ID for VxRail device discovery. The default is 3939. If you do not accept the default, the new VLAN must be applied to each VxRail node before cluster implementation to enable discovery. |
| Enter a VLAN ID for vSphere vMotion. |
| Enter a VLAN ID for vSAN. |
| Enter a Name and VLAN ID pair for each VM guest network that you want to create. You must create at least one VM Network. (Enter 0 in the VLAN ID field for untagged traffic.) |
Note: If you plan to have multiple independent VxRail clusters, we recommend using different VLAN IDs across multiple VxRail clusters to reduce network traffic congestion.
For a 2-node cluster, the VxRail nodes must connect to the Witness over a separate Witness traffic separation network. The Witness traffic separation network is not required for a stretched cluster, but is considered a best practice. For this network, a VLAN is required to enable Witness traffic separation, and the Witness network on this VLAN must be able to pass upstream to the Witness site.
Figure 32. Logical network with Witness and Witness Traffic Separation
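As a final sketch, and assuming your top-of-rack switch provides the Layer 3 hop toward the Witness site, the Witness traffic separation VLAN (ID 300 and the addressing are hypothetical) might be trunked to the nodes and given a routed interface:

```
! Define the Witness traffic separation VLAN (example ID)
vlan 300
 name VxRail-WTS
!
interface GigabitEthernet1/0/10
 switchport trunk allowed vlan add 300
!
! Routed interface so Witness traffic can pass upstream to the Witness site
interface Vlan300
 ip address 192.168.30.1 255.255.255.0
```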
| Network Configuration Table |
| --- |
| Enter the Witness traffic separation VLAN ID. |