The physical connections between the ports on your network switches and the NICs on the VxRail nodes enable virtual infrastructure communications within the VxRail cluster. The virtual infrastructure in the cluster uses the virtual-distributed switch to enable communication within the cluster and outward to IT management and application users.
VxRail has predefined logical networks to manage and control traffic within the cluster and outside of the cluster. Certain VxRail logical networks must be made accessible to the outside community. For instance, IT management requires connectivity to the VxRail management system. VxRail networks must be configured for end-users and application owners who must access their applications and virtual machines running in the VxRail cluster. A network to support I/O to the vSAN datastore is required unless you plan to use Fibre Channel storage as the primary storage resource with a dynamic cluster.
In addition, a network to support vMotion, which is used to dynamically migrate virtual machines between VxRail nodes to balance workload, must also be configured. Finally, VxRail requires an internal management network for device discovery. This network can be skipped if you plan to use manual device discovery.
All the Dell PowerEdge servers that serve as the foundation for VxRail nodes include a separate Ethernet port that enables out-of-band connectivity to the platform. This port enables VxRail to perform hardware-based maintenance and troubleshooting tasks. A separate network to support management access to the Dell PowerEdge servers is recommended, but not required.
IP addresses must be assigned to the VxRail external management network, the vMotion network, and any guest networks. If your cluster uses vSAN as primary storage, IP addresses are required for the vSAN network. You can also choose to segment the external management network to separate subnets for the physical and logical components. Decisions must be made on the IP address ranges reserved for each VxRail network.
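The reservation decisions above can be sketched with Python's standard `ipaddress` module. The supernet, prefix lengths, and reserved-range size below are placeholder assumptions for illustration, not VxRail defaults; substitute the ranges your network team allocates.

```python
import ipaddress

# Hypothetical address plan: carve one block per VxRail network out of a
# private supernet. All values here are examples, not VxRail defaults.
supernet = ipaddress.ip_network("192.168.100.0/22")
subnets = supernet.subnets(new_prefix=24)

plan = dict(zip(
    ["external_management", "vmotion", "vsan", "guest"],
    subnets,
))

# Reserve a contiguous range of host addresses in each network for the
# VxRail nodes (first 10 usable addresses here, as an example).
reserved = {name: list(net.hosts())[:10] for name, net in plan.items()}

for name, hosts in reserved.items():
    print(f"{name}: {hosts[0]} - {hosts[-1]} ({plan[name]})")
```

Recording the outcome of this planning step (one subnet and one reserved range per logical network) gives you the values the VxRail initialization process asks for later.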
Virtual LANs (VLANs) define the VxRail logical networks within the cluster and control the paths that each logical network can pass through. A VLAN, represented as a numeric ID, is assigned to each VxRail logical network. The same VLAN ID is also configured on the individual ports on your top-of-rack switches, and on the virtual ports in the virtual-distributed switch, during the automated implementation process.
When an application or service in the VxRail cluster sends a network packet on the virtual-distributed switch, the VLAN ID for the logical network is attached. The packet can only pass through the ports on the top-of-rack switch and the virtual-distributed switch where the VLAN IDs match. Isolating the VxRail logical network traffic using separate VLANs is highly recommended, but not required. A “flat” network is recommended only for test, nonproduction purposes.
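The matching rule described above can be reduced to a one-line check: a tagged packet is forwarded only through ports whose allowed-VLAN set contains the packet's VLAN ID. This is a minimal sketch of that rule; the port names and VLAN IDs are illustrative, not VxRail defaults.

```python
# A tagged packet passes a port only if the port allows that VLAN ID.
def can_pass(packet_vlan: int, allowed_vlans: set[int]) -> bool:
    return packet_vlan in allowed_vlans

# Allowed VLANs on a virtual port of the virtual-distributed switch and
# on the matching top-of-rack switch port (must agree end to end).
vds_port = {100, 101, 102}   # e.g. management, vMotion, vSAN (examples)
tor_port = {100, 101}        # vSAN VLAN 102 missing upstream

packet_vlan = 102            # a vSAN packet
print(can_pass(packet_vlan, vds_port))  # True: leaves the virtual switch
print(can_pass(packet_vlan, tor_port))  # False: dropped at the physical port
```

The second result is the failure mode the VLAN planning in this section is meant to prevent: a VLAN configured on the virtual-distributed switch but missing from the top-of-rack switch port silently black-holes that logical network's traffic.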
As a first step, the network team and virtualization team should meet in advance to plan the VxRail network architecture.
VxRail groups the logical networks in the categories that are listed below. VxRail assigns the settings that you specify for each of these logical networks during the initialization process.
Before VxRail version 4.7, both external and internal management traffic shared the external management network. Starting with VxRail version 4.7, the external and internal management networks are broken out into separate networks.
External Management network—Supports communications to the ESXi hosts, and has common network settings with the vCenter Management Network. All VxRail external management traffic is untagged by default, and passes over the Native VLAN on your top-of-rack switches.
A tagged VLAN can be configured instead to support the VxRail external management network. This option is considered a best practice. It is especially applicable in environments where multiple VxRail clusters are deployed on a single set of top-of-rack switches. To support using a tagged VLAN for the VxRail external management network, configure the VLAN on the top-of-rack switches. Then configure trunking for every switch port that is connected to a VxRail node to tag the external management traffic.
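A simple pre-flight check for the trunking step above is to confirm that every switch port cabled to a VxRail node is in trunk mode and tags the external management VLAN. The sketch below assumes a hypothetical inventory of port configurations; the port names, the field names, and the VLAN ID 100 are all examples, not VxRail defaults.

```python
# Hypothetical pre-flight check: every node-facing switch port must be
# trunked and must tag the external management VLAN.
# Port names and VLAN ID 100 are examples, not VxRail defaults.
MGMT_VLAN = 100

node_facing_ports = {
    "ethernet1/1/1": {"mode": "trunk", "tagged_vlans": {100, 101, 102}},
    "ethernet1/1/2": {"mode": "trunk", "tagged_vlans": {100, 101, 102}},
    "ethernet1/1/3": {"mode": "trunk", "tagged_vlans": {101, 102}},  # misconfigured
}

def missing_mgmt_vlan(ports: dict) -> list[str]:
    """Return ports that are not trunked or do not tag the mgmt VLAN."""
    return [
        name for name, cfg in ports.items()
        if cfg["mode"] != "trunk" or MGMT_VLAN not in cfg["tagged_vlans"]
    ]

print(missing_mgmt_vlan(node_facing_ports))  # ['ethernet1/1/3']
```

Running a check like this before cluster initialization catches the common case where one node-facing port was left out of the trunk configuration.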
vCenter Management network—Hosts the VxRail Manager and the VxRail vCenter Server. By default, it shares the same network settings as the External Management network; in that case, the physical ESXi hosts and the logical VxRail management components share the same subnet and the same VLAN. Starting with version 7.0.350, this logical network can be assigned its own subnet and a VLAN separate from the external management network.
Internal Management network—Used solely for device discovery by VxRail Manager during initial implementation and node expansion. This network traffic is nonroutable and is isolated to the top-of-rack switches connected to the VxRail nodes. Powered-on VxRail nodes advertise themselves on the Internal Management network using multicast and are discoverable by VxRail Manager. The default VLAN of 3939 is configured on each VxRail node that is shipped from the factory. This VLAN must be configured on the switches and on the trunked switch ports that are connected to VxRail nodes.
If a different VLAN value is used for the Internal Management network, it must be configured on the switches and applied to each VxRail node on-site. If these steps are not followed, device discovery on this network by VxRail Manager fails.
Device discovery requires multicast to be configured on this network. You may have restrictions within your data center regarding the support of multicast on your switches. You can bypass configuring this network, and instead use a manual process to select and assign the nodes that form a VxRail cluster.
Note: Using the manual node assignment method instead of node discovery for VxRail initial implementation requires version 7.0.130 or later.
vSAN and vSphere vMotion networks—If you plan to leverage vSAN for VxRail cluster storage resources, a best practice is to configure a VLAN for the vSAN network. Another best practice is to configure a VLAN for the vSphere vMotion network. Configure a VLAN for each network on the top-of-rack switches, and then include the VLANs on the trunked switch ports that are connected to VxRail nodes.
Virtual Machine networks—These networks are for the virtual machines running your applications and services. VxRail can create them during the initial build process, or they can be created afterward using the vSphere Client. Dedicated VLANs are preferred to segment Virtual Machine traffic, based on your business and operational objectives. VxRail creates one or more VM Networks for you, based on the name and VLAN ID pairs that you specify.
When you create VMs in the vSphere Web Client to run your applications and services, you can easily assign the virtual machine to your choice of VM networks. For example, you could have one VLAN for Development, one for Production, and one for Staging.
Network configuration table row 1
Enter the external management VLAN ID for VxRail management network (VxRail Manager, ESXi, vCenter Server-PSC, Log Insight).
If you do not plan to have a dedicated management VLAN and can accept this traffic as untagged, enter “0” or “Native VLAN.”
Network configuration table row 2
Enter the internal management VLAN ID for VxRail device discovery. The default is 3939.
If you do not accept the default, the new VLAN must be applied to each VxRail node before cluster implementation to enable discovery.
Network configuration table row 3
Enter a VLAN ID for vSphere vMotion.
Network configuration table row 4
Enter a VLAN ID for vSAN, if applicable.
(Enter 0 in the VLAN ID field for untagged traffic.)
Network configuration table rows 5 and 6
Enter a Name and VLAN ID pair for each VM guest network that you want to create.
VM Networks can be configured during the cluster build process, or after the cluster is built.
(Enter 0 in the VLAN ID field for untagged traffic.)
Network configuration table row 7
Enter the vCenter Server Network VLAN ID (if different from the external management VLAN ID).
For a 2-Node cluster, the VxRail nodes must connect to the Witness over a separate Witness traffic separation network. The Witness traffic separation network is not required for a stretched cluster but is considered a best practice. A VLAN is required for this network, and Witness traffic on this VLAN must be able to pass upstream to the Witness site.
Network configuration table row 77
Enter the Witness traffic separation VLAN ID.