The physical connections between the ports on your network switches and the NICs on the VxRail nodes enable communications for the virtual infrastructure within the VxRail cluster. The virtual infrastructure uses the virtual distributed switch to carry communication within the cluster, and outward to IT management and the application user community.
Figure 57. VxRail Logical Network Topology
VxRail has predefined logical networks to manage and control traffic within and outside of the cluster. Certain VxRail logical networks must be made accessible to the outside community. For instance, IT management requires connectivity to the VxRail management system. VxRail networks must be configured for end users and application owners who need to access their applications and virtual machines running in the VxRail cluster. A network to support I/O to the vSAN datastore is required unless you plan to use Fibre Channel storage as the primary storage resource with a dynamic cluster. In addition, a network must be configured to support vMotion, which dynamically migrates virtual machines between VxRail nodes to balance workload. Finally, VxRail uses an internal management network for device discovery; this network can be skipped if you plan to use manual device discovery.
All the Dell PowerEdge servers that serve as the foundation for VxRail nodes include a separate Ethernet port that enables out-of-band connectivity to the platform to perform hardware-based maintenance and troubleshooting tasks. A separate network to support management access to the Dell PowerEdge servers is recommended, but not required.
IP addresses must be assigned to the VxRail external management network, the vMotion network, and any guest networks. If your cluster will use vSAN as primary storage, IP addresses are required for the vSAN network. You can also choose to segment the external management network to separate subnets for the physical and logical components. Decisions must be made on the IP address ranges reserved for each VxRail network.
Figure 58. Overview of VxRail Core Network IP Requirements
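As a planning aid, the reserved IP ranges can be sanity-checked before implementation. The sketch below uses Python's `ipaddress` module; all subnet values and the four-node count are hypothetical examples, not VxRail defaults:

```python
# Hypothetical planning sketch: the subnets below are example values only,
# to be replaced with the ranges your network team reserves.
import ipaddress

networks = {
    "external_management": ipaddress.ip_network("172.16.10.0/24"),
    "vsan":                ipaddress.ip_network("172.16.20.0/24"),
    "vmotion":             ipaddress.ip_network("172.16.30.0/24"),
    "vm_guest":            ipaddress.ip_network("172.16.40.0/24"),
}

# Overlapping ranges are a planning error that otherwise surfaces only
# during cluster initialization, so check them up front.
subnets = list(networks.values())
for i, a in enumerate(subnets):
    for b in subnets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

# Reserve one address per node from the external management range
# (example: a four-node cluster).
node_ips = [str(ip) for ip in networks["external_management"].hosts()][:4]
print(node_ips)  # → ['172.16.10.1', '172.16.10.2', '172.16.10.3', '172.16.10.4']
```

A similar check can be run for each range in the worksheet before the values are handed to the implementation team.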
Virtual LANs (VLANs) define the VxRail logical networks within the cluster and control the paths that each logical network's traffic can take. A VLAN, represented as a numeric ID, is assigned to a VxRail logical network. The same VLAN ID is also configured on the individual ports on your top-of-rack switches, and on the virtual ports in the virtual distributed switch, during the automated implementation process. When an application or service in the VxRail cluster sends a network packet on the virtual distributed switch, the VLAN ID for the logical network is attached to the packet. The packet can only pass through the ports on the top-of-rack switch and the virtual distributed switch where there is a match in VLAN IDs. Isolating the VxRail logical network traffic using separate VLANs is highly recommended, but not required. A “flat” network is recommended only for test, non-production purposes.
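The matching rule described above can be modeled in a few lines. This is an illustrative sketch of standard 802.1Q behavior, not VxRail code, and the VLAN IDs are hypothetical:

```python
def frame_forwarded(frame_vlan: int, port_allowed_vlans: set) -> bool:
    """A tagged frame passes a switch port only if the port's
    allowed-VLAN list (its trunk configuration) contains the
    frame's VLAN ID."""
    return frame_vlan in port_allowed_vlans

# Example: a trunk port configured to carry the management VLAN (110)
# and the vMotion VLAN (120). A frame tagged 130 (say, vSAN) is dropped
# at this port because the VLAN IDs do not match.
trunk_port_vlans = {110, 120}
assert frame_forwarded(110, trunk_port_vlans)      # management passes
assert not frame_forwarded(130, trunk_port_vlans)  # vSAN is dropped
```

This is why every VLAN in the plan must appear in the trunk configuration of every switch port connected to a VxRail node.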
As a first step, the network team and virtualization team should meet in advance to plan the VxRail network architecture.
VxRail groups the logical networks in the following categories: External Management, Internal Management, vCenter Management Network, vSAN, vSphere vMotion, and Virtual Machine. VxRail assigns the settings that you specify for each of these logical networks during the initialization process.
Before VxRail version 4.7, both external and internal management traffic shared the external management network. Starting with VxRail version 4.7, the external and internal management networks are broken out into separate networks.
The External Management network supports communications to the ESXi hosts and shares common network settings with the vCenter Management Network. All VxRail external management traffic is untagged by default, and should be able to pass over the Native VLAN on your top-of-rack switches.
A tagged VLAN can be configured instead to support the VxRail external management network. This option is considered a best practice, and is especially applicable in environments where multiple VxRail clusters will be deployed on a single set of top-of-rack switches. To support using a tagged VLAN for the VxRail external management network, configure the VLAN on the top-of-rack switches, and then configure trunking for every switch port that is connected to a VxRail node to tag the external management traffic.
The vCenter Management Network hosts the VxRail Manager and the VxRail vCenter Server. By default, it shares the same network settings as the External Management network; in this configuration, the physical ESXi hosts and the logical VxRail management components share the same subnet and VLAN. Starting with version 7.0.350, this logical network can be assigned a unique subnet and a VLAN separate from the external management network.
The Internal Management network is used solely for device discovery by VxRail Manager during initial implementation and node expansion. This network traffic is non-routable and is isolated to the top-of-rack switches connected to the VxRail nodes. Powered-on VxRail nodes advertise themselves on the Internal Management network using multicast, and are discovered by VxRail Manager. The default VLAN of 3939 is configured on each VxRail node that is shipped from the factory. This VLAN must be configured on the switches, and included on the trunked switch ports that are connected to VxRail nodes.
If a different VLAN value is used for the Internal Management network, it must not only be configured on the switches, but also be applied to each VxRail node on-site. Device discovery on this network by VxRail Manager will fail if these steps are not followed.
Device discovery requires multicast to be configured on this network. If there are restrictions within your data center regarding the support of multicast on your switches, you can bypass configuring this network, and instead use a manual process to select and assign the nodes that form a VxRail cluster.
Note: Using the manual node assignment method instead of node discovery for VxRail initial implementation requires version 7.0.130 or later.
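As a generic illustration of the multicast dependency above (not VxRail's actual discovery protocol or addresses), multicast traffic occupies well-known address ranges: 224.0.0.0/4 for IPv4 and ff00::/8 for IPv6. Python's `ipaddress` module can classify an address, which is handy when reviewing switch ACLs that might inadvertently block discovery traffic:

```python
# Generic illustration only: the specific addresses below are examples,
# not VxRail's discovery endpoints.
import ipaddress

assert ipaddress.ip_address("224.0.0.251").is_multicast      # IPv4 multicast (an mDNS group)
assert ipaddress.ip_address("ff02::fb").is_multicast         # IPv6 link-local multicast
assert not ipaddress.ip_address("172.16.10.5").is_multicast  # ordinary unicast host
```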
If you plan to leverage vSAN for VxRail cluster storage resources, it is a best practice to configure a VLAN for the vSAN network. It is also a best practice to configure a VLAN for the vSphere vMotion network. Configure a VLAN for each network on the top-of-rack switches, and then include the VLANs on the trunked switch ports that are connected to VxRail nodes.
The Virtual Machine networks carry the traffic of the virtual machines running your applications and services. These networks can be created by VxRail during the initial build process, or created afterward using the vSphere Client after initial configuration is complete. Dedicated VLANs are preferred to divide Virtual Machine traffic, based on business and operational objectives. VxRail creates one or more VM Networks for you, based on the name and VLAN ID pairs that you specify. Then, when you create virtual machines in the vSphere Client to run your applications and services, you can easily assign each virtual machine to the VM Network of your choice. For example, you could have one VLAN for Development, one for Production, and one for Staging.
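The name and VLAN ID pairs you specify can be checked against basic 802.1Q constraints before the build. A minimal sketch, with hypothetical names and IDs:

```python
# Hypothetical VM Network name/VLAN ID pairs; substitute your own plan.
vm_networks = [
    ("Development", 210),
    ("Production",  220),
    ("Staging",     230),
]

for name, vlan in vm_networks:
    # 802.1Q VLAN IDs run 1-4094; 0 designates untagged traffic here.
    assert 0 <= vlan <= 4094, f"{name}: VLAN {vlan} out of range"
    assert name.strip(), "network name must not be empty"

# No two VM Networks in the plan should reuse the same VLAN ID.
vlan_ids = [vlan for _, vlan in vm_networks]
assert len(vlan_ids) == len(set(vlan_ids)), "duplicate VLAN ID"
```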
| Network Configuration Table | Action |
|---|---|
| Row 1 | Enter the external management VLAN ID for the VxRail management network (VxRail Manager, ESXi, vCenter Server, Log Insight). If you do not plan to have a dedicated management VLAN and will accept this traffic as untagged, enter “0” or “Native VLAN”. |
| Row 2 | Enter the internal management VLAN ID for VxRail device discovery. The default is 3939. If you do not accept the default, the new VLAN must be applied to each VxRail node before cluster implementation to enable discovery. |
| Row 3 | Enter a VLAN ID for vSphere vMotion. |
| Row 4 | Enter a VLAN ID for vSAN, if applicable. |
| Rows 5 and 6 | Enter a name and VLAN ID pair for each VM guest network that you want to create. VM Networks can be configured during the cluster build process, or after the cluster is built. (Enter 0 in the VLAN ID field for untagged traffic.) |
| Row 7 | Enter the vCenter Server Network VLAN ID (if different from the external management VLAN ID). |
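Pulling the rows together, a completed worksheet might look like the following sketch. Every value is a placeholder example to be replaced with your own assignments, except Row 2, where 3939 is the factory default:

```python
# Hypothetical completed worksheet; replace each value with your own plan.
network_config = {
    "external_management_vlan": 110,   # Row 1 (0 = untagged / Native VLAN)
    "internal_management_vlan": 3939,  # Row 2 (factory default)
    "vmotion_vlan": 120,               # Row 3
    "vsan_vlan": 130,                  # Row 4
    "vm_networks": {                   # Rows 5 and 6 (0 = untagged)
        "Development": 210,
        "Production": 220,
    },
    "vcenter_network_vlan": 110,       # Row 7 (same as Row 1 unless split)
}

# Every VLAN ID in the plan must be a valid 802.1Q value (0 = untagged).
all_vlans = [v for k, v in network_config.items() if k.endswith("_vlan")]
all_vlans += list(network_config["vm_networks"].values())
assert all(0 <= v <= 4094 for v in all_vlans), "VLAN ID out of range"
```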