Multiple VLANs isolate the different traffic types, and each VLAN spans both AZ1 and AZ2. All networking is performed at layer 2; no layer 3 routing is required for AZ1 hosts to reach AZ2 hosts.
The following figure shows the network topology:
Figure 6. Network topology
The management domain network configuration is recorded in the vcf-vxrail-deployment-parameter.xlsx spreadsheet, which the VMware Cloud Builder utility uses to deploy the management domain on AZ1.
The following table shows the management domain network configuration details:
Table 5. Management domain networks
VLAN number | Port group name | CIDR notation | Gateway | MTU
2312 | vCenter Server Network | 10.226.131.64/26 | 10.226.131.65 | 1500
1602 | vSphere vMotion | n/a | n/a | n/a
1601 | vSAN | n/a | n/a | n/a
1603 | VXLAN (VTEP) - DHCP Network | n/a | n/a | 9000
2313 | nsxv-uplink01 | 10.226.131.128/28 | 10.226.131.129 | 9000
2314 | nsxv-uplink02 | 10.226.131.144/28 | 10.226.131.145 | 9000
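The CIDR and gateway values above follow the common convention of assigning the first usable host address in each subnet as the gateway. A short sketch using Python's standard ipaddress module can validate this relationship and report the usable host count per subnet (the subnet and gateway values are taken from Table 5):

```python
import ipaddress

# Management domain subnets from Table 5; gateway is expected to be
# the first usable host address in each subnet.
networks = {
    "vCenter Server Network": ("10.226.131.64/26", "10.226.131.65"),
    "nsxv-uplink01": ("10.226.131.128/28", "10.226.131.129"),
    "nsxv-uplink02": ("10.226.131.144/28", "10.226.131.145"),
}

for name, (cidr, gateway) in networks.items():
    net = ipaddress.ip_network(cidr)
    first_host = next(net.hosts())           # first usable address
    assert ipaddress.ip_address(gateway) == first_host
    usable = net.num_addresses - 2           # exclude network and broadcast
    print(f"{name}: {cidr} -> {usable} usable hosts, gateway {gateway}")
```

Running this confirms, for example, that the /26 vCenter Server Network provides 62 usable addresses with 10.226.131.65 as the gateway, and each /28 uplink network provides 14.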
Switch ports for all hosts in the management cluster are configured as shown in the following figure:
Figure 7. Switch port interface setting
The default gateways for the management network and NSX-v uplink networks are configured on the network core, with HSRP providing gateway redundancy, as shown in the following figure:
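As a concrete illustration, an HSRP gateway for the vCenter Server Network (VLAN 2312, values from Table 5) might look like the following on one core switch. This is a minimal sketch assuming Cisco IOS-style syntax; the physical interface address (10.226.131.66), HSRP group number, and priority are hypothetical and would differ per site, with the peer switch using a different interface address and a lower priority:

```
! Hypothetical HSRP configuration for VLAN 2312 on one core switch.
! The virtual IP matches the gateway from Table 5 (10.226.131.65).
interface Vlan2312
 description vCenter Server Network
 ip address 10.226.131.66 255.255.255.192
 standby 12 ip 10.226.131.65
 standby 12 priority 110
 standby 12 preempt
```

Hosts use the HSRP virtual IP (10.226.131.65) as their default gateway, so a failure of either core switch does not change the gateway address seen by the hosts.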
Figure 8. VLAN interface setting
Two vSAN witness appliances are deployed to the stand-alone ESXi host, one each for the management and workload domain clusters. Layer 2 VLANs are stretched to the "third site" to support the stretched layer 2 deployment methodology. Each vSAN witness appliance is deployed on the management VLAN with an additional interface on the vSAN VLAN.