The Edge node design for the Mgmt WLD deployment with AVN enabled follows the VVD 6.0 design. Starting with the VCF 4.0 release, the deployment is fully automated once AVN is enabled. When the Mgmt WLD is deployed with Cloud Builder, two Edge node VMs are deployed in the Mgmt WLD cluster. Each Edge node is configured with an N-VDS (an NSX-T managed virtual switch) that provides connectivity to external networks. The individual interfaces fp-eth0 and fp-eth1 on the N-VDS connect externally through a vDS using two different uplink port groups that are created in trunking mode. Depending on the network layout required for system and NSX traffic, this vDS can be either the VxRail vDS or a second NSX vDS. Two TEPs are created on the edge N-VDS to provide East/West connectivity between the Edge nodes and the host transport nodes; this traffic runs active/active across both uplinks defined in the uplink profile. The management interface eth0 is connected to the vDS management port group. Figure 27 shows the connectivity for the Edge nodes running on the ESXi hosts in the Mgmt WLD cluster.
Note: The uplink port groups used to connect the Edge VM overlay interfaces are configured in trunk mode because VLAN tagging is performed by the N-VDS, where the uplink profile defines the VLAN for the edge overlay.
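As an illustration, an Edge uplink profile of this kind could be defined through the NSX-T Manager API roughly as follows. This is a sketch only: the profile name, the transport VLAN ID (1252), and the uplink names are placeholder assumptions, not values from this design.

```json
{
  "resource_type": "UplinkHostSwitchProfile",
  "display_name": "edge-uplink-profile",
  "transport_vlan": 1252,
  "teaming": {
    "policy": "LOADBALANCE_SRCID",
    "active_list": [
      { "uplink_name": "uplink-1", "uplink_type": "PNIC" },
      { "uplink_name": "uplink-2", "uplink_type": "PNIC" }
    ]
  }
}
```

Because `transport_vlan` is tagged by the N-VDS itself, the vDS uplink port groups carrying this traffic must pass the VLAN through unmodified, which is why they are configured as trunks.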
The Edge node design for the VI WLD is very similar to that of the Mgmt WLD. If the Edge automation is used to deploy the Edge cluster for a VI WLD, the same network configuration can be achieved. The following diagram shows the Edge connectivity where the cluster was added to the VI WLD using a second vDS with two uplinks.
VCF on VxRail uses a shared Edge and compute cluster design. This means that the Edge node VMs' TEP and uplink interfaces connect to the host vDS for external connectivity, and the same hosts can run user VMs that use the same host overlay.
The NSX-T Edge routing design is based on the VVD design. A Tier-0 gateway is deployed in active/active mode with ECMP enabled to provide redundancy and better bandwidth utilization; both uplinks are used. Two uplink VLANs are needed for North/South connectivity for the Edge virtual machines in the Edge node cluster. The dedicated uplink profile that is created for the Edge transport nodes defines named teaming policies. These policies are used in the edge uplink transport zone and in the segments that are created for the Tier-0 gateway uplinks and used as transit networks to connect the Tier-0 interfaces. The named teaming policy is what allows traffic from an Edge node to be pinned to a specific uplink network/VLAN connecting to the physical router. BGP provides dynamic routing between the physical and virtual environments: eBGP is used between the Tier-0 gateway and the physical ToRs, and an iBGP session is established between the SR components of the Tier-0 gateway running on the two Edge VMs.
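To illustrate the pinning described above, a VLAN segment used as a Tier-0 uplink transit network could reference a named teaming policy roughly as in the following NSX-T Policy API fragment. This is a hedged sketch: the segment name, VLAN ID, transport zone path, and teaming policy name are placeholder assumptions, not values from this design.

```json
{
  "resource_type": "Segment",
  "display_name": "t0-uplink-01",
  "vlan_ids": [ "2711" ],
  "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/edge-uplink-tz",
  "advanced_config": {
    "uplink_teaming_policy_name": "uplink-1-only"
  }
}
```

Here `uplink_teaming_policy_name` points at a named teaming policy (assumed to be called uplink-1-only) defined in the Edge uplink profile as a FAILOVER_ORDER policy with only uplink-1 in its active list, so traffic on this uplink VLAN always exits through the uplink facing the first ToR; a second segment pinned to uplink-2 would carry the other uplink VLAN.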