The following section describes the design of the NSX-T VI WLD.
The following NSX-T external network requirements must be met before deploying any NSX-T based VI WLD from SDDC Manager.
The NSX-T components are installed when the first VxRail cluster is added to the NSX-T VI WLD. The SDDC Manager deploys NSX-T components onto the management and the VI WLD clusters. The following list highlights the major steps that are performed during the deployment process:
Note: No additional NSX-T Managers are needed when a second NSX-T based VI WLD is added. The vCenter is added as a Compute Manager and the ESXi hosts are prepared for NSX-T.
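To illustrate the compute manager registration step that SDDC Manager performs automatically, the following minimal Python sketch calls the NSX-T Manager compute-managers API. The manager address, credentials, and thumbprint are placeholder assumptions; this is shown only to clarify what happens behind the scenes and is not part of the VCF workflow.

    import requests

    # Placeholder values; SDDC Manager performs this registration automatically.
    NSX_MGR = "https://nsx-mgr.example.local"
    AUTH = ("admin", "nsx-admin-password")

    payload = {
        "display_name": "wld01-vcenter",
        "server": "wld01-vcenter.example.local",
        "origin_type": "vCenter",
        "credential": {
            "credential_type": "UsernamePasswordLoginCredential",
            "username": "administrator@vsphere.local",
            "password": "vcenter-password",
            "thumbprint": "<vCenter SHA-256 thumbprint>",
        },
    }

    # Register the VI WLD vCenter as a compute manager in NSX-T Manager.
    resp = requests.post(
        f"{NSX_MGR}/api/v1/fabric/compute-managers",
        json=payload,
        auth=AUTH,
        verify=False,  # lab only; use CA-signed certificates in production
    )
    resp.raise_for_status()
    print("Compute manager ID:", resp.json()["id"])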
Figure 24 shows the NSX-V and NSX-T components deployed in the MGMT WLD and the VI WLD. It shows the VI WLD with two NSX-T clusters added.
Figure 24. NSX-T VI WLD Cluster Design
A transport zone defines the span of the virtual network, as logical switches only extend to the N-VDS on the transport nodes that are attached to the transport zone. Each ESXi host has an N-VDS component; for hosts to communicate or participate in a network, they must be joined to the transport zone. There are two types of transport zones:
When the first cluster is added to the first VI WLD, SDDC Manager creates the Overlay and VLAN transport zones in NSX-T Manager. Two additional VLAN transport zones must be manually created on Day 2 for the Edge VM uplink traffic to the physical network.
Figure 25. NSX-T Transport Zones
Note: When subsequent clusters are added to a WLD, or if a new WLD is created, all the nodes participate in the same VLAN and Overlay transport zones. For each cluster, either the same VLAN or a different VLAN can be used for the overlay TEP traffic.
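The two additional VLAN transport zones can be created in the NSX-T Manager UI, or with a short script against the NSX-T Manager transport-zones API as in the following sketch. The manager address, credentials, and N-VDS name are placeholder assumptions, and the transport zone names follow the examples used in this section.

    import requests

    # Placeholder connection details for the VI WLD NSX-T Manager.
    NSX_MGR = "https://nsx-mgr.example.local"
    AUTH = ("admin", "nsx-admin-password")

    # Day-2 task: create the two VLAN transport zones used by the Edge VM
    # uplink traffic to the physical network.
    for tz_name in ("Uplink01-TZ", "Uplink02-TZ"):
        payload = {
            "display_name": tz_name,
            "transport_type": "VLAN_BACKED",
            "host_switch_name": "nvds01",  # placeholder N-VDS name
        }
        resp = requests.post(
            f"{NSX_MGR}/api/v1/transport-zones",
            json=payload,
            auth=AUTH,
            verify=False,  # lab only
        )
        resp.raise_for_status()
        print(tz_name, "->", resp.json()["id"])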
Segments are used to connect VMs to Layer 2 networks, and they can be either VLAN or Overlay segments. The following table lists the segments that are needed to support the virtual infrastructure for an SDDC created with VCF on VxRail.
Segment Name | Uplink and Type | Transport Zone | VLAN (example)
Overlay (VCF Deployed) | None | Overlay-TZ | None
Edge-uplink01 | None | VLAN-TZ | 0-4094
Edge-uplink02 | None | VLAN-TZ | 0-4094
Edge-Overlay | None | VLAN-TZ | 0-4094
uplink01 | None | Uplink01-TZ | 1647
uplink02 | None | Uplink02-TZ | 1648
Note: Only the Overlay segment is created during the deployment of the NSX-T WLD. The other segments must be created manually on Day 2.
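The Day-2 segments can be created in the NSX-T Manager UI or through the NSX-T Policy API, as in the following sketch, which creates the three Edge trunk segments from the table. The manager address, credentials, and transport zone path are placeholder assumptions.

    import requests

    NSX_MGR = "https://nsx-mgr.example.local"   # placeholder
    AUTH = ("admin", "nsx-admin-password")      # placeholder
    VLAN_TZ_PATH = ("/infra/sites/default/enforcement-points/default/"
                    "transport-zones/<vlan-tz-uuid>")  # placeholder path

    # Day-2 task: create the Edge trunk segments listed in the table above.
    # The 0-4094 trunk range allows the Edge N-VDS to apply its own VLAN tags.
    for name in ("Edge-uplink01", "Edge-uplink02", "Edge-Overlay"):
        payload = {
            "display_name": name,
            "vlan_ids": ["0-4094"],
            "transport_zone_path": VLAN_TZ_PATH,
        }
        # PATCH creates the segment if it does not already exist.
        resp = requests.patch(
            f"{NSX_MGR}/policy/api/v1/infra/segments/{name}",
            json=payload,
            auth=AUTH,
            verify=False,  # lab only
        )
        resp.raise_for_status()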
The uplink profile is a template that defines how the N-VDS in each transport node (either host or Edge VM) connects to the physical network. It specifies the teaming policy, active uplinks, transport VLAN, and MTU. The following uplink profiles are used:
Profile | Teaming Policy | Active Uplinks | VLAN (example) | MTU
Host-uplink (VCF Deployed) | Load Balance Source | uplink-1, uplink-2 | 1644 | 9000
Edge-overlay-profile | Failover Order | uplink-1 | 1649 | 9000
Edge-uplink01-profile | Failover Order | uplink-1 | 1647 | 9000
Edge-uplink02-profile | Failover Order | uplink-1 | 1648 | 9000
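The Day-2 uplink profiles can likewise be created in the UI or through the NSX-T Manager host-switch-profiles API. The following sketch creates the Edge-uplink01-profile row from the table; the manager address and credentials are placeholder assumptions, and the VLAN and teaming values simply mirror the example table.

    import requests

    NSX_MGR = "https://nsx-mgr.example.local"  # placeholder
    AUTH = ("admin", "nsx-admin-password")     # placeholder

    # Day-2 task: create the Edge uplink profile for the first uplink VLAN.
    payload = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "Edge-uplink01-profile",
        "mtu": 9000,
        "transport_vlan": 1647,  # example VLAN from the table
        "teaming": {
            "policy": "FAILOVER_ORDER",
            "active_list": [
                {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            ],
        },
    }

    resp = requests.post(
        f"{NSX_MGR}/api/v1/host-switch-profiles",
        json=payload,
        auth=AUTH,
        verify=False,  # lab only
    )
    resp.raise_for_status()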
A transport node, as described earlier, is either a host or an Edge VM that has an N-VDS component installed and is added to one or more transport zones. Transport node profiles are used for host transport nodes. They contain the following information about the transport node:
During the VCF deployment of an NSX-T VI WLD, when a new cluster is added to the VI WLD, a transport node profile is created with the settings in the preceding list. When the clusters are added to the NSX-T VI WLD, the transport node profile is applied to the nodes in the cluster, which creates the N-VDS, adds the nodes to the transport zones, configures the physical interfaces, and creates and assigns an IP address to a TEP so that the hosts can communicate over the overlay network. Figure 26 shows a compute node with logical segments created that can use the N-VDS to communicate with VMs in the same transport zone.
Figure 26. Compute Node
Note: The application logical segments can be either Overlay-backed or VLAN-backed segments.
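The transport node profile itself is created and applied by SDDC Manager and is not edited manually, but the following sketch shows an illustrative payload of the same general shape posted to the NSX-T Manager transport-node-profiles API, assuming a standard host switch specification. The profile name, pNIC names, and all IDs are placeholder assumptions.

    import requests

    NSX_MGR = "https://nsx-mgr.example.local"  # placeholder
    AUTH = ("admin", "nsx-admin-password")     # placeholder

    # Illustrative transport node profile: one N-VDS, two pNICs, TEP addressing
    # from an IP pool, and membership in the Overlay and VLAN transport zones.
    payload = {
        "display_name": "wld01-cluster01-tnp",
        "host_switch_spec": {
            "resource_type": "StandardHostSwitchSpec",
            "host_switches": [
                {
                    "host_switch_name": "nvds01",
                    "host_switch_profile_ids": [
                        {"key": "UplinkHostSwitchProfile",
                         "value": "<host-uplink-profile-id>"},
                    ],
                    "pnics": [
                        {"device_name": "vmnic2", "uplink_name": "uplink-1"},
                        {"device_name": "vmnic3", "uplink_name": "uplink-2"},
                    ],
                    "ip_assignment_spec": {
                        "resource_type": "StaticIpPoolSpec",
                        "ip_pool_id": "<tep-ip-pool-id>",
                    },
                    "transport_zone_endpoints": [
                        {"transport_zone_id": "<overlay-tz-id>"},
                        {"transport_zone_id": "<vlan-tz-id>"},
                    ],
                }
            ],
        },
    }

    resp = requests.post(
        f"{NSX_MGR}/api/v1/transport-node-profiles",
        json=payload,
        auth=AUTH,
        verify=False,  # lab only
    )
    resp.raise_for_status()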
The edge node design follows the VVD design and is a manual configuration for VCF on VxRail; two edge node VMs are deployed in the first VI WLD cluster. VCF on VxRail has a shared edge and compute cluster design, meaning that the edge node VM overlay and uplink interfaces connect to the host N-VDS for external connectivity. The management interface connects to the VxRail vDS port group, as shown in Figure 27. For additional details on the edge node connectivity design, see the VVD documentation: Transport Node and Uplink Policy Design.
Figure 27. Edge Node connectivity design
Note: The overlay and uplink segments used to connect the edge VM overlay and uplink interfaces are in trunking mode because the Edge transport node N-VDS uses VLAN tagging.
The NSX-T edge routing design is based on the VVD design located here: Routing Design using NSX-T. A Tier-0 gateway is deployed in Active/Active mode with ECMP enabled to provide redundancy and better bandwidth utilization, as both uplinks are used. Two uplink VLANs are needed for North/South connectivity for the Edge virtual machines in the Edge node cluster. BGP is used to provide dynamic routing between the physical environment and the virtual environment: eBGP is used between the Tier-0 Gateway and the physical ToRs, and an iBGP session is established between the SR components of the Tier-0 edge VMs.
Figure 28. Edge Node North/South connectivity
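As a hedged illustration of this routing design, the following sketch enables BGP with ECMP on the Tier-0 gateway and adds the two eBGP peerings to the ToR switches through the NSX-T Policy API. The gateway and locale-services IDs, peer addresses, and AS numbers are placeholder assumptions; the actual values come from the network design.

    import requests

    NSX_MGR = "https://nsx-mgr.example.local"  # placeholder
    AUTH = ("admin", "nsx-admin-password")     # placeholder
    T0 = "wld01-tier0"                         # placeholder Tier-0 gateway ID
    LS = "default"                             # placeholder locale-services ID

    BGP_BASE = f"{NSX_MGR}/policy/api/v1/infra/tier-0s/{T0}/locale-services/{LS}/bgp"

    # Enable BGP with ECMP on the Active/Active Tier-0 gateway.
    requests.patch(
        BGP_BASE,
        json={"enabled": True, "ecmp": True, "local_as_num": "65003"},
        auth=AUTH,
        verify=False,  # lab only
    ).raise_for_status()

    # eBGP peerings to the two physical ToR switches, one per uplink VLAN.
    tor_peers = {
        "tor-a": {"neighbor_address": "172.27.11.1", "remote_as_num": "65001"},
        "tor-b": {"neighbor_address": "172.27.12.1", "remote_as_num": "65001"},
    }
    for peer_id, peer in tor_peers.items():
        requests.patch(
            f"{BGP_BASE}/neighbors/{peer_id}",
            json=peer,
            auth=AUTH,
            verify=False,
        ).raise_for_status()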
The manual configuration steps to deploy the Edge node cluster following the VVD design are located here: Deploy NSX-T Edge Cluster on VxRail.