The Cloud Foundation software layer depends on the VxRail networks and the underlying networking infrastructure being correctly configured and operational. Additional considerations and steps must be undertaken to support Cloud Foundation workloads.
Note: How the final supporting physical network is deployed will vary depending on individual requirements and networking equipment selection. The guidance provided here is limited to what must be configured on the physical switch infrastructure; complete switch configuration syntax is out of scope for this document.
Each network configured to bridge Layer 2 networks over Layer 3 is given an ID known as a segment ID. The segment ID range is configured after the Cloud Foundation domain is created. For instance, a segment ID range of 5000 to 7000 supports up to 2000 extended networks. We recommend recording the NSX segment ID for each extended network for tracking purposes, along with the properties of the bridged Layer 2 network.
The Cloud Foundation management workload domain and each VI workload domain require VLANs to be configured on the physical switches to support virtual machine traffic. The Cloud Foundation on VxRail VLANs table in the appendix outlines the required and optional VLANs for the management workload domain and VI workload domain.
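As an illustration, the following Cisco NX-OS-style sketch shows how the required VLANs might be defined and trunked to a VxRail node-facing port. The VLAN IDs, names, and interface shown are hypothetical placeholders; substitute the values from your own VLAN plan.

    vlan 1611
      name vcf-management
    vlan 1612
      name vmotion
    vlan 1613
      name vsan
    vlan 1614
      name host-overlay
    !
    interface Ethernet1/1
      description Trunk to VxRail node 1
      switchport mode trunk
      switchport trunk allowed vlan 1611-1614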
SDDC Manager configures a port group on the virtual distributed switch to support VXLAN overlay traffic. Traffic on the overlay network passes through the VTEP on the host and up through the physical network layer to its destination. The physical network layer must be configured to support the overlay traffic.
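In particular, VXLAN encapsulation adds roughly 50 bytes of overhead to each frame, so the switch ports and inter-switch links carrying overlay traffic must support an MTU larger than the default 1500 bytes; a jumbo-frame MTU of 9000 or more is commonly used. A minimal NX-OS-style sketch, assuming a hypothetical uplink interface:

    interface Ethernet1/1
      description VxRail node uplink carrying overlay traffic
      mtu 9216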
Data center routing services must be configured for the Cloud Foundation on VxRail management networks that require upstream connectivity. The management components in the management workload domain connect upstream over Layer 2 networks. At the Layer 2/Layer 3 boundary where these VLANs terminate, each network requiring Layer 3 services must be configured with an IP address to enable routing.
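For example, a switched virtual interface (SVI) can terminate a management VLAN and act as its Layer 3 gateway. The sketch below uses NX-OS-style syntax with a hypothetical VLAN ID and addressing:

    feature interface-vlan
    !
    interface Vlan1611
      description Gateway for the VCF management network
      no shutdown
      ip address 172.16.11.1/24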
In most cases, the VI workload domains constructed in a standard architecture after the deployment of the management workload domain connect to upstream routing services using the Border Gateway Protocol (BGP).
Figure 30 Comparison of management vs. workload domain routing requirements
When two virtual machines connected to different hosts need to communicate, VXLAN-encapsulated traffic is exchanged between the VTEPs on those hosts. In VXLAN, all learning about a virtual machine MAC address and its association with a VXLAN tunnel endpoint (VTEP) is performed with the support of the physical network. VXLAN depends on the IP multicast protocol to populate the forwarding tables in the virtual network.
A unique multicast group IP address is assigned to the VTEPs in each VXLAN network. Identify available multicast IP addresses to assign for this role.
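As an illustration, the NX-OS-style sketch below reserves a hypothetical multicast group range for the VTEPs and enables multicast routing on the overlay VLAN interface. The rendezvous point address and group range are placeholders; align them with your multicast design:

    feature pim
    ip pim rp-address 10.0.0.250 group-list 239.1.0.0/16
    !
    interface Vlan1614
      ip pim sparse-mode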
In Cloud Foundation on VxRail, two VXLAN tunnel endpoints are configured on each VxRail node. The endpoints are configured as virtual network adapters and are connected to the VXLAN network port group on the VxRail cluster’s virtual distributed switch.
Figure 31 DHCP services for VXLAN Tunnel Endpoints
The IP addresses for the other virtual adapters configured on the VxRail clusters are fixed. However, the IP addresses assigned to the two vmkernel interfaces that serve as tunnel endpoints are obtained from a DHCP server located in the data center. For each overlay network configured, a DHCP server must be reachable on that network to provide IP addresses to the tunnel endpoints.
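Most deployments use an existing data center DHCP server for this purpose. Purely as an illustration of the scope that is needed, the Cisco IOS-style sketch below defines a pool for a hypothetical host overlay subnet; the network, gateway, and lease values are placeholders:

    ip dhcp pool HOST-OVERLAY
     network 172.16.14.0 255.255.255.0
     default-router 172.16.14.1
     lease 0 8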
Perform the following tasks to prepare DHCP services for the Cloud Foundation on VxRail deployment:
Figure 32 NSX-T edge uplinks IP addresses assigned by DHCP
Note: If the VLAN cannot be extended out to the DHCP server, enabling ‘DHCP Helper’ (DHCP relay) services on the host overlay VLAN is recommended, provided the leaf switches support this feature.
Figure 33 DHCP Helper connects DHCP services to host overlay VLAN
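On many switch platforms, this relay function is configured on the SVI for the host overlay VLAN. The NX-OS-style sketch below is a hypothetical example (on other platforms the equivalent command is often ‘ip helper-address’); the VLAN, addressing, and DHCP server address are placeholders:

    feature dhcp
    !
    interface Vlan1614
      description Host overlay VLAN gateway
      ip address 172.16.14.1/24
      ip dhcp relay address 10.0.100.50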
Preparing the upstream physical network for routing synchronization can be performed after the initial deployment of the Cloud Foundation management workload domain is complete, unless the Application Virtual Network will be configured during that process. In most cases, VI workload domains deployed in Cloud Foundation on VxRail connect upstream by peering with external routing services using eBGP, and in most instances NSX-T is the selected virtual network platform for the VI workload domains.
Figure 34 BGP relationship between NSX-T Edge Gateways and external routers
A pair of NSX-T edge devices configured as Tier-0 gateways will be deployed for this purpose in the management workload domain. The NSX-T edge devices must be able to establish an eBGP peer relationship with the upstream routing services. The following tasks must be completed on the upstream switches for peering with the Edge Tier-0 gateways:
The example switch configuration syntax in Example Switch Configuration Settings for BGP Peering offers guidance on how to configure an Ethernet switch for peering with a pair of Edge Gateways.
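To complement that appendix example, the NX-OS-style sketch below shows the general shape of an eBGP peering with the two Edge Tier-0 uplinks. The autonomous system numbers and neighbor addresses are hypothetical placeholders; use the values from your routing design:

    feature bgp
    !
    router bgp 65100
      router-id 10.0.0.1
      neighbor 172.27.11.2 remote-as 65003
        description NSX-T Edge node 1 uplink
        address-family ipv4 unicast
      neighbor 172.27.11.3 remote-as 65003
        description NSX-T Edge node 2 uplink
        address-family ipv4 unicast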