Cloud Foundation on VxRail offers flexibility in the selection of a physical network architecture to support the planned deployment. The most common network topology for Cloud Foundation on VxRail, and the one considered a best practice, is a spine-leaf topology. In this model, the VxRail nodes connect directly to the leaf-layer switches, and a single pair of leaf-layer switches can support multiple VxRail clusters. The spine layer primarily aggregates upstream traffic, provides connectivity to external resources, and enables VTEP tunneling between racks.
Decisions must be made regarding the location of the Layer 2 and Layer 3 boundaries to support Cloud Foundation on VxRail networking. The NSX-T Tier-0 gateways depend on peering with an upstream router in the physical network using external Border Gateway Protocol (eBGP) to update the routing tables in the virtual network.
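As a rough illustration of what the physical-network side of this peering looks like, the following is a minimal sketch in FRRouting-style syntax. The ASNs, peer address, and advertised prefix are hypothetical, and the exact commands vary by router platform; consult your switch vendor's BGP documentation for the equivalent configuration.

```
! Hypothetical values: physical network ASN 65001, NSX-T Tier-0 ASN 65002,
! Tier-0 uplink peer address 172.16.10.2, advertised prefix 10.0.0.0/8.
router bgp 65001
 neighbor 172.16.10.2 remote-as 65002
 neighbor 172.16.10.2 description nsx-tier0-uplink
 address-family ipv4 unicast
  neighbor 172.16.10.2 activate
  network 10.0.0.0/8
 exit-address-family
```

The Tier-0 gateway is configured with the mirror image of this peering (local ASN 65002, neighbor 172.16.10.1 or equivalent) from within NSX-T.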
The VLANs used in Cloud Foundation on VxRail to support the guest virtual machine networks terminate at these upstream routers in the physical network. Therefore, the routing requirements of the applications planned for the VI workload domains drive the decisions for the peering of the NSX-T edge virtual devices in Cloud Foundation, and guide the process for enabling and configuring the adjacent routers in the physical network.
In most cases, routing outside of the virtual network is positioned at either the spine layer or the leaf layer. If you choose to deploy a spine-leaf network topology, enabling Layer 3 at the spine or leaf layer is not strictly required; the routing services can instead be positioned upstream of the spine-leaf fabric. However, this means Layer 2 traffic must pass through both the leaf and spine layers to reach the routers. This option is easy to deploy and configure, and is more suitable for small-scale deployments. It is appealing for sites with low routing requirements or plans for only a small workload.
Figure 25. Options for Layer 2/ Layer 3 boundaries in spine-leaf network topology
Establishing the routing layer at the spine layer means the uplinks on the leaf layer are trunked ports that pass all of the required VLANs through to the routing services at the spine layer. This topology has the advantage of allowing Layer 2 networks to span all of the switches at the leaf layer. It can simplify VxRail networks that extend beyond one rack, because the switches at the leaf layer do not need to support Layer 3 services, and VTEP tunneling between switches in different racks is not necessary.
Figure 26. VxRail cluster nodes extended beyond one physical rack
A major drawback to this topology is scalability. The 12-bit VLAN ID field in the Ethernet standard allows 4096 values, two of which are reserved, limiting the number of addressable VLANs to 4094. This can be a constraint in a fabric where the switch layer is shared. Do not select this topology option if your deployment might approach this threshold.
Enabling routing services at the leaf layer is the preferred option for Cloud Foundation on VxRail deployments. It overcomes the VLAN limitation imposed by establishing routing at the spine layer, and it optimizes routing traffic because it requires the fewest hops for the NSX-T edge virtual devices to peer with an adjacent upstream router. A caveat is that this option requires Layer 3 services to be licensed and configured at the leaf layer. In addition, because Layer 2 networks now terminate at the leaf layer, they cannot span leaf switches. If there is a requirement to extend Layer 2 networks across switches in multiple racks, the best practice is to enable hardware-based (VTEP) tunneling.
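To illustrate the concept behind VTEP tunneling, the following Linux iproute2 commands sketch a software VXLAN endpoint that extends a Layer 2 segment between two endpoints. Hardware VTEPs on leaf switches perform the equivalent encapsulation in silicon and are configured through the vendor's own CLI; the VNI and IP addresses here are hypothetical, and this is purely a conceptual sketch, not a leaf-switch configuration.

```
# Create a VXLAN interface: VNI 100, local VTEP 10.0.0.1, remote VTEP 10.0.0.2,
# using the IANA-assigned VXLAN UDP port 4789 (values are hypothetical).
ip link add vxlan100 type vxlan id 100 local 10.0.0.1 remote 10.0.0.2 dstport 4789

# Bridge the VXLAN interface with a local Layer 2 segment so frames on that
# segment are encapsulated and carried over the routed fabric to the remote VTEP.
ip link add br100 type bridge
ip link set vxlan100 master br100
ip link set br100 up
ip link set vxlan100 up
```

Because the tunnel traffic between VTEPs is ordinary routed IP, the Layer 2 segment can be extended across racks without requiring the VLAN itself to span the spine layer.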
The key points to consider for the decisions regarding the network architecture and topology are: