Cloud Foundation on VxRail offers flexibility regarding the selection of a physical network architecture to support the planned deployment. A spine-leaf topology is the most common network topology for Cloud Foundation on VxRail and is considered a best practice. In this model, the VxRail nodes connect directly to the leaf-layer switches, and multiple VxRail clusters can be supported on a single pair of leaf-layer switches. The spine layer is positioned primarily for aggregating upstream traffic, providing connectivity to external resources and enabling VTEP tunneling between racks.
Decisions must be made about where to place the Layer 2 and Layer 3 boundaries to support Cloud Foundation on VxRail networking. The NSX Tier-0 gateways depend on External Border Gateway Protocol (eBGP) peering with an upstream router in the physical network to update routing tables in the virtual network.
The VLANs used in Cloud Foundation on VxRail to support the guest virtual machine networks terminate at these upstream routers in the physical network. Therefore, the routing requirements of the applications planned for the VI workload domains drive the decisions for peering the NSX Edge virtual devices in Cloud Foundation, and guide the process of enabling and configuring the adjacent routers in the physical network.
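Two basic prerequisites for such an eBGP session are that the Tier-0 uplink and the physical router peer over a shared uplink subnet, and that the two ends use distinct autonomous system numbers. The sketch below is a minimal validation helper, not part of any Cloud Foundation tooling; the addresses, ASNs, and the function name are hypothetical, for illustration only.

```python
import ipaddress

def validate_ebgp_peering(local_ip: str, peer_ip: str, prefix: str,
                          local_asn: int, peer_asn: int) -> list[str]:
    """Check two prerequisites for an eBGP session between an NSX Tier-0
    uplink and an upstream physical router (illustrative sketch):
      1. both peer addresses sit on the shared uplink subnet, and
      2. the two ends use different autonomous system numbers
         (eBGP, by definition, runs between distinct ASNs)."""
    problems = []
    subnet = ipaddress.ip_network(prefix)
    for name, ip in (("local", local_ip), ("peer", peer_ip)):
        if ipaddress.ip_address(ip) not in subnet:
            problems.append(f"{name} address {ip} is not in uplink subnet {prefix}")
    if local_asn == peer_asn:
        problems.append("local and peer ASN are equal; that would be iBGP, not eBGP")
    return problems

# Hypothetical uplink VLAN subnet and private ASNs, for illustration only.
print(validate_ebgp_peering("172.16.10.2", "172.16.10.1", "172.16.10.0/24",
                            65003, 65100))   # → [] (no problems found)
```

An empty result means both checks pass; real peering design also involves timers, authentication, and route filtering, which are outside this sketch.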
In most cases, routing outside of the virtual network is positioned in either the spine layer or leaf layer. It is not desirable to have routing decisions made above the spine layer for Cloud Foundation on VxRail deployments.
One option is to establish the routing boundary at the spine layer rather than at the leaf layer. In this model, the uplinks on the leaf layer are trunked ports that pass all the required VLANs, using a port channel or similar construct, up to routing services on the spine layer. This option might be preferable if existing subnets and VLANs are planned for the deployment, and Cloud Foundation on VxRail is being integrated into an existing network topology with set standards.
This topology has the advantage of enabling the Layer 2 networks to span all the switches at the leaf layer. It can simplify VxRail networks that extend beyond one rack, because the switches at the leaf layer do not need to support Layer 3 services, and VTEP tunneling between switches in different racks is not necessary. For instance, for VxRail clusters using vSAN as a datastore across racks, all I/O operations can use a Layer 2 network for transport.
A major drawback of this topology is scalability. The IEEE 802.1Q standard limits the number of addressable VLANs to 4094, which can be a constraint in a shared switch-layer fabric. Do not select this topology if your deployment might exceed this threshold.
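The 4094 figure follows directly from the 802.1Q frame format. The short sketch below works through the arithmetic and adds a toy headroom check; the function name and the sample counts are hypothetical, for illustration only.

```python
# The 802.1Q VLAN ID is a 12-bit field: 2**12 = 4096 raw values.
# VLAN 0 (priority-tagged frames) and VLAN 4095 are reserved,
# leaving 4094 usable VLAN IDs per Layer 2 fabric.
RESERVED_VLANS = {0, 4095}
USABLE_VLANS = 2**12 - len(RESERVED_VLANS)
print(USABLE_VLANS)  # → 4094

def fabric_has_headroom(vlans_in_use: int, vlans_per_new_cluster: int) -> bool:
    """Rough capacity check (illustrative only): can the shared switch
    fabric absorb another cluster's VLANs without exceeding the limit?"""
    return vlans_in_use + vlans_per_new_cluster <= USABLE_VLANS

print(fabric_has_headroom(4000, 8))   # → True
print(fabric_has_headroom(4090, 8))   # → False
```

In practice the effective ceiling is lower, since platform-reserved and infrastructure VLANs also consume IDs from the same pool.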
The most common network design, and the one considered a best practice, places the Layer 2/Layer 3 boundary at the leaf layer. Enabling routing services at the leaf layer is preferred for Cloud Foundation on VxRail deployments. This option overcomes the VLAN limitation imposed by routing at the spine layer, and it optimizes routing traffic because it requires the fewest hops for the NSX Edge virtual devices to peer with an adjacent upstream router. Network management is also simplified, because each rack has its own set of subnets for allocation, and NSX spans virtual machine networks across racks.
A caveat is that this option does require Layer 3 services to be licensed and configured at the leaf layer. In addition, because Layer 2 networks now terminate at the leaf layer, they cannot span leaf switches. If there is a requirement to extend Layer 2 networks across switches in multiple racks, the best practice is to enable hardware-based (VTEP) tunneling.
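The resulting decision rule is simple: with the boundary at the leaf layer, a VLAN confined to one rack can remain plain Layer 2, while a segment whose hosts sit in multiple racks needs VTEP tunneling. The toy helper below is a sketch of that rule only; the function and rack names are hypothetical.

```python
def needs_vtep(racks_hosting_segment: set[str]) -> bool:
    """With the Layer 2/Layer 3 boundary at the leaf layer, a VLAN
    terminates at its rack's leaf switches. If the hosts attached to a
    segment span more than one rack, extending it calls for hardware
    VTEP tunneling; within a single rack, plain Layer 2 suffices."""
    return len(racks_hosting_segment) > 1

print(needs_vtep({"rack-1"}))             # → False
print(needs_vtep({"rack-1", "rack-2"}))   # → True
```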
The key points to consider for the decisions regarding the network architecture and topology are: