The production topology uses a leaf-spine fabric for performance and scalability. SFS automates the deployment of this fabric.
With SFS, two leaf switches are deployed in each rack for redundancy and performance. A Virtual Link Trunking interconnect (VLTi) connects the two leaf switches in each pair. Every leaf switch has an L3 uplink to every spine switch, and Equal-Cost Multi-Path (ECMP) routing uses all available bandwidth on the leaf-spine links.
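To illustrate how ECMP spreads traffic, the sketch below hashes a flow's 5-tuple and maps it to one of several equal-cost uplinks. The uplink names and hash function are illustrative assumptions, not the switch's actual algorithm; real hardware uses its own hash, but the principle is the same: every packet of a given flow takes the same path, while different flows spread across all spines.

```python
import hashlib

# Hypothetical uplink names; a real leaf has one uplink per spine switch.
UPLINKS = ["spine1", "spine2", "spine3", "spine4"]

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash the flow 5-tuple and map it to one of the equal-cost uplinks.

    All packets of a flow hash identically, so a flow stays on one path
    (no reordering) while distinct flows spread across all spines.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return UPLINKS[digest % len(UPLINKS)]

# The same flow always maps to the same uplink:
flow = ("10.0.1.10", "10.0.2.20", 49152, 443)
assert pick_uplink(*flow) == pick_uplink(*flow)
```

Because the hash covers the full 5-tuple, many concurrent flows between the same two hosts can still use different spines.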
SFS uses BGP-EVPN and VXLAN to stretch L2 networks across the L3 leaf-spine fabric. This combines the scalability of an L3 network with the VM mobility benefits of an L2 network. For example, a VM can be migrated from one rack to another without changing its IP address or gateway information.
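The stretched L2 segments work by encapsulating Ethernet frames inside UDP packets that the L3 fabric routes normally; each segment is identified by a 24-bit VXLAN Network Identifier (VNI). As a minimal sketch, the function below builds the 8-byte VXLAN header defined in RFC 7348 (this is the on-wire format itself, not SFS-specific code):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Byte 0 carries the flags (0x08 = 'VNI present'), bytes 4-6 carry
    the 24-bit VXLAN Network Identifier, and the remaining bytes are
    reserved (zero).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Flags word, then VNI shifted into the upper 24 bits of the second word.
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5001)
assert len(hdr) == 8
assert hdr[0] == 0x08                              # VNI-present flag
assert int.from_bytes(hdr[4:7], "big") == 5001     # 24-bit VNI
```

The 24-bit VNI is what allows roughly 16 million isolated segments, far beyond the 4094-VLAN limit of traditional L2. BGP-EVPN supplies the control plane, advertising which MAC and IP addresses live behind which VXLAN tunnel endpoint.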
Each VxRail node has two network adapter ports, which carry management, vSAN, vMotion, and NSX-T traffic. The leaf port numbers shown in the previous figure are used throughout this guide.
For high availability, VMware recommends deploying a cluster of three NSX Manager VMs and a cluster of two NSX Edge VMs. VM/Host rules are configured to keep the NSX Managers on separate hosts, and the NSX Edges on separate hosts.
An additional rule is created to keep NSX Edges on hosts connected to the border leafs. The border leafs provide the physical uplink connections to the external network. In this guide, the border leafs are Leaf1A and Leaf1B in Rack 1.
Only the NSX Edges need to be on VxRail nodes in a specific rack. NSX Manager VMs, infrastructure VMs (such as vCenter Server, VxRail Manager, and OMNI), and tenant (or user) VMs may be located on VxRail nodes in any rack as needed.
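The placement rules above can be expressed as a simple validation check. The sketch below assumes hypothetical host and VM names (the `rack1-node*` hosts stand in for the VxRail nodes connected to the border leafs Leaf1A/Leaf1B); it is not vSphere DRS code, just the same two constraints written out:

```python
# Hypothetical hosts connected to the border leafs (Leaf1A/Leaf1B, Rack 1).
BORDER_LEAF_HOSTS = {"rack1-node1", "rack1-node2", "rack1-node3"}

def check_placement(placement: dict) -> list:
    """Return rule violations for a {vm_name: host} placement map.

    Checks the rules described above: anti-affinity within the NSX
    Manager and NSX Edge clusters, plus Edge affinity to hosts
    connected to the border leafs.
    """
    violations = []
    for prefix in ("nsx-mgr", "nsx-edge"):
        hosts = [h for vm, h in placement.items() if vm.startswith(prefix)]
        if len(hosts) != len(set(hosts)):
            violations.append(f"{prefix} VMs share a host")
    for vm, host in placement.items():
        if vm.startswith("nsx-edge") and host not in BORDER_LEAF_HOSTS:
            violations.append(f"{vm} is not on a border-leaf host")
    return violations

# Managers may sit in any rack; only the Edges are pinned to Rack 1.
good = {"nsx-mgr-1": "rack2-node1", "nsx-mgr-2": "rack2-node2",
        "nsx-mgr-3": "rack3-node1",
        "nsx-edge-1": "rack1-node1", "nsx-edge-2": "rack1-node2"}
assert check_placement(good) == []
```

In a real deployment these constraints would be configured as vSphere DRS VM/Host rules rather than checked in code; the sketch only makes the logic of the rules explicit.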