You might want to span WLD VxRail clusters across racks to avoid a single point of failure within one rack. The management VMs running on the management WLD VxRail cluster, and any management VMs running on a VI WLD, require the VxRail nodes to reside on the same L2 management network. This requirement ensures that the VMs can be migrated between racks while keeping the same IP address. For a Layer 3 leaf-spine fabric, this requirement poses a challenge because the VLANs are terminated at the leaf switches in each rack.
SDDC Manager now provides the option to select static or DHCP-based IP assignment for host TEPs. This option can also be used for stretched clusters and L3-aware workload domain clusters. VMware Cloud Foundation 5.1 on VxRail 8.0.200 supports the configuration of a Sub-Transport Node Profile (Sub-TNP) within NSX as a new topology for vSAN (OSA) stretched clusters. A Sub-TNP is useful when a cluster spans multiple racks and the transport nodes of that cluster must use different transport VLANs, or must acquire the IP addresses for their tunnels from different IP pools.
VxRail clusters deployed across racks require a network design that allows one or more VxRail clusters to span racks. This solution uses a Dell PowerSwitch hardware VTEP to provide an L2 overlay network. The design extends L2 segments over an L3 underlay network for VxRail node discovery, vSAN, vMotion, management, and VM/App L2 network connectivity between racks. The following figure is an example of a multirack solution using hardware VTEP with VXLAN BGP EVPN. The advantage of VXLAN BGP EVPN over a static VXLAN configuration is that each VTEP is automatically learned as a member of a virtual network from the EVPN routes received from the remote VTEPs, so no static flood lists or manual VTEP peer configuration is required.
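As an illustration of the hardware VTEP approach, the fragment below sketches the kind of leaf-switch configuration involved, in a Dell SmartFabric OS10 style. All values (VNI, EVI, AS number, interface names, and addresses) are hypothetical, and exact command syntax varies by OS10 release; consult the OS10 EVPN configuration guide for your platform before use.

```
! Hedged sketch of a leaf-switch VXLAN BGP EVPN fragment (illustrative values only)
nve
 source-interface loopback1      ! VTEP source address, reachable over the L3 underlay
!
virtual-network 1614             ! L2 overlay segment, e.g. for the management VLAN
 vxlan-vni 1614                  ! VNI carried in the VXLAN header between racks
!
evpn
 evi 1614                        ! EVPN instance for this virtual network
  vni 1614
  route-target auto              ! derive route targets automatically
!
router bgp 65101
 neighbor 10.0.0.255             ! spine/route-reflector peer (example address)
  address-family l2vpn evpn      ! remote VTEPs are learned from received EVPN routes
   activate
```

Because VTEP membership is advertised in BGP EVPN routes, adding a rack only requires configuring its own leaf switches; the existing VTEPs discover the new peer automatically rather than through static VXLAN flood-list updates.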
For more information about Dell Network solutions for VxRail, see the Dell VxRail Network Planning Guide.