Network loads have shifted significantly with virtualization. For instance, the compute and storage resources in one data center can be scaled up by borrowing resources from another DCN, which can be achieved by leveraging live virtual machine (VM) migration. To ensure service continuity during and after the migration of a VM, the VM's IP address and running state must remain unchanged. Smooth VM migration therefore requires that all involved servers be deployed in a single Layer 2 domain, which may span multiple regions. Virtual eXtensible Local Area Network (VxLAN) was introduced to address this problem. VxLAN is an example of the network virtualization over Layer 3 (NVO3) technologies defined by the Internet Engineering Task Force (IETF). VxLAN extends the Layer 2 domain by encapsulating Layer 2 Ethernet frames and transmitting them over a VxLAN tunnel. The tunnel is established between two Virtual Tunnel EndPoints (VTEPs), which can be end hosts, network switches, or routers that encapsulate and de-encapsulate the virtual machine (VM) traffic into a VxLAN header. VxLAN, which can be considered an extension of VLAN, also addresses VLAN's scaling limitation: the 12-bit VLAN ID defined by the IEEE 802.1Q standard allows only about four thousand VLANs on a switch, which does not meet the network isolation requirements of large data centers hosting thousands of VMs. In contrast, up to 16 million VxLAN segments can theoretically be created in an administrative domain.
In each packet, VxLAN carries the identifier of the specific NVO instance, called the VxLAN Network Identifier (VNI). VxLAN encapsulation is UDP-based, with an 8-byte VxLAN header following the UDP header. VxLAN provides a 24-bit VNI, as shown in Figure 1, which typically maps one-to-one to the tenant VLAN ID (VID), as described in [RFC7348].
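The 8-byte header layout described above can be sketched in Python. This is an illustrative encoding of the RFC 7348 header format (flags byte with the VNI-present bit set, three reserved bytes, the 24-bit VNI, and a final reserved byte); the function name is a placeholder, not part of any SONiC API:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VxLAN


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VxLAN header that follows the UDP header.

    RFC 7348 layout: 1 flags byte (0x08 = VNI present), 3 reserved
    bytes, a 24-bit VNI, and 1 trailing reserved byte.
    """
    if not 0 <= vni < 2**24:  # 24-bit VNI -> ~16 million segments
        raise ValueError("VNI must fit in 24 bits")
    # "!B3x" packs the flags byte followed by 3 zeroed reserved bytes
    return struct.pack("!B3x", 0x08) + vni.to_bytes(3, "big") + b"\x00"


hdr = vxlan_header(1000)   # header for VNI 1000
```

The 24-bit VNI field is what allows roughly 16 million segments, compared with the 4094 usable IDs of a 12-bit VLAN tag.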
Integrated Routing and Bridging (IRB) is a technique that allows both routing and bridging on the same interface of a router. In IRB, the router preserves the existing VLAN header when forwarding frames between interfaces. VxLAN uses two types of IRB techniques to extend L2 host subnets over an L3 network:
In asymmetric IRB, each L2 host VLAN is mapped to a unique VxLAN VNI. If there is one tenant, the default VRF is used; multiple L2 tenants use non-default VRFs. Ingress VTEPs perform both routing and bridging, while egress VTEPs perform only L2 bridging.
Because routing is performed only on the ingress VTEP, forwarding performance improves: the egress VTEP of a VxLAN tunnel performs only bridging, not routing as in symmetric IRB. The disadvantage of asymmetric IRB is that every tenant VLAN must be configured on every VTEP in the network, so each VTEP consumes more routing table memory than with symmetric IRB. For this reason, asymmetric IRB is typically deployed in small and medium-sized data centers.
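As a hedged illustration of the asymmetric model (exact CLI syntax varies by Enterprise SONiC release; the VTEP name, source IP, VLAN IDs, and VNI numbers below are placeholders), every tenant VLAN is mapped to its own L2 VNI on every VTEP in the fabric:

```
! Repeated on EVERY VTEP, even for VLANs with no local hosts
interface vxlan vtep1
 source-ip 10.0.0.1
 map vni 10010 vlan 10
 map vni 10020 vlan 20
```

Because each VTEP carries every VLAN-to-VNI mapping, table consumption grows with the total number of tenant VLANs in the fabric, not just the locally attached ones.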
In symmetric IRB, routing is performed on both ingress and egress VTEPs. When you use symmetric IRB, host VLANs need to be configured only on the local VTEP. As a result, the routing table memory used on each VTEP is reduced, which allows symmetric IRB to scale better in larger data centers. Symmetric IRB uses a dedicated L3 VNI to route traffic between host VLANs within a tenant VRF.
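A corresponding hedged sketch for the symmetric model (again, syntax and names are illustrative placeholders, not a verified configuration): only locally attached VLANs are mapped to L2 VNIs, and one dedicated L3 VNI is mapped to the tenant VRF for routed traffic between host VLANs:

```
! Only locally attached VLANs are mapped on this VTEP
interface vxlan vtep1
 source-ip 10.0.0.1
 map vni 10010 vlan 10
 map vni 50001 vrf Vrf-tenant1   ! dedicated L3 VNI for the tenant VRF
```

Routed traffic between host VLANs is carried over the L3 VNI (50001 here) and routed on both the ingress and egress VTEPs, which is what lets remote VLANs remain unconfigured locally.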