The NSX Manager is responsible for the deployment of the NSX controller clusters and ESXi host preparation for NSX. The host preparation process installs various vSphere installation bundles (VIBs) to enable VXLAN, distributed routing, distributed firewall, and a user world agent for control plane communications. The NSX Manager is also responsible for the deployment and configuration of the NSX Edge services gateways and associated network services (load balancing, firewalling, NAT, and so on). It provides the single point of configuration and the REST API entry points for NSX in a vSphere environment.
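Since the NSX Manager is the REST API entry point, automation typically starts with an authenticated HTTPS call against it. The sketch below only builds such a request without sending it; the hostname and credentials are placeholders, and the endpoint path shown is the NSX-v logical-switch listing endpoint, which may differ in your NSX version:

```python
import base64
import urllib.request

def build_nsx_request(manager_host, username, password,
                      path="/api/2.0/vdn/virtualwires"):
    """Build (but do not send) a REST call to NSX Manager.

    The NSX-v API uses HTTP basic authentication and XML payloads.
    Host, credentials, and path here are illustrative placeholders.
    """
    url = f"https://{manager_host}{path}"
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url, method="GET")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/xml")
    return req

req = build_nsx_request("nsxmgr.example.local", "admin", "secret")
print(req.full_url)  # https://nsxmgr.example.local/api/2.0/vdn/virtualwires
```

In practice the request would be sent with `urllib.request.urlopen` (or a similar HTTP client) against the NSX Manager appliance of the relevant WLD.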
The NSX Manager also secures control-plane communication within the NSX architecture. It creates self-signed certificates for the nodes of the controller cluster and for the ESXi hosts that are allowed to join the NSX domain. Each workload domain (WLD) has its own NSX Manager as part of the VCF on VxRail solution.
The controller cluster in the NSX platform is the control-plane component that manages the hypervisor switching and routing modules. It consists of three controller nodes, clustered for scale-out and high availability, with each node managing a specific set of logical switches. An NSX Controller cluster is required for each WLD, including the management WLD and any additional VxRail VI WLDs.
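The division of logical switches among controller nodes can be pictured as a simple deterministic assignment. The sketch below is illustrative only: NSX uses its own internal slicing algorithm, and a modulo over the VNI merely stands in for it here; the node names are hypothetical.

```python
def assign_controller(vni, nodes=("controller-1", "controller-2", "controller-3")):
    """Map a logical switch (identified by its VNI) to one controller node.

    Illustrative stand-in for NSX's internal slicing mechanism: each
    logical switch is owned by exactly one node, so work is spread
    across the three-node cluster.
    """
    return nodes[vni % len(nodes)]

print(assign_controller(5001))  # controller-1
```

The point of the scale-out design is that every logical switch has exactly one owning node, while the three-node cluster as a whole survives the loss of any single member.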
The vSwitch in NSX for vSphere is based on the VDS, with additional components added to enable a rich set of services. The add-on NSX components include kernel modules distributed as VIBs. These modules run within the hypervisor kernel, providing services that include distributed routing, distributed firewall, and VXLAN-to-VLAN bridging. The NSX VDS abstracts the physical network, providing access-level switching in the hypervisor. This is central to network virtualization because it enables logical networks that are independent of physical constructs, such as VLANs.
The NSX vSwitch supports overlay networking using the VXLAN protocol and centralized network configuration. Overlay networking with NSX provides the following capabilities:
VXLAN is an overlay technology encapsulating the original Ethernet frames that are generated by workloads connected to the same logical Layer 2 segment or logical switch.
VXLAN is a L2 over L3 (L2oL3) encapsulation technology. The original Ethernet frame that is generated by a workload is encapsulated with external VXLAN, UDP, IP, and Ethernet headers to ensure it can be transported across the network infrastructure interconnecting the VXLAN endpoints.
VXLAN scales beyond the 4094-VLAN limit of traditional switches by using a 24-bit identifier, the VXLAN Network Identifier (VNI), which is associated with each L2 segment created in logical space. This value is carried inside the VXLAN header and is normally associated with an IP subnet, much as a VLAN traditionally is. Intra-IP-subnet communication occurs between devices that are connected to the same virtual network or logical switch.
VXLAN tunnel endpoints (VTEPs) are created within the vSphere distributed switch to which the ESXi hosts that are prepared for NSX for vSphere are connected. VTEPs are responsible for encapsulating outbound frames in UDP packets and for the corresponding decapsulation of inbound traffic. VTEPs are essentially VMkernel ports with IP addresses; they exchange packets with other VTEPs and join IP multicast groups through the Internet Group Management Protocol (IGMP).
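The encapsulation described above can be made concrete with the 8-byte VXLAN header defined in RFC 7348: a flags byte with the VNI-valid bit set (0x08), reserved bytes, and the 24-bit VNI. The sketch below shows only this inner VXLAN header; the outer Ethernet/IP/UDP headers (UDP destination port 4789) that a VTEP also adds are omitted for brevity, and the sample frame is a placeholder.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port used by the outer header

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Header layout (RFC 7348): flags byte 0x08 (VNI-valid bit),
    3 reserved bytes, 24-bit VNI, 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)
    return header + inner_frame

def vxlan_vni(packet: bytes) -> int:
    """Recover the 24-bit VNI from a VXLAN-encapsulated packet."""
    return struct.unpack("!I", packet[4:8])[0] >> 8

frame = b"\x00" * 14  # placeholder inner Ethernet frame
pkt = vxlan_encapsulate(frame, vni=5001)
print(vxlan_vni(pkt))  # 5001
```

This illustrates why the overlay is transparent to the physical fabric: switches and routers between VTEPs forward ordinary UDP/IP packets and never inspect the inner frame.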
Logical switching enables extension of an L2 segment or IP subnet anywhere in the fabric independent of the physical network design. The logical switching capability in the NSX platform provides the ability to deploy isolated logical L2 networks with the same flexibility and agility that exists for virtual machines. Virtual and physical endpoints can connect to logical segments and establish connectivity independently from their physical location in the data center network.
The NSX distributed logical router (DLR) provides an optimal data path for traffic within the virtual infrastructure, particularly for East-West communication. It consists of a control-plane component and a data-plane component. The control virtual machine is the control-plane component of the routing process and communicates with both the NSX Manager and the NSX Controller cluster: NSX Manager sends logical interface information to the control virtual machine and the NSX Controller cluster, and the control virtual machine sends routing updates to the NSX Controller cluster.
The data plane consists of kernel modules running on the hypervisor that provide high performance, low overhead first-hop routing.
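The first-hop routing decision made in the hypervisor kernel is, at its core, a longest-prefix-match lookup. The following is a minimal sketch of that lookup; the table entries, interface names, and default route toward an Edge uplink are all illustrative, not actual DLR internals.

```python
import ipaddress

def next_hop(dest_ip, routing_table):
    """Longest-prefix-match lookup over (prefix, next_hop) pairs,
    mirroring the decision a DLR kernel module makes for the first
    hop. The most specific matching prefix wins."""
    dest = ipaddress.ip_address(dest_ip)
    best = None
    for prefix, hop in routing_table:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None

table = [
    ("172.16.10.0/24", "lif-web"),   # logical interface on a web segment
    ("172.16.20.0/24", "lif-app"),   # logical interface on an app segment
    ("0.0.0.0/0", "edge-uplink"),    # default toward the NSX Edge
]
print(next_hop("172.16.20.7", table))  # lif-app
```

Because this lookup runs in every host's kernel, East-West traffic between two segments is routed at the first hop rather than hairpinning through a central appliance.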
The NSX Edge provides centralized on-ramp and off-ramp (North-South) routing between the logical networks deployed in the NSX domain and the external physical network infrastructure. The NSX Edge supports several dynamic routing protocols (for example, OSPF, iBGP, and eBGP) and can also use static routing. The routing capability supports two models: active-standby stateful services and equal-cost multi-path (ECMP). The NSX Edge also offers Layer 2 and Layer 3 services, perimeter firewalling, load balancing, and other services such as SSL VPN and DHCP relay. Figure 12 shows how the Edge services gateways (ESGs) can be deployed in a pair using ECMP to load-balance traffic. The ESGs peer with an upstream physical router to allow traffic from the NSX domain out to the physical network and, if necessary, on to the Internet.
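ECMP load balancing generally works by hashing a flow's 5-tuple so that every packet of a flow takes the same equal-cost path. The sketch below illustrates the principle only; real implementations use hardware or kernel hash functions rather than SHA-256, and the ESG names are placeholders.

```python
import hashlib

def ecmp_pick(src_ip, dst_ip, src_port, dst_port, proto, paths):
    """Hash the flow 5-tuple to choose one equal-cost path.

    Deterministic per flow, so a TCP session always traverses the
    same ESG and is never reordered across paths (illustrative
    hash choice; not how any particular router computes it).
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

esgs = ["esg-1", "esg-2"]  # the ECMP pair from Figure 12
choice = ecmp_pick("172.16.10.5", "198.51.100.9", 33000, 443, "tcp", esgs)
```

Note the trade-off mentioned above: because flows may land on either ESG, the ECMP model forgoes the stateful services (such as perimeter firewalling) available in the active-standby model.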
The NSX distributed firewall (DFW) provides stateful firewall services to any workload in the NSX environment. The DFW runs in kernel space and provides near-line-rate network traffic protection. This enforcement model scales firewall rule enforcement horizontally without creating bottlenecks on physical appliances. The DFW is activated when the host preparation process completes. If a VM does not require the DFW service, it can be added to the exclusion list.
By default, NSX Manager, NSX Controllers, and Edge services gateways are automatically excluded from DFW enforcement. During deployment, VCF also adds the management VMs to the DFW exclusion list.
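At a very high level, DFW evaluation amounts to first-match rule processing with an exclusion-list bypass. The sketch below is a deliberate simplification, matching only on a source segment; real DFW rules match on source, destination, and service, and all names here are hypothetical.

```python
def dfw_action(vm, rules, exclusion_list, default="deny"):
    """Simplified first-match rule evaluation with an exclusion list.

    VMs on the exclusion list bypass the DFW entirely; all other
    traffic falls through the rule list to an implicit default rule.
    (Illustrative model only, not the actual DFW rule schema.)
    """
    if vm["name"] in exclusion_list:
        return "allow"              # excluded VMs bypass DFW
    for rule in rules:
        if rule["src"] in ("any", vm["segment"]):
            return rule["action"]   # first matching rule wins
    return default                  # implicit default rule

rules = [{"src": "web-tier", "action": "allow"}]
exclusions = {"nsx-manager", "nsx-controller-1", "edge-gw-1"}
vm = {"name": "app-01", "segment": "app-tier"}
print(dfw_action(vm, rules, exclusions))  # deny
```

The exclusion-list bypass in the sketch mirrors why the NSX infrastructure VMs themselves are excluded by default: a mis-ordered rule must never cut off the components that manage the firewall.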