Dell EMC VxRail is a hyper-converged infrastructure (HCI) solution that consolidates compute, storage, and networking into a single, highly available, unified system. All physical compute, network, and storage resources in VxRail are managed as a single shared pool and allocated to applications and services based on customer-defined business and operational requirements. The foundation of VxRail is a collection of uniquely engineered and manufactured server nodes that are bonded together in a cluster and placed under a single point of management.
The pool of nodes that comprises a VxRail cluster depends on an external physical network infrastructure to serve as a backplane for essential HCI services and to provide connectivity to customer resources and end users. The logical networking in a VxRail cluster is enabled by the configuration of a single virtual distributed switch that supports required VxRail management traffic, virtual machine interconnectivity, and access to external application and end-user resources. Each node connects to a redundant pair of adjacent leaf Ethernet switches in the data center, enabling interconnectivity between the physical and logical networks through the virtual distributed switch.
When the VxRail cluster is formed from the pool of nodes during initial implementation, the nodes establish physical network connectivity with the adjacent leaf switches. Portgroups are configured on the virtual distributed switch that is created during this process. Each of these portgroups enables connectivity for the essential networking services required by the VxRail cluster.
The VxRail management components, in the form of virtual machines, are deployed as part of the implementation process. They are assigned to a portgroup on the virtual distributed switch that supports the external management network, and depend on the underlying physical networking infrastructure to provide connectivity for IT administration. In addition, virtual machines created on the VxRail cluster after implementation must be able to communicate with other virtual machines and applications both on the local virtual network and with applications and end users external to the VxRail cluster. This is accomplished by configuring guest networks on the VxRail cluster using defined portgroups on the virtual distributed switch.
The remaining logical networks shown in the graphic that are required by the VxRail cluster at the time of implementation are internal to the VxRail cluster and do not require connectivity outside of the adjacent leaf switches. The internal management network is used for node and switch discovery, the vSAN network supports storage services, and the vMotion network enables virtual machine mobility between the nodes of the cluster.
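The split between externally reachable networks and networks confined to the adjacent leaf switches can be sketched as a simple model. This is purely illustrative: the portgroup names and VLAN IDs below are arbitrary example values, not Dell EMC defaults, and the helper function is a hypothetical construct for reasoning about which VLANs must be trunked upstream.

```python
# Illustrative model of the VxRail required networks as portgroups on the
# virtual distributed switch. VLAN IDs and names are example values only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Portgroup:
    name: str
    vlan_id: int
    external: bool  # True if traffic must leave the adjacent leaf switches

REQUIRED_NETWORKS = [
    Portgroup("External Management", 10, external=True),
    Portgroup("Internal Management", 3939, external=False),  # node/switch discovery
    Portgroup("vSAN", 20, external=False),                   # storage services
    Portgroup("vMotion", 30, external=False),                # VM mobility
    Portgroup("Guest Network", 40, external=True),           # VM/application traffic
]

def vlans_to_trunk_upstream(portgroups):
    """Only externally reachable networks need VLANs carried beyond the leaves."""
    return sorted(pg.vlan_id for pg in portgroups if pg.external)

print(vlans_to_trunk_upstream(REQUIRED_NETWORKS))  # → [10, 40]
```

The model makes the point of the preceding paragraphs concrete: the internal management, vSAN, and vMotion networks never appear in the upstream trunk list, while the external management and guest networks do.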
The network infrastructure supporting the VxRail cluster must be properly configured before initial implementation to support VxRail connectivity, and continuous management and reconfiguration of the physical network is expected to support ongoing operation of the applications and virtual machines on the cluster. These tasks include ongoing management of the leaf switches to support the VxRail required networks, as well as providing the upstream infrastructure that extends the networks east and west across data center racks, and north and south to and from the core network.
VxRail is an integrated infrastructure solution with a validated architecture and control points derived from a single point of management. The procedures and processes to establish connectivity between the physical and logical networks are prescriptive and standardized. This simplifies the tasks that need to be performed on the physical network to support VxRail applications and operations. Many of those repetitive tasks can be automated.
Dell EMC SFS is an optional feature integrated into OS10, the operating system engineered for Dell EMC enterprise-class Ethernet switches. When SFS is enabled on a switch, control and management no longer originate from the default method of the console, apart from a few core management functions. Instead, the switch is reconfigured at enablement to establish a new personality profile, transitioning to a mode that supports the applications or products using its networking resources. When a set of interconnected switches is set to a common personality profile, the switches form a united fabric, with one switch selected as the master switch. The fabric can be easily expanded by connecting additional switches into the unified infrastructure and enabling the personality profile through SFS. The fabric can be expanded linearly at the leaf layer, or spine switches can be added to form a two-tier network that spans data center racks.
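The fabric-formation behavior described above can be sketched in a few lines. Note the hedges: the grouping-by-profile and the election rule used here (lowest MAC address wins) are assumptions for illustration only, not the documented SFS election algorithm, and the switch records are hypothetical.

```python
# Illustrative sketch of SFS-style fabric formation: switches that share a
# personality profile join one fabric, and one member is selected as master.
# The lowest-MAC election rule is an assumed stand-in, not the SFS algorithm.

def form_fabric(switches):
    """Group switches by personality profile; elect one master per fabric."""
    fabrics = {}
    for sw in switches:
        fabrics.setdefault(sw["profile"], []).append(sw)
    for members in fabrics.values():
        master = min(members, key=lambda s: s["mac"])  # assumed election rule
        for sw in members:
            sw["role"] = "master" if sw is master else "member"
    return fabrics

switches = [
    {"name": "leaf1", "profile": "VxRail", "mac": "00:0c:29:aa:01"},
    {"name": "leaf2", "profile": "VxRail", "mac": "00:0c:29:aa:02"},
]
fabric = form_fabric(switches)  # both leaves join one "VxRail" fabric
```

Expanding the fabric then amounts to cabling in another switch and enabling the same personality profile, after which it appears as an additional member of the existing fabric.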
Support for a VxRail personality profile is engineered into both SFS and into VxRail Manager, and can be enabled with a minimum of two switches to form a fabric. When the VxRail personality profile mode is set on the pair of switches, the master switch is reconfigured so that a browser can be used in a simplified interface to configure the properties for implementing a VxRail cluster on the fabric.
When a VxRail cluster is implemented, VxRail Manager discovers the powered-on nodes on the internal management network, places them under VxRail control, and uses them to form the managed cluster. VxRail Manager uses this same internal network to discover a switch fabric with the VxRail personality profile enabled, thereby establishing a link and point of control between the VxRail logical network and the physical network.
The primary benefit of this established link is the synchronization of the physical switch fabric and VxRail logical networks during the cluster build process to ensure uninterrupted network connectivity. During initial cluster build in VxRail Manager, the network settings and properties are entered to form the VxRail cluster. After the data entry process is completed, VxRail Manager sends the network settings and properties to the switch fabric, where SFS automatically configures the switch fabric to support the VxRail cluster.
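The build-time handoff described above can be sketched as follows. The `FabricAPI` class and its methods are hypothetical stand-ins for the switch fabric under SFS control, not a real Dell EMC API; the sketch only models the flow of settings from VxRail Manager's data entry to the fabric.

```python
# Hedged sketch of build-time synchronization: the network settings entered
# in VxRail Manager are pushed to the switch fabric, which configures itself.

class FabricAPI:
    """Hypothetical stand-in for an SFS-managed switch fabric."""
    def __init__(self):
        self.configured_vlans = set()

    def apply_network(self, name, vlan_id):
        # SFS would program every switch in the fabric; here we just record it.
        self.configured_vlans.add(vlan_id)

def build_cluster(fabric, network_settings):
    """Push each VxRail network entered during cluster build to the fabric."""
    for name, vlan_id in network_settings.items():
        fabric.apply_network(name, vlan_id)
    return fabric.configured_vlans

fabric = FabricAPI()
settings = {"management": 10, "vsan": 20, "vmotion": 30}  # example values
build_cluster(fabric, settings)
```

The point of the sketch is that the administrator enters the settings once; the fabric side is derived automatically rather than configured switch by switch through the console.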
The VxRail personality profile within SFS supports expansion into a multi-rack, multi-tier network topology that is managed as a single switch fabric. The default method of using the console for switch and network configuration is disabled by SFS, except for a handful of basic management functions. The OMNI vCenter plug-in is used to support ongoing management of the single switch fabric.
Neither vCenter nor VxRail Manager has control of or visibility into the supporting physical network enabled by SFS without the OMNI plug-in. The OMNI plug-in is a free download from Dell. It is deployed as a virtual appliance on a vCenter instance in the data center to support SmartFabric-enabled network administration tasks. The tool can manage and operate multiple instances of SmartFabric-enabled networks, and provides support for tasks such as uplink connectivity, onboarding of non-integrated devices, and life cycle management. The OMNI plug-in can be accessed directly through a vClient user interface on the vCenter instance. When registered with the vCenter supporting the VxRail cluster, the OMNI plug-in enables a single, unified management view of the physical network fabric and the virtualized network environment.
The link between the physical and virtual networks established by the OMNI plug-in also enables automatic synchronization of the physical and virtual networks, ensuring uninterrupted network connectivity after the VxRail cluster is built. For instance, if a new portgroup with a new VLAN is created on the virtual distributed switch in the cluster, that network state change is detected and the new VLAN is sent to the switch fabric, where it is applied using SFS. There is no need to manually configure the switch fabric through the console in this instance.
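The ongoing reconciliation in that example can be modeled as a simple set difference. All names here are hypothetical; the real plug-in reacts to vCenter network state changes rather than polling a list, but the underlying logic, comparing the desired VLANs on the virtual distributed switch against what the fabric already carries, is the same idea.

```python
# Illustrative model of ongoing OMNI-style synchronization: detect VLANs
# present on the virtual distributed switch but missing from the fabric.

def sync_fabric(vds_portgroups, fabric_vlans):
    """Return the VLANs that must be added to the fabric to match the vDS."""
    desired = {vlan for _, vlan in vds_portgroups}
    return sorted(desired - fabric_vlans)

# The fabric already carries the cluster's initial VLANs (example values)...
fabric_vlans = {10, 20, 30}
# ...then an administrator creates a new guest portgroup on VLAN 50.
vds_portgroups = [("mgmt", 10), ("vsan", 20), ("vmotion", 30), ("guest-a", 50)]

missing = sync_fabric(vds_portgroups, fabric_vlans)  # → [50], programmed via SFS
```

In the scenario from the paragraph above, only the newly created VLAN is pushed to the fabric; the existing configuration is left untouched, which is what keeps connectivity uninterrupted.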