vSAN presents a robust, secure, and efficient shared datastore to all nodes within a VxRail cluster. External SAN storage is typically not part of a VxRail environment; however, a requirement often exists to access external storage to move virtual machines and data into a VxRail environment or between environments. Both Fibre Channel SAN connectivity and IP-based storage are supported. An important distinction is that data in a Fibre Channel, iSCSI, or NFS datastore is self-contained and is not distributed across the disk groups within the VxRail cluster. External storage can supplement the capacity of a VxRail environment, but it is typically not used as the primary means of meeting capacity requirements. The VMware Virtual Machine File System (VMFS) can be configured over Fibre Channel and iSCSI, NFS is available over IP, and vSphere Virtual Volumes (vVols) are available through both Fibre Channel and IP.
Customers can order Fibre Channel host bus adapters (HBAs) with their VxRail systems to access external storage. Fibre Channel storage can be configured to complement local VxRail cluster storage or to serve as primary storage for VxRail dynamic node clusters.
Customers that run application workloads on enterprise storage arrays typically value the data resiliency and data protection services that these storage platforms offer. These services are critical for many industries, including the healthcare and financial industries. By attaching external storage to VxRail dynamic node clusters, customers can meet these requirements while also benefiting from a simplified LCM experience in their vSphere clusters.
Note: When Fibre Channel-attached external storage is used as primary storage, VxRail dynamic nodes support only Dell PowerStore T, PowerMax, and Unity XT storage arrays.
A common use case for Fibre Channel-attached external storage as secondary storage is a customer's desire to continue using an existing storage array as secondary storage for VxRail. Another use case is migrating data from Fibre Channel storage to VxRail vSAN datastores. Customers can connect to any storage array that is supported by the HBA card and validated by VMware. However, Dell provides support only for connecting the HBA to Dell PowerStore, SC, Unity, Symmetrix VMAX or PowerMax, and XtremIO storage arrays that have been qualified by eLab.
When configuring external storage through the Fibre Channel HBA, customers can install VIB or driver files as required to operationalize the external storage. VxRail does not include the firmware and drivers of the Fibre Channel HBA in its Continuously Validated States, so customers are responsible for maintaining and updating the Fibre Channel HBA. Customers can install multiple HBAs if the PCIe bus has available slots.
iSCSI can be used to provide mobility for VMs and associated data onto and between VxRail environments. The following figure shows a VxRail environment that includes iSCSI storage in addition to the vSAN datastore.
Data on the iSCSI storage is easily moved into the vSAN datastore in the VxRail environment or between VxRail environments.
iSCSI provides block-level storage using the SCSI protocol over an IP network. SCSI uses a client/server, initiator-target model in which initiators issue read and write operations to target devices, and targets either return the requested read data or persistently store write data. iSCSI support is standard functionality in a VMware environment: a software adapter using a NIC on an ESXi host is configured as an initiator, and targets on an external storage system present LUNs to the initiators. The external LUNs are then configured as VMFS datastores. For more information about using ESXi with iSCSI SAN, see the vSphere product documentation.
iSCSI configuration is performed using the vSphere Web Client. The high-level configuration steps are:
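Although the configuration is performed in the vSphere Web Client, the same steps can be sketched from the ESXi command line with `esxcli`. The adapter name and target address below are illustrative placeholders, not values from this document:

```shell
# Enable the software iSCSI adapter on the ESXi host
esxcli iscsi software set --enabled=true

# Identify the software iSCSI adapter name (for example, vmhba65)
esxcli iscsi adapter list

# Add a dynamic (SendTargets) discovery address for the external array
# (adapter name and target IP:port are placeholders)
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba65 --address=192.168.10.20:3260

# Rescan the adapter so that targets and LUNs are discovered
esxcli storage core adapter rescan --adapter=vmhba65
```

This is a sketch of a minimal software-iSCSI setup; a production configuration would also bind VMkernel ports to the adapter and configure CHAP authentication where required.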
After iSCSI configuration is complete, iSCSI targets and LUNs can be discovered and used to create datastores and map them to the hosts in the cluster.
iSCSI works best in a network environment that provides consistent and predictable performance, and a separate VLAN is usually implemented. When planning the network requirements for the VxRail environment, consider iSCSI network requirements to ensure that connectivity to the external iSCSI storage system exists and that the additional network traffic will not affect other applications.
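As one illustration of isolating iSCSI traffic, a dedicated VLAN can be tagged on the port group that carries the iSCSI VMkernel interface. The port group name and VLAN ID below are placeholders:

```shell
# Tag a dedicated VLAN on the standard-switch port group used for iSCSI
# (port group name and VLAN ID are illustrative placeholders)
esxcli network vswitch standard portgroup set \
    --portgroup-name=iSCSI-A --vlan-id=100

# Confirm the VLAN assignment
esxcli network vswitch standard portgroup list
```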
A network file system provides file-level storage using the NFS protocol over an IP network. It serves many of the same use cases as iSCSI; the difference is that NFS devices are presented as file systems rather than block devices. The following figure shows a network file system that has been exported from a network-attached server and mounted by the ESXi nodes in the VxRail environment. This network-attached file system allows for data mobility into and between VxRail environments as well as access to additional storage.
The external NFS server can be an open system host, typically UNIX or Linux, or a specially built system. The NFS server takes physical storage and creates a file system. The file system is exported, and client systems—ESXi hosts in a VxRail system, in this example—mount the file system and access it over the IP network.
Like iSCSI, NFS is a standard vSphere feature and is configured using the vSphere Web Client. The high-level configuration steps are:
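The equivalent host-side operation can be sketched with `esxcli`. The server address, export path, and datastore name below are placeholders:

```shell
# Mount an NFS export as a datastore on the ESXi host
# (server address, export path, and datastore name are placeholders)
esxcli storage nfs add \
    --host=192.168.20.30 --share=/exports/vxrail --volume-name=nfs-ds01

# Verify that the datastore is mounted and accessible
esxcli storage nfs list
```

This sketch mounts an NFS v3 export; NFS 4.1 datastores use the separate `esxcli storage nfs41` namespace.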
The NFS file system appears in the environment just like the vSAN datastore. VMs, templates, OVA files, and other storage objects can be easily moved between the NFS file system and the vSAN datastore using Storage vMotion.
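Storage vMotion itself is driven from the vSphere Web Client, but as a related illustration, an individual virtual disk can also be copied between datastores from the ESXi shell with `vmkfstools` (this is disk cloning, not Storage vMotion; the datastore and VM paths are placeholders):

```shell
# Clone a virtual disk from an NFS datastore to the vSAN datastore
# (source and destination paths are illustrative placeholders;
# the destination directory must already exist)
vmkfstools --clonevirtualdisk /vmfs/volumes/nfs-ds01/vm1/vm1.vmdk \
    /vmfs/volumes/vsanDatastore/vm1/vm1.vmdk
```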
As with iSCSI, NFS works best in network environments that provide consistent and predictable performance. Consider the NFS network requirements when initially planning the network requirements for the VxRail environment.