vSAN presents a robust, secure, and efficient shared datastore to all nodes within a VxRail cluster. External SAN storage is typically not part of a VxRail environment. However, a requirement often exists to access external storage in order to move virtual machines and data into a VxRail environment or between environments. Both Fibre Channel SAN connectivity and IP-based storage are supported. An important distinction is that data in a Fibre Channel, iSCSI, or NFS datastore is self-contained and is not distributed to the disk groups within the VxRail cluster. External storage can provide additional capacity to the VxRail environment, but it is typically not used to meet capacity requirements.
Customers can order Fibre Channel (FC) host bus adapters (HBAs) with their VxRail for external storage. FC storage can be configured to complement local VxRail cluster storage. Common use cases are customers who want to continue using their existing storage array as secondary storage to VxRail, or who need a method to migrate data from their FC storage to VxRail vSAN datastores. VxRail does not provide lifecycle management for the FC HBA; customers must manage it through vCenter Server.
Using an FC HBA, customers can connect to any storage array that is supported by the HBA card and validated by VMware. However, Dell EMC will only provide support for connection of the HBA to a Dell EMC storage array (PowerStore, SC, Unity, Symmetrix VMAX/PowerMax, or XtremIO) that is qualified by eLab.
When configuring external storage through the FC HBA, customers are allowed to install the VMs, VIBs, and drivers required to operationalize the external storage. The customer is responsible for maintaining and updating them. Customers can install multiple HBAs if slots are available on the PCIe bus.
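As a minimal sketch of how the FC HBA appears from the ESXi command line (assuming SSH access to a host; the adapter name vmhba3 below is a placeholder, not a value from this document):

```shell
# List all storage adapters known to ESXi; the FC HBA should appear here.
esxcli storage core adapter list

# Show FC-specific attributes such as WWNN, WWPN, and link state.
esxcli storage san fc list

# After zoning and LUN masking are complete on the array side, rescan the
# HBA so ESXi discovers the newly presented devices.
# "vmhba3" is a placeholder; substitute the adapter name from the list above.
esxcli storage core adapter rescan --adapter=vmhba3
```

The same information is visible in vCenter under the host's Storage Adapters view, which is where VxRail customers would normally manage the HBA.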
iSCSI can be used to provide mobility for VMs and associated data onto and between VxRail environments. The figure below shows a VxRail environment that includes iSCSI storage in addition to the vSAN datastore.
Figure 22. Data mobility into and between VxRail environments
Data on the iSCSI storage is easily moved into the VxRail vSAN environment or between VxRail environments.
iSCSI provides block-level storage using the SCSI protocol over an IP network. SCSI uses a client-server, initiator-target model where initiators issue read/write operations to target devices, and targets either return the requested read data or persistently save write data. iSCSI in a VMware environment is standard functionality. A software adapter using the NIC on an ESXi host is configured as an initiator, and targets on an external storage system present LUNs to the initiators. The external LUNs could be used by ESXi as raw device mapping (RDM) devices; however, the more common use case is to configure them as VMFS datastores. (Refer to vSphere documentation for more information: Using ESXi with iSCSI SAN.)
iSCSI configuration is performed using the vSphere web client. The steps involve creating a port group on the VDS, creating a VMkernel network adapter, associating it with the port group, and assigning an IP address. Then, from the vCenter Manage Storage Adapters view, the Add iSCSI Software Adapter dialog is used to create the software adapter. The last step is to bind the iSCSI software adapter to the VMkernel adapter. Once this is complete, iSCSI targets and LUNs can be discovered and used to create new datastores and map them to the hosts in the cluster.
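The same steps can also be performed per host from the ESXi command line. The following is a sketch assuming SSH access; the adapter name vmhba64, VMkernel interface vmk2, and target address 192.168.10.50 are placeholders, not values from this document:

```shell
# Enable the software iSCSI adapter on the host.
esxcli iscsi software set --enabled=true

# Confirm the adapter name that ESXi assigned (for example vmhba64).
esxcli iscsi adapter list

# Bind the VMkernel adapter created for iSCSI traffic to the software
# adapter (port binding). vmk2 is a placeholder.
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Add the storage system's target portal for dynamic (SendTargets) discovery.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.50:3260

# Rescan so the LUNs presented by the target become visible to the host.
esxcli storage core adapter rescan --adapter=vmhba64
```

In practice the vSphere web client workflow described above is the supported path in a VxRail environment; the CLI is useful mainly for verification and troubleshooting.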
iSCSI works best in a network environment that provides consistent and predictable performance, and a separate VLAN is usually implemented. iSCSI network requirements should be considered when planning the network requirements for a VxRail environment to make sure connectivity to the external iSCSI storage system exists and that the additional network traffic will not impact other applications.
NFS is a network filesystem that provides file-level storage using the NFS protocol over an IP network. It can work in use cases similar to iSCSI, the difference being that NFS devices are presented as file systems rather than block devices. The figure below shows an NFS file system that has been exported from a network-attached server and mounted by the ESXi nodes in the VxRail environment.
Figure 23. Network-attached file system with VxRail
This enables data mobility into and between VxRail environments as well as enabling additional storage capacity.
The external NFS server can be an open-systems host, typically Unix or Linux, or a purpose-built system. The NFS server takes physical storage and creates a file system. The file system is exported, and client systems, in this example ESXi hosts in a VxRail system, mount the file system and access it over the IP network.
Similar to iSCSI, NFS is a standard vSphere feature and is configured using the vCenter web client. This is done in the Hosts and Clusters view under Related Objects, using the New Datastore dialog. Select NFS as the datastore type and specify the NFS version, the name of the datastore, the IP address or hostname of the NFS server that exported the filesystem, and the hosts that will mount it. The NFS filesystem then appears to the cluster just like the vSAN datastore. VMs, templates, OVA files, and other storage objects can easily be moved between the NFS filesystem and the vSAN datastore using Storage vMotion.
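The equivalent mount can also be performed per host from the ESXi command line. The following is a sketch assuming SSH access; the server name nas01.example.com, export path /vol/vxrail_export, and datastore name nfs-ds01 are placeholders, not values from this document:

```shell
# Mount an NFSv3 export as a datastore on this host.
esxcli storage nfs add --host=nas01.example.com --share=/vol/vxrail_export --volume-name=nfs-ds01

# For NFS 4.1, use the nfs41 namespace instead (note --hosts, which
# accepts a comma-separated list for multipathing):
# esxcli storage nfs41 add --hosts=nas01.example.com --share=/vol/vxrail_export --volume-name=nfs-ds01

# List mounted NFS datastores to confirm the mount succeeded.
esxcli storage nfs list
```

Note that esxcli mounts the datastore on a single host; the New Datastore wizard in vCenter is the simpler path for mounting the export across all hosts in the cluster at once.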
As with iSCSI, NFS works best in network environments that provide consistent and predictable performance. The network requirements for NFS should be considered when initially planning the network requirements for a VxRail environment.