NVMe over Fabrics (NVMeoF)
Beginning with PowerMaxOS 5978.444.444, the PowerMax array introduced NVMeoF, or NVMe over Fabrics. NVM stands for non-volatile memory: the media itself, such as NAND-based flash or storage class memory (SCM), which comprises the PowerMax back end. NVMe, or non-volatile memory express, is a set of standards that defines a PCI Express (PCIe) interface used to efficiently access data storage volumes on NVM. NVMe provides concurrency, parallelism, and scalability to drive performance, and it replaces the SCSI interface. NVMeoF is the specification that details how that NVMe storage is accessed over the network between host and storage. The network transport can be Fibre Channel, TCP, RoCE, or any number of other next-generation fabrics. Dell currently supports NVMe over Fibre Channel on the PowerMax 2000/8000/2500/8500, also known as FC-NVMe, and NVMe over TCP on the PowerMax 2500/8500, also known as NVMe/TCP.
Using FC-NVMe requires 32 Gb/s Fibre Channel modules (SLICs). The FN emulation is assigned to ports on these modules to support FC-NVMe. In addition, to use FC-NVMe with a host, a supported operating system and a supported HBA are necessary. With NVMeoF, targets are presented to a host as namespaces (equivalent to SCSI LUNs) in active/active or asymmetrical access (ALUA) modes.
Using NVMe/TCP with the PowerMax 2500/8500 requires 25 Gb/s network modules (SLICs). An OR emulation is assigned to ports on these modules to support NVMe/TCP. In addition, to use NVMe/TCP with a host, a supported operating system and a supported network card are necessary.
VMware vSphere 7 is the first release to support NVMeoF, offering both NVMe over Fibre Channel (FC-NVMe) and NVMe over RDMA (RoCE v2). Currently, Dell offers only FC-NVMe with vSphere 7 on the PowerMax 2000/8000/2500/8500 and NVMe/TCP with vSphere 7 U3 on the PowerMax 2500/8500. ESXi hosts discover the presented namespaces, internally emulate the NVMeoF targets as SCSI targets, and present them to the upper layers of the storage stack.
Note: The PowerMax 2000/8000 does not support NVMe/TCP. The PowerMax 2500/8500 supports FC-NVMe when running a minimum of PowerMaxOS 10.1.0.0.
Dell requires the following to use FC-NVMe with vSphere 7:
Both Dell and VMware have a number of restrictions for NVMeoF which are included below:
Despite these restrictions, there is nothing unique about the configuration of NVMeoF on the PowerMax for VMware. The general, recommended best practices from Dell should be followed.
NVMeoF devices use the EUI (Extended Unique Identifier) prefix rather than the NAA (Network Addressing Authority) prefix. Either identifier, however, is unique to that LUN. The NAA or EUI number is generated by the storage device. Because the NAA or EUI is unique to the LUN, the same identifier is used across all ESXi hosts to which the device is presented. This is what permits ESXi hosts to recognize the same datastore on that LUN, even if those hosts are in different vCenters (or standalone). Figure 3 shows how FC-NVMe devices appear within the vSphere Client, and Figure 4 shows NVMe/TCP devices. They are not classified as (Dell) EMC disks, but simply as NVMe. Like devices presented via FC and iSCSI, NVMeoF supports both the mobility ID and compatibility ID formats, each represented below.
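To make the two identifier formats concrete, the following shell sketch classifies a vSphere device identifier by its prefix. The helper function and the sample IDs are illustrative only and are not part of any Dell or VMware tooling; real identifiers can be seen with `esxcli storage core device list` on an ESXi host.

```shell
# Illustrative helper: classify a vSphere device identifier by its prefix.
# The sample IDs below are made up for demonstration.
classify_id() {
  case "$1" in
    naa.*) echo "NAA" ;;       # typical for SCSI-presented PowerMax devices
    eui.*) echo "EUI" ;;       # typical for NVMeoF-presented namespaces
    t10.*) echo "T10" ;;       # another SCSI identifier format
    *)     echo "unknown" ;;
  esac
}

classify_id "naa.60000970000197900123533030303233"   # -> NAA
classify_id "eui.0000000000000001000009700000012a"   # -> EUI
```

Because the prefix travels with the device ID, a datastore moved from SCSI to NVMe presentation is seen by ESXi as residing on a different device, which is what drives the resignaturing requirement discussed later in this section.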
Figure 3 FC-NVMe devices in the vSphere Client
Figure 4 NVMe/TCP devices in the vSphere Client
VMware offers specific esxcli commands for NVMe to list namespaces, controllers, drivers, and more. The options are shown in Figure 5.
Figure 5 Esxcli nvme command
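As a sketch of how the options in Figure 5 might be used from the ESXi shell (the adapter name, discovery IP, and port below are illustrative, and the exact subcommand set can vary by ESXi release):

```shell
# List NVMe storage adapters (e.g., an FC-NVMe or NVMe/TCP vmhba)
esxcli nvme adapter list

# List the NVMe controllers discovered behind those adapters
esxcli nvme controller list

# List the namespaces (the NVMe equivalent of SCSI LUNs)
esxcli nvme namespace list

# Fabrics-level discovery for NVMe/TCP (adapter, IP, and port are examples)
esxcli nvme fabrics discover -a vmhba65 -i 192.168.1.10 -p 8009
```

These commands run only on an ESXi host; the same information surfaced here is largely what the vSphere Client displays under the storage adapter, as noted below.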
In addition, many of the command outputs are available in the vSphere Client. Simply highlight the appropriate storage adapter where the NVMe devices are presented. In Figure 6, detail on devices, paths, namespaces, and controllers is available.
Figure 6 NVMeoF detail in the vSphere Client
There are two supported methods for migrating a VM from a VMFS datastore on a SCSI presented device to an NVMe presented device: online and offline.
The online method uses VMware Storage vMotion to move the VM between protocols. This method requires two devices: the SCSI-presented device holding the VMFS datastore where the VM resides, and a new NVMe-presented device with a new VMFS datastore.
The offline method involves resignaturing the VMFS datastore after presenting it via the new NVMe protocol. Recall that the signature of a VMFS datastore is derived in part from the device ID; because SCSI and NVMe use different formats for that ID, VMware shows a mismatch when the device is presented through a different protocol. Therefore, the device must be resignatured and the VMs re-registered.
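The resignaturing step in the offline method might look like the following from the ESXi shell (the datastore label is illustrative; the authoritative procedure is the Dell/VMware KB referenced at the end of this section):

```shell
# After unmapping the SCSI device and presenting the same PowerMax device
# via FC-NVMe or NVMe/TCP, the VMFS volume appears as an unresolved
# snapshot copy because the device ID no longer matches the signature.

# List unresolved VMFS snapshot volumes
esxcli storage vmfs snapshot list

# Resignature by volume label (label shown is an example)
esxcli storage vmfs snapshot resignature -l "SCSI_Datastore_01"
```

After resignaturing, the datastore mounts under a new name (a snap-prefixed label by default) and the VMs on it must be re-registered in vCenter.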
Each method has its pros and cons. The online method requires no downtime, but double the storage is consumed (at least initially), and it can be very time consuming. The offline method is very fast, but it does require downtime, however short. A customer should weigh these pros and cons to come up with a suitable migration plan. For small environments, Storage vMotion may be perfectly acceptable; large environments may require using both methods, determining which datastores must remain online and which can tolerate downtime.
Dell, in conjunction with VMware, validated the offline method in the following KB: https://www.dell.com/support/kbdoc/en-us/000213232?lang=en