VMware ESXi requires SPC-2 compliant SCSI devices. When connecting VMware ESXi to a Dell PowerMax storage array, the SPC-2 bit must be set on the appropriate Fibre Channel ports. The SPC-2 bit is set by default on all PowerMax arrays, so the default factory settings should be used.
Note: Consult the Dell Support Matrix for an up-to-date listing of port settings.
Note: The bit settings described previously, and in the Dell Support Matrix, are critical for proper operation of VMware virtualization platforms and Dell software. The bit settings can be set either at the port or per HBA.
In the PowerMaxOS iSCSI implementation, port flags that previously needed to be set on the physical port are instead set on the iSCSI target. The iSCSI target port flags SCSI_3, SPC2_PROTOCOL_VERSION, and SCSI_SUPPORT1 are enabled by default when a target is created on PowerMax.
The PowerMax supports all types of SCSI-3 Persistent Reservations (PR) on all devices, except for vVols and devices presented through NVMe-oF (as noted above). SCSI-3 Persistent Reservations and WEAR (Write Exclusive All Registrants) support enable clustered VMDK support in vSphere 7 and 8.
When using FC or FC-NVMe, each ESXi server in the VMware vSphere environment should have at least two physical HBAs, and each HBA should be connected to at least two different front-end ports on different directors of the PowerMax.
Similarly, when using iSCSI or NVMe/TCP, each ESXi server in the VMware vSphere environment should have at least two physical NICs (do not use NIC teaming), and there should be at least two front-end ports on different directors of the PowerMax.
The FC examples below do not include SAN switches. The diagrams depict the recommended zoning rather than the physical connectivity; however, some customers may use direct connectivity, which would mirror the examples.
Note: iSCSI connectivity is configured differently from FC zoning. For setup details, see the white paper iSCSI Implementation for Dell Storage.
If the PowerMax in use has only one engine, each HBA should be connected to both the odd and even directors within it. Each ESXi host with two HBAs using two ports therefore has a total of four paths, as in Figure 1. Note that the port numbers were chosen arbitrarily and the emulations are not included.
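The four-path layout described above can be sketched as a simple enumeration: each HBA is zoned to one port on the odd director and one on the even director. The director and port names below are hypothetical placeholders, not identifiers from a real array; an actual configuration would use the names reported by Unisphere.

```python
# Sketch: enumerate the logical paths for one ESXi host zoned to a
# single-engine PowerMax, following the two-HBA recommendation above.
# Director and port names are hypothetical examples.
hbas = ["vmhba1", "vmhba2"]

# One port on the odd director and one on the even director of the
# single engine; each HBA is zoned to both.
targets = [("OR-1C", 4), ("OR-2C", 7)]  # (director, port) pairs

paths = [(hba, director, port)
         for hba in hbas
         for director, port in targets]

for hba, director, port in paths:
    print(f"{hba} -> {director}:{port}")

print(f"Total paths: {len(paths)}")  # 2 HBAs x 2 directors = 4 paths
```

Adding a second engine simply extends the `targets` list with ports on its directors, increasing the path count per host accordingly.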
Figure 1. Connecting ESXi servers to a single engine PowerMax
Multi-engine connectivity is shown in Figure 2.
Figure 2. Connecting ESXi servers to a multi-engine PowerMax
Note that ports on the PowerMax are not bound to a dedicated CPU; instead, all front-end ports have access to all the CPUs designated for that part of the director. This means a single port on a PowerMax can drive far more IO than a comparable port on previous array models. From a performance perspective, two ports per host are sufficient for small-block IO, while more ports are recommended for large-block IO to handle the increase in Gb/s.
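As a rough illustration of the small-block versus large-block point above, the sketch below estimates how many front-end ports a workload needs when bandwidth is the limiting factor. The 32 Gb/s line rate and 70% headroom figure are assumptions made for this example, not official PowerMax sizing guidance.

```python
import math

def ports_needed(workload_mb_s: float, port_gb_s: float = 32.0,
                 headroom: float = 0.7) -> int:
    """Estimate front-end ports from bandwidth alone (large-block IO).

    port_gb_s is the FC line rate in gigabits per second; headroom
    leaves a fraction of each port unused for bursts. Both defaults
    are illustrative assumptions, not an official sizing rule.
    """
    usable_mb_s = port_gb_s * 1000 / 8 * headroom  # Gb/s -> usable MB/s
    # Never drop below two ports: redundancy, not bandwidth, sets the floor.
    return max(2, math.ceil(workload_mb_s / usable_mb_s))

# Small-block workload: low bandwidth, so the two-port minimum holds.
print(ports_needed(400))    # 2
# Large-block workload: bandwidth drives the port count up.
print(ports_needed(12000))  # 5 (at 32 Gb/s with 70% headroom)
```

The point of the sketch is that small-block workloads are rarely bandwidth-bound, so the redundancy minimum of two ports dominates, whereas large-block streams quickly exceed a single port's usable throughput.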
Note: Dell recommends using four ports in each port group. When creating a port group in Unisphere for PowerMax, the user is warned if fewer than four ports are used.
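When building port groups programmatically, a sanity check like the following can encode the four-port recommendation and the earlier guidance to spread ports across directors. The "director:port" string format and names are assumptions for illustration; this mirrors the Unisphere warning rather than reproducing any actual Unisphere or Solutions Enabler API.

```python
def check_port_group(ports: list[str]) -> list[str]:
    """Return warnings for a proposed port group.

    Ports are "director:port" strings, e.g. "OR-1C:4" (hypothetical
    naming). Flags groups with fewer than four ports, echoing the
    Unisphere warning, and groups confined to a single director.
    """
    warnings = []
    if len(ports) < 4:
        warnings.append(
            f"Only {len(ports)} ports; Dell recommends four per port group.")
    directors = {p.split(":")[0] for p in ports}
    if len(directors) < 2:
        warnings.append(
            "All ports are on one director; use at least two for redundancy.")
    return warnings

print(check_port_group(["OR-1C:4", "OR-2C:7"]))
print(check_port_group(["OR-1C:4", "OR-1C:5", "OR-2C:7", "OR-2C:8"]))  # []
```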