Address the following PowerMax iSCSI considerations when setting up your system.
When planning PowerMax connectivity for performance and availability, use a “go wide before going deep” policy: it is better to connect storage ports across different engines or directors than to use all the ports on a single director. That way, even if a component fails, the storage can continue to service host I/Os. Connect at least two iSCSI ports to the Ethernet switch, preferably from different director boards and to different switches, to ensure high availability and protection against network failures.
PowerMax iSCSI targets are assigned to a specific director board. We recommend evenly distributing targets among all available director boards for best performance, as each director board has its own assigned CPU cores to service iSCSI I/O requests.
When creating a masking view, ensure that each storage group is accessible from the host through at least two targets (iSCSI virtual ports), and map each target to a separate physical port.
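As a sketch of this step with Solutions Enabler, a masking view ties together an initiator group, a port group, and a storage group. The SID and all group and view names below are hypothetical placeholders, and the three groups must already exist before the view is created:

```shell
# Create a masking view combining an initiator group (host IQNs),
# a port group (iSCSI director ports), and a storage group (devices).
# SID and group/view names are example values only.
symaccess -sid 0352 create view -name oltp_mv \
    -ig oltp_ig \
    -pg iscsi_pg \
    -sg oltp_sg

# Review the resulting view and its members
symaccess -sid 0352 show view oltp_mv
```

For the two-target requirement above, include at least two iSCSI virtual ports, on separate physical ports, in the port group before creating the view.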
When using host-based MPIO like PowerPath or DM-Multipath, a single host initiator can access the same devices by presenting them through multiple targets on the storage array. This allows the host to see the devices through multiple paths. Each path from the initiator to the targets will have its own session and connection. This connectivity method is often referred to as “port binding”.
Dell Technologies recommends that you use the port binding method for connecting to the PowerMax from hosts. Configure multiple targets on PowerMax and establish connectivity from the host initiator to each of the targets.
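On a Linux host with open-iscsi, port binding amounts to discovering and logging in to each PowerMax target separately, creating one session per path. A minimal sketch, in which the IP addresses and IQNs are placeholders:

```shell
# Discover targets behind each PowerMax iSCSI IP interface
iscsiadm -m discovery -t sendtargets -p 192.168.80.10:3260
iscsiadm -m discovery -t sendtargets -p 192.168.81.10:3260

# Log in to each discovered target; each login creates its own
# iSCSI session and connection from the initiator
iscsiadm -m node -T iqn.1992-04.com.emc:target-a-example \
    -p 192.168.80.10:3260 --login
iscsiadm -m node -T iqn.1992-04.com.emc:target-b-example \
    -p 192.168.81.10:3260 --login
```

Repeat the login for every target configured on the array so that MPIO sees all intended paths.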
For most OLTP workloads, which feature a mixture of database reporting and batch jobs, 4 or 8 front-end ports providing 4 or 8 paths per device can provide very good throughput (IOPS) and moderate bandwidth (GB/s). Each path translates to one iSCSI session between initiator and target.
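To confirm that the expected number of paths (and therefore sessions) is in place, the active sessions and the multipath topology can be inspected from the host; output details vary by configuration:

```shell
# One line per iSCSI session; with port binding, expect one session per path
iscsiadm -m session

# With DM-Multipath, each PowerMax device should list one path per session
multipath -ll
```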
VLANs and PowerMax host initiator groups are used to support multi-tenancy on PowerMax. VLANs provide isolated virtual networks so that only tenants (initiators) that are on the same VLAN as the target can access the target. PowerMax host initiator groups are part of a PowerMax device masking configuration, which allows fast and flexible changes to relationships between host initiators, storage target ports, and storage devices. Only the participating initiators can see the assigned devices. The PowerMax iSCSI target is not tied to a specific port, and up to 64 targets can be mapped to a physical port. Each target can have up to eight IP interfaces assigned to it, providing a high level of multi-tenancy with security. Multiple IP interfaces can share the same physical interface and can still be isolated using different VLANs and assigned to different targets on the same ports.
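On the host side, tenant isolation is typically implemented with VLAN-tagged interfaces. A sketch using iproute2, where the parent interface name, VLAN ID, and addressing are placeholders chosen for illustration:

```shell
# Create a VLAN 100 subinterface on eth0; only PowerMax IP interfaces
# on the same VLAN are reachable from this initiator
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.168.100.21/24 dev eth0.100
ip link set dev eth0.100 up
```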
Host I/O limits is a quality of service (QOS) feature that provides the option to place specific IOPs or bandwidth limits on any storage group. Assigning a specific Host I/O limit for IOPS to a storage group with low performance requirements can ensure that a spike in I/O demand will not saturate or overload the storage and degrade the performance of more critical applications. Using Host I/O limits can ensure predictable performance for all hosts in a multi-tenant environment. The Host I/O limit is applicable at the storage group level, so it is available for both FC and iSCSI storage access.
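As an illustrative sketch, a Host I/O limit can be applied to a storage group with the symsg command. The SID and group name are placeholders, and the option names below are assumptions that may vary by Solutions Enabler version, so verify them against the symsg documentation before use:

```shell
# Cap a low-priority storage group (flag names are assumptions;
# confirm against the symsg documentation for your SE version)
symsg -sid 0352 -sg reporting_sg set -iops_max 10000 -bw_max 500
```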
PowerMax CPU cores are a critical resource when planning for performance. PowerMax automatically allocates cores to each emulation, such as FC, iSCSI, and SRDF. You can list the number of cores allocated to each emulation using the Solutions Enabler command `symcfg list -dir ALL`. In certain cases, especially with a low PowerMax Brick count and many emulations, the default core allocation may not account for specific application I/O bottlenecks. If your major workload is on iSCSI, ensure that enough cores have been allocated to the SE emulation on all PowerMax Bricks. The example below shows that the core count for iSCSI (SE emulation) is 12 on both director boards.
# symcfg list -dir ALL
Symmetrix ID: 000197600352 (Local)
S Y M M E T R I X D I R E C T O R S
Ident Type Engine Cores Ports Status
----- ------------ ------ ----- ----- ------
IM-1A IM 1 4 0 Online
IM-2A IM 1 4 0 Online
SE-1E GigE 1 12 4 Online
SE-2E GigE 1 12 4 Online
Although it is rare, if you suspect that core allocation is limiting iSCSI performance, contact your Dell Technologies account representative to review the distribution of cores between emulations.