Take the following matters into account when planning your PowerMax iSCSI implementation.
When planning PowerMax connectivity for performance and availability, use a “go wide before going deep” approach: it is better to spread storage ports across different engines or directors than to consume all the ports on a single director first. In this way, even if a component fails, the storage can continue to service database I/Os.
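For example, a port group can be spread across both SE directors with Solutions Enabler. The SID, group name, and director:port values below are placeholders for illustration:
# symaccess -sid 0352 create -name dbsrv_pg -type port -dirport 1E:0,2E:0,1E:1,2E:1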
When creating a masking view, ensure that each storage group is accessible from the database server through at least two targets (iSCSI virtual ports). That way, each storage device is available on two or more paths from the database server, which increases both availability and performance.
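A minimal Solutions Enabler sketch of such a masking view follows. The group names, device range, and initiator IQN are placeholders, and the port group created above is assumed to contain targets on at least two directors:
# symaccess -sid 0352 create -name dbsrv_ig -type initiator -iscsi iqn.1994-05.com.redhat:dbsrv01
# symaccess -sid 0352 create -name dbsrv_sg -type storage devs 00A1:00A4
# symaccess -sid 0352 create view -name dbsrv_mv -sg dbsrv_sg -ig dbsrv_ig -pg dbsrv_pg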
PowerMax CPU cores are a critical resource when planning for performance. PowerMax automatically allocates cores to each emulation, such as FC, iSCSI, and SRDF. You can list the number of cores allocated to each emulation with the Solutions Enabler command symcfg list -dir all. In certain cases, especially with a low PowerMax engine (brick) count and many emulations, the default core allocation may not take specific application I/O bottlenecks into account. If your major workload is on iSCSI, ensure that enough cores are allocated to the SE emulation and that the cores are balanced equally across directors and engines (bricks). The following example shows that the core count for iSCSI (SE emulation) is 12 on both director boards.
# symcfg list -dir all

Symmetrix ID: 000197600352 (Local)

           S Y M M E T R I X   D I R E C T O R S

    Ident  Type          Engine  Cores  Ports  Status
    -----  ------------  ------  -----  -----  ------
    IM-1A  IM                 1      4      0  Online
    IM-2A  IM                 1      4      0  Online
    SE-1E  GigE               1     12      4  Online
    SE-2E  GigE               1     12      4  Online
If you suspect that an insufficient number of cores is associated with the iSCSI emulation, contact your Dell EMC account representative to review the core allocation distribution between emulations.
When using host-based multipathing software such as PowerPath or DM-Multipath, a single host initiator can access the same devices through multiple targets on the storage system, so the host sees the devices through multiple paths. Each path from the initiator to a target has its own session and connection. This connectivity method is often referred to as “port binding”.
When configuring PowerMax iSCSI connectivity, Dell Technologies recommends that you use the port-binding method. Configure multiple targets on the PowerMax and establish connectivity from the database server initiator to each of the targets.
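On a Linux database server with open-iscsi, for example, port binding amounts to discovering each PowerMax target portal and logging in to all of them. The portal IP addresses below are placeholders:
# iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
# iscsiadm -m discovery -t sendtargets -p 192.168.2.10:3260
# iscsiadm -m node --login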
For most OLTP workloads, which typically mix transactional activity with database reporting and batch jobs, 4 or 8 front-end ports providing 4 or 8 paths per device deliver very good throughput (IOPS) and moderate bandwidth (GB/s). Each path translates to one iSCSI session between initiator and target.
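The sessions, and the resulting paths per device, can be verified from the host, for example with open-iscsi and DM-Multipath:
# iscsiadm -m session
# multipath -ll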
VLANs and PowerMax host initiator groups are used to support multi-tenancy on PowerMax. VLANs provide isolated virtual networks so that only tenants (initiators) that are on the same VLAN as the target can access the target. PowerMax host initiator groups are part of a PowerMax device masking configuration, which allows fast and flexible changes to relationships between host initiators, storage target ports, and storage devices. Only the participating initiators can see the assigned devices. The PowerMax iSCSI target is not tied to a specific port, and up to 64 targets can be mapped to a physical port. Each target can have up to 8 IP interfaces assigned to it, providing a high level of multi-tenancy with security. Multiple IP interfaces can share the same physical interface and can still be isolated using different VLANs and assigned to different targets on the same ports.
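On the host side, a tenant initiator can be confined to its target's VLAN with a tagged sub-interface. A minimal Linux iproute2 sketch, assuming the iSCSI network uses VLAN 100 on eth0 and a placeholder address:
# ip link add link eth0 name eth0.100 type vlan id 100
# ip addr add 192.168.100.21/24 dev eth0.100
# ip link set dev eth0.100 up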
With high-capacity and powerful NVMe flash storage such as the PowerMax storage system, many databases and applications are often consolidated into a single storage system. The PowerMax storage system uses Service Levels (SLs) to determine the performance objectives and priorities of applications by managing the I/O latencies of storage groups (SGs) in accordance with their SL.
By default, the PowerMax storage system assigns the Optimized SL to new SGs. An SG with this SL receives the best performance the system can provide, but it has the same priority as all other SGs that are also set with the Optimized SL. As a result, a sudden high load from one SG (such as an auxiliary application) might affect the performance of another SG (such as a key mission-critical application), because they all share the same system priorities and performance goals. Using specific SLs can prevent this situation.
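For example, a mission-critical SG can be set to a high-priority SL and an auxiliary SG to a lower one with Solutions Enabler. The SG names are placeholders, and -slo is the historical flag name that Solutions Enabler uses for service levels; verify the exact syntax against your SE version:
# symsg -sid 0352 -sg finance_sg set -slo Diamond
# symsg -sid 0352 -sg reports_sg set -slo Bronze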
Use cases for SLs include “caging” the performance of a “noisy neighbor”, prioritizing Production versus Test/Dev systems performance, and satisfying the needs of Service Providers or organizations using “chargeback” in which their clients pay for a service level.
Host I/O limits are a quality of service (QoS) feature that provides the option to place specific IOPS or bandwidth limits on any storage group. Assigning an IOPS limit to a storage group with low performance requirements ensures that a spike in its I/O demand does not saturate or overload the storage and degrade the performance of more critical applications. Using Host I/O limits helps ensure predictable performance for all servers in a multi-tenant environment. Because Host I/O limits are applied at the storage group level, they are available for both FC and iSCSI storage access.
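As an illustrative sketch, an IOPS cap might be placed on a low-priority SG with Solutions Enabler. The SG name and value are placeholders, and the flags may differ across SE releases, so verify them against the symsg documentation:
# symsg -sid 0352 -sg testdev_sg set -iops_max 5000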