The following sections describe considerations regarding the design and connectivity of the iSCSI network.
An iSCSI architecture is made up of a set of core components, including initiator and target nodes, iSCSI names, IP interfaces, sessions, connections, and security. This section details each of the components. Figure 1 shows the relationships between each of these components in establishing an iSCSI session for presenting a PowerMax device to Windows servers.
Figure 1. Core components of PowerMax iSCSI
iSCSI initiator nodes, such as hosts, are the data consumers. The iSCSI initiator can be implemented either as a driver installed on the host system or within the hardware of an iSCSI HBA, which typically includes a TCP Offload Engine (TOE). The host initiates requests to, and receives responses from, an iSCSI target (storage node). iSCSI initiators must manage multiple, parallel communication links to multiple targets.
iSCSI target nodes, such as disk arrays or tape libraries, are data storage providers.
iSCSI target nodes expose one or more SCSI LUNs to specific iSCSI initiators. The target node listens for and responds to commands from iSCSI initiators on the network. On the enterprise storage level, iSCSI target nodes are logical entities, not tied to a specific physical port. iSCSI targets must manage multiple, parallel communication links to multiple initiators.
In a PowerMax iSCSI implementation, iSCSI target nodes are also referred to as storage virtual ports to indicate the separation of a target node from its physical port. Multiple target nodes can be associated with each physical port, and therefore provide more scale and flexibility.
iSCSI initiator and target nodes are identified by a unique iSCSI name. iSCSI names are ASCII strings and must be unique on a per-namespace (Network ID) basis. iSCSI names should ideally be unique worldwide; however, because they can be generated by users as well as by algorithms, duplicates can occur, even on the same array. iSCSI names are formatted in two ways:

- IQN (iSCSI Qualified Name): begins with the prefix iqn., followed by a registration date, the reversed domain name of the naming authority, and optionally a colon and a unique suffix
- EUI (Extended Unique Identifier): begins with the prefix eui., followed by a 16-character IEEE EUI-64 identifier
Note: As IQN formatting is most common, the examples in this paper are all based on IQN.
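As an illustration, the following minimal Python sketch constructs and loosely validates an IQN; the naming-authority domain and suffix shown are examples, not values taken from a specific array:

```python
import re

def make_iqn(year_month: str, reversed_domain: str, unique_part: str) -> str:
    """Build an iSCSI Qualified Name: iqn.<yyyy-mm>.<reversed-domain>:<unique-part>"""
    return f"iqn.{year_month}.{reversed_domain}:{unique_part}"

# Loose check of the IQN layout: date stamp, reversed domain, optional unique suffix
IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.+)?$")

name = make_iqn("1992-04", "com.emc", "tgt0")  # hypothetical suffix for illustration
assert IQN_PATTERN.match(name)
```

A real IQN is assigned by the array or host software; the point of the sketch is only the three-part layout of the name.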
iSCSI target nodes are accessed through IP interfaces (also called network portals). An iSCSI network portal contains key network configuration information, such as the IP address and TCP listening port (3260 by default) used to reach the target node.
An iSCSI network portal can only provide access to a single iSCSI target node; however, you can access an iSCSI target node through multiple network portals. These portals can be grouped together to form a portal group. Portal groups are identified by a unique portal group tag (network ID) and defined for the iSCSI target node. All portals in a portal group must provide access to the same iSCSI target node.
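The grouping described above can be sketched as a simple data structure; the target name, addresses, and group tag below are hypothetical:

```python
# A portal group tag (network ID) maps to the set of portals that all expose
# the same iSCSI target node.
portal_groups = {
    0: {
        "target": "iqn.1992-04.com.emc:tgt0",                   # hypothetical target node
        "portals": [("10.0.0.21", 3260), ("10.0.1.21", 3260)],  # (IP address, TCP port)
    },
}

def portals_for_target(target_iqn: str):
    """Collect every portal, across all groups, that provides access to the target."""
    return [portal
            for group in portal_groups.values() if group["target"] == target_iqn
            for portal in group["portals"]]

paths = portals_for_target("iqn.1992-04.com.emc:tgt0")
```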
iSCSI initiator and target nodes communicate over a linkage called an iSCSI session. The session is the vehicle for the transport of the iSCSI packets, or Protocol Data Units (PDUs), between the initiators and target nodes. Each session is started by the initiator, which logs into the iSCSI target. The session between the initiator and target is identified by an iSCSI session ID. Session IDs are not tied to the hardware and can persist across hardware swaps.
Session components are tied together by a TCP/IP connection. The IP addresses and TCP port numbers in the network portals (IP interfaces) define the end points of a connection.
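A session and its connections can be modeled roughly as follows; the IQNs and IP addresses are hypothetical, and 3260 is the IANA-registered iSCSI port:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass(frozen=True)
class NetworkPortal:
    ip: str
    tcp_port: int = 3260  # IANA-registered iSCSI listening port

@dataclass
class IscsiSession:
    initiator_iqn: str
    target_iqn: str
    # Each connection's end points are an (initiator portal, target portal) pair
    connections: List[Tuple[NetworkPortal, NetworkPortal]] = field(default_factory=list)

    def add_connection(self, initiator_portal: NetworkPortal,
                       target_portal: NetworkPortal) -> None:
        self.connections.append((initiator_portal, target_portal))

session = IscsiSession("iqn.1991-05.com.microsoft:host1",  # hypothetical initiator
                       "iqn.1992-04.com.emc:tgt0")         # hypothetical target
session.add_connection(NetworkPortal("10.0.0.11"), NetworkPortal("10.0.0.21"))
```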
Network design is key to making sure iSCSI works properly and delivers the expected performance in any environment. The following are best practice considerations for iSCSI networks:
Note: This is especially relevant in a Microsoft application environment, where the block size is typically 8 KB: a 9,000-byte MTU can carry a Microsoft application block in a single frame, whereas a 1,500-byte MTU requires transmitting multiple packets for each database block read or write I/O operation.
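The arithmetic behind this note can be checked with a short calculation, assuming plain IPv4/TCP headers with no options and a single 48-byte iSCSI basic header segment per block:

```python
import math

IP_TCP_HEADERS = 40  # 20-byte IPv4 header + 20-byte TCP header (no options)
ISCSI_BHS = 48       # iSCSI Basic Header Segment on the data PDU

def frames_needed(block_bytes: int, mtu: int) -> int:
    """Ethernet frames required to carry one application block at a given MTU."""
    payload_per_frame = mtu - IP_TCP_HEADERS
    return math.ceil((block_bytes + ISCSI_BHS) / payload_per_frame)

print(frames_needed(8192, 1500))  # 6 frames for an 8 KB block at the default MTU
print(frames_needed(8192, 9000))  # 1 jumbo frame carries the whole block
```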
Use either Dell EMC PowerPath or native Windows multipathing (MPIO) on all hosts. It is important that the two do not coexist on the same server, as this can cause instability.
Utilize multipathing software enabled on the host rather than multiple connections per iSCSI session (MC/S). MC/S is not supported for PowerMax iSCSI targets.
For Windows MPIO, use the "Round Robin (RR)" load balancing policy. Round Robin automatically rotates path selection through all available paths, distributing the load across the configured paths and helping to improve I/O throughput. For PowerMax storage arrays, all available paths are used in the Round Robin policy.
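The rotation behavior can be sketched in a few lines; the path names below are hypothetical:

```python
from itertools import cycle

class RoundRobinSelector:
    """Pick paths in rotation through all available paths, one per I/O,
    in the spirit of MPIO's Round Robin policy."""
    def __init__(self, paths):
        self._cycle = cycle(paths)

    def next_path(self):
        return next(self._cycle)

selector = RoundRobinSelector(["path-A", "path-B", "path-C"])
picks = [selector.next_path() for _ in range(6)]
print(picks)  # ['path-A', 'path-B', 'path-C', 'path-A', 'path-B', 'path-C']
```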
Use the "Symmetrix Optimized" algorithm for Dell EMC PowerPath software. This is the default policy, so administrators do not need to change or tune configuration parameters. PowerPath selects a path for each I/O according to the load balancing and failover policy for that logical device, choosing the best path according to the algorithm.