In a typical SAN configuration, HBA ports (initiators) and storage front-end ports (targets) are connected to a switch. The switch software creates zones that pair initiators with targets. Each pairing creates a physical path between the server and the storage through which I/Os can pass. When configuring server and storage connectivity using SAN switches, the following best practices apply:
The following points provide guidelines for the number of paths per device and for server-to-storage connectivity.
When using FC protocol, the PowerMax 32Gb front-end modules are configured and referred to as FAs (FC front-end adapters). When using FC-NVMe protocol, the PowerMax 32Gb front-end modules use the same hardware, but are configured and referred to as FNs (FC-NVMe front-end adapters).
Because the PowerMax embedded management requires access to the storage, even when the system uses FNs exclusively for server connectivity, at least one port per director on the first engine (brick) must be configured as FA. When a port is configured, CPU cores from that director are assigned to it.
As a result, when comparing storage CPU core assignment between a single-engine system configured exclusively for FC server connectivity and one configured exclusively for FC-NVMe server connectivity, all other things being equal, the FC-NVMe system has slightly fewer cores available to support its ports.
This difference is not important in most cases and does not detract from FC-NVMe advantages such as improved latency and optimized I/O access. In addition, the more engines (bricks) that are configured, the smaller the difference becomes, because it affects only the first engine. However, in cases where maximum IOPS are expected from the system (such as in acceptance tests and benchmarks), it could be seen as an advantage to use FC instead of FC-NVMe.
PowerMax uses masking views to determine which devices are visible to servers. A masking view contains a Storage Group (SG), Port Group (PG), and Initiator Group (IG). When you create a masking view, the devices in the SG are made visible to the server initiators in the IG through the ports in the PG.
When changes are made to any of the masking view components, the masking view updates automatically. For example, adding devices to the SG automatically makes the new devices visible to the server through the initiators and ports in the masking view.
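For example, growing the database only requires adding a device to the SG; the existing masking view picks it up automatically. A minimal sketch, assuming an SG named data_sg as in the CLI example later in this section (the device ID 155 is hypothetical; use the ID reported by symdev create):

```shell
# Create a new 200 GB thin device (note the device ID it reports, e.g. 155)
symdev create -tdev -cap 200 -captype gb
# Add it to the existing storage group; it becomes visible to the server
# through the existing masking view, with no further masking changes needed
symsg -sg data_sg add dev 155
```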
A storage group (SG) contains a group of devices that are managed together. Additionally, an SG can contain other SGs, in which case the top-level SG is called a parent SG and the lower-level SGs are called child SGs. In a parent/child SG configuration, devices are managed either by using any of the child SGs directly or by using the parent SG, in which case the operation affects all the child SGs. For example, use the parent SG for the masking view and the child SGs for database backup/recovery snapshots and more granular performance monitoring.
An initiator group (IG) contains the World Wide Names (WWNs) of server initiators (HBA ports) to which storage devices are mapped. Additionally, an IG can contain other IGs, in which case the top-level IG is called a parent IG and the lower-level IGs are called child IGs.
A parent/child IG deployment is useful when the database is clustered. Each child IG contains the initiators from a single server and the parent IG aggregates all of them. When the masking view is created, the parent IG is used. When a cluster node is added or removed from the cluster, the masking view does not change. Only the parent IG is updated by adding or removing the child IG that matches the node that is being added or removed.
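For example, adding a node to the cluster in the CLI example later in this section would touch only the IGs, leaving the masking views unchanged. A sketch, assuming the hypothetical new node dsib0059 (its IG name and WWN placeholder are illustrative):

```shell
# Child IG for the new cluster node
symaccess -type initiator -name dsib0059_ig create
symaccess -type initiator -name dsib0059_ig add -wwn <new_node_wwn>
# Add the child IG to the parent; masking views that use db_ig are unchanged
symaccess -type initiator -name db_ig add -ig dsib0059_ig
# Removing a node is the reverse:
# symaccess -type initiator -name db_ig remove -ig dsib0059_ig
```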
A port group (PG) contains a group of targets (storage front-end ports). When placed in a masking view, these are the storage ports through which the devices in the SG are accessed.
Because the physical connectivity is determined by the SAN zone sets, for simplicity of management, we recommend that you include in the PG all the storage ports that the database will use. The specific path relationships between the PG ports and IG initiators are determined by the zone sets.
For environments that are not mission-critical, it is sufficient to place all the database devices in a single SG and use a single masking view for the entire database.
The following guidelines apply to high-performance, mission critical databases in which data and log SGs are separated to allow backup and recovery by using storage snapshots and more granular performance monitoring.
In this case, data_sg and redo_sg are joined under a parent dataredo_sg SG, and FRA is in its own SG. The following table shows that there are two masking views for the database and one for the cluster or Grid Infrastructure.
Table 6. Sample masking view design

Masking view | Storage group                               | Port group      | Initiator group (servers)
dataredo_mv  | dataredo_sg (parent of data_sg and redo_sg) | 188_pg          | db_ig (Server1, Server2, …)
fra_mv       | fra_sg                                      | (same as above) | (same as above)
grid_mv      | grid_sg                                     | (same as above) | (same as above)
If the database is clustered, the IG is a parent IG that contains a child IG for each cluster node. If the database is not clustered, the IG can contain the single server's initiators directly (no child IGs). Similarly, if the database is clustered, the "Grid" ASM disk group devices can be in their own SG and masking view; if the database is not clustered, the Grid masking view is optional.
There are several advantages to this design:
The following example shows Command-Line Interface (CLI) execution, from device creation all the way to masking views. Masking views can also be created in Unisphere using wizards; CLI is recommended mainly when the commands are scripted and saved for reuse.
Masking view creation example using the Command-Line Interface (CLI)
export SYMCLI_SID=<SID> # Storage ID
# Create ASM Disk Groups devices
symdev create -v -tdev -cap 40 -captype gb -N 3 # +GRID
symdev create -v -tdev -cap 200 -captype gb -N 16 # +DATA
symdev create -v -tdev -cap 50 -captype gb -N 8 # +REDO
symdev create -v -tdev -cap 150 -captype gb -N 4 # +FRA
symsg create grid_sg # Stand-alone SG for Grid infrastructure
symsg create fra_sg # Stand-alone SG for archive logs
symsg create data_sg # Child SG for data and control file devices
symsg create redo_sg # Child SG for redo log devices
symsg create dataredo_sg # Parent SG for database (data+redo) devices
# Add appropriate devices to each SG
symsg -sg grid_sg addall -devs 12E:130 # modify device IDs
symsg -sg data_sg addall -devs 131:133,13C:148 # as necessary
symsg -sg redo_sg addall -devs 149:150
symsg -sg fra_sg addall -devs 151:154
# Add the child SGs to the parent
symsg -sg dataredo_sg add sg data_sg,redo_sg
symaccess -type port -name 188_pg create # 188 is the storage SID
symaccess -type port -name 188_pg add -dirport 1D:4,1D:5,1D:6,1D:7
symaccess -type port -name 188_pg add -dirport 2D:4,2D:5,2D:6,2D:7
symaccess -type port -name 188_pg add -dirport 1D:8,1D:9,1D:10,1D:11
symaccess -type port -name 188_pg add -dirport 2D:8,2D:9,2D:10,2D:11
symaccess -type initiator -name dsib0144_ig create
symaccess -type initiator -name dsib0144_ig add -wwn 10000090faa910b2
symaccess -type initiator -name dsib0144_ig add -wwn 10000090faa910b3
symaccess -type initiator -name dsib0144_ig add -wwn 10000090faa90f86
symaccess -type initiator -name dsib0144_ig add -wwn 10000090faa90f87
symaccess -type initiator -name dsib0146_ig create
symaccess -type initiator -name dsib0146_ig add -wwn 10000090faa910aa
symaccess -type initiator -name dsib0146_ig add -wwn 10000090faa910ab
symaccess -type initiator -name dsib0146_ig add -wwn 10000090faa910ae
symaccess -type initiator -name dsib0146_ig add -wwn 10000090faa910af
symaccess -type initiator -name dsib0057_ig create
symaccess -type initiator -name dsib0057_ig add -wwn 10000090fa8ec6e8
symaccess -type initiator -name dsib0057_ig add -wwn 10000090fa8ec6e9
symaccess -type initiator -name dsib0057_ig add -wwn 10000090fa8ec8ac
symaccess -type initiator -name dsib0057_ig add -wwn 10000090fa8ec8ad
symaccess -type initiator -name dsib0058_ig create
symaccess -type initiator -name dsib0058_ig add -wwn 10000090fa8ec6ec
symaccess -type initiator -name dsib0058_ig add -wwn 10000090fa8ec6ed
symaccess -type initiator -name dsib0058_ig add -wwn 10000090fa8ec720
symaccess -type initiator -name dsib0058_ig add -wwn 10000090fa8ec721
symaccess -type initiator -name db_ig create # Parent IG for RAC
symaccess -type initiator -name db_ig add -ig dsib0144_ig
symaccess -type initiator -name db_ig add -ig dsib0146_ig
symaccess -type initiator -name db_ig add -ig dsib0057_ig
symaccess -type initiator -name db_ig add -ig dsib0058_ig
symaccess create view -name dataredo_mv -pg 188_pg -ig db_ig -sg dataredo_sg
symaccess create view -name fra_mv -pg 188_pg -ig db_ig -sg fra_sg
symaccess create view -name grid_mv -pg 188_pg -ig db_ig -sg grid_sg
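After the views are created, their contents can be verified. A suggested check (not part of the original script):

```shell
# Display each masking view to confirm its SG, PG, and IG contents
symaccess show view dataredo_mv
symaccess show view fra_mv
symaccess show view grid_mv
# List all masking views defined on the array
symaccess list view
```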
PowerMax uses thin devices exclusively, which means that storage capacity is consumed only when applications write to the devices. This approach saves flash capacity because it is consumed only on actual demand.
PowerMax devices can be sized from a few megabytes to multiple terabytes. Therefore, you might be tempted to create only a few very large devices, but consider the following:
While there is no one size that fits all databases, for the size and number of devices, we recommend the following: