A sample use case for Metro Volume and Red Hat High-Availability Cluster
Metro Volume protects hosts and applications in a variety of failure scenarios. The following example demonstrates these protection capabilities in a Red Hat High-Availability Cluster environment.
The Red Hat High Availability Add-On creates and configures high-availability clusters. A cluster comprises several components, including Pacemaker, Corosync, Clustered Logical Volume Manager (CLVM), and the GFS2 cluster file system.
This paper does not provide the procedure to install and configure the cluster. For information about Red Hat High-Availability Cluster, see the document Configuring and managing high availability clusters on the Red Hat documentation portal.
Figure 85 depicts the use of Metro Volume in a Red Hat High-Availability Cluster environment.
Two PowerStore systems are configured to replicate to each other on a replication network. Array 1 and Host 1 are in the same location, while Array 2 and Host 2 are in a different location. Therefore, the hosts are configured with the following host connectivity.
PowerStore | Linux host | Host connectivity
---------- | ---------- | -----------------
Array 1 | Host 1 | Host is co-located with this system
Array 1 | Host 2 | Host is co-located with the remote system
Array 2 | Host 2 | Host is co-located with this system
Array 2 | Host 1 | Host is co-located with the remote system
Three volumes are created on Array 1: metro-vol-001 and metro-vol-002 are members of a PowerStore volume group, metro_vg1. Metro protection is then enabled for the volume group and for the individual volume metro-vol-003. This creates a Metro replication session for each, along with mirror copies of the volume group and the volume on Array 2. For more information about how to configure Metro Volume, see the Metro Volume operations section in this document.
Metro volume/volume group | Array | Mapped to Linux host | Metro role
------------------------- | ----- | -------------------- | ----------
metro_vg1 (metro-vol-001 and metro-vol-002) | Array 1 | Host 1 and Host 2 | Preferred
metro_vg1 (metro-vol-001 and metro-vol-002) | Array 2 | Host 1 and Host 2 | Non-preferred
metro-vol-003 | Array 1 | Host 1 | Preferred
metro-vol-003 | Array 2 | Host 2 | Non-preferred
Each side of a Metro Volume is designated as either Preferred or Non-preferred. By default, the volume from which the Metro session is initiated becomes the Preferred side. Preferred volumes are not restricted to a single array; they can be distributed across both arrays, and Metro roles can be changed after the initial designation.
To view the Metro role and Metro Remote System for a volume in PowerStore Manager UI, go to Storage > Volumes. Use the column selection dropdown to add the Metro Role and Metro Remote System columns to the table.
In our example, for simplicity, because all metro sessions are initiated on Array 1, the volumes on Array 1 are designated as Preferred.
The role plays a crucial part in handling failure in split-brain situations through the Polarization mechanism. Without a witness, the Polarization mechanism always keeps the preferred volume online, while the non-preferred volume is taken offline. With a witness, the decision-making process gains intelligence, allowing it to handle a broader range of failure scenarios.
For more information about Polarization and Witness, see the Polarization section and the Witness section in this document.
The Metro volume group metro_vg1 consists of metro-vol-001 and metro-vol-002, and is mapped to Host 1 from both Array 1 and Array 2. This forms a uniform storage presentation where Host 1 has access to both sides of the Metro Volume, shown as blue lines in Figure 85.
The Metro volume group metro_vg1 is also mapped to Host 2 from both Array 1 and Array 2, shown as blue lines in Figure 85.
The solid lines in the figure represent Active/Optimized paths to the arrays, while the dashed lines represent Active/Non-optimized paths.
The volume metro-vol-003 from Array 1 is mapped to Host 1, while metro-vol-003 from Array 2 is mapped to Host 2. This forms a non-uniform storage presentation where each Linux host can only access one side of the Metro Volume, shown as green lines in Figure 85.
To ensure consistent device names across cluster hosts, a multipath alias is created for each Metro Volume based on its UUID. See the Device alias section in this document.
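For example, an alias entry of the following form can be added to /etc/multipath.conf on each host (the WWID shown is a placeholder; substitute the UUID that multipath -ll reports for the volume):

```
multipaths {
    multipath {
        wwid   "<volume-UUID>"    # placeholder; substitute the Metro Volume's UUID
        alias  metro-vol-001
    }
}
```

After editing the file, reload the multipath configuration (for example, with systemctl reload multipathd) so the alias takes effect on each host.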
After performing a SCSI scan on each clustered host, examine the paths and their ALUA states using the CLI or the script provided in the Examining path priority and ALUA state section in this document.
SCSI scan command: rescan-scsi-bus.sh -a
Run the map-paths.sh script on both Linux hosts to display the path states of each volume as follows.
On Host 1:
# Uniform presentation
metro-vol-001 sdan active 50 active/optimized array1-nb-iom0-p0
metro-vol-001 sdat active 50 active/optimized array1-nb-iom0-p1
metro-vol-001 sdab active 10 active/non optimized array2-na-iom0-p0
metro-vol-001 sdal active 10 active/non optimized array2-nb-iom0-p0
metro-vol-001 sdb active 10 active/non optimized array2-na-iom0-p1
metro-vol-001 sdl active 10 active/non optimized array2-nb-iom0-p1
metro-vol-001 sdn active 10 active/non optimized array1-na-iom0-p1
metro-vol-001 sdq active 10 active/non optimized array1-na-iom0-p0
# Uniform presentation
metro-vol-002 sdu active 50 active/optimized array1-na-iom0-p1
metro-vol-002 sdz active 50 active/optimized array1-na-iom0-p0
metro-vol-002 sdac active 10 active/non optimized array2-na-iom0-p0
metro-vol-002 sdam active 10 active/non optimized array2-nb-iom0-p0
metro-vol-002 sdas active 10 active/non optimized array1-nb-iom0-p0
metro-vol-002 sdaz active 10 active/non optimized array1-nb-iom0-p1
metro-vol-002 sdc active 10 active/non optimized array2-na-iom0-p1
metro-vol-002 sdm active 10 active/non optimized array2-nb-iom0-p1
# Non-uniform presentation
metro-vol-003 sdau active 50 active/optimized array1-nb-iom0-p0
metro-vol-003 sdba active 50 active/optimized array1-nb-iom0-p1
metro-vol-003 sdaa active 10 active/non optimized array1-na-iom0-p0
metro-vol-003 sdw active 10 active/non optimized array1-na-iom0-p1
On Host 2:
# Uniform presentation
metro-vol-001 sdan active 50 active/optimized array2-nb-iom0-p0
metro-vol-001 sdn active 50 active/optimized array2-nb-iom0-p1
metro-vol-001 sdab active 10 active/non optimized array2-na-iom0-p0
metro-vol-001 sdap active 10 active/non optimized array1-nb-iom0-p0
metro-vol-001 sdav active 10 active/non optimized array1-nb-iom0-p1
metro-vol-001 sdb active 10 active/non optimized array2-na-iom0-p1
metro-vol-001 sdp active 10 active/non optimized array1-na-iom0-p1
metro-vol-001 sdr active 10 active/non optimized array1-na-iom0-p0
# Uniform presentation
metro-vol-002 sdac active 50 active/optimized array2-na-iom0-p0
metro-vol-002 sdc active 50 active/optimized array2-na-iom0-p1
metro-vol-002 sdaa active 10 active/non optimized array1-na-iom0-p0
metro-vol-002 sdam active 10 active/non optimized array2-nb-iom0-p0
metro-vol-002 sdau active 10 active/non optimized array1-nb-iom0-p0
metro-vol-002 sdba active 10 active/non optimized array1-nb-iom0-p1
metro-vol-002 sdm active 10 active/non optimized array2-nb-iom0-p1
metro-vol-002 sdy active 10 active/non optimized array1-na-iom0-p1
# Non-uniform presentation
metro-vol-003 sdao active 50 active/optimized array2-nb-iom0-p0
metro-vol-003 sdo active 50 active/optimized array2-nb-iom0-p1
metro-vol-003 sdad active 10 active/non optimized array2-na-iom0-p0
metro-vol-003 sdd active 10 active/non optimized array2-na-iom0-p1
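As a quick sanity check, the presentation mode can be derived from such a listing by counting how many distinct arrays serve paths for each volume. The sketch below uses plain awk over a few of the Host 1 lines above: two arrays means uniform presentation, one array means non-uniform.

```shell
# Classify each volume by the number of distinct arrays in its path list.
# The array name is the first dash-separated token of the last (port) field.
result=$(awk '{ split($NF, port, "-"); seen[$1 SUBSEP port[1]] = 1 }
  END { for (k in seen) { split(k, p, SUBSEP); n[p[1]]++ }
        for (v in n) print v, (n[v] > 1 ? "uniform" : "non-uniform") }' <<'EOF'
metro-vol-001 sdan active 50 active/optimized array1-nb-iom0-p0
metro-vol-001 sdab active 10 active/non optimized array2-na-iom0-p0
metro-vol-003 sdau active 50 active/optimized array1-nb-iom0-p0
metro-vol-003 sdaa active 10 active/non optimized array1-na-iom0-p0
EOF
)
echo "$result"
```

The same check can be run against the full listing captured from map-paths.sh on either host.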
Clustered LVM (CLVM) is an extension of the standard Logical Volume Manager. It allows multiple Linux hosts to manage a shared storage pool simultaneously. In our example, two CLVM logical volumes are created to manage the Metro Volumes on the clustered hosts.
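The shared-volume setup can be sketched as follows. The volume-group and logical-volume names are hypothetical, the commands require root and a running cluster, and the exact locking setup depends on the RHEL release (clvmd with a clustered VG as shown here; newer releases use lvmlockd with vgcreate --shared instead):

```shell
# Run on one cluster node; the clustered VG becomes visible to all nodes.
pvcreate /dev/mapper/metro-vol-001 /dev/mapper/metro-vol-002
vgcreate -cy metro_clvm_vg /dev/mapper/metro-vol-001 /dev/mapper/metro-vol-002   # -cy marks the VG clustered
lvcreate -l 100%FREE -n metro_lv1 metro_clvm_vg
```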
GFS2 is a cluster file system for RHEL. It allows multiple hosts in a cluster to mount the file system simultaneously, enabling concurrent file access. GFS2 is commonly used with CLVM in an HA environment. A GFS2 file system is created on each CLVM logical volume and mounted on both Host 1 and Host 2.
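Creating and mounting such a file system can be sketched as follows (hypothetical cluster name, device path, and mount point; -t takes the form <clustername>:<fsname>, and -j allocates one journal per cluster node):

```shell
# lock_dlm uses the cluster's distributed lock manager for concurrent access.
mkfs.gfs2 -p lock_dlm -t mycluster:metro_fs1 -j 2 /dev/metro_clvm_vg/metro_lv1
mkdir -p /mnt/metro_fs1
mount -t gfs2 /dev/metro_clvm_vg/metro_lv1 /mnt/metro_fs1   # repeat the mount on each node
```

In a Pacemaker cluster, the mount is typically managed as a Filesystem resource rather than mounted manually, so that the cluster controls where and when the file system is available.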
You can host application workloads on these GFS2 file systems. In our example, we distribute the data files of three KVM VMs across these file systems: