All migrations to Dell PowerMax arrays use Metro-based NDM built upon SRDF/Metro active/active technology. For more information about SRDF/Metro, see the SRDF/Metro Overview and Best Practices document.
Benefits of using Metro-based NDM include the following:
Previous versions of NDM provided with Solutions Enabler 8.3, Unisphere 8.4, and HYPERMAX OS 5977.1125 releases allowed data to be migrated from a VMAX (5876) to a VMAX3 (5977) array without application downtime.
With the release of Solutions Enabler 9.0 and HYPERMAX OS Q2 2018, the NDM feature was enhanced to help automate the process of moving applications from a VMAX3 (5977) or VMAX All Flash (5977) array to another VMAX All Flash (5978) or PowerMax (5978) array.
With Solutions Enabler 9.1 and higher, full interfamily migrations are possible (5977 to 5977, 5978 to 5978 or 6079, or 6079 to 6079).
The source hardware is not a limiting factor. Metro-based NDM is supported from arrays running 5977 or 5978 code to arrays running 5977, 5978, or 6079 code, regardless of the underlying technology.
With SRDF/Metro, the session goes active/active only after all SCSI information and application data is synchronized from R1 to R2 using SRDF adaptive copy technology. The time to become fully active/active therefore depends largely on how long the data transfer takes to finish.
To improve the user experience with NDM, the software was enhanced so that the SRDF/Metro session goes active/active instantly on NDM create. This instant active mode of operation applies only to the Metro technology underlying NDM; it does not apply to regular SRDF/Metro used for running active/active applications. As a result, both sides of the SRDF/Metro pair become active and read/write to the host within the duration of the create command.
NDM is supported across SRDF synchronous distances. However, because of the requirement that the host see both the source and target storage, migrations are typically performed between arrays within a data center.
The following steps and diagram describe the process flow for Metro-based NDM:
The following steps and diagram describe the component flow for Metro-NDM with precopy:
Two of the underlying processes that ensure NDM is non-disruptive are spoofing and swapping device IDs between the source and target devices, which together maintain continuous device visibility to the host.
NDM can migrate data and cut over to the target array non-disruptively by both swapping device IDs between the source and target devices and manipulating the paths from the host to both arrays. The device ID contains the device’s unique WWN and other information about it, such as a device identifier that the user has assigned to a device through Solutions Enabler or Unisphere. All this information is copied to the target devices.
NDM performs the data migration and device ID swap without the host being aware. The path management changes appear as either the addition of paths or the removal of paths to the existing source device. To the host and application, there is no change in the device that it is accessing and access to the device is maintained throughout the entire migration process.
For an example of this in action, see the section Examine paths and device post commit.
Devices included in a migration session on the source array can remain in existing replication sessions throughout the migration. NDM evaluates the state of any current replication sessions before proceeding with the migration and makes sure that they are in the proper state to allow the migration to succeed. By maintaining existing replication, NDM ensures there is no Recovery Point Objective (RPO) impact during the period of the migration.
Though existing replication sessions can be left in place during the migration, replication relationships are not migrated to the target array. These replication resources need to be created on the target array, if required, at the appropriate point in the migration.
For example, SRDF replication can be configured between the target array and its remote array while in the CutoverSyncing state or after the CutoverSync state is reached. The new DR RDF pairs can then be allowed to synchronize before the commit so that DR is maintained throughout the migration. SRDF can also be set up in the CutoverNoSync state, which is reached when the sync command is used to stop replication. For local SnapVX sessions running against the source volumes, existing sessions on the source array can continue as normal during the migration, and new sessions can also be created while the new SRDF to the DR site is configured.
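The ordering rule above can be sketched as a simple state check. The migration state names follow the document; the helper function itself is an illustrative assumption, not part of any Dell tooling.

```python
# Sketch of the ordering rule: new DR SRDF pairs for the target array
# may be configured once the migration reaches one of the cutover states
# named in the text. This helper is illustrative only.

DR_SETUP_STATES = {"CutoverSyncing", "CutoverSync", "CutoverNoSync"}

def can_configure_target_dr(migration_state):
    """Return True when new SRDF DR pairs may be created on the target."""
    return migration_state in DR_SETUP_STATES

print(can_configure_target_dr("Created"))       # False
print(can_configure_target_dr("CutoverSync"))   # True
```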
Storage on the source and target arrays that is involved in the migration of an application should never be altered outside of the NDM commands, nor should the migration resources be managed outside of them. If any changes to the migration session are detected when a migration operation is performed, the operation is blocked until those changes are undone, after which the migration operation can proceed as expected.
The following are examples of manual changes made to the NDM session that will cause the session to stop or fail:
Most of the steps for configuring and unconfiguring NDM are done automatically using the environment setup and remove commands. Prior to running the setup, the following steps are required:
Note: SRDF ports do not need to be dedicated to NDM operations. Ports involved in ongoing SRDF disaster recovery operations may be shared with NDM sessions, but analysis should be performed prior to setting up NDM to make certain there is adequate bandwidth to handle both DR and migration traffic.
A minimum of two SRDF links (FC or GigE) are required to support an NDM session. These ports must be spread across at least two directors.
The migration source or target devices cannot be tagged for RecoverPoint use.
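The SRDF link requirement above (at least two FC or GigE links, spread across at least two directors) can be expressed as a small validation check. This is an illustrative sketch: the "director:port" string format and the port names used below are assumptions, not Solutions Enabler output.

```python
# Illustrative check of the NDM SRDF link minimum: at least two ports,
# spread across at least two directors. Ports are assumed to be given
# as "director:port" strings (for example "RF-1E:8"); names are made up.

def meets_ndm_link_minimum(srdf_ports):
    """Return True if the port list satisfies the NDM minimum."""
    directors = {p.split(":")[0] for p in srdf_ports}
    return len(srdf_ports) >= 2 and len(directors) >= 2

# Two ports on the same director are not sufficient.
print(meets_ndm_link_minimum(["RF-1E:8", "RF-1E:9"]))   # False
print(meets_ndm_link_minimum(["RF-1E:8", "RF-2E:8"]))   # True
```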
Cutover NDM supports hosts that boot directly from the VMAX array. The host boot BIOS must be updated to point to the target volume so that, when the host is rebooted later, it finds the volume containing the operating system. For details on boot drive configuration, refer to the vendor-specific HBA management or BIOS guides.
All features and functionality for Data Mobility with Non-Disruptive Migration are fully supported through the REST API. See Dell PowerMax API documentation for further information.
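As a rough illustration of driving NDM through REST, the sketch below only constructs a request URL for a storage-group migration resource. The base URL, API version, array ID, storage group name, and the endpoint path itself are all illustrative assumptions; consult the Dell PowerMax REST API documentation for the actual resource names.

```python
# Hypothetical sketch of addressing an NDM migration resource over REST.
# The path below is an assumption for illustration, not the documented
# Unisphere for PowerMax API; check the Dell REST API reference.

BASE = "https://unisphere.example.com:8443/univmax/restapi"

def migration_sg_url(version, array_id, sg_name):
    """Build an (assumed) migration resource URL for a storage group."""
    return f"{BASE}/{version}/migration/symmetrix/{array_id}/storagegroup/{sg_name}"

print(migration_sg_url("100", "000197900123", "prod_sg"))
```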
The RDF group created for NDM between two arrays can be identified by its label, which follows the format M_XXXXYYYY: XXXX is the last four digits of the serial number of the lower-numbered storage array, and YYYY is the last four digits of the higher-numbered array. This group is used for all NDM migrations between the two arrays and is created automatically as part of the environment setup. The following figure (Figure 3) shows the group as viewed through the Data Protection > SRDF Groups view in Unisphere for PowerMax.
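The labeling convention can be sketched as a short function. The serial numbers used in the example are made up for illustration.

```python
# Sketch of the NDM RDF group label convention: M_XXXXYYYY, where XXXX
# is the last four digits of the lower-numbered array serial and YYYY
# the last four digits of the higher-numbered one. Serials are made up.

def ndm_rdf_label(serial_a, serial_b):
    """Derive the M_XXXXYYYY label for an array pair."""
    low, high = sorted([serial_a, serial_b], key=int)
    return f"M_{low[-4:]}{high[-4:]}"

print(ndm_rdf_label("000197900123", "000197900045"))  # M_00450123
```

Note that the label is the same regardless of argument order, which matches the convention: one group, and one label, per array pair.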
Multiple-environment setup operations can be performed for a single source array, provided that a different target array is specified for each migration environment. All NDM RDF groups on a source or target array can be in use simultaneously, for concurrent migrations to or from an array.
For example, a single PowerMax, VMAX All Flash, or VMAX3 target array can have multiple NDM RDF groups, each connected to one of four different source arrays. This means that the target array can be the target of migrations from each of those four arrays in a consolidation use case.
Likewise, a single source array can have multiple NDM RDF groups, each connected to one of four different targets. This means that storage groups on the source array can be migrated to any of those four PowerMax arrays.
When migrations are completed, the environment can be removed for each array pair. The environment remove operation removes the NDM RDF group between the two arrays, provided that no devices on either array have an RDF mirror in the NDM RDF group.
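The precondition for the environment remove operation can be modeled as a check over the devices on both arrays. The dictionary-based device records below are hypothetical stand-ins, not Solutions Enabler objects.

```python
# Illustrative precondition for environment removal: the NDM RDF group
# can be deleted only when no device on either array still has an RDF
# mirror in that group. Device records here are hypothetical dicts.

def can_remove_environment(devices, ndm_rdf_group):
    """True when no device retains an RDF mirror in the NDM group."""
    return all(d.get("rdf_group") != ndm_rdf_group for d in devices)

devices = [{"name": "dev1", "rdf_group": 250},
           {"name": "dev2", "rdf_group": None}]
print(can_remove_environment(devices, 250))  # False: dev1 still paired
```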
When NDM sessions are created, NDM configures the following items on the target array with the same names as those on the source array:
Both initiator groups and port groups can exist in multiple masking views, so these groups are reused when applicable.
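The reuse behavior above can be modeled with a simple get-or-create pattern. The dictionary stands in for the target array's existing groups, and the group names are hypothetical.

```python
# Illustrative model of the reuse rule: when NDM provisions the target
# array, an initiator group or port group with a matching name is
# reused rather than recreated. Names here are hypothetical.

def get_or_create_group(existing, name, members):
    """Reuse a group by name if present, otherwise create it."""
    if name in existing:
        return existing[name], False          # reused
    existing[name] = list(members)
    return existing[name], True               # created

target_igs = {"app1_ig": ["wwn_a", "wwn_b"]}
_, created = get_or_create_group(target_igs, "app1_ig", ["wwn_a", "wwn_b"])
print(created)  # False: existing IG reused
```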
A host may also be attached to multiple source arrays. For example, if a storage group spans two source arrays, then after the storage is migrated, the target array contains two sets of SGs, IGs, PGs, and MVs, one for each source array.
When the first SG on the first array is migrated to the target array, the following occurs:
When a second SG on the second source array is migrated to the target array, the following rules apply:
Alternatively, you can manually create the PG on the target in advance. Then, select this as the target PG for the NDM session or create it as part of the NDM create process. This option is new for Solutions Enabler 9.1.
All migrations are performed against an SG, which is the data container that is migrated with NDM. The following rules apply:
For hardware and software requirements, refer to the PowerMax/VMAX All Flash/VMAX3 Features Simple Support Matrix.