Dell EMC PowerMax and VMAX: Non-Disruptive Migration Best Practices and Operational Guide > NDM overview
Benefits of using NDM include the following:
Non-Disruptive Migration leverages VMAX SRDF replication technologies to move the application data to the new storage array. It also uses PowerMax/VMAX auto-provisioning, in combination with Dell EMC PowerPath or a supported host multipathing solution, to manage host access to the data during the migration process.
NDM is available in two forms, pass-through (cutover) and Metro-based, depending on the source array involved in the migration session. From a user standpoint the two are very similar in terms of interaction, but their underlying architectures are fundamentally different.
Note: Refer to the extensive support matrix when planning to migrate applications using NDM. In addition, review the appendixes in this document for caveats on specific host OS and multipathing combinations before attempting a migration.
Since the initial release of the NDM feature, cutover has been the method by which migrations from an array running 5876.xx.xx code are performed. This process uses what is referred to as the three Cs: create, cutover, and commit.
The migration of an application from the source to the target array is completed using a sequence of user-initiated operations, each of which is fully automated. These migrations are performed at the storage group (SG) level. The entire migration of a storage group can be accomplished with a few clicks in Unisphere or through a handful of short Solutions Enabler commands.
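As a sketch, a three-Cs migration maps to a short Solutions Enabler sequence like the one below. The serial numbers and storage group name are hypothetical, and the exact symdm flags should be confirmed against the Solutions Enabler CLI documentation for your code level; the script only prints the assumed commands rather than executing them.

```shell
# Dry-run sketch of a cutover NDM migration using the three Cs.
# SRC, TGT, and SG are hypothetical; verify symdm syntax against the
# Solutions Enabler CLI documentation for your code level.
SRC=000195700536   # source VMAX serial (hypothetical)
TGT=000197800128   # target PowerMax serial (hypothetical)
SG=App1_SG         # storage group being migrated (hypothetical)

run() { echo "+ $*"; }   # replace 'echo' with real execution on a management host

run symdm create  -src_sid "$SRC" -tgt_sid "$TGT" -sg "$SG"   # provision target, start copy
# ...host rescan and validation happen here, before the cutover...
run symdm cutover -src_sid "$SRC" -tgt_sid "$TGT" -sg "$SG"   # shift I/O service to the target
run symdm commit  -src_sid "$SRC" -tgt_sid "$TGT" -sg "$SG"   # finalize; source devices released
```

Each step can be validated (and, before commit, cancelled) independently, which is what makes the sequence safe to drive one operation at a time.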
After the create operation completes, the administrator issues a host rescan so that the host discovers the paths to the newly created devices. Once this is complete, I/O issued by the application is directed to either the source or the target array by the host multipathing software. The array operating system ensures that all I/Os directed to the target by the host are actually serviced by the source array until the cutover.
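On a Linux host, the rescan step might look like the guarded sketch below. The paths and tool names are assumptions that vary by OS, HBA, and multipathing solution; consult the relevant host connectivity guide for the exact procedure.

```shell
# Illustrative Linux host rescan after the NDM create completes.
# Assumes the native SCSI midlayer; paths and tools vary by OS and HBA.
for h in /sys/class/scsi_host/host*/scan; do
  [ -w "$h" ] && echo "- - -" > "$h" || true   # trigger a bus rescan on each adapter
done
# Verify that paths to both arrays are now visible to the multipathing software:
command -v multipath >/dev/null 2>&1 && multipath -ll || true
```

After the rescan, the host should report paths to both the source and target devices, even though all I/O is still serviced by the source array.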
Other supported NDM operations include the following:
Two of the underlying processes that ensure NDM is non-disruptive are the technology's spoofing and swapping of device IDs between the source and target devices, and its manipulation of host paths, which together maintain device visibility at all times.
NDM is able to migrate data and cut over to the target array non-disruptively by both swapping device IDs between the source and target devices and manipulating the paths from the host to both arrays. The device ID contains the device's unique WWN and other information, such as any device identifier that the user has assigned through Solutions Enabler or Unisphere. All of this information is copied to the target devices.
NDM performs the data migration and device ID swap without the host being aware. The path management changes appear as either the addition or the removal of paths to the existing source device. To the host and application, there is no change in the device being accessed, and access to the device is maintained throughout the entire migration process.
NDM is supported across SRDF synchronous distances. However, because of the requirement that the host see both the source and target storage, migrations are typically performed between arrays within a data center.
Devices included in a migration session on the source array can remain in existing replication sessions throughout the migration. NDM evaluates the state of any current replication sessions before proceeding with the migration and makes sure that they are in the proper state to allow the migration to succeed. By maintaining existing replication, NDM ensures there is no RPO impact during the period of the migration.
Though existing replication sessions can be left in place during the migration, replication relationships are not migrated to the target array. These replication resources need to be created on the target array, if required, at the appropriate point in the migration.
For example, SRDF replication can be configured between the target array and its remote array while in the CutoverSyncing state or after the CutoverSync state is reached. The new DR RDF pairs can then be allowed to synchronize before the commit, so that DR is maintained throughout the migration. SRDF can also be set up in the CutoverNoSync state, which is reached when the sync command is used to stop replication. For local SnapVX sessions running against the source volumes, existing sessions on the source array can continue as normal during the migration, and new sessions can be created at the same time that the new SRDF connection to the DR site is configured.
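Creating the new DR pairs from the target array might be sketched as follows. The serial number, RDF group number, and device-pair file are hypothetical, and the symrdf flags should be verified against the SRDF CLI documentation; the script only prints the assumed command.

```shell
# Dry-run sketch: configure new SRDF DR from the migration target array.
# All serials, group numbers, and file names are hypothetical.
TGT=000197800128      # migration target array (hypothetical)
DR_RDFG=10            # RDF group to the remote DR array (hypothetical)

run() { echo "+ $*"; }   # replace 'echo' with real execution on a management host

# pairs.txt lists target-array devices and their remote DR partners
run symrdf createpair -sid "$TGT" -rdfg "$DR_RDFG" -f pairs.txt -type R1 -establish
# Let the new pairs synchronize before issuing the NDM commit, so that DR
# coverage is maintained through the end of the migration.
```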
Storage on the source and target arrays that is involved in the migration of an application should never be altered, and the migration resources should not be managed, outside of the NDM commands. If any such changes are detected when a migration operation is performed, the operation is blocked until the changes are undone, after which the migration can proceed as expected.
The following are examples of manual changes made to the NDM session that will cause the session to stop or fail:
Most of the steps for configuring and unconfiguring NDM are performed automatically by the environment setup and remove commands. Before running the setup, the following steps are required:
Note: SRDF ports do not need to be dedicated to NDM operations. Ports involved in ongoing SRDF disaster recovery operations may be shared with NDM sessions, but analysis should be performed prior to setting up NDM to make certain there is adequate bandwidth to handle both DR and migration traffic.
A minimum of two SRDF links (FC or GigE) are required to support an NDM session. These ports must be spread across at least two directors.
The migration source or target devices cannot be tagged for RecoverPoint use.
Cutover NDM supports hosts that boot directly from the VMAX array. The host boot BIOS must be updated to point to the target volume so that, when the host is rebooted at a later date, it finds the volume containing the operating system. For details on boot drive configuration, refer to the vendor-specific HBA management or BIOS guides.
Both methods of NDM, Pass-through and Metro-based, are fully supported through the REST API.
The RDF group created for NDM between two arrays can be identified by its label, which follows the format M_XXXXYYYY, where XXXX is the last four digits of the serial number of the lower-numbered storage array and YYYY is the last four digits of the higher-numbered array. This group is used for all NDM migrations between the two arrays and is created automatically as part of the environment setup.
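As a sketch, the expected label for a given array pair can be derived from the two serial numbers; the serials below are hypothetical.

```shell
# Derive the expected NDM RDF group label M_XXXXYYYY from two array
# serial numbers (hypothetical values).
src=000195700123
tgt=000197800456
# The lower-numbered array contributes the first four digits. A lexical
# comparison is sufficient because serials are equal-length and zero-padded.
if [ "$src" \< "$tgt" ]; then lo=$src; hi=$tgt; else lo=$tgt; hi=$src; fi
label="M_${lo: -4}${hi: -4}"     # last four digits of each serial
echo "$label"                    # M_01230456
```

This makes it easy to spot the NDM group in `symrdf` or Unisphere RDF group listings among any pre-existing DR groups.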
Multiple-environment setup operations can be performed for a single source array, provided that a different target array is specified for each migration environment. All NDM RDF groups on a source or target array can be in use simultaneously, for concurrent migrations to or from an array.
For example, a single PowerMax, VMAX All Flash, or VMAX3 target array can have multiple NDM RDF groups, each connected to one of four different source VMAX arrays. This means that the target array can be the target of migrations from each of those four VMAX arrays in a consolidation use case.
Likewise, a single VMAX source array can have multiple NDM RDF groups, each connected to one of four different target PowerMax, VMAX All Flash, or VMAX3 arrays. This means that the VMAX array can be the source of migrations to each of those four target arrays.
When migrations are completed, separate environment remove operations are required for each array pair. The environment remove operation removes the NDM RDF group between the two arrays, provided that no devices on either array have an RDF mirror in the NDM RDF group.
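The environment lifecycle for one array pair can be sketched as below. The serials are hypothetical and the exact symdm environment flags should be verified against the Solutions Enabler CLI documentation; the script only prints the assumed commands.

```shell
# Dry-run sketch of the NDM environment lifecycle for one array pair.
SRC=000195700536   # hypothetical source serial
TGT=000197800128   # hypothetical target serial

run() { echo "+ $*"; }   # replace 'echo' with real execution on a management host

run symdm environment -src_sid "$SRC" -tgt_sid "$TGT" -setup    # creates the M_... RDF group
# ...one or more migrations between this array pair run here...
run symdm environment -src_sid "$SRC" -tgt_sid "$TGT" -remove   # allowed once no NDM RDF mirrors remain
```

The remove step fails by design while any device on either array still has an RDF mirror in the NDM group, so it doubles as a check that all migrations between the pair are truly finished.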
When NDM sessions are created, NDM configures the following items on the target array with the same names as those on the source array:
Both initiator groups and port groups can exist in multiple masking views, so these groups are reused when applicable.
A host also may be attached to multiple source arrays. For example, if a storage group spans two source arrays, when the storage is migrated, the target array contains two sets of SGs, IGs, PGs, and MVs, one for each source array.
When the first SG on the first array is migrated to the target array, the following occurs:
When a second SG on the second source array is migrated to the target array, the following rules apply:
Alternatively, you can manually create the PG on the target in advance and select it as the target PG for the NDM session, or create it as part of the NDM create process. This option was introduced in Solutions Enabler 9.1.
All migrations are performed against an SG, which is the data container that is migrated with NDM. The following rules apply:
For hardware and software requirements, refer to the PowerMax/VMAX All Flash/VMAX3 Features Simple Support Matrix.