Dell Unity’s Native Asynchronous Replication feature allows supported storage resources to be replicated locally within the same system or remotely between systems. The following sections outline the supported configurations for asynchronous replication. For more information about which systems support asynchronous replication, see Appendix C: Replication Support Across Platforms.
Dell Unity’s Native Asynchronous Replication feature allows Block and File resources to be replicated locally within the same system. When replicating file systems or VMware NFS datastores, the NAS Server must also be replicated. When configuring local replication, the source and destination storage resources cannot exist within the same Pool. By replicating to a different Pool, a storage resource is protected against the unlikely event the source Pool encounters a data unavailable situation. All asynchronous replication operations are supported when local replication is configured. Replication Connections and Interfaces are not required when local replication is configured. Figure 34 below shows an example of local replication.
Dell Unity’s Native Asynchronous Replication feature is supported in many different topologies, and deployment models will vary depending on the requirements of the configuration. At a system level, the following configurations are supported:
System Level
One-Directional replication
Bi-Directional replication
One-to-Many replication
Many-to-One replication
Figure 35 provides a graphical view of the supported topologies listed above. Note that the figure uses LUNs to represent the storage resource, but file resources are also supported in these topologies. In all topologies mentioned, a Dell Unity All Flash, Dell Unity Hybrid, or Dell UnityVSA system can be used for any system in the configuration. For a list of other Dell storage systems that support asynchronous replication to and from Dell Unity, see Appendix C. Replication Interfaces must be configured on each system participating in remote replication, and a Replication Connection must also be configured between each system pair before replication sessions can be created. Asynchronous replication allows for many different deployment models to meet the needs of an organization.
One-Directional replication is typically deployed when only one of the systems will be used for production I/O. The second system is a replication target for all production data and sits idle within the same data center or a remote location. If the need arises, the DR system can be placed into production and host production I/O. In this scenario, mirroring the production system’s configuration on the DR system is suggested, as each system would then have the same performance potential. For physical systems this would mean mirroring the drive configurations and Pool layout, while on Dell UnityVSA systems this would mean configuring similar Virtual Drives and Pools.
The Bi-Directional replication topology is typically used when production I/O needs to be spread across multiple systems or locations. The systems may exist within a single data center or in different, remote locations. When using this replication topology, production I/O from each system is replicated to the peer system. If there is an outage, one of the systems can be promoted as the primary production system, and all production I/O can be sent to it. Once the outage is addressed, the replication configuration can be changed back to its original configuration. This replication topology ensures both systems are in use by production I/O at all times.
The One-to-Many replication topology is usually deployed when production exists on a single system, but replication needs to occur to multiple remote systems. This replication topology can be used to replicate data from a production system to a remote location to provide local data access to a remote team. At the remote location, Dell Unity Snapshots can be used to provide host access to the local organization or test team. In this topology, any combination of Dell Unity All Flash systems, Dell Unity Hybrid systems, and Dell UnityVSA systems can be used. The production system may be an All Flash system replicating to multiple physical All-Flash or Hybrid systems and/or Dell UnityVSA systems.
The Many-to-One replication topology is deployed when multiple production systems exist and replicating to a single system to consolidate the data is required. This topology is useful when multiple production data sites exist, and data must be replicated from these sites to a single DR data center. One example of this configuration is Remote Office Branch Office (ROBO) locations. A Dell UnityVSA may be deployed at each ROBO site, with all sites replicating back to a single All Flash or Hybrid system. Utilizing Dell UnityVSA at ROBO locations eliminates the need for a physical Dell Unity system at each site.
For the One-to-Many and Many-to-One replication topology examples in Figure 35, One-Directional replication is depicted. One-Directional replication is not a requirement when configuring the One-to-Many and Many-to-One replication topologies. Each individual Replication Connection can be used for bi-directional replication between systems, which allows for more replication options than what is depicted.
Starting with Dell UnityOS version 4.4, Dell Unity supports the MetroSync feature, also known as native file synchronous replication, which allows a file resource to be replicated synchronously to one destination system and asynchronously to another destination system simultaneously. For more information, see the Dell Unity: MetroSync white paper on Dell Technologies Info Hub.
Starting with Dell UnityOS version 5.0, asynchronously replicated file resources can also be configured in advanced replication topologies. This allows for configurations such as fan-out and cascading replication at the granularity of a NAS Server and its associated file resources. Prior to this, these configurations were not supported since asynchronous replication was limited to a single destination. The ability to replicate and store the same dataset on multiple systems provides additional data protection and enables use cases such as content distribution. This feature does not support synchronous replication in versions prior to the UnityOS 5.2 release and is not available for block resources.
To use this feature, all systems in the topology must be running Dell UnityOS version 5.0 or later. Replication between Dell UnityOS 5.0 and 4.x is still supported in a one-directional configuration. This feature is available on both physical and virtual Dell Unity systems. The following advanced topologies are supported:
File Resource Level
Example: A → B and A → C

Example:
A → B (Synchronous)
A → C (Asynchronous)
A → D (Asynchronous)
A → E (Local Asynchronous)

Example:
A → B (Synchronous)
A → C (Asynchronous)
B → D (Asynchronous)

Example:
A → B (Asynchronous)
B → C (Synchronous)
Fan-out replication configurations allow a file resource to be replicated to up to four different destination systems. Cascading replication configurations allow a destination file resource to be replicated again to another system. When the RPO is reached for B → C, the data that is replicated is based on the last completed sync from A. This means that if a sync from A → B is in progress when the B → C RPO is reached, the new changes from A are not replicated until the next B → C RPO is reached.
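The behavior above can be sketched as a minimal model (illustrative only, not Dell Unity code): in an A → B → C cascade, the middle system only forwards data up to the last fully completed sync from A, never data from a sync that is still in flight.

```python
# Hypothetical model of the B system in an A -> B -> C cascade.
class CascadedHop:
    def __init__(self):
        self.last_completed_from_a = 0   # change set last fully synced A -> B
        self.in_flight_from_a = None     # change set currently syncing, if any

    def start_sync_from_a(self, change_set):
        self.in_flight_from_a = change_set

    def complete_sync_from_a(self):
        self.last_completed_from_a = self.in_flight_from_a
        self.in_flight_from_a = None

    def rpo_reached_to_c(self):
        # B -> C replicates only the last completed sync from A, even if a
        # newer sync from A is still in progress.
        return self.last_completed_from_a
```

For example, if change set 2 is still syncing from A when the B → C RPO fires, only change set 1 is forwarded; change set 2 waits for the next B → C RPO interval.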
With cascaded configurations, you can only failover to the directly adjacent site. For example, assume a cascaded configuration from A → B → C. In this configuration, you cannot failover directly from A → C. To accomplish this, you would need to failover from A → B and then failover from B → C.
A combination of fan-out and cascading can be configured if each resource does not exceed four total replication sessions. When designing replication topologies, it is important to note that a maximum of four total replication sessions can be created for each resource. For example, if you create a cascade from A → B → C, then C can only fan-out to three other systems due to the existing session from B → C. The tested limit for the number of cascaded sessions is three, but there is no hard limit.
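The four-session limit can be expressed as a small checker (a sketch with illustrative names, not a Dell-provided tool): incoming and outgoing sessions both count toward a resource's total on each system.

```python
from collections import Counter

# Documented limit: at most four replication sessions per resource,
# counting both incoming and outgoing sessions on each system.
MAX_SESSIONS_PER_RESOURCE = 4

def validate_topology(sessions):
    """sessions: list of (source, destination) pairs for one resource's copies.
    Returns the systems that exceed the per-resource session limit."""
    count = Counter()
    for src, dst in sessions:
        count[src] += 1
        count[dst] += 1
    return [system for system, n in count.items()
            if n > MAX_SESSIONS_PER_RESOURCE]
```

In the cascade example, C already carries the incoming B → C session, so three additional fan-out sessions bring it to the limit of four; a fourth would be flagged.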
Figure 36 shows an example of a supported advanced replication topology. An example of a cascaded topology is replicating from Hopkinton → Boston → London. An example of a fan-out topology is San Francisco → Tokyo and Mexico City. Also shown in the figure is the ability for each session in the topology to have its own custom RPO configured. The valid range for the RPO value is 5 minutes through 1440 minutes (24 hours).
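The documented RPO bounds reduce to a trivial range check, sketched here (function name is illustrative):

```python
# Documented RPO bounds for an asynchronous replication session.
RPO_MIN_MINUTES = 5
RPO_MAX_MINUTES = 1440  # 24 hours

def is_valid_rpo(minutes):
    """Return True when the requested RPO falls inside the supported range."""
    return RPO_MIN_MINUTES <= minutes <= RPO_MAX_MINUTES
```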
Advanced replication topologies can be used in conjunction with Proxy NAS Servers to enable access from any one of the destination systems. This is useful for use cases such as DR testing, test/dev, and analytics. Also, NDMP can be enabled on any one of the systems for backup operations.
Once a replication session is configured between two systems, a second replication session for the same resource cannot be configured using the same two systems. A fan-out configuration requires each destination system to be a unique physical or virtual system. However, one of the sessions can be configured for local replication to a different pool within the same system. The local replication session counts toward the maximum limit of four sessions for that resource.
It is prohibited to configure a replication session to a resource that is already a replication destination for another resource. For example, assume a fan-out topology with A → B and A → C. In this configuration, you cannot create a replication session from B → C as that results in multiple sessions writing to the same destination. You also cannot replicate a resource back to its original source system.
It is also prohibited to create a configuration or run operations that result in multiple systems replicating to the same destination resource. For example, assume a cascaded topology with A → B and B → C. If you initiate a failover from B → C, you cannot initiate a resume operation to start replicating from C → B.
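The two restrictions above can be captured in a small validator (a sketch, not a Dell-provided check): a proposed session is rejected if its destination is already a replication destination, or if it would replicate a resource back toward its source.

```python
def find_conflicts(existing, proposed):
    """existing: list of (source, destination) session pairs already configured.
    proposed: a single (source, destination) pair to validate.
    Returns the reasons the proposed session is rejected ([] if allowed)."""
    reasons = []
    src, dst = proposed
    destinations = {d for _, d in existing}
    if dst in destinations:
        reasons.append(f"{dst} is already a replication destination")
    if (dst, src) in existing:
        reasons.append(f"{src} -> {dst} replicates back to its source")
    return reasons
```

Using the fan-out example, with A → B and A → C in place, proposing B → C is rejected because C is already a destination, while proposing C → D to a fresh system passes.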
After advanced replication is configured, it is crucial to document the topology properly for future reference. This is very valuable during disaster scenarios, where failing over the correct session is necessary to restore data access and minimize downtime. Each system is only aware of the systems that it replicates to, so it cannot provide an end-to-end topology view. Remember to keep this documentation updated if any system is added, removed, or has its role changed in the topology. For example, if you have a fan-out from A → B and A → C, and a failover is initiated from A → B, then resuming the failed-over session from B → A transforms the topology into a cascaded configuration from B → A → C.
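The role change in that example can be modeled in a few lines (an illustrative sketch, not Dell Unity behavior code): resuming a failed-over session reverses its direction, while every other session keeps its original endpoints.

```python
def resume_after_failover(sessions, failed_over):
    """sessions: list of (source, destination) pairs describing the topology.
    failed_over: the one session that was failed over and is being resumed.
    Returns the new topology with that session's direction reversed."""
    src, dst = failed_over
    return [(dst, src) if s == failed_over else s for s in sessions]
```

Starting from the fan-out A → B and A → C, resuming the failed-over A → B session from B yields B → A plus the untouched A → C, which reads end to end as the cascade B → A → C.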
Management of advanced replication sessions is performed at the NAS Server level, and operations are automatically propagated to all associated file system sessions that are replicated to the same system. These operations include pause, resume, sync, failover with sync, failover, and failback. When failing over sessions in an advanced topology, it is important to avoid failing over multiple sessions, as that could result in a duplicate IP scenario. For example, in a fan-out configuration from A → B and A → C, initiating failover operations on both B and C would place both of those sites in production mode and cause a duplicate IP conflict. This could also happen if you initiate a failover from B → C in a cascaded configuration, since both A and C would then be running in production mode.
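The duplicate-IP hazard amounts to counting how many sites end up in production mode, sketched below (a simplified model under the assumption that failing over a session demotes its source and promotes its destination):

```python
def has_duplicate_ip_risk(production, failovers):
    """production: the system currently in production mode.
    failovers: list of (source, destination) sessions being failed over.
    Returns True when more than one system would end up in production mode,
    i.e. more than one site serving the NAS Server's IP addresses."""
    live = {production}
    for src, dst in failovers:
        live.discard(src)   # failing over demotes the session's source...
        live.add(dst)       # ...and promotes its destination
    return len(live) > 1
```

This reproduces both examples: failing over both A → B and A → C leaves B and C live, and failing over B → C in a cascade leaves A and C live; a single failover of A → B leaves only B.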
After a failover, it is possible for changes to be made to the destination resource. For example, assume a fan-out configuration from A → B and A → C, and a failover from A to B has occurred. In this case, both A and B could have different changes, so a normal failback operation fails. To failback successfully, the admin must check the box to either preserve any changes made or discard them by overwriting the data. The available options are:
Using the same example as above, a normal resume operation also fails since both A and B could have different changes. To successfully resume the session, the admin must check the box to resync and overwrite any data written to the remote system. In this case, B starts replicating its changes to A, overwriting any changes that were made to A. After the resume operation completes, the topology is transformed into a cascaded configuration where B → A → C.
In each of these examples, the original configuration is restored after the failback or resume operation completes. Prior to Dell UnityOS 5.1, all replication updates leveraged the internal replication common base snapshots to replicate only the changed data. Starting with Dell UnityOS 5.1, user snapshots are used as the common base when running a failback or resume action on a replication session.
With advanced replication configured, snapshot replication is also supported. For a single storage object, snapshot replication can only be enabled on one session at a time when using advanced replication. For example, if you create a cascade from A → B → C, then either the A → B or B → C session can have snapshot replication enabled, but not both at the same time, as shown in Figure 37.
In this configuration, it is still possible to replicate a snapshot from A → C, but a workaround is required. To accomplish this, follow the procedure below:
To enable or disable the snapshot replication feature, the replication session must first be paused and then the operation must be performed on the source system. This operation can be scripted using UEMCLI or REST API if this needs to be done often.
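The pause-modify-resume sequencing can be scripted; the sketch below uses a stand-in client interface (hypothetical, not a Dell-provided SDK) so the ordering is explicit. In practice, the real calls would go through UEMCLI or the Unity REST API, and the resume step after the change is an assumption.

```python
def set_snapshot_replication(client, session_id, enabled):
    """Toggle snapshot replication on a session. client is any object exposing
    pause/modify/resume for replication sessions (illustrative interface)."""
    client.pause(session_id)
    try:
        # Documented requirement: the session must be paused and the change
        # performed on the source system.
        client.modify(session_id, replicate_snapshots=enabled)
    finally:
        client.resume(session_id)
```

Wrapping the change in try/finally ensures the session is resumed even if the modify step fails, so replication protection is not left paused by a scripting error.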
When configuring a cascaded replication topology that includes four or more systems, it may be possible to enable snapshot replication on multiple sessions simultaneously. This can be accomplished by enabling snapshot replication between A → B, leaving snapshot replication disabled between B → C, and also enabling snapshot replication between C → D. In this configuration, the replicated snapshots for the separate sessions are not consistent with each other.
With a mixed topology, it may be possible to have snapshot replication enabled on multiple sessions. For example, you can create a fan-out from A → B and A → C along with a cascade from C → D. With this configuration, you can enable snapshot replication on both A → B and C → D. These configurations are allowed because from each site’s point of view, there is still only one session that has snapshot replication enabled. An example of this configuration is shown in Figure 38.
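The per-site rule described above can be checked mechanically (an illustrative sketch): a configuration is valid when no site participates in more than one snapshot-replicating session.

```python
from collections import Counter

def sites_violating_snap_rule(sessions):
    """sessions: (source, destination, snap_enabled) triples.
    Returns the sites attached to more than one snapshot-replicating session."""
    count = Counter()
    for src, dst, snap in sessions:
        if snap:
            count[src] += 1
            count[dst] += 1
    return sorted(site for site, n in count.items() if n > 1)
```

Enabling snapshot replication on both hops of a cascade A → B → C flags B, while the mixed fan-out/cascade example (A → B and C → D enabled, A → C disabled) passes, matching the configurations in Figures 37 and 38.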
In Dell UnityOS version 5.1 and later, snapshot replication is supported on all replication sessions within an advanced File replication environment. Advanced File replication topologies include fan-out and cascade topologies. To support snapshot replication, all systems within the advanced File replication topology must be running UnityOS version 5.1 or later. By default, snapshot replication is not enabled on a replication session, but it can be enabled at the time of session creation or modified at any time using Unisphere, Unisphere CLI, or REST API.
In Unisphere, the Cascade replicated snapshots option has been added to allow for snapshot replication across all replication sessions in an advanced File remote replication topology. This setting allows snapshots to be replicated from site A → B and the same snapshots to be replicated from site B → C. Using Figure 36 as an example, enabling the Cascade replicated snapshots option on the replication session from Boston to London allows snapshots created and replicated from Hopkinton to Boston to also replicate to London. This setting is enabled and disabled on a per-replication-session basis.
Advanced replication configurations are also supported with MetroSync’s ability to asynchronously replicate to a third site. From the third site, cascaded and fan-out topologies can be configured. Prior to UnityOS 5.2, this feature is not supported for synchronous replication: you cannot cascade or fan-out from the synchronously replicated systems. Figure 39 shows a supported advanced replication topology with MetroSync.
Dell UnityOS 5.2 removes the previous limitation by allowing a cascade from the destination resource of a synchronous replication session. This configuration is referred to as bridge mode, as shown in Figure 40. In a bridge mode configuration, the main site is replicated synchronously to a near site, which is then replicated to an additional site using asynchronous replication. The production resource can also be replicated asynchronously to an additional site. This provides additional copies of the same data while expanding the fault domains.
Additionally, UnityOS 5.2 supports a star mode configuration, as shown in Figure 41, in which the main site is replicated synchronously to one site and asynchronously to up to three additional sites.
To avoid issues, it is recommended that all systems run the same UnityOS version. At a minimum, to use the star and bridge modes, the systems participating in the synchronous replication session must be running UnityOS version 5.2 or later. The systems participating in the asynchronous replication sessions can run older UnityOS versions.
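The minimum-version rule above can be sketched as a simple check (illustrative only): only the two systems in the synchronous session must be at 5.2 or later for star and bridge modes.

```python
def bridge_star_supported(versions, sync_pair):
    """versions: mapping of system name -> (major, minor) UnityOS version.
    sync_pair: the two systems participating in the synchronous session.
    Asynchronous participants are intentionally not checked here, since they
    may run older releases."""
    return all(versions[system] >= (5, 2) for system in sync_pair)
```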