Migration tests were performed on a PowerFlex system to identify any bottlenecks that might impact migration speed or post-migration data consistency.
Two three-node PowerFlex clusters were used. Each Dell R640 storage-only (SO) node is configured with 10 x 1.5 TB NVMe drives and 4 x 25 Gbps ports to ensure the tests were neither storage nor network bound. A compute-only (CO) node with the SDC installed was mapped to both clusters: volumes were created on each PowerFlex cluster and mapped to the CO node, giving it visibility to volumes from both clusters. PowerPath is installed on the CO node; it configures the block devices (/dev/scini*) automatically during service start and uses the PowerPath Migration Enabler to migrate data from one PowerFlex volume to another.
A single-volume migration was first tested while PowerFlex was running a normal workload. An 8 KB I/O size was used with 50% reads and 50% writes. With this setup, a maximum of 130K/130K (read/write) IOPS was achieved for a single SDC with eight SDS threads.
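To put the IOPS figure in perspective, a short sketch converts it to aggregate throughput. The I/O size and IOPS numbers come from the test above; the binary-unit (KiB/MiB) conversion is an assumption about how the figures were measured.

```python
# Convert the measured workload (8 KB I/Os, 130K read + 130K write IOPS)
# into throughput. Assumes binary units (1 MiB = 1024 KiB).
io_size_kib = 8
read_iops = 130_000
write_iops = 130_000

read_mib_s = read_iops * io_size_kib / 1024    # per-direction throughput
write_mib_s = write_iops * io_size_kib / 1024
total_mib_s = read_mib_s + write_mib_s         # aggregate host throughput

print(f"read: {read_mib_s:.1f} MiB/s, total: {total_mib_s:.1f} MiB/s")
```

At roughly 1 GiB/s per direction, the single SDC is pushing well under the capacity of the 4 x 25 Gbps links, consistent with the goal of keeping the tests free of network bottlenecks.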
As the results show, migration speed declines gently as the IOPS workload on the source volume increases, but it remains above 100 MB/sec even under full workload.
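A quick sketch shows what the sustained 100 MB/sec floor means for migration duration. The 1.5 TB volume size is hypothetical (chosen to match the drive size used in the test nodes), and decimal units are assumed.

```python
# Estimate wall-clock time to migrate one volume at the worst-case
# sustained rate observed under full workload (100 MB/s).
# The 1.5 TB volume size is illustrative, not from the test results.
volume_size_mb = 1_500_000   # 1.5 TB in decimal megabytes
migration_mb_s = 100         # sustained rate under full workload

seconds = volume_size_mb / migration_mb_s
hours = seconds / 3600

print(f"~{hours:.2f} hours per 1.5 TB volume")
```

Under these assumptions a fully loaded source volume still migrates in just over four hours, which frames the parallel-migration tests that follow.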
Building on the single-migration test, multiple parallel migrations were then performed. To test PowerFlex scalability, additional volumes were added and migrated simultaneously.
PowerPath Migration Enabler can hold as many HostCopy migrations in the synchronized state as needed, but only eight will be actively syncing at any given time. As the image above shows, after initiating eight migrations in parallel, the scaling is linear.
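The eight-active-sync limit implies a simple batch model for estimating total migration time. The sketch below is a simplified model, assuming equal-sized volumes, equal per-migration duration, and that a queued migration starts only when a full batch completes; real scheduling overlaps batches as individual slots free up, so this is an upper bound.

```python
import math

# PowerPath Migration Enabler actively syncs at most eight
# HostCopy sessions at once; the rest wait in the queue.
MAX_ACTIVE_SYNCS = 8

def estimated_wall_time(num_migrations: int, per_migration_hours: float) -> float:
    """Upper-bound wall time: migrations run in batches of eight."""
    batches = math.ceil(num_migrations / MAX_ACTIVE_SYNCS)
    return batches * per_migration_hours

# With linear scaling, eight parallel migrations take no longer than one,
# while a ninth migration waits for a free slot.
print(estimated_wall_time(8, 4.0))   # one batch
print(estimated_wall_time(9, 4.0))   # second batch needed
```

This is why the observed scaling is linear up to eight parallel migrations: each active HostCopy session proceeds at full speed, and only beyond eight does queuing add wall-clock time.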