The following section provides details about PowerMax compression and deduplication of server I/Os.
When new writes arrive from the database servers, they are registered in the PowerMax cache and immediately acknowledged to the server, which keeps write latencies low, as shown in the following figure.
Figure 30. Deduplication step 1: Server writes are registered in the PowerMax cache
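Step 1 can be sketched conceptually as follows. This is a minimal illustration only; names such as `PersistentCache` and `write` are hypothetical and do not correspond to actual PowerMax interfaces:

```python
# Conceptual sketch of step 1: a write lands in persistent cache and is
# acknowledged immediately; destaging to flash happens later.
# All names here are illustrative, not actual PowerMax interfaces.

class PersistentCache:
    def __init__(self):
        self.slots = {}          # device address -> latest data

    def write(self, address, data):
        # Register the write in cache; because the cache is persistent,
        # the server can be acknowledged before any flash I/O occurs.
        self.slots[address] = data
        return "ACK"             # low-latency acknowledgment to the server

cache = PersistentCache()
# Repeated writes to the same block are absorbed in cache,
# so only the latest version needs to be destaged.
cache.write(0x10, b"block v1")
status = cache.write(0x10, b"block v2")
print(status)                    # ACK
print(cache.slots[0x10])         # b'block v2'
```

Note how the second write to the same address simply overwrites the cached copy, which is why deferring the flash write is beneficial for workloads that rewrite the same or adjacent blocks.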
Because the PowerMax cache is persistent, the data does not have to be written to the NVMe flash media immediately. Oracle can continue to write to the same or adjacent database blocks multiple times.
When PowerMax does destage the data to the NVMe flash storage, and compression is enabled for the storage group, the 128 KB cache slot containing the new data is sent to the hardware compression module, where the data is compressed and hash IDs are generated.
The cache slot is then checked for uniqueness. If it is unique, the compressed version of the data is stored in the appropriate compression pool, and the thin device pointers are updated to point to the data's new location, as shown in the following figure.
Figure 31. Deduplication step 2: Cache slot compressed and checked for uniqueness
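The compress-and-hash stage of step 2 can be illustrated with a short sketch. Here `zlib` and SHA-256 stand in for PowerMax's hardware compression and hash generation, which the document does not specify; the function name `prepare_slot` and the slot layout are assumptions for illustration:

```python
import hashlib
import zlib

# Conceptual sketch of step 2: when a 128 KB cache slot is destaged with
# compression enabled, the data is compressed and a hash ID is generated
# for the uniqueness check. zlib and sha256 are illustrative stand-ins
# for the hardware compression module's actual algorithms.

SLOT_SIZE = 128 * 1024

def prepare_slot(data: bytes):
    assert len(data) <= SLOT_SIZE
    compressed = zlib.compress(data)
    hash_id = hashlib.sha256(data).hexdigest()
    return compressed, hash_id

slot = b"Oracle database block contents" * 100
compressed, hash_id = prepare_slot(slot)
# Repetitive database data typically compresses well:
print(len(slot), "->", len(compressed))
```

The hash ID computed here is what the array compares against previously stored data in the next step.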
If the compressed data is not unique—that is, if a previous identical copy of that same data is already stored compressed in PowerMax—the data is not stored again. Instead, just the thin devices’ pointers are updated to point to the existing compressed version of the data, as shown in the following figure.
Figure 32. Deduplication step 3: Deduplication of non-unique data
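The uniqueness check and pointer update in steps 2 and 3 can be sketched together. The dictionaries below are purely illustrative data structures, not PowerMax internals, and `destage` is a hypothetical helper:

```python
import hashlib
import zlib

# Conceptual sketch of steps 2-3: a slot's compressed data is stored only
# if its hash ID is new; otherwise the thin device pointer is simply
# updated to reference the existing compressed copy.
# All structures here are illustrative, not PowerMax internals.

compression_pool = {}   # hash ID -> compressed data (stored once)
thin_pointers = {}      # (device, address) -> hash ID

def destage(device, address, data):
    hash_id = hashlib.sha256(data).hexdigest()
    if hash_id not in compression_pool:
        # Unique data: store the compressed version in the pool.
        compression_pool[hash_id] = zlib.compress(data)
    # Unique or not, the thin device pointer now references the
    # (single) stored copy of this data.
    thin_pointers[(device, address)] = hash_id

block = b"identical data" * 1000
destage("dev1", 0x10, block)
destage("dev2", 0x20, block)   # second copy triggers a pointer update only
print(len(compression_pool))   # 1
```

The second `destage` call writes no new data to the pool, which mirrors the behavior described above: multiple identical copies resolve to one stored instance plus additional pointers.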
This example shows the power of deduplication: multiple copies of identical data are stored only once in the PowerMax storage system.