The following table describes the integration, influence, and compatibility between inline data reduction and the various OneFS data services.
Note: Except for the job engine and non-disruptive upgrade (NDU), the following services each require a product license and are not enabled, configured, or active by default on a cluster.
| OneFS feature | Detail |
| --- | --- |
| SyncIQ | If compressed and/or deduplicated data is replicated to a target cluster with SyncIQ, those files are automatically decompressed and rehydrated on read, then transferred and written to the target in their uncompressed form. However, if the target is also a node pool that supports compression, inline data reduction occurs on write. |
| NDMP Backup | Because files are backed up as if they were not compressed or deduplicated, backup and replication operations are no faster for compressed or deduplicated data. OneFS NDMP backup data is not compressed unless compression is provided by the backup vendor’s DMA software. However, compression is often provided natively by the backup tape or VTL device. |
| SnapshotIQ | Compression does not affect data already stored in a snapshot; however, snapshots can be created of compressed data. If a data reduction tier is added to a cluster that already has a significant amount of data stored in snapshots, it will take time before the snapshot data is affected by compression: newly created snapshots will contain compressed data, but older snapshots will not. Deduplicated data can also end up in a snapshot if the HEAD file is deduplicated and the shadow references are then copied on write (COWed) to the snapshot. While OneFS inline compression works with writable snapshot data, deduplication is not supported there: existing files under writable snapshots are ignored by inline dedupe, although inline dedupe can occur on new files created on the writable snapshot. |
| SmartLock | Inline data reduction is compatible with SmartLock, OneFS’ data retention and compliance product. Compression and deduplication deliver storage efficiency for immutable archives and write-once, read-many (WORM) protected data sets. The F910, F900, F810, F710, F600, F210, F200, H700/7000, H5600, and A300/3000 hardware all support compressing and deduplicating data in files from a SmartLock/WORM domain, including compressing existing files that are currently stored in an uncompressed state. Because the logical content of the file data is unchanged, WORM compliance is unaffected. |
| SED Encryption | Encryption with SED drives is supported on clusters with F810 nodes running OneFS 8.2.1 and later (15.4 TB SSD drives only), H5600 nodes running OneFS 8.2.2 and later, F600 and F200 nodes running OneFS 9.0 and later, F900 nodes running OneFS 9.2 and later, F710 and F210 nodes running OneFS 9.7 and later, and F910 nodes running OneFS 9.8 and later. |
| SmartQuotas | OneFS SmartQuotas is one of the principal methods of inline data reduction efficiency reporting. Quotas account for compressed files as if they consumed both shared and unshared data; from the quota side, compressed files appear no different from regular files to standard quota policies. |
| SmartPools | Compressed files reside only on compression-capable nodes and do not span SmartPools node pools or tiers. This avoids the potential performance or protection asymmetry that could occur if portions of a file lived on different classes of storage. SmartPooled data is uncompressed before it is moved, so full uncompressed capacity is required on the target pool. |
| CloudPools | Although CloudPools can use compression when transferring data to the service provider, in OneFS 8.2.1 and later, compressed or deduplicated data cannot be exported directly from disk without incurring a decompression/recompression cycle: CloudPools sees the uncompressed data and then recompresses it. CloudPools also uses a different chunk size than inline compression. |
| Non-disruptive Upgrade | Inline data reduction is only available on F810 nodes in OneFS 8.2.1 and later, H5600 nodes in OneFS 8.2.2 and later, F600 and F200 nodes in OneFS 9.0 and later, F900 nodes in OneFS 9.2 and later, H700/7000 and A300/3000 nodes in OneFS 9.2.1 and later, F710 and F210 nodes in OneFS 9.7 and later, and F910 nodes in OneFS 9.8 and later. Gen6 clusters with an Ethernet backend running earlier OneFS 8.x releases can be non-disruptively upgraded to OneFS 8.2.1 and later. |
| File Clones | File cloning places data in shadow stores and notifies SmartDedupe by flagging the inode of the cloned LIN so that SmartDedupe samples the shadow references (normally it skips them). |
| SmartDedupe | SmartDedupe post-process deduplication is compatible with inline data reduction, and vice versa. Inline compression can compress OneFS shadow stores; however, for SmartDedupe to process compressed data, the SmartDedupe job must first decompress it to perform deduplication, which is an additional resource overhead. Neither SmartDedupe nor inline dedupe is immediately aware of the duplicate matches the other finds, so both could dedupe blocks containing the same data to different shadow store locations, and OneFS is unable to consolidate those shadow blocks together. When blocks are read from a shadow store into L1 cache, they are hashed and added to the in-memory index, where they can be used by inline dedupe. Unlike SmartDedupe, inline dedupe can deduplicate a run of consecutive blocks to a single block in a shadow store. Avoid running both inline dedupe and SmartDedupe on the same node pools. |
| Small File Storage Efficiency (SFSE) | SFSE is mutually exclusive with all other shadow store consumers (file clones, inline dedupe, SmartDedupe): files can be either packed with SFSE or cloned/deduplicated, but not both. Inlined files (small files with their data stored in the inode) are not deduplicated, and data files that have been deduplicated will not be inlined afterward. |
| Job Engine | Only jobs that access logical data incur compression and/or decompression overhead. These include SmartPools, when moving data to or from a compressed node pool; IntegrityScan, when working on compressed data; FlexProtect, if there is spillover to another node pool; and SmartDedupe, which must decompress data before deduplicating it, an additional resource expense. Other jobs, which work on metadata and physical data, are unaffected by inline data reduction. |
| DataIQ & InsightIQ | While OneFS, DataIQ, and InsightIQ (PowerScale’s multi-cluster reporting and trending analytics tools) are compatible, InsightIQ is not yet fully integrated with inline data reduction and will not report efficiency savings. |
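
The in-memory block index that inline dedupe builds from blocks read into L1 cache (described in the SmartDedupe row above) can be sketched as a simple hash-to-shadow-store map. The following is an illustrative model only, under stated assumptions; the class and names are invented for this sketch and are not OneFS internals.

```python
import hashlib

BLOCK_SIZE = 8192  # assumes 8 KiB file system blocks


class InlineDedupeIndex:
    """Toy model of an inline dedupe in-memory index.

    Maps a hash of each block's contents to a shadow-store slot so
    that later writes of identical blocks reference the existing
    copy instead of storing the data again. Illustrative only.
    """

    def __init__(self):
        self.index = {}         # block hash -> shadow-store slot
        self.shadow_store = []  # deduplicated block payloads

    def write_blocks(self, blocks):
        """Return one shadow-store reference per logical block.

        Identical blocks, including a run of consecutive duplicates,
        collapse to a single stored copy.
        """
        refs = []
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.index:
                # First time we see this content: store it once.
                self.index[digest] = len(self.shadow_store)
                self.shadow_store.append(block)
            refs.append(self.index[digest])
        return refs


idx = InlineDedupeIndex()
zeros = b"\x00" * BLOCK_SIZE
ones = b"\x01" * BLOCK_SIZE
refs = idx.write_blocks([zeros, zeros, ones, zeros])
print(refs)                   # [0, 0, 1, 0]
print(len(idx.shadow_store))  # 2 physical blocks back 4 logical blocks
```

This also illustrates why running inline dedupe and SmartDedupe on the same pool is discouraged: two independent indexes built this way could place identical content in different shadow-store slots, and nothing afterward merges them.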