Inline compression
The F810 nodes use an FPGA-based hardware offload engine, resident on the back-end PCIe network adapter, to perform real-time data compression. Compression occurs as files are written to the cluster through a client session connected to a node; similarly, files are re-inflated on demand as clients read them.
On this FPGA, the OneFS hardware offload engine runs a proprietary implementation of DEFLATE at its highest compression level, while incurring minimal to no performance penalty on highly compressible datasets.
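As an illustration of DEFLATE behavior at its highest compression level (this is the standard zlib library, not the OneFS implementation), note how the achievable reduction depends entirely on how compressible the input is:

```python
import os
import zlib

# Highly compressible data (a repeated pattern) vs. incompressible random bytes.
compressible = b"OneFS inline compression " * 4096
random_data = os.urandom(len(compressible))

# Level 9 is DEFLATE's highest compression setting.
small = zlib.compress(compressible, 9)
large = zlib.compress(random_data, 9)

# Repetitive data shrinks dramatically; random data does not shrink at all
# (container overhead can even make it slightly larger).
print(len(compressible), "->", len(small))
print(len(random_data), "->", len(large))
```

Both outputs re-inflate to the original bytes with `zlib.decompress()`, regardless of the level used to compress them.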
The compression engine consists of three main components:
| Engine component | Description |
| --- | --- |
| Search module | The LZ77 search module analyzes inline file data chunks for repeated patterns. |
| Encoding module | Performs data compression (Huffman encoding) on the target chunks. |
| Decompression module | Regenerates the original file from the compressed chunks. |
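The search/encode/decode flow can be sketched with zlib's streaming objects, which apply the same LZ77-plus-Huffman DEFLATE pipeline chunk by chunk. This is an illustration only; the internals of the FPGA engine are proprietary, and the function names here are hypothetical:

```python
import zlib

def compress_chunks(chunks, level=9):
    """Feed each data chunk through DEFLATE (LZ77 search + Huffman encoding)."""
    comp = zlib.compressobj(level)
    out = [comp.compress(c) for c in chunks]
    out.append(comp.flush())  # emit any buffered data and finalize the stream
    return b"".join(out)

def decompress_stream(blob):
    """Regenerate the original bytes from the compressed stream."""
    return zlib.decompress(blob)

# Repetitive chunks compress well and round-trip exactly.
original = [b"pattern " * 1024] * 8
packed = compress_chunks(original)
assert decompress_stream(packed) == b"".join(original)
```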
Because it resides on the same card, the data compression engine shares PCIe bandwidth with the node's back-end Ethernet interfaces; in general, ample bandwidth is available. A best practice is to run highly compressible datasets on F810 nodes with compression enabled. However, it is not advisable to enable compression for incompressible datasets.
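One way to judge whether a dataset is worth compressing is to compress a leading sample and measure the achieved ratio first. The helper below is a hypothetical sketch using zlib, not a OneFS tool; the `threshold` value is an assumption for illustration:

```python
import os
import zlib

def estimated_ratio(data: bytes, sample_size: int = 65536) -> float:
    """Compress a leading sample and return the original/compressed size ratio."""
    sample = data[:sample_size]
    if not sample:
        return 1.0
    return len(sample) / len(zlib.compress(sample, 9))

def worth_compressing(data: bytes, threshold: float = 1.2) -> bool:
    """Enable compression only when the sample compresses meaningfully."""
    return estimated_ratio(data) >= threshold

# Text-like data clears the threshold easily; random data does not.
text_like = b"log line: request ok\n" * 5000
print(worth_compressing(text_like))        # compressible input
print(worth_compressing(os.urandom(200000)))  # incompressible input
```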
OneFS provides software-based compression for the F910, F900, F710, F600, F210, F200, H700/7000, H5600, and A300/3000 platforms. Software compression is also used as a fallback in the event of an F810 hardware failure, and, in a mixed cluster, for nodes without hardware offload capability. Both the hardware and software compression implementations are DEFLATE compatible.