APEX File Storage for AWS can deliver exceptional performance. A single 6-node cluster can deliver a total of 17,828 MB/sec sequential read throughput and 4,206 MB/sec sequential write throughput.
Table 6 shows the cluster configuration that achieved this sequential read and write throughput.
| Cluster item | Configuration |
|---|---|
| Cluster size | 6 nodes |
| EC2 instance type | m6idn.8xlarge |
| EBS volume type | gp3 |
| EBS volumes per node | 12x 1 TiB |
| Cluster raw capacity | 72 TiB |
| Cluster protection level | +2n |
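As a quick sanity check, the raw capacity in Table 6 follows directly from the node count, volumes per node, and volume size, and the measured cluster throughput can be divided back into a per-node share. The short Python sketch below reproduces that arithmetic; the variable names are illustrative and not part of any Dell or AWS tooling.

```python
# Reproduce the Table 6 capacity figure and derive per-node throughput.
NODES = 6
VOLUMES_PER_NODE = 12
VOLUME_SIZE_TIB = 1

raw_capacity_tib = NODES * VOLUMES_PER_NODE * VOLUME_SIZE_TIB
print(f"Cluster raw capacity: {raw_capacity_tib} TiB")  # 72 TiB

# Measured cluster throughput from the test results (MB/sec).
seq_read_mbps = 17_828
seq_write_mbps = 4_206
print(f"Per-node sequential read:  {seq_read_mbps / NODES:,.0f} MB/sec")   # ~2,971
print(f"Per-node sequential write: {seq_write_mbps / NODES:,.0f} MB/sec")  # ~701
```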
In APEX File Storage for AWS, OneFS has inline compression and inline deduplication enabled by default. Our research shows that 2.0 is a typical data reduction ratio on OneFS clusters, and the dataset in this test has a data reduction ratio of 2.0.
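A data reduction ratio of 2.0 means the cluster can hold roughly twice as much logical data as the physical space it consumes. The sketch below illustrates that relationship using the 72 TiB raw capacity from Table 6; treating all raw capacity as writable is a simplification that ignores protection and filesystem overhead.

```python
# Rough effective-capacity estimate under a 2.0 data reduction ratio.
# Simplification: ignores +2n protection overhead and filesystem metadata.
raw_capacity_tib = 72
data_reduction_ratio = 2.0

effective_logical_tib = raw_capacity_tib * data_reduction_ratio
print(f"Approximate logical data the cluster can hold: {effective_logical_tib} TiB")
```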
This section describes the four key factors that affect performance when designing an APEX File Storage for AWS OneFS cluster: EC2 instance type, cluster size, EBS volume count and size, and EBS volume type.
Starting with OneFS 9.7.0, APEX File Storage for AWS supports the m5dn, m6idn, m5d, and i3en instance type series. The testing found that:
APEX File Storage for AWS scales well as new nodes are added to the cluster. When you need more performance, adding nodes is a good option, as the rough projection below illustrates.
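Because throughput grows with node count, a rough way to size a larger cluster is to multiply the measured per-node throughput by the target node count. The sketch below does that; the near-linear scaling assumption is ours, and real results will vary with workload, instance limits, and protection settings.

```python
# Rough throughput projection assuming near-linear scaling with node count.
# Per-node figures are derived from the 6-node test results above.
per_node_read_mbps = 17_828 / 6
per_node_write_mbps = 4_206 / 6

for nodes in (6, 8, 10, 12):
    print(f"{nodes:>2} nodes: "
          f"~{nodes * per_node_read_mbps:,.0f} MB/sec read, "
          f"~{nodes * per_node_write_mbps:,.0f} MB/sec write")
```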
When setting up the cluster, you can choose from the supported volume sizes and volume counts per node. For example, gp3 clusters support 5, 6, 10, 12, 15, 18, or 20 volumes per node. The performance profile of a gp3 volume can range from 3,000 IOPS and 125 MiB/sec to 16,000 IOPS and 1,000 MiB/sec. The st1 volume performance limit scales linearly at 40 MiB/sec per TiB.
Performance tests show that when the aggregated available EBS throughput (volume count * per-volume throughput) is equal to or greater than the instance EBS bandwidth limit, there is no significant performance difference in sequential read and write workloads.
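To see how volume choices translate into available throughput, the sketch below computes the aggregate per-node throughput for a gp3 layout and the linear st1 limit. The gp3 per-volume throughput is a provisioned value between 125 and 1,000 MiB/sec, the st1 per-volume cap of 500 MiB/sec comes from AWS EBS documentation, and the helper functions and example figures are illustrative only.

```python
# Illustrative per-node EBS throughput for gp3 and st1 volume choices.

def gp3_node_throughput(volume_count: int, per_volume_mib_s: float) -> float:
    """Aggregate gp3 throughput per node; per-volume throughput is
    provisioned between 125 and 1,000 MiB/sec."""
    per_volume_mib_s = max(125, min(per_volume_mib_s, 1_000))
    return volume_count * per_volume_mib_s

def st1_volume_throughput(size_tib: float) -> float:
    """st1 throughput scales at 40 MiB/sec per TiB, capped at 500 MiB/sec per volume."""
    return min(size_tib * 40, 500)

# Example: 12x 1 TiB gp3 volumes provisioned at 1,000 MiB/sec each.
print(gp3_node_throughput(12, 1_000))   # 12,000 MiB/sec of volume-level throughput
# Example: a single 4 TiB st1 volume.
print(st1_volume_throughput(4))         # 160 MiB/sec
```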
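A simple way to check whether the volumes or the instance is the bottleneck is to compare the aggregated volume throughput with the instance's EBS bandwidth limit; beyond that point, provisioning more per-volume throughput does not help sequential workloads. In the sketch below, the instance limit is an example parameter you would look up for your instance type, not a figure from this test.

```python
# Identify the effective EBS throughput ceiling for one node.
def node_ebs_ceiling(volume_count: int,
                     per_volume_mib_s: float,
                     instance_ebs_limit_mib_s: float) -> float:
    """The node cannot exceed the lower of the aggregated volume
    throughput and the instance EBS bandwidth limit."""
    aggregated = volume_count * per_volume_mib_s
    return min(aggregated, instance_ebs_limit_mib_s)

# Example: 12 volumes against an assumed 2,500 MiB/sec instance EBS limit
# (look up the real limit for your instance type).
print(node_ebs_ceiling(12, 250, 2_500))    # 2,500 -> instance-limited
print(node_ebs_ceiling(12, 125, 2_500))    # 1,500 -> volume-limited
```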
For sequential read and sequential write workloads, gp3 (SSD) clusters and st1 (HDD) clusters achieve similar performance. This does not mean that st1 clusters are comparable to gp3 clusters in other workloads; for example, a gp3 cluster achieves much better performance with lower latency in a typical metadata-intensive workload.
We suggest using the st1 EBS volume type only for archive workloads.