Overall, our performance testing identified several key considerations to address prior to deploying APEX File Storage for Microsoft Azure clusters. These considerations are crucial to ensure that the clusters can effectively meet your organization's performance needs.
This section describes the three key factors that affect performance when designing an APEX File Storage for Microsoft Azure (OneFS) cluster. The key factors are node type (VM size), node count (scale-out), and virtual machine-level bursting.
Note: The performance testing is conducted with supported +2d:1n protection level configurations. See Appendix A: supported cluster configuration details for all supported combinations.
Starting with OneFS 9.8.0.0, APEX File Storage for Microsoft Azure supports Ddv5-series, Ddsv5-series, Edv5-series, and Edsv5-series VMs. Table 4 shows the two Azure storage throughput limits and the network bandwidth limit at the node level for the VM sizes used in these tests. These three limits directly impact the maximum sequential read throughput performance.
Node type/VM size | vCPU | Memory (GiB) | Max uncached disk throughput (MBps) | Max burst uncached disk throughput (MBps) | Max network bandwidth (Mbps) |
D32ds_v5 | 32 | 128 | 865 | 2,000 | 16,000 |
D48ds_v5 | 48 | 192 | 1,315 | 3,000 | 24,000 |
D64ds_v5 | 64 | 256 | 1,735 | 3,000 | 30,000 |
E104ids_v5 | 104 | 672 | 4,000 | 4,000 | 100,000 |
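For a rough sense of how these limits combine, the following sketch (an illustration, not a formula from this document) estimates a node's sequential read ceiling as the lower of its uncached disk throughput limit and its network bandwidth limit, after converting Mbps to MB/s:

```python
# Rough per-node sequential read ceiling derived from the Table 4 limits.
# Assumption (illustrative, not from this document): achievable throughput is
# bounded by the lower of the uncached disk throughput limit and the network
# bandwidth limit.

def node_read_ceiling_mbs(disk_limit_mbps: float, network_limit_mbit: float) -> float:
    """Return the per-node sequential read ceiling in MB/s."""
    network_limit_mbs = network_limit_mbit / 8  # convert Mbps to MB/s
    return min(disk_limit_mbps, network_limit_mbs)

# 32 vCPU row of Table 4: 2,000 MB/s burst disk limit, 865 MB/s steady-state,
# 16,000 Mbps network bandwidth.
print(node_read_ceiling_mbs(2000, 16000))  # 2000.0 MB/s while burst credits last
print(node_read_ceiling_mbs(865, 16000))   # 865.0 MB/s after burst credits run out
```

For the 32 vCPU row of Table 4, the 16,000 Mbps network limit converts to 2,000 MB/s, so the read ceiling is about 2,000 MB/s while burst credits last and about 865 MB/s once they are exhausted.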
When optimizing the performance of a cluster, see Appendix C: recommended data disk configuration details for optimal performance for the recommended per-node data disk configuration.
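As an example of why the per-node data disk layout matters, the short sketch below (illustrative only) compares the aggregate throughput of 12 P40 data disks, the configuration used in the tests in this document, against the node-level limits in Table 4. A P40 premium SSD provides up to 250 MB/s, so the aggregate disk throughput exceeds the VM-level limits and the VM, not the disks, becomes the per-node bottleneck.

```python
# Illustrative check (not from this document): compare aggregate data disk
# throughput against the VM-level uncached disk limits from Table 4.

P40_MBPS = 250        # published throughput of one Azure P40 premium SSD
DISKS_PER_NODE = 12   # data disk count used in the tests in this document

aggregate_disk_mbps = P40_MBPS * DISKS_PER_NODE
print(aggregate_disk_mbps)         # 3000 MB/s of raw disk throughput per node
print(aggregate_disk_mbps > 865)   # True: exceeds the 32 vCPU steady-state limit
print(aggregate_disk_mbps > 2000)  # True: exceeds the 32 vCPU burst limit
```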
This section describes sequential read and sequential write performance for different node types.
Figure 2 shows a 128KB sequential read workload for different node types. It indicates that sequential read performance increases with more powerful (larger VM size) nodes in the cluster.
The max burst uncached disk throughput and max network bandwidth directly impact the maximum sequential read throughput performance.
Note: Each test uses a 4-node cluster with 12 data disks per node.
Figure 3 shows a 512KB sequential write workload for different node types. It indicates that sequential write performance increases with more powerful (larger VM size) nodes in the cluster.
Note: Each test uses a 4-node cluster with 12 data disks per node.
This section describes sequential read and sequential write performance for node scale-out.
Table 5 shows cluster configurations for node scale-out tests.
Node type | Node count | Data disk type | Data disk count |
 | 10 | P40 | 12 |
 | 14 | P40 | 12 |
 | 18 | P40 | 12 |
Note: The performance testing is conducted with supported +2d:1n protection level configurations. See Appendix A: supported cluster configuration details for all supported combinations.
Figure 4 shows a 128KB sequential read workload for node scale-out. It indicates that sequential read performance increases with more nodes in the cluster.
Figure 5 shows a 512KB sequential write workload for node scale-out. It indicates that sequential write performance increases with more nodes in the cluster.
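As a back-of-the-envelope way to reason about scale-out, the sketch below (an illustrative assumption, not measured data) treats aggregate cluster throughput as growing roughly linearly with node count, with each node contributing up to a per-node ceiling taken from Table 4; Figures 4 and 5 show the measured behavior.

```python
# Naive scale-out estimate (illustrative assumption, not measured data):
# aggregate throughput grows roughly linearly with node count, with each node
# contributing up to a per-node ceiling. PER_NODE_MBS is a placeholder; pick
# the appropriate limit from Table 4 for the node type in use.

PER_NODE_MBS = 865  # placeholder: 32 vCPU steady-state uncached disk limit

def cluster_ceiling_mbs(node_count: int, per_node_mbs: float = PER_NODE_MBS) -> float:
    """Upper-bound estimate of aggregate sequential throughput in MB/s."""
    return node_count * per_node_mbs

for nodes in (10, 14, 18):  # node counts tested in Table 5
    print(f"{nodes} nodes -> up to ~{cluster_ceiling_mbs(nodes):.0f} MB/s")
```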
This section describes sequential read performance when leveraging virtual machine-level bursting.
For VMs that support bursting, Azure starts the VM with a full allocation of burst credits and allows bursting for up to 30 minutes at the maximum burst rate, which is higher than the VM-level max uncached disk throughput. VM-level burst credits are restocked whenever throughput falls below the VM-level max uncached disk throughput limit, and it takes less than a day to fully restock them after they are completely depleted. For more information about virtual machine-level bursting, see the Azure bursting document.
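The sketch below models this credit mechanism in a simplified way (an illustration, not Azure's exact accounting): the credit bucket is assumed to hold roughly 30 minutes of bursting above the baseline, credits drain while throughput exceeds the VM-level max uncached disk throughput, and they refill while throughput stays below it.

```python
# Simplified model of VM-level burst credits (illustrative, not Azure's exact
# accounting). The bucket size assumes 30 minutes of bursting above the
# baseline; limits are taken from the 32 vCPU row of Table 4.

BASELINE_MBS = 865   # max uncached disk throughput (MB/s)
BURST_MBS = 2000     # max burst uncached disk throughput (MB/s)
BUCKET_MB = (BURST_MBS - BASELINE_MBS) * 30 * 60  # assumed credit capacity in MB

def update_credits(credits_mb: float, throughput_mbs: float, seconds: float) -> float:
    """Drain credits while above the baseline, refill (up to the cap) while below."""
    delta_mb = (BASELINE_MBS - throughput_mbs) * seconds
    return max(0.0, min(BUCKET_MB, credits_mb + delta_mb))

credits = BUCKET_MB                                     # the VM starts with a full bucket
credits = update_credits(credits, BURST_MBS, 30 * 60)   # 30 min at the burst rate
print(credits)                                          # 0.0: credits fully depleted
```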
Because sequential write throughput stays below the VM-level max uncached disk throughput, sequential writes do not consume burst credits, and VM-level bursting does not affect sequential write performance.
Figure 6 shows a 128KB sequential read workload with and without VM-level bursting.
Note: This test uses a 4-node D32ds_v5 cluster with 12 data disks per node.