Storage latency is the time that a storage array takes to complete a read request or to acknowledge a write to the database. Innovations in storage media have steadily driven storage latency lower. Before flash SSDs, for example, storage latencies were typically measured in several milliseconds. Flash drives, which drove latencies below 1 ms, represented a significant advancement, and NVMe drives offer greater efficiencies that further lower read and write latencies. The overall goal for the testing of this reference architecture was for all average storage latencies to be 1 ms or less for both reads and writes.
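As a reference for how such averages can be derived, the sketch below computes interval average latency from cumulative stall-time and I/O-count counters and compares the result with the 1 ms goal. The counter layout is modeled loosely on cumulative statistics such as SQL Server's sys.dm_io_virtual_file_stats; the dictionary format and the sample numbers are illustrative assumptions, not output from the validated configuration.

```python
# Minimal sketch: derive average read/write latency over a sample interval
# from cumulative I/O counters, then check it against the 1 ms goal.
# The snapshot fields are modeled on cumulative counters such as SQL Server's
# sys.dm_io_virtual_file_stats (io_stall_read_ms, num_of_reads, and so on);
# the dict layout here is an assumption for illustration, not a product API.

LATENCY_GOAL_MS = 1.0

def interval_latency_ms(start: dict, end: dict, op: str) -> float:
    """Average latency for 'read' or 'write' I/O between two counter snapshots."""
    ios = end[f"num_of_{op}s"] - start[f"num_of_{op}s"]
    stall_ms = end[f"io_stall_{op}_ms"] - start[f"io_stall_{op}_ms"]
    return stall_ms / ios if ios else 0.0

# Example interval: hypothetical counter snapshots for one data LUN.
t0 = {"num_of_reads": 1_000_000, "io_stall_read_ms": 400_000,
      "num_of_writes": 600_000, "io_stall_write_ms": 110_000}
t1 = {"num_of_reads": 1_900_000, "io_stall_read_ms": 823_000,
      "num_of_writes": 1_100_000, "io_stall_write_ms": 210_000}

for op in ("read", "write"):
    avg = interval_latency_ms(t0, t1, op)
    status = "OK" if avg <= LATENCY_GOAL_MS else "ABOVE GOAL"
    print(f"avg {op} latency: {avg:.2f} ms ({status})")
```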
For OLTP workloads, physical reads from storage are generally random, small-block I/O operations. Database and application performance depends on how quickly data can be read from storage: the lower the read latency, the faster application users can access critical data. SQL Server and Oracle databases commonly perform thousands to millions of reads per hour, depending on the business load. In the OLTP baseline validation test, we expected the SQL Server and Oracle databases to demonstrate the lowest latencies of any test because no other workloads were running on the PowerMax 2000 array. The following tables show the average read and write latencies for the OLTP workload:
Table 12. Average read latencies for the OLTP workload

| Workload | Database | Data LUNs (ms) | Log LUNs (ms) |
|----------|--------------|----------------|---------------|
| OLTP | SQL Server 1 | 0.47 | 0.21 |
| OLTP | SQL Server 2 | 0.47 | 0.23 |
| OLTP | Oracle | 0.47 | 0.29 |
Table 13. Average write latencies for the OLTP workload

| Workload | Database | Data LUNs (ms) | Log LUNs (ms) |
|----------|--------------|----------------|---------------|
| OLTP | SQL Server 1 | 0.20 | 0.18 |
| OLTP | SQL Server 2 | 0.20 | 0.16 |
| OLTP | Oracle | 0.35 | 0.64 |
The average read latencies for the data and log LUNs of both SQL Server databases and the Oracle database remained under 0.5 ms. The average write latencies for the data and log LUNs were under 0.4 ms, except for the Oracle log LUN, which averaged 0.64 ms. The write latencies for the SQL Server data and log LUNs, all 0.20 ms or lower, stood out as exceptionally low.
In the DSS use case, throughput was the key performance metric because the database scanned large tables and requested large blocks of data from the PowerMax 2000 array. The following tables document our test findings, showing the impact of the added DSS workload on the latencies of the baseline OLTP workloads.
Table 14. Average read latencies for the OLTP workload with the DSS workload running in parallel

| Workload | Database | Data LUNs (ms) | Log LUNs (ms) |
|----------|--------------|----------------|---------------|
| OLTP | SQL Server 1 | 0.79 | 0.27 |
| OLTP | SQL Server 2 | 0.79 | 0.28 |
| OLTP | Oracle | 0.60 | 0.47 |
Table 15. Average write latencies for the OLTP workload with the DSS workload running in parallel

| Workload | Database | Data LUNs (ms) | Log LUNs (ms) |
|----------|--------------|----------------|---------------|
| OLTP | SQL Server 1 | 0.23 | 0.20 |
| OLTP | SQL Server 2 | 0.23 | 0.19 |
| OLTP | Oracle | 0.31 | 0.69 |
The addition of the DSS workload caused average read latencies to increase, which was expected because the load on the array increased. All average read latencies remained under the 1 ms goal.
The addition of the DSS workload had only a minor impact on average write latencies for both the data and log LUNs. The average write latencies remained consistently low because the PowerMax cache accelerates all writes to storage. In this case, write latencies remained under 0.24 ms for both SQL Server databases, while the Oracle data and log LUNs averaged 0.31 ms and 0.69 ms respectively.
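For convenience, the short sketch below reproduces the comparison above: it takes the average data-LUN read latencies from Table 12 (baseline) and Table 14 (with DSS in parallel), reports the increase for each database, and checks the result against the 1 ms goal. The script itself is illustrative only; the latency values are the measured averages from the tables.

```python
# Compare baseline OLTP read latencies (Table 12) against the mixed
# OLTP + DSS run (Table 14) for the data LUNs, and flag the 1 ms goal.

GOAL_MS = 1.0

baseline_read_ms = {"SQL Server 1": 0.47, "SQL Server 2": 0.47, "Oracle": 0.47}
with_dss_read_ms = {"SQL Server 1": 0.79, "SQL Server 2": 0.79, "Oracle": 0.60}

for db, base in baseline_read_ms.items():
    mixed = with_dss_read_ms[db]
    delta = mixed - base
    verdict = "within" if mixed <= GOAL_MS else "above"
    print(f"{db}: {base:.2f} ms -> {mixed:.2f} ms "
          f"(+{delta:.2f} ms, {verdict} the 1 ms goal)")
```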
For the final use case, we added the snapshot OLTP workloads on top of the OLTP and DSS workloads. The test findings show a minor increase in average read latencies for data: latency increased by 0.08 ms for both the SQL Server and Oracle data LUNs. Average read latencies for the SQL Server log LUNs did not increase, while the Oracle log LUN latency increased by 0.08 ms. For the OLTP workload, all average read latencies remained under the 1 ms goal, as shown in the following table:
Table 16. Average read latencies for the OLTP and snapshot OLTP workloads with the DSS workload running in parallel

| Workload | Database | Data LUNs (ms) | Log LUNs (ms) |
|---------------|--------------|----------------|---------------|
| OLTP | SQL Server 1 | 0.87 | 0.26 |
| OLTP | SQL Server 2 | 0.87 | 0.27 |
| OLTP | Oracle | 0.68 | 0.55 |
| Snapshot OLTP | SQL Server 1 | 1.10 | 0.83 |
| Snapshot OLTP | Oracle | 0.82 | 0.51 |
As shown, the snapshot OLTP SQL Server database had an average read latency of 1.10 ms for data, but because this database simulated a test and development environment, the latency goal was less critical. For the same database, the average log read latency was 0.83 ms, which is under the 1 ms goal.
The snapshot OLTP Oracle database showed low average read and write latencies of 0.51 ms and 0.26 ms respectively for its log LUNs. All the average read and write latencies for Oracle were under the 1 ms performance goal.
These test findings show the capability of the PowerMax cache to accelerate most writes to the storage array. All average write latencies for both SQL Server and Oracle were 0.31 ms or under, except for the Oracle log LUNs at 0.75 ms, as shown in the following table:
Table 17. Average write latencies for the OLTP and snapshot OLTP workloads with the DSS workload running in parallel

| Workload | Database | Data LUNs (ms) | Log LUNs (ms) |
|---------------|--------------|----------------|---------------|
| OLTP | SQL Server 1 | 0.24 | 0.22 |
| OLTP | SQL Server 2 | 0.24 | 0.20 |
| OLTP | Oracle | 0.31 | 0.75 |
| Snapshot OLTP | SQL Server 1 | 0.26 | 0.24 |
| Snapshot OLTP | Oracle | 0.25 | 0.26 |
During testing, a pattern of very low write I/O latencies to the array emerged: write latencies were consistently and considerably lower than read latencies. This is expected because the PowerMax array has a large cache that accelerates I/O and is weighted toward caching all write requests, and every write to the PowerMax cache is immediately acknowledged back to the database application.
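To make that reasoning concrete, the toy model below contrasts write-back acknowledgment from cache with reads that may miss cache and go to backend media. The latency constants and the 50 percent read hit rate are arbitrary assumptions for illustration and do not represent PowerMax internals or measured values.

```python
# Illustrative-only toy model (assumptions, not PowerMax internals) of why
# writes acknowledged from cache complete faster than reads that can miss cache.
import random

CACHE_ACK_MS = 0.05        # assumed time to land an I/O in cache and acknowledge it
MEDIA_READ_MS = 0.60       # assumed time to fetch a block from backend NVMe media
READ_CACHE_HIT_RATE = 0.5  # assumed fraction of reads served from cache

def write_latency_ms() -> float:
    # Write-back behavior: every write is acknowledged once it is in cache.
    return CACHE_ACK_MS

def read_latency_ms() -> float:
    # Reads are fast on a cache hit; otherwise they pay the media access cost.
    hit = random.random() < READ_CACHE_HIT_RATE
    return CACHE_ACK_MS if hit else MEDIA_READ_MS

random.seed(0)
reads = [read_latency_ms() for _ in range(100_000)]
writes = [write_latency_ms() for _ in range(100_000)]
print(f"modeled avg read latency:  {sum(reads) / len(reads):.2f} ms")
print(f"modeled avg write latency: {sum(writes) / len(writes):.2f} ms")
```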