We tested three incremental use cases.
The performance metrics for these tests included:
- CPU cores—We generated production-like workloads using as few CPU cores as possible. Oracle and SQL Server databases use core-based licensing, so as the number of cores increases, so does the licensing cost. The combination of the MX840c servers and the PowerMax 2000 storage array enabled us to generate a significant database workload with fewer compute cores.
- CPU utilization—We captured CPU utilization at the Linux layer using dstat. The CPU utilization values that we captured represent the sum of all work performed by the cores assigned to the VMs. Reporting CPU utilization provides an understanding of the processing load that the CPU cores carried in these tests. There were no target goals for CPU utilization because using fewer cores in each VM was a higher priority; however, we captured this metric to provide insight into the processing workload.
- TPM—We captured the number of transactions per minute (TPM) to show how fast an OLTP database was processing transactions. A higher TPM value indicates that the database was processing more business transactions. In testing the mixed workload solution, the goal was to generate sufficient TPM to support a typical production workload. This metric applies to OLTP workloads only and is captured in the HammerDB report.
- NOPM—New Orders per Minute (NOPM) is the throughput measurement defined by the TPC-C benchmark. The TPC-C workload consists of five transaction types: new-order, payment, order-status, delivery, and stock-level. NOPM counts only the new-order transactions that were completed in one minute as part of this business transaction mix. This metric applies to OLTP workloads only and is captured in the HammerDB report.
- IOPS—The number of IOPS indicates the load on a storage system. You can use IOPS to understand the amount of load that each database and application is placing on the array and whether they are approaching the maximum load that the storage array can sustain. IOPS together with latency provides a comprehensive picture of storage performance. In these tests, the goal was to demonstrate IOPS levels appropriate for supporting production databases.
- Latency in submilliseconds—Latency indicates how quickly data is read from and written to the storage array. Storage latency is an important metric for OLTP applications because the faster the storage system can respond to read and write requests, the more responsive the application experience is for the users. The storage latency goal for this solution was 1 ms or less for reads and writes to all data and log files on OLTP workloads that were simulating production workloads.
- Throughput in megabytes per second (MB/s)—Throughput is a metric that is used for DSS workloads to indicate how fast the system can process large amounts of data using complex queries. The greater the throughput of a system, the more data it can process and the faster it can perform complex data analysis. Our goal was to generate a moderate level of throughput on the solution to show that customers can have both DSS workloads and OLTP workloads running in parallel.
- Compression and deduplication—We disabled PowerMax compression and deduplication at the storage group level for all use cases. Therefore, we did not observe data reduction in this validation testing. In the stress-testing phase, we ran the worst-case test scenarios with 100 percent active data on the PowerMax 2000. This workload profile does not take advantage of the PowerMax data reduction performance features. A typical database production environment with mixed workloads does benefit from the PowerMax compression and deduplication engine, which provides performance and consolidation advantages. You might prefer to use these features when you deploy the solution.
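As a sketch of how the dstat-based CPU utilization capture described above can be post-processed: the snippet below averages per-interval busy percentages from a dstat-style CSV. The column layout, sample values, and an invocation such as `dstat -c --output cpu.csv 5` are illustrative assumptions; the actual CSV header depends on the dstat options used.

```python
import csv
import io

# Illustrative dstat-style CSV (columns assumed: usr, sys, idl, wai).
# Real dstat --output files begin with several metadata rows that
# would need to be skipped before parsing.
SAMPLE = """usr,sys,idl,wai
62.0,8.0,28.0,2.0
70.0,9.0,19.0,2.0
66.0,7.0,25.0,2.0
"""

def mean_cpu_busy(csv_text):
    """Return average CPU utilization (100 - idle) across intervals."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    busy = [100.0 - float(r["idl"]) for r in rows]
    return sum(busy) / len(busy)

print(round(mean_cpu_busy(SAMPLE), 1))  # prints 76.0 for this sample
```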
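To make the TPM versus NOPM distinction concrete: TPM counts all five TPC-C transaction types, while NOPM counts only the new-order subset. The counts and the 10-minute window below are invented for illustration, not HammerDB output from these tests.

```python
# Illustrative derivation of TPM and NOPM from raw transaction counts.
# All numbers here are assumptions, not measured results.
DURATION_MINUTES = 10
total_transactions = 1_200_000     # all five TPC-C transaction types
new_order_transactions = 540_000   # new-order transactions only

tpm = total_transactions / DURATION_MINUTES    # transactions per minute
nopm = new_order_transactions / DURATION_MINUTES  # new orders per minute
print(tpm, nopm)  # prints 120000.0 54000.0
```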
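The IOPS and latency metrics above are most useful together, as the text notes. The sketch below combines per-interval read/write counts into average IOPS and checks the worst-case latency against the 1 ms goal; the sample values and the 5-second collection interval are our own assumptions, not measured data.

```python
# Per-interval storage samples: (read_ops, write_ops, avg_latency_ms).
# Values are illustrative, not measurements from the validation tests.
samples = [
    (45000, 15000, 0.6),
    (52000, 18000, 0.7),
    (48000, 16000, 0.65),
]

INTERVAL_SECONDS = 5  # assumed collection interval

def summarize(samples, interval_s):
    """Return (average IOPS, worst-case latency in ms) over the run."""
    iops = [(r + w) / interval_s for r, w, _ in samples]
    worst_latency = max(lat for _, _, lat in samples)
    return sum(iops) / len(iops), worst_latency

avg_iops, worst_ms = summarize(samples, INTERVAL_SECONDS)
print(f"avg IOPS={avg_iops:.0f}, worst latency={worst_ms} ms, "
      f"meets 1 ms goal: {worst_ms <= 1.0}")
```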