Before starting the validation work, the Engineering team conducted internal load testing to determine the workload profile. HammerDB was used to generate an online transaction processing (OLTP) workload that simulates enterprise applications. The goal of generating a significant load on the Oracle infrastructure was to tax the system sufficiently to show how the best practices optimized performance. In this case, the initial target was four VMs, each with sixteen processor cores. The HammerDB workload configuration is shown in the following table:
Table 1. HammerDB workload configuration
| Setting name | Value |
|---|---|
| Total transactions per user | 1,000,000 |
| Number of warehouses | 5,000 |
| Minutes of ramp-up time | 10 |
| Minutes of test duration | 50 |
| Use all warehouses | Yes |
| User delay (ms) | 500 |
| Repeat delay (ms) | 500 |
| Iterations | 1 |
With this HammerDB configuration, each best practice was validated in an hour-long workload test: 10 minutes of ramp-up time plus 50 minutes of test duration. The goal of running the workload for one hour was to ensure that the database system reached a consistent performance state. Reaching a consistent run state validates that the configuration is stable and that the best practice shows value over time.
“Use all warehouses” is a HammerDB parameter that forces the workload to use all 5,000 warehouses, which generates more I/O on the storage array. The first set of best practices compares baseline database performance without an optimal storage configuration to the same database with an optimal storage configuration.
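As an illustration only, the following Python sketch shows how the Table 1 settings could be rendered as a hammerdbcli-style driver script. The `diset` and `vuset` parameter names (`count_ware`, `total_iterations`, `rampup`, `duration`, `allwarehouse`, `delay`, `repeat`, `iterations`) are assumptions based on HammerDB's CLI conventions and should be verified against the HammerDB version in use; the sixteen virtual users are an illustrative choice mirroring the sixteen vCPUs per VM, not a setting documented in this validation.

```python
# Sketch: render the Table 1 settings as a hammerdbcli-style script for the
# Oracle TPC-C driver. Parameter names and vuset options are assumptions based
# on HammerDB's CLI conventions; verify them against your HammerDB version.
from dataclasses import dataclass


@dataclass
class WorkloadConfig:
    total_transactions_per_user: int = 1_000_000   # Total transactions per user
    warehouses: int = 5_000                        # Number of warehouses
    rampup_minutes: int = 10                       # Minutes of ramp-up time
    duration_minutes: int = 50                     # Minutes of test duration
    use_all_warehouses: bool = True                # Use all warehouses
    user_delay_ms: int = 500                       # User delay (ms)
    repeat_delay_ms: int = 500                     # Repeat delay (ms)
    iterations: int = 1                            # Iterations
    virtual_users: int = 16                        # Illustrative: one per vCPU (assumption)


def to_hammerdb_script(cfg: WorkloadConfig) -> str:
    """Return a hammerdbcli-style script string for the given configuration."""
    lines = [
        "dbset db ora",
        f"diset tpcc count_ware {cfg.warehouses}",
        f"diset tpcc total_iterations {cfg.total_transactions_per_user}",
        f"diset tpcc rampup {cfg.rampup_minutes}",
        f"diset tpcc duration {cfg.duration_minutes}",
        f"diset tpcc allwarehouse {'true' if cfg.use_all_warehouses else 'false'}",
        "loadscript",
        f"vuset vu {cfg.virtual_users}",
        f"vuset delay {cfg.user_delay_ms}",
        f"vuset repeat {cfg.repeat_delay_ms}",
        f"vuset iterations {cfg.iterations}",
        "vucreate",
        "vurun",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    print(to_hammerdb_script(WorkloadConfig()))
```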
New Orders per Minute (NOPM) and Transactions per Minute (TPM) provide the metrics used to interpret the HammerDB results. These metrics come from the TPC-C benchmark and indicate the outcome of a test. During our best practice validation, we compared those metrics against the baseline to ensure that performance increased.
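To illustrate that comparison, the following Python sketch parses NOPM and TPM from a HammerDB end-of-run summary line and reports the percentage gain over the baseline run. The summary-line format assumed in the regular expression, and the sample log excerpts, are hypothetical and should be adjusted to match the actual HammerDB output and test results.

```python
# Sketch: compare NOPM/TPM from a best-practice run against the baseline run.
# The summary-line format ("... achieved NNN NOPM from NNN ... TPM") is an
# assumption; adjust the regular expression to the actual HammerDB log output.
import re

RESULT_RE = re.compile(r"achieved\s+(?P<nopm>\d+)\s+NOPM\s+from\s+(?P<tpm>\d+)", re.IGNORECASE)


def parse_result(log_text: str) -> dict:
    """Extract NOPM and TPM from a HammerDB log excerpt (format assumed)."""
    match = RESULT_RE.search(log_text)
    if match is None:
        raise ValueError("No NOPM/TPM summary line found in log text")
    return {"nopm": int(match.group("nopm")), "tpm": int(match.group("tpm"))}


def percent_gain(baseline: int, candidate: int) -> float:
    """Percentage improvement of the candidate run over the baseline run."""
    return (candidate - baseline) / baseline * 100.0


if __name__ == "__main__":
    # Hypothetical log excerpts for illustration only; real values come from test runs.
    baseline = parse_result("TEST RESULT : System achieved 41250 NOPM from 95800 Oracle TPM")
    tuned = parse_result("TEST RESULT : System achieved 52100 NOPM from 120400 Oracle TPM")
    print(f"NOPM gain: {percent_gain(baseline['nopm'], tuned['nopm']):.1f}%")
    print(f"TPM gain:  {percent_gain(baseline['tpm'], tuned['tpm']):.1f}%")
```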