In the first use case, we created two Oracle 12c Release 2 RAC databases across two PowerEdge R740 servers, as shown in Table 25. We used SLOB to create an OLTP workload with a 60/40 read/write mix. We created the databases with an 8 KB block size and with ASM using coarse striping and external redundancy.
Figure 25. Use case 1 architecture diagram
Table 42 shows the high-level configuration of the two production Oracle RAC databases.
Table 42. Production Oracle RAC database configuration
| Category | Specification/setting | PROD configuration |
|---|---|---|
| Operating system | VM guest OS | RHEL 7.3 |
| VM configuration | vCPUs per VM | 2 |
| | vMem per VM | 48 GB |
| Database configuration | Database version | 12c R2 RAC |
| | Database size | 1 TB |
| | db_block_size | 8 KB |
| | db_file_multiblock_read_count | 1 |
| | sga_max_size | 24 GB |
| | pga_aggregate_target | 8 GB |
| SLOB I/O configuration | Read/write ratio | 60/40 |
vCPU and vMem performance is, in large part, determined by how the VMs are configured. We assigned each of the four VMs' two vCPUs a CPU reservation of 0 MHz and a CPU limit of unlimited. A reservation of 0 MHz means that the VM has no guaranteed CPU clock cycles; an unlimited CPU limit means that each VM could use the full computational resources of up to two physical cores.
We configured each of the four VMs with 48 GB of memory and a memory reservation of 48 GB. When a VM's configured memory matches its memory reservation, the VM gets all of its memory from physical memory and is not at risk of hypervisor memory swapping or ballooning. At the Oracle database configuration level, sga_max_size and pga_aggregate_target limit the amount of memory used by the database. In this use case, sga_max_size is set to 24 GB and pga_aggregate_target is set to 8 GB, meaning each database can use a maximum of 32 GB of memory. This leaves 16 GB of memory for the Linux operating system.
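The per-VM memory budget above can be sanity-checked with a few lines of arithmetic. This is an illustrative sketch only; the helper function is not part of the tested deployment, and the values are taken directly from the use-case configuration.

```python
# Sanity check of the per-VM memory budget described above.
# Values come from the use-case configuration (48 GB vMem,
# 24 GB SGA, 8 GB PGA); the helper itself is illustrative.

def memory_headroom_gb(vm_memory_gb, sga_max_gb, pga_target_gb):
    """Return the memory left for the guest OS after Oracle's SGA and PGA."""
    return vm_memory_gb - (sga_max_gb + pga_target_gb)

headroom = memory_headroom_gb(vm_memory_gb=48, sga_max_gb=24, pga_target_gb=8)
print(headroom)  # 16 GB remains for the Linux operating system
```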
We ran the two production Oracle RAC databases on dedicated PowerEdge R740 servers and a dedicated VMAX 250F array. The goal of this test was to develop and validate implementation best practices for running Oracle databases on this modern platform. We monitored performance, but because the two Oracle 12c RAC databases had dedicated servers and storage, the performance measurements do not reflect the consolidation capabilities of the database platform. Most customers will consolidate databases to achieve greater capital and operating expenditure savings and gain more value from their investment in database licensing. In generating an OLTP workload, the goal was not to maximize performance but rather to create a realistic production workload that is characteristic of a typical small-configuration deployment.
In this test, we ran an OLTP workload across the two RAC databases in parallel for 30 minutes. In Figure 26, the first bar shows the number of physical cores (pCPUs) in each server relative to the number of virtual cores (vCPUs) in this test. This is useful because CPU overcommitment, if excessive, can degrade performance. The general recommendation for business-critical workloads is no greater than a 1:1 ratio of vCPUs to pCPUs. In this use case, the vCPU-to-pCPU ratio was well under the 1:1 recommendation.
Figure 26. Use case 1 server and storage performance metrics
The average CPU utilization across the four VMs was 26 percent, which provides significant room for growth. Each RAC database generated over 5,800 IOPS with read and write latencies well under the 0.75-millisecond goal. The two Oracle RAC databases combined generated over 11,600 IOPS, which is representative of production workloads in the small configuration.
Global memory is a crucial data accelerator in the VMAX architecture. All read and write operations are transferred to or from global memory at much greater speeds than transfers to physical drives. This means the VMAX array can deliver large-scale write buffering that accelerates database performance. For this OLTP workload, Table 43 shows the percentage of reads and writes satisfied from the VMAX system cache.
Table 43. VMAX read/write cache hit percentages
| Workload | VMAX read cache hit percentage | VMAX write cache hit percentage |
|---|---|---|
| PRD OLTP | 35.11% | 100% |
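Combining the 60/40 read/write mix with the measured hit rates gives a rough overall cache-hit estimate for the workload. This is an illustrative calculation from the figures reported above, not a measurement taken from the array.

```python
# Illustrative overall cache-hit estimate for the 60/40 OLTP mix,
# weighting the measured read (35.11%) and write (100%) hit rates.

def overall_hit_rate(read_fraction, read_hit, write_hit):
    """Weighted cache-hit rate across the read/write mix."""
    write_fraction = 1.0 - read_fraction
    return read_fraction * read_hit + write_fraction * write_hit

overall = overall_hit_rate(read_fraction=0.60, read_hit=0.3511, write_hit=1.00)
print(f"{overall:.1%}")  # roughly 61.1% of I/Os served from VMAX cache
```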
In addition to demonstrating a strong performance profile, use case 1 shows how a fraction of the available CPUs and VMAX storage can be used to support production workloads. Unused CPU and storage resources represent the opportunity for consolidation, enabling the IT organization to standardize Oracle databases on the Ready Bundle platform. Use case 2 expands our tested Oracle ecosystem to include development databases.