In this section, we cover the details of the baseline configuration. The baseline configuration was used to determine the initial workload and to enable comparison with the first set of best practices.
To achieve the goal of 80 percent overall processor utilization in the R740 server, six virtualized databases were run in parallel. After several tests using HammerDB and an OLTP workload, a baseline configuration was developed. The following table shows the vCPU and memory allocation for each virtualized database:
Table 5: vCPU and memory allocation
| Resource reservation | Baseline configuration per virtual machine | Total across six virtualized databases |
|---|---|---|
| vCPU | 6 cores | 36 cores |
| Memory | 128 GB | 768 GB |
A VMware reservation is a guaranteed allocation of CPU or memory for a virtual machine. For vCPU, each virtual machine received a six-core reservation, so with six virtualized databases the total vCPU reservation on the server was 36 cores (six virtual machines, each with six cores). The PowerEdge R740 is a two-socket server, and our test configuration used two Intel Xeon Gold 6254 CPUs. Each CPU has 18 cores, for a total of 36 physical cores, which matches the total number of vCPUs reserved across the six virtual machines.
Although the virtual machines reserved all the physical cores, the PowerEdge R740 servers have logical processors enabled. When logical processors are enabled, the hypervisor presents twice as many processor cores: in our configuration, the 36 physical cores are presented as 72 logical cores (36 x 2). Logical processors can boost performance by enabling more threads to execute in parallel. Our six virtualized databases therefore used half of the available processor resources: 36 cores were reserved out of the 72 available.
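The core arithmetic above can be sketched as a quick sanity check. All values come from the text; nothing here is part of the validated configuration:

```python
# Baseline CPU arithmetic for the PowerEdge R740 test server
# (two Intel Xeon Gold 6254 CPUs, six VMs with six vCPUs each).
SOCKETS = 2
CORES_PER_SOCKET = 18      # Xeon Gold 6254 is an 18-core part
VMS = 6
VCPU_PER_VM = 6

physical_cores = SOCKETS * CORES_PER_SOCKET   # 36 physical cores
logical_cores = physical_cores * 2            # 72 with logical processors enabled
reserved_vcpus = VMS * VCPU_PER_VM            # 36 vCPUs reserved

# The reservation equals the physical core count,
# which is half of the logical core count.
assert reserved_vcpus == physical_cores
assert reserved_vcpus == logical_cores // 2
print(physical_cores, logical_cores, reserved_vcpus)  # 36 72 36
```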
No best practice dictated how many processor cores to allocate to each virtual machine in the baseline configuration. Rather, the target of 80 percent CPU utilization guided the number of vCPUs per virtualized database. Testing showed that the server reached 80 percent utilization with six virtualized databases of six vCPUs each when running the HammerDB OLTP workloads.
The memory configuration included a reservation of 128 GB per virtual machine. With six virtual machines, a total of 768 GB of memory was reserved. The server had 1.5 TB of memory, so the virtual machines used half of the available physical memory. In allocating memory to each virtual machine, the goal was to ensure that each VM fit into a single physical NUMA (non-uniform memory access) node. NUMA refers to a multiprocessor design in which each physical CPU has a dedicated local memory bank. Data can be retrieved faster from the local memory bank than from a remote one. When a virtual machine's memory spans two memory banks, extra wait times are incurred, causing suboptimal performance.
To prevent this suboptimal NUMA placement, each virtual machine was allocated 128 GB of memory, enabling it to use only its local memory bank. The VMware ESXi hypervisor is responsible for placing virtual machine memory on a processor and is designed to optimize this placement. By reserving an amount of memory that lets each virtual machine stay within a local memory bank, we optimized performance for all the virtual machines on the server.
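The NUMA-fit reasoning can be checked with a short sketch. It assumes the 1.5 TB of server memory is split evenly across the two sockets (one NUMA node per socket) and that the six VMs spread evenly, three per node; these assumptions are illustrative, not stated measurements:

```python
# Verify each VM's memory reservation fits within one NUMA node.
TOTAL_MEMORY_GB = 1536   # 1.5 TB in the server
NUMA_NODES = 2           # assumption: one node per socket, memory split evenly
VM_MEMORY_GB = 128
VMS_PER_NODE = 3         # assumption: six VMs spread evenly across two nodes

node_memory_gb = TOTAL_MEMORY_GB // NUMA_NODES       # 768 GB local to each node
per_node_reservation = VMS_PER_NODE * VM_MEMORY_GB   # 384 GB reserved per node

# A single VM, and each node's share of VMs, fits in local memory,
# so no VM's memory needs to span two memory banks.
assert VM_MEMORY_GB <= node_memory_gb
assert per_node_reservation <= node_memory_gb
print(node_memory_gb, per_node_reservation)  # 768 384
```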
The PowerMax supports parent and child storage groups, which enable the administrator to create a configuration that facilitates ease of management, efficient snapshots, and replication. For the baseline storage configuration, all six virtualized databases used the same design. The following table shows the baseline storage configuration; the database was named SAGE.
Table 6: Baseline storage configuration
| Parent storage group | Child storage group | Number of LUNs | VMware volumes | Volume size (GB) | Notes |
|---|---|---|---|---|---|
| SAGE_DB1 | DB1_DATA | 1 | SG_DB1_D1 | 1,000 | Data files |
| SAGE_DB1 | DB1_REDO | 1 | SG_DB1_R1 | 50 | Redo files |
| SAGE_DB1 | DB1_ARCH | 1 | SG_DB1_A1 | 100 | Archive files |
| SAGE_GRID1 | (none) | 3 | SG_DB1_GRID1, SG_DB1_GRID2, SG_DB1_GRID3 | 50 (each) | Grid infrastructure |
| SAGE_FRA | (none) | 1 | SG_DB1_FA | 100 | Flash recovery area |
| SAGE_TEMP | (none) | 1 | SG_DB1_TM | 500 | Temporary files |
A naming convention using DBn allowed the database administrators to increment the value to create storage groups for all the databases. For example, the second copy of the database had a parent storage group name of SAGE_DB2 with child groups of DB2_DATA, DB2_REDO, and DB2_ARCH and the parent group SAGE_GRID2 with VMware volumes SG_DB2_GRID1-3. Using this naming convention allowed the administration team to repurpose copies of the database quickly.
For SAGE_FRA and SAGE_TEMP, there were only parent storage groups. Under each parent storage group were the LUNs and VMware VMFS datastores for each database. The following table shows how this configuration works for the first three databases. For SAGE_FRA and SAGE_TEMP, the VMware VMFS volume names used DB<n> to indicate which database the volume serviced. For example, SG_DB1_FA belonged to the first database and SG_DB2_FA to the second.
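The DBn convention lends itself to scripting. A minimal sketch of a name generator, assuming the baseline layout described above (the helper name itself is illustrative, not part of the deployment):

```python
def storage_names(n: int) -> dict:
    """Generate the baseline storage-group and volume names for database n."""
    return {
        "parent_db": f"SAGE_DB{n}",
        "children": [f"DB{n}_DATA", f"DB{n}_REDO", f"DB{n}_ARCH"],
        "parent_grid": f"SAGE_GRID{n}",
        "grid_volumes": [f"SG_DB{n}_GRID{i}" for i in (1, 2, 3)],
        "fra_volume": f"SG_DB{n}_FA",   # under the shared SAGE_FRA parent
        "temp_volume": f"SG_DB{n}_TM",  # under the shared SAGE_TEMP parent
    }

# The second database's names, matching the example in the text.
names = storage_names(2)
print(names["parent_db"], names["children"])
# SAGE_DB2 ['DB2_DATA', 'DB2_REDO', 'DB2_ARCH']
```

Incrementing n this way is what let the administration team stamp out and repurpose database copies quickly.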
Table 7: Storage configuration with first three databases
| Parent storage group | Child storage group | Number of LUNs | VMware volumes | Volume size (GB) | Notes |
|---|---|---|---|---|---|
| SAGE_FRA | (none) | 3 | SG_DB1_FA | 100 | Database 1 |
| | | | SG_DB2_FA | 100 | Database 2 |
| | | | SG_DB3_FA | 100 | Database 3 |
| SAGE_TEMP | (none) | 3 | SG_DB1_TM | 500 | Database 1 |
| | | | SG_DB2_TM | 500 | Database 2 |
| | | | SG_DB3_TM | 500 | Database 3 |
For performance monitoring, the storage administrator can examine the storage groups to analyze performance. For example, if there is concern about reads and writes to data files, the administrator can monitor the DB1_DATA child storage group. This ability to monitor individual parts of the database at the storage level supports quick performance analysis. In the storage best practices, we build on this storage configuration to optimize database performance.
The Oracle database configuration includes many default settings that reflect a new installation of the database. In the following table, a subset of the database configuration parameters is listed to show the baseline settings:
Table 8: Database configuration parameters
| Parameter | Value |
|---|---|
| SGA_TARGET | 64 GB |
| PGA_AGGREGATE_TARGET | 10 GB |
| ORACHK utility | N/A |
| DB_BLOCK_SIZE | 8 KB |
| PROCESSES, SESSIONS, TRANSACTIONS | 3000, 4524, 4976 |
| UNDO_MANAGEMENT, UNDO_TABLESPACE | AUTO, UNDOTBS1 |
| FILESYSTEMIO_OPTIONS | NONE |
| DISK_ASYNCH_IO | TRUE |
| DB_FILE_MULTIBLOCK_READ_COUNT | 128 |
| DB_WRITER_PROCESSES | 1 |
| Redo log file size | 200 MB (each) |
| Redo log block size | 512 bytes |
| LOG_CHECKPOINT_INTERVAL | 0 |
| LOG_CHECKPOINT_TIMEOUT | 1800 |
| LOG_BUFFER | 115,200 KB |
| ASM striping / AU size | 4 MB |
| Redo / temp file striping | COARSE |
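Several of these baseline settings are ordinary initialization parameters and could be captured as a parameter-file fragment. The sketch below renders a subset; it is illustrative only, and rows such as the ORACHK utility, redo log size, and ASM AU size are configured outside the parameter file:

```python
# Baseline init-parameter subset from the table above, rendered as
# name=value lines in pfile (init.ora) style. Values mirror the table;
# this is a sketch, not the deployed parameter file.
baseline = {
    "sga_target": "64G",
    "pga_aggregate_target": "10G",
    "db_block_size": "8192",
    "processes": "3000",
    "filesystemio_options": "NONE",
    "disk_asynch_io": "TRUE",
    "db_file_multiblock_read_count": "128",
    "db_writer_processes": "1",
    "log_checkpoint_interval": "0",
    "log_checkpoint_timeout": "1800",
}

pfile = "\n".join(f"{name}={value}" for name, value in baseline.items())
print(pfile)
```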
Reviewing these configuration parameters, the overall profile can be described as an entry-level database. An entry-level configuration provides a foundation on which to apply best practices. Over the course of validating best practices, the database configuration will change to optimize performance. Starting with a minimal configuration and adding best practices has the benefit of enabling customers to address database growth challenges using these validated recommendations.