Notice: Dell Technologies announced the availability of new PowerStore models in July 2022. The new PowerStore models offer greater performance, more features, and more capacity compared to previous storage arrays. In this paper we optimized Oracle database performance on the PowerStore 5000T. All the recommendations and best practices in this paper also apply to the new PowerStore arrays.
The primary goal of the Oracle best practices is to improve customer outcomes using Dell infrastructure. Determining how an Oracle database should be optimally designed can be challenging for customers, as the research is time-intensive and often confusing. This Oracle best practice program centralizes and validates recommendations to ensure that our customers can successfully run an Oracle database on Dell infrastructure.
Our approach to validating best practices starts with gathering recommendations from multiple sources and categorizing the guidelines into different layers of the database infrastructure. For example, in the case of the optimal PowerEdge configuration for databases, the team pulled best practices from current manuals and held meetings to review the findings with PowerEdge experts. We actively worked with experts and engineers responsible for each part of the Oracle infrastructure throughout each stage of the tests.
Technical expertise in each component, and the proper order of implementation, are critical to getting the most performance out of the infrastructure. For a new infrastructure deployment, all best practices can be applied at once before any benchmark is run, so ordering has no effect. However, customers applying these best practices to an existing environment should apply them in the order that provides the best performance results. The following list describes each infrastructure layer that was validated for best practices, in the order tested:
Before starting the validation work, the Engineering team conducted internal load testing to determine the workload profile. HammerDB was used to generate an Online Transaction Processing (OLTP) workload that simulates enterprise applications. The goal of generating a significant load on the Oracle infrastructure was to ensure that the system was sufficiently taxed to show how best practices optimized performance. In this case, the initial target was four VMs, each with sixteen processor cores. The HammerDB workload configuration is shown in Table 1.
Table 1: HammerDB workload configuration
| Setting name | Value |
| --- | --- |
| Total transactions per user | 1,000,000 |
| Number of warehouses | 5,000 |
| Minutes of ramp up time | 10 |
| Minutes of test duration | 50 |
| Use all warehouses | Yes |
| User delay (ms) | 500 |
| Repeat delay (ms) | 500 |
| Iterations | 1 |
Each best practice was validated in an hour-long workload test: 10 minutes of ramp-up time plus 50 minutes of test duration. This hour-long time frame ensured that the database system reached a consistent state of performance. Reaching a consistent run state validates whether the configuration is stable and whether the best practice shows value over time.
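As a quick sanity check, the timing values from Table 1 can be combined to derive the per-test window described above (a minimal sketch using only the published settings):

```python
# Workload timing settings from Table 1 (HammerDB OLTP profile).
rampup_minutes = 10
duration_minutes = 50
iterations = 1

# Each validation run is one hour: ramp-up plus measured test duration.
test_window_minutes = (rampup_minutes + duration_minutes) * iterations
print(test_window_minutes)  # 60
```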
The HammerDB parameter “Use all warehouses” forces the use of all 5,000 warehouses. This causes the workload to generate more I/O on the storage array. The first set of best practices compares baseline database performance without an optimal storage configuration to a database configuration with an optimal storage configuration.
New Orders per Minute (NOPM) and Transactions per Minute (TPM) are the metrics used to interpret the HammerDB results. These metrics derive from the TPC-C benchmark and indicate the result of a test. During best practice validation, we compare these metrics against the baseline to confirm an increase in performance.
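When comparing a tuned run against the baseline, the relative gain in NOPM or TPM can be computed with a small helper. The sample values below are hypothetical and are used for illustration only, not measured results:

```python
def percent_gain(baseline: float, tuned: float) -> float:
    """Relative improvement of a tuned result over the baseline, in percent."""
    return (tuned - baseline) / baseline * 100.0

# Hypothetical NOPM figures, purely illustrative.
baseline_nopm = 100_000
tuned_nopm = 125_000
print(f"{percent_gain(baseline_nopm, tuned_nopm):.1f}%")  # 25.0%
```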
The team tested each component sequentially and documented the results within each layer of the infrastructure. For example, within the storage layer, the goal was to show how optimizing the number of volumes for the DATA, REDO, and FRA disk groups improves the performance of an Oracle database.
The working theory was that we would observe a gain in performance with each additional test configuration. Thus, by the final test, which changed the database parameters, an overall optimal Oracle database solution would be achieved.
Not all best practices are created equal. To enhance the value of the best practices, we identified which configuration changes produce the greatest results. This enables anyone reviewing the best practices to easily identify the recommendations that provide the most value. Best practices were categorized as follows:
When reviewing the best practices, consider the day and the value of each recommendation together. Here are some examples:
With this ranking of best practices, customers can quickly decide which recommendations will provide the best value. For example, one or two best practices from the section on Linux optimization might provide most of the value, depending on the customer's needs. Investing time in implementing those best practices provides the greatest return. This approach can be taken for each layer of the database system until all Day 1, highly recommended best practices are completed. This differentiates how we present best practices and provides a pathway for customers to achieve the best return on investment.
Best practices are broad recommendations designed to apply to most Oracle databases using Dell infrastructure. Every database workload and system is different, so the value of these best practices will vary from system to system. As with any change to a database system, we recommend testing and validating each best practice before implementing it on a production system.
The architecture was designed to broadly represent the infrastructure that customers use for their Oracle databases. Dell PowerEdge R750xs servers were used for the compute layer. Each PowerEdge R750xs was configured identically to ensure consistency of test results. Two Intel® Xeon® Gold 6338 processors with 32 physical cores each were used, for a total of 64 cores. The default server configuration enables logical processors, so 128 cores were available at the hypervisor layer. The following table shows the configuration details:
Table 2: Server configuration
| Component | Details |
| --- | --- |
| Processors | |
| Memory | |
| Network adapters | |
| HBA | |
To learn more about the server, visit the PowerEdge R750xs page and download the specification sheet.
Dell Connectrix switches were used for connectivity from the servers to the PowerStore 5000T storage array. The Connectrix DS-6620B is designed to support medium-to-large sized database deployments. The Connectrix configuration used in our tests included 8 active 32 Gb/s ports to optimize the connection to the PowerStore storage.
The PowerStore storage array is cloud-ready and offers enterprise-rich data services such as snapshots, replication, and many other features in a small footprint. Out of the box, the PowerStore storage array is highly optimized and requires little manual tuning. The PowerStore 5000T configuration used in our tests contained 21 NVMe drives, each 1.92 TB in size.
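From the drive count and size quoted above, the raw (pre-RAID) capacity of the array can be estimated; actual usable capacity, shown in Table 3, is lower once resiliency and spare space are accounted for:

```python
# Drive configuration from the PowerStore 5000T under test.
drives = 21
drive_size_tb = 1.92

# Raw capacity only; usable capacity depends on the resiliency scheme.
raw_capacity_tb = drives * drive_size_tb
print(f"{raw_capacity_tb:.2f} TB raw")
```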
The following table gives full details of the PowerStore 5000T configuration:
Table 3: PowerStore 5000T configuration details
| Component | Details |
| --- | --- |
| Processors | |
| Cache size | |
| Drives | |
| Total usable capacity | |
| Front-end I/O modules | |
Many production database systems use dedicated infrastructure. In our best practice tests, all the Dell infrastructure was reserved for the database. No parallel workloads were running and competing for CPU, network, and storage resources. Validating best practices in a dedicated environment helped mitigate variables that might negatively impact test results. Many of today's environments, however, are consolidated, and challenges can arise when tuning one database system on shared infrastructure.
With consolidated systems, implementing best practices might improve performance, but the gains might not be as significant due to the shared resources. These best practices can help resolve some of those challenges by integrating the Day 1, highly recommended configurations as part of provisioning an Oracle database. As the database ecosystem adopts best practices, overall system performance may rise, and consolidated systems might perform more efficiently. Best practices give the enterprise the ability to deploy databases with an optimal design and ecosystems capable of driving improved efficiencies.
Oracle offers one of the most popular databases in the world. We chose Oracle 19c for our best practices tests because it is the version that many customers are running. The Oracle database has a wide array of features and capabilities. In our final validation tests, the engineering team tested changes to the database configuration that optimize performance.
Virtualization of databases is continuously increasing. There are many advantages to virtualizing the database, including consolidation, agility, and ease of management. We chose to virtualize the Oracle database with VMware vSphere version 7.3. Virtualization adds another layer of configuration that applies to the database. Because virtualization impacts every part of the database infrastructure, we chose to integrate VMware best practices with each physical layer rather than having a dedicated section for virtualization. This integration should simplify review, as both the physical and the virtualization recommendations for each part of the infrastructure are presented together.
The Linux operating system is widely used to support Oracle databases. We used Red Hat Enterprise Linux version 8.5 for the tests because this Linux distribution provides the stability, reliability, and security required for databases. The same approach used for the VMware best practices was taken with Red Hat Enterprise Linux: Linux best practices are integrated into each physical layer of recommendations. Thus, Linux recommendations can be reviewed alongside the CPU best practices or any other physical layer.
The following table summarizes the software architecture used in validating best practices:
Table 4: Software architecture used
| Component | Version |
| --- | --- |
| Oracle database | Oracle Database EE 19c (19.0.0.0) with release update RU 19.15 |
| Operating system | Red Hat Enterprise Linux 8.5 |
| VMware vSphere | VMware 7.3 with vCenter 7.3 |
Several combinations of software architecture are possible. In testing with Oracle Database 19c, Red Hat Enterprise Linux 8.5, and VMware vSphere 7.3, the goal was to have a design that applies broadly to the database environments customers run today. Future best practices programs will address new versions of the Oracle database and advances in both the operating system and virtualization.
In this section, we cover the details of the baseline configuration. A baseline configuration was used for determining the initial workload and to enable a comparison with the first set of best practices.
To achieve a meaningful test result for a single R750xs server, four virtualized databases were run in parallel. After several tests using HammerDB and an OLTP workload, a baseline configuration was developed. The following table shows the vCPU and memory allocation for each virtualized database:
Table 5: vCPU and memory allocation
| Resource reservation | Baseline configuration per virtual machine | Total across four virtualized databases |
| --- | --- | --- |
| vCPU | 16 cores | 64 cores |
| Memory | 128 GB | 512 GB |
A VMware reservation is a guaranteed allocation of CPU or memory for the virtual machine. For vCPU, each virtual machine received a 16-core reservation. With four virtualized databases, the total vCPU reservation was 64 cores (four virtual machines, each with sixteen cores) on the server. The PowerEdge R750xs is a two-socket server, and our test configuration used two Intel® Xeon® Gold 6338 CPUs. Each CPU has 32 cores, for a total of 64 physical cores.
The PowerEdge R750xs servers enable logical processors by default. When logical processors are enabled, the hypervisor presents twice as many processor cores: in our configuration, the 64 physical cores are presented as 128 logical cores (64 x 2). Logical processors can boost performance by enabling more threads to execute in parallel. Our four virtual databases used 64 of the 128 available logical processor cores.
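The core accounting described above can be expressed directly as a sketch:

```python
# Physical topology of each PowerEdge R750xs under test.
sockets = 2
cores_per_socket = 32        # Intel Xeon Gold 6338
threads_per_core = 2         # logical processors (Hyper-Threading) enabled

physical_cores = sockets * cores_per_socket        # 64
logical_cores = physical_cores * threads_per_core  # 128

# vCPU reservations for the four virtualized databases.
vms = 4
vcpus_per_vm = 16
reserved_vcpus = vms * vcpus_per_vm                # 64

# The database VMs consume half of the available logical cores.
assert reserved_vcpus <= logical_cores
print(physical_cores, logical_cores, reserved_vcpus)  # 64 128 64
```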
The memory configuration per virtual machine included a reservation of 128 GB. With four virtual machines, a total of 512 GB of memory was reserved. The total memory available in the server was 1 TB, so the virtual machines used half of the available physical memory. In allocating memory to each virtual machine, the goal was to ensure each VM fit into a physical non-uniform memory access (NUMA) node. NUMA refers to a multiprocessor configuration in which each physical CPU has a dedicated memory bank. Information can be retrieved faster from the local memory bank than from a remote one. When a virtual machine's memory spans two memory banks, wait times are incurred, causing sub-optimal performance.
To prevent this sub-optimal NUMA design, each virtual machine was allocated 128 GB of memory, enabling the use of only the local memory bank. The VMware ESXi hypervisor is responsible for the placement of memory on a processor and is designed to optimize performance. By reserving memory that enables each virtual machine to use a local memory bank, we optimized performance for all the virtual machines on the server.
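The NUMA sizing argument can be checked the same way: with 1 TB of RAM split across two sockets, each NUMA node holds 512 GB, so placing two 128 GB VMs per node keeps every VM entirely in local memory (a sketch using the figures quoted above):

```python
# Memory topology of the server under test.
total_memory_gb = 1024       # 1 TB installed in the R750xs
numa_nodes = 2               # one memory bank per socket
node_memory_gb = total_memory_gb // numa_nodes  # 512 GB per node

# Four VMs spread evenly across the two NUMA nodes.
vm_memory_gb = 128
vms_per_node = 2

# Memory reserved per node stays well inside the local bank.
reserved_per_node_gb = vms_per_node * vm_memory_gb
assert reserved_per_node_gb <= node_memory_gb
print(node_memory_gb, reserved_per_node_gb)  # 512 256
```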
Each ESXi host (R750xs server) has two dual-port Fibre Channel HBAs. The PowerStore 5000T under test has two Fibre Channel front-end I/O modules (one per node). Each I/O module has four 32 Gb/s ports, for a total of eight 32 Gb/s front-end ports. There are two Connectrix Fibre Channel switches, and each Connectrix switch is its own fabric. For connectivity, each ESXi host was zoned to all eight front-end ports of the PowerStore across the two switches. This configuration should be the default setup. The figure below shows the SAN zoning configuration.
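Because each Connectrix switch is an independent fabric, a host port can only reach the PowerStore front-end ports on its own fabric. Assuming the two dual-port HBAs are split evenly across the two fabrics (two initiator ports and four front-end ports per fabric), the per-host path count works out as follows; the split is an assumption for illustration, not stated in the zoning above:

```python
fabrics = 2
initiator_ports_per_fabric = 2  # assumption: one port of each dual-port HBA per fabric
target_ports_per_fabric = 4     # four 32 Gb/s ports per front-end I/O module

# Zoning each host to all front-end ports on each fabric yields:
paths_per_fabric = initiator_ports_per_fabric * target_ports_per_fabric
total_paths = fabrics * paths_per_fabric
print(total_paths)  # 16 paths from each ESXi host to the array
```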
Figure 1: SAN Zoning Configuration
The PowerStore volume group feature enables the administrator to create a configuration that facilitates ease of management, efficient snapshots, and replication. For the baseline storage configuration, all four virtualized databases (DB1 to DB4) used the same design. The following table shows the baseline storage configuration for DB1. The database used was named ORABP22.
Table 6: Baseline storage configuration for DB1
Volume Group | Volume Name | Number of LUNs | VMware Datastore | Volume size (GB) | Notes |
orabp22-vm1-os | orabp22-vm1-os | 1 | orabp22-vm1-os-ds | 400 | Operating System |
orabp22-vm1-grid | orabp22-vm1-grid-001 | 1 | orabp22-vm1-grid-001-ds | 50 | GRID |
orabp22-vm1-grid-002 | 1 | orabp22-vm1-grid-002-ds | 50 | ||
orabp22-vm1-grid-003 | 1 | orabp22-vm1-grid-003-ds | 50 | ||
orabp22-vm1-db1 | orabp22-vm1-db1-data-001 | 1 | orabp22-vm1-data-001-ds | 1000 | Data files |
orabp22-vm1-db1-redo-001 | 1 | orabp22-vm1-redo-001-ds | 55 | Online redo logs | |
orabp22-vm1-db1-fra-001 | 1 | orabp22-vm1-fra-001-ds | 60 | Flash Recovery Area | |
orabp22-vm1-temp | orabp22-vm1-db1-temp | 1 | orabp22-vm1-temp-ds | 500 | Temp files |
The naming convention using VMn and DBn allowed the database administrators to increment the value to create volume groups for other databases. For example, the second copy of the database had a volume group name of ORABP22_VM2_DB2. This naming convention allowed the administration team to repurpose copies of the database quickly. Note that each LUN was configured as a VMware datastore.
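Summing the volume sizes in Table 6 gives the total baseline capacity provisioned per virtualized database:

```python
# Volume sizes (GB) from Table 6, baseline storage configuration for DB1.
volumes_gb = {
    "os": 400,
    "grid": 3 * 50,   # three 50 GB GRID volumes
    "data": 1000,
    "redo": 55,
    "fra": 60,
    "temp": 500,
}

provisioned_gb = sum(volumes_gb.values())
print(provisioned_gb)  # 2165 GB provisioned per virtualized database
```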
For detailed PowerStore best practices for Oracle database, see Oracle Best Practices on PowerStore.
The Oracle database configuration includes many default settings that reflect a new installation of the database. In the following table, a subset of the database configuration parameters is listed to show the baseline settings:
Table 7: Database configuration parameters
| Parameter | Value |
| --- | --- |
| SGA_TARGET | 64 GB |
| PGA_AGGREGATE_TARGET | 10 GB |
| DB_BLOCK_SIZE | 8 K |
| PROCESSES | 640 |
| SESSIONS | 984 |
| TRANSACTIONS | 1082 |
| UNDO_MANAGEMENT | AUTO |
| UNDO_TABLESPACE | UNDOTBS1 |
| FILESYSTEMIO_OPTIONS | NONE |
| DISK_ASYNCH_IO | TRUE |
| DB_FILE_MULTIBLOCK_READ_COUNT | 128 |
| DB_WRITER_PROCESSES | 1 |
| Redo log file size | 200 MB (each) |
| ASM allocation unit size | 4 MB |
| Redo/temp file striping | COARSE |
In reviewing the configuration parameters, the overall profile can be labeled as an entry-level database. An entry-level database configuration provides a foundation on which to apply best practices. Over the course of validating best practices, the database configuration will change to optimize performance. This approach of starting with a minimal configuration and layering on best practices has the benefit of enabling customers to address database growth challenges using these validated recommendations.
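As a final check of the entry-level profile, the Oracle memory targets can be compared against the per-VM reservation from Table 5. The SGA and PGA targets together consume only a fraction of the 128 GB reservation, leaving headroom for the operating system and server processes:

```python
vm_memory_gb = 128   # per-VM memory reservation from Table 5

sga_target_gb = 64   # SGA_TARGET from the baseline parameters
pga_target_gb = 10   # PGA target from the baseline parameters

oracle_memory_gb = sga_target_gb + pga_target_gb
headroom_gb = vm_memory_gb - oracle_memory_gb

# The baseline leaves ample headroom inside the VM reservation.
assert oracle_memory_gb < vm_memory_gb
print(oracle_memory_gb, headroom_gb)  # 74 54
```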