The database servers tested in this Ready Solution for Oracle—PowerEdge R940-based physical production database servers, a PowerEdge R740-based physical XVC databases server, and a PowerEdge R940-based virtual databases server—were designed and configured according to the best practices described in this section. The following table summarizes the memory configuration of each server.
Table 9. Database servers: Memory DIMM capacities and quantities
Use case | Server type | Number of CPU sockets | DIMMs per channel populated | DIMMs per socket populated | Per-DIMM capacity | Total physical DRAM |
OLTP PROD Database | R940 | 4 | 1 | 6 | 64 GB | 4 x 6 x 64 = 1,536 GB |
XVC Databases | R740 | 2 | 2 | 12 | 32 GB | 2 x 12 x 32 = 768 GB |
Virtual Databases | R940 | 4 | 2 | 12 | 32 GB | 4 x 12 x 32 = 1,536 GB |
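As a quick sanity check of the totals in the table, each server's physical DRAM is simply the number of CPU sockets multiplied by the DIMMs populated per socket and the per-DIMM capacity. The following shell arithmetic (an illustrative sketch only) reproduces the three totals:
$> echo $((4 * 6 * 64))    # OLTP PROD R940: 1536 GB
$> echo $((2 * 12 * 32))   # XVC R740: 768 GB
$> echo $((4 * 12 * 32))   # Virtual R940: 1536 GB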
For additional recommended best practices that were implemented on the physical database servers, see RHEL 7.4 as the bare-metal operating system.
The virtualized environment in this Ready Solution for Oracle was tested with two virtual databases deployed on a single R940-based VMware ESXi host.
Figure 6. Single ESXi host with two VMs running two virtual databases
As shown in the figure above, the ESXi host contains two VMs, each running one virtual database.
The ESXi host and the VMs were configured, monitored, and maintained using VMware vSphere Web Client and VMware vCenter Server Appliance (VCSA), which was deployed as a VM on the management server.
In the PowerEdge R940-based virtual databases server, we deployed ESXi 6.5 U2 as the hypervisor. We applied best practices in our test environment, as described in the following sections.
Note: We used the XtremIO Host Configuration Guide to apply best practices. See this guide for the complete list of best practices for XtremIO storage in a VMware ESXi host environment.
The following table lists the default and the recommended HBA queue depth settings in ESXi 6.5 hosts connecting to XtremIO X2 storage arrays:
Table 10. HBA queue depth settings in ESXi-based hosts
Parameter | Default value | Recommended value |
LUN Queue Depth | QLogic: 64; Emulex: 30 | QLogic: 256; Emulex: 128 |
HBA Queue Depth | QLogic: N/A; Emulex: 8192 | QLogic: N/A; Emulex: 8192 (maximum) |
We set the LUN queue depth to the recommended value of 256 for the QLogic HBAs used in our virtualized databases server. This setting allows the host to keep an optimal number of outstanding SCSI commands (including I/O requests) queued to the XtremIO X2 storage arrays.
Note: The QLogic HBA Queue Depth setting is no longer read by vSphere; therefore, it is not relevant when configuring a vSphere host with QLogic HBAs.
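As an illustration of how the LUN queue depth can be raised, QLogic HBAs driven by the inbox qlnativefc driver accept the ql2xmaxqdepth module parameter. The following commands are a sketch based on common XtremIO host-configuration guidance; verify the driver module name on your host, and note that a host reboot is required for the new value to take effect:
$> esxcli system module parameters set -m qlnativefc -p ql2xmaxqdepth=256
$> esxcli system module parameters list -m qlnativefc | grep ql2xmaxqdepth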
The virtual databases ESXi host was configured by using vSphere Native Multipathing (NMP). For optimal performance with XtremIO X2 storage, the recommended settings on the host are the round-robin (VMW_PSP_RR) path selection policy with a path switching frequency of one I/O.
Note: In ESXi 6.5, the default path selection policy is round-robin and the default path switching frequency is 1. Therefore, no change was needed in our virtualized databases server.
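On hosts where the defaults differ, an NMP SATP claim rule similar to the following can be added so that all XtremIO volumes are claimed with the round-robin policy and a path switch after every I/O. This is a sketch modeled on typical XtremIO guidance; confirm the exact rule for your ESXi version in the XtremIO Host Configuration Guide:
$> esxcli storage nmp satp rule add -c tpgs_off -e "XtremIO Active/Active" -M XtremApp -P VMW_PSP_RR -O iops=1 -s VMW_SATP_DEFAULT_AA -t vendor -V XtremIO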
In addition, the maximum number of outstanding I/O requests with competing worlds was set to the recommended value of 256 for each XtremIO volume presented to the host.
To get the list of all XtremIO volumes, type:
$> esxcli storage nmp path list | grep XtremIO -B1 | grep "\ naa" | sort | uniq
To set the value for each volume, type:
$> esxcli storage core device set -d <naa.xxx> -O 256
where <naa.xxx> is the XtremIO volume obtained from the previous command.
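The two steps can also be combined into a single loop that applies the setting to every XtremIO volume on the host. This is a convenience sketch built from the two commands above; it assumes the filtered "Device:" lines end with the naa identifier, which awk extracts as the last field:
$> for d in $(esxcli storage nmp path list | grep XtremIO -B1 | grep "\ naa" | awk '{print $NF}' | sort | uniq); do esxcli storage core device set -d $d -O 256; done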
We used the following design principles and best practices, summarized in the two tables below, to create the VMs in this Ready Solution for Oracle:
Table 11. SCSI controller properties set in VMs
Controller | Purpose | SCSI bus sharing | Controller type |
SCSI 0 | Guest OS disk | None | VMware Paravirtual |
SCSI 1 | Oracle DATA disks | Physical | VMware Paravirtual |
SCSI 2 | Oracle REDO disks | Physical | VMware Paravirtual |
SCSI 3 | Oracle OCR, GIMR, FRA, TEMP | Physical | VMware Paravirtual |
Table 12. VM configuration: vCPU and vMem details
VM | Number of vCPUs | vMem reservation (GB) | vMem total (GB) |
VM1 | 18 | 120 | 140 |
VM2 | 18 | 120 | 140 |
In the PowerEdge R940-based production database servers and the PowerEdge R740-based XVC databases server, we deployed RHEL 7.4 as the bare-metal operating system.
Note: We used the XtremIO Host Configuration Guide to apply best practices. See this guide for the complete list of best practices for XtremIO storage in a Linux environment.
The following table lists the default and recommended HBA queue depth settings for a Linux environment.
Table 13. HBA queue depth settings in Linux-based servers
Parameter | Default value | Recommended value |
LUN Queue Depth | QLogic: 32; Emulex: 30 | QLogic: keep default value; Emulex: keep default value |
HBA Queue Depth | QLogic: 32; Emulex: 8192 | QLogic: 65535 (maximum); Emulex: 8192 (maximum) |
Note: We kept the default queue depth values in our physical database servers because we used Emulex HBAs.
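The values in effect can be confirmed through sysfs, because the inbox lpfc driver exposes its module parameters there (a quick check under that assumption; the exact paths may vary by kernel):
$> cat /sys/module/lpfc/parameters/lpfc_lun_queue_depth
$> cat /sys/module/lpfc/parameters/lpfc_hba_queue_depth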
The recommended I/O elevator setting in the RHEL operating system running in the database servers connecting to XtremIO X2 storage arrays is either deadline or noop. The cfq I/O elevator setting is not recommended. In our physical database servers, we used deadline, which is the default I/O elevator setting in RHEL 7.4.
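The elevator in use for a given path device can be verified, and changed at runtime if necessary, through sysfs. In this sketch, /dev/sdb is a placeholder for one of the XtremIO path devices; the bracketed name in the output is the active elevator:
$> cat /sys/block/sdb/queue/scheduler
noop [deadline] cfq
$> echo deadline > /sys/block/sdb/queue/scheduler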
We configured the physical database servers using Linux Native Multipathing available in the RHEL 7.4 operating system. We created the /etc/multipath.conf configuration file for the multipath daemon with the following recommended settings:
devices {
    device {
        vendor                XtremIO
        product               XtremApp
        path_grouping_policy  multibus
        path_checker          tur
        path_selector         "queue-length 0"
        rr_min_io_rq          1
        user_friendly_names   yes
        fast_io_fail_tmo      15
        failback              immediate
    }
}
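After the file is saved, the multipath daemon picks up the new settings on a reload, and the resulting device maps can be inspected with the standard device-mapper-multipath tools on RHEL 7:
$> systemctl reload multipathd
$> multipath -ll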
We partitioned the database disks or XtremIO volumes presented to the Linux-based physical database servers by using fdisk with the default starting sector value of 2,048. This setting ensures that the starting sector number is a multiple of 16 (16 sectors, at 512 bytes each, is 8 KB). Therefore, each database disk is correctly aligned with the XtremIO storage LUN striping.
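For example, the alignment of a partitioned volume can be confirmed by listing its partition table; the device name below is a placeholder for one of the multipath devices:
$> fdisk -l /dev/mapper/mpatha
The Start value reported for the partition should be 2048, which is evenly divisible by 16 sectors (8 KB).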
In this Ready Solution for Oracle, we deployed RHEL 6.9 as the guest operating system in VM1, which runs the Oracle 11g R2 database, and RHEL 7.4 as the guest operating system in VM2, which runs the Oracle 12c R2 database.
In each guest operating system, we adhered to the following recommended best practices.
For optimal XtremIO X2 storage performance in a VMware environment, we recommend using the PVSCSI controller and driver in the guest VMs. In this Ready Solution for Oracle, we ensured that the inbox RHEL vmw_pvscsi driver module was loaded and used in the guest operating systems.
Note: The PVSCSI driver is used only when the SCSI controller type is set to VMware Paravirtual in the VM settings.
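Whether the paravirtual driver is actually loaded can be confirmed from inside the guest with standard module tools:
$> lsmod | grep vmw_pvscsi
$> modinfo vmw_pvscsi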
The following table shows the default and the recommended vmw_pvscsi parameter settings.
Table 14. PVSCSI parameter settings in guest operating systems
Parameter | Default value | Recommended value |
vmw_pvscsi.cmd_per_lun | RHEL 6: 64; RHEL 7: 254 | RHEL 6: 254; RHEL 7: 254 |
vmw_pvscsi.ring_pages | RHEL 6: 8; RHEL 7: 8 | RHEL 6: 32; RHEL 7: 32 |
The parameters and their respective recommended values in this table were appended to the kernel boot arguments, as shown in the following example.
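On RHEL 7, this is typically done by extending GRUB_CMDLINE_LINUX in /etc/default/grub and regenerating the GRUB configuration, as in the sketch below (the ellipsis stands for the existing boot arguments); on RHEL 6, the equivalent parameters go on the kernel line of /boot/grub/grub.conf:
GRUB_CMDLINE_LINUX="... vmw_pvscsi.cmd_per_lun=254 vmw_pvscsi.ring_pages=32"
$> grub2-mkconfig -o /boot/grub2/grub.cfg
$> reboot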
Other guest operating system settings were configured as follows: