We used the following design principles and best practices to create the database VMs for the Oracle databases.
We recommend multiple SCSI controllers of type VMware Paravirtual to optimize and balance the I/O for the different Oracle database hard disks, as described in this section.
The following table shows the recommended SCSI controller design for the OLTP and snapshot database VMs. The DATA and REDO disks are distributed across separate dedicated SCSI controllers because in an Oracle OLTP workload both these types of disks generate a high level of I/O to the storage. In contrast, the OCR, FRA, and TEMP disks generate relatively little I/O and, hence, can exist together on a separate dedicated SCSI controller.
Table 34. SCSI controller properties in the OLTP and snapshot database VMs
| Controller | Purpose | SCSI bus sharing | Controller type |
|------------|---------|------------------|-----------------|
| SCSI 0 | Guest operating system disk | None | VMware Paravirtual |
| SCSI 1 | Oracle DATA disks | None | VMware Paravirtual |
| SCSI 2 | Oracle REDO disks | None | VMware Paravirtual |
| SCSI 3 | Oracle OCR, FRA, and TEMP disks | None | VMware Paravirtual |
The following table shows the recommended SCSI controller design for the DSS database VM. DSS workloads generate mostly read I/O to the DATA disks, with little I/O to the REDO disks. Therefore, as shown in the following table, the 10 DATA disks are distributed across three dedicated SCSI controllers for load balancing, while the remaining light-I/O disks (the guest operating system, OCR, REDO, FRA, and TEMP disks) share the first SCSI controller.
Table 35. SCSI controller properties in the DSS database VM
| Controller | Purpose | SCSI bus sharing | Type |
|------------|---------|------------------|------|
| SCSI 0 | Guest operating system, OCR, REDO, FRA, and TEMP disks | None | VMware Paravirtual |
| SCSI 1 | 3 Oracle DATA disks | None | VMware Paravirtual |
| SCSI 2 | 3 Oracle DATA disks | None | VMware Paravirtual |
| SCSI 3 | 4 Oracle DATA disks | None | VMware Paravirtual |
The virtual hard disks for the VMs' guest operating systems are created on a VMFS6 datastore. All the volumes or virtual hard disks for the Oracle databases (DATA, REDO, FRA, TEMP, and OCR disks) are added directly to their respective VMs as raw devices through raw device mapping (RDM). For these raw devices, ESXi creates a mapping file with the .vmdk extension and saves it on a VMFS datastore; however, the mapping file contains only mapping information, while the data itself is stored directly on the storage LUN.
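An RDM mapping file of this kind can also be created from the ESXi command line. The following is a hedged sketch only: the device NAA ID, datastore name, and file names are placeholders, not values from this reference architecture.

```
# Create a physical-compatibility-mode RDM mapping file (-z) for a LUN.
# The NAA ID, datastore, and file names below are placeholders.
vmkfstools -z /vmfs/devices/disks/naa.600601600f0043000000000000000000 \
  /vmfs/volumes/datastore1/oradb-vm/oradb-vm_rdm.vmdk
```

The resulting .vmdk mapping file is then attached to the VM like any other virtual disk, while guest I/O passes through to the LUN.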
We left all the vCPU and memory properties of the Oracle database VMs at their default values except for Memory Reservation. We allocated different quantities of vCPUs, memory, and memory reservations to the different types of database VMs. For details about the values allocated during each of the test cases, see Table 2 and Table 7.
We added two virtual network adapters to each of the database VMs: one for in-band VM management and one for Oracle public traffic. We configured both adapters with the recommended adapter type, VMXNET 3. For details about the configuration of virtual switches and physical adapters, see Compute and network design in Appendix B.
For each VM, under VM Options advanced settings, we added the disk.enableUUID configuration parameter and set its value to TRUE. This setting ensures that the VMDK always presents a consistent disk UUID to the VM.
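When added through the VM Options advanced settings, the parameter is written into the VM's configuration file as a single line. A minimal fragment of the .vmx file (key casing may vary depending on how it is entered):

```
disk.enableUUID = "TRUE"
```

With this set, the guest can read a stable SCSI disk serial via `scsi_id`, which the udev rules described later in this section depend on.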
In this reference architecture, we used the following best practices to deploy and configure Red Hat Enterprise Linux 7.4 as the guest operating system in the VMs that were running the standalone Oracle databases:
We also followed these important best practices. For example, we used udev rules to assign persistent device links, ownership, and permissions to the Oracle raw devices, as in the following rule for the REDO1 device:

KERNEL=="sd[a-z]*[1-9]", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="3600601600f004300accaed5bd9741db5", SYMLINK+="oracleasm/disks/ora-redo1", OWNER="grid", GROUP="asmadmin", MODE="0660"
As described in VM design and configuration, we mapped all Oracle database-related LUNs that were presented to the ESXi host from the PowerMax storage array directly as raw devices to their respective database VMs using raw device mapping (RDM). In compliance with the Oracle database requirements, we assigned the ownership of the raw devices to the grid user, which owns the Oracle GI and Oracle Automatic Storage Management (ASM) installations. The device link for these Oracle-related raw devices is /dev/oracleasm/disks/ora-XXX. For example, /dev/oracleasm/disks/ora-redo1 is the device link for the REDO1 LUN/raw device.
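After adding or editing udev rules such as the example above, the rules can be reloaded and the resulting links verified without a reboot. A minimal sketch (the device link name is taken from the example rule):

```
# Reload udev rules and re-trigger block-device events
udevadm control --reload-rules
udevadm trigger --type=devices --action=add

# Confirm the persistent link exists with grid:asmadmin ownership
ls -l /dev/oracleasm/disks/ora-redo1
```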
The following table shows the Oracle disk groups that we created for the OLTP database using the raw devices or the virtual disks presented to the VM from the storage array. Except for the OCR disk group, which uses normal redundancy (with triple mirroring), all other disk groups use the external redundancy setting. Coarse striping is recommended for the DATA, FRA, and OCR disk groups, and fine-grain striping is recommended for the REDO1, REDO2, and TEMP disk groups.
Table 36. ASM disk group design for the OLTP database
| ASM disk group | Purpose | Redundancy | ASM striping | ASM disk group size (GB) | LUN | LUN size (GB) |
|----------------|---------|------------|--------------|--------------------------|-----|---------------|
| DATA | Data files, control files, undo tablespace | External redundancy | Coarse | 2,000 | DATA00 | 500 |
| | | | | | DATA01 | 500 |
| | | | | | DATA02 | 500 |
| | | | | | DATA03 | 500 |
| FRA | Archive log files | External redundancy | Coarse | 200 | FRA0 | 100 |
| | | | | | FRA1 | 100 |
| REDO1 | Online redo logs | External redundancy | Fine-grain | 50 | REDO0 | 25 |
| | | | | | REDO1 | 25 |
| REDO2 | Online redo logs | External redundancy | Fine-grain | 50 | REDO2 | 25 |
| | | | | | REDO3 | 25 |
| TEMP | Temp files | External redundancy | Fine-grain | 500 | TEMP | 500 |
| OCR | OCR, voting disk, GIMR | Normal redundancy | Coarse | 50 | OCR0 | 50 |
| | | | | | OCR1 | 50 |
| | | | | | OCR3 | 50 |
Note: The ASM disk group design for the DSS database and the snapshot database is identical to the OLTP database ASM disk group design that is shown in the table with the following exception: In the DSS disk group design, the DATA disk group has eight 500 GB disks for a total disk group size of 4 TB and the TEMP disk group has one 2 TB disk.
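As an illustration of how such a layout maps to ASM commands, a disk group like DATA could be created, and fine-grain striping applied to a redo disk group, as follows. This is a sketch only: the device paths follow the udev link convention described earlier in this section and are assumptions, not commands captured from the test environment.

```
-- Sketch only: device paths are assumptions based on the udev links above.
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
       DISK '/dev/oracleasm/disks/ora-data00',
            '/dev/oracleasm/disks/ora-data01',
            '/dev/oracleasm/disks/ora-data02',
            '/dev/oracleasm/disks/ora-data03';

-- Striping granularity is controlled per file-type template; for example,
-- to set fine-grain striping on the online redo log template in REDO1:
SQL> ALTER DISKGROUP REDO1 ALTER TEMPLATE onlinelog ATTRIBUTES (FINE);
```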
Oracle ASM includes a compact phase at the end of disk rebalancing that moves data to the higher-performance tracks of spinning disks. This feature provides no benefit for Dell EMC PowerMax storage, where physical storage is virtualized and flash devices are used. You can disable compact rebalancing by running the alter diskgroup command for each disk group. The following example shows the command for the DATA disk group:
SQL> alter diskgroup DATA set attribute '_rebalance_compact' = 'FALSE';
For more information about ASM compact rebalancing, see Oracle Support note 1902001.1.