Increasing the HBA queue depth can improve storage performance by allowing more I/O requests to be outstanding on the connections to storage. In this best practice, the HBA queue depth is increased to evaluate the effect on performance.
| Category | PowerMax Storage |
| Product | VMware ESXi: HBA and LUN Queue Depth |
| Type of best practice | Performance optimization |
| Day and value | Day 3, Fine-tuning |
Overview
Relational database management systems such as Oracle, when servicing high-throughput online transaction processing (OLTP) applications, create bursts of disk I/O activity that benefit from careful design of the complete I/O path between server memory and permanent storage. When Fibre Channel networking is used for access to shared storage, the Oracle database server includes one or more host bus adapter (HBA) cards. In Fibre Channel networking, HBAs provide the same type of services to the host that a network interface controller (NIC) provides for TCP/IP networks.
A key configuration parameter available on HBA cards is the queue depth, which can be set at both the HBA and LUN levels. The availability of an I/O buffer, or queue, on the HBA allows the host operating system to pass an I/O request to the HBA even if the target storage device is unable to process the request immediately. This reduces the potential for blocking threads at the operating system level that would otherwise have to wait for the target to be ready for new I/Os. However, building up a significant number of queued I/Os on the HBA can create a large burst of I/Os to the storage target that could overwhelm the capability of the I/O ports or the storage controllers on the shared array.
When a large number of hosts connect to a single shared storage array, the storage administrators may request or require that server administrators decrease the HBA queue depth settings on connected servers, especially database servers. This helps alleviate resource contention concerns that can occur when hosts are saturating the storage front-end ports or controller processors with too many simultaneous requests. Our best practices development environment was dedicated to a single database server host, so this was not a concern for this project.
We focused on answering whether increasing the HBA queue depth would have a beneficial impact on workload performance. The default queue depth of most HBA cards is 32 pending I/Os. Check with your vendor to verify the setting for your equipment, and use the management API or application to confirm that the setting has not been changed from the default before proceeding with testing or use in production.
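For the Emulex lpfc driver under ESXi, the current module setting can be inspected from the ESXi shell. The following is a minimal sketch; an empty value in the output means the driver default (typically 32) is in effect:

```shell
# List the lpfc module parameters and filter for the LUN queue depth.
# An empty "Value" column means the driver default is still in use.
esxcli system module parameters list -m lpfc | grep lpfc_lun_queue_depth
```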
Recommendation
Changing the HBA queue depth showed no performance improvement; in our testing, server CPU utilization was unchanged.
The Emulex LightPulse HBA cards used in our testing lab had a default queue depth of 32. We tested a scenario with the queue depth increased to 64 and found no significant gain in performance. This indicates that 1) the server is not frequently filling the full queue depth of 32 and 2) the PowerMax is able to process all the queued I/Os that the HBAs can forward with a deeper queue depth.
Implementation Steps
To increase the lpfc driver's LUN queue depth to 64, open the ESXi console and run the following command (the change takes effect after the host is rebooted):
esxcli system module parameters set -p lpfc_lun_queue_depth=64 -m lpfc
To also raise the maximum number of outstanding I/Os that ESXi allows per device, run the following loop, which applies the setting to every device whose NAA identifier begins with naa.60000970000197 (the PowerMax devices in our environment):
for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep naa.60000970000197`; do esxcli storage core device set -d $i -O 64; done