Reducing SQL Server I/O
Optimizing SQL Server workloads improves the performance and scalability of the storage layer and, ultimately, of the related applications. The following sections cover common techniques to reduce SQL Server storage I/O.
Allocating the proper amount of memory to SQL Server can increase performance and avoid unnecessary I/O. SQL Server performs all I/O through the buffer pool (cache) and dedicates a large portion of its memory allocation to it. Ideally, when SQL Server performs I/O, the data is already in the buffer pool and the request never goes to disk. This is referred to as logical I/O and is the most desirable outcome because it delivers the best performance. If the data is not already in the buffer pool, SQL Server must read it from disk, resulting in physical I/O. Proper memory allocation is critical to SQL Server performance and can improve storage performance as well. Often, SQL Server and storage performance can be further improved by adding memory to the SQL Server instance. Generally, more memory is better, but there is a point of diminishing returns that is unique to each environment.
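For reference, the maximum memory SQL Server can consume is set with sp_configure; in the following minimal sketch, the 64 GB value is an illustrative placeholder that must be sized for each environment:

    -- Cap SQL Server memory so the OS and other processes are not starved
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 65536;  -- example value: 64 GB
    RECONFIGURE;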
Starting with SQL Server 2014, the buffer pool can be extended to a file on the file system, providing additional space to cache data and index pages. In some cases, this feature can provide significant performance benefits without adding memory to the database server. With more pages cached on the server, the I/O load on the array is reduced. When placing the buffer pool extension on the array, create a separate volume for it and do not configure snapshots for that volume. The extension holds only cached copies of pages, and SQL Server repopulates the buffer pool when the instance is restarted; therefore, data recovery does not apply.
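For example, a buffer pool extension file can be created on a dedicated volume with T-SQL such as the following (a sketch; the path and size are placeholders for this environment):

    -- Place the extension file on a dedicated volume with no snapshots configured
    ALTER SERVER CONFIGURATION
    SET BUFFER POOL EXTENSION ON
        (FILENAME = 'E:\BPE\SQLBPE.BPE', SIZE = 64 GB);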
Persistent memory (PMEM) can be used to accelerate challenging I/O workloads and make I/O patterns more efficient. Virtualized environments running Hyper-V or VMware can also take advantage of PMEM, making it a wise investment. For more information, see Configure persistent memory (PMEM) for SQL Server on Windows and the Dell NVDIMM-N Persistent Memory User Guide.
By default, SQL Server writes zeros to a data file during the allocation process. Zeroing out the data file consumes I/O and acquires locks while the SQL Server data pages are written. This activity can last minutes or even hours depending on the file size. While this may seem minor, writing zeros to these files can occur at disruptive periods when time and performance are critical. Example scenarios include database autogrowth, expanding a full data file, replication, and restoring a database as part of a disaster-recovery event. When Instant File Initialization is enabled, SQL Server skips zeroing its data files when allocating space. Dell Technologies recommends enabling Instant File Initialization.
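Instant File Initialization is granted through the Perform Volume Maintenance Tasks privilege for the SQL Server service account (offered as a checkbox during setup in SQL Server 2016 and later). On recent versions, the following query (a minimal check, not a configuration step) confirms whether it is in effect:

    -- Returns Y in instant_file_initialization_enabled when IFI is in effect
    SELECT servicename, instant_file_initialization_enabled
    FROM sys.dm_server_services;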
The Resource Governor allows database administrators to limit the I/O, CPU, and memory resources that a SQL Server query can consume. For example, the Resource Governor can reduce the impact of a user running an I/O-intensive report by limiting the maximum number of IOPS that user can perform. While a query throttled by the Resource Governor takes more time to complete, overall database performance improves.
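As an illustration, the following sketch caps sessions from a hypothetical reporting login at 500 IOPS per volume; the pool, group, login name, and limit are all placeholders to be adapted to the workload:

    -- Run in master: create an I/O-capped pool and route a reporting login to it
    CREATE RESOURCE POOL ReportPool WITH (MAX_IOPS_PER_VOLUME = 500);
    CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
    GO
    CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        -- Hypothetical login name; all other sessions stay in the default group
        IF SUSER_SNAME() = N'report_user'
            RETURN N'ReportGroup';
        RETURN N'default';
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;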
Reducing SQL Server I/O requires a holistic approach. Many of the items in this section require involvement from the entire team responsible for SQL Server applications, which could include the business owner, architect, developer, database administrator, and system administrator. Decisions at the design level have a multiplied downstream impact because data is written and read multiple times and duplicated in various types of database copies, including databases that are copied for other uses such as testing and reporting, replicated databases, replicated storage, and backups. One of the most challenging aspects of SQL Server is that the I/O pattern and the amount of I/O that is generated can vary greatly between applications, even when their databases are the same size, because both the database design and the data access code control SQL Server I/O.
Database tuning can be one of the most cost-effective ways to reduce I/O and improve scalability. At a high level, consider the following areas when tuning a database to reduce I/O.
The database design determines the foundation of the entire database and the schema for how data is stored and ultimately accessed. The design should support both usability and efficient data access, including efficient table design and datatypes as well as indexes, partitioning, and other features that improve efficiency. It is common for database design to focus only on usability while performance and scale are overlooked.
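As a simple illustration (the table and columns are invented for this sketch), right-sized datatypes keep rows narrow, and a covering index serves a common lookup without touching the base table:

    -- Narrow, right-sized types reduce the pages SQL Server must read and write
    CREATE TABLE dbo.OrderEvent
    (
        OrderEventID bigint IDENTITY PRIMARY KEY,
        OrderID      int          NOT NULL,  -- int instead of bigint when the range allows
        EventCode    tinyint      NOT NULL,  -- tinyint instead of int for small domains
        EventTime    datetime2(3) NOT NULL   -- datetime2(3) instead of datetime
    );

    -- Covering index for a frequent lookup pattern
    CREATE NONCLUSTERED INDEX IX_OrderEvent_OrderID
        ON dbo.OrderEvent (OrderID)
        INCLUDE (EventCode, EventTime);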
How a query is written can greatly affect the amount of I/O SQL Server must perform when running it. Queries should return only the required data in the most efficient manner possible. For the best performance and scale, prioritize tuning the queries that consume a relatively large share of resources.
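Continuing with the hypothetical table above, returning only the required columns through sargable predicates lets SQL Server satisfy the query from the covering index rather than scanning the table:

    -- Avoid SELECT * and non-sargable predicates such as
    -- WHERE CONVERT(date, EventTime) = '2024-01-01'
    SELECT OrderID, EventCode, EventTime
    FROM dbo.OrderEvent
    WHERE OrderID = 42
      AND EventTime >= '2024-01-01'
      AND EventTime <  '2024-01-02';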
Consider how applications use the data and the way it is requested. Sometimes code and component reuse results in the same data being retrieved repeatedly and unnecessarily. All data access should be purposeful.
SQL Server uses a cost-based optimizer to generate query plans for data access. These plans are based on statistics about how data is distributed in the tables. If the statistics are inaccurate, bad query plans may result and unnecessary I/O is performed. Proper database maintenance includes ensuring that statistics are up to date. Frequent data modifications can also lead to fragmentation within SQL Server data files, producing unnecessary I/O. Fragmentation can be addressed through index reorganization or rebuilds as part of regular database maintenance. The maintenance process itself can also have a large I/O impact: typically, not every table and index needs to be rebuilt or reorganized every time maintenance runs. Table partitioning strategies can also make the maintenance process more selective. Consider implementing intelligent maintenance scripts that use usage statistics to perform maintenance on an as-needed basis. For mission-critical databases, maintenance activities need to be considered as part of the overall design; otherwise, issues can arise, such as unmanageable sizes and feature incompatibilities that limit the available options and strategies.
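A common as-needed pattern inspects fragmentation first and acts only where warranted. In this sketch, the 5 and 30 percent thresholds are conventional starting points, not fixed rules:

    -- Reorganize lightly fragmented indexes; rebuild only heavily fragmented ones
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           ips.avg_fragmentation_in_percent,
           CASE
               WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
               WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
               ELSE 'NONE'
           END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.index_id > 0;  -- skip heaps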
A VMware article describes performance gains on VMware Cloud on AWS and VMware vSAN from enabling SQL Server trace flag 1800 (see https://blogs.vmware.com/apps/2021/12/enhancing-performance-vmc-on-aws-sql-server-trace-flag-1800.html). This trace flag changes the size of log-file I/O from 512 bytes to 4 KB, achieving advertised gains of up to 300% in the referenced configurations. It might therefore seem that other platforms would see a similar benefit. However, in internal workload testing on PowerStore, enabling this trace flag resulted in minimal or no gain because PowerStoreOS virtualizes the underlying disks to maximize performance across various I/O block sizes. While the trace flag can be used, enabling it is not considered a best practice on PowerStore.