Allocating the proper amount of memory to SQL Server avoids unnecessary I/O and increases performance. SQL Server performs all I/O through the buffer pool (cache) and therefore uses a large portion of its memory allocation for the buffer pool. Ideally, when SQL Server performs I/O, the data is already in the buffer pool and it does not need to go to disk. This type of I/O is referred to as logical I/O and is the most desirable because it results in the best performance. If the data is not in the buffer pool, SQL Server must access the disks, resulting in physical I/O. Proper memory allocation is critical to SQL Server performance and can improve storage performance as well. In many cases, SQL Server and storage performance can be further improved by adding memory. Adding memory generally improves performance, but there is a point of diminishing returns that is unique to each environment.
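Memory allocation is controlled with the max server memory setting. The following sketch caps SQL Server memory so that the operating system retains headroom; the 12 GB value is only an example and should be sized for the environment.

```sql
-- Example only: cap SQL Server memory at 12 GB so the OS and other
-- processes retain headroom. Size this value for your environment.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```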
The SQL Server buffer pool extension feature allows the SQL Server buffer pool to be extended to a file on the file system to provide additional space to cache data or index pages. Using this feature may provide performance benefits without adding memory to the database server in some cases. By caching more pages on the server, the I/O load on the array is reduced. When placing the buffer pool extension on the array, create a separate volume for the buffer pool extension and do not take snapshots of the buffer pool extension volume. The buffer pool data is repopulated by SQL Server when the instance is restarted, therefore data recovery does not apply.
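The buffer pool extension is enabled with ALTER SERVER CONFIGURATION. In the sketch below, the file path and 32 GB size are placeholders; point the file at the dedicated volume created for this purpose.

```sql
-- Enable the buffer pool extension on a dedicated volume
-- (path and size are examples only).
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'E:\BPE\ExtensionFile.BPE', SIZE = 32 GB);

-- Verify the current state of the buffer pool extension.
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;
```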
The overall I/O workload can be reduced by enabling database compression in SQL Server. Although there is a tradeoff in terms of CPU utilization on the database server, compression is still a viable option to consider and test in any environment. Database compression reduces I/O by reducing the amount of data that needs to be stored. The SQL Server data pages are compressed in memory before being written to disk, resulting in fewer pages needed to store the same number of rows and therefore less I/O.
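Compression can be estimated before it is enabled and then applied per table or index. The table name below is hypothetical; substitute a table from your own database.

```sql
-- Estimate space savings before enabling compression
-- (dbo.SalesOrderDetail is a hypothetical example table).
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'SalesOrderDetail',
    @index_id = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- Enable page compression on the table and its clustered index.
ALTER TABLE dbo.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Testing both ROW and PAGE compression is worthwhile, since PAGE compression yields greater space savings at a higher CPU cost.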
By default, SQL Server writes zeros to a data file during the allocation process. Zeroing out the data files consumes I/O and acquires locks as the SQL Server data pages are written, and this activity can take minutes or even hours depending on the file size. Although this may not seem important, the zeroing can occur at times when performance matters most, such as during database auto growth, expanding a full data file, replication, or restoring a database as part of a disaster-recovery event.
When Instant File Initialization is enabled, SQL Server skips the process of zeroing out its data files when allocating space. Dell Technologies recommends enabling Instant File Initialization.
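Instant File Initialization is granted by assigning the SQL Server service account the Perform volume maintenance tasks privilege (or by selecting the corresponding option during SQL Server 2016 and later setup). On recent versions, the current state can be confirmed from within SQL Server:

```sql
-- Check whether Instant File Initialization is active for the
-- database engine service (column available in SQL Server 2016 SP1
-- and later servicing levels).
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```

Note that Instant File Initialization applies only to data files; transaction log files are always zero-initialized.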
The SQL Server Resource Governor allows database administrators to limit the CPU, memory, and I/O resources that a query can consume. For example, the Resource Governor can be used to reduce the impact of a user running an I/O-intensive report by limiting the maximum number of IOPS that user can perform. While a query throttled by the Resource Governor takes more time to complete, overall database performance can be improved by reducing contention for bottlenecked resources or by throttling lower-priority workloads, such as reporting. For more information on the benefits and implementation of Resource Governor, see the Microsoft SQL Server Resource Governor page.
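A minimal sketch of I/O throttling with Resource Governor follows. The pool name, IOPS cap, and login name are illustrative only; run this in the master database and adjust the classifier logic for your environment.

```sql
-- Create a pool that caps reporting I/O
-- (500 IOPS per volume is an example value).
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_IOPS_PER_VOLUME = 500);

CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
GO

-- Classifier function routing a hypothetical reporting login
-- into the throttled group.
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = 'report_user'
        RETURN 'ReportingGroup';
    RETURN 'default';
END;
GO

ALTER RESOURCE GOVERNOR
    WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```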
Reducing SQL Server I/O requires a holistic approach. Many of the items in this section will require involvement from the whole team responsible for the SQL Server applications, including the business owner, architect, developer, database administrator, and system administrator. Decisions at the design level have a multiplied impact downstream because data is written and read multiple times and duplicated in various types of database copies. This includes databases copied for other uses, such as testing and reporting, replicated databases, replicated storage, and backups.
One of the most challenging aspects of SQL Server is that the I/O pattern and the amount of I/O that is generated can vary greatly depending on the application, even if those applications have databases of the same size. This is because the design of both the database and the data-access code control the SQL Server I/O. Database tuning can be one of the most cost-effective ways to reduce I/O and improve scalability.
At a high level, consider the tips in the following subsections when tuning a database to reduce I/O.
The database design determines the foundation of the entire database and the schema for how data will be stored and ultimately accessed. The database design should support both usability and efficient data access. This includes efficient table design and datatypes as well as indexes, partitioning, and other features that can improve efficiency. It is common for database design to be focused only on usability while performance and scale are overlooked.
How a query is written can greatly affect the amount of I/O SQL Server needs to perform when executing the query. Queries should return only the required amount of data in the most efficient manner possible. For best performance and scale, tune the queries responsible for consuming the most resources.
Consider how applications are using the data and how the data is requested. Sometimes code and component reuse can result in the same data being unnecessarily retrieved repeatedly. All data access should be purposeful. For example, when possible, SQL queries should avoid using “SELECT *” syntax and accessing or updating columns that are not used by the application.
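As a simple illustration, assuming a hypothetical dbo.Orders table, the second query retrieves only the columns the application actually uses:

```sql
-- Avoid retrieving every column:
-- SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID;

-- Request only what the application needs
-- (table and columns are hypothetical):
SELECT OrderID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = @CustomerID;
```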
SQL Server uses a cost-based optimizer to generate query plans for data access. These plans are based on the statistics regarding how data is distributed in the tables. If the statistics are inaccurate, bad query plans may result and unnecessary I/O will be performed. Proper database maintenance includes ensuring that statistics are up to date.
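Statistics freshness can be reviewed and refreshed with standard tooling; the table name below is a hypothetical example.

```sql
-- Review when statistics were last updated and how many rows
-- have changed since (dbo.SalesOrderDetail is an example table).
SELECT s.name AS stat_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.SalesOrderDetail');

-- Refresh statistics for the table with a full scan.
UPDATE STATISTICS dbo.SalesOrderDetail WITH FULLSCAN;
```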
Frequent data modifications can also lead to fragmentation within SQL Server data files, producing unnecessary I/O. Fragmentation can be addressed through index reorganization or rebuilds as part of regular database maintenance. The database maintenance process itself can also have a large I/O impact. Typically, every table and index does not need to be rebuilt or reorganized every time maintenance is run. In addition, table partitioning strategies can also be leveraged to make the maintenance process more selective. Consider implementing intelligent maintenance scripts that use usage statistics to perform maintenance on an as-needed basis. For mission-critical databases, consider maintenance activities as part of the overall design. If maintenance is not considered as part of the overall process, issues can arise, such as unmanageable sizes and feature incompatibilities that limit available options and strategies.
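A selective maintenance approach can be sketched as follows. The 5 and 30 percent thresholds are common guidelines rather than fixed rules, and the page-count filter skips small indexes where fragmentation is irrelevant.

```sql
-- Find indexes in the current database whose fragmentation may
-- warrant maintenance (thresholds are guidelines, not fixed rules).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then act selectively, for example (index name is hypothetical):
-- ALTER INDEX IX_Example ON dbo.SalesOrderDetail REORGANIZE;  -- 5-30%
-- ALTER INDEX IX_Example ON dbo.SalesOrderDetail REBUILD;     -- > 30%
```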
Unnecessary index rebuilds will also cause updates to ME5 intelligent data access features such as read cache and tiering and therefore may cause unnecessary data shuffling.