The goal of the Microsoft SQL Server best practices program is to improve customer outcomes on Dell Technologies infrastructure. Determining how to optimally design a SQL Server database can be challenging for customers, as the research is time intensive and the guidance often conflicting. This program centralizes and validates recommendations to help customers successfully run SQL Server instances on our infrastructure.
At the beginning of the program, we gathered recommendations from multiple sources and categorized the guidelines into the different layers of the database infrastructure. For example, for the optimal PowerEdge configuration tests, the team pulled best practices from current manuals and reviewed the findings in meetings with PowerEdge experts. Throughout the program, we worked directly with the specialists and engineers responsible for each part of the SQL Server infrastructure. The following list describes each infrastructure layer validated for best practices, in the order tested:
The engineering team conducted internal load testing to determine the workload profile before starting the validation tests. HammerDB was used to generate an Online Transaction Processing (OLTP) workload that simulates many common enterprise applications. The goal of generating a significant load on the SQL Server infrastructure was to ensure that the system was sufficiently taxed to demonstrate that the best practices optimized performance. In this case, the initial target was six VMs, each with ten processor cores. The HammerDB workload configuration is described in Table 1.
Table 1: HammerDB workload configuration
Total transactions per user
Number of warehouses
Number of virtual users
Minutes of ramp up time
Minutes of test duration
Use all warehouses
User delay (millisecond)
Repeat delay (millisecond)
With this HammerDB configuration, each best practice was validated in an hour-long workload test: 10 minutes of ramp up time plus 50 minutes of test duration. We chose to run the workload for one hour to ensure that the database system reached a consistent performance state. Reaching a consistent run state shows whether the configuration is stable and whether the best practice proves its value over time.
The HammerDB parameter Use all warehouses increases I/O to the database area by assigning all the warehouses to the virtual users. Forcing the use of all warehouses means the workload generates more I/O on the storage subsystem. The first set of best practices compares baseline database performance without an optimal storage configuration to performance with an optimal storage configuration.
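For readers who want to drive a similar load, a HammerDB CLI (hammerdbcli) driver script along these lines sets the TPC-C options described above. This is a sketch only: the server name, warehouse count, and virtual user count are placeholders rather than the values used in our tests (see Table 1), and parameter names should be checked against the HammerDB release in use.

```tcl
# Sketch of a HammerDB CLI driver setup for SQL Server.
# Placeholder values -- see Table 1 for the configuration used in testing.
dbset db mssqls
diset connection mssqls_server {your-sql-server}   ;# placeholder host
diset tpcc mssqls_count_ware 500                   ;# number of warehouses (placeholder)
diset tpcc mssqls_allwarehouse true                ;# use all warehouses
diset tpcc mssqls_driver timed
diset tpcc mssqls_rampup 10                        ;# minutes of ramp up time
diset tpcc mssqls_duration 50                      ;# minutes of test duration
loadscript
vuset vu 10                                        ;# number of virtual users (placeholder)
vucreate
vurun
```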
The metrics New Orders per Minute (NOPM) and Transactions per Minute (TPM) help us interpret our results. Both metrics are derived from the TPC-C benchmark and summarize the outcome of a test run. During best practice validation, we compared these metrics against the baseline to verify an increase in performance.
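As a minimal sketch of that comparison, the relative gain of a tuned run over the baseline can be computed as follows. The metric values below are hypothetical placeholders, not measured results from our tests:

```python
# Sketch: compare a test run's NOPM and TPM against a baseline run.
# The numbers below are hypothetical placeholders, not measured results.

def improvement(baseline: float, result: float) -> float:
    """Percentage change of result relative to baseline."""
    return (result - baseline) / baseline * 100.0

baseline_run = {"NOPM": 100_000, "TPM": 230_000}   # hypothetical baseline
tuned_run    = {"NOPM": 120_000, "TPM": 276_000}   # hypothetical tuned run

for metric in ("NOPM", "TPM"):
    gain = improvement(baseline_run[metric], tuned_run[metric])
    print(f"{metric}: {gain:+.1f}% vs. baseline")
```

A positive percentage for both metrics is what we looked for when deciding that a best practice improved performance over the baseline.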
The team tested best practices sequentially, one practice or set of practices at a time. For example, for the storage configuration, the team tested multiple volumes using the PowerStore Volume Groups feature.
The purpose of the storage configuration best practice category was to optimize the SQL Server database using PowerStore volume groups. The goal was to show how multiple volumes can improve SQL Server database performance. The working theory was that each additional best practice would produce a further gain in performance, so that the final SQL Server database change should achieve the overall optimal SQL Server performance. Building upon prior best practices shows that when best practices are deployed together, overall database performance is optimized.
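As a sketch of the multi-volume layout, a database can be created with its data and log files placed on separate volumes drawn from a PowerStore volume group. The drive letters, file names, and sizes here are hypothetical placeholders, not our test configuration:

```sql
-- Sketch: spread SQL Server data and log files across separate volumes.
-- Drive letters, file names, and sizes are hypothetical placeholders.
CREATE DATABASE TpccDemo
ON PRIMARY
    (NAME = tpcc_data1, FILENAME = 'E:\Data\tpcc_data1.mdf', SIZE = 50GB),
    (NAME = tpcc_data2, FILENAME = 'F:\Data\tpcc_data2.ndf', SIZE = 50GB)
LOG ON
    (NAME = tpcc_log,   FILENAME = 'G:\Log\tpcc_log.ldf',   SIZE = 20GB);
```

Placing files on separate volumes lets the storage subsystem service I/O to them in parallel, which is the effect the volume group tests were designed to measure.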
To enhance the value of the best practices, we identified which configuration changes produced the greatest results. This ranking should enable anyone reviewing the best practices to easily identify which recommendations offer the most value. Best practices were categorized as follows:
When reviewing the best practices, the day designation and the value rating of each recommendation are combined. Here are some examples:
The goal of this ranking system is to help customers quickly decide which recommendations provide the best value. For example, depending on a customer's goals, two or three best practices from the section on CPU optimization might provide most of the value, and investing the time to implement those first yields the greatest return. The same approach can be applied to each layer of the database system until all Day 1, highly recommended best practices are completed. This methodology makes these best practices unique and gives customers the best return on their investment.
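The prioritization described above can be sketched as a simple sort on the two rankings. The category names and example entries below are illustrative assumptions, not the published list:

```python
# Sketch: order best practices by day designation, then by value rating.
# Category names and entries are illustrative assumptions, not the published list.

DAY_ORDER = {"Day 1": 0, "Day 2": 1}                       # assumed designations
VALUE_ORDER = {"Highly recommended": 0, "Recommended": 1}  # assumed ratings

practices = [
    ("Enable instant file initialization",      "Day 2", "Recommended"),
    ("Use multiple volumes in a volume group",  "Day 1", "Highly recommended"),
    ("Right-size tempdb data files",            "Day 1", "Recommended"),
]

# Sort so the Day 1, highly recommended items come first.
ranked = sorted(practices, key=lambda p: (DAY_ORDER[p[1]], VALUE_ORDER[p[2]]))
for name, day, value in ranked:
    print(f"{day} | {value:<18} | {name}")
```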
Best practices are broad recommendations that apply to most SQL Server environments using Dell infrastructure, but it is important to recognize that every database application workload and environment is different, so the value of these best practices will vary from system to system. As with any change to a database system, we recommend validating each best practice in a test environment before implementing it on a production system.
The architecture was designed to broadly represent the infrastructure customers use for their SQL Server databases. Dell PowerEdge R750xs servers were used for the compute layer. Each PowerEdge R750xs was configured the same way to ensure consistency: two Intel Xeon Gold 6338 processors with 32 physical cores each, for a total of 64 physical cores. The default server configuration enables logical processors, which means 128 logical cores were available at the hypervisor level. The following table shows the detailed configuration:
Table 2: PowerEdge R750xs Configuration details
We used Dell Connectrix switches to connect the servers to the PowerStore storage array. The Connectrix DS-6620B is designed to support medium- to large-sized database deployments. The Connectrix configuration used in our tests included eight active 32 Gbps ports to optimize the connection to the PowerStore storage.
The PowerStore storage array is cloud-enabled and offers rich enterprise data services, such as snapshots and replication, in a small footprint. Out of the box, the PowerStore storage array is highly tuned and requires little manual tuning. The PowerStore 5000T configuration used in testing contained 21 NVMe drives, each 1.92 TB in size.
Complete details of the PowerStore 5000T configuration are described in the following table:
Table 3: PowerStore 5000T configuration
Total usable capacity
Front-end I/O modules
Many production database systems run on dedicated infrastructure. For validating the SQL Server best practices, all the Dell infrastructure was reserved for the database; no parallel workloads were running and competing for CPU, network, or storage resources. Validating best practices in a dedicated environment helped eliminate variables that might affect test results. We recognize that many environments have been consolidated and that tuning one database system on shared infrastructure can be challenging.
Implementing best practices might improve performance in consolidated systems as well, but the gains may not be as significant because resources are shared. These best practices can mitigate some of those challenges when the Day 1, highly recommended configuration practices are integrated into the provisioning of a SQL Server database. As the database ecosystem adopts best practices, overall system performance may rise, and consolidated systems might run more efficiently. Best practices give the enterprise the ability to deploy a database with an optimal design, and give ecosystems the capability to drive improved efficiencies.