The I/O storage system is a critical component of any SQL Server environment. Sizing and configuring a storage system without understanding the I/O requirements can have unfavorable results. Analyzing performance in an existing environment using a tool like Live Optics can help define the I/O requirements. For best results, capture performance statistics for at least a 24-hour period that includes the system peak workload.
When creating SQL Server databases, the default is one data file, one filegroup, and one log file. If the database will ever have high performance requirements, it is recommended to use at least two data files per filegroup, as in the sketch below. SQL Server writes data evenly across all files in a filegroup, and therefore across the underlying volumes or vVols, so using multiple files allows the database workload to be spread across multiple volumes. Even if the files are initially placed on the same volume, it is advantageous to create this configuration from the start. More files can be added later, but the existing data must then be restriped through re-indexing or some other technique to achieve balanced access. This is because SQL Server uses a proportional fill strategy when writing to data files: it writes more data to the files that have a greater amount of free space.
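The following is a minimal sketch of this layout; the database name, file paths, and sizes are hypothetical and should be adapted to the environment. The data files are created with equal sizes so that proportional fill keeps the write workload balanced between them.

-- Two equally sized data files in the PRIMARY filegroup; proportional
-- fill distributes writes evenly across both files (and, if the files
-- are placed on separate volumes, across the underlying volumes).
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_Data1, FILENAME = 'D:\SQLData1\SalesDB_Data1.mdf',
     SIZE = 50GB, FILEGROWTH = 5GB),
    (NAME = SalesDB_Data2, FILENAME = 'E:\SQLData2\SalesDB_Data2.ndf',
     SIZE = 50GB, FILEGROWTH = 5GB)
LOG ON
    (NAME = SalesDB_Log, FILENAME = 'F:\SQLLog\SalesDB_Log.ldf',
     SIZE = 20GB, FILEGROWTH = 2GB);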
More filegroups can be created and can be beneficial for using certain SQL Server features, such as table partitioning or piecemeal restores. However, the number of filegroups itself provides no performance benefit.
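As a hypothetical sketch continuing the SalesDB example above, the following statements add a secondary filegroup with its own data file; the filegroup name, file path, and size are illustrative only.

-- Add a secondary filegroup and place a new data file in it.
ALTER DATABASE SalesDB ADD FILEGROUP Archive;
ALTER DATABASE SalesDB
    ADD FILE (NAME = SalesDB_Archive1,
              FILENAME = 'G:\SQLData3\SalesDB_Archive1.ndf',
              SIZE = 50GB, FILEGROWTH = 5GB)
    TO FILEGROUP Archive;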
SQL Server log files are accessed sequentially and do not use proportional fill. Therefore, there is no performance benefit to creating extra log files.
While every environment is unique, an online transaction processing (OLTP) workload typically consists of small, random reads and writes. A storage system that services this type of workload is primarily sized based on capacity and the number of IOPS required. PowerStore models offer the flexibility to be configured for block-optimized or unified (block and file) workloads to meet unique OLTP requirements.
An online analytic processing (OLAP), decision support system (DSS), or big data workload is typically dominated by large, sequential reads. A storage system that services this type of workload is primarily sized based on throughput. When designing for throughput, consider the performance of the entire path between the server and the drives in PowerStore. To meet high-throughput requirements, multiple HBAs may be needed in the server, the array, or both.
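To illustrate why the two workload types are sized differently, consider two hypothetical workloads: 20,000 random 8 KB IOPS amount to only about 156 MB/s of bandwidth (20,000 × 8 KB), so IOPS and capacity dominate that design, whereas 5,000 sequential 512 KB reads per second demand roughly 2,500 MB/s (5,000 × 512 KB), making path bandwidth the limiting factor.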
The most common scenario is a mixed workload environment. SQL Server I/O patterns rarely fall strictly into an OLTP or analytics pattern, which is what can make SQL Server workloads challenging: no two workloads behave the same. In addition, the same SQL Server host or instance may be servicing multiple applications or transaction workloads. A mixed workload can also mean that multiple applications (in addition to SQL Server) reside on the same host or access the same storage. The combined workload of these applications invalidates any typical application I/O usage pattern. For these reasons, it is important to gather performance metrics for the best sizing results.
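Before running a full capture with a tool such as Live Optics, the read/write mix, average I/O size, and latency of an existing instance can be summarized with the sys.dm_io_virtual_file_stats dynamic management view. The following query is a sketch; the statistics are cumulative since the last SQL Server restart, so they show the long-run average rather than the peak.

-- Characterize the I/O pattern of each database file. Small average I/O
-- sizes suggest an OLTP-style workload; large average read sizes suggest
-- an analytics-style workload.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read / NULLIF(vfs.num_of_reads, 0) AS avg_read_bytes,
       vfs.num_of_bytes_written / NULLIF(vfs.num_of_writes, 0) AS avg_write_bytes,
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON vfs.database_id = mf.database_id
   AND vfs.file_id = mf.file_id
ORDER BY database_name, file_name;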