Regardless of the underlying storage array, understand the requirements of the entire application stack before deployment. This knowledge helps ensure that a proper PowerStore T model is selected and sized to deliver the expected performance and capacity. In addition to selecting and sizing a PowerStore T model, changes to the infrastructure components like the storage fabric may also be required. If Elasticsearch is new to the environment, Elasticsearch design factors and expected performance metrics must be determined before sizing the array and supporting infrastructure. See Elastic Stack sizing for more information.
With a well-designed system, all components within the stack work together to deliver maximum I/O performance, as measured by the following metrics:
Latency: The amount of time an I/O operation takes to complete. High latencies typically indicate an I/O bottleneck.
IOPS: The number of reads and writes occurring each second. IOPS is key for determining the number of disks required in an array while maintaining accepted response times. If the array uses SSDs, it typically provides enough IOPS once capacity and throughput requirements are met.
Throughput: The amount of data, in bytes per second, transferred between the server and storage array. Throughput is primarily used to define the path between the server and array and the number of required drives. A few SSDs can often meet IOPS requirements but may not meet throughput requirements. Throughput can be calculated from IOPS and the average I/O size: Throughput (MB/s) = IOPS x average I/O size (MB).
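As a minimal illustration of this formula, the sketch below converts IOPS and an average I/O size into throughput. The IOPS and I/O-size values are hypothetical examples, not measured figures from any workload:

```python
def throughput_mb_s(iops: float, io_size_kb: float) -> float:
    """Estimate throughput in MB/s from IOPS and average I/O size in KB."""
    # 1024 KB per MB; Throughput (MB/s) = IOPS x average I/O size (MB)
    return iops * io_size_kb / 1024

# Example: 20,000 IOPS at an average 8 KB I/O size
print(throughput_mb_s(20_000, 8))  # 156.25 MB/s
```

Note that the same IOPS figure at a 64 KB average I/O size would require eight times the throughput, which is why both metrics must be considered together.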
When sizing, assume that all I/O is random; sizing for random I/O is conservative and yields the best results.
Some points to consider when selecting and sizing PowerStore include the following; more points are listed in Elastic Stack sizing:
I/O bandwidth and IOPS should be tested on dedicated components of the I/O path to ensure that expected performance is achieved before creating an Elastic Stack environment and deploying it to production. Before releasing any storage system to production, run Dell LiveOptics (formerly DPACK) and CloudIQ on a simulated production system for an extended period that includes the peak workload. This simulation helps define the I/O requirements. The account team can use the PowerStore sizer to estimate storage needs.
It is recommended to repeat this test on the production server immediately after go-live to validate the process and establish a benchmark of initial performance metrics.
Note: Use caution if the test is run on a live system after go-live, because the test could cause significant performance issues.
Once a design delivers the expected throughput, additional disks can be added to the storage solution to meet capacity requirements. The converse is not necessarily true: if a design meets the expected capacity requirements, adding disks may not make it meet the required throughput. Because disk drive capacity is growing faster than disk I/O throughput rates, a small number of large disks can store a large volume of data yet cannot provide the same I/O throughput as a larger number of smaller disks.
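This trade-off can be sketched with a simple drive-count calculation: size for both capacity and throughput, then take the larger count. The per-drive capacity and throughput figures below are illustrative assumptions, not PowerStore specifications:

```python
import math

def disks_needed(capacity_tb: float, per_disk_tb: float,
                 throughput_mb_s: float, per_disk_mb_s: float) -> int:
    """Return the drive count that satisfies both capacity and throughput."""
    by_capacity = math.ceil(capacity_tb / per_disk_tb)
    by_throughput = math.ceil(throughput_mb_s / per_disk_mb_s)
    # The design must meet the larger of the two requirements.
    return max(by_capacity, by_throughput)

# Example: 100 TB on hypothetical 15.36 TB drives needs only 7 drives for
# capacity, but a 5,000 MB/s workload on drives sustaining 500 MB/s each
# needs 10 drives, so throughput drives the count.
print(disks_needed(100, 15.36, 5000, 500))  # 10
```

With a lighter workload (for example, 2,000 MB/s), capacity would dominate and 7 drives would suffice, illustrating why capacity alone cannot determine the drive count.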
After validating the throughput of the I/O paths between the PowerStore array and the server, and confirming that capacity requirements are met, test the disk I/O capabilities of the PowerStore array against the designed workload. A successful test validates that the storage design provides the required IOPS and throughput with acceptable latencies.