SAP HANA capacity requirements
Every SAP HANA node requires storage devices and capacity for:
Operating system boot image
Local drives or the equivalent are required for a boot volume in each SAP HANA node. Every SAP HANA node requires approximately 100 GB capacity for the operating system, including the /usr/sap/ directory.
SAP HANA installation (/hana/shared/)
For installation of the SAP HANA binary files and the configuration, trace, and log files, every SAP HANA node requires access to a file system that is mounted under the local mount point, /hana/shared/. An SAP HANA scale-out cluster requires a single shared file system to be mounted on every node. Configure a shared NFS file system on the PowerFlex File NAS node for provisioning the required SAP shared file systems such as /hana/shared/ and /sapmnt/SID/.
Calculate the size of the /hana/shared/ file system by using the formula in the SAP HANA Storage Requirements white paper. This white paper is included as an attachment in SAP Note 1900823: SAP HANA Storage Connector API (access requires SAP login credentials). Version 2.10 of the white paper provides the following formulas.
Single node (scale-up):
Size_installation(single-node) = MIN(1 x RAM; 1 TB)
Multinode (scale-out):
Size_installation(scale-out) = 1 x RAM_of_worker per 4 worker nodes
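The two installation-sizing formulas above can be sketched as a quick calculation. The helper names below are illustrative, not part of any SAP or Dell tooling; sizes are in GB, with 1 TB taken as 1024 GB.

```python
import math

def shared_size_single_node(ram_gb: float) -> float:
    """/hana/shared/ size for a scale-up node: MIN(1 x RAM; 1 TB)."""
    return min(ram_gb, 1024)

def shared_size_scale_out(ram_of_worker_gb: float, worker_nodes: int) -> float:
    """/hana/shared/ size for a scale-out cluster:
    1 x RAM_of_worker per group of up to 4 worker nodes."""
    groups = math.ceil(worker_nodes / 4)
    return ram_of_worker_gb * groups

# Examples: a 2 TB scale-up node, and a cluster of four 512 GB workers.
print(shared_size_single_node(2048))  # capped at 1024 GB (1 TB)
print(shared_size_scale_out(512, 4))  # 512 GB for one group of four workers
```

Note that a fifth worker starts a second group of four, doubling the shared file system size to 2 x RAM_of_worker.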
SAP HANA persistence (data and log)
The SAP HANA in-memory database requires disk storage for its data and redo log persistence.
Every SAP HANA scale-up and worker (scale-out) node requires two disk volumes to save the in-memory database on disk (data) and to keep a redo log. The size of these volumes depends on the anticipated total memory requirement of the database and the RAM size of the node. To help you prepare the disk sizing, SAP provides references to tools and documents in the SAP HANA Storage Requirements white paper. This document is attached to SAP Note 1900823: SAP HANA Storage Connector API. The size of the data and log devices depends on the database size. Version 2.10 of the white paper provides the following formula to calculate the size of the data volume:
Size_data = 1.2 x net disk space for data
In this formula, net disk space for data is the anticipated total memory requirement of the database plus 20 percent free space.
If the database is distributed across multiple nodes in a scale-out cluster, divide the net disk space by the number of SAP HANA worker nodes in the cluster. For example, if the net disk space is 2 TB and the scale-out cluster consists of four worker nodes, every node must have a data volume of 614.4 GB assigned to it (2 TB / 4 = 512 GB; 512 GB x 1.2 = 614.4 GB).
If the net disk space is unknown at the time of storage sizing, Dell Technologies recommends using the RAM size of the node plus 20 percent free space to calculate the capacity of the data file system.
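As a sketch, the data-volume formula and the per-node division can be combined in one hypothetical helper (sizes in GB; the function name is illustrative):

```python
def data_volume_per_node_gb(net_disk_space_gb: float, worker_nodes: int = 1) -> float:
    """Size_data = 1.2 x net disk space, divided across scale-out worker nodes."""
    return 1.2 * (net_disk_space_gb / worker_nodes)

# The worked example: 2 TB (2048 GB) net disk space across four worker nodes.
print(data_volume_per_node_gb(2048, 4))  # 614.4 GB per node
```

For a scale-up system, call the helper with the default of one worker node.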
The size of the log volume depends on the RAM size of the node. The SAP HANA Storage Requirements white paper that is attached to SAP Note 1900823: SAP HANA Storage Connector API provides the following formulas to calculate the minimum size of the log volume:
[systems ≤ 512 GB]  Size_redolog = 1/2 x RAM
[systems > 512 GB]  Size_redolog(min) = 512 GB
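The two cases above reduce to a single minimum-size rule, sketched here with an illustrative helper (sizes in GB):

```python
def log_volume_min_gb(ram_gb: float) -> float:
    """Minimum redo log volume size: 1/2 x RAM up to 512 GB of RAM,
    then a 512 GB floor for larger systems."""
    if ram_gb <= 512:
        return ram_gb / 2
    return 512

print(log_volume_min_gb(256))   # 128.0 GB for a 256 GB node
print(log_volume_min_gb(2048))  # 512 GB minimum for a 2 TB node
```

Larger log volumes remain valid; the formulas define only the minimum.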
Backups
SAP HANA supports backups to a file system or to SAP-certified third-party tools. Dell Technologies supports data protection strategies for SAP HANA backups using Dell PowerProtect appliances and software.
Note: Although it is possible to back up the SAP HANA database to the same storage on which the primary persistence resides, it is not recommended.
If you plan to back up SAP HANA to an NFS file system on a different Dell array, for sizing details see the SAP HANA Storage Requirements white paper that is attached to SAP Note 1900823: SAP HANA Storage Connector API. The capacity depends on the data size and the frequency of change operations in the database, as well as the number of backup generations that are kept on disk.
The SAP HANA persistent devices use multiple I/O patterns:
Data volume
Access to the data volume is primarily random, with blocks ranging in size from 4 KB to 64 MB. The data is written asynchronously with parallel I/Os to the data file system. During normal operations, most of the I/Os to the data file system are writes. Data is read from the file system only during database restarts, SAP HANA backups, host auto-failover, or a column store table load or reload operation.
Log volume
Access to the log volume is primarily sequential, with blocks ranging in size from 4 KB to 1 MB. SAP HANA keeps a 1 MB buffer in memory for the redo log. When the buffer is full, it is synchronously written to the log volume. When a database transaction is committed before the log buffer is full, a smaller block is written to the file system. Because data is written synchronously to the log volume, a low latency for the I/O to the storage device is important, especially for the smaller 4 KB and 16 KB block sizes.
During normal database operations, most of the I/Os to the log volume are writes. Data is read from the log volume only during database restart, HA failover, log backup, or database recovery.