Every SAP HANA node requires storage devices and capacity for:
- Operating system
- SAP HANA installation (/hana/shared/)
- SAP HANA persistence (data and log)
- Backup
Operating system
For the SAP HANA nodes to boot from a volume on a storage array (that is, from the SAN), the overall capacity calculation for the SAP HANA installation must include the required operating system capacity. Every SAP HANA node requires approximately 100 GB of capacity for the operating system, including the /usr/sap/ directory.
SAP HANA installation (/hana/shared/)
For the installation of the SAP HANA binaries, configuration files, traces, and logs, every SAP HANA node requires access to a file system that is mounted under the /hana/shared/ local mount point. An SAP HANA scale-out cluster requires a single shared file system that is mounted on every node. Most SAP HANA scale-out installations use an NFS server-based shared file system for this purpose. NAS systems such as Dell PowerMax embedded NAS (eNAS) arrays, Dell PowerStore file systems, and Dell Isilon systems also provide this /hana/shared/ file system. You can calculate the size of the /hana/shared/ file system by using the formula in the SAP HANA Storage Requirements white paper. This paper is included as an attachment in SAP Note 1900823: SAP HANA Storage Connector API (access requires SAP login credentials). Version 2.10 of the document provides the following formulas:
Single node (scale-up):
Size_installation(single-node) = MIN(1 x RAM; 1 TB)
Multinode (scale-out):
Size_installation(scale-out) = 1 x RAM_of_worker per 4 worker nodes
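The two formulas above can be sketched as a small sizing helper. This is an illustrative Python function (the function name and rounding choice are assumptions, not part of the SAP white paper):

```python
import math


def shared_fs_size_gb(ram_gb: float, worker_nodes: int = 1) -> float:
    """Approximate /hana/shared/ size per the v2.10 formulas.

    Illustrative helper only, not an SAP sizing tool.
    """
    if worker_nodes <= 1:
        # Single node (scale-up): MIN(1 x RAM; 1 TB)
        return min(ram_gb, 1024.0)
    # Scale-out: 1 x RAM of a worker per (started) group of 4 workers
    return ram_gb * math.ceil(worker_nodes / 4)


# Scale-out example: 4 workers with 2 TB RAM each -> one RAM-sized share
print(shared_fs_size_gb(2048.0, worker_nodes=4))  # 2048.0
# Scale-up example: 3 TB RAM is capped at 1 TB
print(shared_fs_size_gb(3072.0))  # 1024.0
```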
SAP HANA persistence (data and log)
The SAP HANA in-memory database requires disk storage for:
- Data (the in-memory database image that is saved to disk)
- Redo log
Every SAP HANA scale-up and worker (scale-out) node requires two disk volumes to save the in-memory database on disk (data) and to keep a redo log (log). The size of these volumes depends on the anticipated total memory requirement of the database and the RAM size of the node. To help prepare the disk sizing, SAP provides references to tools and documents in the SAP HANA Storage Requirements white paper. This paper is included as an attachment in SAP Note 1900823: SAP HANA Storage Connector API (access requires SAP login credentials). The size of the data and log devices depends on the database size. Version 2.10 of the white paper provides the following formula to calculate the size of the data volume:
Size_data = 1.2 x net disk space for data
In this formula, net disk space for data is the anticipated total memory requirement of the database; the factor of 1.2 adds 20 percent free space.
If the database is distributed across multiple nodes in a scale-out cluster, divide the net disk space by the number of SAP HANA worker nodes in the cluster. For example, if the net disk space is 2 TB and the scale-out cluster consists of four worker nodes, every node must have a data volume of at least 614.4 GB assigned to it (2,048 GB / 4 = 512 GB; 512 GB x 1.2 = 614.4 GB).
If the net disk space is unknown at the time of storage sizing, Dell Technologies recommends using the RAM size of the node plus 20 percent free space to calculate the capacity of the data file system.
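The per-node data-volume calculation, including the worked example above, can be sketched as follows. The helper name and the RAM fallback behavior are illustrative assumptions, not SAP-defined APIs:

```python
def data_volume_size_gb(net_disk_space_gb: float, worker_nodes: int = 1) -> float:
    """Per-node data volume: 1.2 x (net disk space / worker nodes).

    Illustrative helper based on the v2.10 formula. If the net disk
    space is unknown, pass the node's RAM size instead (the 1.2 factor
    then supplies the recommended 20 percent free space).
    """
    return 1.2 * net_disk_space_gb / worker_nodes


# Worked example from the text: 2 TB (2,048 GB) net disk space, 4 workers
print(round(data_volume_size_gb(2048.0, worker_nodes=4), 1))  # 614.4
```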
The size of the log volume depends on the RAM size of the node. The SAP HANA Storage Requirements white paper provides the following formulas to calculate the minimum size of the log volume:
[systems ≤ 512 GB] Size_redolog = 1/2 x RAM
[systems > 512 GB] Size_redolog(min) = 512 GB
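The two-case log-volume rule above can be expressed as a small helper. This is an illustrative sketch (the function name is an assumption); note that the formula gives a minimum size, not a recommended maximum:

```python
def log_volume_min_size_gb(ram_gb: float) -> float:
    """Minimum redo-log volume size per the v2.10 formulas.

    Illustrative helper: half of RAM up to 512 GB of RAM,
    then a flat 512 GB minimum for larger systems.
    """
    if ram_gb <= 512:
        return ram_gb / 2
    return 512.0


print(log_volume_min_size_gb(256.0))   # 128.0
print(log_volume_min_size_gb(2048.0))  # 512.0
```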
SAP HANA supports backups to a file system or backups through SAP-certified third-party tools. Dell Technologies supports data protection strategies for SAP HANA backups using Dell PowerProtect appliances and software. Although it is possible to back up an SAP HANA database to an NFS file system on a Dell array, Dell Technologies does not recommend backing up the SAP HANA database to the storage array on which the primary persistence resides. If you plan to back up SAP HANA to an NFS file system on a different Dell storage array, see the SAP HANA Storage Requirements white paper (access requires SAP login credentials) for details about sizing. The required capacity depends not only on the data size and the frequency of change operations in the database, but also on the number of backup generations that are kept on disk.