Every SAP HANA node requires storage devices and capacity for the following purposes:
For the SAP HANA nodes to be able to start up from a volume on a PowerMax array (boot from the SAN), the overall capacity calculation for the SAP HANA installation must include the required operating system capacity. Every SAP HANA node requires approximately 100 GB capacity for the operating system, including the /usr/sap/ directory. For information about best practices when booting from a SAN, see the Dell EMC Host Connectivity Guide for Linux.
Every SAP HANA node requires access to a file system mounted under the local mount point, /hana/shared/, for installation of the SAP HANA binary files, configuration files, traces, and logs. An SAP HANA scale-out cluster requires a single shared file system, which must be mounted on every node. Most SAP HANA installations use an NFS for this purpose. PowerMax arrays provide this file system with the embedded eNAS option.
You can calculate the size of the /hana/shared/ file system by using the formulas provided in SAP HANA Storage Requirements. Version 2.10 provides the following formulas:
Single node (scale-up):
Size_installation(single-node) = MIN(1 x RAM; 1 TB)
Scale-out:
Size_installation(scale-out) = 1 x RAM_of_worker per 4 worker nodes
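The two formulas above can be sketched as a small helper. This is an illustrative sketch only; the function names and the GB-based units are assumptions, not part of any SAP or Dell tool.

```python
import math

def shared_size_single_node_gb(ram_gb: float) -> float:
    """Scale-up rule: MIN(1 x RAM; 1 TB)."""
    return min(ram_gb, 1024)

def shared_size_scale_out_gb(worker_ram_gb: float, num_workers: int) -> float:
    """Scale-out rule: 1 x RAM of a worker per 4 worker nodes (rounded up)."""
    return worker_ram_gb * math.ceil(num_workers / 4)

print(shared_size_single_node_gb(2048))  # 1024 (capped at 1 TB)
print(shared_size_scale_out_gb(512, 6))  # 1024 (6 workers -> 2 x 512 GB)
```

For example, a scale-out cluster of six 512 GB workers needs two "units" of worker RAM, that is, a 1 TB /hana/shared/ file system.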
The SAP HANA in-memory database requires disk storage for the following purposes:
Every SAP HANA scale-up node and scale-out (worker) node requires two disk volumes to save the in-memory database on disk (data) and keep a redo log (log). The size of these volumes depends on the anticipated total memory requirement of the database and the RAM size of the node. SAP HANA Storage Requirements provides references to help you with disk sizing. Version 2.10 of the document states that you can calculate the size of the data volume by using the following formula:
Size_data = 1.2 x net disk space for data
“Net disk space” is the anticipated total memory requirement of the database; the factor of 1.2 in the formula adds 20 percent free space.
If the database is distributed across multiple nodes in a scale-out cluster, divide the net disk space by the number of SAP HANA worker nodes in the cluster. For example, if the net disk space is 2 TB and the scale-out cluster consists of 4 worker nodes, every node must have a data volume of approximately 615 GB assigned to it (2 TB / 4 = 512 GB; 512 GB x 1.2 = 614.4 GB).
If the net disk space is unknown at the time of storage sizing, we recommend using the RAM size of the node plus 20 percent free space to calculate the capacity of the data file system.
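The data-volume rule can be illustrated with a short sketch. The function name and defaults are hypothetical; it simply applies the 1.2 factor after dividing the net disk space across worker nodes.

```python
def data_volume_per_node_gb(net_disk_space_gb: float, num_workers: int = 1) -> float:
    """Size_data per node = 1.2 x (net disk space / number of worker nodes).

    If the net disk space is unknown, pass the RAM size of the node instead.
    """
    return (net_disk_space_gb / num_workers) * 1.2

# Example from the text: 2 TB (2048 GB) net disk space across 4 workers.
print(data_volume_per_node_gb(2048, 4))  # 614.4 GB per node
```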
The size of the log volume depends on the RAM size of the node. SAP HANA Storage Requirements provides the following formulas to calculate the minimum size of the log volume:
[systems ≤ 512 GB] Size_redolog = 1/2 x RAM
[systems > 512 GB] Size_redolog(min) = 512 GB
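The two-case log rule above can be expressed as a minimal sketch, assuming GB units; the function name is illustrative only.

```python
def log_volume_min_gb(ram_gb: float) -> float:
    """Minimum redo log volume size:
    systems <= 512 GB RAM: 1/2 x RAM; larger systems: 512 GB minimum."""
    return ram_gb / 2 if ram_gb <= 512 else 512

print(log_volume_min_gb(256))   # 128.0
print(log_volume_min_gb(2048))  # 512
```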
SAP HANA supports backup to a file system or the use of SAP-certified third-party tools. Dell Technologies supports data-protection strategies for SAP HANA backup using Dell EMC Data Domain and Dell EMC Networker. Even though you can back up an SAP HANA database to an NFS on a PowerMax array, we do not recommend backing up the SAP HANA database to the storage array on which the primary persistence resides. If you plan to back up SAP HANA to an NFS on a different PowerMax array, see SAP HANA Storage Requirements for information about sizing the backup file system. The capacity depends not only on the data size and the frequency of change operations in the database, but also on the number of backup generations kept on disk.
For information about using Dell EMC Data Domain data protection for SAP HANA, see Data Domain Backup and Recovery for SAP HANA.