Every SAP HANA node requires storage devices and capacity for the following:
When the SAP HANA nodes boot from a volume on Unity (boot from SAN), the required capacity for the operating system must be included in the overall capacity calculation for the SAP HANA installation. Every SAP HANA node requires approximately 100 GB of capacity for the operating system. This capacity includes space for the /usr/sap/ directory.
When booting from a SAN, follow the best practices in the “Booting from SAN” section of the Dell EMC Host Connectivity Guide for Linux.
To install the SAP HANA binaries, as well as the configuration files, traces, and logs, every SAP HANA node requires access to a file system mounted under the local mount point /hana/shared/. In an SAP HANA scale-out cluster, a single shared file system is required and must be mounted on every node. Most SAP HANA installations use NFS for this purpose. Unity all-flash and hybrid arrays can provide this file system with the NAS option. The size of the /hana/shared/ file system can be calculated using the latest formula in the SAP HANA Storage Requirements White Paper. Version 2.10 of this document uses the following formulas for the calculation:
Single node (scale-up):
Size_installation(single-node) = MIN(1 x RAM; 1 TB)
Scale-out:
Size_installation(scale-out) = 1 x RAM_of_worker per 4 worker nodes
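The two /hana/shared/ sizing formulas above can be sketched as a small helper. This is an illustrative sketch only; the function names are not from the white paper, and sizes are assumed to be in GB:

```python
def shared_size_single_node(ram_gb: float) -> float:
    """Single node (scale-up): MIN(1 x RAM; 1 TB)."""
    return min(ram_gb, 1024)

def shared_size_scale_out(ram_of_worker_gb: float, worker_nodes: int) -> float:
    """Scale-out: 1 x RAM_of_worker per 4 worker nodes (rounded up)."""
    groups_of_four = -(-worker_nodes // 4)  # ceiling division
    return ram_of_worker_gb * groups_of_four

print(shared_size_single_node(2048))   # capped at 1024 GB (1 TB)
print(shared_size_scale_out(512, 6))   # 6 workers -> 2 groups -> 1024 GB
```

Rounding the worker count up to the next group of four is an interpretation of "per 4 worker nodes"; verify against the current SAP HANA Storage Requirements White Paper before sizing a production system.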
The SAP HANA in-memory database requires disk storage to:
Every HANA node (scale-up) or worker node (scale-out) requires two disk volumes to save the in-memory database on disk (data) and to keep a redo log (log). The size of these volumes depends on the anticipated total memory requirement of the database and the RAM size of the node. To prepare the disk sizing, SAP provides several tools and documents, as described in the SAP HANA Storage Requirements White Paper.
Version 2.10 (February 2017) of this document provides the following formulas to calculate the size of the data volume:
Option 1: If an application-specific sizing program can be used:
Size_data = 1.2 x anticipated net disk space for data
where “net disk space” is the anticipated total memory requirement of the database plus an additional 20 percent free space. If the database is distributed across multiple nodes in a scale-out cluster, the “net disk space” must be divided by the number of HANA worker nodes in the cluster. For example, if the net disk space is 2 TB and the scale-out cluster consists of four worker nodes, then every node must be assigned an approximately 615 GB data volume (2 TB / 4 = 512 GB; 512 GB x 1.2 = 614.4 GB).
If the net disk space is unknown at the time of the storage sizing, Dell EMC recommends using the RAM size of the node plus 20 percent free space to calculate the capacity of the data file system.
Option 2: If no application-specific sizing program is available, the recommended size of the data volume of a given SAP HANA system is equal to the total memory required for that system:
Size_data = 1 x RAM
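Both data-volume options can be expressed as a short sketch. The function names are illustrative, not from the white paper, and sizes are assumed to be in GB:

```python
def data_volume_option1(net_disk_space_gb: float, worker_nodes: int = 1) -> float:
    """Option 1: Size_data = 1.2 x anticipated net disk space,
    with the net disk space divided across the worker nodes."""
    return 1.2 * (net_disk_space_gb / worker_nodes)

def data_volume_option2(ram_gb: float) -> float:
    """Option 2 (no application-specific sizing program): Size_data = 1 x RAM."""
    return ram_gb

# The scale-out example from the text: 2 TB net disk space, four worker nodes.
print(data_volume_option1(2048, 4))  # 614.4 GB per node
```

Note that Option 1 already includes the 20 percent free space in the 1.2 factor, so the input is the net disk space, not the raw data size plus headroom.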
The size of the log volume depends on the RAM size of the node. The SAP HANA Storage Requirements White Paper provides the following formulas to calculate the minimum size of the log volume:
[systems ≤ 512 GB] Size_redolog(min) = 1/2 x RAM
[systems > 512 GB] Size_redolog(min) = 512 GB
SAP HANA supports backups to a file system or through SAP-certified third-party tools. Dell EMC supports data protection strategies for SAP HANA backup using Dell EMC Data Domain and NetWorker. Although an SAP HANA backup to an NFS file system on a Unity all-flash or hybrid array is possible, Dell EMC does not recommend backing up the SAP HANA database to the same storage array where the primary persistence resides. If you plan to back up SAP HANA to an NFS file system on a different Unity array, refer to the SAP HANA Storage Requirements White Paper for details about sizing the backup file system. The required capacity depends not only on the data size and the frequency of change operations in the database, but also on the number of backup generations kept on disk.