Dell Validated Design for High Availability for SAP with Red Hat Pacemaker Clusters: Installing SAP
Prerequisites
When installing SAP, ensure that:
For fault tolerance at the network level, Dell Technologies recommends using active/passive network bonding devices. For more information, see "Configuring network bonding" in the Red Hat Enterprise Linux 8 documentation on the Red Hat Customer Portal.
Install the SAP components in the following order:
Note: Dell Technologies strongly recommends using two Fibre Channel fabrics and multipathing for the shared disks.
Two Fibre Channel disks are assigned to each server, but each disk is mounted on only one server at a time. The mount points are managed as resources through the cluster stack.
nwascs01# mkfs.xfs /dev/sdb
nwascs01# mkfs.xfs /dev/sdc
nwascs01# mkdir -p /usr/sap/TR1/{ASCS00,ERS10}
nwascs01# mount /dev/sdb /usr/sap/TR1/ASCS00
nwascs02# mkdir -p /usr/sap/TR1/{ASCS00,ERS10}
nwascs02# mount /dev/sdc /usr/sap/TR1/ERS10
# mkdir -p {/sapmnt,/sapinst,/usr/sap/TR1/SYS}
# mount <nfsServerIP>:/sapmnt /sapmnt
# mount <nfsServerIP>:/sapinst /sapinst
# mount <nfsServerIP>:/sapsys /usr/sap/TR1/SYS
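These NFS mounts do not persist across a reboot on their own. Assuming the same export names and mount points as above, and that the cluster stack is not managing these file systems, matching /etc/fstab entries on both nodes could look like the following (keep <nfsServerIP> as a placeholder for your NFS server):

```
<nfsServerIP>:/sapmnt    /sapmnt            nfs  defaults  0 0
<nfsServerIP>:/sapinst   /sapinst           nfs  defaults  0 0
<nfsServerIP>:/sapsys    /usr/sap/TR1/SYS   nfs  defaults  0 0
```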
Assign the cluster IP to the first node where the ASCS instance is running initially:
nwascs01# ip addr add 10.14.20.19/24 dev bond0
nwascs01# cd /sapinst/SWPM
nwascs01# ./sapinst SAPINST_USE_HOSTNAME=nwascs-ha
Download the Software Provisioning Manager (SWPM). The SWPM version to choose depends on your SAP NetWeaver version and architecture:
Install SAP NetWeaver 7.5
# chown -R tr1adm:sapsys /usr/sap/TR1/ASCS00
Manually add the cluster IP for the ASCS instance:
nwascs02# ip addr add 10.14.20.20/24 dev bond0
nwascs02# cd /sapinst/SWPM
nwascs02# ./sapinst SAPINST_USE_HOSTNAME=nwers-ha
In the SWPM interface:
# chown -R tr1adm:sapsys /usr/sap/TR1/ERS10
On nwascs01, run the following commands:
# su - tr1adm
# sapcontrol -nr 00 -function Stop
# sapcontrol -nr 00 -function StopService
On nwascs02, run the following commands:
# su - tr1adm
# sapcontrol -nr 10 -function Stop
# sapcontrol -nr 10 -function StopService
Run the following commands on any server on which /sapmnt is mounted:
# sed -i -e 's/Restart_Program_01/Start_Program_01/' /sapmnt/TR1/profile/TR1_ASCS00_nwascs-ha
# sed -i -e 's/Restart_Program_00/Start_Program_00/' /sapmnt/TR1/profile/TR1_ERS10_nwers-ha
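The sed commands above switch the instance profiles from Restart_Program to Start_Program, so that sapstart no longer restarts the enqueue and replication server processes itself; restarting becomes the cluster's job. The following self-contained sketch demonstrates the substitution on a hypothetical sample line (the real profiles live under /sapmnt/TR1/profile/):

```shell
# Create a throwaway file with a typical Restart_Program entry and apply the
# same substitution that is used on the real ASCS profile.
profile=$(mktemp)
echo 'Restart_Program_01 = local $(_EN) pf=$(_PF)' > "$profile"
sed -i -e 's/Restart_Program_01/Start_Program_01/' "$profile"
cat "$profile"    # the entry now begins with Start_Program_01
rm -f "$profile"
```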
On nwascs01, run the following commands:
# su - tr1adm
# sapcontrol -nr 00 -function StartService TR1
# sapcontrol -nr 00 -function Start
On nwascs02, run the following commands:
# su - tr1adm
# sapcontrol -nr 10 -function StartService TR1
# sapcontrol -nr 10 -function Start
Ensure that:
Start by installing the empty database on both hosts so that you can use the latest SAP HANA patch release.
Note: Although SWPM assumes that you are installing from the installation media, Dell Technologies recommends installing the latest SAP HANA patch packages directly instead of installing the base version first and then updating to the latest patch release.
hana01# ip addr add 10.14.20.10/24 dev bond0
# cd /sapinst/SAP_HANA_DATABASE/
# ./hdblcm --ignore=check_signature_file
Notes:
Use the --ignore parameter only if you installed an SAP HANA database patch release and a signature was not created with SAPCAR during the extraction.
Enter the values that are specified in Table 7 and Table 8.
Ensure that you enter the correct value for the sapsys group. Do not use the SAP HANA default value of 79.
On hana01, run the following command:
hana01 # cd /sapinst/SWPM
hana01 # ./sapinst SAPINST_USE_HOSTNAME=hana-ha
In the SWPM interface:
nwwrk01 # cd /sapinst/SWPM/
nwwrk01 # ./sapinst SAPINST_USE_HOSTNAME=nwwrk01
nwwrk02 # cd /sapinst/SWPM/
nwwrk02 # ./sapinst SAPINST_USE_HOSTNAME=nwwrk02
Before you configure SAP HANA system replication, set up the hdbuserstore entry for a backup user. This design guide uses “backup” as the username.
On both hosts, run the following commands:
# su - th0adm
# hdbuserstore -i SET backup localhost:30013@SYSTEMDB system
Note: For data protection reasons, create a dedicated backup user with the appropriate permissions on your databases. Alternatively, you can use the SYSTEM user.
To configure replication:
hana01:# su - th0adm
# hdbsql -i 00 -U backup -d SYSTEMDB "BACKUP DATA USING FILE ('/tmp/foo')"
# hdbsql -i 00 -U backup -d SYSTEMDB "BACKUP DATA FOR TH0 USING FILE ('/tmp/foo2')"
Note: Ensure that the backup file destination has enough free available space and is writable for the th0adm user.
th0adm@hana01:# hdbnsutil -sr_enable --name=Node1
hana02:# su - th0adm
th0adm@hana02:# HDB stop
hana02:# scp root@hana01:/usr/sap/TH0/SYS/global/security/rsecssfs/key/SSFS_TH0.KEY /usr/sap/TH0/SYS/global/security/rsecssfs/key/SSFS_TH0.KEY
hana02:# scp root@hana01:/usr/sap/TH0/SYS/global/security/rsecssfs/data/SSFS_TH0.DAT /usr/sap/TH0/SYS/global/security/rsecssfs/data/SSFS_TH0.DAT
hana02:# su - th0adm
th0adm@hana02:# hdbnsutil -sr_register --remoteHost=hana01 --remoteInstance=00 --replicationMode=syncmem --name=Node2
hana02:# su - th0adm
th0adm@hana02:# HDB start
hana01:# su - th0adm
th0adm@hana01:# cdpy
th0adm@hana01:# python systemReplicationStatus.py
The following code snippet shows the output of a successful replication command:
# yum -y install pcs pacemaker resource-agents-sap-hana
# passwd hacluster
[enter a password for the user hacluster]
# systemctl enable pcsd.service; systemctl start pcsd.service
# pcs cluster auth hana01 hana02
# pcs cluster setup --name clhana hana01 hana02
# pcs cluster start --all
Red Hat supports two fencing mechanisms: power fence agents and I/O fence agents. For more information, see Fencing in a Red Hat High Availability Cluster - Red Hat Customer Portal. This example uses Intelligent Platform Management Interface (IPMI) as an industry-proven power fence agent mechanism.
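The fence device creation itself is not shown above; as a hedged sketch, an IPMI fence device per node is typically created with the fence_ipmilan agent. The BMC addresses, user name, and password below are hypothetical placeholders, not values from this design. Because a mistyped fencing command is disruptive, the commands are wrapped in echo for review; remove the echo to run them as root on one cluster node:

```shell
# Hypothetical BMC (iDRAC) addresses and credentials -- replace with your own.
# pcmk_host_list restricts each fence device to the node it can power off.
echo pcs stonith create fence_hana01 fence_ipmilan \
  pcmk_host_list=hana01 ip=10.14.30.1 username=fenceuser password=fencepass lanplus=1
echo pcs stonith create fence_hana02 fence_ipmilan \
  pcmk_host_list=hana02 ip=10.14.30.2 username=fenceuser password=fencepass lanplus=1
```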
hana01:# su - th0adm
th0adm@hana01:# HDB stop
hana02:# su - th0adm
th0adm@hana02:# HDB stop
[root]# mkdir -p /hana/shared/myHooks
[root]# cp /usr/share/SAPHanaSR/srHook/SAPHanaSR.py /hana/shared/myHooks
[root]# chown -R th0adm:sapsys /hana/shared/myHooks
Add the following hook configuration to the global.ini file of the SAP HANA system, for example under /hana/shared/TH0/global/hdb/custom/config/global.ini:
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
[trace]
ha_dr_saphanasr = info
Notes:
Replace th0 with the lowercase SAP SID of your database.
Replace Node1 and Node2 with the site names given during the replication setup.
Cmnd_Alias Node1_SOK = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node1 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias Node1_SFAIL = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node1 -v SFAIL -t crm_config -s SAPHanaSR
Cmnd_Alias Node2_SOK = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node2 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias Node2_SFAIL = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node2 -v SFAIL -t crm_config -s SAPHanaSR
th0adm ALL=(ALL) NOPASSWD: Node1_SOK, Node1_SFAIL, Node2_SOK, Node2_SFAIL
Defaults!Node1_SOK, Node1_SFAIL, Node2_SOK, Node2_SFAIL !requiretty
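The sudoers entries above are usually placed in a drop-in file under /etc/sudoers.d/. The following sketch stages the file and runs a basic sanity check first; the file name 20-saphanasr and the staging approach are illustrative choices, and visudo -cf remains the authoritative syntax check on a real host:

```shell
# Stage the SAPHanaSR sudoers entries in a temporary file before copying them
# to /etc/sudoers.d/ (the file name 20-saphanasr is an arbitrary choice).
staged=$(mktemp)
cat > "$staged" <<'EOF'
Cmnd_Alias Node1_SOK = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node1 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias Node1_SFAIL = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node1 -v SFAIL -t crm_config -s SAPHanaSR
Cmnd_Alias Node2_SOK = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node2 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias Node2_SFAIL = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node2 -v SFAIL -t crm_config -s SAPHanaSR
th0adm ALL=(ALL) NOPASSWD: Node1_SOK, Node1_SFAIL, Node2_SOK, Node2_SFAIL
Defaults!Node1_SOK, Node1_SFAIL, Node2_SOK, Node2_SFAIL !requiretty
EOF
# Basic sanity check: all four aliases granted in the NOPASSWD line are defined.
test "$(grep -c '^Cmnd_Alias' "$staged")" -eq 4 && echo "aliases OK"
# On the real host: visudo -cf "$staged" && install -m 0440 "$staged" /etc/sudoers.d/20-saphanasr
rm -f "$staged"
```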
hana01:# su - th0adm
th0adm@hana01:# HDB start
hana02:# su - th0adm
th0adm@hana02:# HDB start
# crm configure property maintenance-mode="true"
# pcs resource defaults update resource-stickiness=1000
# pcs resource defaults update migration-threshold=5000
# pcs resource create SAPHanaTopology_TH0_00 SAPHanaTopology SID=TH0 InstanceNumber=00 \
op start timeout=600 \
op stop timeout=300 \
op monitor interval=10 timeout=600 \
clone clone-max=2 clone-node-max=1 interleave=true
# pcs resource create SAPHana_TH0_00 SAPHana SID=TH0 InstanceNumber=00 \
PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true \
op start timeout=3600 \
op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 \
op demote timeout=3600 \
promotable notify=true clone-max=2 clone-node-max=1 interleave=true
# pcs resource create vip_TH0_00 IPaddr2 ip="10.14.20.10"
# pcs constraint order SAPHanaTopology_TH0_00-clone then SAPHana_TH0_00-clone symmetrical=false
# pcs constraint colocation add vip_TH0_00 with master SAPHana_TH0_00-clone 2000
# crm configure property maintenance-mode="false"
hana01:# crm_mon -r1
The following output is displayed:
hana01:# su - th0adm
th0adm@hana01:# cdpy
th0adm@hana01:# python systemReplicationStatus.py
At a high level, configuring a cluster consists of the following steps:
On the nwascs01 and nwascs02 nodes:
# yum -y install pcs pacemaker resource-agents-sap
Run the following command:
# pcs cluster auth nwascs01 nwascs02
# pcs cluster setup --name nwascs nwascs01 nwascs02
# pcs cluster start --all
Perform step 3 in Configure Pacemaker.
# crm_mon -r1
After the configuration is complete on the nwascs01 server:
# crm configure property maintenance-mode="true"
For easier handling, the resource configuration is written to the following files:
See Configuration resource files for the files.
# crm configure load update crm_ascs.txt
# crm configure load update crm_ers.txt
# crm configure load update crm_col.txt
After the files are loaded, the resource configuration is complete.
# crm configure property maintenance-mode="false"