Prerequisites
When installing SAP, ensure that:
For fault tolerance at the network level, Dell Technologies recommends using active/passive network bonding devices. See Chapter 14, Configuring network bonding, in the Red Hat Enterprise Linux 8 documentation on the Red Hat Customer Portal.
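A minimal sketch of such an active-backup bond created with nmcli follows (the interface names eno1 and eno2 and the connection names are assumptions and must be adapted to your environment):
# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100"
# nmcli connection add type ethernet con-name bond0-port1 ifname eno1 master bond0
# nmcli connection add type ethernet con-name bond0-port2 ifname eno2 master bond0
# nmcli connection up bond0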
Install the SAP components in the following order:
On all the servers, run the following commands:
# mkdir -p {/sapmnt,/sapinst,/usr/sap/TR1/SYS}
# mount <nfsServerIP>:/sapmnt /sapmnt
# mount <nfsServerIP>:/sapinst /sapinst
# mount <nfsServerIP>:/sapsys /usr/sap/TR1/SYS
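To make these NFS mounts persistent across reboots, matching entries can be added to /etc/fstab on each server (a sketch; <nfsServerIP> is the same placeholder as above):
<nfsServerIP>:/sapmnt    /sapmnt             nfs  defaults  0 0
<nfsServerIP>:/sapinst   /sapinst            nfs  defaults  0 0
<nfsServerIP>:/sapsys    /usr/sap/TR1/SYS    nfs  defaults  0 0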
Assign the cluster IP to the first node where the ASCS instance is running initially:
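The assignment itself can be done with the ip command, mirroring the SAP HANA example later in this section; a minimal sketch (the address and the bond0 device are placeholders for your environment):
nwascs01# ip addr add <ascsClusterIP>/24 dev bond0
Then start SWPM: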
nwascs01# cd /sapinst/SWPM
nwascs01# ./sapinst SAPINST_USE_HOSTNAME=nwascs01
Download the Software Provisioning Manager (SWPM). Choose the appropriate SWPM version based on the following SAP NetWeaver version and architecture:
Install SAP NetWeaver 7.5
chown -R tr1adm:sapsys /usr/sap/TR1/ASCS00
Ensure that:
Start by installing the empty database on both hosts so that you can use the latest SAP HANA patch release.
Note: While SWPM assumes that the installation media is used, it is recommended to install the latest SAP HANA patch packages instead of installing the base version first and then updating to the latest patch release.
hana01# ip addr add 10.14.20.19/24 dev bond0
# cd /sapinst/SAP_HANA_DATABASE/
# ./hdblcm --ignore=check_signature_file
Note:
Use the ignore parameter only if an SAP HANA database patch release is installed without creating a signature with SAPcar during the extraction.
Enter the values that are specified in Table 6 and Table 7.
Ensure that you enter the correct value for the sapsys group. Do not use the SAP HANA default value of 79.
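To check which group ID is currently assigned to sapsys on the hosts, a standard lookup can be used:
# getent group sapsys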
On hana01, run the following commands:
hana01 # cd /sapinst/SWPM
hana01 # ./sapinst SAPINST_USE_HOSTNAME=hana-ha
In the SWPM interface:
nwwrk01 # cd /sapinst/SWPM/
nwwrk01 # ./sapinst SAPINST_USE_HOSTNAME=nwwrk01
nwwrk02 # cd /sapinst/SWPM/
nwwrk02 # ./sapinst SAPINST_USE_HOSTNAME=nwwrk02
Red Hat strongly recommends using a quorum server for an SAP HANA failover cluster to prevent a “split brain” scenario when a network failure occurs. A “split brain” occurs when both databases in a cluster act as the primary database because the cluster cannot determine which one should be active. As a result, valid transaction requests can reach both databases, which requires manual intervention to resolve after the network is back online. Otherwise, you must discard the transactions on one side and accept data loss.
The quorum service itself is lightweight and can be run on any system on top of the normal workload of an existing server. The service can be installed in the following locations, ordered from most efficient to least efficient:
Note: The first option is better than having no quorum, because VMware uses SAN-based cluster communication in addition to network communication. Therefore, the quorum VM will not be started a second time when only a network outage happens.
sapquorum:# dnf -y install pcs corosync-qnetd
sapquorum:# systemctl start pcsd.service
sapquorum:# systemctl enable pcsd.service
sapquorum:# pcs qdevice setup model net --enable --start
sapquorum:# pcs qdevice status net --full
sapquorum:# firewall-cmd --permanent --add-service=high-availability
sapquorum:# firewall-cmd --add-service=high-availability
Note: For more information about how to set up and use quorum servers, see Configuring quorum devices in the Red Hat Enterprise Linux 8 documentation on the Red Hat Customer Portal.
Before configuring the SAP HANA replication, set up the hdbuserstore entry for a backup user. This design guide uses “backup” as the key name.
On both hosts, run the following commands:
# su - th0adm
# hdbuserstore -i SET backup localhost:30013@SYSTEMDB system
Note: For data protection reasons, create a backup user with the appropriate permissions on your databases. The SYSTEM user permissions can also be used.
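To verify the stored entry on each host, it can be listed with hdbuserstore:
# hdbuserstore LIST backup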
To configure the replication:
hana01:# su - th0adm
# hdbsql -i 00 -U backup -d SYSTEMDB "BACKUP DATA USING FILE ('/tmp/foo')"
# hdbsql -i 00 -U backup -d SYSTEMDB "BACKUP DATA FOR TH0 USING FILE ('/tmp/foo2')"
Note: Ensure that the backup file destination has enough free available space and is writable for the th0adm user.
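A quick way to check both conditions before running the backup (using the /tmp destination from the example above):
th0adm@hana01:# df -h /tmp
th0adm@hana01:# touch /tmp/backup_write_test && rm /tmp/backup_write_test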
th0adm@hana01:# hdbnsutil -sr_enable --name=Node1
hana02:# su - th0adm
th0adm@hana02:# HDB stop
hana02:# scp root@hana01:/usr/sap/TH0/SYS/global/security/rsecssfs/key/SSFS_TH0.KEY /usr/sap/TH0/SYS/global/security/rsecssfs/key/SSFS_TH0.KEY
hana02:# scp root@hana01:/usr/sap/TH0/SYS/global/security/rsecssfs/data/SSFS_TH0.DAT /usr/sap/TH0/SYS/global/security/rsecssfs/data/SSFS_TH0.DAT
hana02:# su - th0adm
th0adm@hana02:# hdbnsutil -sr_register --remoteHost=hana01 --remoteInstance=00 --replicationMode=syncmem --name=Node2
hana02:# su - th0adm
th0adm@hana02:# HDB start
hana01:# su - th0adm
th0adm@hana01:# cdpy
th0adm@hana01:# python systemReplicationStatus.py
The following code snippet shows the output of a successful replication command:
To configure Pacemaker:
# dnf -y install pcs pacemaker resource-agents-sap-hana corosync-qdevice
# passwd hacluster
[enter a password for the user hacluster]
# systemctl enable pcsd.service; systemctl start pcsd.service
# pcs host auth hana01 hana02
# pcs cluster setup clhana hana01 hana02
# pcs cluster start --all
# pcs host auth sapquorum
# pcs quorum device add model net host=sapquorum algorithm=ffsplit
# pcs quorum config
Options:
Device:
  votes: 1
  Model: net
    algorithm: ffsplit
    host: sapquorum
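To confirm that the cluster is using the quorum device, the runtime quorum status can also be displayed:
# pcs quorum status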
Red Hat supports two fencing mechanisms: power fence agents and I/O fence agents. For more information, see Fencing in a Red Hat High Availability Cluster - Red Hat Customer Portal.
Dell SAP engineering used VMware fencing, as described in: How do I configure a stonith device using agent fence_vmware_soap in a Red Hat High Availability cluster with pacemaker? - Red Hat Customer Portal.
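A sketch of such a STONITH resource, following the pattern from the referenced article (the vCenter address, credentials, and VM names are placeholders, not values from this design):
# pcs stonith create vmfence fence_vmware_soap \
ipaddr=<vcenterAddress> ssl_insecure=1 login=<fenceUser> passwd=<fencePassword> \
pcmk_host_map="hana01:<hana01VMname>;hana02:<hana02VMname>"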
hana01:# su - th0adm
th0adm@hana01:# HDB stop
hana02:# su - th0adm
th0adm@hana02:# HDB stop
[root]# mkdir -p /hana/shared/myHooks
[root]# cp /usr/share/SAPHanaSR/srHook/SAPHanaSR.py /hana/shared/myHooks
[root]# chown -R th0adm:sapsys /hana/shared/myHooks
Add the following entries to the global.ini file of the TH0 database on both nodes (for example, /hana/shared/TH0/global/hdb/custom/config/global.ini):
[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /hana/shared/myHooks
execution_order = 1
[trace]
ha_dr_saphanasr = info
Note:
Replace th0 with the lowercase SAP SID of your database.
Replace Node1 and Node2 with the site names that were specified during the replication setup.
Cmnd_Alias Node1_SOK = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node1 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias Node1_SFAIL = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node1 -v SFAIL -t crm_config -s SAPHanaSR
Cmnd_Alias Node2_SOK = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node2 -v SOK -t crm_config -s SAPHanaSR
Cmnd_Alias Node2_SFAIL = /usr/sbin/crm_attribute -n hana_th0_site_srHook_Node2 -v SFAIL -t crm_config -s SAPHanaSR
th0adm ALL=(ALL) NOPASSWD: Node1_SOK, Node1_SFAIL, Node2_SOK, Node2_SFAIL
Defaults!Node1_SOK, Node1_SFAIL, Node2_SOK, Node2_SFAIL !requiretty
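These entries can be kept in a dedicated sudoers drop-in file instead of editing /etc/sudoers directly; a sketch (the file name is an assumption):
# visudo -f /etc/sudoers.d/20-saphana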
hana01:# su - th0adm
th0adm@hana01:# HDB start
hana02:# su - th0adm
th0adm@hana02:# HDB start
hana01# pcs property set maintenance-mode=true
# pcs resource defaults update resource-stickiness=1000
# pcs resource defaults update migration-threshold=5000
# pcs resource create SAPHanaTopology_TH0_00 SAPHanaTopology SID=TH0 InstanceNumber=00 \
op start timeout=600 \
op stop timeout=300 \
op monitor interval=10 timeout=600 \
clone clone-max=2 clone-node-max=1 interleave=true
# pcs resource create SAPHana_TH0_00 SAPHana SID=TH0 InstanceNumber=00 \
PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=true \
op start timeout=3600 \
op stop timeout=3600 \
op monitor interval=61 role="Slave" timeout=700 \
op monitor interval=59 role="Master" timeout=700 \
op promote timeout=3600 \
op demote timeout=3600 \
promotable notify=true clone-max=2 clone-node-max=1 interleave=true
# pcs resource create vip_TH0_00 IPaddr2 ip="10.14.20.10"
# pcs constraint order SAPHanaTopology_TH0_00-clone then SAPHana_TH0_00-clone symmetrical=false
# pcs constraint colocation add vip_TH0_00 with master SAPHana_TH0_00-clone 2000
# pcs property set maintenance-mode=false
hana01:~ # crm_mon -r1
The following output is displayed:
hana01:# su - th0adm
th0adm@hana01:# cdpy
th0adm@hana01:# python systemReplicationStatus.py
To configure fault tolerance for the ASCS instance:
The VM is turned on automatically after approximately one minute.
This example assumes that the VMware cluster is evenly spanned over two data centers. Automate the placement of the virtual servers through VM and host rules so that each data center has at least one SAP HANA database and one SAP application server.
Define the host groups and VM groups on the Configuration tab of the vSphere cluster object:
DC1: Contains all the hosts that reside in the first data center.
DC2: Contains all the hosts that reside in the second data center.
VM-DC1: Contains the virtual servers hana01, nwascs01, and nwwrk01.
VM-DC2: Contains the virtual servers hana02 and nwascs02.
Creating affinity rules keeps VMs in the correct data center by telling the vSphere Distributed Resource Scheduler (DRS) which virtual servers should run on which host group.
To create affinity rules:
This rule ensures that the virtual servers can be restarted in the second data center if the first data center is completely offline.
This step is similar to the preceding step, but using VM-DC2 and DC2 as parameters.
If it is necessary to manually place the secondary VM in the correct data center, use the fault tolerance menu of the primary VM, as shown in the following figure: