For fault tolerance at the network level, Dell Technologies recommends using active-passive network bonding devices. For information about how to configure these devices, see the SUSE Administration Guide.
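A minimal sketch of such an active-backup bond on SLES, assuming eth0 and eth1 as the member interfaces and a placeholder IP address (adjust both to your environment), placed in /etc/sysconfig/network/ifcfg-bond0:
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.14.20.11/24'
BONDING_MASTER='yes'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'
BONDING_SLAVE_0='eth0'
BONDING_SLAVE_1='eth1'
After editing the file, activate the bond by running wicked ifup bond0.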
Install the SAP components in the order described in the following subsections.
Dell Technologies strongly recommends using two Fibre Channel fabrics and multipathing for the shared disks.
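As a quick check, you can verify on each node that the multipath daemon is running and that every shared LUN reports paths through both fabrics. A sketch, assuming the multipath-tools package that ships with SLES:
# systemctl enable --now multipathd
# multipath -ll
Each LUN listed by multipath -ll should show at least two active paths, one per fabric.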
Share the disks for the ASCS and ERS instances and assign them to the cluster nodes nwascs01 and nwascs02. Dell SAP engineering created both file systems on the first server and then mounted only the required disk on each node by running:
nwascs01# mkfs.xfs /dev/sdb
nwascs01# mkfs.xfs /dev/sdc
nwascs01# mkdir -p /usr/sap/TR1/{ASCS00,ERS10}
nwascs01# mount /dev/sdb /usr/sap/TR1/ASCS00
nwascs02# mkdir -p /usr/sap/TR1/{ASCS00,ERS10}
nwascs02# mount /dev/sdc /usr/sap/TR1/ERS10
On all the servers, run:
# mkdir -p {/sapmnt,/sapinst,/usr/sap/TR1/SYS}
# mount <nfsServerIP>:/sapmnt /sapmnt
# mount <nfsServerIP>:/sapinst /sapinst
# mount <nfsServerIP>:/sapsys /usr/sap/TR1/SYS
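To keep these NFS mounts across reboots, you can also add them to /etc/fstab on every server; the ASCS and ERS file systems, by contrast, must stay out of /etc/fstab because Pacemaker mounts them later. A sketch, keeping <nfsServerIP> as a placeholder:
<nfsServerIP>:/sapmnt    /sapmnt           nfs  defaults  0 0
<nfsServerIP>:/sapinst   /sapinst          nfs  defaults  0 0
<nfsServerIP>:/sapsys    /usr/sap/TR1/SYS  nfs  defaults  0 0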
Manually add the cluster IP for the ASCS instance by running:
nwascs01# ip addr add 10.14.20.19/24 dev bond0
nwascs01# cd /sapinst/SWPM
nwascs01# ./sapinst SAPINST_USE_HOSTNAME=nwascs-ha
The Software Provisioning Manager (SWPM) option to use depends on the SAP NetWeaver version and architecture:
Installing SAP NetWeaver 7.5
In the SWPM interface:
nwascs01# chown -R tr1adm:sapsys /usr/sap/TR1/ASCS00
Manually add the cluster IP for the ERS instance by running:
nwascs02# ip addr add 10.14.20.20/24 dev bond0
nwascs02# cd /sapinst/SWPM
nwascs02# ./sapinst SAPINST_USE_HOSTNAME=nwers-ha
To install SAP S/4HANA Server 1809:
nwascs02# chown -R tr1adm:sapsys /usr/sap/TR1/ERS10
On nwascs01, run:
# su - tr1adm
# sapcontrol -nr 00 -function Stop
# sapcontrol -nr 00 -function StopService
On nwascs02, run:
# su - tr1adm
# sapcontrol -nr 10 -function Stop
# sapcontrol -nr 10 -function StopService
On hosts nwascs01 and nwascs02, run:
# zypper in sap-suse-cluster-connector
Add the following lines to /usr/sap/TR1/SYS/profile/TR1_ASCS00_nwascs-ha and /usr/sap/TR1/SYS/profile/TR1_ERS10_nwers-ha:
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector
Perform this step on any server that has /sapmnt mounted on it:
# sed -i -e 's/Restart_Program_01/Start_Program_01/' /sapmnt/TR1/profile/TR1_ASCS00_nwascs-ha
# sed -i -e 's/Restart_Program_00/Start_Program_00/' /sapmnt/TR1/profile/TR1_ERS10_nwers-ha
On nwascs01, run:
# su - tr1adm
# sapcontrol -nr 00 -function StartService TR1
# sapcontrol -nr 00 -function Start
On nwascs02, run:
# su - tr1adm
# sapcontrol -nr 10 -function StartService TR1
# sapcontrol -nr 10 -function Start
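With both instances running again, you can confirm from the SAP side that the cluster connector is active; sapcontrol provides the HAGetFailoverConfig and HACheckConfig functions for this. For example, on nwascs01:
# su - tr1adm
# sapcontrol -nr 00 -function HAGetFailoverConfig
# sapcontrol -nr 00 -function HACheckConfig
The HAGetFailoverConfig output should report HAActive: TRUE and name the SUSE cluster connector.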
Prerequisites:
Start by installing an empty database on both hosts so that you can use the latest SAP HANA patch release.
Note: Although SWPM assumes that you are installing from the base installation media, installing the latest SAP HANA patch packages directly is recommended over installing the base version first and updating to the latest patch release afterwards.
Manually add the cluster IP for the database by running:
hana01# ip addr add 10.14.20.19/24 dev bond0
# cd /sapinst/SAP_HANA_DATABASE/
# ./hdblcm --ignore=check_signature_file
Note: The --ignore parameter is needed only if you are installing an SAP HANA database patch release and did not create a signature with SAPCAR during the extraction.
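If you still need that signature, extracting the archive together with its manifest creates it; a sketch, assuming the server SAR archive was downloaded to /sapinst (the file name varies by patch level):
# cd /sapinst
# SAPCAR -xvf IMDB_SERVER*.SAR -manifest SIGNATURE.SMF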
Enter the values that are specified in Table 7 and Table 8.
Ensure that you enter the correct value for the sapsys group. Do not use the SAP HANA default value of 79.
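You can look up the GID that the NetWeaver installation created, so that the same value is used here; for example:
# getent group sapsys
Enter the GID reported by this command instead of 79.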
On hana01, run:
hana01# cd /sapinst/SWPM
hana01# ./sapinst SAPINST_USE_HOSTNAME=hana-ha
On the first application server (nwwrk01), run:
# cd /sapinst/SWPM/
# ./sapinst SAPINST_USE_HOSTNAME=nwwrk01
Then, in the SWPM interface:
On the second application server (nwwrk02), run:
# cd /sapinst/SWPM/
# ./sapinst SAPINST_USE_HOSTNAME=nwwrk02
In the SWPM interface:
Before you configure SAP HANA replication, set up an hdbuserstore key for a backup user. This design guide uses "backup" as the username.
On both hosts, run:
# su - th0adm
# hdbuserstore -i SET backup localhost:30013@SYSTEMDB system
Note: For security reasons, create a dedicated backup user with the appropriate permissions on your databases; alternatively, you can use the SYSTEM user.
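You can verify the stored key with hdbuserstore LIST. Also, because SAP HANA system replication requires an initial backup, you can trigger one through the key; a sketch, assuming the tenant database is named TH0 and file-based backups (adjust names to your environment):
# su - th0adm
# hdbuserstore LIST backup
# hdbsql -U backup "BACKUP DATA USING FILE ('SYSTEMDB_initial')"
# hdbsql -U backup "BACKUP DATA FOR TH0 USING FILE ('TH0_initial')"
The first hdbsql call backs up the system database, the second the TH0 tenant.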
To cluster the SAP HANA database:
The Communication Layer page is displayed:
Note: Ensure that the NTP service starts on boot. SUSE recommends adding at least three NTP servers as sources; at least three time sources are required to reach a majority decision when sources drift apart. Even if a single source reports the wrong time, both cluster nodes receive the same time, so normal cluster operation is not affected.
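On SLES 15, time synchronization is handled by chrony. A minimal sketch, with placeholder server names in /etc/chrony.conf:
server ntp1.example.com iburst
server ntp2.example.com iburst
server ntp3.example.com iburst
Then enable the service on both nodes:
# systemctl enable --now chronyd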
Note: The number of SBD disks to add depends on the number of available storage systems. For more information, see the SUSE document Storage Protection and SBD | Administration Guide | SUSE Linux Enterprise High Availability Extension 15 SP2.
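After the cluster setup initializes the SBD devices, you can inspect them from either node; the device path below is a placeholder:
# sbd -d /dev/sdd dump
# sbd -d /dev/sdd list
dump prints the on-disk header and timeouts; list shows the messaging slot of each node.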
To verify that the cluster is installed on hana01, run:
hana01:~ # crm_mon -r1
The following output is displayed:
You can also check the status of the replication by running:
hana01:~ # su - th0adm
th0adm@hana01:~> cdpy
th0adm@hana01:~> python systemReplicationStatus.py
The high-level steps for configuring a cluster are:
Prerequisites
On nwascs01 and nwascs02:
# zypper in -t pattern ha_sles
Use YaST or the command line tool to configure the basic cluster. This example uses the command line.
On nwascs01, run:
# modprobe softdog
# echo "softdog" > /etc/modules-load.d/softdog.conf
# systemctl enable sbd
# ha-cluster-init -y -i eth0 -i eth1 -u -s /dev/sdd -s /dev/sde
On nwascs02, run:
# modprobe softdog
# echo "softdog" > /etc/modules-load.d/softdog.conf
# systemctl enable sbd
# rsync 10.14.20.21:/etc/sysconfig/sbd /etc/sysconfig
# ha-cluster-join -c 10.14.20.21 -i eth0 -i eth1
# crm_mon -r1
After the configuration is complete on the server nwascs01, set the cluster to maintenance mode so that it does not trigger any actions:
# crm configure property maintenance-mode="true"
For easier handling, the resource configuration is written to the following files:
crm_ascs.txt
group grp_TR1_ASCS00 \
rsc_ip_TR1_ASCS00 rsc_fs_TR1_ASCS00 rsc_sap_TR1_ASCS00 \
meta resource-stickiness=3000
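The primitive definitions that this group references are not shown above. The following sketch reconstructs what they would look like in crm_ascs.txt, by symmetry with the ERS definitions below and using the device, mount point, IP address, and virtual hostname from the ASCS installation steps earlier in this section (verify against the templates in Appendix A):
primitive rsc_ip_TR1_ASCS00 IPaddr2 \
params ip=10.14.20.19 \
op monitor interval=10s timeout=20s
primitive rsc_fs_TR1_ASCS00 Filesystem \
params device="/dev/sdb" directory="/usr/sap/TR1/ASCS00" fstype=xfs \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
primitive rsc_sap_TR1_ASCS00 SAPInstance \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=TR1_ASCS00_nwascs-ha \
START_PROFILE="/sapmnt/TR1/profile/TR1_ASCS00_nwascs-ha" \
AUTOMATIC_RECOVER=false \
meta resource-stickiness=5000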
crm_ers.txt
primitive rsc_fs_TR1_ERS10 Filesystem \
params device="/dev/sdc" directory="/usr/sap/TR1/ERS10" fstype=xfs \
op start timeout=60s interval=0 \
op stop timeout=60s interval=0 \
op monitor interval=20s timeout=40s
primitive rsc_ip_TR1_ERS10 IPaddr2 \
params ip=10.14.20.20 \
op monitor interval=10s timeout=20s
primitive rsc_sap_TR1_ERS10 SAPInstance \
op monitor interval=11 timeout=60 on-fail=restart \
params InstanceName=TR1_ERS10_nwers-ha \
START_PROFILE="/sapmnt/TR1/profile/TR1_ERS10_nwers-ha" \
AUTOMATIC_RECOVER=false IS_ERS=true \
meta priority=1000
group grp_TR1_ERS10 \
rsc_ip_TR1_ERS10 rsc_fs_TR1_ERS10 rsc_sap_TR1_ERS10
crm_col.txt
colocation col_sap_TR1_no_both -5000: grp_TR1_ERS10 grp_TR1_ASCS00
location loc_sap_TR1_failover_to_ers rsc_sap_TR1_ASCS00 \
rule 2000: runs_ers_TR1 eq 1
order ord_sap_TR1_first_start_ascs Optional: rsc_sap_TR1_ASCS00:start \
rsc_sap_TR1_ERS10:stop symmetrical=false
See Appendix A for templates for these files, including simple commands for adapting them to your environment at installation time.
To load the files into the cluster stack, run:
# crm configure load update crm_ascs.txt
# crm configure load update crm_ers.txt
# crm configure load update crm_col.txt
After the three files are loaded, the resource configuration is complete. To end maintenance mode, run:
# crm configure property maintenance-mode="false"
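As a final check, verify that both groups start on their nodes once maintenance mode ends:
# crm_mon -r1
grp_TR1_ASCS00 and grp_TR1_ERS10 should each run on a different cluster node.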