After a successful deployment of OpenShift Container Platform 4.6 and OpenShift Container Storage 4.6, some additional tasks are required to prepare the cluster for a deployment of SAP Data Intelligence 3.1. For details, see the Red Hat SAP Data Intelligence 3 on OpenShift Container Platform 4 knowledge base article, and review it for recent changes and troubleshooting information before you begin.
This section describes the main prerequisites for the installation.
Some SAP Data Intelligence components require changes at the operating system level of compute nodes, which might affect other workloads running on the same cluster. To avoid any impact, we recommend dedicating a set of nodes to the SAP Data Intelligence workload by labeling them with a dedicated node role, applying the required machine configuration, and grouping them into their own machine config pool.
The following sections describe how to perform these operations.
Choose compute nodes for the SAP Data Intelligence workload and label them by running:
# oc label node/sdi-worker{1,2,3} node-role.kubernetes.io/sdi=""
To apply the required changes to the existing compute nodes, create an additional MachineConfig by running:
# oc create -f https://raw.githubusercontent.com/redhat-sap/sap-data-intelligence/master/snippets/mco/mc-75-worker-sap-data-intelligence.yaml
For SAP Data Intelligence, the required setting is .spec.containerRuntimeConfig.pidsLimit in a ContainerRuntimeConfig.
The result is a modified /etc/crio/crio.conf configuration file on each affected worker node, with pids_limit set to the desired value.
Create a ContainerRuntimeConfig by running:
# oc create -f https://raw.githubusercontent.com/redhat-sap/sap-data-intelligence/master/snippets/mco/ctrcfg-sdi-pids-limit.yaml
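The referenced snippet is maintained by Red Hat, and its content resembles the following sketch. The pidsLimit value of 16384 and the pool selector label shown here are assumptions; consult the knowledge base article for the authoritative values:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: sdi-pids-limit
spec:
  # Select the machine config pool carrying the SDI workload label
  # (label name assumed; verify against the Red Hat snippet).
  machineConfigPoolSelector:
    matchLabels:
      workload: sapdataintelligence
  containerRuntimeConfig:
    # Raises the per-container PID limit; 16384 is an assumed example value.
    pidsLimit: 16384
```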
First, remove the workload label from the default worker MachineConfigPool if it is still present, then create the dedicated pool:
# tmpl=$'{{with $wl := index $m.labels "workload"}}{{if and $wl (eq $wl "sapdataintelligence")}}{{$m.name}}\n{{end}}{{end}}'; \
  if [[ "$(oc get mcp/worker -o go-template='{{with $m := .metadata}}'"$tmpl"'{{end}}')" == "worker" ]]; then
    oc label mcp/worker workload-;
  fi
# oc create -f https://raw.githubusercontent.com/redhat-sap/sap-data-intelligence/master/snippets/mco/mcp-sdi.yaml
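The mcp-sdi.yaml snippet defines the new pool roughly as follows. This is a sketch; the exact label keys and values are assumptions based on the node role applied earlier, so check the referenced file for the authoritative definition:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: sdi
  labels:
    workload: sapdataintelligence
spec:
  # Inherit MachineConfigs targeting both the worker and sdi roles.
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, sdi]
  # Match the nodes labeled earlier with node-role.kubernetes.io/sdi="".
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/sdi: ""
```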
The nodes will inherit all the MachineConfigs targeting worker and sdi roles. The changes are rendered into machineconfigpool/sdi and the worker nodes are restarted one by one until the changes are applied to all of them.
A new role, sdi, is assigned to the chosen nodes, and a new MachineConfigPool containing those nodes is created, as shown in the following figures:
The MachineConfigPool nodes are displayed, as shown in the following figure:
Note: If the control-plane nodes will run SDI workloads, they must be schedulable, and the machine configuration files must be duplicated for the control-plane role so that those nodes also inherit the process ID (PID) limit. For more information, see the Red Hat SAP Data Intelligence 3 on OpenShift Container Platform 4 knowledge base.
OpenShift Container Storage includes the NooBaa object data service, which provides an S3 API to object storage buckets that can be used with the SAP Data Intelligence solution. You must provide the S3 endpoint, an access key and secret access key, and a bucket name to the SAP Data Intelligence installation.
After deploying OpenShift Container Storage, identify the S3 endpoint and create the access keys and bucket through the OpenShift CLI. First, list the NooBaa services:
# oc get svc -n openshift-storage -l app=noobaa
The following output is displayed:
To create the S3 bucket:
The bucket can be stored in any OpenShift project, for example, sdi-infra.
# for claimName in sdi-checkpoint-store sdi-data-lake; do
    oc create -f - <<EOF
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: ${claimName}
spec:
  generateBucketName: ${claimName}
  storageClassName: openshift-storage.noobaa.io
EOF
  done
The object buckets are created, and for each ObjectBucketClaim (OBC) a secret and a config map with the same name (in our example, sdi-checkpoint-store and sdi-data-lake) are created. When the bucket is ready, the claim status changes to Bound.
To display the generated bucket name of a claim, run:
# oc get cm sdi-data-lake -o jsonpath='{.data.BUCKET_NAME}{"\n"}'
# for claimName in sdi-checkpoint-store sdi-data-lake; do
    printf 'Bucket/claim %s:\n Bucket name:\t%s\n' "$claimName" "$(oc get obc -o jsonpath='{.spec.bucketName}' "$claimName")"
    for key in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do
      printf ' %s:\t%s\n' "$key" "$(oc get secret "$claimName" -o jsonpath="{.data.$key}" | base64 -d)"
    done
  done | column -t -s $'\t'
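The secret values returned by jsonpath are base64-encoded, which is why the loop above pipes them through base64 -d. A minimal standalone sketch of that round trip, using the sample access key from the table below as a stand-in (not a real credential):

```shell
#!/bin/sh
# Encode a sample key the way it would appear in the secret's .data field.
# The key below is a made-up sample value, not a real credential.
encoded=$(printf '%s' '2NI09x5X4T23N4YeqGCI' | base64)

# Decode it the same way the loop above does.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "decoded: $decoded"
# prints: decoded: 2NI09x5X4T23N4YeqGCI
```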
The following table shows sample values for the SAP Data Intelligence installation parameters:

Parameter                      | Sample value
Amazon S3 Access Key           | 2NI09x5X4T23N4YeqGCI
Amazon S3 Secret Access Key    | xn1fozP9pOdLKYEBCb1c3NBtGhWn/D82JUlgXTq0
Amazon S3 bucket and directory | sdi-checkpoint-store-652fdcc8-1752-46f4-b7f9-05b4a2d2d79e
Amazon S3 Region (optional)    | Leave empty