For Use Case 2, we show a basic Kubernetes installation to demonstrate how hosting the container orchestration system on our LAN provides greater performance and control, as well as the ability to customize the configuration. The Kubernetes administrator performs a custom installation of Kubernetes after completing the prerequisite tasks described below. The Kubernetes cluster automates the manual tasks that were performed on the Docker containers in Use Case 1.
Setting up Kubernetes includes the following tasks:
- Fulfilling prerequisites
- Installing Kubernetes
- Initializing the Kubernetes cluster
- Adding worker nodes to the Kubernetes cluster
Before setting up the Kubernetes cluster, complete the following prerequisite tasks:
- Set SELinux to permissive mode.
- Configure the firewall.
- Ensure that the br_netfilter module is loaded.
- Disable swap for all nodes.
The following sections provide the details for performing these tasks.
Set SELinux to permissive mode
Setting SELinux to permissive mode enables containers to access the host file system, which is required by pod networks.
# /usr/sbin/setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
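You can try the sed edit safely against a scratch copy first. The following sketch uses a temporary file with assumed contents; the real target is /etc/selinux/config.

```shell
# Scratch-copy sketch of the SELinux config edit (illustrative file contents).
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"   # SELINUX=permissive
```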
Configure the firewall
To configure the firewall, select one of the following options: open the required ports as described in the following steps, or disable the firewall entirely as described in the last step.
- Enable masquerading so that traffic from the pod network can be forwarded:
# firewall-cmd --add-masquerade --permanent
- All nodes must be able to accept connections from the master node on TCP port 10250.
# firewall-cmd --add-port=10250/tcp --permanent
- Traffic must be allowed on the UDP port 8472.
# firewall-cmd --add-port=8472/udp --permanent
- Ensure that all ports required by Kubernetes are available. For instance, TCP port 6443 must be accessible on the master node to allow other nodes to access the API Server. Run the following command on the master node:
# firewall-cmd --add-port=6443/tcp --permanent
- Reload the firewall for these rules to take effect:
# firewall-cmd --reload
All nodes must be able to receive traffic from all other nodes on every port on the network fabric that is used for the Kubernetes pods.
- If you have a requirement NOT to run a firewall directly on the nodes on which Kubernetes is deployed, enter the following commands:
# systemctl disable firewalld
# systemctl stop firewalld
Ensure that the br_netfilter module is loaded
This module is usually loaded by default, and it is unlikely that you need to load it manually.
- Check whether the br_netfilter module is loaded with this command:
# lsmod | grep br_netfilter
- (Optional) If necessary, load the br_netfilter module manually by entering these commands:
# modprobe br_netfilter
# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
- Kubernetes requires that packets traversing a network bridge are processed by iptables for filtering and for port forwarding. Ensure that net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration file on all nodes.
# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# /sbin/sysctl -p /etc/sysctl.d/k8s.conf
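The same heredoc pattern can be rehearsed against a scratch file before touching the system configuration. This is a sketch only; the real target is /etc/sysctl.d/k8s.conf.

```shell
# Write the two sysctl keys to a temporary file and display the result.
tmp=$(mktemp)
cat <<EOF > "$tmp"
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat "$tmp"
```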
Disable swap for all nodes
Kubernetes requires that swap be disabled to avoid performance degradation. Enter these commands on all nodes:
# sed -i '/swap/d' /etc/fstab
# swapoff -a
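The sed command above deletes every line containing "swap" from /etc/fstab. The following scratch-copy sketch (with an illustrative fstab) shows the effect before you run it on the real file.

```shell
# Scratch-copy sketch of the swap-line removal; the real target is /etc/fstab.
fstab=$(mktemp)
printf '%s\n' '/dev/mapper/rhel-root /    xfs  defaults 0 0' \
              '/dev/mapper/rhel-swap swap swap defaults 0 0' > "$fstab"
sed -i '/swap/d' "$fstab"
cat "$fstab"   # only the root filesystem line remains
```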
Installing Kubernetes
In Use Case 2, we are using one master node and three worker nodes.
To install Kubernetes, follow these steps:
- Ensure that network configuration is complete on all Kubernetes nodes and that all nodes can communicate with each other and with the Internet. Add the hostname and IP address of every master and worker node to the /etc/hosts file on all nodes; various Kubernetes processes use this file to resolve node addresses.
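For example, with one master and three workers the hosts file entries might look like the following. The master address matches the one used later in the kubeadm join command; the worker addresses and all hostnames are illustrative only.

```
10.230.87.241  k8s-master
10.230.87.242  k8s-worker1
10.230.87.243  k8s-worker2
10.230.87.244  k8s-worker3
```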
- Ensure that Docker Enterprise Edition is installed on all the Kubernetes nodes. To check if Docker service is running, enter the following command:
[root@docker ~] # systemctl status docker
To check the Docker version, enter this command:
[root@docker ~]# docker version
- If the Docker Enterprise Edition is not already installed, install it by following the procedure described in Step 2: Activate the Docker Enterprise Edition license.
- Add the Kubernetes package repository, which provides the kubeadm, kubelet, and kubectl packages. (The cluster components that these packages deploy, including the etcd key-value store that holds the cluster state as depicted in Figure 12, are configured in later steps.) The definition shown here is the standard upstream Kubernetes yum repository:
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
- Install the Kubernetes packages kubeadm, kubelet, and kubectl. If you are installing a specific version of Kubernetes (such as 1.14.9), specify the version now.
# yum install kubelet-1.14.9 kubectl-1.14.9 kubeadm-1.14.9
# systemctl enable kubelet
# systemctl start kubelet
Note: All these Kubernetes processes are described in earlier sections and depicted in Figure 12. Kubernetes is now loaded on all nodes and ready to be configured.
Initializing the Kubernetes cluster
These steps help you initialize Kubernetes, set up a cluster, and test your Oracle 12c and 19c applications. The steps in this section verify the operability of the Kubernetes cluster and test the networking communications between the master and worker Kubernetes nodes.
- To initialize the Kubernetes cluster, run the following command on the master node.
# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=1.14.9 --ignore-preflight-errors=Swap,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables,SystemVerification
- --pod-network-cidr=192.168.0.0/16 is the IP address range for the pod network. (We are using the 'calico' virtual network. If you want to use another pod network such as weave-net or flannel, change the address range to match that network; for example, flannel's default is 10.244.0.0/16.)
- --kubernetes-version=1.14.9 is the Kubernetes version that you installed on the Kubernetes nodes.
- After the initialization completes, copy the kubeadm join command from the output and save it; you will use it in the next section to add worker nodes to the cluster. Then run the following commands on the master node to configure kubectl access:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Deploy the pod (calico) network to the Kubernetes cluster.
# kubectl apply -f https://docs.projectcalico.org/v3.10/manifests/calico.yaml
- Check the Kubernetes system pods.
# kubectl get pods --all-namespaces
Adding worker nodes to the Kubernetes cluster
- Connect to each worker node and run the kubeadm join command that we copied in the previous procedure. The addition of Kubernetes worker nodes to the Kubernetes master completes the cluster creation process.
# kubeadm join 10.230.87.241:6443 --token sntfta.wjsndor3q8zqrpjz --discovery-token-ca-cert-hash sha256:2e46cf8ffb2838bfee7d419d6bc27b27e0713f98741b84c8cb673bc34f49e017
Note: Synchronize the system time on the master node and worker nodes.
- Connect to the master node and check the nodes' status.
# kubectl get nodes