To prepare the K3s deployment, install the first K3s server on one of the nodes to be used for the Kubernetes control plane:
# Desired K3s version; leave empty to install the latest stable release
K3s_VERSION=""
curl -sfL https://get.k3s.io | \
INSTALL_K3S_VERSION=${K3s_VERSION} \
INSTALL_K3S_SKIP_SELINUX_RPM=true \
INSTALL_K3S_EXEC='server --cluster-init --write-kubeconfig-mode=644' \
sh -s -
Tip: To address availability and possible scaling to a multiple-node cluster, the --cluster-init flag in the command above enables the embedded etcd data store instead of the default SQLite data store.
watch -c "kubectl get deployments -A"
The K3s deployment is complete when all of the deployments (coredns, local-path-provisioner, metrics-server, and traefik) report at least "1" under "AVAILABLE," as shown in the following figure:
Follow these best practices to further optimize the deployment.
SUSE recommends a full high-availability (HA) K3s cluster for production workloads. The etcd key/value store requires an odd number of servers (control-plane nodes) in the K3s cluster. In this case, add two additional control-plane servers for a total of three.
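The odd-number requirement follows from etcd's quorum arithmetic: a cluster of n members stays writable only while a majority (n/2 + 1, integer division) is healthy, so it tolerates (n - 1)/2 failures. A quick sketch of why three servers is the sweet spot:

```shell
# etcd quorum math: n members need a majority (n/2 + 1, integer division)
# to keep serving writes, so the cluster survives (n - 1) / 2 failures.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerance=$(( (n - 1) / 2 ))
  echo "servers=$n quorum=$quorum tolerates=$tolerance"
done
# servers=3 tolerates one failure; servers=4 still tolerates only one,
# which is why even-sized clusters add cost without adding resilience.
```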
# Private IP preferred, if available
FIRST_SERVER_IP=""
# From /var/lib/rancher/k3s/server/node-token file on the first server
NODE_TOKEN=""
# Match the version of the first server
K3s_VERSION=""
curl -sfL https://get.k3s.io | \
INSTALL_K3S_VERSION=${K3s_VERSION} \
INSTALL_K3S_SKIP_SELINUX_RPM=true \
K3S_URL=https://${FIRST_SERVER_IP}:6443 \
K3S_TOKEN=${NODE_TOKEN} \
K3S_KUBECONFIG_MODE="644" \
INSTALL_K3S_EXEC='server' \
sh -
watch -c "kubectl get deployments -A"
By default, the K3s server nodes are available to run non-control-plane workloads. In this case, the K3s default behavior is ideal for the SUSE Rancher server cluster because it does not require additional agent (worker) nodes to maintain a highly available SUSE Rancher server application.
Note: You can restore the normal Kubernetes default behavior by adding a taint to each server node. For more information, see Taints and Tolerations in the official Kubernetes documentation.
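If you do want the server nodes to stop scheduling regular workloads, a minimal sketch of the standard control-plane taint follows. The node name server-1 is a placeholder; list your actual node names with kubectl get nodes.

```shell
# Placeholder node name "server-1"; substitute each of your server nodes.
# NoSchedule keeps new non-tolerating pods off the node without evicting
# pods that are already running there.
kubectl taint nodes server-1 \
  node-role.kubernetes.io/control-plane=true:NoSchedule
```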
To join agent (worker) nodes to the cluster, reuse the same FIRST_SERVER_IP, NODE_TOKEN, and K3s_VERSION values and run the following on each agent node:
curl -sfL https://get.k3s.io | \
INSTALL_K3S_VERSION=${K3s_VERSION} \
INSTALL_K3S_SKIP_SELINUX_RPM=true \
K3S_URL=https://${FIRST_SERVER_IP}:6443 \
K3S_TOKEN=${NODE_TOKEN} \
K3S_KUBECONFIG_MODE="644" \
sh -
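The join parameters above are easy to get wrong. A small preflight sketch (a hypothetical helper, not part of the K3s installer) that checks them before piping the install script to sh:

```shell
#!/bin/sh
# Hypothetical preflight helper: verify the join variables are non-empty
# before piping the K3s install script to sh.
preflight() {
  for var in FIRST_SERVER_IP NODE_TOKEN; do
    # POSIX-compatible indirect expansion of $var
    val=$(eval echo "\"\$$var\"")
    if [ -z "$val" ]; then
      echo "ERROR: $var is not set" >&2
      return 1
    fi
  done
  echo "preflight OK: joining https://${FIRST_SERVER_IP}:6443"
}

# Example with placeholder values (192.0.2.10 is a documentation address)
FIRST_SERVER_IP="192.0.2.10"
NODE_TOKEN="example-token"
preflight
# prints: preflight OK: joining https://192.0.2.10:6443
```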