The first step in creating a developer environment using Docker containers is licensing and installing the Docker runtime.
The Docker administrator installs the Enterprise Edition by running the following installation command:
$ yum -y install docker-ee-19.03 docker-ee-cli-19.03 containerd.io
Note: Set up the Docker Enterprise repository before installing Docker.
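The repository setup typically looks like the following sketch. The exact repository URL comes from your Docker Enterprise subscription page on Docker Hub; the `<DOCKER-EE-URL>` placeholder below is an assumption, not a literal value:

```shell
# Install repository management tools and add the Docker EE repository.
# <DOCKER-EE-URL> is a placeholder for the subscription-specific URL
# shown on your Docker Hub subscription page.
$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo "<DOCKER-EE-URL>/centos/docker-ee.repo"
```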
In a vSphere environment, the VM's configured CPU and memory resources limit what is available to the containers it hosts. Collaboration between the virtualization and Docker administrators is important because most Docker environments run multiple containers per host. VMs hosting Docker containers tend to be larger than other VMs, requiring more CPU and memory resources to support their containers.
Docker registry placement is a key consideration when building a dev/test environment. The factors that influence Docker registry placement include:
- Variety: the range of images that the registry must store and serve
- Velocity: the rate at which images are pulled and pushed
- Security: the sensitivity of the configuration data baked into customized images
The Docker administrator must work closely with network engineers and security experts to address the placement of the Docker registry. Depending on the size of the container environment, the velocity of image pulls and pushes can place a significant load on the network. Further, customized registry images might contain sensitive configuration settings that must remain secure.
For our lab validation, we used a local private registry, which addressed our key variety, velocity, and security requirements.
To create a local registry from Docker Hub, the developer runs the following commands:
$ docker pull registry:2
$ mkdir -p /home/dockerv/registry
$ docker run -d \
-p 5000:5000 \
--name registry \
-v /home/dockerv/registry:/var/lib/registry \
-v /certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/ca.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/ca.key \
registry:2
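To confirm that the registry is serving requests, a developer can retag a local image into the registry's namespace and push it. The `busybox` image and the `myregistry.local` hostname below are illustrative assumptions; substitute the registry host's actual DNS name:

```shell
# Pull a small public image, retag it for the local registry, and push it.
# "myregistry.local" is a placeholder for the registry host's DNS name.
$ docker pull busybox:latest
$ docker tag busybox:latest myregistry.local:5000/busybox:latest
$ docker push myregistry.local:5000/busybox:latest

# List the repositories the registry now holds over the TLS endpoint.
$ curl --cacert /certs/ca.crt https://myregistry.local:5000/v2/_catalog
```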
Many Kubernetes solutions are available today. For example, turnkey-managed Kubernetes offerings from cloud providers give IT organizations a zero-data-center-footprint solution that requires no installation. An on-premises private-cloud Kubernetes implementation offers greater control and flexibility but requires investment in infrastructure and training. For this use case, we show an open-source Kubernetes installation to demonstrate how having the container orchestration system on our LAN provides greater performance and control as well as the ability to customize the configuration.
Install Kubernetes as follows.
Note: For complete Kubernetes installation instructions, see the Kubernetes documentation.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Set SELinux in permissive mode (effectively disabling it)
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
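With the packages installed on every node, the cluster itself is typically bootstrapped with kubeadm. The pod network CIDR below is an assumption that must match the CNI add-on you deploy, and the join parameters are placeholders printed by `kubeadm init`:

```shell
# On the control-plane node: initialize the cluster.
# The pod network CIDR shown is an example; it must match your CNI add-on.
$ kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the admin user, as kubeadm's output instructs.
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker node: join the cluster using the token and CA hash
# printed by kubeadm init (shown here as placeholders).
$ kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash <hash>
```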
In addition to our Kubernetes environment, we also need a CSI plug-in to complete our automation journey. CSI plug-ins implement the Container Storage Interface, an industry standard that Dell Technologies, VMware, and others use to provision block and file storage to container orchestration systems. CSI plug-ins unify storage management across many different container orchestration systems, including Mesos, Docker Swarm, and Kubernetes.
The vSphere CSI plug-in for Kubernetes provides the following orchestration capabilities:
The Kubernetes administrator works with the storage administrator to download, modify, and install the vSphere CSI plug-in. To configure Cloud Provider Interface (CPI) and CSI, see the following VMware document on GitHub: Deploying a Kubernetes Cluster on vSphere with CSI and CPI.
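After CPI and CSI are configured, the storage administrator can expose vSphere-backed storage to developers through a StorageClass that references the vSphere CSI provisioner. The StorageClass name and the `storagepolicyname` value below are illustrative assumptions; use a storage policy defined in your vCenter:

```shell
# Create a StorageClass that provisions volumes through the vSphere CSI driver.
# The storagepolicyname value is an example; replace it with a policy
# defined in your vCenter Server.
$ cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"
EOF
```

Developers can then request persistent storage simply by creating a PersistentVolumeClaim that names this StorageClass, leaving volume placement to the storage policy.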
After you install and configure the CSI plug-in on the VxRail system, check the status of the CSI plug-in-related pods. Check the status by running the kubectl get pods -n kube-system | grep vsphere command, as shown in the following example:
Figure 8. vSphere CSI plug-in running on Kubernetes worker nodes