Deploy K8s clusters into Azure Stack Hub user subscriptions
Welcome to Part 2 of a three-part blog series on running containerized applications on Microsoft Azure’s hybrid ecosystem. Part 1 provided the conceptual foundation and necessary context for the hands-on efforts documented in parts 2 and 3. This article details the testing we performed with AKS engine and Azure Monitor for containers in the Dell Technologies labs.
Here are the links to all the series articles:
Part 1: Running containerized applications on Microsoft Azure’s hybrid ecosystem – Provides an overview of the concepts covered in the blog series.
Part 2: Deploy K8s Cluster into an Azure Stack Hub user subscription – Set up an AKS engine client VM, deploy a cluster using AKS engine, and onboard the cluster to Azure Monitor for containers.
Part 3: Deploy a self-hosted Docker Container Registry – Use one of the Azure Stack Hub QuickStart templates to set up a container registry and push images to it. Then, pull these images from the registry into the K8s cluster deployed with AKS engine in Part 2.
Introduction to the test lab
Before diving into the results of our testing with AKS engine and Azure Monitor for containers, here is a tour of the lab we used for testing all the container-related capabilities on Azure Stack Hub. Please refer to the following two tables. The first table lists the Dell EMC for Microsoft Azure Stack hardware and software. The second table details the resource groups we created to logically organize the components in the architecture. All of the resource groups were provisioned into a single user subscription.
| Scale Unit Information | Value |
|---|---|
| Number of scale unit nodes | 4 |
| Total memory capacity | 1 TB |
| Total storage capacity | 80 TB |
| Logical cores | 224 |
| Azure Stack Hub version | 1.1908.8.41 |
| Connectivity mode | Connected to Azure |
| Identity store | Azure Active Directory |
| Resource Group | Purpose |
|---|---|
| demoaksengine-rg | Contains the resources associated with the client VM running AKS engine. |
| demoK8scluster-rg | Contains the cluster artifacts deployed by AKS engine. |
| demoregistryinfra-rg | Contains the storage account and key vault supporting the self-hosted Docker Container Registry. |
| demoregistrycompute-rg | Contains the VM running Docker Swarm and the self-hosted container registry containers, along with the supporting networking and storage resources. |
| kubecodecamp-rg | Contains the VM and other resources used for building the Sentiment Analysis application that was instantiated on the K8s cluster. |
Please also refer to the following graphic for a high-level overview of the architecture.
Prerequisites
All the step-by-step instructions and sample artifacts used to deploy AKS engine can be found in the Azure Stack Hub User Documentation and GitHub. We do not intend to repeat what has already been written – unless we want to specifically highlight a key concept. Instead, we will share lessons we learned while following along in the documentation.
We decided to test the supported scenario whereby AKS engine deploys all cluster artifacts into a new resource group rather than deploying the cluster to an existing VNET. We also chose to deploy a Linux-based cluster using the kubernetes-azurestack.json API model file found in the AKS engine GitHub repository. A Windows-based K8s cluster cannot be deployed by AKS engine to Azure Stack Hub at the time of this writing (November 2019). Do not attempt to use the kubernetes-windows.json file in the GitHub repository, as this will not be fully functional.
Addressing the prerequisites for AKS engine was very straightforward:
- Ensured that the Azure Stack Hub system was completely healthy by running Test-AzureStack.
- Verified sufficient available memory, storage, and public IP address capacity on the system.
- Verified that the quotas embedded in the user subscription’s offers provided enough capacity for all the Kubernetes capabilities.
- Used marketplace syndication to download the appropriate gallery items. We made sure to match the version of AKS engine to the correct version of the AKS Base Image. In our testing, we used AKS engine v0.43.0, which depends on version 2019.10.24 of the AKS Base Image.
- Collected the name and ID of the Azure Stack Hub user subscription created in the lab.
- Created the Azure AD service principal (SPN) through the public Azure portal. The documentation also links to programmatic methods for creating the SPN (a sketch appears after this list).
- Assigned the SPN the Contributor role on the user subscription.
- Generated the SSH public/private key pair using PuTTY Key Generator (PuTTYgen) for each of the Linux VMs used during testing. We associated the .ppk file extension with PuTTYgen on the management workstation so that saved private key files open in PuTTYgen for easy copying and pasting. We used these keys with PuTTY and WinSCP throughout the testing.
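For completeness, a programmatic equivalent might look like the following sketch. We did not use this path; the SPN name, application ID, and subscription ID are placeholders, and the role assignment is made while the CLI is pointed at the Azure Stack Hub user environment.
# Illustrative only: create the SPN in Azure AD and grant it Contributor
# on the Azure Stack Hub user subscription (placeholders shown in angle brackets)
az ad sp create-for-rbac --name demo-aksengine-spn
# Run the role assignment with the CLI registered against the Azure Stack Hub
# user environment so the scope resolves to the Hub subscription
az role assignment create \
  --assignee <spn-application-id> \
  --role Contributor \
  --scope /subscriptions/<user-subscription-id>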
At this point, we built a Linux client VM for running the AKS engine command-line tool used to deploy and manage the K8s cluster. Here are the specifications of the VM we provisioned (a CLI sketch of an equivalent deployment follows the list):
- Resource group: demoaksengine-rg
- VM size: Standard_DS1_v2 (1 vCPU, 3.5 GB memory)
- Managed disk: 30 GB default managed disk
- Image: Latest Ubuntu 16.04-LTS gallery item from the Azure Stack Hub marketplace
- Authentication type: SSH public key
- Assigned a Public IP address so we could easily SSH to it from the management workstation
- Since this VM was required for ongoing management and maintenance of the K8s cluster, we took the appropriate precautions to ensure that we could recover it in case of a disaster. A critical file to protect on this VM after the cluster is created is the apimodel.json file, which is discussed later in this blog series.
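As a rough sketch, an equivalent deployment from the Azure CLI could look like the following. This assumes the CLI has already been registered against the Azure Stack Hub user endpoints; the VM name, public IP name, key file, and image URN are illustrative and may differ from the gallery items in your marketplace.
# Illustrative only: client VM deployment via Azure CLI
# (assumes az cloud register / az cloud set already point the CLI at the
#  Azure Stack Hub user environment; names and image URN are examples)
az vm create \
  --resource-group demoaksengine-rg \
  --name aksengine-client \
  --image "Canonical:UbuntuServer:16.04-LTS:latest" \
  --size Standard_DS1_v2 \
  --admin-username azureuser \
  --ssh-key-values aksengine-client.pub \
  --public-ip-address aksengine-client-ip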
Our Azure Stack Hub runs with certificates generated from an internal standalone CA in the lab. This means we needed to import our CA’s root certificate into the client VM’s OS certificate store so it could properly connect to the Azure Stack Hub management endpoints before going any further. We thought we would share the steps to import the certificate:
- The standalone CA provides a file with a .cer extension when requesting the root certificate. In order to work with an Ubuntu certificate store, we had to convert this to a .crt file by issuing the following command from Git Bash on the management workstation:
openssl x509 -inform DER -in certnew.cer -out certnew.crt
- Created a directory for the extra CA certificate under /usr/share/ca-certificates on the Ubuntu client VM:
sudo mkdir /usr/share/ca-certificates/extra
- Transferred the certnew.crt file to the /home/azureuser directory on the Ubuntu VM using WinSCP. Then, we copied the file from the home directory to the /usr/share/ca-certificates/extra directory:
sudo cp certnew.crt /usr/share/ca-certificates/extra/certnew.crt
- Appended a line to /etc/ca-certificates.conf:
sudo nano /etc/ca-certificates.conf
Add the following line to the very end of the file:
extra/certnew.crt
- Updated the certificates non-interactively:
sudo update-ca-certificates
The output of this command confirmed that the lab's CA certificate was successfully added.
Note: We learned that if this procedure to import a CA's root certificate is ever carried out on a server already running Docker, you must stop and restart Docker at that point. On Ubuntu, this is done with the following commands:
sudo systemctl stop docker
sudo systemctl start docker
We then SSH'd into the client VM. From the home directory, we executed the commands prescribed in the documentation to download and run the get-akse.sh AKS engine installation script.
curl -o get-akse.sh https://raw.githubusercontent.com/Azure/aks-engine/master/scripts/get-akse.sh
chmod 700 get-akse.sh
./get-akse.sh --version v0.43.0
Once installed, we issued the aks-engine version command to verify a successful installation of AKS engine.
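For reference, verifying the installation is a single command:
aks-engine version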
Deploy K8s cluster
One more step was needed before issuing the command to deploy the K8s cluster: customizing a cluster specification based on an example API model file. Since we were deploying a Linux-based Kubernetes cluster, we downloaded the kubernetes-azurestack.json file into the home directory of the client VM. Though we could have used nano on the Linux VM to customize the file, we used WinSCP to copy it to the management workstation so we could modify it with VS Code instead. Here are a few notes on the changes we made (a short download-and-check sketch follows the list):
- Through issuing an aks-engine get-versions command, we noticed that cluster versions up to 1.17 were supported by AKS engine v0.43.0. However, the Azure Monitor for containers solution only supported versions 1.15 and earlier, so we left the orchestratorRelease key in the kubernetes-azurestack.json file at its default value of 1.14.
- Inserted https://portal.rack04.azs.mhclabs.com as the portalURL.
- Used labcluster as the dnsPrefix. This was the DNS name for the actual cluster. We confirmed that this hostname was unique in the lab environment.
- Left the adminUsername at azureuser. Then, we copied and pasted the public key from PuTTYgen into the keyData field.
- Copied the modified file back to the home directory of the client VM.
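For reference, here is how the example file can be fetched and the edited fields spot-checked before copying it back. The raw GitHub path reflects the repository layout we used and may have changed since.
# Fetch the example API model from the AKS engine repository
# (path reflects the repository layout at the time of testing)
curl -o kubernetes-azurestack.json \
  https://raw.githubusercontent.com/Azure/aks-engine/master/examples/azure-stack/kubernetes-azurestack.json
# Spot-check the customized fields
grep -E '"portalURL"|"dnsPrefix"|"adminUsername"|"orchestratorRelease"' kubernetes-azurestack.json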
While still in the home directory of the client VM, we issued the command to deploy the K8s cluster. Here is the command that was executed. Remember that the client ID and client secret are associated with the SPN, and the subscription ID is that of the Azure Stack Hub user subscription.
aks-engine deploy \
--azure-env AzureStackCloud \
--location Rack04 \
--resource-group demoK8scluster-rg \
--api-model ./kubernetes-azurestack.json \
--output-directory demoK8scluster-rg \
--client-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
--client-secret xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
--subscription-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Once the deployment was complete and verified by deploying mysql with Helm, we copied the apimodel.json file, found in the home directory of the client VM under a subdirectory named after the cluster's resource group (in this case, demoK8scluster-rg), to a safe location outside of the Azure Stack Hub scale unit. This file was used as input for all other AKS engine operations. The generated apimodel.json contains the service principal, secret, and SSH public key used in the input API model, along with all the other metadata AKS engine needs to perform its other operations. If it is lost, AKS engine won't be able to configure the cluster down the road.
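As an illustration, copying the file off the client VM from any machine with scp available (WinSCP can do the same through its GUI) could be as simple as the following; the client VM public IP address and destination path are placeholders.
# Back up the generated API model to a location outside the scale unit
# (client VM public IP and destination path are placeholders)
scp azureuser@<client-vm-public-ip>:demoK8scluster-rg/apimodel.json <backup-location>/apimodel.json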
Onboarding the new K8s cluster to Azure Monitor for containers
Before introducing our first microservices-based application to the K8s cluster, we wanted to onboard the cluster to Azure Monitor for containers. Azure Monitor for containers provides a rich monitoring experience not only for AKS in public Azure but also for K8s clusters deployed on Azure Stack Hub with AKS engine. We wanted to see what resources were consumed by the Kubernetes system functions alone before deploying any applications. The steps in this section were carried out on one of the K8s primary nodes over an SSH connection.
The prerequisites were fairly straightforward, but we did make a couple of observations while stepping through them:
- We decided to perform the onboarding using the Helm chart. For this option, the latest version of the Helm client was required. As long as we performed the steps under Verify your cluster in the documentation (a minimal check along those lines appears after this list), we did not encounter any issues.
- Since we were only running a single cluster in our environment, we did not have to do anything to configure kubectl or the Helm client to use the K8s cluster context.
- We did not have to make any changes in the environment to allow the various ports to communicate properly within the cluster or externally with Public Azure.
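The exact steps are in the Verify your cluster section of the documentation; a minimal check looked roughly like this (Helm 2 was current at the time, so helm version also reports the Tiller version):
# Confirm kubectl is pointed at the intended cluster
kubectl cluster-info
kubectl get nodes
# Confirm the Helm client (and, for Helm 2, the Tiller server) version
helm version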
Note: Microsoft also supports enabling monitoring on a K8s cluster deployed on Azure Stack Hub with AKS engine by using the API definition as an alternative to the Helm chart. The API definition option does not have a dependency on Tiller or any other component, and monitoring can be enabled during cluster creation itself. The only manual step for this option is adding the Container Insights solution to the Log Analytics workspace specified in the API definition.
We already had a Log Analytics workspace at our disposal for this testing. We did make one mistake during the onboarding preparations, though: we intended to manually add the Container Insights solution to the workspace but installed the legacy Container Monitoring solution instead. To be on the safe side, we ran the onboarding_azuremonitor_for_containers.ps1 PowerShell script and supplied the values for our resources as parameters. The script skipped the creation of the workspace, since we already had one, and installed the Container Insights solution via the ARM template in GitHub identified in the script.
At this point, we could simply issue the Helm commands under the Install the chart section of the documentation. Besides inserting the workspace ID and workspace key, we also replaced the --name parameter value with azuremonitor-containers (a representative invocation appears below). We did not observe anything out of the ordinary during the onboarding process. Once complete, we had to wait about 10 minutes before our cluster appeared in the public Azure portal. We also had to remember to select "Azure Stack (Preview)" from the Environment drop-down menu in the Container Insights section of Azure Monitor for the cluster to show up.
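For reference, the chart installation resembled the following Helm 2 invocation. The chart repository location and value names are illustrative of the documented install at the time and may have changed since; the workspace ID and key are placeholders.
# Helm 2 style install of the azuremonitor-containers chart
# (chart repository and value names may have changed since the time of testing;
#  workspace ID and key are placeholders)
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
helm install --name azuremonitor-containers \
  --set omsagent.secret.wsid=<workspace-id>,omsagent.secret.key=<workspace-key>,omsagent.env.clusterName=labcluster \
  incubator/azuremonitor-containers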
We hope this blog post proves to be insightful to anyone deploying a K8s cluster on Azure Stack Hub using AKS engine. We also trust that the information provided will assist in the onboarding of that new cluster to Azure Monitor for containers. In Part 3, we will step through our experience deploying a self-hosted Docker Container Registry into Azure Stack Hub.