A virtual machine (VM) is a computing environment created entirely in software that runs programs just like a physical machine. It works by creating a "virtual" version of a computer, with dedicated amounts of CPU, memory, and storage borrowed from a physical server. The VM is partitioned from the rest of the system, so software inside it cannot interfere with the host computer's operating system. The guest operating system runs on this virtual hardware and provides an isolated environment for running applications independently.
Containers are a more granular, lower-overhead approach to virtualization than virtual machines. Each container image packages the application code together with all the dependencies, system tools, runtime, system libraries, and settings needed to run the containerized application. Images run on a container engine that sits atop the host operating system, becoming containers at runtime. Containers are mainly used for cloud-native, distributed applications and for packaging legacy applications for increased portability and simpler deployment.
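The packaging idea can be sketched in a few lines of Python. This is purely illustrative: real images are OCI layer archives and real engines are Docker, containerd, and the like; the `Image` and `Engine` names here are invented for the sketch.

```python
# Illustrative model only: an image bundles code, runtime, libraries, and
# settings; a container engine on the host OS turns it into a running
# container. Not a real container runtime.
from dataclasses import dataclass


@dataclass(frozen=True)
class Image:
    app_code: str        # application entry point
    runtime: str         # language runtime baked into the image
    libraries: tuple     # pinned dependencies / system libraries
    settings: dict       # configuration shipped with the image


class Engine:
    """Stand-in for a container engine sitting atop the host OS."""

    def run(self, image: Image) -> dict:
        # At runtime the image becomes a container: an isolated process
        # with its own view of dependencies and configuration.
        return {"app": image.app_code, "status": "running"}


web = Image("app.py", "python:3.11", ("flask==3.0",), {"PORT": "8080"})
container = Engine().run(web)
print(container["status"])  # running
```

Because everything the application needs travels inside the image, the same container runs identically on a laptop, a server, or a cloud VM.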
Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) provides an effective platform for customers looking to modernize their applications on-premises while keeping consistency with Azure. Modern applications are increasingly built as containers, with microservices packaged together with their dependencies and configurations. Kubernetes, the core component of AKS-HCI, is open-source software for deploying and managing these containers at scale. As compute utilization grows, applications span multiple containers deployed across multiple servers, and operating them at scale becomes more complex. To manage this complexity, Kubernetes provides an open-source API that determines how and where containers run.
Kubernetes orchestrates a cluster of VMs and schedules containers to run on those VMs based on each VM's available compute resources and each container's resource requirements. Containers are grouped into pods, the basic operational unit of Kubernetes, and pods scale with the needs of the application. Kubernetes also manages service discovery, load balancing, and resource allocation, and scales workloads based on utilization. It monitors the health of each resource and enables applications to self-heal by automatically restarting or replicating containers.
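The placement idea described above can be sketched as a toy first-fit scheduler. This is a simplification for illustration, not the real Kubernetes scheduler (which weighs many more factors, such as affinity, taints, and priorities); all names and resource figures below are invented.

```python
# Toy sketch: place pods onto VMs that have enough free CPU and memory.
from dataclasses import dataclass, field


@dataclass
class Pod:
    name: str
    cpu: float    # requested CPU cores
    memory: int   # requested memory in MiB


@dataclass
class VM:
    name: str
    cpu: float
    memory: int
    pods: list = field(default_factory=list)

    def can_fit(self, pod: Pod) -> bool:
        used_cpu = sum(p.cpu for p in self.pods)
        used_mem = sum(p.memory for p in self.pods)
        return used_cpu + pod.cpu <= self.cpu and used_mem + pod.memory <= self.memory


def schedule(pods: list, vms: list) -> dict:
    """First-fit placement: assign each pod to the first VM with capacity."""
    placements = {}
    for pod in pods:
        for vm in vms:
            if vm.can_fit(pod):
                vm.pods.append(pod)
                placements[pod.name] = vm.name
                break
        else:
            placements[pod.name] = None  # unschedulable: no VM has room
    return placements


vms = [VM("vm-1", cpu=2, memory=4096), VM("vm-2", cpu=4, memory=8192)]
pods = [Pod("web", 1.5, 2048), Pod("api", 1.0, 2048), Pod("db", 2.0, 4096)]
print(schedule(pods, vms))  # {'web': 'vm-1', 'api': 'vm-2', 'db': 'vm-2'}
```

Self-healing follows the same bookkeeping in reverse: when a container's health check fails, the orchestrator reschedules a replacement through exactly this kind of placement logic.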
Setting up and maintaining Kubernetes can be complex. AKS-HCI helps simplify setting up Kubernetes on-premises, making it faster to get started hosting Linux and Windows containers.
Windows Admin Center and PowerShell are two options for managing the life cycle of Azure Kubernetes Service clusters on Azure Stack HCI.
This figure shows the core infrastructure components of AKS-HCI, which are divided into two main units based on their operating function:
The management cluster is created automatically when the Azure Kubernetes Service cluster is created on Azure Stack HCI. It is responsible for provisioning and managing the workload clusters where workloads run.
A workload cluster is a highly available deployment of Kubernetes that uses Linux VMs for the Kubernetes control plane components and runs Linux and Windows Server-based containers. A single management cluster can manage multiple workload clusters.
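The one-to-many relationship between the two cluster types can be sketched as follows. This is a conceptual model only, assuming invented names (`ManagementCluster`, `provision`, the cluster names); it is not the AKS-HCI API, which is driven through Windows Admin Center or PowerShell.

```python
# Conceptual sketch: one management cluster provisions and tracks
# multiple workload clusters. Names and numbers are illustrative.
from dataclasses import dataclass, field


@dataclass
class WorkloadCluster:
    name: str
    control_plane_nodes: int  # Linux VMs hosting the control plane
    node_pools: dict          # e.g. {"linux": 3, "windows": 2}


@dataclass
class ManagementCluster:
    workload_clusters: dict = field(default_factory=dict)

    def provision(self, name: str, control_plane_nodes: int = 3, **node_pools) -> WorkloadCluster:
        """Create a workload cluster and keep managing it afterwards."""
        cluster = WorkloadCluster(name, control_plane_nodes, node_pools)
        self.workload_clusters[name] = cluster
        return cluster


mgmt = ManagementCluster()
mgmt.provision("retail-app", linux=3, windows=2)  # mixed Linux/Windows pools
mgmt.provision("batch-jobs", linux=5)             # Linux-only pool
print(sorted(mgmt.workload_clusters))  # ['batch-jobs', 'retail-app']
```

The key point the sketch captures is the direction of control: workloads never run on the management cluster itself; it only provisions and supervises the clusters that run them.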