What are Red Hat OpenShift Containers?
Fri, 13 Sep 2024 14:14:56 -0000
Introduction
Welcome back to our discussion of the Dell APEX Cloud Platform for Red Hat OpenShift. Today, we'll take a closer look at containers themselves. I'll cover container makeup, application benefits, management, and continuity in greater detail. If you haven't read the first article in this series, Easing Application and Infrastructure Management with the Dell APEX Cloud Platform for Red Hat OpenShift, I encourage you to give it a read, though it isn't necessary for understanding the topic here. Let's jump in. Some of the examples I use in this blog are fairly specific, but don't worry: you won't need a deep understanding of them to see how helpful containerization can be.
What is a Container?
A container image is a sandboxed grouping of an application's code, its software dependencies, and metadata, called a manifest, that describes the image. For a JavaScript developer, an image might hold an application developed in Node.js along with the necessary libraries. This sandboxed enclosure encapsulates everything the application needs to function. When the image is executed to meet workload demands, the running instance is called a container. Containers are built to conform to specific standards, such as those defined by the Open Container Initiative (OCI) and Docker.
There is another encapsulation level to be aware of when running containers. Kubernetes organizes containers into pods. Generally speaking, a pod has a primary application container and secondary containers called sidecars. A sidecar augments the main application container with additional capabilities, such as logging, application monitoring, security, or data synchronization. The best practice is to run one process per container, which makes tasks like debugging easier. When an application needs to scale up or down, pods are the units that are instantiated or terminated to match the growing or shrinking demand. Pods also enable application containers and sidecars to share resources and networking, simplifying interoperability.
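As a rough sketch of this structure, here is what a pod with a main application container and a logging sidecar might look like in Kubernetes YAML. The names, images, and paths are hypothetical, chosen only for illustration:

```yaml
# Hypothetical pod: a Node.js application container plus a logging
# sidecar. Both containers share the pod's network namespace and a
# common volume, which is how sidecars observe the main container.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: app                       # primary application container
      image: registry.example.com/web-app:1.0
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: logs
          mountPath: /var/log/app     # app writes its logs here
    - name: log-sidecar               # sidecar: ships the app's logs
      image: registry.example.com/log-shipper:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true              # sidecar only reads the logs
  volumes:
    - name: logs
      emptyDir: {}                    # shared scratch volume for the pod
```

Note how each container runs a single process, while the shared `emptyDir` volume and shared networking give them the simple interoperability described above.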
Improving Applications with Containers
Now that we have a better idea of what containers are and how they generally operate, let’s focus on how businesses can benefit from their use. When an application runs on bare metal, it’s limited to the compute, memory, and storage resources available to an individual server. Virtualization allowed the creation of clusters with virtual machines (VMs) that could move between hosts as needed and share storage through the use of vSAN or an external storage array. Applications then run within these VMs and become more resilient. The drawback to virtualization is that each VM runs its own operating system, increasing overhead and impacting compute, memory, and storage resources.
Containerization allows applications to become truly cloud native. Containers don't need to emulate hardware or run a full guest operating system; they share the host's kernel and are allocated a specific amount of CPU and memory resources. As workloads change, container pods are created or terminated to match. Providing additional resources to a bare metal deployment requires shutting the server down and installing the appropriate hardware upgrades. A virtualized environment requires modifying the VM settings to increase or decrease resources, as well as restarting the VM. This key difference makes containers more agile than bare metal or virtualized deployments.
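The CPU and memory allocation mentioned above is declared directly in the container's spec. A minimal sketch, with purely illustrative values:

```yaml
# Illustrative resource allocation for a single container.
# Requests are what the scheduler guarantees; limits are the hard
# ceiling enforced at runtime. No reboot is needed to change these.
resources:
  requests:
    cpu: "250m"        # 250 millicores, i.e. a quarter of one CPU core
    memory: "128Mi"
  limits:
    cpu: "500m"
    memory: "256Mi"
```

Because these values live in a declarative spec rather than in physical hardware or VM settings, adjusting them is a configuration change instead of a maintenance event.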
Using Red Hat OpenShift to Manage Applications
At this point, you might think, "This all sounds great, but having so many different containers, pods, and other components sounds difficult to manage!" Red Hat built OpenShift on the open-source Kubernetes framework to make that management much more straightforward. Deploying an application is one of the most important tasks an administrator completes, and it is how a business starts using an OpenShift cluster in production. Applications can be deployed through the web console, through the command-line interface, through a CI/CD pipeline, or even with plays from an Ansible Playbook. I'll only cover the web console here, but it's worth knowing the other methods exist.
Deploying an application from the web console is straightforward and supports several methods. Once the administrator logs into the web console, the next step is to enter the Projects page. Here, we'll find a Create Project button, which will help deploy our application. The project needs a name. Once a name is applied, we switch the perspective from Administrator to Developer using the dropdown menu in the top left corner. This change reveals a new pane called Topology, where we select the source for our application: a Git repository, a container image, the catalog, a Dockerfile, a YAML file, or a database catalog. Once the desired source is chosen, a wizard guides us through the application deployment step by step. When the wizard completes, one or more new pods are created to run the application.
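Behind the scenes, a wizard like this produces standard Kubernetes objects such as a Deployment. A minimal sketch of what might be generated, with hypothetical names and image, looks like:

```yaml
# Hypothetical Deployment resource behind a console-driven deployment.
# The Deployment manages the pods; scaling and updates happen here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: my-project        # the project created in the console
spec:
  replicas: 1                  # how many pods to run
  selector:
    matchLabels:
      app: web-app
  template:                    # pod template stamped out per replica
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0
          ports:
            - containerPort: 8080
```

The same YAML could be applied from the command line, which is why the console, CLI, and pipeline methods all converge on the same result.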
Scaling applications up and down is even easier. Once the application is deployed, it is selectable within the web console. From the Developer perspective, the Topology pane shows the deployed applications. To scale an application up or down, click the application to select it, and then use the up and down arrow buttons in the newly displayed context menu on the right. These arrow buttons create additional pods or terminate excess ones.
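Conceptually, each click of an arrow adjusts the replica count on the underlying Deployment. The same change expressed in YAML (hypothetical object, illustrative numbers):

```yaml
# Raising replicas from 2 to 5 instantiates three more pods;
# lowering it terminates the excess pods.
spec:
  replicas: 5
```

This is the declarative model at work: the administrator states the desired pod count, and the cluster converges on it.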
Application updates are another necessary management task. These updates are done from the same Topology pane. This time, the application is right-clicked. The pop-up menu has several editing options. Simply select the Edit <application name> option to view the workflow used to create the application. There we find the resource used to build the container, such as the Dockerfile or container image. Entering the new version here kicks off a job that builds new containers and deploys updated pods.
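In Deployment terms, entering a new version amounts to changing the image reference in the pod template, which triggers a rollout of updated pods. A sketch with a hypothetical image tag:

```yaml
# Changing the image tag in the pod template triggers a rolling
# update: new pods come up on the new version before old ones exit.
spec:
  template:
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.1   # was :1.0
```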
The final management item I'll cover is the deletion of applications. Deletion is a simple and straightforward process. Just like the other processes I've described, we'll be working from the Topology pane. Clicking the application brings up the same context menu used to increase or decrease the pod count. Inside this context menu, select the Resource option to change the view. A new Actions menu appears in the top right corner, and the option to delete the application is found there.
Application Continuity through the Cluster Lifecycle
One of the most beneficial enhancements virtualization provided over bare metal deployments is the ability to move VMs to other cluster nodes when the underlying hardware needs service or the cluster needs an upgrade. This allowed maintenance tasks to be completed without disrupting business operations. OpenShift fulfills this need in a very similar way: instead of migrating VMs from one node to another, container pods are migrated. Because the underlying files are smaller, these migrations are generally faster. Once the workloads are migrated to pods on other nodes in the cluster, the node needing maintenance can be restarted to install updates or shut down for hardware service. This process is repeated node by node to upgrade an entire cluster.
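In Kubernetes terms, taking a node out of service starts by marking it unschedulable (cordoning it), which is a single field on the Node object; draining then evicts its remaining pods so they reschedule elsewhere. A sketch with a hypothetical node name:

```yaml
# Cordoning a node: existing pods keep running, but no new pods are
# scheduled here. A subsequent drain evicts the remaining pods, which
# the cluster recreates on other nodes before maintenance begins.
apiVersion: v1
kind: Node
metadata:
  name: worker-01
spec:
  unschedulable: true
```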
Performing cluster upgrades on Dell APEX Cloud Platform for Red Hat OpenShift nodes benefits from this process and enhances it by providing software upgrades that conform to the Continuously Validated States that Dell engineering develops, updating the cluster stack from the hardware level all the way up to the OpenShift orchestration layer. Let's take a moment to consider the upgrade process for a cluster. Updated software versions for each hardware component must be acquired and, ideally, tested in a lab environment before being pushed into production. The Dell APEX Cloud Platform for Red Hat OpenShift solves both of these challenges by providing update bundles that update the cluster with software assistance. Dell engineering teams test the bundles, reducing the labor for customer IT teams.
Clusters may need to scale up as workloads grow. Dell APEX Cloud Platform for Red Hat OpenShift users can grow clusters easily with software automation built into the APEX Cloud Platform Foundation software. Expansion tooling engineered into the web interface scans for and identifies compatible nodes that can be added to the cluster. A wizard is used to configure the necessary node settings, and then the node is added to the cluster. Customers can easily obtain additional nodes when needed because Dell performs all of the hardware compatibility work and testing. With a DIY solution, an identical node must be built, and if matching components can't be found, any deviation must be accounted for at update time. These challenges aren't present when APEX Cloud Platform is chosen for OpenShift needs.
Both cluster upgrade and expansion operations are completed without impacting workloads. Businesses can continue to use cluster applications during upgrades as APEX Cloud Platform for Red Hat OpenShift sequentially updates cluster members and rebalances container pods until the entire cluster completes the upgrade cycle. Similarly, clusters continue serving applications when infrastructure teams need to expand the cluster: APEX Cloud Platform identifies nodes eligible for expansion, which are then configured and added to the cluster.
In the first entry in this series, I mentioned a few operators, one of the most critical of which is the OpenShift Advanced Cluster Management operator. This piece of software offers additional cluster management capabilities. As container workloads scale to demand multiple clusters, new management challenges arise, and OpenShift Advanced Cluster Management solves them. This operator can be deployed on day one with an APEX Cloud Platform cluster.
Conclusion
Containers are the best way to deploy truly cloud-native applications. This is thanks to their design that encapsulates the application code and all the necessary dependencies to make it function. Container pods are easily migrated to other cluster nodes, providing all of the workload resiliency businesses have come to enjoy from virtualized deployments while simultaneously reducing the resource overhead by reducing the number of running operating systems. Applications can be scaled up or down with a single click of an arrow button to ensure resource allocation is always best suited for the workload with minimal waste. I hope this helped you understand containers a bit better. In the next article in this series, we’ll be covering OpenShift Serverless, which helps developers focus on their code without the need to worry about the underlying infrastructure.
Additional Resources
Simplify Cloud Application Deployments with Red Hat OpenShift (3 of 4)
Run Containers and Virtual Machines on the Same Cluster with Red Hat OpenShift (4 of 4)
APEX Cloud Platform for Red Hat OpenShift Homepage
Author: Dylan Jackson, Engineering Technologist