Workload compared with use case
The term “use case” is often used interchangeably with the term “workload.” This guide makes a clear distinction between these two terms. Here, a use case represents a situational environment, whereas a workload represents an instance of use within a situational environment.
For example, a truck that is designed to carry a large object is analogous to a device that is designed to enable handling a situational environment. This truck might be used to pick up a carton of milk from a store, or it might be used to convey a large machine part from point A to point B. Using a large truck (the situational environment tool) might be considered inappropriate for the task of picking up a carton of milk; however, it might be highly appropriate for the transportation of a large machine part—the design of the truck and its use are a good match for this purpose.
Kubernetes containers are component tools that are generally used within a situational environment (use case) so that a particular application (workload) can be handled efficiently and cost-effectively. A use case defines platform environment needs, while a workload is a task that has dependencies that must be provisioned so that the use case has the capacity and ability to accommodate the workload.
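As a minimal illustration of this distinction (the names, image reference, and resource figures here are hypothetical), a Kubernetes Deployment manifest declares a workload along with the resource dependencies that must be provisioned for it; the cluster to which it is submitted is the situational environment (use case) that must have the capacity and ability to accommodate it:

```yaml
# Hypothetical workload declaration: the Deployment is the workload;
# the cluster that schedules it is the use-case environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app                # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:1.0   # hypothetical image
        resources:
          requests:                # dependencies the platform must provision
            cpu: 250m
            memory: 512Mi
          limits:
            cpu: 500m
            memory: 1Gi
```

The `resources.requests` stanza is where the workload states its dependencies; the scheduler then decides whether the use-case environment can accommodate them.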
Containers are vehicles that carry and enable the development and execution of cloud-native software. Generally, they employ the new “declarative” software design model as part of a distributed-computation platform environment. Here are some key considerations in answer to the frequently asked question, “How can I move my workload to containers?”:
- Ensure that the hardware platform is well-designed for your container ecosystem needs—Select the right hardware infrastructure to enable all use-case workloads to be run at the right level of return on investment. The economics of platform infrastructure design are intensified in the container ecosystem because of the large number of active infrastructure software components, their resource overheads, and the trade-off between node cost (CapEx) and operating cost (OpEx).
- “Lift and shift” existing applications into a cloud-native container environment—You can migrate existing applications into a more cloud-native container environment. This migration delivers some of the natural benefits of operating-system virtualization, but it does not confer the full benefits of a declaratively designed, modular, container-based software architecture.
- Refactor older software code—Refactoring requires much more work than lift-and-shift migration. Refactoring provides access to the full benefits of a container ecosystem but at a higher cost.
- Develop new cloud-native applications—Like refactoring, this approach gives your organization the full benefits of a container ecosystem.
- Build distributed, location-independent, microservices-based cloud-native applications—Distributed cloud-native microservices are more easily isolated, deployed, and scaled with the use of discrete container instances.
- Adopt new tools to support continuous integration and deployment (CI/CD)—DevOps teams in particular appreciate the advantages of automated build, test, and deployment operations, which allow a container image to be built once and then promoted unchanged through the entire concept-to-production lifecycle.
- Simplify and automate repetitive tasks and activities—Automated orchestration of frequently executed operations reduces management overhead and increases the agility and time-value of the DevOps process.
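The infrastructure-economics trade-off mentioned in the first consideration above can be sketched with simple arithmetic: reserve capacity for system components, see how many workload pods fit on a node, and divide the node cost by that count. All figures below are assumed example values, not vendor pricing or measured overheads.

```python
# Illustrative node-sizing cost model. Every number here is an
# assumption for the sake of the example, not real pricing data.

def pods_per_node(node_cpu_millicores, node_mem_mib,
                  pod_cpu_millicores, pod_mem_mib,
                  system_cpu_millicores=500, system_mem_mib=1024):
    """Pods that fit on one node after reserving capacity for
    system components (kubelet, runtime, monitoring agents)."""
    usable_cpu = node_cpu_millicores - system_cpu_millicores
    usable_mem = node_mem_mib - system_mem_mib
    return min(usable_cpu // pod_cpu_millicores,
               usable_mem // pod_mem_mib)

def monthly_cost_per_pod(node_monthly_cost, pods):
    """Amortized node cost per pod."""
    return node_monthly_cost / pods

# Example: a 16-core / 64 GiB node running pods that request
# 500m CPU and 1 GiB of memory each.
fit = pods_per_node(16000, 64 * 1024, 500, 1024)
print(fit)                                 # CPU is the binding constraint
print(monthly_cost_per_pod(600.0, fit))    # assumed $600/month node cost
```

Running the same model across candidate node shapes shows how per-pod cost shifts as the fixed system overhead is amortized over more or fewer pods, which is the CapEx-versus-OpEx tension the consideration describes.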
The following use-case examples are minimal and are intended only to show some of the decisions that you might face with your unique container ecosystem.