Introduction
Financial service providers that are moving away from traditional monolithic applications are redefining the services that they offer their customers. Some report the need to develop and deploy stateful data services. This need can be a challenge in the cloud-native space, where most application containers are built on the presumption that container storage is ephemeral and the general drive is toward stateless data services.
Large financial service providers, as well as financial transaction trading houses, tend to deploy multiple Kubernetes clusters to limit the risk of losing service through a cluster outage. These providers often note that multiple Kubernetes clusters enable load shedding and load distribution for greater service integrity. The following information shows that OpenShift Container Platform provides more built-in functionality in these areas than many practitioners realize.
Addressing concerns
Key concerns that financial organizations have raised with Dell EMC include:
- Security and regulatory compliance—Red Hat maintains a web page that addresses security and compliance. For more information, see Red Hat’s Container Security Guide.
- Potential noisy-neighbor problem—Container deployment with Kubernetes and OpenShift minimizes the noisy-neighbor risk. Kubernetes, and OpenShift in particular, deploy containers so that each application container environment runs within its own end-to-end isolated network. This method uses tagged VLANs or runs over GRE tunnels.
- Ability to host mixed-transaction workloads—In OpenShift 4.2, application container workloads are deployed within projects. Each project can be assigned one or more administrators, and the project administrator defines administrative roles with RBAC limits placed on the functions of each role. Projects are typically isolated from each other and are unaware of the existence of neighboring projects unless the network administrator permits otherwise; a network-isolation sketch follows this list. For more information about multitenancy configuration, see Configuring network isolation using OpenShift SDN in the OpenShift 4.2 documentation.
- Reliable scale-out and scale-back—Reliable configuration scale-out and scale-back are established capabilities of Kubernetes clusters in general and of OpenShift in particular; the autoscaling sketch after this list illustrates one common mechanism.
- Kubernetes cluster federation and services management support—The OpenShift blog article Kubernetes Guideposts for 2019 provides useful insights into Red Hat and wider Kubernetes community work that addresses cluster federation. Worker nodes of similar hardware configuration can be grouped into their own OpenShift MachineSet, and containers can be configured so that they are deployed with affinity to the nodes that the MachineSet provides (see the node-placement sketch after this list). This technique can be combined with RBAC limits on those nodes so that an OpenShift tenant is restricted to a subset of worker nodes. This option provides considerable flexibility in cluster design and can help avoid multicluster federation requirements.
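The following is a minimal network-isolation sketch. It assumes that the cluster network plug-in supports Kubernetes NetworkPolicy objects and uses a hypothetical project name; applied with `oc apply -f`, it limits ingress for every pod in the project to traffic from pods in that same project.

```yaml
# Illustrative sketch: limits ingress for all pods in the hypothetical
# "trading-app" project to traffic that originates from pods in the same project.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
  namespace: trading-app
spec:
  podSelector: {}          # select every pod in the project
  ingress:
    - from:
        - podSelector: {}  # permit traffic only from pods in this project
```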
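The following autoscaling sketch illustrates scale-out and scale-back at the workload level. It assumes a hypothetical Deployment named pricing-engine in the same example project and uses the core autoscaling/v1 HorizontalPodAutoscaler API.

```yaml
# Illustrative sketch: keeps a hypothetical "pricing-engine" Deployment between
# 3 and 12 replicas, scaling on average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pricing-engine
  namespace: trading-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pricing-engine
  minReplicas: 3
  maxReplicas: 12
  targetCPUUtilizationPercentage: 65
```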
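The following node-placement sketch shows one way to express affinity to a particular group of worker nodes. It assumes that the nodes created for that group carry a distinguishing label; the label key and value, the image reference, and all object names are placeholders.

```yaml
# Illustrative sketch: schedules a hypothetical workload only onto worker nodes
# that carry a placeholder label (for example, one applied through a dedicated
# MachineSet). Label key/value, image, and object names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: risk-calc
  namespace: trading-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: risk-calc
  template:
    metadata:
      labels:
        app: risk-calc
    spec:
      nodeSelector:
        node-pool.example.com/workload: risk    # placeholder node label
      containers:
        - name: risk-calc
          image: image-registry.example.com/finserv/risk-calc:latest  # placeholder
```

Node affinity (spec.affinity.nodeAffinity) can express the same constraint with more nuance than nodeSelector, such as preferred rather than required placement.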
OpenShift Container Platform 4.2 includes the release of OpenShift Service Mesh. With OpenShift Service Mesh, you can connect, secure, and monitor microservices in your OpenShift Container Platform environment.
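As a brief sketch of how projects join a mesh: after the Service Mesh operator has created a control plane (commonly in the istio-system project), member projects are enrolled through a ServiceMeshMemberRoll resource. The project names below are placeholders, and the control plane namespace may differ in a given deployment.

```yaml
# Illustrative sketch: enrolls two placeholder projects in an existing Service
# Mesh control plane. The member roll is named "default" and lives in the
# control plane project (istio-system here).
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system
spec:
  members:
    - trading-app
    - settlement-app
```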
The preferred method for provisioning persistent storage is the Container Storage Interface (CSI). Dell EMC will provide comprehensive CSI support for all current Dell EMC storage products, as shown in Table 5.
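The following persistent-storage sketch shows the general CSI pattern: a StorageClass that names a CSI provisioner, and a PersistentVolumeClaim that requests storage from it. The provisioner string, parameters, and object names are placeholders; the actual values depend on the specific Dell EMC CSI driver that is installed.

```yaml
# Illustrative sketch: a StorageClass that names a CSI provisioner, and a PVC
# that requests a volume from it. The provisioner string is a placeholder; the
# real value depends on the CSI driver in use.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: block-storage
provisioner: csi.example.dellemc.com      # placeholder CSI driver name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
  namespace: trading-app
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: block-storage
  resources:
    requests:
      storage: 100Gi
```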
- Monitoring of application container and ecosystem operation—OpenShift Container Platform 4.2 automatically deploys the Prometheus monitoring application. Prometheus is configured as part of the initial deployment and thereafter updates itself automatically; a monitoring configuration sketch appears after Table 12. For more information, see About cluster monitoring in the OpenShift 4.2 documentation.
- Usage metering and accounting—Metering is available in OpenShift Container Platform 4.2. For more information, see About metering in the OpenShift 4.2 documentation. Metering can be managed through custom resource definitions (CRDs), as described in the following table (a usage sketch follows the table):
Table 12. Usage metering

| Custom resource definition | Description |
|----------------------------|-------------|
| MeteringConfig | Configures the metering stack. |
| Reports | Configures the query method, frequency, and target storage location. |
| ReportQueries | Specifies SQL queries against data contained within ReportDataSources. |
| ReportDataSources | Controls the data available to ReportQueries and Reports. Allows configuring access to different databases for use within metering. |
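As a sketch of how these CRDs are used, the following Report requests an hourly roll-up of namespace CPU requests using one of the queries that ships with the metering operator. The API version, query name, and field names are assumptions based on the 4.2-era metering operator and can differ between releases; verify the current schema with `oc explain report.spec` before use.

```yaml
# Illustrative sketch only: an hourly report of namespace CPU requests. The
# API version, query name, and field names are assumptions for the 4.2-era
# metering operator and may differ between releases.
apiVersion: metering.openshift.io/v1alpha1
kind: Report
metadata:
  name: namespace-cpu-request-hourly
  namespace: openshift-metering
spec:
  generationQuery: "namespace-cpu-request"
  reportingStart: "2019-01-01T00:00:00Z"
  schedule:
    period: "hourly"
```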
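For the cluster monitoring stack described earlier, the default Prometheus deployment can be customized through a ConfigMap named cluster-monitoring-config in the openshift-monitoring namespace. The following sketch gives Prometheus persistent storage; the storage class name and capacity are placeholders.

```yaml
# Illustrative sketch: gives the built-in Prometheus instances persistent
# storage. The storage class name and capacity are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: block-storage   # placeholder
          resources:
            requests:
              storage: 40Gi
```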
Typical cluster resource requirements
Based on diverse field data, a typical financial services Kubernetes cluster has 10 to 20 worker nodes, 200 to 650 CPU cores, and 1.2 to 7 TB of RAM. Average CPU core utilization seldom exceeds 65 percent, which keeps adequate CPU capacity in reserve to handle scale-out demands. Ephemeral storage requirements typically total up to 1.5 TB across the whole cluster; however, the latency of ephemeral storage significantly affects the application container user experience.