
Oracle Database Solutions on Docker Containers and Kubernetes
Fri, 09 Oct 2020 18:58:32 -0000
The Opportunity
A container is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. A container isolates software from its environment and ensures that it works uniformly despite any differences between development and staging environments. Containers share the machine’s operating system kernel and do not require an operating system for each application, driving higher server efficiencies and reducing server and licensing costs.
The traditional build process for database application development is complex, time-intensive, and difficult to schedule. With containers and the right supporting tools, that build process becomes a self-service, on-demand experience that enables developers to rapidly deploy applications. In the remaining sections of this article, we describe how to get an Oracle database container running in a matter of minutes.
Oracle has a long-standing commitment to supporting the developer communities working in containerized environments. At the DockerCon US event in April 2017, Oracle announced that its Oracle 12c database would be available alongside other Oracle products on Docker Store, the standard registry for DevOps practitioners. Developers have pulled over four billion images from the Docker Store and are increasingly turning to it as the canonical source for high-quality curated content. In the present-day database world, customers are increasingly adopting containers with Kubernetes management to build and run a wide variety of applications and services in highly available on-premises environments.
Containerized environments can reliably offer high-performance compute, storage and network capabilities with the necessary configurations. A containerized environment also reduces overhead costs by providing a repeatable process for application deployment across build, test, and production systems. To enable the deployment and management of containerized applications, organizations use Kubernetes technologies to operate at any scale including production. Kubernetes enables powerful collaboration and workflow management capabilities by deploying containers for cloud-native, distributed applications and microservices. It even allows you to repackage legacy applications for increased portability, more efficient deployment, and improved customer and employee engagement.
Figure 1: Docker containers for reducing development complexity
The Solution
For many companies, container adoption starts with the departments that are focused on software development, where it boosts productivity and shortens time to value. Their journey typically starts with installing, implementing, and using containers for applications that are based on the microservices architecture shown in Figure 2. Developers want to be able to build microservices-based container applications without changing code or infrastructure.
This approach enables portability between data centers and obviates the need to change traditional applications, allowing faster development and deployment cycles. Oracle Docker containers run the microservices, while Kubernetes handles container orchestration. The microservices running within Docker containers can also communicate with the Oracle databases by using messaging services.
Figure 2: Architecture for Oracle Database featuring Docker and Kubernetes
Using orchestration and automation for containerized applications, developers can self-provision an Oracle database, thereby increasing flexibility and productivity while saving substantial time in creating a production copy for development and testing environments. This solution enables development teams to quickly provision isolated applications without the traditional complexities.
Our Dell EMC engineers recently tested and validated a solution for Oracle databases using Docker containers and Kubernetes. The solution combines Oracle Database in containers, Kubernetes, and the Container Storage Interface (CSI) Driver for Dell EMC PowerFlex to show how DevOps teams can transform their development processes.
Dell EMC engineers demonstrated two use cases for this solution. Both use cases feature four Dell EMC PowerEdge R640 servers, which are an integral part of Dell EMC VxFlex Ready Nodes, and the CSI Driver for Dell EMC PowerFlex, all hosted in our Dell EMC labs.
Use Case 1
In Use Case 1, the Dell EMC engineers manually provisioned the container-based development and testing environment shown in Figure 3 as follows:
- Install Docker.
- Activate the Docker Enterprise Edition-License.
- Run the Oracle 12c database within the Docker container.
- Build and run the Oracle 19c database in the Docker container (a sketch of this step appears after this list).
- Import the sample Oracle schemas that are pulled from GitHub into the Oracle 12c and 19c database.
- Install Oracle SQL Developer and query tables from the container to demonstrate that the connection from Oracle SQL Developer to Oracle database functions.
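To give a sense of this step, here is a minimal sketch of running a 19c database container built from Oracle’s official Dockerfiles (github.com/oracle/docker-images); the container name, SID, password, and data path are illustrative, not the exact values used in the lab:
docker run -d --name oracle19c \
  -p 1521:1521 -p 5500:5500 \
  -e ORACLE_SID=ORCLCDB \
  -e ORACLE_PDB=ORCLPDB1 \
  -e ORACLE_PWD=<strong-password> \
  -v /opt/oracle/oradata:/opt/oracle/oradata \
  oracle/database:19.3.0-ee
The -v mount keeps the data files on the host so that the database survives container removal; SQL*Plus or Oracle SQL Developer can then connect through port 1521.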
Figure 3: Use Case 1 - Architecture
The key benefit of our first use case was the time that we saved by using Docker containers instead of the traditional manual installation and configuration method of building a typical Oracle database environment. Use Case 1 planning also demonstrated the importance of selecting the Docker registry location and storage provisioning options that are most appropriate for the requirements of a typical development and testing environment.
Use Case 2
Use Case 2 demonstrates the value of CSI plug-in integration with Kubernetes and Dell EMC PowerFlex storage to automate storage configuration. Kubernetes orchestration with PowerFlex provides a container deployment strategy with persistent storage. It demonstrates the ease, simplicity, and speed of scaling out a development and testing environment from production Oracle databases. In this use case, a developer provisions the Oracle database in containers on the same infrastructure described in Use Case 1, only this time using Kubernetes with the CSI Driver for Dell EMC PowerFlex. Figure 4 depicts the detailed architecture of Use Case 2.
Figure 4. Use Case 2 – Architecture
Use Case 2 demonstrates how Docker, Kubernetes, and the CSI Driver for Dell EMC PowerFlex accelerate the development life cycle for Oracle applications. Kubernetes configured with the CSI Driver for Dell EMC PowerFlex simplified and automated the provisioning and removal of containers with persistent storage. Engineers used YAML configuration files along with the kubectl command to quickly deploy and delete containers and complete pods. Our solution demonstrates that developers can provision Oracle databases in containers without the complexities that are associated with installing the database and provisioning storage.
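For illustration, a persistent volume claim of the kind used here might look like the following sketch, assuming a storage class named powerflex-sc was created during the CSI driver installation (the class, claim name, and size are placeholders):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: oracle-db-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powerflex-sc   # assumed class created with the CSI driver
  resources:
    requests:
      storage: 96Gi                # PowerFlex allocates capacity in 8 GiB increments
Applying and removing the claim is then a single command each: kubectl apply -f pvc-oracle.yaml to provision the volume, and kubectl delete -f pvc-oracle.yaml to tear it down.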
Use Case Observations and Benefits
Kubernetes container orchestration is an essential addition for database developers on a containerized development journey. Automation becomes essential as containerized application deployments expand. In this case, it enabled our developers to bypass the complexities that are associated with plain scripting. Instead, our solution uses open-source Kubernetes to accomplish the developer’s objectives. The CSI plug-in integrates with Kubernetes and exposes the capabilities of the Dell EMC PowerFlex storage system, enabling the developer to:
- Take a snapshot of the Oracle database, including the sample schema that was pulled from the GitHub site.
- Protect the state of the existing Oracle database at any point by using the CSI Driver for Dell EMC PowerFlex to snapshot the persistent storage that Kubernetes provisioned (see the manifest sketch after this list).
- Restore an Oracle 19c database to its pre-deletion state using a snapshot, even after removing the containers and the attached storage.
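As a sketch of what this looks like in practice (the snapshot class and claim names are assumed, and the beta snapshot API shown here matches Kubernetes versions of that era):
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: oracle-db-snap
spec:
  volumeSnapshotClassName: powerflex-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: oracle-db-pvc     # claim backing the database
Restoring is then a matter of creating a new claim whose dataSource references the snapshot:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: oracle-db-restored
spec:
  storageClassName: powerflex-sc
  dataSource:
    name: oracle-db-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 96Gi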
In our second use case, Kubernetes combined with the CSI Driver for Dell EMC PowerFlex simplified and automated the provisioning and removal of containers and storage. We used YAML files along with the kubectl command to deploy and delete the containers and pods. Together, these components automate the hosting of the Oracle database container on top of PowerFlex.
Kubernetes, enhanced with the CSI Driver for Dell EMC PowerFlex, provides the capability to attach and manage PowerFlex storage system volumes for containerized applications. Our developers worked with a familiar Kubernetes interface to modify a copy of the Oracle database schema pulled from the GitHub repository and connect it to the Oracle database container. After modifying the database, the developer protected all progress by using the snapshot feature of the PowerFlex storage system to create a point-in-time copy of the database.
Comparing Use Case 1 to Use Case 2 demonstrated how we can easily shift away from the complexities of scripting and using the command line to implement a self-service model that accelerates container management. The move to a self-service model, which increases developer productivity by removing bottlenecks, becomes increasingly important as the Docker container environment grows.
Summary
The power of containers and automation shows how tasks that traditionally required multiple roles (developers and others working with the storage and database administrators) can be simplified. Kubernetes with the CSI plug-in enables developers and others to do more in less time and with fewer complexities. The time savings means that coding projects can be completed faster, benefiting both the developers and the business-side employees and customers. Overall, the key benefit shown in comparing our two use cases was the transformation from a manually managed container environment to an orchestrated system with more capabilities.
Innovation drives transformation. In the case of Docker containers and Kubernetes, the key benefit is a shift to rapid application deployment services. Oracle and many others have embraced containers and provide images of applications, such as for the Oracle 12c database, that can be deployed in days and instantiated in seconds. Installations and other repetitive tasks are replaced with packaged applications that enable the developer to work quickly in the database. The ease of using Docker and Kubernetes, combined with rapid provisioning of persistent storage, transforms development by removing wait time and enabling the developer to move closer to the speed of thought.
The addition of the Kubernetes orchestration system and the CSI Driver for Dell EMC PowerFlex brings a rich user interface that simplifies provisioning containers and persistent storage. In our testing, we found that Kubernetes plus the CSI Driver for Dell EMC PowerFlex enabled developers to provision containerized applications with persistent storage. This solution features point-and-click simplicity and frees valuable time so that the storage administrator can focus on business-critical tasks.
Related Blog Posts

Running containerized applications on Microsoft Azure's hybrid ecosystem - Introduction
Mon, 17 Aug 2020 18:48:17 -0000
Introduction
A vast array of services and tooling has evolved in support of microservices and container-based application development patterns. One indispensable asset in the technology value stream found in most of these patterns is Kubernetes (K8s). Technology professionals like K8s because it has become the de facto standard for container orchestration. Business leaders like it for its potential to help disrupt their chosen marketplace. However, deploying and maintaining a Kubernetes cluster and its complementary technologies can be a daunting task for the uninitiated.
Enter Microsoft Azure’s portfolio of services, tools, and documented guidance for developing and maintaining containerized applications. Microsoft continues to invest heavily in simplifying this application modernization journey without sacrificing features and functionality. The differentiators of the Microsoft approach are twofold. First, the applications can be hosted wherever the business requirements dictate: public cloud, on-premises, or spanning both. More importantly, there is a single control plane, Azure Resource Manager (ARM), for managing and governing these highly distributed applications.
In this blog series, we share the results of hands-on testing in the Dell Technologies labs with container-related services that span both Public Azure and on-premises with Azure Stack Hub. Azure Stack Hub provides a discrete instance of ARM, which allows us to leverage a consistent control plane even in environments with no connectivity to the Internet. It might be helpful to review articles rationalizing the myriad of announcements made at Microsoft Ignite 2019 about Microsoft’s hybrid approach from industry experts like Kenny Lowe, Thomas Maurer, and Mary Branscombe before delving into the hands-on activities in this blog.
Services available in Public Azure
Azure Kubernetes Service (AKS) is a fully managed platform service hosted in Public Azure. AKS makes it simple to define, deploy, debug, and upgrade even the most complex Kubernetes applications. With AKS, organizations can accelerate past the effort of deploying and maintaining the clusters to leveraging the clusters as target platforms for their CI/CD pipelines. DevOps professionals only need to concern themselves with the management and maintenance of the K8s agent nodes and leave the management of the master nodes to Public Azure.
AKS is just one of Public Azure’s container-related services. Azure Monitor, Azure Active Directory, and Kubernetes role-based access controls (RBAC) provide the critical governance needed to successfully operate AKS. Serverless Kubernetes using Azure Container Instances (ACI) can add compute capacity without any concern about the underlying infrastructure. In fact, ACI can be used to elastically burst from AKS clusters when workload demand spikes. Azure Container Registry (ACR) delivers a fully managed private registry for storing, securing, and replicating container images and artifacts. This is perfect for organizations that do not want to store container images in publicly available registries like Docker Hub.
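As a point of reference, standing up a managed cluster in Public Azure takes only a few CLI calls; a minimal sketch, with illustrative names and region:
az group create --name demo-rg --location eastus
az aks create --resource-group demo-rg --name demo-aks \
  --node-count 3 --enable-addons monitoring --generate-ssh-keys
az aks get-credentials --resource-group demo-rg --name demo-aks
The monitoring add-on wires the cluster into Azure Monitor for containers, and az aks get-credentials merges the cluster context into the local kubeconfig for kubectl.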
Leveraging the hybrid approach
Microsoft is working diligently to deliver the fully managed AKS resource provider to Azure Stack Hub. The first step in this journey is to use AKS engine to bootstrap K8s clusters on Azure Stack Hub. AKS engine provides a command-line tool that helps you create, upgrade, scale, and maintain clusters. Customers interested in running production-grade and fully supported self-managed K8s clusters on Azure Stack Hub will want to use AKS engine for deployment and not the Kubernetes Cluster (preview) marketplace gallery item. This marketplace item is only for demonstration and POC purposes.
AKS engine can also upgrade and scale the K8s cluster it deployed on Azure Stack Hub. However, unlike the fully managed AKS in Public Azure, the master nodes and the agent nodes need to be maintained by the Azure Stack Hub operator. In other words, this is not a fully managed solution today. The same warning applies to the self-hosted Docker Container Registry that can be deployed to an on-premises Azure Stack Hub via a QuickStart template. Unlike ACR in Public Azure, Azure Stack Hub operators must consider backup and recovery of the images. They would also need to deploy new versions of the QuickStart template as they become available to upgrade the OS or the container registry itself.
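For reference, an AKS engine deployment on Azure Stack Hub resembles the following sketch; the API-model file and the identifiers are placeholders specific to each environment:
aks-engine deploy \
  --azure-env AzureStackCloud \
  --location rack04 \
  --api-model ./kubernetes-azurestack.json \
  --subscription-id <subscription-id> \
  --client-id <spn-application-id> \
  --client-secret <spn-secret> \
  --output-directory ./cluster-output
The output directory holds the kubeconfig and the generated ARM artifacts that aks-engine later reuses for scale and upgrade operations.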
If no requirements prohibit the sending of monitoring data to Public Azure and the proper connectivity exists, Azure Monitor for containers can be leveraged for feature-rich monitoring of the K8s clusters deployed on Azure Stack Hub with AKS engine. In addition, Azure Arc for Data Services can be leveraged to run containerized images of Azure SQL Managed Instances or Azure PostgreSQL Hyperscale on this same K8s cluster. The Azure Monitor and Azure Arc for Data Services options would not be available in submarine scenarios where there would be no connectivity to Azure whatsoever. In the disconnected scenario, the customer would have to determine how best to monitor and run data services on their K8s cluster independent of Public Azure.
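For connected scenarios, one documented path at the time for onboarding an AKS engine cluster to Azure Monitor for containers used a Helm chart; a sketch, assuming an existing Log Analytics workspace (the IDs are placeholders):
helm repo add microsoft https://microsoft.github.io/charts/repo
helm upgrade --install azmon-containers microsoft/azuremonitor-containers \
  --set omsagent.secret.wsid=<workspace-id> \
  --set omsagent.secret.key=<workspace-key> \
  --set omsagent.env.clusterName=<cluster-name>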
Here is a summary of the articles in this blog post series:
Part 1: Running containerized applications on Microsoft Azure’s hybrid ecosystem – Provides an overview of the concepts covered in the blog series.
Part 2: Deploy K8s Cluster into an Azure Stack Hub user subscription – Set up an AKS engine client VM, deploy a cluster using AKS engine, and onboard the cluster to Azure Monitor for containers.
Part 3: Deploy a self-hosted Docker Container Registry – Use one of the Azure Stack Hub QuickStart templates to set up a container registry and push images to it. Then, pull these images from the registry into the K8s cluster deployed with AKS engine in Part 2.

Deploy a self-hosted Docker Container Registry on Azure Stack Hub
Thu, 22 Jul 2021 10:35:43 -0000
Introduction
Welcome to Part 3 of a three-part blog series on running containerized applications on Microsoft Azure’s hybrid ecosystem. In this part, we step through how we deployed a self-hosted, open-source Docker Registry v2 to an Azure Stack Hub user subscription. We also discuss how we pushed a microservices-based application to the self-hosted registry and then pulled those images from that registry onto the K8s cluster we deployed using AKS engine.
Here are the links to all the series articles:
Part 1: Running containerized applications on Microsoft Azure’s hybrid ecosystem – Provides an overview of the concepts covered in the blog series.
Part 2: Deploy K8s Cluster into an Azure Stack Hub user subscription – Set up an AKS engine client VM, deploy a cluster using AKS engine, and onboard the cluster to Azure Monitor for containers.
Part 3: Deploy a self-hosted Docker Container Registry – Use one of the Azure Stack Hub QuickStart templates to set up a container registry and push images to it. Then, pull these images from the registry into the K8s cluster deployed with AKS engine in Part 2.
Here is a diagram depicting the high-level architecture in the lab for review.
There are a few reasons why an organization may want to deploy and maintain a container registry on-premises rather than leveraging a publicly accessible registry like Docker Hub. This approach may be particularly appealing to customers in air-gapped deployments where there is unreliable connectivity to Public Azure or no connectivity at all. This can work well with K8s clusters that have been deployed using AKS engine at military installations or other such locales. There may also be regulatory compliance, data sovereignty, or data gravity issues that can be addressed with a local container registry. Some customers may simply require tighter control over where their images are being stored or need to integrate image storage and distribution tightly into their in-house development workflows.
Prerequisites
The deployment of Docker Registry v2 into an Azure Stack Hub user subscription was performed using the 101-vm-linux-docker-registry Azure Stack Hub QuickStart template. There are two phases to this deployment:
1. Run the setup.ps1 PowerShell script provided in the template repo to create a storage account and key vault in a discrete resource group.
2. Deploy the ARM template that describes the container registry VM and its associated resources, using the azuredeploy.parameters.json file that setup.ps1 generated. These resources are provisioned into a separate resource group.
Be aware that there is another experimental repository in GitHub called msazurestackworkloads for deploying Docker Registry. The main difference is that the code in the msazurestackworkloads repo includes artifacts for creating a marketplace gallery item that an Azure Stack Hub user could deploy into their subscription. Again, this is currently experimental and is not supported for deploying the Docker Registry.
One of the prerequisite steps was to obtain an X.509 SSL certificate in PFX format for loading into key vault. We pointed to its location during the running of the setup.ps1 script. We used our lab’s internal standalone CA to create the certificate, which is the same CA used for deploying the K8s cluster with AKS engine. We thought we’d share the steps we took to obtain this certificate in case any readers aren’t familiar with the process. All these steps must be completed from the same workstation to ensure access to the private key.
We performed these steps on the management workstation:
1. Created an INF file that looked like the following. The subject’s CN is the DNS name we provided for the registry VM’s public IP address.
[Version]
Signature="$Windows NT$"
[NewRequest]
Subject = "CN=cseregistry.rack04.cloudapp.azs.mhclabs.com,O=CSE Lab,L=Round Rock,S=Texas,C=US"
Exportable = TRUE
KeyLength = 2048
KeySpec = 1
KeyUsage = 0xA0
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
HashAlgorithm = SHA256
RequestType = PKCS10
[Strings]
szOID_SUBJECT_ALT_NAME2 = "2.5.29.17"
szOID_ENHANCED_KEY_USAGE = "2.5.29.37"
szOID_PKIX_KP_SERVER_AUTH = "1.3.6.1.5.5.7.3.1"
szOID_PKIX_KP_CLIENT_AUTH = "1.3.6.1.5.5.7.3.2"
[Extensions]
%szOID_SUBJECT_ALT_NAME2% = "{text}dns=cseregistry.rack04.cloudapp.azs.mhclabs.com"
%szOID_ENHANCED_KEY_USAGE% = "{text}%szOID_PKIX_KP_SERVER_AUTH%,%szOID_PKIX_KP_CLIENT_AUTH%"
[RequestAttributes]
2. We used the certreq.exe command to generate a CSR that we then submitted to the CA.
certreq.exe -new cseregistry_req.inf cseregistry_csr.req
3. We received a .cer file back from the standalone CA and followed the instructions here to convert this .cer file to a .pfx file for use with the container registry.
Prepare Azure Stack PKI certificates for deployment or rotation
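For readers who prefer the command line to the Certificates MMC snap-in, the accept-and-export flow can look like the following sketch; the thumbprint and password are placeholders, and the commands must run on the workstation that generated the CSR so the private key is present:
certreq.exe -accept cseregistry.cer
certutil.exe -p "mypassword" -exportPFX My <certificate-thumbprint> cseregistry.pfx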
We also needed the CA’s root certificate in .crt file format. We originally obtained this during the K8s cluster deployment using AKS engine. This certificate must be imported into the certificate store of any device that intends to interact with the container registry.
Deploy the container registry supporting infrastructure
We used the setup.ps1 PowerShell script included in the QuickStart template’s GitHub repo for creating the supporting infrastructure for the container registry. We named the new resource group created by this script demoregistryinfra-rg. This resource group contains a storage account and key vault. The registry is configured to use the Azure storage driver to persist the container images in the storage account blob container. The key vault stores the credentials required to authenticate to the registry as a secret and secures the certificate. A service principal (SPN) created prior to executing the setup.ps1 script is leveraged to access the storage account and key vault.
Here are the values we used for the variables in our setup.ps1 script for reference (sensitive information removed, of course). Notice that the $dnsLabelName value is only the hostname of the container registry server and not the fully qualified domain name.
$location = "rack04"
$resourceGroup = "registryinfra-rg"
$saName = "registrystorage"
$saContainer = "images"
$kvName = "registrykv"
$pfxSecret = "registrypfxsecret"
$pfxPath = "D:\file_system_path\cseregistry.pfx"
$pfxPass = "mypassword"
$spnName = "Application (client) ID"
$spnSecret = "Secret"
$userName = "cseadmin"
$userPass = "!!123abc"
$dnsLabelName = "cseregistry"
$sshKey = "SSH key generated by PuttyGen"
$vmSize = "Standard_F8s_v2"
$registryTag = "2.7.1"
$registryReplicas = "5"
Deploy the container registry using the QuickStart template
We deployed the QuickStart template using a PowerShell command similar to the one indicated in the README.md of the GitHub repo. Again, the azuredeploy.parameters.json file was created automatically by the setup.ps1 script. This was very straightforward. The only thing to mention is that we created a new resource group when deploying the QuickStart template. We could have also selected an existing resource group that did not contain any resources.
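The deployment command resembled the following sketch; it assumes the Az PowerShell modules are already connected to the Azure Stack Hub user environment, and the resource group name and template paths are illustrative:
New-AzResourceGroup -Name demoregistry-rg -Location rack04
New-AzResourceGroupDeployment `
  -ResourceGroupName demoregistry-rg `
  -TemplateFile .\azuredeploy.json `
  -TemplateParameterFile .\azuredeploy.parameters.json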
Testing the Docker Registry with the Sentiment Analyser application
At this point, it was time to test the K8s cluster and self-hosted container registry running on Azure Stack Hub from end-to-end. For this, we followed a brilliant blog article entitled Learn Kubernetes in Under 3 Hours: A Detailed Guide to Orchestrating Containers written by Rinor Maloku. This was a perfect introduction to the world of creating a microservices-based application running in multiple containers. It covers Docker and Kubernetes fundamentals and is an excellent primer for anyone just getting started in the world of containers and container orchestration. The name of the application is Sentiment Analyser, and it uses text analysis to ascertain the emotion of a sentence.
Learn Kubernetes in Under 3 Hours: A Detailed Guide to Orchestrating Containers
We won’t share all the notes we took while walking through the article. However, there are a couple tips we wanted to highlight as they pertain to testing the K8s cluster and new self-hosted Docker Registry in the lab:
- The first thing we did to prepare for creating the Sentiment Analyser application was to set up a development Ubuntu 18.04 VM in a unique resource group in our lab’s user subscription on Azure Stack Hub. We installed Firefox on the VM and Xming on the management workstation so we could test the functionality of the application at the required points in the process. Then, it was just a matter of setting up PuTTY properly with X11 forwarding.
- Before installing anything else on the VM, we imported the root certificate from our lab’s standalone CA. This was critical to facilitate the secure connection between this VM and the registry VM for when we started pushing the images.
- Whenever Rinor’s article talked about pushing the container images to Docker Hub, we instead targeted the self-hosted registry running on Azure Stack Hub. We had to take these steps on the development VM to facilitate those procedures:
- Logged into the container registry using the credentials for authentication that we specified in the setup.ps1 script.
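The login took the form below (the username comes from the setup.ps1 values shown earlier; Docker prompts for the password):
sudo docker login cseregistry.rack04.cloudapp.azs.mhclabs.com --username cseadmin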
- Tagged the image to target the container registry.
sudo docker tag <image ID> cseregistry.rack04.cloudapp.azs.mhclabs.com/sentiment-analysis-frontend
- Pushed the image to the container registry.
sudo docker push cseregistry.rack04.cloudapp.azs.mhclabs.com/sentiment-analysis-frontend
- Once the images were pushed to the registry, we found out that we could view them from the management workstation by browsing to the following URL and authenticating to the registry:
https://cseregistry.rack04.cloudapp.azs.mhclabs.com/v2/_catalog
The following output was given after we pushed all the application images:
{"repositories":["sentiment-analysis-frontend","sentiment-analysis-logic","sentiment-analysis-web-app"]}
- We noticed that we did not have to import the lab’s standalone CA’s root certificate on the primary node before attempting to pull the image from the container registry. We assumed that the cluster picked up the root certificate from the Azure Stack Hub system, as there was a file named /etc/ssl/certs/azsCertificate.pem on the primary node from where we were running kubectl.
- Prior to attempting to create the K8s pod for the Sentiment Analyser frontend, we had to create a Kubernetes cluster secret. This is always necessary when pulling images from private repositories – whether they are private repos on Docker Hub or privately hosted on-premises using a self-hosted container registry. We followed the instructions here to create the secret:
Pull an Image from a Private Registry
In the section of this article entitled Create a Secret by providing credentials on the command line, we discovered a couple items to note:
- When issuing the kubectl create secret docker-registry command, we had to enclose the password in single quotes because it was a complex password.
kubectl create secret docker-registry regcred \
  --docker-server=cseregistry.rack04.cloudapp.azs.mhclabs.com \
  --docker-username=cseadmin \
  --docker-password='!!123abc' \
  --docker-email=<email address>
We’ve now gotten into the habit of verifying the secret after we create it by using the following command:
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
- Then, when we came to the modification of the YAML files in the article, we learned just how strict YAML is about formatting. When we added the imagePullSecrets: object, we had to ensure that it lined up correctly with the containers: object. Also interesting to note: the numbered comments in the right-hand column did not need to be duplicated.
Here is the content of the file that worked, with the indentation restored:
apiVersion: v1
kind: Pod # 1
metadata:
  name: sa-frontend
  labels:
    app: sa-frontend # 2
spec: # 3
  containers:
    - image: cseregistry.rack04.cloudapp.azs.mhclabs.com/sentiment-analysis-frontend # 4
      name: sa-frontend # 5
      ports:
        - containerPort: 80 # 6
  imagePullSecrets:
    - name: regcred
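From there, creating the pod and confirming the private-registry pull is the standard kubectl flow (the file name is illustrative):
kubectl apply -f sa-frontend-pod.yaml
kubectl get pods
kubectl describe pod sa-frontend   # the Events section shows the image pull from the registry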
At this point, we were able to observe the fully functional Sentiment Analyser application running on our K8s cluster on Azure Stack Hub. We were not only running this application on-premises in a highly prescriptive, self-managed Kubernetes cluster, but we were also able to do so while leveraging a self-hosted Docker Registry for transferring the images. We could also use Azure Monitor for containers in the Public Azure portal to monitor our running containerized application and create thresholds for timely alerting on any potential issues.
We hope this blog post proves insightful to anyone deploying a self-hosted container registry on Azure Stack Hub. It has been a great learning experience stepping through the deployment of a containerized application using the Microsoft Azure toolset. There are so many other things we want to try, like deploying Azure Cognitive Services to Azure Stack Hub and using Azure Arc to run Azure data services on our K8s cluster. We look forward to sharing more of our testing results on these exciting capabilities in the near future.