
Built to Scale with VCF on VxRail and Oracle 19c RAC
Fri, 17 Apr 2020 05:21:03 -0000
The newly released Oracle RAC on Dell EMC VxRail with VMware Cloud Foundation (VCF) Reference Architecture (RA) guides customers in building an efficient, high-performing hyperconverged infrastructure to run their OLTP workloads. Scalability was the primary goal of this RA, and the numbers generated also highlight its performance. As Oracle RAC scaled, throughput climbed to over 1 million transactions per minute (TPM), while read latency stayed sub-millisecond (0.64-0.70 ms). The performance achieved with VxRail is a welcome addition to the core design points for Oracle RAC environments, whose primary focus is the availability and resiliency of the solution. Links to the reference architecture ("Oracle RAC on VMware Cloud Foundation on Dell EMC VxRail") and a solution brief ("Deploying Oracle RAC on Dell EMC VxRail") are available at the end of this post.
The RAC solution with VxRail scaled out easily: you simply add a new node to join an existing VxRail cluster. VxRail Manager provides a simple path that automatically discovers and non-disruptively adds each new node. VMware vSphere and vSAN can then rebalance resources and workloads across the cluster, creating a single resource pool for compute and storage.
The VxRail clusters were built with eight VxRail P570F nodes: four for the VCF Management Domain and four for the Oracle RAC Workload Domain.
Specifics on the build, including the hardware and software used, are detailed within the reference architecture. It also provides information on the testing, tools used, and results.
This graph shows TPM and response time as the RAC node count increases from one to four. Notice that average TPM increased along a near-linear trendline (shown by the dotted line) as additional RAC nodes were added, while total application response time was held at 20 milliseconds or less.
Note: The TPM near-linear trendline is shown in the graph above (blue dotted line). As additional RAC nodes are added, performance increases along with RAC high availability. Perfectly linear TPM growth (equal performance gained per node) is not achieved because of the RAC nodes' dependency on concurrency of access, instance, network, and other factors. See the RA for additional performance-related information.
Summary of performance
Databases of different sizes sustained the same TPM level (about one million transactions per minute) while keeping application response time at 20 ms or below. As the database size increased, physical read and write IOPS increased near-linearly, as reported by Oracle AWR, indicating that more read and write I/O requests were served by the backend storage under the same configuration. Overall, even with peak client IOPS up to 100,000, vSAN still delivered excellent storage performance: sub-millisecond read latency and single-digit-millisecond write latency.
Sidebar about Oracle licensing: While not covered in the RA, VxRail offers several facilities to control Oracle licensing costs and, in some cases, eliminate the need for costly licensed options. These include a broad choice of CPU core configurations, some with fewer cores and higher processing power per core, to maximize the customer's Oracle workload performance while minimizing license requirements. Costly add-on options such as encryption and compression can be provided via vSAN and are handled by VxRail. Further, vSphere hypervisor features such as DRS allow Oracle VMs to be confined to licensed nodes only.
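For readers who want to see what that DRS containment looks like in practice, here is a minimal pyVmomi sketch that pins a group of Oracle VMs to a group of licensed hosts with a mandatory ("must run") VM-Host rule. All hostnames, credentials, cluster, host, and VM names below are placeholders rather than values from the RA, and this is illustrative only.
```python
# Hedged pyVmomi sketch: confine Oracle VMs to licensed hosts with a mandatory
# ("must run") DRS VM-Host rule. All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; use verified certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "oracle-cluster")
licensed_hosts = [find_by_name(vim.HostSystem, n)
                  for n in ("esx01.example.com", "esx02.example.com")]
oracle_vms = [find_by_name(vim.VirtualMachine, n)
              for n in ("oracle-rac1", "oracle-rac2")]

# Create a VM group, a host group, and a mandatory VM-Host affinity rule.
spec = vim.cluster.ConfigSpecEx(
    groupSpec=[
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.VmGroup(name="oracle-vms", vm=oracle_vms)),
        vim.cluster.GroupSpec(operation="add",
            info=vim.cluster.HostGroup(name="oracle-licensed-hosts",
                                       host=licensed_hosts)),
    ],
    rulesSpec=[
        vim.cluster.RuleSpec(operation="add",
            info=vim.cluster.VmHostRuleInfo(
                name="oracle-vms-on-licensed-hosts", enabled=True, mandatory=True,
                vmGroupName="oracle-vms",
                affineHostGroupName="oracle-licensed-hosts")),
    ],
)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
Disconnect(si)
```
A mandatory rule is honored even during vSphere HA restarts, which is why "must run" rules are the variant usually raised in licensing conversations; validate any such approach against your Oracle license agreement.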
You can speak to a Dell Technologies’ Oracle specialist for more details on how to control Oracle licensing costs for VMware environments.
Conclusion
Oracle Database 19c on VxRail offers customers performance, scalability, reliability, and security for all their operational and analytical workloads. The Oracle RAC on VxRail test environment was created first to highlight the architecture; showcasing the performance VxRail delivers was an added benefit. If you need more performance, simply adjust the configuration by adding VxRail nodes to the cluster. If you need more storage, add drives to meet the scale the database requires. Dell Technologies has Oracle specialists who can ensure the VxRail cluster meets the scale and performance outcomes desired for Oracle environments.
Additional Resources:
Reference Architecture - Oracle RAC on VMware Cloud Foundation on Dell EMC VxRail
Solution Brief - Deploying Oracle RAC on Dell EMC VxRail
Author: Vic Dery, Senior Principal Engineer, VxRail Technical Marketing
Special thanks to David Glynn for assisting with the reviews
Related Blog Posts

Take VMware Tanzu to the Cloud Edge with Dell Technologies Cloud Platform
Mon, 02 Nov 2020 15:50:28 -0000
Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.1.0 on VxRail 7.0.100.
This release brings support for the latest versions of VMware Cloud Foundation and Dell EMC VxRail to the Dell Technologies Cloud Platform and provides a simple and consistent operational experience for developer ready infrastructure across core, edge, and cloud. Let’s review these new features.
Updated VMware Cloud Foundation and VxRail BOM
Cloud Foundation 4.1 on VxRail 7.0.100 introduces support for the latest versions of the SDDC components listed below:
- vSphere 7.0 U1
- vSAN 7.0 U1
- NSX-T 3.0 P02
- vRealize Suite Lifecycle Manager 8.1 P01
- vRealize Automation 8.1 P02
- vRealize Log Insight 8.1.1
- vRealize Operations Manager 8.1.1
- VxRail 7.0.100
For the complete list of component versions in the release, please refer to the VCF on VxRail release notes. A link is available at the end of this post.
VMware Cloud Foundation Software Feature Updates
VCF on VxRail Management Enhancements
vSphere Cluster Services (vCLS)
vSphere Cluster Services is a new capability introduced in the vSphere 7 Update 1 release that is included as a part of VCF 4.1. It runs as a set of virtual machines deployed on top of every vSphere cluster. Its initial functionality provides foundational capabilities that are needed to create a decoupled and distributed control plane for clustering services in vSphere. vCLS ensures cluster services like vSphere DRS and vSphere HA are all available to maintain the resources and health of the workloads running in the clusters independent of the availability of vCenter Server. The figure below shows the components that make up vCLS from the vSphere Web Client.
Figure 1
Not only does vSphere 7 provide modernized data services like embedded vSphere Native Pods with vSphere with Tanzu, but features like vCLS also begin the evolution toward modernized, distributed control planes!
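If you want to spot the vCLS agent VMs in your own inventory, a quick hedged pyVmomi sketch like the following lists them by their well-known name prefix (hostname and credentials are placeholders):
```python
# Hedged pyVmomi sketch: list the vCLS agent VMs in the vCenter inventory.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    if vm.name.startswith("vCLS"):  # agent VMs are named "vCLS (1)", "vCLS (2)", ...
        print(vm.name, vm.runtime.host.parent.name, vm.runtime.powerState)
view.Destroy()
Disconnect(si)
```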
VCF Managed Resources and VxRail Cluster Object Renaming Support
VCF can now rename resource objects post creation, including the ability to rename domains, datacenters, and VxRail clusters.
Domains are managed by SDDC Manager. As a result, you will find additional options within the SDDC Manager UI that allow you to rename these objects.
VxRail Cluster objects are managed by a given vCenter Server instance, so cluster names must be changed within vCenter Server. Once you do, return to SDDC Manager and refresh the UI; the new cluster name is retrieved by SDDC Manager and displayed.
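As a hedged illustration of the vCenter side of that rename, the pyVmomi snippet below uses the generic vSphere ManagedEntity rename task; the vCenter hostname, credentials, and both cluster names are placeholders:
```python
# Hedged pyVmomi sketch: rename a VxRail cluster object in vCenter Server.
# SDDC Manager picks up the new name after a UI refresh, as described above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "VxRail-Virtual-SAN-Cluster")
view.Destroy()

# Rename_Task is the generic vSphere ManagedEntity rename operation.
WaitForTask(cluster.Rename_Task(newName="wld01-cluster01"))
Disconnect(si)
```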
In addition to domain and VxRail cluster object renaming, SDDC Manager now supports a customized Datacenter object name. The enhanced VxRail VI WLD creation wizard has been updated to include an input for the Datacenter name, which is automatically imported into the SDDC Manager inventory during the VxRail VI WLD creation workflow. Note: Make sure the Datacenter name matches the one used during the VxRail Cluster First Run. The figure below shows the Datacenter input step in the enhanced VxRail VI WLD creation wizard within SDDC Manager.
Figure 2
Being able to customize resource object names makes VCF on VxRail more flexible in aligning with an IT organization’s naming policies.
VxRail Integrated SDDC Manager WLD Cluster Node Removal Workflow Optimization
Furthering the Dell Technologies and VMware co-engineering integration efforts for VCF on VxRail, new workflow optimizations have been introduced in VCF 4.1 that take advantage of VxRail Manager APIs for VxRail cluster host removal operations.
When the time comes for VCF on VxRail cloud administrators to remove hosts from WLD clusters and repurpose them for other domains, they use the SDDC Manager "Remove Host from WLD Cluster" workflow to perform this task. This remove-host operation is now fully integrated with native VxRail Manager APIs, removing physical VxRail hosts from a VxRail cluster as a single end-to-end automated workflow kicked off from the SDDC Manager UI or the VCF API. This integration further simplifies and streamlines VxRail infrastructure management operations, all from within common VMware SDDC management tools. The figure below illustrates the SDDC Manager subtasks, including the new VxRail API calls used by SDDC Manager as part of the workflow.
Figure 3
Note: Removed VxRail nodes require reimaging prior to repurposing them into other domains. This reimaging currently requires Dell EMC support to perform.
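For API-minded admins, the sketch below shows roughly what kicking this off through the VCF public API could look like. The token endpoint (POST /v1/tokens) is documented VCF behavior, but the cluster-compaction spec shape shown here is my assumption of the host-removal payload; verify it against the VCF on VxRail API reference before using it.
```python
# Hedged sketch: remove a host from a WLD cluster via the VCF public API.
# The clusterCompactionSpec shape is an assumption modeled on the VCF API's
# cluster compaction operation; IDs and credentials are placeholders.
import requests

SDDC = "https://sddc-manager.example.com"

# Obtain an API access token (POST /v1/tokens is the documented VCF auth endpoint).
tok = requests.post(f"{SDDC}/v1/tokens", verify=False,  # lab only
                    json={"username": "administrator@vsphere.local",
                          "password": "changeme"})
headers = {"Authorization": f"Bearer {tok.json()['accessToken']}"}

cluster_id = "11111111-2222-3333-4444-555555555555"  # placeholder cluster ID
host_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"     # placeholder host ID

# Cluster compaction: PATCH the cluster with the host(s) to remove (assumption).
spec = {"clusterCompactionSpec": {"hosts": [{"id": host_id}]}}
r = requests.patch(f"{SDDC}/v1/clusters/{cluster_id}", json=spec,
                   headers=headers, verify=False)
print(r.status_code, r.json())
```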
I18N Internationalization and Localization (SDDC Manager)
SDDC Manager now has international language support that meets I18N internationalization and localization standards. Options to select the desired language are available in the Cloud Builder UI, which installs SDDC Manager with the selected language settings. SDDC Manager has localization support for the following languages: German, Japanese, Chinese, French, and Spanish. The figure below illustrates an example of what this looks like in the SDDC Manager UI.
Figure 4
vRealize Suite Enhancements
VCF Aware vRSLCM
New in VCF 4.1, the vRealize Suite is fully integrated into VCF. SDDC Manager deploys vRSLCM and creates a two-way communication channel between the two components. Once deployed, vRSLCM is VCF-aware and reports back to SDDC Manager which vRealize products are installed. The installation of vRealize Suite components follows standardized VVD best-practice deployment designs leveraging Application Virtual Networks (AVNs).
Software Bundles for the vRealize Suite are all downloaded and managed through the SDDC Manager. When patches or updates become available for the vRealize Suite, lifecycle management of the vRealize Suite components is controlled from the SDDC Manager, calling on vRSLCM to execute the updates as part of SDDC Manager LCM workflows. The figure below showcases the process for enabling vRealize Suite for VCF.
Figure 5
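As a rough sketch of that bundle management through the VCF public API, the snippet below lists bundles and requests a download. The /v1/bundles path is part of the documented VCF API, but the exact download-spec field names are assumptions to verify against the API reference.
```python
# Hedged sketch: list available bundles and trigger a vRealize bundle download
# through SDDC Manager. Bundle ID and token are placeholders.
import requests

SDDC = "https://sddc-manager.example.com"
headers = {"Authorization": "Bearer <access-token>"}  # token from POST /v1/tokens

# List bundles known to SDDC Manager and show their download status.
bundles = requests.get(f"{SDDC}/v1/bundles", headers=headers, verify=False).json()
for b in bundles.get("elements", []):
    print(b["id"], b.get("type"), b.get("downloadStatus"))

# Request an immediate download of one bundle (spec shape is an assumption).
bundle_id = "<vrealize-bundle-id>"
r = requests.patch(f"{SDDC}/v1/bundles/{bundle_id}", headers=headers, verify=False,
                   json={"bundleDownloadSpec": {"downloadNow": True}})
print(r.status_code)
```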
VCF Multi-Site Architecture Enhancements
VCF Remote Cluster Support
VCF Remote Cluster Support enables customers to extend their VCF on VxRail operational capabilities to ROBO and Cloud Edge sites, enabling consistent operations from core to edge. Pair this with an awesome selection of VxRail hardware platform options and Dell Technologies has your Edge use cases covered. More on hardware platforms later. For a great, detailed explanation of this exciting new feature, check out the link to a VMware blog post on the topic at the end of this post.
VCF LCM Enhancements
NSX-T Edge and Host Cluster-Level and Parallel Upgrades
With previous VCF on VxRail releases, NSX-T upgrades were all-encompassing, meaning that a single update required updating all the host transport nodes as well as the NSX Edge and Manager components in one operation.
With VCF 4.1, support has been added to perform staggered NSX updates to help minimize maintenance windows. An NSX upgrade can now consist of three distinct parts:
- Updating the NSX Edges
  - Can be one job or multiple jobs (rerun the wizard for each)
  - Must be done before moving on to the hosts
- Updating the host transport nodes
- Updating the NSX Managers, once the hosts within the clusters have been updated
Multiple NSX Edge and/or host transport clusters within an NSX-T instance can be upgraded in parallel. The administrator can choose some clusters without having to choose all of them. Clusters within an NSX-T fabric can also be upgraded sequentially, one at a time. NSX-T components can be updated in several ways:
- NSX-T Edges and Host Clusters within an NSX-T instance can be upgraded together in parallel (default)
- NSX-T Edges can be upgraded independently of NSX-T Host Clusters
- NSX-T Host Clusters can be upgraded independently of NSX-T Edges only after the Edges are upgraded first
- NSX-T Edges and Host Clusters within an NSX-T instance can be upgraded sequentially one after another.
The figure below visually depicts these options.
Figure 6
These options provide Cloud admins with a ton of flexibility so they can properly plan and execute NSX-T LCM updates within their respective maintenance windows. More flexible and simpler operations. Nice!
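To make the options above concrete, here is a hedged sketch of an upgrade request through the VCF public API. The /v1/upgrades endpoint exists in the VCF API, but the exact spec shape, and in particular the NSX-T option field names, are my assumptions; treat this as illustrative and confirm against the VCF API reference.
```python
# Hedged sketch: schedule a staggered NSX-T upgrade via the VCF public API.
# All IDs are placeholders; the nsxtUpgradeUserInputSpecs shape is an assumption.
import requests

SDDC = "https://sddc-manager.example.com"
headers = {"Authorization": "Bearer <access-token>"}

spec = {
    "bundleId": "<nsxt-upgrade-bundle-id>",            # placeholder
    "resourceType": "DOMAIN",
    "resourceUpgradeSpecs": [{"resourceId": "<wld-domain-id>", "upgradeNow": True}],
    "nsxtUpgradeUserInputSpecs": [{
        "nsxtId": "<nsxt-instance-id>",
        "nsxtUpgradeOptions": {
            "isEdgeOnlyUpgrade": True,                 # edges first, hosts later
            "isEdgeClustersUpgradeParallel": True,     # edge clusters in parallel
            "isHostClustersUpgradeParallel": True      # host clusters in parallel
        }
    }]
}
r = requests.post(f"{SDDC}/v1/upgrades", json=spec, headers=headers, verify=False)
print(r.status_code, r.json())
```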
VCF Security Enhancements
Read-Only Access Role, Local and Service Accounts
A new ‘view-only’ role has been added in VCF 4.1. For some context, let’s walk through what happens when logging in to SDDC Manager.
First, you provide a username and password. This information is sent to SDDC Manager, which passes it to the SSO domain for verification. Once verified, SDDC Manager can see which role the account is privileged for.
In previous versions of Cloud Foundation, the role would be either Administrator or Operator.
Now, there is a third role called ‘Viewer’. As its name suggests, this is a view-only role with no ability to create, delete, or modify objects. Users assigned this role may not see certain items in the SDDC Manager UI, such as the User screen, and may see a message saying they are unauthorized to perform certain actions.
Also new, VCF now has a local account that can be used during an SSO failure. To understand why this is needed, consider: what happens when the SSO domain is unavailable for some reason? In that case, the user would not be able to log in. To address this, administrators can now configure a VCF local account called admin@local, which allows certain actions to be performed until the SSO domain is functional again. This local account is defined in the deployment worksheet and used during the VCF bring-up process. If bring-up has already been completed and the local account was not configured, a warning banner is displayed in the SDDC Manager UI until the local account is configured.
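As a quick hedged sketch, authenticating with that local account looks the same as any other VCF API token request (POST /v1/tokens is the documented auth endpoint); only the username differs:
```python
# Hedged sketch: obtain a VCF API token with the admin@local account when the
# SSO domain is down. The password placeholder comes from the deployment worksheet.
import requests

r = requests.post("https://sddc-manager.example.com/v1/tokens", verify=False,
                  json={"username": "admin@local",
                        "password": "<local-account-password>"})
print(r.json().get("accessToken", "no token issued"))
```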
Lastly, SDDC Manager now uses new service accounts to streamline communications between SDDC Manager and the products within Cloud Foundation. These service accounts follow VVD guidelines for pre-defined usernames and are administered through the admin user account to improve inter-VCF communications within SDDC Manager.
VCF Data Protection Enhancements
As described in this blog, with VCF 4.1, SDDC Manager backup-recovery workflows and APIs have been improved with capabilities such as backup management, backup scheduling, retention policies, on-demand backups, and automatic retries on failure. The improvements also include public APIs for the third-party ecosystem and certified backup solutions from Dell PowerProtect.
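A hedged sketch of what driving these capabilities through the public API could look like is below. The endpoint paths and spec fields are assumptions modeled on the VCF backup APIs; confirm them against the API reference for your release.
```python
# Hedged sketch: configure an SDDC Manager backup schedule and trigger an
# on-demand backup. Paths and spec shapes are assumptions; token is a placeholder.
import requests

SDDC = "https://sddc-manager.example.com"
headers = {"Authorization": "Bearer <access-token>"}

# Weekly Sunday 02:00 backup with a 15-copy retention policy (assumed shape).
schedule = {
    "backupSchedules": [{
        "resourceType": "SDDC_MANAGER",
        "frequency": "WEEKLY",
        "daysOfWeek": ["SUNDAY"],
        "hourOfDay": 2,
        "minuteOfHour": 0,
        "retentionPolicy": {"numberOfMostRecentBackups": 15}
    }]
}
requests.patch(f"{SDDC}/v1/system/backup-configuration", json=schedule,
               headers=headers, verify=False)

# On-demand backup of SDDC Manager (assumed endpoint and payload).
requests.post(f"{SDDC}/v1/backups/tasks", headers=headers, verify=False,
              json={"elements": [{"resourceType": "SDDC_MANAGER"}]})
```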
VxRail Software Feature Updates
VxRail Networking Enhancements
VxRail 4 x 25Gbps pNIC redundancy
VxRail engineering continues to innovate in areas that drive more value for customers, and the latest VCF on VxRail release delivers just that. New in this release, customers can use the automated VxRail First Run process to deploy VCF on VxRail nodes using 4 x 25Gbps physical port configurations to run the VxRail System vDS for system traffic such as Management, vSAN, and vMotion. The physical port configuration of the VxRail nodes would include 2 x 25Gbps NDC ports and an additional 2 x 25Gbps PCIe NIC ports.
In this 4 x 25Gbps setup, NSX-T traffic runs on the same System vDS. But what is great here (and where the flexibility comes in) is that customers can also choose to separate NSX-T traffic onto its own NSX-T vDS that uplinks to separate physical PCIe NIC ports by using the SDDC Manager APIs. This ability was first introduced in the previous release and can also be leveraged here to expand the flexibility of VxRail host network configurations.
The figure below illustrates the option to select the base 4 x 25Gbps port configuration during VxRail First Run.
Figure 7
By allowing customers to run the VxRail System vDS across the NDC NIC ports and PCIe NIC ports, customers gain an extra layer of physical NIC redundancy and high availability. This was already supported on 10Gbps-based VxRail nodes; this release brings the same high availability option to 25Gbps-based VxRail nodes. Extra network high availability AND 25Gbps performance!? Sign me up!
VxRail Hardware Platform Updates
Recently introduced support for the ruggedized D-Series VxRail hardware platforms (D560/D560F) continues to expand the VxRail hardware platforms supported in the Dell Technologies Cloud Platform.
These ruggedized and durable platforms are designed to meet the demand for more compute, performance, and storage and, more importantly, for operational simplicity, delivering the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas.
These D-Series systems are a perfect match for the VCF Remote Cluster features introduced in Cloud Foundation 4.1.0, enabling Cloud Foundation with Tanzu on VxRail to reach space-constrained and challenging ROBO/Edge sites to run cloud native and traditional workloads, extending existing VCF on VxRail operations to these locations! Cool, right?!
To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.
Well, that about covers it for this release. The innovation train continues. Until next time, feel free to check out the links below to learn more about DTCP (VCF on VxRail).
Jason Marques
Twitter - @vwhippersnapper
Additional Resources
VMware Blog Post on VCF Remote Clusters
Cloud Foundation on VxRail Release Notes
VxRail page on DellTechnologies.com
VCF on VxRail Interactive Demos

The Latest VxRail Platform Innovation is Now Included in Your Cloud
Tue, 18 Aug 2020 15:32:11 -0000
The Dell Technologies Cloud Platform, VCF on VxRail, now supports the latest VxRail HCI System Software release, featuring a new and improved first run experience, host geo-location tagging capabilities, hardware platform updates, and enhanced security features.
Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1.1 on VxRail 7.0.010.
This release brings support for the latest version of VxRail to the Dell Technologies Cloud Platform. Let’s review what these new features are all about.
Updated VxRail Software Bill of Materials
Please check out the VCF on VxRail release notes for a full listing of the supported software BOM associated with this release. You can find the link at the bottom of the page.
VxRail Hardware Platform Updates
VxRail 7.0.010 brings new support for the ruggedized D-Series VxRail hardware platforms (D560/D560F). These ruggedized and durable platforms are designed to meet the demand for more compute, performance, and storage and, more importantly, for operational simplicity, delivering the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas. To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.
Also, this release reintroduces GPU support, which was absent from the initial VCF 4.0 on VxRail 7.0 release.
New and Improved VxRail First Run Experience
This release delivers a new Day 1 VxRail cluster first run workflow along with UI enhancements. The new Day 1 first run deployment wizard consists of 13 steps, or top-level tasks. This Day 1 workflow update was required to support new VxRail HCI System Software enhancements.
The new UI provides more flexibility for configuration data entry during deployment, including:
- Unique hostnames for each ESXi host, without forcing a naming convention
- Non-sequential IP addresses for hosts in the cluster
- Support for a geographical location ID tag, e.g. Rack Name or Rack Position
It also provides a cleaner interface with a consistent look and feel for Information, Warnings, and Errors, along with improved validation that gives better feedback when errors are encountered or validation checks fail. And finally, options to manually enter all the configuration parameters or to upload a pre-defined configuration via a YAML or JSON file are still available too! The figure below illustrates the new first run steps and UI.

New VxRail API to Automate Day 1 VxRail First Run Cluster Creation
This feature allows for fast and consistent VxRail cluster deployments using the programmatic extensibility of a REST API, giving administrators an option for creating VxRail clusters beyond the VxRail Manager first run UI.
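Here is a hedged sketch of what such an automated Day 1 deployment could look like. The endpoint path and the payload handling are assumptions; the JSON configuration file exported from or accepted by the first run wizard is the authoritative format, so consult the VxRail API reference for the exact contract.
```python
# Hedged sketch: drive a Day 1 VxRail cluster deployment via the VxRail Manager
# REST API. The /rest/vxm/v1/system/initialize path is an assumption, and the
# config file referenced here is the same JSON the first run UI would accept.
import requests

VXM = "https://vxrail-manager.example.com"
auth = ("administrator@vsphere.local", "changeme")  # placeholder credentials

# Load the pre-defined Day 1 cluster configuration (placeholder filename).
with open("day1-cluster-config.json") as f:
    day1_spec = f.read()

r = requests.post(f"{VXM}/rest/vxm/v1/system/initialize",
                  data=day1_spec, auth=auth, verify=False,
                  headers={"Content-Type": "application/json"})
print(r.status_code, r.text)  # expect a request ID to poll for progress (assumption)
```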
Day 1 Support to Initially Deploy Up to Six Nodes in a VxRail Cluster During VxRail First Run
The previous maximum node deployment supported in the VxRail first run was four. Administrators who needed VxRail clusters larger than four nodes had to create the cluster with four nodes and, once that was in place, perform node expansions to reach the desired cluster size. This new feature reduces the time needed to create larger VxRail clusters by allowing a larger starting point of six VxRail nodes.
VxRail Host Geo-Location Tagging
This is probably one of the coolest and most underrated features in the release, in my opinion. VxRail Manager now supports geographic location tags for VxRail hosts. This capability adds important admin-defined host metadata that helps customers gain greater visibility into the physical location of the HCI infrastructure that makes up their cloud. This information is configured as "Host Settings" during the VxRail first run, as illustrated in the figure below.

As shown, the two values that make up the geo-location tags are Rack Name and Rack Position. These values are stored in the iDRAC of each VxRail host. You may be asking yourself, "Great! I can add extra metadata for my VxRail hosts, but what can I do with it?" Together, these values help a cloud administrator identify a VxRail host's position within a given rack in the data center. Cloud administrators can then use this data to choose the order in which VxRail hosts are displayed in the VxRail Manager vCenter plugin Physical View. The figure below illustrates what this looks like.

As data center environments grow, VxRail host expansion operations can be used to add infrastructure capacity. The VxRail "Add VxRail Hosts" automated expansion workflow has been updated to include a new Host Location step, which allows adding geo-location Rack Name and Rack Position metadata for the new hosts being added to an existing VxRail cluster. The figure below shows what a host expansion operation looks like.

In this fast-paced world of digital transformation, it is not uncommon for cloud data center infrastructure to be moved within a data center after it has been installed, whether due to physical rack expansion design changes or infrastructure repurposing. These situations were also considered in the design of VxRail geo-location tags: there is an option to dynamically edit an existing host's geo-location information. When this is done, VxRail Manager automatically updates the host's iDRAC with the new values. The figure below shows what the host edit looks like.

All these geo-location management capabilities provide VCF on VxRail administrators with full-stack physical-to-virtual infrastructure mapping that helps further extend the Cloud Foundation management experience and simplify operations! And this capability is only available with the Dell Technologies Cloud Platform (VCF on VxRail)! How cool is that?!
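For those who prefer automation, a hedged sketch of scripting that same host edit through the VxRail API follows. The endpoint path and the geo-location field names are assumptions modeled on the VxRail host APIs; verify them against the VxRail API documentation for your release.
```python
# Hedged sketch: update a host's geo-location tags through the VxRail API.
# Endpoint path and geo_location field names are assumptions; the serial
# number, rack values, and credentials are placeholders.
import requests

VXM = "https://vxrail-manager.example.com"
auth = ("administrator@vsphere.local", "changeme")  # placeholder credentials
serial = "ABC1234"                                   # placeholder service tag

r = requests.patch(f"{VXM}/rest/vxm/v1/hosts/{serial}",
                   json={"geo_location": {"rack_name": "rack-07",
                                          "order_number": 12}},
                   auth=auth, verify=False)
print(r.status_code)  # VxRail Manager also pushes the values to the host's iDRAC
```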
VxRail Security Enhancements
Added Security Compliance with FIPS 140-2 Level 1 Validated Cryptography for VxRail Manager
Cloud Foundation on VxRail offers intrinsic security built into every layer of the solution stack, from hardware silicon to storage to compute to networking to governance controls. This helps customers make security a built-in part of the platform for traditional workloads as well as container-based cloud native workloads, rather than something bolted on after the fact.
Building on the intrinsic security capabilities of the platform are the following new features:
VxRail Manager is now FIPS 140-2 compliant, offering built-in intrinsic encryption and meeting the high security standards required by the US Department of Defense.
From VxRail 7.0.010 onward, VxRail has 'FIPS inside'! This includes built-in features such as:
- VxRail Manager Data-in-Transit (e.g., HTTPS interfaces, SSH)
- VxRail Manager's SLES12 FIPS usage
- VxRail Manager - encryption used for password caching
Disable VxRail LCM operations from vCenter
To limit administrator configuration errors that could result from performing VxRail LCM operations from within vCenter rather than through SDDC Manager, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager plugin Updates screen out of the box. This forces administrators to use SDDC Manager for all LCM operations, which guarantees that the full stack of hardware and software has been qualified and validated for their environment. The figure below illustrates what this looks like.

Disable VxRail Host Rename/Re-IP operations in vCenter
Continuing the theme of limiting administrator configuration errors, this feature prevents administrators from performing VxRail Host Edit operations from within vCenter that are not supported in VCF. To maintain a consistent operating experience, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager plugin Hosts screen out of the box. The figure below illustrates what this looks like.

Now those are some intrinsic security features!
Well that about covers all the new features! Thanks for taking the time to learn more about this latest release. As always, check out some of the links at the bottom of this page to access additional VCF on VxRail resources.
Jason Marques
Twitter - @vwhippersnapper
Additional Resources