
Lifecycle Management for vSAN Ready Nodes and VxRail Clusters: Part 2 – Cloud Foundation Use Cases
Wed, 03 Aug 2022 21:32:15 -0000
Read Time: 0 minutes
In my previous post I compared the customer experience of using vSphere Lifecycle Manager Images (vLCM Images) and VxRail Manager to maintain HCI stack integrity and complete full stack software updates for standard vSphere cluster use cases. It was clear that VxRail Manager optimized operational efficiency while taking ownership of software validation for the complete cluster, removing the burden of testing and reducing the overall risk during a lifecycle management event. However, questions I frequently get are: Do those same values carry over when using VxRail as part of a VMware Cloud Foundation on VxRail (Dell Technologies Cloud Platform) deployment? Are vLCM Images even used in VCF on VxRail deployments? In this post I want to dive into answering these questions.
There are some excellent resources available on the VxRail InfoHub web portal, including several blog posts that discuss the unique integration between SDDC Manager and VxRail Manager in the area of LCM (like this one). If you are unfamiliar with VCF and SDDC Manager functionality, I suggest that you check them out, as they will help you follow along with this post.
Before we dive in, there are a few items that you should be aware of regarding SDDC Manager and vLCM. I won’t go into all of them here, but you can check out the VCF 4.1 Release Notes, vLCM Requirements and Limitations, VCF 4.1 Admin Guide, and Tanzu documentation for more details. A few worth highlighting include:
- You cannot deploy a Service VM to an NSX Manager that is associated with a workload domain that is using vSphere Lifecycle Manager Images
- Management domains, and thus VCF consolidated architecture deployments, can only support vSphere Lifecycle Manager Baselines (formerly known as VUM) based updates, because the use of vLCM Images is not supported in the Management domain default cluster.
- VMware vSphere Replication is considered a non-integrated solution and cannot be used in conjunction with vLCM Images.
- While vLCM Images supports both NSX-T and vSphere with Kubernetes, it does not support both enabled at the same time within a cluster. This means you cannot use vLCM Images with vSphere with Kubernetes within a VCF workload domain deployment.
As in my last post, the main area of focus here is the customer experience with VMware Cloud Foundation using vLCM Images and using VxRail, specifically:
- Defining the initial baseline node image configuration
- Planning for a cluster update
- Executing the cluster update
- Sustaining cluster integrity over the long term
Oh, and one last important point to make before we get into the details. As of this writing, vLCM is only used when deploying VCF on server/vSAN Ready Nodes and is not used when deploying VCF on VxRail. As a result, all information covered here when comparing vLCM with VxRail Manager essentially compares the LCM experience of running VCF on servers/vSAN Ready Nodes vs VCF on VxRail.
Defining the Initial Baseline Node Image Configuration
How is it Done With vLCM Images?
We covered this in detail in my previous post. The requirements for VCF-based systems also remain the same, but one thing to highlight in VCF use cases is that the customer is still responsible for the installation, configuration, and ongoing updating of the hardware vendor Hardware Support Manager (HSM) components used in the vLCM Images-based environment. SDDC Manager does not automatically deploy these components or lifecycle them for you.
VCF deployments do differ in the area of initial VCF node imaging. In VCF deployments there are two initial node imaging options for customers:
- A manual install of ESXi and associated driver and firmware packages
- A semi-automated process using a service called VMware Imaging Appliance (VIA) that runs as a part of the Cloud Builder appliance.
The VIA service uses a PXE boot environment to image nodes, which need to be on the same L2 domain as the appliance and reachable over an untagged VLAN (VLAN ID 0). ESXi images and VIBs can be uploaded to the VIA service on the Cloud Builder appliance. Hostnames and IP addresses are assigned during the imaging process. Once initial imaging is complete and Cloud Builder has run through its automated workflow, you are left with a provisioned Management Domain. (One important consideration here regarding initial node baseline images: customers need to ensure that the hardware and software components included in these images are validated against the VCF and ESXi software and hardware BOMs that have been certified for the version of VCF that will be installed in their environment.) This default cluster within the Management Domain cannot use vLCM Images for future cluster LCM updates.
Creating a new VI Workload Domain in a VCF on vSAN Ready Nodes deployment is when you can choose to enable vLCM Images as your method of choice for cluster updates; alternatively, you can select vLCM Baselines. (Note: When using vLCM Baselines, firmware updates are not maintained as part of cluster lifecycle management.) If you opt to use vLCM Images, you cannot revert to using vLCM Baselines for cluster management, so it is very important to choose wisely and understand which LCM operating model is needed prior to deploying the workload domain. Because this blog post focuses on vLCM Images, let's review what is involved when you select this option.
To begin, it’s important to know that you cannot create a vLCM Images-based workload domain until you import an image into SDDC Manager. But you cannot import an image into SDDC Manager until you have a vLCM Images enabled cluster.
To get past this chicken-and-egg scenario, the administrator needs to create an empty cluster within the Management Domain where they can set up the image requirements and assign the firmware and driver profiles validated during the planning and preparation phase for the initial cluster build. The following figure provides an example of creating the temporary cluster needed to configure vLCM Images.
Figure 1 Creating a temporary cluster to enable vLCM Images as part of the initial setup.
When defining vLCM images, similar to when defining the initial baseline node images, customers are responsible for ensuring that these images are validated against the VCF software BOM that has been certified for the version of VCF that is installed in the environment.
When you are satisfied with the image configuration and you have defined the Driver, Firmware, and Cluster Profiles, export the required JSON, ESXi ISO, and ZIP files from the vSphere UI to your local file system, as shown in the following figure. These files include:
- SOFTWARE_SPEC_1386209123.json
- ISO_IMAGE_1904428419.iso
- OFFLINE_BUNDLE_1829789659.zip
Figure 2 Exporting Images
Next, within the vCenter UI, go to the Developer Center menu and choose the API Explorer tab. At this stage you need to run several API commands.
To do this, first select your endpoint (vCenter Server) from the drop-down, then select the vCenter-related APIs. You will then be presented with all the applicable vCenter APIs for your chosen endpoint. Expand the Cluster section and execute the GET API command for /rest/vcenter/cluster, as shown in the following figure.
Figure 3 In Developer Center: List all Clusters
This displays all the clusters managed by that vCenter and provides an identifier for each cluster. Click the vcenter.cluster.summary entry for the temporary cluster (Dell-VSRN-Temp) and copy its cluster ID value (Domain-c2022 in my example); you will use it in the next step.
Change the focus on the API explorer to ESX and execute a GET API command for /api/esx/settings/clusters/Domain-c2022/software.
Fill in the cluster id parameter (Domain-c2022) as the required value to run the API command (see the following figure). Once executed, click the download JSON option and an additional JSON file downloads to your local file system.
Figure 4 Execute the Cluster software API Command
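To make these two calls more concrete, here is a minimal Python sketch of the same sequence outside of the API Explorer, using the vSphere Automation REST API. The vCenter hostname, credentials, and cluster name are placeholders, and the newer /api/... paths are used in place of the /rest/... bindings shown above (which wrap their responses in a value object); verify the exact paths in the API Explorer for your vCenter build.
```python
# Minimal sketch, assuming placeholder hostname/credentials and a temporary cluster
# named "Dell-VSRN-Temp" (as in Figure 1). Requires the 'requests' package.
import json

import requests

VCENTER = "vcenter.example.local"              # placeholder vCenter FQDN
USERNAME = "administrator@vsphere.local"       # placeholder SSO account
PASSWORD = "changeme"
TEMP_CLUSTER = "Dell-VSRN-Temp"                # temporary vLCM Images-enabled cluster

session = requests.Session()
session.verify = False  # lab-only; use a trusted CA certificate in production

# 1. Create an API session (the API Explorer does this for you when you log in).
resp = session.post(f"https://{VCENTER}/api/session", auth=(USERNAME, PASSWORD))
resp.raise_for_status()
session.headers["vmware-api-session-id"] = resp.json()

# 2. List clusters (the call shown in Figure 3) and pick out the temporary cluster's ID.
clusters = session.get(f"https://{VCENTER}/api/vcenter/cluster").json()
cluster_id = next(c["cluster"] for c in clusters if c["name"] == TEMP_CLUSTER)
print("Cluster ID:", cluster_id)  # for example, domain-c2022

# 3. Fetch the cluster's desired software specification (the call shown in Figure 4)
#    and save it as the response-body JSON file that SDDC Manager expects.
software = session.get(
    f"https://{VCENTER}/api/esx/settings/clusters/{cluster_id}/software"
).json()
with open("Response-body.json", "w") as f:
    json.dump(software, f, indent=2)
```
The file written at the end corresponds to the Response-body.json that the download option in Figure 4 produces.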
At this point, you have four files:
- SOFTWARE_SPEC_1386209123.json
- ISO_IMAGE_1904428419.iso
- OFFLINE_BUNDLE_1829789659.zip
- Response-body.json
Finally, within SDDC Manager, select Repository, then Image Management, and then Import Cluster Image. Here you need to import the four files mentioned above. As you import the individual files, make sure that you specify a name for the cluster image and import the files in the correct order. Once the import is successful, you can start to deploy your first vLCM Images-enabled workload domain.
How is it Done Using VxRail LCM?
VxRail's key integrations with Cloud Foundation start even before any VCF on VxRail components are installed, at Dell facilities as part of the Dell manufacturing process. Here, the nodes are loaded with a VxRail Continuously Validated State image that includes all pre-validated vSphere, vSAN, and hardware firmware components. This means that once VxRail nodes are racked, stacked, and powered on within your datacenter, they are ready to be used to install a new VCF instance, create new workload domains, expand existing workload domains with new clusters, or expand clusters on an existing system.
For new VCF deployments, Cloud Builder has unique integrated workflows that tailor a streamlined deployment process with VxRail, leveraging existing capabilities for VxRail cluster management operations. Once SDDC Manager is deployed using Cloud Builder, connectivity to two update bundle repositories can then be configured.
Figure 5 SDDC Manager Repository Settings
The first is the VMware repository, which is used for VMware software such as vSphere, NSX, and SDDC Manager. The second is the Dell EMC repository for the VxRail software. Once you configure and authenticate with the appropriate user account credentials in SDDC Manager, it automatically connects to the VxRail repository at Dell EMC and pulls down the next available VxRail update package. Each available VxRail update package will have already been validated, tested, and certified with the version of VCF running in the customer's environment.
Figure 6 VxRail Software Bundle in SDDC Manager
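For administrators who prefer to script this step, the depot connections can also be configured through the SDDC Manager public API. The sketch below is a minimal illustration only; the /v1/tokens and /v1/system/settings/depot endpoints and the dellEmcSupportAccount field reflect my reading of the VCF API reference and should be verified for your VCF release, and the hostnames and credentials are placeholders.
```python
# Minimal sketch, assuming placeholder hostname/credentials. Endpoint paths and field
# names below should be checked against the VCF API reference for your release.
import requests

SDDC_MANAGER = "sddc-manager.example.local"    # placeholder SDDC Manager FQDN

s = requests.Session()
s.verify = False  # lab-only; use a trusted CA certificate in production

# 1. Obtain an API access token from SDDC Manager.
token = s.post(
    f"https://{SDDC_MANAGER}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "changeme"},
).json()["accessToken"]
s.headers["Authorization"] = f"Bearer {token}"

# 2. Configure both depot accounts so SDDC Manager can pull VMware and VxRail bundles.
#    'dellEmcSupportAccount' is assumed to be the VxRail (Dell EMC) depot credential field.
depot_settings = {
    "vmwareAccount": {"username": "my-vmware-id@example.com", "password": "changeme"},
    "dellEmcSupportAccount": {"username": "my-dell-id@example.com", "password": "changeme"},
}
resp = s.put(f"https://{SDDC_MANAGER}/v1/system/settings/depot", json=depot_settings)
resp.raise_for_status()
print(resp.json())
```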
The following figure summarizes the steps needed for defining initial baseline node images for VCF using vLCM Images and VCF using VxRail Manager.
Figure 7 Initial baseline node images and configuration
Planning for a Cluster Update
How is it Done Using vLCM Images?
Although we have reviewed this in detail before, it is worth mentioning again: ownership of this process lies on the shoulders of the administrator. In this model, customers take on the responsibility of validating and testing the software and driver combinations of their desired state image to ensure full stack interoperability and integrity, and of ensuring that the component versions fall within the supported VCF software BOM being used in their environment.
How is it Done Using VxRail LCM?
The VxRail approach is much different. The VxRail engineering teams spend thousands of test hours across multiple platforms to validate each release. The end user is given a single image to leverage, knowing that Dell Technologies has completed the very heavy lift of platform validation. As I mentioned above, SDDC Manager downloads the correct bundle based on your VCF release and marks it available within SDDC Manager. When customers see a new image available, they are guaranteed that it is already compatible with their VCF deployment. This curated bundle management and validation is part of the turnkey experience customers gain with VCF on VxRail.
The following figure illustrates the differences in planning a cluster update for VCF with vLCM Images and VCF with VxRail.
Figure 8 Planning for a cluster update
Executing the Cluster Update
How is it Done Using vLCM Images?
Defining the baseline node image is vital for maintaining the hardware health of your cluster, and defining a target version for your system's next update is equally important. It should involve testing the specific combination of components for the desired image, in addition to the standard interoperability validation performed by the Ready Node hardware vendor when updates to server hardware firmware and drivers are released. Once the hardware baseline is known, the ESXi image must be imported into vCenter. Drivers, firmware, and Cluster Profiles must then be defined in vCenter so they are ready to be exported.
We use the same process as originally outlined for the initial setup: export the images, run the relevant API calls, and import the files into SDDC Manager. Every future update follows the same process I've outlined. Additional firmware and driver profiles must be created if new workload domains or clusters are added with different server hardware configurations. Thus, a deployment that caters to multiple hardware use cases will end up with several driver/firmware profiles that need to be managed and tested independently.
How is it Done Using VxRail LCM?
SDDC Manager is the orchestration engine. It:
- Defines when each update is applicable
- Ensures that each update is made available in the correct order
- Ensures that components such as SDDC Manager, vCenter, NSX-T, and the VxRail components are updated and coordinated in the correct manner
For VxRail LCM updates, SDDC Manager sends API calls directly to the VxRail Manager of each cluster being updated to initiate a cluster upgrade. From that point on, VxRail Manager takes ownership of the VxRail update execution using the same native VxRail Manager LCM execution process that is used in non-VCF VxRail deployments. During LCM execution, VxRail Manager provides constant feedback to SDDC Manager throughout the process. VxRail updates these components:
- VMware ESXi
- vSAN
- Hardware firmware
- Hardware drivers
To understand the full range of hardware components that are updated with each release, I urge you to check out the VxRail 7.0 Support Matrix.
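To give a feel for the handoff described above, the following minimal Python sketch shows the style of REST call that VxRail Manager exposes and that SDDC Manager drives during an update. The host, credentials, and endpoint path are assumptions to check against the VxRail API documentation for your release; in a VCF on VxRail environment you would normally never need to make these calls yourself.
```python
# Minimal sketch, assuming placeholder hostname/credentials and that the VxRail Manager
# REST API exposes system information at /rest/vxm/v1/system (verify for your release).
import requests

VXM = "vxrail-manager.example.local"                  # placeholder VxRail Manager FQDN
AUTH = ("administrator@vsphere.local", "changeme")    # vCenter SSO admin credentials

# Read the cluster's currently installed VxRail Continuously Validated State version.
system = requests.get(f"https://{VXM}/rest/vxm/v1/system", auth=AUTH, verify=False).json()
print("Installed VxRail version:", system.get("version"))

# During a VCF on VxRail update, it is SDDC Manager (not the administrator) that invokes
# the VxRail Manager LCM upgrade endpoint for each cluster and then polls the returned
# request for progress, which is how the constant feedback surfaces in SDDC Manager.
```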
The following figure summarizes the steps required to execute cluster updates for VCF with vLCM Images and VCF with VxRail.
Figure 9 Executing a cluster update workflow
Sustaining Cluster Integrity Over the Long Term
How is it Done Using vLCM Images?
Unlike standalone vSphere cluster deployments, where vLCM Images manages images on a per-cluster basis, VMware Cloud Foundation allows you to manage all cluster images once imported and repurpose them for other clusters or workload domains. That is a definite improvement, but each new update still requires you to create the image, firmware, and driver combinations in vCenter first and then import them into SDDC Manager. And of course, this is after you have repeated the planning phases and completed all the driver and firmware interoperability testing.
Also, it is important to note that if your cluster is being managed by vLCM Images and you need to expand it with hardware that is not identical to the original hosts (this can happen when hardware components go end of sale or when different nodes have different hardware or firmware requirements), you can no longer leverage vLCM Images, nor can you change back to using vLCM Baselines. So proper planning is very important.
How is it Done Using VxRail LCM?
VxRail LCM supports customers' ability to grow their clusters with heterogeneous nodes over time. Different generations of servers, or servers with differing hardware characteristics, can be mixed within a cluster in accordance with application profile requirements. A single pre-validated image is made available that covers all hardware profiles. All of this is factored into each VxRail Continuously Validated State update bundle, which is applied to each individual cluster based on its current component version state.
Conclusion
When we piece together the bigger picture with all the LCM stages combined, we get an excellent representation of the ease of management when VxRail is at the heart of your VCF deployment.
Figure 10 Comparing vLCM Images and VxRail LCM cluster update operations
It’s clear to see that VxRail, with its pre-validated engineered approach, can provide a differentiated customer experience when it comes to operational efficiency, during both the initial deployment phase and the continuous lifecycle management of the HCI.
While vLCM Images provides a significant improvement over manually applying updates, the planning and testing required can become quite iterative. And when newer hardware profiles are introduced over the lifespan of the system, things become more difficult to manage, introducing additional complexity.
By contrast, VxRail provides a single update file for each release that is curated and made accessible within SDDC Manager natively, with no additional administration effort required. It’s simplicity at its finest, and simplicity is at the core of the VxRail turnkey customer experience.
Cliff Cahill
Dell EMC VxRail Engineering Technologist
Twitter: @cliffcahill
LinkedIn: http://linkedin.com/in/cliffcahill
Additional Resources
VCF on VxRail Interactive Demo
VxRail page on DellTechnologies.com
Related Blog Posts

Take VMware Tanzu to the Cloud Edge with Dell Technologies Cloud Platform
Wed, 12 Jul 2023 16:23:35 -0000
Read Time: 0 minutes
Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.1.0 on VxRail 7.0.100.
This release brings support for the latest versions of VMware Cloud Foundation and Dell EMC VxRail to the Dell Technologies Cloud Platform and provides a simple and consistent operational experience for developer ready infrastructure across core, edge, and cloud. Let’s review these new features.
Updated VMware Cloud Foundation and VxRail BOM
Cloud Foundation 4.1 on VxRail 7.0.100 introduces support for the latest versions of the SDDC software components listed below:
- vSphere 7.0 U1
- vSAN 7.0 U1
- NSX-T 3.0 P02
- vRealize Suite Lifecycle Manager 8.1 P01
- vRealize Automation 8.1 P02
- vRealize Log Insight 8.1.1
- vRealize Operations Manager 8.1.1
- VxRail 7.0.100
For the complete list of component versions in the release, please refer to the VCF on VxRail release notes. A link is available at the end of this post.
VMware Cloud Foundation Software Feature Updates
VCF on VxRail Management Enhancements
vSphere Cluster Level Services (vCLS)
vSphere Cluster Services is a new capability introduced in the vSphere 7 Update 1 release that is included as a part of VCF 4.1. It runs as a set of virtual machines deployed on top of every vSphere cluster. Its initial functionality provides foundational capabilities that are needed to create a decoupled and distributed control plane for clustering services in vSphere. vCLS ensures cluster services like vSphere DRS and vSphere HA are all available to maintain the resources and health of the workloads running in the clusters independent of the availability of vCenter Server. The figure below shows the components that make up vCLS from the vSphere Web Client.
Figure 1
Not only is vSphere 7 providing modernized data services like embedded vSphere Native Pods with vSphere with Tanzu, but features like vCLS are also beginning the evolution toward modernized, distributed control planes!
VCF Managed Resources and VxRail Cluster Object Renaming Support
VCF can now rename resource objects post creation, including the ability to rename domains, datacenters, and VxRail clusters.
Domain objects are managed by SDDC Manager. As a result, you will find additional options within the SDDC Manager UI that allow you to rename them.
VxRail cluster objects are managed by a given vCenter Server instance. To change a cluster name, you need to change it within vCenter Server. Once you do, go back to SDDC Manager and refresh the UI; the new cluster name is retrieved by SDDC Manager and shown.
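As an illustration of that vCenter-side rename, here is a minimal pyVmomi sketch; the vCenter hostname, credentials, and cluster names are placeholders, and the SDDC Manager side still just needs a UI refresh afterwards.
```python
# Minimal sketch, assuming placeholder vCenter hostname/credentials and cluster names.
# Requires the 'pyvmomi' package.
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only; use verified TLS in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Find the VxRail cluster object by its current name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "VxRail-Virtual-SAN-Cluster")
    view.Destroy()

    # Rename_Task is the standard vCenter managed-entity rename operation.
    task = cluster.Rename_Task("prod-wld01-cluster01")
    while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
        time.sleep(1)
    print("Rename finished with state:", task.info.state)
finally:
    Disconnect(si)
```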
In addition to the domain and VxRail cluster object rename, SDDC Manager now supports the use of a customized Datacenter object name. The enhanced VxRail VI WLD creation wizard has been updated to include an input for the Datacenter name, which is automatically imported into the SDDC Manager inventory during the VxRail VI WLD creation workflow. Note: Make sure the Datacenter name matches the one used during the VxRail Cluster First Run. The figure below shows the Datacenter input step in the enhanced VxRail VI WLD creation wizard within SDDC Manager.
Figure 2
Being able to customize resource object names makes VCF on VxRail more flexible in aligning with an IT organization’s naming policies.
VxRail Integrated SDDC Manager WLD Cluster Node Removal Workflow Optimization
Furthering the Dell Technologies and VMware co-engineering integration efforts for VCF on VxRail, new workflow optimizations have been introduced in VCF 4.1 that take advantage of VxRail Manager APIs for VxRail cluster host removal operations.
When the time comes for VCF on VxRail cloud administrators to remove hosts from WLD clusters and repurpose them for other domains, admins will use the SDDC Manager “Remove Host from WLD Cluster” workflow to perform this task. This remove host operation has now been fully integrated with native VxRail Manager APIs to automate removing physical VxRail hosts from a VxRail cluster as a single end-to-end automated workflow that is kicked off from the SDDC Manager UI or VCF API. This integration further simplifies and streamlines VxRail infrastructure management operations all from within common VMware SDDC management tools. The figure below illustrates the SDDC Manager sub tasks that include new VxRail API calls used by SDDC Manager as a part of the workflow.
Figure 3
Note: Removed VxRail nodes require reimaging prior to repurposing them into other domains. This reimaging currently requires Dell EMC support to perform.
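For teams driving this through the VCF API instead of the SDDC Manager UI, a minimal sketch of the cluster compaction call is shown below. The clusterCompactionSpec payload shape is my recollection of the VCF public API reference and should be validated before use; the hostnames, IDs, and credentials are placeholders.
```python
# Minimal sketch, assuming placeholder hostname/credentials and IDs. The payload shape
# (clusterCompactionSpec) should be checked against the VCF API reference you are running.
import requests

SDDC_MANAGER = "sddc-manager.example.local"                 # placeholder FQDN
CLUSTER_ID = "00000000-1111-2222-3333-444444444444"         # placeholder WLD cluster ID
HOST_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"            # placeholder ID of host to remove

s = requests.Session()
s.verify = False  # lab-only; use a trusted CA certificate in production

# Authenticate against SDDC Manager.
token = s.post(
    f"https://{SDDC_MANAGER}/v1/tokens",
    json={"username": "administrator@vsphere.local", "password": "changeme"},
).json()["accessToken"]
s.headers["Authorization"] = f"Bearer {token}"

# PATCH the cluster with a compaction spec listing the host(s) to remove; SDDC Manager
# then orchestrates the VxRail Manager host-removal calls shown in Figure 3.
update_spec = {"clusterCompactionSpec": {"hosts": [{"id": HOST_ID}]}}
resp = s.patch(f"https://{SDDC_MANAGER}/v1/clusters/{CLUSTER_ID}", json=update_spec)
resp.raise_for_status()
print("Removal workflow started:", resp.json())
```
Whether started from the UI or the API, the resulting workflow runs the same SDDC Manager sub-tasks shown in Figure 3.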
I18N Internationalization and Localization (SDDC Manager)
SDDC Manager now has international language support that meets the I18N Internationalization and Localization standard. Options to select the desired language are available in the Cloud Builder UI, which installs SDDC Manager using the selected language settings. SDDC Manager will have localization support for the following languages – German, Japanese, Chinese, French, and Spanish. The figure below illustrates an example of what this would look like in the SDDC Manager UI.
Figure 4
vRealize Suite Enhancements
VCF Aware vRSLCM
New in VCF 4.1, the vRealize Suite is fully integrated into VCF. SDDC Manager deploys vRSLCM and creates a two-way communication channel between the two components. When deployed, vRSLCM is VCF-aware and reports back to SDDC Manager which vRealize products are installed. The installation of vRealize Suite components utilizes built-in standardized VVD best-practice deployment designs leveraging Application Virtual Networks (AVNs).
Software Bundles for the vRealize Suite are all downloaded and managed through the SDDC Manager. When patches or updates become available for the vRealize Suite, lifecycle management of the vRealize Suite components is controlled from the SDDC Manager, calling on vRSLCM to execute the updates as part of SDDC Manager LCM workflows. The figure below showcases the process for enabling vRealize Suite for VCF.
Figure 5
VCF Multi-Site Architecture Enhancements
VCF Remote Cluster Support
VCF Remote Cluster Support enables customers to extend their VCF on VxRail operational capabilities to ROBO and Cloud Edge sites, enabling consistent operations from core to edge. Pair this with an awesome selection of VxRail hardware platform options and Dell Technologies has your Edge use cases covered. More on hardware platforms later. For a great, detailed explanation of this exciting new feature, check out the link to a detailed VMware blog post on the topic at the end of this post.
VCF LCM Enhancements
NSX-T Edge and Host Cluster-Level and Parallel Upgrades
With previous VCF on VxRail releases, NSX-T upgrades were all-encompassing, meaning that a single update required updates to all the transport hosts as well as the NSX Edge and Manager components in one operation.
With VCF 4.1, support has been added to perform staggered NSX updates to help minimize maintenance windows. Now, an NSX upgrade can consist of three distinct parts:
- Updating the Edges
  - Can be one job or multiple jobs (rerun the wizard)
  - Must be done before moving to the hosts
- Updating the transport hosts
- Updating the NSX Managers, once the hosts within the clusters have been updated
Multiple NSX Edge and/or host transport clusters within an NSX-T instance can be upgraded in parallel. The administrator can choose to upgrade some clusters without having to choose all of them. Clusters within an NSX-T fabric can also be chosen to be upgraded sequentially, one at a time. NSX-T components can be updated in several ways:
- NSX-T Edges and Host Clusters within an NSX-T instance can be upgraded together in parallel (default)
- NSX-T Edges can be upgraded independently of NSX-T Host Clusters
- NSX-T Host Clusters can be upgraded independently of NSX-T Edges only after the Edges are upgraded first
- NSX-T Edges and Host Clusters within an NSX-T instance can be upgraded sequentially one after another.
The figure below visually depicts these options.
Figure 6
These options provide Cloud admins with a ton of flexibility so they can properly plan and execute NSX-T LCM updates within their respective maintenance windows. More flexible and simpler operations. Nice!
VCF Security Enhancements
Read-Only Access Role, Local and Service Accounts
A new ‘view-only’ role has been added to VCF 4.1. For some context, let’s talk a bit now about what happens when logging into the SDDC Manager.
First, you provide a username and password. This information is sent to SDDC Manager, which then sends it to the SSO domain for verification. Once verified, SDDC Manager can see which role the account has privileges for.
In previous versions of Cloud Foundation, the role would either be for an Administrator or it would be for an Operator.
Now, there is a third role available, called a 'Viewer'. As its name suggests, this is a view-only role that has no ability to create, delete, or modify objects. Users who are assigned this role may not see certain items in the SDDC Manager UI, such as the User screen. They may also see a message saying they are unauthorized to perform certain actions.
Also new, VCF now has a local account that can be used during an SSO failure. To understand why this is needed, consider what happens when the SSO domain is unavailable for some reason: the user would not be able to log in. To address this, administrators can now configure a VCF local account called admin@local. This account allows certain actions to be performed until the SSO domain is functional again. This VCF local account is defined in the deployment worksheet and used in the VCF bring-up process. If bring-up has already been completed and the local account was not configured, a warning banner is displayed in the SDDC Manager UI until the local account is configured.
Lastly, SDDC Manager now uses new service accounts to streamline communications between SDDC Manager and the products within Cloud Foundation. These new service accounts follow VVD guidelines for pre-defined usernames and are administered through the admin user account to improve communications between VCF components within SDDC Manager.
VCF Data Protection Enhancements
As described in this blog, with VCF 4.1, SDDC Manager backup-recovery workflows and APIs have been improved to add capabilities such as backup management, backup scheduling, retention policy, on-demand backup, and auto retries on failure. The improvements also include public APIs for the 3rd-party ecosystem and certified backup solutions from Dell PowerProtect.
VxRail Software Feature Updates
VxRail Networking Enhancements
VxRail 4 x 25Gbps pNIC redundancy
VxRail engineering continues to innovate in areas that drive more value for customers, and the latest VCF on VxRail release follows through on delivering just that. New in this release, customers can use the automated VxRail First Run process to deploy VCF on VxRail nodes using 4 x 25Gbps physical port configurations to run the VxRail System vDS for system traffic such as Management, vSAN, and vMotion. The physical port configuration of the VxRail nodes would include 2 x 25Gbps NDC ports and an additional 2 x 25Gbps PCIe NIC ports.
In this 4 x 25Gbps setup, NSX-T traffic runs on the same System vDS. But what is great here (and where the flexibility comes in) is that customers can also choose to separate NSX-T traffic onto its own NSX-T vDS that uplinks to separate physical PCIe NIC ports by using SDDC Manager APIs. This ability was first introduced in the last release and can also be leveraged here to expand the flexibility of VxRail host network configurations.
The figure below illustrates the option to select the base 4 x 25Gbps port configuration during VxRail First Run.
Figure 7
By allowing customers to run the VxRail System VDS across the NDC NIC ports and PCIe NIC ports, customers gain an extra layer of physical NIC redundancy and high availability. This has already been supported with 10Gbps based VxRail nodes. This release now brings the same high availability option to 25Gbps based VxRail nodes. Extra network high availability AND 25Gbps performance!? Sign me up!
VxRail Hardware Platform Updates
Recently introduced support for ruggedized D-Series VxRail hardware platforms (D560/D560F) continues to expand the range of VxRail hardware platforms supported in the Dell Technologies Cloud Platform.
These ruggedized and durable platforms are designed to meet the demand for more compute, performance, and storage, and, more importantly, the operational simplicity that delivers the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas.
These D-Series systems are a perfect match for the latest VCF Remote Cluster features introduced in Cloud Foundation 4.1.0, enabling Cloud Foundation with Tanzu on VxRail to reach these space-constrained and challenging ROBO/Edge sites, run cloud native and traditional workloads there, and extend existing VCF on VxRail operations to these locations! Cool, right?!
To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.
Well that about covers it all for this release. The innovation train continues. Until next time, feel free to check out the links below to learn more about DTCP (VCF on VxRail).
Jason Marques
Twitter - @vwhippersnapper
Additional Resources
VMware Blog Post on VCF Remote Clusters
Cloud Foundation on VxRail Release Notes
VxRail page on DellTechnologies.com
VCF on VxRail Interactive Demos

Announcing VMware Cloud Foundation 4.0.1 on Dell EMC VxRail 7.0
Wed, 03 Aug 2022 15:21:13 -0000
Read Time: 0 minutes
The latest Dell Technologies Cloud Platform release introduces new support for vSphere with Kubernetes for entry cloud deployments and more
Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1 on VxRail 7.0.
This release offers several enhancements including vSphere with Kubernetes support for entry cloud deployments, enhanced bring up features for more extensibility and accelerated deployments, increased network configuration options, and more efficient LCM capabilities for NSX-T components. Below is the full listing of features that can be found in this release:
- Kubernetes in the management domain: vSphere with Kubernetes is now supported in the management domain. With VMware Cloud Foundation Workload Management, you can deploy vSphere with Kubernetes on the management domain default cluster starting with only four VxRail nodes. This means that DTCP entry cloud deployments can take advantage of running Kubernetes containerized workloads alongside general purpose VM workloads on a common infrastructure!
- Multi-pNIC/multi-vDS during VCF bring-up: The Cloud Builder deployment parameter workbook now provides five vSphere Distributed Switch (vDS) profiles that allow you to perform bring-up of hosts with two, four, or six physical NICs (pNICs) and to create up to two vSphere Distributed Switches for isolating system (Management, vMotion, vSAN) traffic from overlay (Host, Edge, and Uplinks) traffic.
- Multi-pNIC/multi-vDS API support: The VCF API now supports configuring a second vSphere Distributed Switch (vDS) using up to four physical NICs (pNICs), providing more flexibility to support high performance use cases and physical traffic separation.
- NSX-T cluster-level upgrade support: Users can upgrade specific host clusters within a workload domain so that the upgrade can fit into their maintenance windows bringing about more efficient upgrades.
- Cloud Builder API support for bring-up operations: VCF on VxRail deployment workflows have been enhanced to support using a new Cloud Builder API for bring-up operations. VCF software installation on VxRail during VCF bring-up can now be done using either an API or the GUI, providing even more platform extensibility capabilities.
- Automated externalization of the vCenter Server for the management domain: Externalizing the vCenter Server that gets created during the VxRail first run (the one used for the management domain) is now automated as part of the bring-up process. This enhanced integration between the VCF Cloud Builder bring-up automation workflow and VxRail API helps to further accelerate installation times for VCF on VxRail deployments.
- BOM Updates: Updated VCF software Bill of Materials with new product versions.
Jason Marques
Twitter - @vwhippersnapper
Additional Resources