Jason Marques

Jason has been with Dell for 16+ years, helping customers realize the benefits of deploying Dell and VMware technologies into their environments to optimize their business impact. He focuses on communicating multicloud strategy for next generation IT. He supports initiatives surrounding the broad set of Dell/VMware integrated solutions, particularly with HCI and cloud native technologies.


Home > Integrated Products > VxRail > Blogs

VMware VxRail VMware Cloud Foundation

VCF on VxRail – More business-critical workloads welcome!

Jason Marques

Wed, 07 Feb 2024 22:30:10 -0000


New platform enhancements for stronger mobility and flexibility 

February 4, 2020


Today, Dell EMC has made the newest VCF 3.9.1 on VxRail 4.7.410 release available for download for existing VCF on VxRail customers, with availability for new customers planned for February 19, 2020. Let’s dive into what’s new in this latest version.

Expand your turnkey cloud experience with additional unique VCF on VxRail integrations

This release continues the co-engineering innovation efforts of Dell EMC and VMware to provide our joint customers with better outcomes. This time, we tackle security. VxRail password management for VxRail Manager accounts such as root and mystic, as well as ESXi accounts, has been integrated into the SDDC Manager UI Password Management framework. Now the components of the full SDDC and HCI infrastructure stack can be centrally managed as one complete turnkey platform using your native VCF management tool, SDDC Manager. Figure 1 illustrates what this looks like.

Figure 1


Support for Layer 3 VxRail Stretched Cluster Configuration Automation

Building off the support for Layer 3 stretched clusters introduced in VCF 3.9 on VxRail 4.7.300 using manual guidance, VCF 3.9.1 on VxRail 4.7.410 now supports automating the configuration of Layer 3 VxRail stretched clusters for both NSX-V and NSX-T backed VxRail VI Workload Domains. This is accomplished using the CLI in the VCF SOS Utility.


Greater management visibility and control across multiple VCF instances

For new installations, this release provides the ability to extend a common management and security model across two VCF on VxRail instance deployments by sharing a common Single Sign-On (SSO) domain between the PSCs of multiple VMware Cloud Foundation instances, so that the management and VxRail VI Workload Domains are visible in each of the instances. This is known as a Federated SSO Domain.

What does this mean exactly? Referring to Figure 2, this translates into the ability for Site B to join the SSO instance of Site A. This allows VCF to further align with the VMware Validated Design (VVD) guidance to share SSO domains where it makes sense, based on the Enhanced Linked Mode 150 ms RTT limitation.

This leverages a recent option made available in the VxRail First Run to connect the VxRail cluster to an existing SSO domain (PSCs). So, when you stand up the VxRail cluster for the second MGMT domain that is affiliated with the second VCF instance deployed in Site B, you connect it to the SSO (PSCs) that was created by the first MGMT domain of the VCF instance in Site A.


Figure 2


Application Virtual Networks – Enabling Stronger Mobility and Flexibility with VMware Cloud Foundation

One of the new features in the 3.9.1 release of VMware Cloud Foundation (VCF) is the use of Application Virtual Networks (AVNs) to completely abstract the hardware and realize the true value of a software-defined cloud computing model. Read more about it on VMware’s blog post here. Key note on this feature: it is automatically set up for new VCF 3.9.1 installations. Customers upgrading from a previous version of VCF need to engage the VMware Professional Services Organization (PSO) to configure AVN at this time. Figure 3 shows the message existing customers will see when attempting the upgrade.


Figure 3


VxRail 4.7.410 platform enhancements

VxRail 4.7.410 brings a slew of new hardware platforms and hardware configuration enhancements that expand your ability to support even more business-critical applications.


Figure 4


Figure 5


There you have it! We hope you find these latest features beneficial. Until next time…


Jason Marques

Twitter - @vwhippersnapper

Additional Resources

VxRail page on DellTechnologies.com

VCF on VxRail Guides

VCF on VxRail Whitepaper

VCF 3.9.1 on VxRail 4.7.410 Release Notes

VxRail Videos

VCF on VxRail Interactive Demos 



VCF on VxRail Cloud Foundation on VxRail

Learn About the Latest VMware Cloud Foundation 5.1 on Dell VxRail 8.0.200 Release

Jason Marques

Tue, 05 Dec 2023 17:06:36 -0000


Pairing more configuration flexibility with more integrated automation delivers even more simplified outcomes to meet more business needs!

More is what sums up this latest Cloud Foundation on VxRail release! It is based on the latest software bill of materials (BOM), featuring vSphere 8.0 U2, vSAN 8.0 U2, and NSX 4.1.2. Read on for more details.

 

Operations and serviceability user experience updates

SDDC Manager WFO UI custom host networking configuration enhancements

With this enhancement, administrators can configure networking for a new workload domain or VxRail cluster using either “Default” VxRail Network Profiles or a “Custom” Network Profile configuration. Cloud Foundation on VxRail already supports custom host networking configurations through the SDDC Manager WFO API deployment method; this new feature brings that support to the SDDC Manager WFO UI deployment method, making it even easier to operationalize.

The following demo walks through using the SDDC Manager WFO UI to create a new workload domain with a VxRail cluster that is configured with vSAN ESA and VxRail vLCM mode enabled and a custom network profile.

New VCF Infrastructure as Code (IaC) tooling with new Terraform VCF Provider and PowerCLI VCF Module

Infrastructure teams can now utilize the Terraform Provider for VCF and the new VCF module integrated into VMware’s official PowerCLI tool to practice Infrastructure as Code (IaC), allowing them to deploy, manage, and operate VMware Cloud Foundation on VxRail deployments.

By using prebuilt IaC best practices code that is designed to take advantage of interfacing with a single VCF API, IaC teams are able to perform infrastructure provisioning tasks that can accelerate IaC usage and lessen the burden to develop and maintain code for individual infrastructure components intended to deliver similar outcomes.
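Both tools ultimately drive the same single VCF public API mentioned above. As a rough illustration of what that single-API surface looks like from code, here is a minimal Python sketch that builds the authentication and domain-listing requests. The hostname is hypothetical, and the `/v1/tokens` and `/v1/domains` endpoint paths are assumptions based on the public SDDC Manager API, so treat this as a sketch rather than a verified client.

```python
# Hypothetical sketch: the Terraform provider and PowerCLI module both drive
# the single public VCF (SDDC Manager) REST API. The hostname, endpoint paths,
# and payload shapes below are illustrative assumptions, not verified values.
import json
from urllib.request import Request

SDDC_MANAGER = "https://sddc-manager.example.local"  # hypothetical hostname


def build_token_request(username: str, password: str) -> Request:
    """Build the login request that exchanges credentials for an API token."""
    payload = json.dumps({"username": username, "password": password}).encode()
    return Request(
        f"{SDDC_MANAGER}/v1/tokens",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def build_domains_request(token: str) -> Request:
    """Build the request that lists workload domains in the VCF instance."""
    return Request(
        f"{SDDC_MANAGER}/v1/domains",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )
```

The value of the IaC tooling is that Terraform and PowerCLI wrap calls like these in declarative resources and cmdlets, so teams do not have to write and maintain this plumbing themselves.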

Important Note: Not all operations using these tools may be supported in Cloud Foundation on VxRail. Please refer to tool documentation links at the bottom of this post for details.

    

LCM updates

Day 1 VxRail vLCM mode compatibility for management and workload domains

VMware Cloud Foundation 5.1 on VxRail 8.0.200 now supports the configuration and deployment of new domains using vSphere Lifecycle Manager Images (vLCM) enabled VxRail clusters, as depicted in Figure 1. vLCM enabled VxRail clusters can leverage VxRail Manager to unify not only your ESXi image but also your BIOS, firmware, and drivers through a single update process, all controlled and orchestrated by VxRail Manager via VxRail APIs using the integrated SDDC Manager’s native LCM operations experience. VxRail clusters will have their VxRail Continuously Validated State image managed at the cluster level by VxRail Manager, just like in VxRail standard LCM mode enabled clusters.


Figure 1. High-level VxRail vLCM mode architecture

Mixed-mode support for workload domains as a steady state

Existing VMware Cloud Foundation 5.x on VxRail 8.x deployments now allow administrators to run workload domains of different VCF 5.x versions as a steady state. Administrators can now update the management domain and any other workload domain of a VCF 5.0 deployment to the latest VCF 5.x version without the need to upgrade all workload domains. Mixed-mode support also allows administrators to leverage the benefits of new SDDC Manager features in the management domain without having to upgrade a full VCF 5.x on VxRail 8.x instance.

Asynchronous download support for SDDC Manager update precheck files

SDDC Manager update precheck files can now be downloaded and updated asynchronously from full release updates, in addition to the similar async VxRail-specific precheck file updates that already exist within VxRail Manager. This feature allows administrators to download, deploy, and run SDDC Manager update prechecks tailored to a specific VMware Cloud Foundation on VxRail release. SDDC Manager precheck files are created by VMware engineering and contain detailed checks for SDDC Manager to run prior to upgrading to a newer VCF on VxRail target release, as shown in the following figure.

Figure 2. High-level process of asynchronous download support for SDDC Manager update precheck files

 

Networking updates

Support for the separation of DvPG for management appliances and ESXi host (VMKernel) management

Prior to this release, the default networking topology deployed by VMware Cloud Foundation on VxRail consisted of ESXi host management interfaces (vmkernel interface) and management components (vCenter server, SDDC Manager, NSX components, VxRail Manager, etc.) being applied to the same Distributed Virtual Port Group (DvPG). This new DvPG separation feature enables traffic isolation between management component VMs and ESXi Host Management vmkernel Interfaces, helping align to an organization’s desired security posture. Figure 3 illustrates this new configuration architecture.

Figure 3. New DvPG architecture

Configure custom NSX Edge cluster without 2-tier routing (via API)

VMware Cloud Foundation 5.1 on VxRail 8.0.200 now provides the option to deploy a custom NSX Edge cluster without the need to configure both a Tier-0 and Tier-1 gateway. These types of NSX Edge cluster deployments can be configured using the SDDC Manager (API only).

Static IP-based NSX Tunnel End Point and Sub Transport Node Profile assignment support for L3 aware clusters and L2/L3 vSAN stretched clusters

VxRail stretched clusters that are deployed using vSAN OSA can now be configured with vLCM mode enabled. In addition, administrators can now configure NSX Host TEPs to utilize an NSX static IP pool and no longer need to manually maintain an external DHCP server to support Layer 3 vSAN OSA stretched clusters, as illustrated in the following figure.

Figure 4. TEP Configuration Flexibility Example for vSAN Stretched Clusters

Building off these capabilities, deployments of VxRail stretched clusters with vSAN OSA which are configured using static IP Pools can now also leverage Sub-Transport Node Profiles (Sub-TNP), a feature introduced with NSX-T 3.2.2 and NSX 4.1.

Sub-TNPs can be used to prepare clusters of hosts without L2 adjacency to the Host TEP VLAN. This is useful for customers with rack-based IP schemas and allows Host TEP IPs to be configured on their own separate networks. Configuring vSAN stretched clusters using NSX Sub-TNP provides increased security, allowing administrators to enable and configure Distributed Malware Prevention and Detection. An example of this is depicted in the following figure.

Figure 5. Sub-TNP vSAN L3 Stretched Cluster Configuration Example

Note: Stretched VxRail with vSAN ESA clusters are not yet supported.

Support for multiple VDS for NSX host networking configurations

This release now provides the option to configure multiple VDS for NSX through the SDDC Manager WFO UI and WFO API.

Administrators can now configure additional VxRail host VDS prepared for NSX (VDS for NSX) to configure using VLAN Transport Zones (VLAN TZs), as shown in the following figure. This provides administrators the added benefit of configuring NSX Distributed Firewall (DFW) for workloads in VLAN transport zones, allowing security to be more granular. These capabilities further simplify the configuration of advanced networking and security for Cloud Foundation on VxRail.


Figure 6. Configuring additional VxRail host VDS for NSX to configure using VLAN TZs

 

Security and access updates

OKTA SSO identity federation support

VMware Cloud Foundation 5.1 on VxRail 8.0.200 now supports the option to configure the VMware Identity Broker for federation using Okta (a third-party IDP). Once configured, federated users can seamlessly move between the vCenter Server and NSX Manager consoles without being prompted to re-authenticate.

 

Storage updates

vSAN OSA/ESA support for management and workload domain VxRail clusters

VMware Cloud Foundation 5.1 on VxRail 8.0.200 adds support for both vSAN OSA-based and vSAN ESA-based VxRail clusters when deploying a new management domain (greenfield VCF on VxRail instance) and new workload domains/clusters in VCF on VxRail instances that have been upgraded to this latest release. VCF requires that vSAN ESA-based cluster deployments have vLCM mode enabled. Also, as of this release, only 15th generation VxRail vSAN ESA compatible hardware platforms are supported. 16th generation VxRail platform support is planned for a future release.
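As a quick illustration of the deployment constraints just described, the following Python sketch (purely illustrative, with hypothetical names, not a Dell or VMware tool) encodes the two rules: ESA-based clusters require vLCM mode, and at this release only 15th generation platforms are supported.

```python
# Illustrative sketch of the vSAN ESA constraints stated in this release:
# ESA-based clusters must have vLCM mode enabled, and only 15th generation
# VxRail vSAN ESA compatible platforms are supported as of this release.
def validate_esa_cluster(vlcm_enabled: bool, platform_generation: int) -> list[str]:
    """Return a list of constraint violations for a planned vSAN ESA cluster."""
    errors = []
    if not vlcm_enabled:
        errors.append("vSAN ESA clusters must have vLCM mode enabled")
    if platform_generation != 15:
        errors.append("only 15th generation vSAN ESA platforms are supported")
    return errors
```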

Support for vSAN OSA/ESA remote datastores as principal storage when used with VxRail dynamic node workload domain clusters

This release adds support for VxRail dynamic node compute-only clusters in cross-cluster capacity sharing use cases. This means that vSAN OSA or ESA remote datastores sourced from a standard VxRail HCI cluster with vSAN within the same workload domain can now be used as principal storage for VxRail dynamic node compute-only workload domain clusters. This capability is available via the SDDC Manager WFO script deployment method only.

 

Platform and scale updates

Increased VCF remote cluster maximum support for up to 16 nodes and up to 150ms latency

There are new validated updates to the maximum supported latency requirements for VCF remote clusters. These links now require at least 10 Mbps of available bandwidth and latency below 150 ms.

There have also been updates to the VCF remote cluster size scalability ranges. A VCF remote cluster now requires a minimum of 3 hosts when using local vSAN as cluster principal storage, or 2 hosts when using supported Dell external storage as principal storage with VxRail dynamic nodes. On the maximum scale side, VCF remote clusters cannot exceed the new limit of 16 VxRail hosts in either case.
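Put together, the sizing and link requirements above can be sketched as a simple pre-deployment check. This is illustrative logic only, derived from the numbers in this post; the function name and inputs are hypothetical.

```python
# Illustrative pre-deployment check for VCF remote clusters, using the limits
# described in this post: 3-host minimum with local vSAN principal storage,
# 2-host minimum with Dell external storage on dynamic nodes, 16-host maximum,
# WAN latency below 150 ms, and at least 10 Mbps of available bandwidth.
def validate_remote_cluster(hosts: int, principal_storage: str,
                            latency_ms: float, bandwidth_mbps: float) -> list[str]:
    """Return a list of violations for a planned VCF remote cluster."""
    errors = []
    # vSAN principal storage needs 3 hosts; external storage + dynamic nodes needs 2
    min_hosts = 3 if principal_storage == "vsan" else 2
    if hosts < min_hosts:
        errors.append(f"need at least {min_hosts} hosts for {principal_storage} principal storage")
    if hosts > 16:
        errors.append("remote clusters cannot exceed 16 VxRail hosts")
    if latency_ms >= 150:
        errors.append("WAN latency must be below 150 ms")
    if bandwidth_mbps < 10:
        errors.append("WAN link needs at least 10 Mbps of available bandwidth")
    return errors
```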

Note: Support for this feature is expected to be available after GA.

Support for 2-node workload domain VxRail dynamic node clusters when using VMFS on FC Dell external storage as principal storage

Cloud Foundation on VxRail now supports the ability to deploy 2-node dynamic node-based workload domain clusters when using VMFS on FC Dell external storage as cluster Principal storage.

Increased GPU scale for Private AI

NVIDIA GPUs can be configured for AI/ML to support a variety of use cases. In VMware Cloud Foundation 5.1 on VxRail 8.0.200, where GPUs have been configured as vGPUs, a VM can now be configured with up to 16 vGPU profiles that represent all of a GPU or parts of a GPU. These enhancements allow customers to support larger generative AI and large language model (LLM) workloads while delivering maximum performance.

 

VxRail hardware platform updates

15th generation VxRail E660N and P670N all-NVMe vSAN ESA hardware platform support

Cloud Foundation on VxRail administrators can now use VxRail hardware platforms that have been qualified to run vSAN ESA and VxRail 8.0.200 software. The all-NVMe VxRail platforms such as the 15th generation VxRail E660N and P670N can now be ordered and deployed in Cloud Foundation 5.1 on VxRail 8.0.200 environments.

 

Hybrid cloud management updates

VCF mixed licensing mode support

VMware Cloud Foundation 5.1 on VxRail 8.0.200 introduces support for both Key-based and Keyless licensing for existing deployments, as illustrated in the following figure. 

To enable this deployment, the management domain must first be cloud connected and subscribed. Once complete, enhanced SDDC Manager workflows give administrators the option to license a new workload domain using Keyless licenses (cloud connected subscription) or Key-based licenses (perpetual or cloud disconnected subscription). This deployment scenario is referred to as Mixed Licensing Mode. All licensing used within a domain must be homogeneous, meaning all components within a domain must use either Key-based or Keyless licenses, not a combination of the two.
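The per-domain homogeneity rule above can be sketched as a small check. This is illustrative only, with hypothetical names, and not part of any VMware tooling.

```python
# Illustrative sketch of the Mixed Licensing Mode rule: licensing may vary
# between domains, but within one domain every component must use the same
# license type (all Key-based or all Keyless).
def domain_license_mode(component_licenses: dict[str, str]) -> str:
    """Return the single license mode used by all components in a domain,
    or raise ValueError if the domain mixes license types."""
    modes = set(component_licenses.values())
    if len(modes) != 1:
        raise ValueError(f"mixed license types within one domain: {sorted(modes)}")
    return modes.pop()
```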

Figure 7. Understanding Key-based and Keyless licensing for existing deployments

VMware Cloud Disaster Recovery service for VCF cloud connected subscription deployments

VMware Cloud Foundation on VxRail cloud connected subscriptions now support VMware Cloud Disaster Recovery (VCDR) as an add-on service through the VMware Cloud Portal.

 

Other asynchronous release-independent related updates

VMware redefines Cloud Foundation product lifecycle policies

The product lifecycle policies for new and existing VMware Cloud Foundation releases have been redefined by VMware. VCF on VxRail product lifecycle policies align with VMware’s VCF product lifecycle policy. 

End of General Support for VCF 5.x is now four (4) years from the original VCF 5.0 launch date. This change allows IT teams to run their VMware Cloud Foundation on VxRail deployments for longer before planning an upgrade, providing more control for IT organizations to adopt a cloud operating model that evolves at the pace of their business. 

 

Summary

Well, there you have it! Another release in the books. If you want even more information beyond what was discussed here, feel free to check out the resources linked below. See you next time!

 

Resources

 

Author: Jason Marques

Twitter: @vwhippersnapper


VMware VxRail Kubernetes VMware Cloud Foundation Tanzu DTCP

Take VMware Tanzu to the Cloud Edge with Dell Technologies Cloud Platform

Jason Marques

Wed, 12 Jul 2023 16:23:35 -0000


Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.1.0 on VxRail 7.0.100.

This release brings support for the latest versions of VMware Cloud Foundation and Dell EMC VxRail to the Dell Technologies Cloud Platform and provides a simple and consistent operational experience for developer ready infrastructure across core, edge, and cloud. Let’s review these new features.

Updated VMware Cloud Foundation and VxRail BOM

Cloud Foundation 4.1 on VxRail 7.0.100 introduces support for the latest versions of the SDDC software components listed below:

  • vSphere 7.0 U1 
  • vSAN 7.0 U1 
  • NSX-T 3.0 P02
  • vRealize Suite Lifecycle Manager 8.1 P01
  • vRealize Automation 8.1 P02
  • vRealize Log Insight 8.1.1
  • vRealize Operations Manager 8.1.1
  • VxRail 7.0.100

For the complete list of component versions in the release, please refer to the VCF on VxRail release notes. A link is available at the end of this post.

VMware Cloud Foundation Software Feature Updates

VCF on VxRail Management Enhancements

vSphere Cluster Level Services (vCLS)

vSphere Cluster Services is a new capability introduced in the vSphere 7 Update 1 release that is included as a part of VCF 4.1. It runs as a set of virtual machines deployed on top of every vSphere cluster. Its initial functionality provides foundational capabilities that are needed to create a decoupled and distributed control plane for clustering services in vSphere. vCLS ensures cluster services like vSphere DRS and vSphere HA are all available to maintain the resources and health of the workloads running in the clusters independent of the availability of vCenter Server. The figure below shows the components that make up vCLS from the vSphere Web Client.

Figure 1

Not only is vSphere 7 providing modernized data services like embedded vSphere Native Pods with vSphere with Tanzu, but features like vCLS are also beginning the evolution toward modernized, distributed control planes!

VCF Managed Resources and VxRail Cluster Object Renaming Support

VCF can now rename resource objects post creation, including the ability to rename domains, datacenters, and VxRail clusters.

Domains are managed by SDDC Manager. As a result, the SDDC Manager UI now includes additional options that allow you to rename these objects.

VxRail cluster objects are managed by a given vCenter Server instance. To change a cluster name, change it within vCenter Server. Once you do, go back to SDDC Manager; after a refresh of the UI, the new cluster name is retrieved by SDDC Manager and shown.

In addition to the domain and VxRail cluster object rename, SDDC Manager now supports the use of a customized Datacenter object name. The enhanced VxRail VI WLD creation wizard now includes an input for the Datacenter name, which is automatically imported into the SDDC Manager inventory during the VxRail VI WLD creation workflow. Note: Make sure the Datacenter name matches the one used during the VxRail Cluster First Run. The figure below shows the Datacenter input step in the enhanced VxRail VI WLD creation wizard within SDDC Manager.

 

Figure 2

Being able to customize resource object names makes VCF on VxRail more flexible in aligning with an IT organization’s naming policies.

VxRail Integrated SDDC Manager WLD Cluster Node Removal Workflow Optimization

Furthering the Dell Technologies and VMware co-engineering integration efforts for VCF on VxRail, new workflow optimizations have been introduced in VCF 4.1 that take advantage of VxRail Manager APIs for VxRail cluster host removal operations.

When the time comes for VCF on VxRail cloud administrators to remove hosts from WLD clusters and repurpose them for other domains, admins use the SDDC Manager “Remove Host from WLD Cluster” workflow to perform this task. This remove host operation has now been fully integrated with native VxRail Manager APIs to automate removing physical VxRail hosts from a VxRail cluster as a single end-to-end automated workflow kicked off from the SDDC Manager UI or VCF API. This integration further simplifies and streamlines VxRail infrastructure management operations from within common VMware SDDC management tools. The figure below illustrates the SDDC Manager subtasks, including the new VxRail API calls used by SDDC Manager as part of the workflow.

 Figure 3

 Note: Removed VxRail nodes require reimaging prior to repurposing them into other domains. This reimaging currently requires Dell EMC support to perform.

I18N Internationalization and Localization (SDDC Manager)

SDDC Manager now has international language support that meets the I18N Internationalization and Localization standard. Options to select the desired language are available in the Cloud Builder UI, which installs SDDC Manager using the selected language settings. SDDC Manager will have localization support for the following languages – German, Japanese, Chinese, French, and Spanish. The figure below illustrates an example of what this would look like in the SDDC Manager UI.

Figure 4

vRealize Suite Enhancements 

VCF Aware vRSLCM

New in VCF 4.1, the vRealize Suite is fully integrated into VCF. SDDC Manager deploys vRSLCM and creates a two-way communication channel between the two components. When deployed, vRSLCM is VCF aware and reports back to SDDC Manager which vRealize products are installed. The installation of vRealize Suite components follows standardized VVD best-practice deployment designs, leveraging Application Virtual Networks (AVNs).

Software Bundles for the vRealize Suite are all downloaded and managed through the SDDC Manager. When patches or updates become available for the vRealize Suite, lifecycle management of the vRealize Suite components is controlled from the SDDC Manager, calling on vRSLCM to execute the updates as part of SDDC Manager LCM workflows. The figure below showcases the process for enabling vRealize Suite for VCF.

 Figure 5

VCF Multi-Site Architecture Enhancements

VCF Remote Cluster Support

VCF Remote Cluster Support enables customers to extend their VCF on VxRail operational capabilities to ROBO and Cloud Edge sites, enabling consistent operations from core to edge. Pair this with an awesome selection of VxRail hardware platform options, and Dell Technologies has your Edge use cases covered. More on hardware platforms later. For a great, detailed explanation of this exciting new feature, check out the link to a VMware blog post on the topic at the end of this post.

VCF LCM Enhancements

NSX-T Edge and Host Cluster-Level and Parallel Upgrades

With previous VCF on VxRail releases, NSX-T upgrades were all-encompassing, meaning that a single update required updating all the transport hosts as well as the NSX Edge and Manager components in one operation.

With VCF 4.1, support has been added to perform staggered NSX updates to help minimize maintenance windows. Now, an NSX upgrade can consist of three distinct parts:

  • Updating the Edges. This can be one job or multiple jobs (rerun the wizard), and must be done before moving to the hosts.
  • Updating the transport hosts.
  • Updating the NSX Managers, once the hosts within the clusters have been updated.

Multiple NSX Edge and/or host transport clusters within the NSX-T instance can be upgraded in parallel. The administrator can choose some clusters without having to choose all of them. Clusters within an NSX-T fabric can also be upgraded sequentially, one at a time.

NSX-T components can be updated in several ways:

  • NSX-T Edges and Host Clusters within an NSX-T instance can be upgraded together in parallel (default) 
  • NSX-T Edges can be upgraded independently of NSX-T Host Clusters
  • NSX-T Host Clusters can be upgraded independently of NSX-T Edges only after the Edges are upgraded first
  • NSX-T Edges and Host Clusters within an NSX-T instance can be upgraded sequentially one after another.

The figure below visually depicts these options.

 Figure 6

These options provide Cloud admins with a ton of flexibility so they can properly plan and execute NSX-T LCM updates within their respective maintenance windows. More flexible and simpler operations. Nice! 
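The ordering rule underlying all of these options (Edges before host clusters, host clusters before Managers) can be sketched as a simple validation. This is illustrative logic with hypothetical names, not VMware tooling.

```python
# Illustrative sketch of the staggered NSX-T upgrade ordering rule:
# Edge upgrades (possibly split across multiple jobs) must finish before
# host cluster upgrades, which must finish before the NSX Managers.
PHASE_ORDER = {"edges": 0, "hosts": 1, "managers": 2}


def is_valid_upgrade_sequence(phases: list[str]) -> bool:
    """True if the planned phases never run a later stage before an earlier one.
    Repeats of a stage (e.g. several edge jobs) are allowed."""
    ranks = [PHASE_ORDER[p] for p in phases]
    return ranks == sorted(ranks)
```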

VCF Security Enhancements

Read-Only Access Role, Local and Service Accounts

A new ‘view-only’ role has been added to VCF 4.1. For some context, let’s talk a bit now about what happens when logging into the SDDC Manager. 

First, you provide a username and password. This information is sent to SDDC Manager, which then sends it to the SSO domain for verification. Once verified, SDDC Manager can see what role the account is privileged for.

In previous versions of Cloud Foundation, the role would either be for an Administrator or it would be for an Operator. 

Now, there is a third role available called ‘Viewer’. As its name suggests, this is a view-only role with no ability to create, delete, or modify objects. Users assigned this role may not see certain items in the SDDC Manager UI, such as the User screen. They may also see a message saying they are unauthorized to perform certain actions.

Also new, VCF now has a local account that can be used during an SSO failure. To understand why this is needed, consider: what happens when the SSO domain is unavailable for some reason? In that case, the user would not be able to log in. To address this, administrators can now configure a VCF local account called admin@local. This account allows certain actions to be performed until the SSO domain is functional again. The VCF local account is defined in the deployment worksheet and used during the VCF bring-up process. If bring-up has already been completed and the local account was not configured, a warning banner is displayed in the SDDC Manager UI until the local account is configured.

Lastly, SDDC Manager now uses new service accounts to streamline communications between SDDC manager and the products within Cloud Foundation. These new service accounts follow VVD guidelines for pre-defined usernames and are administered through the admin user account to improve inter-VCF communications within SDDC Manager.

VCF Data Protection Enhancements

As described in this blog, with VCF 4.1, SDDC Manager backup-recovery workflows and APIs have been improved to add capabilities such as backup management, backup scheduling, retention policies, on-demand backups, and automatic retries on failure. The improvements also include public APIs for the 3rd-party ecosystem and certified backup solutions from Dell PowerProtect.

VxRail Software Feature Updates

VxRail Networking Enhancements

VxRail 4 x 25Gbps pNIC redundancy

VxRail engineering continues to innovate in areas that drive more value for customers, and the latest VCF on VxRail release delivers just that. New in this release, customers can use the automated VxRail First Run process to deploy VCF on VxRail nodes using 4 x 25Gbps physical port configurations to run the VxRail System vDS for system traffic such as Management, vSAN, and vMotion. The physical port configuration of the VxRail nodes would include 2 x 25Gbps NDC ports and an additional 2 x 25Gbps PCIe NIC ports.

In this 4 x 25Gbps setup, NSX-T traffic runs on the same System vDS. But what is great here (and where the flexibility comes in) is that customers can also choose to separate NSX-T traffic onto its own NSX-T vDS that uplinks to separate physical PCIe NIC ports by using SDDC Manager APIs. This ability was first introduced in the last release and can also be leveraged here to expand the flexibility of VxRail host network configurations.

The figure below illustrates the option to select the base 4 x 25Gbps port configuration during VxRail First Run.

Figure 7

By allowing customers to run the VxRail System VDS across the NDC NIC ports and PCIe NIC ports, customers gain an extra layer of physical NIC redundancy and high availability. This has already been supported with 10Gbps based VxRail nodes. This release now brings the same high availability option to 25Gbps based VxRail nodes. Extra network high availability AND 25Gbps performance!? Sign me up!

VxRail Hardware Platform Updates

Recently introduced support for the ruggedized D-Series VxRail hardware platforms (D560/D560F) continues to expand the range of VxRail hardware platforms supported in the Dell Technologies Cloud Platform.

These ruggedized and durable platforms are designed to meet the demand for more compute, performance, and storage, and, more importantly, the operational simplicity that delivers the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas.

These D-Series systems are a perfect match when paired with the latest VCF Remote Cluster features introduced in Cloud Foundation 4.1.0 to enable Cloud Foundation with Tanzu on VxRail to reach these space-constrained and challenging ROBO/Edge sites to run cloud native and traditional workloads, extending existing VCF on VxRail operations to these locations! Cool right?!

To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.

Well that about covers it all for this release. The innovation train continues. Until next time, feel free to check out the links below to learn more about DTCP (VCF on VxRail).

 

Jason Marques

Twitter - @vWhipperSnapper

 

Additional Resources

VMware Blog Post on VCF Remote Clusters

Cloud Foundation on VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail Videos

VCF on VxRail Interactive Demos



VMware VxRail Kubernetes Tanzu

Deploying VMware Tanzu for Kubernetes Operations on Dell VxRail: Now for the Multicloud

Jason Marques

Wed, 17 May 2023 15:56:43 -0000


VMware Tanzu for Kubernetes Operations (TKO) on Dell VxRail is a jointly validated Dell and VMware reference architecture solution designed to streamline Kubernetes use for the enterprise. The latest version has been extended to showcase multicloud application deployment and operations use cases. Read on for more details.

VMware Tanzu and Dell VxRail joint solutions

VMware TKO on Dell VxRail is yet another example of the strong partnership and joint development efforts that Dell and VMware continue to deliver on behalf of our joint customers so they can find success in their infrastructure modernization and digital transformation efforts. It is an addition to an existing portfolio of jointly developed and/or engineered products and reference architecture solutions that are built upon VxRail as the foundation to help customers accelerate and simplify their Kubernetes adoption.

Figure 1 highlights the joint VMware Tanzu and Dell VxRail offerings available today. Each is specifically designed to meet customers where they are in their journey to Kubernetes adoption.

Figure 1.  Joint VMware Tanzu and Dell VxRail solutions


VMware Tanzu For Kubernetes Operations on Dell VxRail reference architecture updates

This latest release of the jointly developed reference architecture builds off the first release. To learn more about what TKO on VxRail is and our objective for jointly developing it, take a look at this blog post introducing its first iteration.

Okay… Now that you are all caught up, let’s dive into what is new in this latest version of the reference architecture.

Additional TKO multicloud components

Let’s dive a bit deeper and highlight what we see as the essential building blocks for your cloud infrastructure transformation that are included in the TKO edition of Tanzu.

First, you’re going to need a consistent Kubernetes runtime like Tanzu Kubernetes Grid (TKG) so you can manage and upgrade clusters consistently as you move to a multicloud Kubernetes environment.

Next, you’re going to need some way to manage your platform and having a management plane like Tanzu Mission Control (TMC) that provides centralized visibility and control over your platform will be critical to helping you roll this out to distributed teams.

Also, having platform-wide observability like Aria Operations for Applications (formerly known as Tanzu/Aria Observability) ensures that you can effectively monitor and troubleshoot issues faster. Having data protection capabilities allows you to protect your data both at rest and in transit, which is critical if your teams will be deploying applications that run across clusters and clouds. And with NSX Advanced Load Balancer, TKO can also help you implement global load balancing and advanced traffic routing that allows for automated service discovery and north-south traffic management.

TKO on VxRail, VMware and Dell’s joint solution for core IT and cloud platform teams, can help you get started with your IT modernization project and enable you to build a standardized platform that will support you as you grow and expand to more clouds.

In the initial release of the reference architecture with VxRail, Tanzu Mission Control (TMC) and Aria Operations for Applications were used, and a solid on-premises foundation was established for building our multicloud architecture onward. The following figure shows the TKO items included in the first iteration.


Figure 2.  Base TKO components used in initial version of reference architecture

In this second phase, we extended the on-premises architecture to a true multicloud environment fit for a new generation of applications.

Added to the latest version of the reference architecture are VMware Cloud on AWS, an Amazon EKS service, Tanzu Service Mesh, and Global Server Load Balancing (GSLB) functionality provided by NSX Advanced Load Balancer to build a global namespace for modern applications.

New TMC functionalities were also added that were not part of the first reference architecture, such as EKS LCM and continuous delivery capabilities. Besides the fact that AWS is still the most widely used public cloud provider, AWS was used for this reference architecture because the VMware SaaS products have the most features available for AWS cloud services. Other hyperscaler public cloud provider services are still in the VMware development pipeline. For example, today you can perform life cycle management of Amazon EKS clusters through Tanzu Mission Control; this capability isn't available yet with other cloud providers. The following figure highlights the high-level set of components used in this latest reference architecture update.

Figure 3.  Additional components used in latest version of TKO on VxRail RA

New multicloud testing environment

To test this multicloud architecture, the Dell and VMware engineering teams needed a true multicloud environment. Figure 4 illustrates a snapshot of the multisite/multicloud lab infrastructure that our VMware and Dell engineering teams built to provide a “real-world” environment to test and showcase our solutions. We use this environment to work on projects with internal teams and external partners.

Figure 4.  Dell/VMware Multicloud Innovation Lab Environments

The environment is made up of five data centers and private clouds across the US, all connected by VMware SD-WAN, delivering a private multicloud environment. An Equinix data center provides the fiber backbone to connect with most public cloud providers as well as VMware Cloud Services. 

Extended TKO on VxRail multicloud architecture

Figure 5 shows the multicloud implementation of Tanzu for Kubernetes Operations on VxRail. Here you have K8s clusters on-premises and running on multiple cloud providers. 

 

Figure 5.  TKO on VxRail Reference Architecture Multicloud Architecture

Tanzu Mission Control (TMC), which is part of Tanzu for Kubernetes Operations, provides you with a management plane through which platform operators or DevOps team members can manage the entire K8s environment across clouds. Developers can have self-service access, authenticated by either cloud identity providers like Okta or Microsoft Active Directory or through corporate Active Directory federation. With TMC, you can assign consistent policies across your cross-cloud K8s clusters. DevOps teams can use the TMC Terraform provider to manage the clusters as infrastructure-as-code. 

Through TMC support for K8s open-source project technologies such as Velero, teams can back up clusters either to Azure blob, Amazon S3, or on-prem S3 storage solutions such as Dell ECS, Dell ObjectScale, or another object storage of their choice. 

When you enable data protection for a cluster, Tanzu Mission Control installs Velero with Restic (an open-source backup tool), configured to use the opt-out approach. With this approach, Velero backs up all pod volumes using Restic.
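For readers who want to see what the opt-out approach looks like in practice: with opt-out, every pod volume is backed up by default, and individual volumes are excluded with the `backup.velero.io/backup-volumes-excludes` pod annotation. The pod below is a minimal illustration expressed as a Python dictionary (the pod and volume names are made up, not anything Tanzu Mission Control generates):

```python
# Illustrative pod manifest for Velero's opt-out backup approach.
# Names ("app-db", "scratch", "data") are hypothetical examples.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "app-db",
        "annotations": {
            # Exclude the scratch volume; all other volumes stay in the backup.
            "backup.velero.io/backup-volumes-excludes": "scratch",
        },
    },
    "spec": {
        "volumes": [
            {"name": "data", "emptyDir": {}},     # backed up (default)
            {"name": "scratch", "emptyDir": {}},  # excluded via annotation
        ],
    },
}
```

The inverse (opt-in) approach would instead require annotating each volume to include, which is why opt-out is the more convenient default when most volumes should be protected.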

TMC integration with Aria Operations for Applications (formerly Tanzu/Aria Observability) delivers fine-grained insights and analytics about the microservices applications running across the multicloud environments.

TMC also has integration with Tanzu Service Mesh (TSM), so you can add your clusters to TSM. When the TKO on VxRail multicloud reference architecture is implemented, users would connect to their multicloud microservices applications through a single URL provided by NSX Advanced Load Balancer (formerly AVI Load Balancer) in conjunction with TSM. TSM provides advanced, end-to-end connectivity, security, and insights for modern applications—across application end users, microservices, APIs, and data—enabling compliance with service level objectives (SLOs) and data protection and privacy regulations.

TKO on VxRail business outcomes

Dell and VMware know what business outcomes matter to enterprises, and together we help customers map those outcomes to transformations.

Figure 6 highlights the business outcomes that customers are asking for and that we are delivering through the Tanzu portfolio on VxRail today. They also set the stage to inform our joint development teams about future capabilities we look forward to delivering.

Figure 6.  TKO on VxRail and business outcomes alignment

Learn more at Dell Technologies World 2023

Want to dive deeper into VMware Tanzu for Kubernetes Operations on Dell VxRail? Visit our interactive Dell Technologies and VMware booths at Dell Technologies World to talk with any of our experts. You can also attend our session Simplify & Streamline via VMware Tanzu for Kubernetes Operations on VxRail.

Also, feel free to check out the VMware Blog on this topic, written by Ather Jamil from VMware. It includes some cool demos showing TKO on VxRail in action!

Author: Jason Marques (Dell Technologies)
Twitter: @vWhipperSnapper

Contributor: Ather Jamil (VMware)

Resources

 

 


NVIDIA VxRail VMware Cloud Foundation GPU VCF VCF Async Patch Tool VCF on VxRail serviceability Cloud Foundation on VxRail

What’s New: VMware Cloud Foundation 4.5.1 on Dell VxRail 7.0.450 Release and More!

Jason Marques

Thu, 11 May 2023 15:55:52 -0000


This latest Cloud Foundation (VCF) on VxRail release includes updated versions of software BOM components, a bunch of new VxRail platform enhancements, and some good ol’ under-the-hood improvements that lay the groundwork for future features designed to deliver an even better customer experience. Read on for the highlights…

VCF on VxRail operations and serviceability enhancements

View Nvidia GPU hardware details in VxRail Manager vCenter plugin ‘Physical View’ and VxRail API

Leveraging the power of GPU acceleration with VCF on VxRail delivers a lot of value to organizations looking to harness their data. VCF on VxRail makes operationalizing infrastructure with Nvidia GPUs easier with native GPU visualization and details in the VxRail Manager vCenter Plugin 'Physical View' and the VxRail API. Administrators can quickly gain deeper hardware insights into the health and details of the Nvidia GPUs running on their VxRail nodes, easily map the hardware layer to the virtual layer, and improve infrastructure management and serviceability operations.

Figure 1 shows what this looks like.

Figure 1.  Nvidia GPU visualization and details – VxRail vCenter Plugin ‘Physical View’ UI

Support for the capturing, displaying, and proactive Dell dial home alerting for new VxRail iDRAC system events and alarms

Introduced in VxRail 7.0.450 and available in VCF 4.5.1 on VxRail 7.0.450 are enhancements to VxRail Manager intelligent system health monitoring of iDRAC critical and warning system events. With this new feature, new iDRAC warning and critical system events are captured, and through VxRail Manager integration with both iDRAC and vCenter, alarms are triggered and posted in vCenter.

Customers can view these events and alarms in the native vCenter UI and the VxRail Manager vCenter Plugin Physical View which contains KB article links in the event description to provide added details and guidance on remediation. These new events also trigger call home actions to inform Dell support about the incident.

These improvements are designed to improve the serviceability and support experience for customers of VCF on VxRail. Figures 2 and 3 show these events as they appear in the vCenter UI ‘All Issues’ view and the VxRail Manager vCenter Plugin Physical View UI, respectively.

                        

Figure 2.  New iDRAC events displayed in the vCenter UI ‘All Issues’ view

Figure 3.  New iDRAC events displayed in the VxRail Manager vCenter Plugin UI ‘Physical View’

Support for the capturing, displaying, and proactive dial home alerting for new iDRAC NIC port down events and alarms

To further improve system serviceability and simplify operations, VxRail 7.0.450 introduces the capturing of new iDRAC system events related to host NIC port link status. These include a NIC port down warning event, indicated by a NIC100 event code, and a 'NIC port is started/up' info event, indicated by a NIC101 event code.

A NIC100 event indicates either that a network cable is not connected, or that the network device is not working.

A NIC101 event indicates that the transition from a network link ‘down’ state to a network link ‘started’ or ‘up’ state has been detected on the corresponding NIC port.

VxRail Manager now creates new VxM events that track these NIC port states.

As a result, users can be alerted through an alarm in vCenter when a NIC port is down. VxRail Manager will also generate a dial-home event when a NIC port is down. When the condition is no longer present, VxRail Manager will automatically clear the alarm by generating a clear-alarm event.

Finally, to reduce the number of false positive events and prevent unnecessary alarm and dial home events, VxRail Manager implements an intelligent throttling mechanism to handle situations in which false positive alarms related to network maintenance activities could occur. This makes the alarms/events that are triggered more credible for an admin to act against.

Table 1 contains a summary of the details of these two events and the VxRail Manager serviceability behavior.

Table 1.  iDRAC NIC port down and started event and behavior details

Let’s double click on this serviceability behavior in a bit more detail.

Figure 4 depicts the behavior process flow VxRail Manager takes when iDRAC discovers and triggers a NIC port down system event. Let’s walk through the details now:

1.  The first thing that occurs is that iDRAC discovers that the NIC port state has gone down and triggers a NIC port down event.

2.  Next, iDRAC will send that event to VxRail Manager.

3.  At this stage VxRail Manager will validate how long the NIC port down event has been active and check whether a NIC port started (or up) event has been triggered within a 30-minute time frame since the original NIC port down event occurred. With this check, if there has not been a NIC port started event triggered, VxRail Manager will begin throttling NIC port down event communication in order to prevent duplicate alerts about the same event.

If during the 30-minute window, a NIC port started event has been detected, VxRail Manager will cease throttling and clear the event.

4. When the VxRail Manager event throttling state is active, VxRail Manager will log it in its event history.

5. VxRail Manager will then trigger a vCenter alarm and post the event to vCenter.

6.  Finally, VxRail Manager will trigger a NIC port down dial home event communication to backend Dell Support Systems, if connected.

  Figure 4.  Processing VxRail NIC port down events, and VxRail Manager throttling logic
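The 30-minute throttling window described in the walkthrough above can be sketched in a few lines of Python. This is only an illustration of the documented behavior; the class and method names are assumptions, not actual VxRail Manager internals:

```python
from datetime import datetime, timedelta

# Sketch of the NIC port down throttling behavior (illustrative only).
THROTTLE_WINDOW = timedelta(minutes=30)

class NicEventThrottler:
    def __init__(self):
        self.first_down_at = {}   # port -> timestamp of first NIC100 (port down)
        self.alarms = []          # events forwarded to vCenter / dial home

    def on_port_down(self, port, now):
        """Handle a NIC100 (port down) event from iDRAC."""
        first = self.first_down_at.get(port)
        if first is not None and now - first < THROTTLE_WINDOW:
            # Duplicate down event inside the 30-minute window: throttle it
            # so the same condition does not raise repeated alarms.
            return "throttled"
        # First occurrence (or window elapsed): raise alarm and dial home.
        self.first_down_at[port] = now
        self.alarms.append(("alarm", port))
        return "alarmed"

    def on_port_up(self, port, now):
        """Handle a NIC101 (port started) event: stop throttling, clear alarm."""
        if port in self.first_down_at:
            del self.first_down_at[port]
            self.alarms.append(("clear", port))
            return "cleared"
        return "ignored"
```

A second port-down event five minutes after the first would be throttled, while a port-up event clears the alarm and resets the state, matching the process flow above.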

Figure 5 shows what this looks like in the vCenter UI.

  Figure 5.  VxRail NIC port down trigger alarm in vCenter UI

Figure 6 shows what this looks like in the VxRail Manager vCenter Plugin ‘Physical View’ UI.

 Figure 6.  VxRail Manager vCenter Plugin ‘Physical View’ UI view of a VxRail NIC port down event

VCF on VxRail storage updates

Support for new PowerMax 2500 and 8500 storage arrays with VxRail 14G and 15G dynamic nodes using VMFS on FC principal storage

Starting in VCF 4.5.1 on VxRail 7.0.450, support has been added for the latest next gen Dell PowerMax 2500 and 8500 storage systems as VMFS on FC principal storage when deployed with 14G and 15G VxRail dynamic node clusters in VI workload domains.

Figure 7 lists the Dell storage arrays that support VxRail dynamic node clusters using VMFS on FC principal storage for VCF on VxRail, along with the corresponding supported FC HBA makes and models.

Note: Compatible supported array firmware and software versions are published in the Dell E-Lab Support Matrix for reference.

  Figure 7.  Supported Dell storage arrays used as VMFS on FC principal storage

VCF on VxRail lifecycle management enhancements

VCF Async Patch Tool 1.0.1.1 update

This tool addresses both LCM and security areas. Although it is not officially a feature of any specific VCF on VxRail release, it does get released asynchronously (pun intended) and is designed for use in VCF and VCF on VxRail environments. Thus, it deserves a call out.

For some background, the VCF Async Patch Tool is a new CLI based tool that allows cloud admins to apply individual component out-of-band security patches to their VCF on VxRail environment, separately from an official VCF LCM update release. This enables organizations to address security vulnerabilities faster without having to wait for a full VCF release update. It also allows admins to install these patches themselves without needing to engage support resources to get them applied manually.

With this latest AP Tool 1.0.1.1 release, the AP Tool now supports the ability to patch VxRail (which includes all of the components in a VxRail update bundle: the VxRail Manager and ESXi software components, and VxRail HW firmware/drivers) within VCF on VxRail environments. This is a great addition to the tool's initial support for patching vCenter and NSX Manager in its first release. VCF on VxRail customers now have a centralized and standardized process for applying security patches for core VCF and VxRail software and core VxRail HCI stack hardware components (such as server BIOS or pNIC firmware/drivers), all in the simple and integrated manner that VCF on VxRail customers have come to expect from a jointly engineered, integrated turnkey hybrid cloud platform.

Note: Hardware patching is made possible by how VxRail implements HW updates with the core VxRail update bundle. All VxRail patches for VxRail Manager, ESXi, and HW components are delivered in the VxRail update bundle, which the AP Tool leverages to apply them.

From an operational standpoint, when patches for the respective software and hardware components have been applied, and a new VCF on VxRail BOM update is available that includes the security fixes, admins can use the tool to download the latest VCF on VxRail LCM release bundles and upgrade their environment back to an official in-band VCF on VxRail release BOM. After that, admins can continue to use the native SDDC Manager LCM workflow process for applying additional VCF on VxRail upgrades. Figure 8 highlights this process at a high level.

  Figure 8.  Async Patch Tool overview                                                  

You can access VCF Async Patch Tool instructions and documentation from VMware’s website.

Summary

In this latest release, the new features and platform improvements help set the stage for even more innovation in the future. For more details about bug fixes in this release, see VMware Cloud Foundation on Dell VxRail Release Notes. For this and other Cloud Foundation on VxRail information, see the following additional resources.

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional Resources



VMware Cloud Foundation VCF VCF on VxRail VCF on VxRail storage

Getting To Know VMware Cloud Foundation on Dell VxRail Flexible Storage Options

Jason Marques

Thu, 09 Feb 2023 20:45:22 -0000


Have you been tasked with executing your company's cloud transformation strategy? Are you worried about creating yet another infrastructure silo just for a subset of workloads to run in this cloud, and do you therefore need a solution that delivers storage flexibility? Then you have come to the right place.

Dell Technologies and VMware have you covered with VMware Cloud Foundation on Dell VxRail. VCF on VxRail provides cloud infrastructure with the storage flexibility to meet you where you are in your cloud adoption journey.

This new whiteboard video walks through the flexible storage options you can take advantage of and how they might align to your business, workload, and operational needs. Check it out below.


And for more information about VCF on VxRail, visit the VxRail Info Hub page.

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional Resources

 Videos



VxRail Kubernetes Tanzu VCF K8s

Improved management insights and integrated control in VMware Cloud Foundation 4.5 on Dell VxRail 7.0.400

Jason Marques

Tue, 11 Oct 2022 12:59:13 -0000


The latest release of the co-engineered hybrid cloud platform delivers new capabilities to help you manage your cloud with the precision and ease of a fighter jet pilot in the cockpit! The new VMware Cloud Foundation (VCF) on VxRail release includes support for the latest Cloud Foundation and VxRail software components based on vSphere 7, the latest VxRail P670N single socket All-NVMe 15th Generation HW platform, and VxRail API integrations with SDDC Manager. These components streamline and automate VxRail cluster creation and LCM operations, provide greater insights into platform health and activity status, and more! There is a ton of airspace to cover, ready to take off? Then buckle up and let’s hit Mach 10, Maverick!

VCF on VxRail operations and serviceability enhancements

Support for VxRail cluster creation automation using SDDC Manager UI

The best pilots are those that can access the most fully integrated tools to get the job done all from one place: the cockpit interface that they use every day. Cloud Foundation on VxRail administrators should also be able to access the best tools, minus the cockpit of course.

The newest VCF on VxRail release introduces support for VxRail cluster creation as a fully integrated end-to-end SDDC Manager workflow, driven from within the SDDC Manager UI. This API-driven feature extends the deep integration between SDDC Manager and VxRail Manager, enabling users to deploy VxRail clusters when creating new VI workload domains or expanding existing workload domains (by adding new VxRail clusters into them), all from an SDDC Manager UI-driven end-to-end workflow experience.

In the initial SDDC Manager UI deployment workflow integration, only unused VxRail nodes discovered by VxRail Manager are supported. It also only supports clusters that are using one of the VxRail predefined network profile cluster configuration options. This method supports deploying VxRail clusters using both vSAN and VMFS on FC as principal storage options.

Another enhancement allows administrators to provide custom user-defined cluster names and custom user-defined VDS and port group names as configuration parameters as part of this workflow.

You can watch this new feature in action in this demo.

Now that’s some great co-piloting!

Support for SDDC Manager WFO Script VxRail cluster deployment configuration enhancements

The SDDC Manager WFO Script deployment method was first introduced in VCF 4.3 on VxRail 7.0.202 to support advanced VxRail cluster configuration deployments within VCF on VxRail environments. This deployment method is also integrated with the VxRail API and can be used with or without VxRail JSON cluster configuration files as inputs, depending on what type of advanced VxRail cluster configurations are desired.

Note:

  • The legacy method for deploying VxRail clusters using the VxRail Manager Deployment Wizard has been deprecated with this release.
  • VxRail cluster deployments using the SDDC Manager WFO Script method currently require the use of professional services.

Proactive notifications about expired passwords and certificates in SDDC Manager UI and from VCF public API

To deliver improved management insights into the cloud infrastructure system and its health status, this release introduces new proactive SDDC Manager UI notifications for impending password and certificate expirations of VCF and VxRail components. Now, within 30 days of expiration, a notification banner is automatically displayed in the SDDC Manager UI, giving cloud administrators enough time to plan a course of action before these credentials expire. Figure 1 illustrates these notifications in the SDDC Manager UI.

Figure 1. Proactive password and certificate expiration notifications in SDDC Manager UI

VCF also displays different types of password status categories to help better identify a given account’s password state. These status categories include: 

  • Active – Password is in a healthy state and not within a pending expiry window. No action is necessary.
  • Expiring – Password is in a healthy state but is reaching a pending expiry date. Action should be taken to use SDDC Manager Password Management to update the password.
  • Disconnected – Password of component is unknown or not in sync with the SDDC Manager managed passwords database inventory. Action should be taken to update the password at the component and remediate with SDDC Manager to resync.
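As a rough illustration, the three status categories above could be modeled as follows. The function is a sketch of the described logic, not actual VCF code; the 30-day window matches the notification behavior described earlier:

```python
from datetime import date, timedelta

# Pending-expiry window per the release's 30-day notification behavior.
EXPIRY_WINDOW = timedelta(days=30)

def password_status(expires_on, today, in_sync=True):
    """Classify an account password the way the SDDC Manager UI displays it."""
    if not in_sync:
        # Component password unknown / out of sync with SDDC Manager inventory.
        return "Disconnected"
    if expires_on - today <= EXPIRY_WINDOW:
        # Inside the pending-expiry window: time to rotate via Password Management.
        return "Expiring"
    return "Active"
```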

The password status is displayed on the SDDC Manager UI Password Management dashboard so that users can easily reference it. 

Figure 2. Password status display in SDDC Manager UI

Similarly, certificate status state is also monitored. Depending on the certificate state, administrators can remediate expired certificates using the automated SDDC Manager certificate management capabilities, as shown in Figure 3.

Figure 3. Certificate status and management in SDDC Manager UI

Finally, administrators looking to capture this information programmatically can now use the VCF public API to query the system for any expired passwords and certificates. 

Add and delete hosts from WLD clusters within a workload domain in parallel using SDDC Manager UI or VCF public API

Agility and efficiency are what cloud administrators strive for. The last thing anyone wants is to have to wait for the system to complete a task before being able to perform the next one. To address this, VCF on VxRail now allows admins to add and delete hosts in clusters within a workload domain in parallel using the SDDC Manager UI or VCF Public API. This helps to perform infrastructure management operations faster: some may even say at Mach 9!

Note:

  • Prerequisite: Currently, VxRail nodes must be added to existing clusters using VxRail Manager prior to executing SDDC Manager add host workflow operations in VCF.
  • Currently a maximum of 10 operations of each type can be performed simultaneously. Always check the VMware Configuration Maximums Guide for VCF documentation for the latest supported configuration maximums.
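Conceptually, the "up to 10 simultaneous operations" model resembles a bounded worker pool. The sketch below is purely illustrative: the `add_host` placeholder stands in for an SDDC Manager workflow invocation, which in reality happens through the SDDC Manager UI or VCF public API:

```python
from concurrent.futures import ThreadPoolExecutor

# Documented maximum of 10 simultaneous operations of each type.
MAX_PARALLEL_OPS = 10

def add_host(host):
    # Placeholder for an SDDC Manager add-host workflow call (hypothetical).
    return f"{host}: added"

def run_in_parallel(hosts):
    # Cap concurrency at the configuration maximum; extra requests queue up.
    with ThreadPoolExecutor(max_workers=MAX_PARALLEL_OPS) as pool:
        return list(pool.map(add_host, hosts))
```

Submitting 12 add-host requests would run 10 at once and queue the remaining two, which is the operational effect described above.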

SDDC Manager UI: Support for Day 2 renaming of VCF cluster objects

To continue making the VCF on VxRail platform more accommodating to each organization’s governance policies and naming conventions, this release enables administrators to rename VCF cluster objects from within the SDDC Manager UI as a Day 2 operation.

New menu actions to rename the cluster are visible in-context when operating on cluster objects from within the SDDC Manager UI. This is just the first step in a larger initiative to make VCF on VxRail even more adaptable to naming conventions across many other VCF objects in the future. Figure 4 shows what the new in-context rename cluster menu option looks like.

Figure 4.  Day 2 Rename Cluster Menu Option in SDDC Manager UI

Support for assigning user defined tags to WLD, cluster, and host VCF objects in SDDC Manager

VCF on VxRail now incorporates SDDC Manager support for assigning and displaying user defined tags for workload domain, cluster, and host VCF objects.

Administrators now see a new Tags pane in the SDDC Manager UI that displays tags that have been created and assigned to WLD, cluster, and host VCF objects. If no tags exist or are assigned, or if changes to existing tags are needed, an assign link allows an administrator to assign a tag or launch into that object in vCenter, where tag management (create, delete, modify) can be performed. When tags are instantiated, VCF syncs them and allows administrators to assign and display them in the Tags pane in the SDDC Manager UI, as shown in Figure 5.

Figure 5. User-defined tags visibility and assignment, using SDDC Manager

Support for SDDC Manager onboarding within SDDC Manager UI

VCF on VxRail is a powerful and flexible hybrid cloud platform that enables administrators to manage and configure the platform to meet their business requirements. To help organizations make the most of their strategic investments and start operationalizing them quicker, this release introduces support for a new SDDC Manager UI onboarding experience.

The new onboarding experience:

  • Focuses on Learn and plan and Configure SDDC Manager phases with drill down to configure each phase
  • Includes in-product context that enables administrators to learn, plan, and configure their workload domains, with added details including documentation articles and technical illustrations
  • Introduces a step-by-step UI walkthrough wizard for initial SDDC Manager configuration setup
  • Provides an intuitive UI guided walkthrough tour of SDDC Manager UI in stages of configuration that reduces the learning curve for customers
  • Provides opt-out and revisit options for added flexibility

Figure 6 illustrates the new onboarding capabilities.

 

       

Figure 6. SDDC Manager Onboarding and UI Tour Experience

VCF on VxRail lifecycle management enhancements

VCF integration with VxRail Retry API

The new VCF on VxRail release delivers new integrations between SDDC Manager and the VxRail Retry API to help reduce overall LCM execution time. If a cloud administrator has attempted to perform LCM operations on a VxRail cluster within their VCF on VxRail workload domain and only a subset of the nodes in the cluster was upgraded successfully, another LCM attempt is required to fully upgrade the rest of the nodes in the cluster.

Before VxRail Retry API, the VxRail Manager LCM would start the LCM from the first node in the cluster and scan each one to determine if it required an upgrade or not, even if the node was already successfully upgraded. This rescan behavior added unnecessary time to the LCM execution window for customers with large VxRail clusters.

The VxRail Retry API has made LCM even smarter. During an LCM update where a cluster has a mix of updated and non-updated nodes, VxRail Manager automatically skips right to the non-updated nodes only and runs through the LCM process from there until all remaining non-updated nodes are upgraded. This can provide cloud administrators with significant time savings. Figure 7 shows the behavior difference between standard and enhanced VxRail Retry API Behavior.

Figure 7. Comparison between standard and enhanced VxRail Retry API LCM Behavior 
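To make the difference concrete, here is a small Python sketch contrasting the two scan behaviors. The node model and function names are illustrative assumptions, not VxRail Manager code:

```python
# Illustrative comparison of the two LCM scan behaviors described above.

def standard_lcm(nodes, target):
    """Pre-Retry-API behavior: rescan every node starting from the first."""
    scanned, upgraded = [], []
    for node in nodes:
        scanned.append(node["name"])            # every node is re-scanned
        if node["version"] != target:
            node["version"] = target
            upgraded.append(node["name"])
    return scanned, upgraded

def retry_api_lcm(nodes, target):
    """Retry API behavior: skip straight to the non-updated nodes only."""
    pending = [n for n in nodes if n["version"] != target]
    scanned, upgraded = [], []
    for node in pending:                        # already-updated nodes skipped
        scanned.append(node["name"])
        node["version"] = target
        upgraded.append(node["name"])
    return scanned, upgraded
```

On a retry where three of five nodes are already at the target version, the standard behavior rescans all five while the Retry API path touches only the two remaining nodes; on large clusters that difference is where the time savings come from.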

The VxRail Retry API behavior for VCF 4.5 on VxRail 7.0.400 has been natively integrated into the SDDC Manager LCM workflow. Administrators can continue to manage their VxRail upgrades within the SDDC Manager UI per usual. They can also take advantage of these improved operational workflows without any additional manual configuration changes.

Improved SDDC Manager prechecks

More prechecks have been integrated into the platform that help fortify platform stability and simplify operations. These are:

  • Verification of valid licenses for software components
  • Checks for expired NSX Edge cluster passwords
  • Verification that the system is not in an inconsistent state caused by prior failed workflows
  • Additional host maintenance mode prechecks
    1. Determine if a host is in maintenance mode
    2. Determine whether CPU reservation for NSX-T is beyond VCF recommendation
    3. Determine whether DRS policy has changed from the VCF recommended (Fully Automated)
  • Additional filesystem capacity and permissions checks

While VCF on VxRail already includes many core prechecks that monitor common system health issues, more will continue to be integrated into the platform with each new release.
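As a rough illustration, the host maintenance-mode prechecks listed above could be modeled like this (a sketch with hypothetical host attributes and an assumed CPU reservation threshold; not the actual SDDC Manager logic):

```python
def run_host_prechecks(host, nsxt_cpu_limit_pct=30):
    """Evaluate a few host-level prechecks; True means the check passed.

    The NSX-T CPU reservation limit is an assumed placeholder value.
    """
    return {
        "host_not_in_maintenance_mode": not host["maintenance_mode"],
        "nsxt_cpu_reservation_within_limit":
            host["nsxt_cpu_reservation_pct"] <= nsxt_cpu_limit_pct,
        "drs_fully_automated": host["drs_policy"] == "Fully Automated",
    }

host = {
    "maintenance_mode": False,
    "nsxt_cpu_reservation_pct": 25,
    "drs_policy": "Fully Automated",
}
report = run_host_prechecks(host)  # all checks pass for this host
```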

Support for vSAN health check silencing

The new VCF on VxRail release also includes vSAN health check interoperability improvements. These improvements allow VCF to:

  • Address common upgrade blockers caused by vSAN HCL precheck false positives
  • Make vSAN prechecks more granular, enabling administrators to run only those that are applicable to their environment
  • Display failed vSAN health checks during domain-level precheck and upgrade LCM operations
  • Enable administrators to silence specific health checks
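Conceptually, silencing a health check means excluding it from the set of failures that block an upgrade. A small sketch (the check IDs are made-up examples, not real vSAN health check names):

```python
def blocking_failures(results, silenced):
    """Return failed health checks that have not been silenced."""
    return [check for check, passed in results.items()
            if not passed and check not in silenced]

results = {
    "vsan.hcl.controller-driver": False,  # a known false positive in this lab
    "vsan.cluster.membership": True,
    "vsan.disk.capacity": True,
}

# With the false positive silenced, nothing blocks the upgrade.
silenced = {"vsan.hcl.controller-driver"}
```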

Display VCF configurations drift bundle progress details in SDDC Manager UI during LCM operations

In a VCF on VxRail context, configuration drift is the set of configuration changes required to bring upgraded BOM components (such as vCenter, NSX, and so on) into alignment with the configuration of a new VCF on VxRail installation. These configuration changes are delivered by VCF configuration-drift LCM update bundles.

VCF configuration drift update improvements deliver greater visibility into what specifically is being changed, improved error details for better troubleshooting, and more efficient behavior for retry operations.

VCF Async Patch Tool support

VCF Async Patch Tool support offers both LCM and security enhancements.

Note: This feature is not officially included in this new release, but it is newly available.

The VCF Async Patch Tool is a new CLI-based tool that allows cloud administrators to apply individual component out-of-band security patches to their VCF on VxRail environment, separate from an official VCF LCM update release. This enables organizations to address security vulnerabilities faster, without having to wait for a full VCF release update. It also gives administrators control to install these patches without requiring the engagement of support resources.

Today, VCF on VxRail supports the ability to use the VCF Async Patch Tool for NSX-T and vCenter security patch updates only. Once patches have been applied and a new VCF BOM update is available that includes the security fixes, administrators can use the tool to download the latest VCF LCM release bundles and upgrade their environment back to an official in-band VCF release BOM. After that, administrators can continue to use the native SDDC Manager LCM workflow process to apply additional VCF on VxRail upgrades.
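The intended ordering, patch out-of-band, then rebase to an official BOM before resuming normal SDDC Manager LCM, can be sketched as a tiny state machine (purely illustrative; the state and action names are invented and are not part of the actual tool):

```python
# Allowed actions in each platform state (illustrative names).
ALLOWED = {
    "in_band": {"apply_async_patch", "sddc_lcm_upgrade"},
    "patched_out_of_band": {"apply_async_patch", "rebase_to_official_bom"},
}

def next_state(state, action):
    """Validate an action against the current state and return the new state."""
    if action not in ALLOWED[state]:
        raise ValueError(f"{action!r} is not allowed while {state!r}")
    return "patched_out_of_band" if action == "apply_async_patch" else "in_band"

state = "in_band"
state = next_state(state, "apply_async_patch")        # e.g. NSX-T security patch
state = next_state(state, "rebase_to_official_bom")   # back to an official VCF BOM
# Normal SDDC Manager LCM can now continue.
```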

Note: Using VCF Async Patch Tool for VxRail and ESXi patch updates is not yet supported for VCF on VxRail deployments. There is currently separate manual guidance available for customers needing to apply patches for those components.

Instructions on downloading and using the VCF Async Patch Tool can be found here.

VCF on VxRail hardware platform enhancements

Support for 24-drive All-NVMe 15th Generation P670N VxRail platform

The VxRail 7.0.400 release delivers support for the latest 15th Generation VxRail P670N hardware platform. This 2U1N single-socket model delivers an all-NVMe storage configuration of up to 24 drives for improved workload performance. Now that would be a powerful single-engine aircraft!

Time to come in for a landing…

I don’t know about you, but I am flying high with excitement about all the innovation delivered with this release. Now it’s time to take ourselves down for a landing. For more information, see the following additional resources so you can become your organization’s Cloud Ace.

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional resources

VMware Cloud Foundation on Dell VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail page on Info Hub

VCF on VxRail Interactive Demo

VxRail YouTube channel

Home > Workload Solutions > Container Platforms > SUSE Containers as a Service > Blogs

VxRail SUSE Rancher

Find Your Edge: Running SUSE Rancher and K3s with SLE Micro on Dell VxRail

Jason Marques

Wed, 28 Sep 2022 10:26:37 -0000

|

Read Time: 0 minutes


The goal of our ongoing partnership between Dell Technologies and SUSE is to bring validated modern products and solutions to market that enable our joint customers to operate CNCF-Certified Kubernetes clusters in the core, in the cloud, and at the edge, to support their digital businesses and harness the power of their data.

Existing examples of this collaboration have already begun to bear fruit with work done to validate SUSE Rancher and RKE2 on Dell VxRail. You can find more information on that in a Solution Brief here and blog post here. This initial example was to highlight deploying and operating Kubernetes clusters in a core datacenter use case.

But what about providing examples of jointly validated solutions for near edge use cases? More and more organizations are looking to deploy solutions at the edge since that is an increasing area where data is being generated and analyzed. As a result, this is where the focus of our ongoing technology validation efforts recently moved.

Our latest validation exercise featured deploying SUSE Rancher and K3s with the SUSE Linux Enterprise Micro operating system (SLE Micro), running on Dell VxRail hyperconverged infrastructure. These technologies were installed in a non-production lab environment by a team of SUSE and Dell VxRail engineers. All of the installation steps followed the SUSE documentation, without any VxRail-specific customization. This illustrates the seamless compatibility of these technologies and shows that standardized deployment practices work with the out-of-the-box capabilities of both VxRail and the SUSE products.

Solution Components Overview

Before jumping into the details of the solution validation itself, let’s do a quick review of the major components that we used.

  • SUSE Rancher is a complete software stack for teams that are adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes (K8s) clusters, including lightweight K3s clusters, across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.
  • K3s is a CNCF sandbox project that delivers a lightweight yet powerful certified Kubernetes distribution.
  • SUSE Linux Enterprise Micro (SLE Micro) is an ultra-reliable, lightweight operating system purpose built for containerized and virtualized workloads.
  • Dell VxRail is the only fully integrated, pre-configured, and tested HCI system optimized with VMware vSphere, making it ideal for customers who want to leverage SUSE Rancher, K3s, and SLE Micro through vSphere to create and operate lightweight Kubernetes clusters on-premises or at the edge.

Validation Deployment Details

Now, let’s dive into the details of the deployment for this solution validation.

First, we deployed a single VxRail cluster with these specifications:

  • 4 x VxRail E660F nodes running VxRail 7.0.370 software
    • 2 x Intel® Xeon® Gold 6330 CPUs
    • 512 GB RAM
    • Broadcom Adv. Dual 25 Gb Ethernet NIC
    • 2 x vSAN disk groups, each with:
      • 1 x 800 GB cache disk
      • 3 x 4 TB capacity disks
  • vSphere K8s CSI/CNS

After we built the VxRail cluster, we deployed a set of three virtual machines running SLE Micro 5.1. On these VMs, we installed a multi-node K3s cluster running version 1.23.6 with Server and Agent services, etcd, and the containerd container runtime. We then installed SUSE Rancher 2.6.3 on the K3s cluster. Also included in the K3s cluster for the Rancher installation were Fleet GitOps services, Prometheus monitoring and metrics capture services, and Grafana metrics visualization services. All of this formed our Rancher Management Server.

We then used Rancher to deploy managed K3s workload clusters. In this validation, we deployed two managed K3s workload clusters from the Rancher Management Server: a single-node cluster and a six-node cluster, all running on vSphere VMs with the SLE Micro operating system installed.

You can easily modify this validation to be more highly available and production ready. The following diagram shows how to incorporate more resilience.

Figure 1: SUSE Rancher and K3s with SLE Micro on Dell VxRail - Production Architecture

The Rancher Management Server stays the same, because it was already deployed in a highly available fashion: a four-node VxRail cluster and three SLE Micro VMs running a multi-node K3s cluster. As a production best practice, managed K3s workload clusters should run on highly available infrastructure that is separate from the Rancher Management Server, to maintain separation of management and workloads. In this case, you can deploy a second four-node VxRail cluster. For the managed K3s workload clusters, a minimum of three nodes is required to make the etcd service highly available. However, three nodes are not enough to also separate the etcd and control-plane services from the workloads themselves. To remedy this, you can deploy a minimum six-node K3s cluster (as shown in the diagram with the K3s Kubernetes 2 Prod cluster).
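The sizing guidance above follows from etcd quorum arithmetic. A quick sketch (the three-server/three-agent split for the six-node cluster is an assumption about the layout, not stated explicitly above):

```python
def quorum(etcd_members):
    """Votes an etcd cluster of this size needs to make progress."""
    return etcd_members // 2 + 1

def failures_tolerated(etcd_members):
    """Etcd member failures the cluster can survive while keeping quorum."""
    return etcd_members - quorum(etcd_members)

# A three-node K3s cluster tolerates one node failure, but etcd, the
# control plane, and workloads all share those three nodes.
assert failures_tolerated(3) == 1

# A six-node cluster (assumed as 3 server nodes + 3 agent nodes) keeps the
# same etcd fault tolerance while separating services from workloads.
server_nodes, agent_nodes = 3, 3
assert failures_tolerated(server_nodes) == 1 and agent_nodes == 3
```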

Summary

Although this validation features Dell VxRail, you can also deploy similar architectures using other Dell hardware platforms, such as Dell PowerEdge and Dell vSAN Ready Nodes running VMware vSphere! 

For more information and to see other jointly validated reference architectures using Dell infrastructure with SUSE Rancher, K3s, and more, check out the following resource pages and documentation. We hope to see you back here again soon.

Author: Jason Marques

Twitter: @vWhippersnapper

Dell Resources

SUSE Resources


Home > Integrated Products > VxRail > Blogs

VMware VxRail VMware Cloud Foundation

Innovation with Cloud Foundation on VxRail

Jason Marques

Wed, 03 Aug 2022 21:32:16 -0000

|

Read Time: 0 minutes

VCF 3.9 on VxRail 4.7.300 Improves Management, Flexibility, and Simplicity at Scale

December 2019

As you may already know, VxRail is the HCI foundation for the Dell Technologies Cloud Platform. With the new Dell Technologies On Demand offerings, we bring automation and financial models similar to those of the public cloud to on-premises environments. VMware Cloud Foundation on Dell EMC VxRail allows customers to manage all cloud operations through a familiar set of tools, offering a consistent experience with a single vendor support relationship from Dell EMC.

Joint engineering between VMware and Dell EMC continuously improves VMware Cloud Foundation on VxRail. This has made VxRail the first hyperconverged system fully integrated with VMware Cloud Foundation SDDC Manager, and the only jointly engineered HCI system with deep VMware Cloud Foundation integration. VCF on VxRail delivers unique integrations with Cloud Foundation that offer a seamless, automated upgrade experience. Customers adopting VxRail as the HCI foundation for the Dell Technologies Cloud Platform will realize greater flexibility and simplicity when managing VMware Cloud Foundation on VxRail at scale. These benefits are further illustrated by the new features available in the latest version, VMware Cloud Foundation 3.9 on VxRail 4.7.300.

The first feature expands the ability to support global management and visibility across large, complex multi-region private and hybrid clouds. This is delivered through global multi-instance management of large-scale VCF 3.9 on VxRail 4.7.300 deployments with a single pane of glass (see figure below). Customers who have many VCF on VxRail instances deployed throughout their environment now have a common dashboard view into all of them to further simplify operations and gain insights.

Figure 1

The new features don’t stop there: VCF 3.9 on VxRail 4.7.300 also provides greater networking flexibility. It adds support for Dell EMC VxRail Layer 3 stretched cluster configurations, allowing customers to further scale VCF on VxRail environments for more highly available use cases in order to support mission-critical workloads. The Layer 3 support applies to both NSX-V and NSX-T backed workload domain clusters.

Another area of new network flexibility features is the ability to select the host physical network adapters (pNICs) you want to assign for NSX-T traffic on your VxRail workload domain cluster (see figure below). Users can now select the pNICs used for the NSX-T Virtual Distributed Switch (N-VDS) from the SDDC Manager UI in the Add VxRail Cluster workflow. This allows you the flexibility to choose from a set of VxRail host physical network configurations that best aligns to your desired NSX-T configuration business requirements. Do you want to deploy your VxRail clusters using the base network daughter card (NDC) ports on each VxRail host for all standard traffic but use separate PCIe NIC ports for NSX-T traffic? Go for it! Do you want to use 10GbE connections for standard traffic and 25GbE for NSX-T traffic? We got you there too! Host network configuration flexibility is now in your hands and is only available with VCF on VxRail.

Figure 2

Finally, no good VCF on VxRail conversation can go by without talking about lifecycle management. VMware Cloud Foundation 3.9 on VxRail 4.7.300 also delivers simplicity and flexibility for managing at scale, with greater control over workload domain upgrades. Customers now have the flexibility to select which clusters within a multi-cluster workload domain to upgrade, to better align with business requirements and maintenance windows. Upgrading VCF on VxRail clusters is further simplified with VxRail Smart LCM (introduced in the 4.7.300 release), which determines exactly which firmware components need to be updated on each cluster and pre-stages each node, saving up to 20% of upgrade time (see next figure). Scheduling of these cluster upgrades is also supported. With VCF 3.9 and VxRail Smart LCM, you can streamline the upgrade process across your hybrid cloud.

Figure 3

As you can see, the innovation continues with Cloud Foundation on VxRail.

Author: Jason Marques

Twitter: @vwhippersnapper

LinkedIn: linkedin.com/in/jason-marques-47022837

Additional Resources

VxRail page on DellEMC.com

VCF on VxRail Guides

VCF on VxRail Whitepaper

VMware release notes for VCF 3.9 on VxRail 4.7.300

VxRail Videos

VCF on VxRail Interactive Demos 


Home > Integrated Products > VxRail > Blogs

VMware VxRail VMware Cloud Foundation life cycle management

VMware Cloud Foundation on VxRail Integration Features Series: Part 1—Full Stack Automated LCM

Jason Marques

Wed, 03 Aug 2022 21:32:15 -0000

|

Read Time: 0 minutes

Full Stack Automated Lifecycle Management

It’s no surprise that VMware Cloud Foundation on VxRail features numerous unique integrations with many VCF components, such as SDDC Manager and even VMware Cloud Builder. These integrations are the result of the co-engineering efforts by Dell Technologies and VMware with every release of VCF on VxRail. The following figure highlights some of the components that are part of this integration effort.

These integrations of VCF on VxRail offer customers a unique set of features in various categories, from security to infrastructure deployment and expansion, to deep monitoring and visibility that have all been developed to drive infrastructure operations.

Where do these integrations exist? The following figure outlines how they impact a customer’s Day 0 to Day 2 operations experience with VCF on VxRail.

In this series I will showcase some of these unique integration features, including some of the more nuanced ones. But for this initial post, I want to highlight one of the most popular and differentiated customer benefits that emerged from this integration work: full stack automated lifecycle management (LCM).

VxRail already delivers a differentiated LCM customer experience through its Continuously Validated States capabilities for the entire VxRail hardware and software stack. (As you may know, the VxRail stack includes the hardware and firmware of compute, network, and storage components, along with VMware ESXi, VMware vSAN, and the Dell EMC VxRail HCI System software itself, which includes VxRail Manager.)

With VCF on VxRail, VxRail Manager is integrated natively into the SDDC Manager LCM management framework through the SDDC Manager UI, and through VxRail Manager APIs for LCM by SDDC Manager when executing LCM workflows. This integration allows SDDC Manager to leverage all of the LCM capabilities that natively exist in VxRail right out of the box. SDDC Manager can then execute SDDC software LCM AND drive native VxRail HCI system LCM. It does this by leveraging native VxRail Manager APIs and the continuously validated state update packages for both the VxRail software and hardware components.

All of this happens seamlessly behind the scenes when administrators use the SDDC Manager UI to kick off native SDDC Manager workflows. This means that customers don’t have to leave the SDDC Manager UI management experience at all for full stack SDDC software and VxRail HCI infrastructure LCM operations. How cool is that?! The following figure illustrates the concepts behind this effective relationship.

For more details about how this LCM experience works, check out my lightboard talk about it!

Also, if you want to get some hands on experience in walking through performing LCM operations for the full VCF on VxRail stack, check out the VCF on VxRail Interactive Demo to see this and some of the other unique integrations!

I am already hard at work writing up the next blog post in the series. Check back soon to learn more.

Jason Marques

Twitter - @vwhippersnapper

Additional Resources

VxRail page on DellTechnologies.com

VCF on VxRail Guides

VCF on VxRail Whitepaper

VCF 3.9.1 on VxRail 4.7.410 Release Notes

VxRail Videos

VCF on VxRail Interactive Demos

Home > Integrated Products > VxRail > Blogs

VxRail DTCP

The Dell Technologies Cloud Platform – Smaller in Size, Big on Features

Jason Marques

Wed, 03 Aug 2022 21:32:15 -0000

|

Read Time: 0 minutes

The latest VMware Cloud Foundation 4.0 on VxRail 7.0 release introduces a more accessible entry cloud option with support for new four node configurations. It also delivers a simple and direct path to vSphere with Kubernetes at cloud scale.

The Dell Technologies team is very excited to announce that May 12, 2020 marked the general availability of our latest Dell Technologies Cloud Platform release, VMware Cloud Foundation 4.0 on VxRail 7.0. There is so much to unpack in this release across all layers of the platform, from the latest features of VCF 4.0 to newly supported deployment configurations new to VCF on VxRail. To help you navigate through all of the goodness, I have broken out this post into two sections: VCF 4.0 updates and new features introduced specifically to VCF on VxRail deployments. Let’s jump right to it!




VMware Cloud Foundation 4.0 Updates

A lot of great information on VCF 4.0 features was already published by VMware as part of their Modern Apps Launch earlier this year. If you haven’t caught up yet, check out the links to some VMware blogs at the end of this post. Some of my favorite new features include support for vSphere with Kubernetes (GAMECHANGER!), support for NSX-T in the Management Domain, and the NSX-T compatible vSphere Distributed Switch.

Now let’s dive into the items that are new to VCF on VxRail deployments, specifically ones that customers can take advantage of on top of the latest VCF 4.0 goodness.


New to VCF 4.0 on VxRail 7.0 Deployments

VCF Consolidated Architecture Four Node Deployment Support for Entry Level Cloud (available beginning May 26, 2020)

New to VCF on VxRail is support for the VCF Consolidated Architecture deployment option. Until now, VCF on VxRail required that all deployments use the VCF Standard Architecture. This was due to several factors, a major one being that NSX-T was not supported in the VCF Management Domain until this latest release. Having this capability was a prerequisite for supporting the consolidated architecture with VCF on VxRail.

Before we jump into the details of a VCF Consolidated Architecture deployment, let's review what the current VCF Standard deployment is all about.


VCF Standard Architecture Details


This deployment would consist of:

  • A minimum of seven VxRail nodes (however eight is recommended)
  • A four node Management Domain dedicated to run the VCF management software and at least one dedicated workload domain that consists of a three node cluster (however four is recommended) to run user workloads
  • The Management Domain runs its own dedicated vCenter and NSX-T instance
  • The workload domains are deployed with their own dedicated vCenter instances and choice of dedicated or shared NSX-T instances that are separate from the Management Domain NSX-T instance.


A summary of features includes:

  • Requires a minimum of 7 nodes (8 recommended)
  • A Management Domain dedicated to run management software components 
  • Dedicated VxRail VI domain(s) for user workloads
  • Each workload domain can consist of multiple clusters
  • Up to 15 domains are supported per VCF instance including the Management Domain
  • vCenter instances run in linked-mode
  • Supports vSAN storage only as principal storage
  • Supports using external storage as supplemental storage


This deployment architecture design is preferred because it provides the most flexibility, scalability, and workload isolation for customers scaling their clouds in production. However, this does require a larger initial infrastructure footprint, and thus cost, to get started.

For something that allows customers to start smaller, VMware developed a validated VCF Consolidated Architecture option. This allows for the Management domain cluster to run both the VCF management components and a customer’s general purpose server VM workloads. Since you are just using the Management Domain infrastructure to run both your management components and user workloads, your minimum infrastructure starting point consists of the four nodes required to create your Management Domain. In this model, vSphere Resource Pools are used to logically isolate cluster resources to the respective workloads running on the cluster. A single vCenter and NSX-T instance is used for all workloads running on the Management Domain cluster. 


VCF Consolidated Architecture Details


A summary of features of a Consolidated Architecture deployment:

  • Minimum of 4 VxRail nodes
  • Infrastructure and compute VMs run together on shared management domain
  • Resource Pools used to separate and isolate workload types
  • Supports multi-cluster and scale to documented vSphere maximums
  • Does not support running Horizon Virtual Desktop or vSphere with Kubernetes workloads
  • Supports vSAN storage only as principal storage
  • Supports using external storage as supplemental storage for workload clusters

For customers to get started with an entry level cloud for general purpose VM server workloads, this option provides a smaller entry point, both in terms of required infrastructure footprint as well as cost.
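The footprint difference between the two architectures boils down to simple node math (counts taken from the summaries above; the helper function is just an illustration):

```python
def min_vxrail_nodes(architecture, workload_domains=1):
    """Minimum VxRail node count for a VCF on VxRail deployment.

    Standard: a 4-node Management Domain plus at least a 3-node cluster per
    dedicated workload domain. Consolidated: a single 4-node Management
    Domain that also runs user workloads.
    """
    if architecture == "standard":
        return 4 + 3 * workload_domains
    if architecture == "consolidated":
        return 4
    raise ValueError(f"unknown architecture: {architecture!r}")

assert min_vxrail_nodes("standard") == 7       # 8 recommended
assert min_vxrail_nodes("consolidated") == 4
```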

With the Dell Technologies Cloud Platform, we now have you covered across your scalability spectrum, from entry level to cloud scale! 


Automated and Validated Lifecycle Management Support for vSphere with Kubernetes Enabled Workload Domain Clusters

How is it that we can support this? How does this work? What benefits does this provide you, as a VCF on VxRail administrator, as a part of this latest release? You may be asking yourself these questions. Well, the answer is through the unique integration that Dell Technologies and VMware have co-engineered between SDDC Manager and VxRail Manager. With these integrations, we have developed a unique set of LCM capabilities that can benefit our customers tremendously. You can read more about the details in one of my previous blog posts here.

VCF 4.0 on VxRail 7.0 customers who benefit from the automated full stack LCM integration built into the platform can now extend that integration to the vSphere with Kubernetes components that are part of the ESXi hypervisor! Customers are future-proofed: when the need arises, they can automatically lifecycle manage vSphere with Kubernetes-enabled clusters using fully automated and validated VxRail LCM workflows natively integrated into the SDDC Manager management experience. Cool, right?! This means that you can now bring the same streamlined operations capabilities to your modern apps infrastructure, just like you already do for your traditional apps! The figure below illustrates the LCM process for VCF on VxRail.


VCF on VxRail LCM Integrated Workflow


Introduction of initial support of VCF (SDDC Manager) Public APIs

VMware Cloud Foundation first introduced the concept of SDDC Manager Public APIs back in version 3.8. These APIs have expanded in subsequent releases and have been geared toward VCF deployments on Ready Nodes.

Well, we are happy to say that in this latest release, the VCF on VxRail team is offering initial support for VCF Public APIs. These will include a subset of the various APIs that are applicable to a VCF on VxRail deployment. For a full listing of the available APIs, please refer to the VMware Cloud Foundation on Dell EMC VxRail API Reference Guide.
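As a hedged sketch of what calling one of these public APIs might look like from a client, here is a request builder using only the Python standard library (`GET /v1/domains` is one of the documented VCF public API endpoints; the SDDC Manager host name and token below are placeholders, and the exact supported subset should always be confirmed against the VMware Cloud Foundation on Dell EMC VxRail API Reference Guide):

```python
import urllib.request

def build_domains_request(sddc_manager_fqdn, access_token):
    """Construct (but do not send) a GET /v1/domains request."""
    url = f"https://{sddc_manager_fqdn}/v1/domains"
    request = urllib.request.Request(url, method="GET")
    request.add_header("Authorization", f"Bearer {access_token}")
    request.add_header("Accept", "application/json")
    return request

# Placeholder host and token for illustration only.
req = build_domains_request("sddc-manager.example.local", "PLACEHOLDER_TOKEN")
```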

Another new API related feature in this release is the availability of the VMware Cloud Foundation Developer Center. This provides some very handy API references and code samples built right into the SDDC Manager UI. These references are readily accessible and help our customers to better integrate their own systems and other third party systems directly into VMware Cloud Foundation on VxRail. The figure below provides a summary and a sneak peek at what this looks like.


VMware Cloud Foundation Developer Center SDDC Manager UI View


Reduced VxRail Networking Hardware Configuration Requirements

Finally, we end our journey of new features on the hardware front. In this release, we have officially reduced the minimum VxRail node networking hardware configuration required for VCF use cases. With the introduction of vSphere 7.0 in VCF 4.0, admins can now use the vSphere Distributed Switch (VDS) for NSX-T, so a separate N-VDS switch is no longer needed. So why is this important, and how does it lead to VxRail node network hardware configuration improvements?

Well, up until now, VxRail and SDDC management networks have been configured to use the VDS. And this VDS would be configured to use at least two physical NIC ports as uplinks for high availability. When introducing the use of NSX-T on VxRail, an administrator would need to create a separate N-VDS switch for the NSX-T traffic to use. This switch would require its own pair of dedicated uplinks for high availability. Thus, in VCF on VxRail environments in which NSX-T would be used, each VxRail node would require a minimum of four physical NIC ports to support the two different pairs of uplinks for each of the switches. This resulted in a higher infrastructure footprint for both the VxRail nodes and for a customer’s Top of Rack Switch infrastructure because they would need to turn on more ports on the switch to support all of these host connections. This, in turn, would come with a higher cost.

Fast forward to this release -- now we can run NSX-T traffic on the same VDS as the VxRail and SDDC Manager management traffic. And when you can share the same VDS, you can get away with reducing the number of physical uplink ports to provide high availability down to two and reduce the upfront hardware footprint and cost across the board! Win win! The following figure highlights this new feature.


NSX-T Dual pNIC Features
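The port-count arithmetic behind this change can be sketched as a quick back-of-the-envelope calculation (a toy model for intuition, not product logic):

```python
# Each virtual switch needs two physical NIC ports for high availability,
# so collapsing the separate N-VDS into the shared VDS halves the per-node
# (and corresponding top-of-rack) port count.
PORTS_PER_SWITCH = 2  # HA pair of uplinks per virtual switch

def ports_per_node(num_switches):
    return num_switches * PORTS_PER_SWITCH

print(ports_per_node(2))  # separate VDS + N-VDS -> 4 ports per node
print(ports_per_node(1))  # single shared VDS for NSX-T -> 2 ports per node
```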


Well, that about sums it all up. Thanks for coming on this journey and learning about the boatload of new features in VCF 4.0 on VxRail 7.0. As always, feel free to check out the additional resources for more information. Until next time, stay safe and stay healthy out there!

Jason Marques

Twitter -@vwhippersnapper




Additional Resources

What’s New in Cloud Foundation 4 VMware Blog Post

Delivering Kubernetes At Scale With VMware Cloud Foundation (Part 1) VMware Blog Post

Consistency Makes the Tanzu Difference VMware Blog Post


VxRail page on DellTechnologies.com

VCF on VxRail Guides

VMware Cloud Foundation 4.0 on VxRail 7.0 Documentation and Release Notes

VxRail Videos

VCF on VxRail Interactive Demos


VMware PowerMax VxRail VMware Cloud Foundation SRDF

Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications

Jason Marques

Wed, 03 Aug 2022 21:32:15 -0000


Reference Architecture Validation Whitepaper Now Available!

Many of us here at Dell Technologies regularly have conversations with customers and talk about what we refer to as the “Power of the Portfolio.” What does this mean exactly? It is essentially a reference to the fact that, as Dell Technologies, we have a robust and broad portfolio of modern IT infrastructure products and solutions across storage, networking, compute, virtualization, data protection, security, and more! At first glance, it can seem overwhelming to many. Some even say it could be considered complex to sort through. But we, as Dell Technologies, on the other hand, see it as an advantage. It allows us to solve a vast majority of our customers’ technical needs and support them as a strategic technology partner. 

It is one thing to have the quality and quantity of products and tools to get the job done -- it’s another to leverage this portfolio of products to deliver on what customers want most: business outcomes.

As Dell Technologies continues to innovate, we are making the best use of the technologies we have and are developing ways to use them together seamlessly in order to deliver better business outcomes for our customers. The conversations we have are not about this product OR that product but instead they are about bringing together this set of products AND that set of products to deliver a SOLUTION giving our customers the best of everything Dell Technologies has to offer without compromise and with reduced risk.


Figure 1: Cloud Foundation on VxRail Platform Components


The Dell Technologies Cloud Platform is an example of one of these solutions. And there is no better example of how to take advantage of the “Power of the Portfolio” than the one in a newly published reference architecture white paper, Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications. The paper focuses on validating the use of the Dell EMC PowerMax system with SRDF/Metro in a Dell Technologies Cloud Platform (VMware Cloud Foundation on Dell EMC VxRail) multi-site stretched-cluster deployment configuration. This configuration provides the highest levels of application availability for customers who are running mission-critical workloads in their Cloud Foundation on VxRail private cloud, levels that would otherwise not be possible with core DTCP alone.

Let’s briefly review some of the components used in the reference architecture and how they were configured and tested. 


Using external storage with VCF on VxRail

Customers commonly ask whether they can use external storage in Cloud Foundation on VxRail deployments. The answer is yes! This helps customers ease into the transition to a software-defined architecture from an operational perspective. It also helps customers leverage the investments in their existing infrastructure for the many different workloads that might still require external storage services.

External storage and Cloud Foundation have two important use cases: principal storage and supplemental storage. 

  • Principal storage - SDDC Manager provisions a workload domain that uses vSAN, NFS, or Fibre Channel (FC) storage for a workload domain cluster’s principal storage (the initial shared storage that is used to create a cluster). By default, VCF uses vSAN storage as the principal storage for a cluster. The option to use NFS and FC-connected external storage is also available. This option enables administrators to create a workload domain cluster whose principal storage can be a previously provisioned NFS datastore or an FC-based VMFS datastore instead of vSAN. External storage as principal storage is only supported on VI Workload Domains, as vSAN is the required principal storage for the management domain in VCF.
  • Supplemental storage - This involves mounting previously provisioned external NFS, iSCSI, vVols, or FC storage to a Cloud Foundation workload domain cluster that is using vSAN as the principal storage. Supporting external storage for these workload domain clusters is comparable to the experience of administrators using standard vSphere clusters who want to attach secondary datastores to those clusters. 

At the time of writing, Cloud Foundation on VxRail supports supplemental storage use cases only. This is how external storage was used in the reference architecture solution configuration.
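A minimal sketch of the support matrix just described, useful as a mental model (the dictionary structure and names are illustrative, not an actual VCF API):

```python
# Storage-support matrix as described above, at the time of writing:
# - VCF on Ready Nodes: principal storage can be vSAN, NFS, or FC VMFS
# - VCF on VxRail: vSAN-only principal; external storage is supplemental only
SUPPORTED = {
    "vcf_ready_nodes": {
        "principal": {"VSAN", "NFS", "FC_VMFS"},
        "supplemental": {"NFS", "ISCSI", "VVOLS", "FC"},
    },
    "vcf_on_vxrail": {
        "principal": {"VSAN"},
        "supplemental": {"NFS", "ISCSI", "VVOLS", "FC"},
    },
}

def storage_allowed(platform, role, kind):
    """Check whether a storage kind is supported for a given role."""
    return kind in SUPPORTED[platform][role]

print(storage_allowed("vcf_on_vxrail", "supplemental", "FC"))  # True
print(storage_allowed("vcf_on_vxrail", "principal", "NFS"))    # False
```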


PowerMax Family

The Dell EMC PowerMax is the first Dell EMC hardware platform that uses an end-to-end Non-Volatile Memory Express (NVMe) architecture for customer data. NVMe is a set of standards that define a PCI Express (PCIe) interface used to efficiently access data storage volumes based on Non-Volatile Memory (NVM) media, which includes modern NAND-based flash along with higher-performing Storage Class Memory (SCM) media technologies. The NVMe-based PowerMax array fully unlocks the bandwidth, IOPS, and latency performance benefits that NVM media and multi-core CPUs offer to host-based applications—benefits that are unattainable using the previous generation of all-flash storage arrays. For a more detailed technical overview of the PowerMax Family, please check out the whitepaper Dell EMC PowerMax: Family Overview

The following figure shows the PowerMax 2000 and PowerMax 8000 models.


Figure 2: PowerMax product family

SRDF/Metro

The Symmetrix Remote Data Facility (SRDF) maintains real-time (or near real-time) copies of data on a PowerMax production storage array at one or more remote PowerMax storage arrays. SRDF has three primary applications: 

  • Disaster recovery
  • High availability
  • Data migration

In the case of this reference architecture, SRDF/Metro was used to provide enhanced levels of high availability across two availability zone sites. For a complete technical overview of SRDF, please check out this great SRDF whitepaper: Dell EMC SRDF.


Solution Architecture

Now that we are familiar with the components used in the solution, let’s discuss the details of the solution architecture that was used. 

This overall solution design provides enhanced levels of flexibility and availability that extend the core capabilities of the VCF on VxRail cloud platform. The VCF on VxRail solution natively supports a stretched-cluster configuration for the management domain and a VI workload domain between two availability zones by using vSAN stretched clusters. A PowerMax SRDF/Metro configuration in a vSphere Metro Storage Cluster (vMSC) arrangement is added to protect VI workload domain workloads by providing highly available supplemental storage to the workloads running on them.

Two types of vMSC configurations are verified with stretched Cloud Foundation on VxRail: uniform and non-uniform.

  • Uniform host access configuration - vSphere hosts from both sites are all connected to a storage node in the storage cluster across all sites. Paths presented to vSphere hosts are stretched across a distance.
  • Non-uniform host access configuration - vSphere hosts at each site are connected only to storage nodes at the same site. Paths presented to vSphere hosts from storage nodes are limited to the local site.
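The two host-access modes can be illustrated with a toy model (the host and array names below are hypothetical, purely for illustration):

```python
# Toy model of uniform vs. non-uniform vMSC host access across two sites.
SITES = {
    "A": {"hosts": ["esx-a1"], "storage": ["pmax-a"]},
    "B": {"hosts": ["esx-b1"], "storage": ["pmax-b"]},
}

def visible_storage(host_site, uniform):
    """Return which storage nodes a host at `host_site` has paths to."""
    if uniform:
        # Uniform: hosts have stretched paths to storage at every site.
        return sorted(s for site in SITES.values() for s in site["storage"])
    # Non-uniform: hosts only have paths to storage at their local site.
    return SITES[host_site]["storage"]

print(visible_storage("A", uniform=True))   # ['pmax-a', 'pmax-b']
print(visible_storage("A", uniform=False))  # ['pmax-a']
```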

The following figure shows the topology used in the reference architecture of the Cloud Foundation uniform stretched-cluster configuration with PowerMax SRDF/Metro.

Figure 3: Cloud Foundation on VxRail uniform stretched-cluster config with PowerMax SRDF/Metro 


The following figure shows the topology used in the reference architecture of the Cloud Foundation on VxRail non-uniform stretched cluster configuration with PowerMax SRDF/Metro.

 Figure 4: Cloud Foundation on VxRail non-uniform stretched-cluster config with PowerMax SRDF/Metro 


Solution Validation Testing Methodology

We completed solution validation testing across the following major categories for both iSCSI and FC connected devices:

  • Functional Verification Tests - This testing addresses the basic operations that are performed when PowerMax is used as supplementary storage with VMware VCF on VxRail.
  • High Availability Tests - HA testing helps validate the capability of the solution to avoid a single point of failure, from the hardware component port level up to the IDC site level.
  • Reliability Tests - In general, reliability testing validates that the components and the whole system remain stable while running under a sustained level of stress.

For complete details on all of the individual validation test scenarios that were performed, and the pass/fail results, check out the whitepaper.


Summary

To summarize, this white paper describes how Dell EMC engineers integrated VMware Cloud Foundation on VxRail with PowerMax SRDF/Metro and provides the design configuration steps that they took to automatically provision PowerMax storage by using the PowerMax vRO plug-in. The paper validates that the Cloud Foundation on VxRail solution functions as expected in both a PowerMax uniform vMSC configuration and a non-uniform vMSC configuration by passing all the designed test cases. This reference architecture validation demonstrates the power of the Dell Technologies portfolio to provide customers with modern cloud infrastructure technologies that deliver the highest levels of application availability for business-critical and mission-critical applications running in their private clouds.

Find the link to the white paper below along with other VCF on VxRail resources and see how you can leverage the “Power of the Portfolio” to support your business!

Jason Marques

Twitter - @vwhippersnapper


Additional Resources

Extending Dell Technologies Cloud Platform Availability for Mission Critical Applications Reference Architecture Validation Whitepaper

VxRail page on DellTechnologies.com

VxRail Videos

VCF on VxRail Interactive Demos


VMware VxRail Kubernetes VMware Cloud Foundation Tanzu

Simpler Cloud Operations and Even More Deployment Options Please!

Jason Marques

Wed, 03 Aug 2022 21:32:14 -0000


The latest VMware Cloud Foundation on Dell EMC VxRail release debuts LCM and storage enhancements, support for transitioning from VCF Consolidated to VCF Standard Architecture, AMD-based VxRail hardware platforms, and more!


Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.2.0 on VxRail 7.0.131.

This release brings about support for the latest versions of VCF and Dell EMC VxRail that provide a simple and consistent operational experience for developer ready infrastructure across core, edge, and cloud. Let’s review these new updates and enhancements.

Some Important Updates:

VCF on VxRail Management Operations

Ability For Customers to Perform Their Own VxRail Cluster Expansion Operations in VCF on VxRail Workload Domains. Sometimes the best announcements in a new release have nothing to do with a new technical feature but instead are about new customer-driven serviceability operations. The VCF on VxRail team is happy to announce this new serviceability enhancement. Customers no longer need to purchase a professional services engagement simply to expand a single-site, Layer 2-configured VxRail WLD cluster deployment by adding nodes to it. This should save time and money and give customers the freedom to perform these operations on their own.

This aligns with the support that already exists for customers performing similar cluster expansion operations on VxRail systems deployed as standard clusters in non-VCF use cases.

Note: There are some restrictions on which cluster configurations support customer driven expansion serviceability. Stretched VxRail cluster deployments and layer 3 VxRail cluster configurations will still require engagement with professional services as these are more advanced deployment scenarios. Please reach out to your local Dell Technologies account team for a complete list of the cluster configurations that are supported for customer driven expansions. 

VCF on VxRail Deployment and Services

Support for Transitioning From VCF on VxRail Consolidated Architecture to VCF on VxRail Standard Architecture. Continuing the operations improvements, the VCF on VxRail team is also happy to announce this new capability. We introduced support for VCF Consolidated Architecture deployments in VCF on VxRail back in May 2020. You can read about it here. VCF Consolidated Architecture deployments provide customers a way to familiarize themselves with VCF on VxRail in their core datacenters without a significant investment in cost and infrastructure footprint. Now, with support for transitioning from VCF Consolidated Architecture to VCF Standard Architecture, customers can expand as their scale demands it in their core, edge, or distributed datacenters! Now that’s flexible!

Please reach out to your local Dell Technologies account team for details on the transition engagement process requirements.

And Some Notable Enhancements:

VxRail Hardware Platform

AMD-based VxRail Platform Support in VCF 4.x Deployments. With the latest VxRail 7.0.131 HCI System Software release, ALL available AMD-based VxRail series models are now supported in VCF 4.x deployments. These models include VxRail E-Series and P-Series and support single-socket 2nd Gen AMD EPYC™ processors with 8 to 64 cores, allowing for extremely high core densities per socket.

The figure below shows the latest VxRail HW platform family.

 


For more info on these AMD platforms, check out my colleague David Glynn’s blog post on the subject here, written when AMD platform support was first introduced to the VxRail family last year. (Note: New 2U P-Series options have been released since that post.)

VCF on VxRail Multi-Site Architecture

NSX-T 3.1 Federation Now Supported with VCF 4.2 on VxRail 7.0.131. NSX-T Federation provides a cloud-like operating model for network administrators by simplifying the consumption of networking and security constructs. NSX-T Federation includes centralized management, consistent networking and security policy configuration with enforcement, and synchronized operational state across large-scale federated NSX-T deployments. With NSX-T Federation, VCF on VxRail customers can leverage stretched networks and unified security policies across multi-region VCF on VxRail deployments, providing workload mobility and simplified disaster recovery. This initial support will be through prescriptive manual guidance that will be made available soon after VCF on VxRail solution general availability. For a detailed explanation of NSX-T Federation, check out this VMware blog post here.

The figure below depicts what the high-level architecture would look like.


VCF on VxRail Storage

VCF 4.2 on VxRail 7.0.131 Support for VMware HCI Mesh. VMware HCI Mesh is a vSAN feature that provides for “Disaggregated HCI” exclusively through software. In the context of VCF on VxRail, HCI Mesh allows an administrator to easily define a relationship between two or more vSAN clusters contained within a workload domain. It also allows a vSAN cluster to borrow capacity from other vSAN clusters, improving the agility and efficiency in an environment. This disaggregation allows the administrator to separate compute from storage. HCI Mesh uses vSAN’s native protocols for optimal efficiency and interoperability between vSAN clusters. HCI Mesh accomplishes this by using a client/server mode architecture. vCenter is used to configure the remote datastore on the client side. Various configuration options are possible that can allow for multiple clients to access the same datastore on a server. VMs can be created that utilize the storage capacity provided by the server. This can enable other common features, such as performing a vMotion of a VM from one vSAN cluster to another. 

The figure below depicts this architecture.

VCF on VxRail Networking

This release continues to extend networking flexibility to further adapt to various customer environments and to reduce deployment efforts. 

Customer-Defined IP Pools for NSX-T TEP IP Addresses for the Management Domain and Workload Domain Hosts. To extend networking flexibility, this release introduces NSX-T TEP IP Address Pools, complementing the existing support for using DHCP to assign NSX-T TEP IPs. This new feature allows customers to avoid deploying and maintaining a separate DHCP server for this purpose. Admins can choose to use IP Pools as part of VCF Bring Up by entering this information in the Cloud Builder template configuration file. The IP Pool is then automatically configured during Bring Up by Cloud Builder. There is also a new option to choose DHCP or IP Pools during new workload domain deployments in SDDC Manager.

The figure below illustrates what this looks like. Once domains are deployed, IP address blocks are managed through each domain’s NSX Manager respectively. 
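To make the idea concrete, here is a hypothetical TEP IP pool entry and the kind of sanity check an admin might run before Bring Up. The field names are assumptions for illustration, not the actual Cloud Builder schema.

```python
import ipaddress

# Hypothetical TEP IP pool entry, loosely shaped like what an admin would
# supply in the Cloud Builder configuration file. Field names are assumed.
pool = {
    "name": "wld01-tep-pool",
    "subnet": "172.16.50.0/24",
    "gateway": "172.16.50.1",
    "range": {"start": "172.16.50.10", "end": "172.16.50.250"},
}

def validate_pool(p):
    """Check that the gateway and the pool range fall inside the subnet."""
    net = ipaddress.ip_network(p["subnet"])
    addrs = [
        ipaddress.ip_address(p["gateway"]),
        ipaddress.ip_address(p["range"]["start"]),
        ipaddress.ip_address(p["range"]["end"]),
    ]
    return all(a in net for a in addrs)

print(validate_pool(pool))  # True
```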

             

 

pNIC-Level Redundancy Configuration During VxRail First Run. Network flexible configurations are further extended with this feature in VxRail 7.0.131. It allows an administrator to configure the VxRail System VDS traffic across NDC and PCIe pNICs automatically during VxRail First Run using a new VxRail Custom NIC Profile option. Not only does this help provide additional high availability network configurations for VCF on VxRail domain clusters, it also helps to further simplify operations by removing the need for additional Day 2 activities in order to get to the same host configuration outcome.

Specify the VxRail Network Port Group Binding Mode During VxRail First Run. To further accelerate and simplify VCF on VxRail deployments, VxRail 7.0.131 has introduced this new enhancement designed with VCF in mind. VCF requires all host Port Group Binding Modes be set to Ephemeral. VxRail First Run now enables admins to have this parameter configured automatically, reducing the number of manual steps needed to prep VxRail hosts for VCF on VxRail use. Admins can set this parameter using the VxRail First Run JSON configuration file or manually enter it into the UI during deployment. 

 The figure below illustrates an example of what this looks like in the Dell EMC VxRail Deployment Wizard UI.

 

VCF on VxRail LCM

New SDDC Manager LCM Manifest Architecture. This new LCM Manifest architecture changes the way SDDC Manager handles the metadata required to enable upgrade operations as compared to the legacy architecture used up until this release.

With the legacy LCM Manifest architecture: 

  • The metadata used to determine upgrade sequencing and availability was published as part of the LCM bundle itself or as part of the SDDC Manager VM.
  • The metadata could not be changed after the bundle was published, which limited VMware’s ability to modify upgrade sequencing without requiring an upgrade to a new VCF release.

The newly updated LCM Manifest architecture addresses these challenges by enabling dynamic updates to LCM metadata, which in turn enables future functionality such as recalling upgrade bundles or modifying skip-level upgrade sequencing.

VCF Skip-Level Upgrades Using SDDC Manager UI and Public API. Keeping up with new releases can be challenging, and scheduling maintenance windows to perform upgrades may not be feasible for every customer. The goal behind this enhancement is to give VCF on VxRail administrators the flexibility to reduce the number of stepwise upgrades needed to get to the latest SDDC Manager/VCF release when they are multiple versions behind. All required upgrade steps are now automated as a single SDDC Manager-orchestrated LCM workflow built upon the new SDDC Manager LCM Manifest architecture. VCF skip-level upgrades allow admins to adopt the code versions of their choice quickly and directly, and to reduce maintenance window requirements.

Note: To take advantage of VCF skip level upgrades for future VCF releases, customers must be at a minimum of VCF 4.2.
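The minimum-version rule from the note above can be sketched as a simple eligibility check (version handling is simplified here for illustration):

```python
# Toy eligibility check for skip-level upgrades: the deployment must already
# be at VCF 4.2 or later before intermediate releases can be skipped.
def parse(version):
    """'4.2.0' -> (4, 2, 0) for simple tuple comparison."""
    return tuple(int(x) for x in version.split("."))

def can_skip_level(current, target, minimum="4.2.0"):
    return parse(current) >= parse(minimum) and parse(target) > parse(current)

print(can_skip_level("4.2.0", "4.4.1"))  # True
print(can_skip_level("4.1.0", "4.4.1"))  # False: must first reach 4.2
```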

The figure below shows what this option looks like in the SDDC Manager UI. 

Improvements to Upgrade Resiliency Through VCF Password Prechecks. Other LCM enhancements in this release come in the area of password prechecks. When performing an upgrade, VCF needs to communicate with various components to complete various actions. Of course, to do this, SDDC Manager needs to have valid credentials. If the passwords have expired or have been changed outside of VCF, the patching operation fails. To avoid any potential issues, VCF now checks that the needed credentials are valid before commencing the patching operation. These checks occur both during the execution of the pre-check validation and during an upgrade of a resource, such as ESXi, NSX-T, vCenter, or VxRail Manager. Check out what this looks like in the figure below.


Automated In-Place vRSLCM Upgrades. Upgrading vRSLCM in the past required the deployment of a net new vRSLCM appliance. With VCF 4.2, the SDDC Manager keeps the existing vRSLCM appliance, takes a snapshot of it, then transfers the upgrade packages directly to it and upgrades everything in place. This provides a more simplified and streamlined LCM experience.

VCF API Performance Enhancements. Administrators who use a programmatic approach will experience a quicker retrieval of information through the caching of certain information when executing API calls.

VCF on VxRail Security

Mitigate Man-In-The-Middle Attacks. Want to prevent man-in-the-middle attacks on your VCF on VxRail cloud infrastructure? This release is for you! Introduced in VCF 4.2, customers can leverage SSH RSA fingerprint and SSL thumbprint enforcement capabilities built natively into SDDC Manager to verify the authenticity of cloud infrastructure components (vCenter, ESXi, and VxRail Manager). Customers can choose to enable this feature for their VCF on VxRail deployment during VCF Bring Up by filling in the affiliated parameter fields in the Cloud Builder configuration file.

An SSH RSA Fingerprint comes from the host SSH public key while an SSL Thumbprint comes from the host’s certificates. One or more of these data points can be used to validate the authenticity of VCF on VxRail infrastructure components when being added and configured into the environment. For the Management Domain, both SSH fingerprints and SSL thumbprints are available to use while Workload Domains have SSH Fingerprints available. See what this looks like in the figure below. 
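For intuition, an SSL thumbprint of this kind is typically a hash of the certificate's DER bytes rendered as colon-separated hex. The sketch below shows a SHA-256 variant with placeholder bytes; it is illustrative, not SDDC Manager's actual implementation.

```python
import hashlib

# Placeholder standing in for real certificate DER bytes. In practice you
# would obtain the certificate from the host, e.g. via
# ssl.get_server_certificate(), and decode the PEM to DER first.
der_bytes = b"placeholder-certificate-der-bytes"

def ssl_thumbprint(der):
    """SHA-256 digest of certificate bytes as colon-separated uppercase hex."""
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

print(ssl_thumbprint(der_bytes))
```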

             


Natively Integrated Dell Technologies Next Gen SSO Support With SDDC Manager. Dell Technologies Next Gen SSO is a newly implemented backend service used in authenticating with Dell Technologies support repositories where VxRail update bundles are published. With the native integration that SDDC Manager has with monitoring and downloading the latest supported VxRail upgrade bundles from this depot, SDDC Manager now utilizes this new SSO service for its authentication. While this is completely transparent to customers, existing VCF on VxRail customers may need to log SDDC Manager out of their current depot connection and re-authenticate with their existing credentials to ensure future VxRail updates are accessible by SDDC Manager.   

New Advanced Security Add-on for VMware Cloud Foundation License SKUs: Though not necessarily affiliated with the VCF 4.2 on VxRail 7.0.131 BOM directly, new VMware security license SKUs for Cloud Foundation are now available for customers who want to bring their own VCF licenses to VCF on VxRail deployments. 

The Advanced Security Add-on for VMware Cloud Foundation now includes advanced threat protection, and workload and endpoint security that provides the following capabilities:

  • Carbon Black Workload Advanced: This includes Next Generation Anti-Virus, Workload Audit/Remediation, and Workload EDR.
  • Advanced Threat Prevention Add-on for NSX Data Center: Available in Advanced and Enterprise Plus editions, this includes NSX Firewall, NSX Distributed IDS/IPS, NSX Intelligence, and Advanced Threat Prevention.
  • NSX Advanced Load Balancer with Web Application Firewall

Updated VMware Cloud Foundation and VxRail BOM

VMware Cloud Foundation 4.2.0 on VxRail 7.0.131 introduces support for the latest versions of the SDDC and VxRail. For the complete list of component versions in the release, please refer to the VCF on VxRail release notes. A link is available at the end of this post.

Well, that about covers it for this release. The innovation continues with co-engineered features coming from all layers of the VCF on VxRail stack. This further illustrates the commitment that Dell Technologies and VMware have to drive simplified turnkey customer outcomes. Until next time, feel free to check out the links below to learn more about VCF on VxRail.

Jason Marques
Twitter - @vwhippersnapper

 Additional Resources



VxRail VMware Cloud Foundation

Cloud Foundation on VxRail is Even More “Dynamic” and “Power”-ful

Jason Marques

Wed, 03 Aug 2022 21:32:14 -0000


Dell Technologies and VMware are happy to announce the availability of VMware Cloud Foundation 4.3.1 on Dell EMC VxRail 7.0.241. This new release extends flexible VxRail platform principal storage options with Dell EMC Fibre Channel storage, new VxRail storage integration enhancements, new VxRail dynamic node and 15th generation hardware platform support, new LCM enhancements, and security and deployment updates. Read on for details!

Cloud Foundation on VxRail: Storage Enhancements

VMFS on FC Principal Storage with Dell EMC PowerMax, PowerStore, and Unity XT Storage, and 14th Generation VxRail Dynamic Nodes

More co-engineered goodness makes its way into this release with new support for VMFS on FC Principal storage options for VI Workload Domains.

First, VxRail 7.0.241 supports deploying clusters using VMFS over FC storage as principal storage instead of vSAN when using new 14th Generation VxRail dynamic nodes. Customers of Dell EMC PowerMax, Dell EMC PowerStore-T, and Dell EMC Unity XT can now leverage their existing storage array investments for VxRail environments. They can also take advantage of the benefits of VxRail HCI System Software cluster validation, automation, and life cycle management for the compute, hardware, and software infrastructure components.

This principal FC storage support also extends into use cases for Cloud Foundation on VxRail. VCF 4.3.1 now provides updated SDDC Manager awareness of these new types of VMFS on FC principal storage-based 14th Generation VxRail dynamic node clusters. In addition, we have also updated SDDC Manager workflows to support adding either VMFS on FC principal storage-based 14th Generation VxRail dynamic node clusters or vSAN based principal storage VxRail HCI node clusters into VI Workload Domains. 

With this latest enhancement, VCF on VxRail delivers even more storage flexibility to best meet your workload requirements. The figure below illustrates the different ways in which storage can be leveraged in VCF on VxRail deployments across workload domain types and across principal and supplemental storage use cases. (Note that external storage, including remote vSAN HCI Mesh datastores, was already supported with VCF on VxRail, but only as supplemental storage prior to this latest release.)

Figure 1: Cloud Foundation on VxRail Supported Storage Options

To get some hands-on experience creating a new VxRail VI workload domain using VMFS on FC principal storage and VxRail dynamic nodes with PowerStore-T, check out this new interactive demo that walks you through the process.

Cloud Foundation on VxRail LCM Enhancements

SDDC Manager LCM Precheck and Cluster Operation Workflows Integrated with VxRail Health Check API

SDDC Manager has always enabled VCF administrators to perform ad hoc LCM prechecks. These prechecks are used to validate the VCF environment health and configuration status of workload domains, to avoid running into issues while executing LCM and cluster management related workflows. 

This latest release includes more co-engineered enhancements to these prechecks by integrating them with native VxRail Health Check APIs. As a result, SDDC Manager ad hoc precheck, LCM, and cluster management related workflows will call on these VxRail APIs to obtain detailed VxRail system-specific cluster health and configuration checks. This brings administrators a more turnkey platform experience that now factors in underlying HCI system HW/SW delivered by VxRail, all within the native SDDC Manager administration experience.

Figure 2: Integrated VxRail Health Check API with SDDC Manager LCM precheck

VxRail Hardware Platform Enhancements

Intel-based 15th Generation VxRail HCI Nodes and 14th Generation VxRail Dynamic Nodes

The VxRail 7.0.241 release brings about new HW platform support with Intel-based 15th Generation VxRail HCI nodes and new 14th Generation VxRail dynamic nodes. For more information on the latest VxRail hardware platforms, check out these blogs:

Cloud Foundation on VxRail Deployment Enhancements

New VxRail First Run Options to Set Network MTU and Configure Multiple VxRail System VDS via the UI

As part of the management cluster prep for VCF on VxRail deployments, the network MTU size of the management cluster system network must be configured before Cloud Builder executes the VCF bring-up. This ensures that the management cluster meets the prerequisites for the deployment and installation of NSX-T and aligns with the required VCF best practice design architecture.

Prior to this release, these network settings had to be configured manually. Now, they are set as part of the standard VxRail First Run cluster deployment automation process. This streamlines prerequisite management cluster configuration for VCF and speeds up VCF on VxRail deployments, bringing about a faster Time-To-Value (TTV) for customers.

You can now also use the VxRail Manager First Run UI Deployment Wizard to deploy VxRail clusters with multiple system VDS configured. In previous versions of VxRail, this was only available when using the VxRail API. The wizard allows you to configure this and other cluster settings to simplify cluster deployments while delivering more flexible cluster configuration options.


Cloud Foundation on VxRail Security Enhancements

ESXi Lockdown Mode For VxRail Clusters

No blog post is complete without calling out new security feature enhancements. And VCF 4.3.1 on VxRail 7.0.241 delivers. Introduced in this release is new support for ESXi lockdown mode for VxRail clusters. 

After a workload domain and a corresponding VxRail cluster have been created, a user can use the vSphere Web Client to configure lockdown mode on a given VxRail host. VCF also allows you to enable or disable lockdown mode for a workload domain or cluster by using the SOS command line utility, which automates enabling or disabling this feature across several hosts quickly. (Important: VCF currently supports only normal lockdown mode, which is what the SOS utility configures. Strict lockdown mode is not supported.)
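As a sketch of how such an automation step might be wrapped in a script, the snippet below assembles the SOS invocation for toggling normal lockdown mode. The utility path and flag names are based on the VCF SOS utility but should be verified against the VCF documentation for your release before use; the snippet only builds the command string rather than executing it.

```python
import shlex

# Typical SOS utility location on the SDDC Manager VM (an assumption;
# confirm for your deployment).
SOS_PATH = "/opt/vmware/sddc-support/sos"

def lockdown_command(enable: bool) -> str:
    """Return the SOS command line that enables or disables normal
    lockdown mode across the hosts managed by SDDC Manager.

    Flag names mirror the documented SOS options but should be
    double-checked against your VCF release notes.
    """
    flag = "--enable-lockdown-esxi" if enable else "--disable-lockdown-esxi"
    return shlex.join([SOS_PATH, flag])

print(lockdown_command(True))
```

A wrapper like this could feed `subprocess.run` on the SDDC Manager VM; keeping command construction separate from execution makes the step easy to log and dry-run first.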

Well there you have it. Tons of new VCF on VxRail goodies to ponder for now. As always, for more information on VxRail and VCF on VxRail, please check out the links at the end of this blog and other VxRail related blogs here on the InfoHub.

Additional Resources

VMware Cloud Foundation on Dell EMC VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail Videos

VCF on VxRail Interactive Demos 

Author Information

Author: Jason Marques

Twitter - @vWhipperSnapper

 

 
 

Home > Integrated Products > VxRail > Blogs

VMware VxRail Kubernetes VMware Cloud Foundation DTCP

Announcing VMware Cloud Foundation 4.0.1 on Dell EMC VxRail 7.0

Jason Marques Jason Marques

Wed, 03 Aug 2022 15:21:13 -0000

|

Read Time: 0 minutes

The latest Dell Technologies Cloud Platform release introduces new support for vSphere with Kubernetes for entry cloud deployments and more  

Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1 on VxRail 7.0.


This release offers several enhancements, including vSphere with Kubernetes support for entry cloud deployments, enhanced bring-up features for more extensibility and accelerated deployments, increased network configuration options, and more efficient LCM capabilities for NSX-T components. Below is the full list of features in this release:


  • Kubernetes in the management domain: vSphere with Kubernetes is now supported in the management domain. With VMware Cloud Foundation Workload Management, you can deploy vSphere with Kubernetes on the management domain default cluster starting with only four VxRail nodes. This means that DTCP entry cloud deployments can take advantage of running Kubernetes containerized workloads alongside general purpose VM workloads on a common infrastructure! 
  • Multi-pNIC/multi-vDS during VCF bring-up: The Cloud Builder deployment parameter workbook now provides five vSphere Distributed Switch (vDS) profiles that allow you to perform bring-up of hosts with two, four, or six physical NICs (pNICs) and to create up to two vSphere Distributed Switches for isolating system (Management, vMotion, vSAN) traffic from overlay (Host, Edge, and Uplinks) traffic. 
  • Multi-pNIC/multi-vDS API support: The VCF API now supports configuring a second vSphere Distributed Switch (vDS) using up to four physical NICs (pNICs), providing more flexibility to support high performance use cases and physical traffic separation. 
  • NSX-T cluster-level upgrade support: Users can upgrade specific host clusters within a workload domain so that the upgrade can fit into their maintenance windows, bringing about more efficient upgrades. 
  • Cloud Builder API support for bring-up operations: VCF on VxRail deployment workflows have been enhanced to support a new Cloud Builder API for bring-up operations. VCF software installation on VxRail during VCF bring-up can now be done using either an API or GUI, providing even more platform extensibility.
  • Automated externalization of the vCenter Server for the management domain: Externalizing the vCenter Server that gets created during the VxRail first run (the one used for the management domain) is now automated as part of the bring-up process. This enhanced integration between the VCF Cloud Builder bring-up automation workflow and VxRail API helps to further accelerate installation times for VCF on VxRail deployments.
  • BOM Updates: Updated VCF software Bill of Materials with new product versions. 
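The multi-vDS idea above can be illustrated with a small sketch that splits four pNICs between a system vDS and an overlay vDS. The key and portgroup names below are invented for this sketch; the real deployment parameter workbook and VCF API use their own schema.

```python
# Illustrative only: a two-vDS profile separating system traffic from
# NSX-T overlay traffic, in the spirit of the Cloud Builder vDS
# profiles described above. Key names are invented for this sketch.
def two_vds_profile(pnics):
    """Split four physical NICs across a system vDS and an overlay vDS."""
    assert len(pnics) == 4, "this profile assumes four physical NICs"
    return {
        "vds-system": {
            "uplinks": pnics[:2],
            "portgroups": ["Management", "vMotion", "vSAN"],
        },
        "vds-overlay": {
            "uplinks": pnics[2:],
            "portgroups": ["Host Overlay", "Edge Overlay", "Uplinks"],
        },
    }

profile = two_vds_profile(["vmnic0", "vmnic1", "vmnic2", "vmnic3"])
print(profile)
```

The design choice mirrored here is physical traffic separation: storage and management traffic never share uplinks with overlay traffic, which is the high-performance use case the release notes call out.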


Author: Jason Marques 

Twitter: @vWhipperSnapper 

 

Additional Resources 


Home > Integrated Products > VxRail > Blogs

VMware VxRail Kubernetes VCF validated solution

Announcing VMware Cloud Foundation 4.4.1 on Dell VxRail 7.0.371

Jason Marques Jason Marques

Wed, 25 May 2022 14:07:35 -0000

|

Read Time: 0 minutes

With each turn of the calendar, as winter dissipates and the warmer spring weather brings new life back into the world, a certain rite of passage comes along with it: spring cleaning! As much as we all hate to do it, it is necessary to keep everything operating in tip-top shape. Whether it's cleaning inside your home or repairing the lawn mower so you can cut the grass, we all have these chores, and we all recognize they are important, no matter how much we try to avoid them. 

The VMware Cloud Foundation (VCF) on Dell VxRail team also believes in applying a spring cleaning mindset when it comes to your VCF on Dell VxRail cloud environment. This will allow your cloud environment to keep running in an optimal state and better serve you and your consumers. 

So, in the spirit of the spring season, Dell is happy to announce the release of Cloud Foundation 4.4.1 on VxRail 7.0.371. Beginning on May 25, 2022, existing VCF on VxRail customers can upgrade to this latest version, while support for new deployments will be available beginning June 2, 2022.

This new release introduces the following “spring cleaning” enhancements:

  • New component software version updates
  • New VxRail LCM logic improvements
  • New VxRail serviceability enhancements
  • VCF and VxRail software security bug fixes
  • VCF on VxRail with VMware Validated Solution Enhancements

VCF on VxRail life cycle management enhancements

New VxRail prechecks and vSAN resync timeout improvements

Starting with this release, the VxRail LCM logic has been modified to address scenarios when the cluster update process may fail to put a node into Maintenance Mode. This LCM logic enhancement is leveraged in addition to similar SDDC Manager prechecks that already exist. All VxRail prechecks are used when SDDC Manager calls on VxRail to run its precheck workflow prior to an LCM update. SDDC Manager does this by using its integration with the VxRail Health Check API. SDDC Manager also calls on these prechecks during an LCM update using its integration with the VxRail LCM API. So, VCF on VxRail customers benefit from this VxRail enhancement seamlessly. 

Failing to enter Maintenance Mode can cause VxRail cluster updates to fail. Finding ways to mitigate this type of failure will significantly enhance the LCM reliability experience for many VCF on VxRail customers. 

Figure 1: VCF on VxRail LCM

The following list describes scenarios in which a VxRail node could fail to enter maintenance mode that are improved with the latest enhancements:

  • VMware Tools ISO mounted to customer VM workloads: The VxRail LCM precheck now detects whether a VMware Tools installer ISO is mounted to a VM. If it is, it is the administrator's responsibility to address the issue in their environment before initiating a VxRail cluster update.
  • VMs pinned to specific hosts: The VxRail LCM precheck now detects whether host pinning is configured for VMs. If it is, it is the administrator's responsibility to address the configuration in their environment before initiating a cluster update.
  • vSAN resync timeout: During the cluster update process, a node update can fail if the system times out while waiting for a vSAN resync to finish before putting the node into Maintenance Mode. To prevent this, the VxRail vSAN resync timeout value has been doubled.
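The first two detection scenarios can be sketched as a simple precheck function. The VM records below are invented sample data for illustration, not a real VxRail API shape.

```python
# Minimal sketch of two maintenance-mode precheck scenarios: flag VMs
# with a mounted VMware Tools installer ISO and VMs pinned to a host.
# VM records are invented sample data, not a real VxRail API schema.
def maintenance_mode_blockers(vms):
    """Return human-readable reasons a node might fail Maintenance Mode."""
    reasons = []
    for vm in vms:
        if vm.get("tools_iso_mounted"):
            reasons.append(f"{vm['name']}: VMware Tools ISO is mounted")
        if vm.get("pinned_host"):
            reasons.append(f"{vm['name']}: pinned to host {vm['pinned_host']}")
    return reasons

sample = [
    {"name": "app01", "tools_iso_mounted": True},
    {"name": "db01", "pinned_host": "esx-03"},
]
print(maintenance_mode_blockers(sample))
```

As in the shipping precheck, the point is to surface these conditions to the administrator before the update starts, rather than failing mid-update.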

VCF on VxRail serviceability enhancements

Support for next generation Dell secure remote service connectivity agent and gateway

VxRail systems will now use the next generation secure remote service connectivity agent and the Secure Connect Gateway to connect to the Dell cloud for dial-home serviceability. This new connectivity agent running within VxRail will also be used across all Dell infrastructure products.

Figure 2: Next Generation Dell Secure remote connectivity agent and gateway architecture

The Secure Connect Gateway is the 5th generation gateway that acts as a centralization point for Dell products in the customer environment to manage the connection to the Dell cloud.  This remote connectivity enables a bi-directional communication between the product and Dell cloud.  Products can send telemetry data and event information to the Dell cloud which can be used to facilitate remote support by Dell services as well as to deliver cloud services such as CloudIQ, MyService360, Licensing Portal, and Service Link.

The latest generation remote service connector is intended to provide a uniform telemetry experience across all Dell ISG products. This standardization lets customers reduce redundant infrastructure used to provide remote services for all their Dell products. The connectivity agent also introduces a simpler setup experience by streamlining and automating the setup process of the secure remote service for new VxRail cluster deployments.

 

Figure 3: Enabling secure remote gateway connectivity

For existing VxRail clusters in a VCF on VxRail deployment running a version earlier than VCF 4.4.1 on VxRail 7.0.371, the migration to the new Secure Connect Gateway requires the administrator to first upgrade the older generation Dell serviceability gateways in their environment (whether the 3rd generation Secure Remote Service gateway or the 4th generation Dell SupportAssist Enterprise gateway).

Once the gateways are upgraded to the latest 5th generation Dell Secure Connect Gateway, the VCF on VxRail administrator can perform the VxRail cluster update for the migration as part of the standard VCF on VxRail LCM process. The built-in VxRail LCM precheck steps will inform the administrator to upgrade their gateways if necessary. The VxRail cluster update will then retrieve the gateway configuration for the connectivity agent and convert the device or access key to a unique connectivity key for remote connection authentication. Administrators should be aware that this migration work may add roughly 15 minutes, one time, to the total cluster update time.

New nodes that are shipped with VxRail 7.0.350 or higher will also now include a unique connectivity key for the secure remote gateway. Dell manufacturing will embed this key into the iDRAC of the VxRail nodes. So, instead of a user logging onto the Dell support portal to retrieve the access key to enable secure remote services, the enablement process will automatically retrieve this unique connectivity key from iDRAC for the connectivity agent to enable the connection. This feature is designed to simplify and streamline the secure connect gateway serviceability setup experience.

Customers can also have a direct connection to Dell cloud bypassing having a gateway deployed.  This option is available for any clusters running VxRail 7.0.350 and higher.

VxRail dial home payload improvements

VxRail dial home payload improvements have been introduced to provide Dell Support with additional key cluster information in the dial home payload itself and to capture more system error conditions, further improving VCF on VxRail serviceability and reducing time to resolution of VxRail-related issues.

 Additional payload information now includes: 

  • Smart Logs: Smart logging automatically collects the logs on the node of the call-home event, which provides additional information to the Support team when necessary. Starting with VCF 4.4.1 on VxRail 7.0.371, smart logging functionality has been redesigned to achieve the following tasks: 
    1. Adapt smart logging workflow to the new secure remote gateway architecture
    2. Associate smart log with Dell Service Request (SR) such that the smart log file can be included in the SR as a link.
  • Sub-component details: These include information such as the part number and slot number for CRU/FRU items such as disk drives and memory DIMMs for more efficient auto-dispatch of these failed components.
  • VxRail cluster personality identifier information: To help make the troubleshooting experience more efficient, this cluster metadata allows Dell Support to know that the VxRail clusters are deployed within a VCF on VxRail environment.

Also included are additional error conditions that are now captured to bring VxRail events into parity with existing PowerEdge events, plus additional ADC error states. And finally, to reduce the cost of service and improve the customer experience by avoiding a deluge of unnecessary event information, some events are no longer reported.

VxRail physical view UI update now includes Fibre Channel HBA hardware view

New support for FC HBA physical hardware views has been introduced as part of the VxRail Manager vCenter Plugin Physical View UI for E560F, P570F, and V570F VxRail nodes that support externally attached storage.

 Supported FC HBAs include the following Emulex and QLogic models:

  • Emulex LPE 35002 Dual Port 32 Gb HBA
  • Emulex LPE 31002 Dual Port 16 Gb HBA
  • QLogic 2772 Dual Port 32 Gb HBA
  • QLogic 2692 Dual Port 16 Gb HBA

 

Figure 4: Fibre Channel HBA physical hardware view in VxRail Manager vCenter Plugin – firmware

This new functionality provides a UI viewing experience similar to what administrators are already used to seeing for physical NICs and NIC ports. The new FC HBA view includes port link status and firmware/driver version information. An example of the firmware/driver views is shown in Figure 4.

VCF on VxRail security enhancements

VCF and VxRail software security vulnerability fixes

This release includes several security vulnerabilities fixes for both VxRail and VCF software components.

 VxRail Software 7.0.371 contains fixes that resolve multiple security vulnerabilities. Some of these include:

  • DSA-2022-084
  • DSA-2022-056
  • DSA-2021-255
  • iDRAC8 Updates 

 For more information, see iDRAC8 2.82.82.82 Release Notes 

 For more details on the DSAs, see the Dell Security Advisory (DSA) portal and search for DSA IDs.

 VCF 4.4.1 Software: This contains fixes that resolve issues in NSX-T by introducing support for NSX-T 3.1.7.3.2. For more information about these issues, see the VMware KB Article.

vRealize Suite Software: In the last VCF 4.4 on VxRail 7.0.320 release we introduced vRealize Flexible Upgrades. Read more about it here. As a result, the vRealize Suite components (other than vRealize Suite Lifecycle Manager) are no longer part of the VCF core software package, so when security vulnerabilities are discovered and relevant patches need to be applied, the process has changed. Those vRealize component software updates are no longer delivered and applied through VCF software update bundles. Starting with the VCF 4.4 on VxRail 7.0.320 release, administrators must apply them independently using vRSLCM.

I bring this up because there have been some vRealize Suite component security patches released that are relevant to VCF 4.4.1 on VxRail 7.0.371 deployments. See this blog post, written by my peers on the VMware team, describing the issue related to VMSA-2022-0011 and how to apply the fixes for it.

 VCF on VxRail with VMware Validated Solution enhancements

New VCF on VxRail qualification with VMware Validated Solutions

For those of you who aren't aware, VMware Validated Solutions are validated technical implementations built and tested by VMware and VMware partners. These solutions are designed to help customers solve common business problems using VMware Cloud Foundation as the foundational infrastructure. Types of solutions include Site Protection and Disaster Recovery for VMware Cloud Foundation, using multi-site VCF deployments with stretched NSX-T networks, and Advanced Load Balancing for VMware Cloud Foundation, using VMware NSX Advanced Load Balancer for workloads on VCF. These validated solution designs have been enhanced over time to include VMware-developed automation scripts that help customers further simplify and accelerate implementation. You can learn more about them here.

Although this capability is not directly tied to the VCF 4.4.1 on VxRail 7.0.371 release as a release feature itself, VMware and Dell can now qualify VMware Validated Solutions on VCF on VxRail. All VVS solutions that are qualified will be marked with a VxRail tag. 

Figure 5: VMware Validated Solutions Portal

These solutions get updated asynchronously from VCF releases. Be sure to check the VMware VVS portal for the latest updates on existing solutions or to see when new solutions are added.

That’s a wrap

Thanks for taking the time to learn more about VMware Cloud Foundation on Dell VxRail. For even more solution information, see the Additional Resources links at the bottom of this post. I don't know about you, but I feel squeaky clean already! Can't say the same about my outdoor landscaping though... I should probably go address that…

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional Resources

VMware Cloud Foundation on Dell VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail page on InfoHub

VCF on VxRail Interactive Demos

VxRail Videos

Home > Integrated Products > VxRail > Blogs

NVIDIA VxRail Kubernetes VMware Cloud Foundation Tanzu VCF

New Year’s Resolutions Fulfilled: Cloud Foundation on VxRail

Jason Marques Jason Marques

Thu, 10 Feb 2022 13:24:57 -0000

|

Read Time: 0 minutes

New Year’s Resolutions Fulfilled: VMware Cloud Foundation 4.4 on VxRail 7.0.320

Many of us make New Year’s resolutions for ourselves with each turn of the calendar. We hope everyone is still on track!

The Cloud Foundation on VxRail team wanted to establish our own resolutions too. And with that, Dell Technologies and VMware have come together to fulfill our resolution of continuing to innovate by making operating and securing cloud platforms easier for our customers while helping them unlock the power of their data.

And as a result, we are happy to announce the availability of our first release of the new year: VMware Cloud Foundation 4.4 on Dell VxRail 7.0.320! This release includes Cloud Foundation and VxRail software component version updates that include patches to some recent widely known security vulnerabilities. It also adds support for Dell ObjectScale on the vSAN Data Persistence Platform (vDPp), support for additional 15th generation VxRail platforms, new security hardening features, lifecycle management improvements, new Nvidia GPU workload support, and more. Phew! So be resolute and read on for the details.

VCF on VxRail Storage Enhancements

VCF on VxRail Lifecycle Management Enhancements

VCF on VxRail Hardware Platform Enhancements

VCF on VxRail Developer and AI-Ready Enterprise Platform Enhancements

VCF on VxRail Operations Enhancements

VCF on VxRail Security Enhancements

VCF on VxRail Storage Enhancements

Support for vSAN Data Persistence Platform and Dell ObjectScale Modern Stateful Object Storage Services

Initially introduced in vSphere 7.0 U1, the vSAN Data Persistence Platform (vDPp) is now supported as part of VCF 4.4 on VxRail 7.0.320. Check out this great VMware blog post to learn more about vDPp.

Beginning in this release, support for running the new Dell ObjectScale data service on top of vDPp is also available. This next-gen, cloud native, software-defined object storage service is geared toward IT teams looking to extend their cloud platform to run Kubernetes-native stateful modern application data services. To learn more about ObjectScale, please refer to this blog post. Note: VCF on VxRail currently supports using vDPp in a vSAN "Shared Nothing Architecture" mode only. 

The following figure illustrates the high-level architecture of vDPp.

 Figure 1 – vDPp and ObjectScale 

As a result of this new capability, VCF on VxRail customers can further extend the storage flexibility the platform can support with S3 compatible object storage delivered as part of the turnkey cloud infrastructure management/operations experience.

Giving customers more storage flexibility resolution: Check!

VCF on VxRail Lifecycle Management Enhancements

Improved SDDC Manager LCM Prechecks

This release embeds even more intelligence into the SDDC Manager LCM precheck workflow. When performing an upgrade, SDDC Manager needs to communicate with various components to complete various actions, and it requires that certain system resources be configured correctly and available.

To avoid any potential issues during LCM activities, VCF administrators can run SDDC Manager prechecks to weed any issues out before any LCM operation is executed. In this latest release SDDC Manager now adds six additional checks. These include:

  • Password validity (including expired passwords)
  • File system permissions
  • File system capacity
  • CPU reservation for NSX-T Managers
  • Hosts in maintenance mode
  • DRS configuration mode

All these checks apply to ESXi, vCenter, NSX-T, NSX-T Edge VMs, VxRail Manager, and vRealize Suite components in the VCF on VxRail environment. Figure 2 below illustrates some examples of what these prechecks look like from the SDDC Manager UI.

 Figure 2 – New SDDC Manager Prechecks
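To illustrate the first check in the list, password validity, here is a minimal sketch using invented account records. SDDC Manager performs the real check internally across ESXi, vCenter, NSX-T, VxRail Manager, and the other listed components; nothing below reflects its actual implementation.

```python
from datetime import date

# Sketch of a password-validity precheck: flag accounts whose password
# expiry date has already passed. Account records are invented sample
# data for illustration only.
def expired_passwords(accounts, today):
    """Return the usernames whose passwords expired before 'today'."""
    return [a["user"] for a in accounts if a["expires"] < today]

accounts = [
    {"user": "root@esxi-01", "expires": date(2022, 1, 1)},
    {"user": "mystic@vxrm-01", "expires": date(2023, 1, 1)},
]
print(expired_passwords(accounts, date(2022, 6, 1)))
```

Running a check like this before an LCM operation, as the precheck workflow does, catches expired credentials that would otherwise fail an upgrade partway through.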

Giving customers enhanced LCM improvements resolution: Check!

vRealize Suite Lifecycle Manager Flexible Upgrades

VCF 4.4 has been enhanced to allow vRealize suite products to be updated independently without having to upgrade the VCF SDDC stack.

 

 Figure 3 – vRSLCM Flexible Upgrades 

This means that from VCF 4.4 on, administrators will use vRSLCM to manage vRealize Suite update bundles and to orchestrate and apply those upgrades to vRealize Suite products (vRealize Automation, vRealize Operations, vRealize Log Insight, Workspace ONE Access, and more) independently of the core VCF version upgrade, to better align with an organization's business requirements. It also decouples VCF infrastructure team updates from DevOps team updates, enabling teams to consume new vRealize features quickly. And finally, it enables an independent update cadence between VCF and vRealize versions, which improves interoperability flexibility. And who doesn't like flexibility? Am I right?

One last note with this enhancement: SDDC Manager will no longer be used to manage vRealize Suite component update bundles and orchestrate vRealize Suite component LCM updates. With this change, future versions of VCF will not include vRealize Suite components as part of its software components. vRSLCM will still be a part of VCF software components validated for compatibility for each VCF release since that will continue to be deployed and updated using SDDC Manager. As such, SDDC Manager continues to manage vRSLCM install and update bundles just as it has done up to this point.

Giving customers enhanced LCM flexibility resolution: Check!

VCF on VxRail Hardware Platform Enhancements

Support For New 15th Generation Intel-Based VxRail Dynamic Node Platforms

VxRail 7.0.320 includes support for the latest 15th Generation VxRail dynamic nodes for the E, P, and V series models. These can be used when deploying VMFS on FC Principal storage VxRail VI Workload Domain clusters. Figure 4 below highlights details for each model.

       

 Figure 4 – New 15th Generation VxRail dynamic node models

Also, as it relates to using VxRail dynamic nodes when deploying VMFS on FC Principal storage, support for using NVMe over FC configurations has also been introduced since it is a part of the VxRail 7.0.320 release that VCF on VxRail customers can just inherit from VxRail. It’s like finding a fifth chicken nugget in the bag after ordering the four-piece meal! Wait, it is New Year’s—I should have used a healthier food example. Oops!

Support For New 15th Generation Intel-Based VxRail With vSAN Platforms (S670 and E660N)

In addition to new 15th generation dynamic nodes, this release introduces support for two new 15th generation VxRail node types, the S670 and E660N. The S670 is our 2U storage density optimized hybrid platform based on the PowerEdge R750 while the E660N is our 1U “everything” all NVMe platform based on the PowerEdge R650.

Giving customers more hardware platform choices resolution: Check!

VCF on VxRail Developer and AI-Ready Enterprise Platform Enhancements

NVIDIA GPU Options for AI and ML Workload Use Cases

As AI and ML applications are becoming more critical within organizations, IT teams are looking at the best approaches to run them within their own data centers to ensure ease of manageability and scale, improved security, and maintaining governance.

As a follow on to the innovative and collaborative partnerships between Dell Technologies, VMware, and NVIDIA that were first introduced at VMworld 2021, we are happy to announce, with this VCF on VxRail release, the ability to run GPUs within VMware Cloud Foundation 4.4 on VxRail 7.0.320 to deliver an end-to-end AI-Ready enterprise platform that is simple to deploy and operate.                   

                      

Figure 5 – VCF with Tanzu on VxRail + NVIDIA AI-Ready Enterprise Platform

VMware Cloud Foundation with Tanzu, when used together with NVIDIA-certified systems like VxRail and the NVIDIA AI Enterprise Suite software, delivers an end-to-end AI/ML enterprise platform. And with VxRail being the first and only HCI integrated system certified with NVIDIA AI Enterprise Suite and its supported GPUs, IT teams can deliver and provision GPU resources quickly in a variety of ways, while also allowing data scientists to easily consume and scale GPU resources when they need them.

While getting into all the details on getting this set up is beyond the scope of this blog post, you can find more information on using NVIDIA GPUs with VxRail and NVIDIA AI Enterprise Software Suite using the link at the end of this post. VMware has additional information about this new support in a blog post that you can check out using the link at the bottom of this page.

Giving customers a simple path to unlock the power of their data resolution: Check!

VCF on VxRail Operations Enhancements

Configure DNS/NTP From SDDC Manager UI

This new feature simplifies and streamlines DNS and NTP Day 2 management operations for cloud administrators. In previous releases, all DNS and NTP configuration was included in the VCF bring-up parameter file used by Cloud Builder at the time of VCF on VxRail installation, but there was no straightforward way to update these settings once VCF on VxRail had been deployed. Now, if modifications to these configurations are needed, they can be performed within the SDDC Manager UI as a simple Day 2 operation. This feature integrates SDDC Manager with native VxRail APIs to automate VxRail cluster DNS/NTP settings. The figure below shows what this looks like.

                                             

 Figure 6 – DNS/NTP Day 2 Configuration From SDDC Manager UI
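SDDC Manager also exposes DNS and NTP configuration through its public API. The sketch below builds request bodies in the spirit of those endpoints; the field names mirror the VCF public API but should be verified against the API reference for your release, and the server addresses are placeholders.

```python
# Hedged sketch: request bodies for updating DNS and NTP as a Day 2
# operation through the SDDC Manager API. Field names are modeled on
# the VCF public API but are not guaranteed for every release.
def dns_spec(servers):
    """Build a DNS configuration body from a list of server IPs."""
    return {"dnsServers": [{"ipAddress": ip} for ip in servers]}

def ntp_spec(servers):
    """Build an NTP configuration body from a list of server IPs."""
    return {"ntpServers": [{"ipAddress": ip} for ip in servers]}

# Placeholder documentation-range addresses, not real servers.
print(dns_spec(["192.0.2.10", "192.0.2.11"]))
print(ntp_spec(["192.0.2.20"]))
```

Either the UI or a body like this sent to the corresponding SDDC Manager endpoint achieves the same Day 2 change; the API route is handy for scripting the update across environments.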

Giving customers a simpler and more flexible day 2 operations experience resolution: Check!

VCF on VxRail Security Enhancements

Activity Logging For VCF REST API Call-Driven Actions

Administrators can now ensure audit tracking for activity that takes place using the VCF REST API. In this release, SDDC Manager logs capture SDDC Manager API activity from the SDDC Manager UI and other sources with user context. This can be used to ensure audit tracking of VCF activity and to make logs easier to analyze. Figure 7 below illustrates this activity. The log entries include the following data points:

  • Timestamp
  • Username
  • Client IP
  • User agent
  • API called
  • API method

 Figure 7 – SDDC Manager REST API Activity Logging

Each of the SDDC Manager core services has a dedicated activity log. These logs are in the respective /var/log/vmware/vcf/*service*/ service directories on the SDDC Manager VM.
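To show how the listed data points might be consumed for auditing, the sketch below extracts them from a single log entry. The JSON layout here is invented for illustration; inspect an actual file under the service directories on the SDDC Manager VM to see the real format.

```python
import json

# The six audited data points listed above, using invented key names.
FIELDS = ["timestamp", "username", "clientIp", "userAgent", "api", "method"]

def audit_fields(entry_json: str) -> dict:
    """Extract the audited fields from one activity-log entry (JSON text)."""
    entry = json.loads(entry_json)
    return {k: entry.get(k) for k in FIELDS}

# Synthetic sample entry, not taken from a real SDDC Manager log.
sample = (
    '{"timestamp": "2022-02-10T13:24:57Z", "username": "admin@local",'
    ' "clientIp": "203.0.113.5", "userAgent": "curl/7.79",'
    ' "api": "/v1/domains", "method": "GET"}'
)
print(audit_fields(sample))
```

A loop over the per-service log files could feed entries like this into a SIEM or a simple report, which is the audit-tracking use case the feature targets.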

Giving customers enhanced security logging resolution – Check!

Enhanced Access Security Hardening

This release disables the SSH service on ESXi hosts by default, following the vSphere security configuration guide recommendation.

This applies to new and upgraded VMware Cloud Foundation 4.4 on VxRail 7.0.320 deployments.

Giving customers enhanced default platform security hardening resolution: Check!

Log4j and Apache HTTP Server Fixes

No security conversation is complete without addressing the headache that has been the talk of the technology world recently: the Log4j and Apache HTTP Server vulnerability discoveries. VCF on VxRail customers can rest assured that fixes for these vulnerabilities are included as part of this release.

Kicking Log4j and Apache HTTP bugs to the curb resolution: Check!

To wrap up…

Well, that about covers it for this new batch of updates. For the full list of new features, please refer to the release notes listed below. There are additional resource links at the bottom of this post. We hope to continue making good on our VCF on VxRail platform resolutions throughout the year! Hopefully, we all can say the same for ourselves in other areas of our lives. Now, where is that treadmill...?

Author: Jason Marques

Twitter: @vWhipperSnapper

Additional resources

VMware Cloud Foundation on Dell VxRail Release Notes

VxRail page on DellTechnologies.com

VxRail page on InfoHub

VxRail Videos

Virtualizing GPUs for AI Workloads with NVIDIA AI Enterprise Suite and VxRail Whitepaper

VMware Blog Post on new VCF 4.4 support of NVIDIA AI Enterprise Suite and GPUs

Home > Integrated Products > VxRail > Blogs

VMware VxRail VMware Cloud Foundation

The Latest VxRail Platform Innovation is Now Included in Your Cloud

Jason Marques Jason Marques

Tue, 25 Aug 2020 10:33:16 -0000

|

Read Time: 0 minutes

The Dell Technologies Cloud Platform, VCF on VxRail, now supports the latest VxRail HCI System Software release featuring a new and improved first run experience, host geo-location tagging capabilities, hardware platform updates, and enhanced security features

Dell Technologies and VMware are happy to announce the general availability of VCF 4.0.1.1 on VxRail 7.0.010.

This release brings support for the latest version of VxRail to the Dell Technologies Cloud Platform. Let’s review what these new features are all about. 


Updated VxRail Software Bill of Materials 

Please check out the VCF on VxRail release notes for a full listing of the supported software BOM associated with this release. You can find the link at the bottom of the page.



VxRail Hardware Platform Updates 

VxRail 7.0.010 brings new support for ruggedized D-Series VxRail hardware platforms (D560/D560F). These ruggedized, durable platforms are designed to meet the demand for more compute, performance, and storage, and, more importantly, for the operational simplicity that delivers the full power of VxRail for workloads at the edge, in challenging environments, or in space-constrained areas. To read more about the technical details of VxRail D-Series, check out the VxRail D-Series Spec Sheet.


Also, this release reintroduces GPU support that was not included in the initial VCF 4.0 on VxRail 7.0 release.



New and Improved VxRail First Run Experience  

This release introduces a new Day 1 VxRail cluster first run workflow along with UI enhancements. The new Day 1 deployment wizard comprises 13 steps, or top-level tasks. This Day 1 workflow update was required to support new VxRail HCI System Software enhancements.


The new UI provides greater flexibility for configuration data entry during deployment. These options include unique hostnames for each ESXi host (with no forced naming convention), non-sequential IP addresses for hosts in the cluster, and support for a geographical location ID tag such as Rack Name or Rack Position. The interface is also cleaner, with a consistent look and feel for Information, Warnings, and Errors, and improved validation provides better feedback when errors are encountered or validation checks fail. And finally, the options to manually enter all the configuration parameters or to upload a pre-defined configuration via a YAML or JSON file are still available too! The figure below illustrates the new first run steps and UI.


 

Figure 1
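To illustrate the pre-defined configuration upload, a minimal sketch of assembling such a file follows. The field names and structure here are illustrative assumptions only, not the actual VxRail configuration schema; export a template from VxRail Manager for the real format.

```python
import json

# Hypothetical sketch of a pre-defined VxRail first-run configuration.
# Field names are assumptions for illustration -- the real VxRail JSON
# schema differs.
def build_first_run_config(cluster_name, hosts):
    """Assemble a cluster config with per-host hostnames, (possibly
    non-sequential) IPs, and geo-location tags."""
    return {
        "cluster_name": cluster_name,
        "hosts": [
            {
                "hostname": h["hostname"],        # unique per host, no forced naming scheme
                "management_ip": h["ip"],         # non-sequential IPs are allowed
                "rack_name": h.get("rack_name"),  # geo-location ID tag
                "rack_position": h.get("rack_position"),
            }
            for h in hosts
        ],
    }

hosts = [
    {"hostname": "esx-prod-01", "ip": "192.168.10.21", "rack_name": "R5", "rack_position": 12},
    {"hostname": "esx-lab-07",  "ip": "192.168.10.45", "rack_name": "R5", "rack_position": 14},
]
config_json = json.dumps(build_first_run_config("vxrail-c01", hosts), indent=2)
print(config_json)
```

Note how the two hostnames above follow no common naming pattern and the IPs are non-contiguous, both of which the new first run now accepts.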

 

New VxRail API to Automate Day 1 VxRail First Run Cluster Creation 

This feature allows for fast and consistent VxRail cluster deployments using the programmatic extensibility of a REST API. It provides administrators with an additional option for creating VxRail clusters in addition to the VxRail Manager first run UI.  
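As a rough sketch of what driving a Day 1 deployment programmatically could look like, the snippet below prepares a cluster-initialization request. The endpoint path, payload fields, and base URL are assumptions for illustration; consult the official VxRail API reference for the actual contract and authentication scheme.

```python
import json
import urllib.request

VXM_BASE = "https://vxrail-manager.example.local"  # hypothetical address

def build_day1_request(base_url, config):
    """Prepare a (hypothetical) cluster-initialization REST request."""
    return urllib.request.Request(
        url=f"{base_url}/rest/vxm/v1/system/initialize",  # assumed endpoint path
        data=json.dumps(config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

config = {"cluster_name": "vxrail-c01", "hosts": []}  # placeholder payload
req = build_day1_request(VXM_BASE, config)
print(req.method, req.full_url)
# Actually sending the request is omitted here; in practice you would open it
# over an authenticated HTTPS session and poll the returned request ID for
# deployment progress.
```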



Day 1 Support to Initially Deploy Up to Six Nodes in a VxRail Cluster During VxRail First Run 

Previously, the VxRail first run supported deploying a maximum of four nodes. Administrators who needed cluster sizes over four nodes had to create the cluster with four nodes and, once it was in place, perform node expansions to reach the desired size. This new feature reduces the time needed to create larger VxRail clusters by allowing a larger starting point of six VxRail nodes.



VxRail Host Geo-Location Tagging 

This is, in my opinion, one of the coolest and most underrated features in the release. VxRail Manager now supports geographic location tags for VxRail hosts. This capability provides important admin-defined host metadata that can help customers gain greater visibility into the physical location of the HCI infrastructure that makes up their cloud. This information is configured as “Host Settings” during VxRail first run as illustrated in the figure below.



Figure 2

 

As shown, the two values that make up the geo-location tags are Rack Name and Rack Position. These values are stored in the iDRAC of each VxRail host. You may be asking yourself, “Great! I have the ability to add additional metadata for my VxRail hosts, but what can I do with it?” Well, together, these values help a cloud administrator identify a VxRail host’s position within a given rack in the data center. Cloud administrators can then leverage this data to choose the order in which VxRail hosts are displayed in the VxRail Manager vCenter plugin Physical View. The figure below illustrates what this would look like.



Figure 3
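Because the tags live in each host’s iDRAC, an administrator could also read them back out of band. A minimal sketch follows, assuming a Redfish-style chassis payload; the exact resource path and property names VxRail Manager uses in iDRAC are assumptions here.

```python
# Hypothetical sketch: reading geo-location tags back from an iDRAC
# Redfish-style chassis resource. Property names follow the standard
# Redfish Location.Placement shape, but the fields VxRail Manager
# actually writes are an assumption.
def extract_geo_location(chassis_resource):
    """Return (rack_name, rack_position) from a Redfish-style payload."""
    placement = chassis_resource.get("Location", {}).get("Placement", {})
    return placement.get("Rack"), placement.get("RackOffset")

# Sample payload standing in for a live GET against the iDRAC.
sample = {
    "Id": "System.Embedded.1",
    "Location": {"Placement": {"Rack": "R5", "RackOffset": 12}},
}
rack, position = extract_geo_location(sample)
print(rack, position)
```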

 

As datacenter environments grow, VxRail host expansion operations can be used to add infrastructure capacity. The VxRail “Add VxRail Hosts” automated expansion workflows have been updated to include a new Host Location step, which allows administrators to add geo-location Rack Name and Rack Position metadata for the new hosts being added to an existing VxRail cluster. The figure below shows what a host expansion operation would look like.



Figure 4

 

In this fast-paced world of digital transformation, it is not uncommon for cloud datacenter infrastructure to be moved within a datacenter after it has already been installed, whether due to physical rack expansion design changes or infrastructure repurposing. These situations were also considered in the design of VxRail geo-location tags: there is an option to dynamically edit an existing host’s geo-location information. When this is performed, VxRail Manager automatically updates the host’s iDRAC with the new values. The figure below shows what the host edit would look like.



Figure 5

 

All these geo-location management capabilities provide VCF on VxRail administrators with full stack physical to virtual infrastructure mapping that help further extend the Cloud Foundation management experience and simplify operations! And this capability is only available with the Dell Technologies Cloud Platform (VCF on VxRail)! How cool is that?! 



VxRail Security Enhancements 


Added Security Compliance With The Addition of FIPS 140-2 Level 1 Validated Cryptography For VxRail Manager 

Cloud Foundation on VxRail offers intrinsic security built into every layer of the solution stack, from hardware silicon to storage to compute to networking to governance controls. This helps customers make security a built-in part of the platform for traditional workloads as well as container-based cloud native workloads, rather than something that is bolted on after the fact.

 

Building on the intrinsic security capabilities of the platform are the following new features: 


VxRail Manager is now FIPS 140-2 compliant, offering built-in intrinsic encryption and meeting the high security standards required by the US Department of Defense.


From VxRail 7.0.010 onward, VxRail has ‘FIPS inside’! This would entail having built-in features such as: 

  • VxRail Manager Data-in-Transit (e.g., HTTPS interfaces, SSH) 
  • VxRail Manager's SLES12 FIPS usage 
  • VxRail Manager - encryption used for password caching 


Disable VxRail LCM operations from vCenter 

To limit administrator configuration errors that could result from performing VxRail LCM operations from within vCenter rather than through SDDC Manager, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager Plugin Updates screen out of the box. This requires administrators to use SDDC Manager for all LCM operations, which guarantees that the full stack of hardware and software used has been qualified and validated for their environment. The figure below illustrates what this looks like.



Figure 6

 


Disable VxRail Host Rename/Re-IP operations in vCenter 


Continuing with the idea of limiting administrator configuration errors, this feature prevents administrators from performing VxRail Host Edit operations from within vCenter that are not supported in VCF. To maintain a consistent operating experience, all VCF on VxRail deployments natively lock down the vSphere Web Client VxRail Manager Plugin Hosts screen out of the box. The figure below illustrates what this looks like.



Figure 7

 

Now those are some intrinsic security features! 

 

Well, that about covers all the new features! Thanks for taking the time to learn more about this latest release. As always, check out the links at the bottom of this page to access additional VCF on VxRail resources.


Jason Marques 

Twitter: @vwhippersnapper



Additional Resources