
Find Your Edge: Running SUSE Rancher and K3s with SLE Micro on Dell VxRail
Tue, 16 Aug 2022 13:51:15 -0000
The goal of our ongoing partnership between Dell Technologies and SUSE is to bring validated modern products and solutions to market that enable our joint customers to operate CNCF-Certified Kubernetes clusters in the core, in the cloud, and at the edge, to support their digital businesses and harness the power of their data.
Existing examples of this collaboration have already begun to bear fruit with work done to validate SUSE Rancher and RKE2 on Dell VxRail. You can find more information on that in a Solution Brief here and blog post here. This initial example was to highlight deploying and operating Kubernetes clusters in a core datacenter use case.
But what about providing examples of jointly validated solutions for near edge use cases? More and more organizations are looking to deploy solutions at the edge since that is an increasing area where data is being generated and analyzed. As a result, this is where the focus of our ongoing technology validation efforts recently moved.
Our latest validation exercise featured deploying SUSE Rancher and K3s with the SUSE Linux Enterprise Micro operating system (SLE Micro) and running it on Dell VxRail hyperconverged infrastructure. These technologies were installed in a non-production lab environment by a team of SUSE and Dell VxRail engineers. All the installation steps that were used followed the SUSE documentation without any unique VxRail customization. This illustrates the seamless compatibility of using these technologies together and allowing for standardized deployment practices with the out-of-the-box system capabilities of both VxRail and the SUSE products.
Solution Components Overview
Before jumping into the details of the solution validation itself, let’s do a quick review of the major components that we used.
- SUSE Rancher is a complete software stack for teams that are adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes (K8s) clusters, including lightweight K3s clusters, across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.
- K3s is a CNCF sandbox project that delivers a lightweight yet powerful certified Kubernetes distribution.
- SUSE Linux Enterprise Micro (SLE Micro) is an ultra-reliable, lightweight operating system purpose built for containerized and virtualized workloads.
- Dell VxRail is the only fully integrated, pre-configured, and tested HCI system optimized with VMware vSphere, making it ideal for customers who want to leverage SUSE Rancher, K3s, and SLE Micro through vSphere to create and operate lightweight Kubernetes clusters on-premises or at the edge.
Validation Deployment Details
Now, let’s dive into the details of the deployment for this solution validation.
First, we deployed a single VxRail cluster with these specifications:
- 4 x VxRail E660F nodes running VxRail 7.0.370 software, each with:
  - 2 x Intel® Xeon® Gold 6330 CPUs
  - 512 GB RAM
  - Broadcom Adv. Dual 25 Gb Ethernet NIC
  - 2 x vSAN disk groups, each with:
    - 1 x 800 GB cache disk
    - 3 x 4 TB capacity disks
- vSphere K8s CSI/CNS
After we built the VxRail cluster, we deployed a set of three virtual machines running SLE Micro 5.1. On these VMs, we installed a multi-node K3s cluster running version 1.23.6 with Server and Agent services, embedded etcd, and the containerd container runtime. We then installed SUSE Rancher 2.6.3 on the K3s cluster. The Rancher installation also included Fleet GitOps services, Prometheus monitoring and metrics capture services, and Grafana metrics visualization services. All of this formed our Rancher Management Server.
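As a rough sketch of how a three-server K3s topology like this is declared, K3s reads its options from a declarative config file at /etc/rancher/k3s/config.yaml. The hostnames and token below are placeholder values, not the ones used in the validation, and the Rancher install itself is performed separately (for example, with Helm):

```yaml
# /etc/rancher/k3s/config.yaml on the first SLE Micro VM: starts the
# cluster with embedded etcd. Token and hostname are placeholder values.
cluster-init: true
token: "shared-cluster-secret"
tls-san:
  - rancher-mgmt.example.com

# On the second and third VMs, the config instead joins the existing cluster:
#   server: https://<first-vm>:6443
#   token: "shared-cluster-secret"
```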
We then used Rancher to deploy managed K3s workload clusters. In this validation, we used the Rancher Management Server to deploy two managed K3s workload clusters: one single-node cluster and one six-node cluster, all running on vSphere VMs with the SLE Micro operating system installed.
You can easily modify this validation to be more highly available and production ready. The following diagram shows how to incorporate more resilience.
Figure 1: SUSE Rancher and K3s with SLE Micro on Dell VxRail - Production Architecture
The Rancher Management Server stays the same, because it was already deployed on a highly available four-node VxRail cluster with three SLE Micro VMs running a multi-node K3s cluster. As a production best practice, managed K3s workload clusters should run on highly available infrastructure separate from the Rancher Management Server to maintain separation of management and workloads. In this case, you can deploy a second four-node VxRail cluster. For the managed K3s workload clusters, you need a minimum of three nodes to provide high availability for the etcd services. However, three nodes are not enough to also separate the etcd and control plane services from the workloads running alongside them. To remedy this, you can deploy a minimum six-node K3s cluster (as shown in the diagram with the K3s Kubernetes 2 Prod cluster), with three nodes carrying the control plane and etcd and three nodes carrying the workloads.
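One common way to get that separation in a K3s cluster (shown here as a sketch, not as part of the validated configuration) is to taint the server nodes so that ordinary workloads are scheduled only onto the agent nodes:

```yaml
# /etc/rancher/k3s/config.yaml on each of the three K3s server nodes in the
# six-node production cluster. The taint keeps general-purpose pods off the
# control plane/etcd nodes; the three agent nodes then carry the workloads.
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"
```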
Summary
Although this validation features Dell VxRail, you can also deploy similar architectures using other Dell hardware platforms, such as Dell PowerEdge and Dell vSAN Ready Nodes running VMware vSphere!
For more information and to see other jointly validated reference architectures using Dell infrastructure with SUSE Rancher, K3s, and more, check out the following resource pages and documentation. We hope to see you back here again soon.
Author: Jason Marques
Dell Resources
- Dell VxRail Hyperconverged Infrastructure
- Dell VxRail System TechBook
- Solution Brief – Running SUSE Rancher and K3s with SLE Micro on Dell VxRail
- Reference Architecture - SUSE Rancher, K3s, and SUSE Linux Enterprise Micro for Edge Computing: Based on Dell PowerEdge XR11 and XR12 Servers
SUSE Resources
Related Blog Posts


Improved management insights and integrated control in VMware Cloud Foundation 4.5 on Dell VxRail 7.0.400
Tue, 11 Oct 2022 12:59:13 -0000
The latest release of the co-engineered hybrid cloud platform delivers new capabilities to help you manage your cloud with the precision and ease of a fighter jet pilot in the cockpit! The new VMware Cloud Foundation (VCF) on VxRail release includes support for the latest Cloud Foundation and VxRail software components based on vSphere 7, the latest VxRail P670N single-socket All-NVMe 15th Generation hardware platform, and VxRail API integrations with SDDC Manager. These components streamline and automate VxRail cluster creation and LCM operations, provide greater insights into platform health and activity status, and more! There is a ton of airspace to cover. Ready to take off? Then buckle up and let’s hit Mach 10, Maverick!
VCF on VxRail operations and serviceability enhancements
Support for VxRail cluster creation automation using SDDC Manager UI
The best pilots are those that can access the most fully integrated tools to get the job done all from one place: the cockpit interface that they use every day. Cloud Foundation on VxRail administrators should also be able to access the best tools, minus the cockpit of course.
The newest VCF on VxRail release introduces support for VxRail cluster creation as a fully integrated end-to-end SDDC Manager workflow, driven from within the SDDC Manager UI. This integrated API-driven workload domain and VxRail cluster SDDC Manager feature extends the deep integration capabilities between SDDC Manager and VxRail Manager. This integration enables users to deploy VxRail clusters when creating new VI workload domains or expanding existing workload domains (by adding new VxRail clusters into them), all from an SDDC Manager UI-driven end-to-end workflow experience.
In the initial SDDC Manager UI deployment workflow integration, only unused VxRail nodes discovered by VxRail Manager are supported. It also only supports clusters that are using one of the VxRail predefined network profile cluster configuration options. This method supports deploying VxRail clusters using both vSAN and VMFS on FC as principal storage options.
Another enhancement allows administrators to provide custom user-defined cluster names and custom user-defined VDS and port group names as configuration parameters as part of this workflow.
You can watch this new feature in action in this demo.
Now that’s some great co-piloting!
Support for SDDC Manager WFO Script VxRail cluster deployment configuration enhancements
The SDDC Manager WFO Script deployment method was first introduced in VCF 4.3 on VxRail 7.0.202 to support advanced VxRail cluster configuration deployments within VCF on VxRail environments. This deployment method is also integrated with the VxRail API and can be used with or without VxRail JSON cluster configuration files as inputs, depending on what type of advanced VxRail cluster configurations are desired.
Note:
- The legacy method for deploying VxRail clusters using the VxRail Manager Deployment Wizard has been deprecated with this release.
- VxRail cluster deployments using the SDDC Manager WFO Script method currently require the use of professional services.
Proactive notifications about expired passwords and certificates in SDDC Manager UI and from VCF public API
To deliver improved management insights into the cloud infrastructure system and its health status, this release introduces new proactive SDDC Manager UI notifications for impending VCF and VxRail component expired passwords and certificates. Now, within 30 days of expiration, a notification banner is automatically displayed in the SDDC Manager UI to give cloud administrators enough time to plan a course of action before these components expire. Figure 1 illustrates these notifications in the SDDC Manager UI.
Figure 1. Proactive password and certificate expiration notifications in SDDC Manager UI
VCF also displays different types of password status categories to help better identify a given account’s password state. These status categories include:
- Active – Password is in a healthy state and not within a pending expiry window. No action is necessary.
- Expiring – Password is in a healthy state but is reaching a pending expiry date. Action should be taken to use SDDC Manager Password Management to update the password.
- Disconnected – Password of component is unknown or not in sync with the SDDC Manager managed passwords database inventory. Action should be taken to update the password at the component and remediate with SDDC Manager to resync.
The password status is displayed on the SDDC Manager UI Password Management dashboard so that users can easily reference it.
Figure 2. Password status display in SDDC Manager UI
Similarly, certificate status state is also monitored. Depending on the certificate state, administrators can remediate expired certificates using the automated SDDC Manager certificate management capabilities, as shown in Figure 3.
Figure 3. Certificate status and management in SDDC Manager UI
Finally, administrators looking to capture this information programmatically can now use the VCF public API to query the system for any expired passwords and certificates.
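As a sketch of what consuming such a query might look like, the snippet below filters a credentials listing for passwords that are not in the healthy Active state. The payload shape and field names here are illustrative assumptions, not the documented VCF public API schema:

```python
# Hypothetical sketch: flag credentials whose password status needs attention.
# The field names below are illustrative assumptions, not the documented
# VCF public API schema.

# Sample payload shaped like a credentials listing might look
sample_response = {
    "elements": [
        {"resourceName": "vcenter-1", "username": "root", "passwordStatus": "ACTIVE"},
        {"resourceName": "nsx-mgr-1", "username": "admin", "passwordStatus": "EXPIRING"},
        {"resourceName": "vxrail-mgr", "username": "mystic", "passwordStatus": "DISCONNECTED"},
    ]
}

def needs_attention(response):
    """Return (resource, status) pairs for any password not in the ACTIVE state."""
    return [
        (e["resourceName"], e["passwordStatus"])
        for e in response["elements"]
        if e["passwordStatus"] != "ACTIVE"
    ]

print(needs_attention(sample_response))
```

The same filtering idea applies to certificate expiry data returned by the API.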
Add and delete hosts from WLD clusters within a workload domain in parallel using SDDC Manager UI or VCF public API
Agility and efficiency are what cloud administrators strive for. The last thing anyone wants is to have to wait for the system to complete a task before being able to perform the next one. To address this, VCF on VxRail now allows admins to add and delete hosts in clusters within a workload domain in parallel using the SDDC Manager UI or VCF Public API. This helps to perform infrastructure management operations faster: some may even say at Mach 9!
Note:
- Prerequisite: Currently, VxRail nodes must first be added to existing clusters using VxRail Manager before executing SDDC Manager add host workflow operations in VCF.
- Currently a maximum of 10 operations of each type can be performed simultaneously. Always check the VMware Configuration Maximums Guide for VCF documentation for the latest supported configuration maximums.
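The concurrency model can be pictured with a small sketch: run operations in parallel but cap them at the documented maximum of 10, queuing the rest. The add_host function here is a placeholder; the real operation is an authenticated call against the SDDC Manager public API:

```python
# Illustrative sketch of driving add-host operations in parallel with the
# documented cap of 10 simultaneous operations per type. add_host is a
# placeholder, not the real SDDC Manager API call.
from concurrent.futures import ThreadPoolExecutor

MAX_PARALLEL_OPS = 10  # current supported maximum per operation type

def add_host(hostname):
    # Placeholder for an authenticated SDDC Manager add-host request.
    return f"added {hostname}"

hosts = [f"esx-{i:02d}.example.com" for i in range(1, 13)]

# The pool caps concurrency at 10; the remaining hosts queue until a slot frees.
with ThreadPoolExecutor(max_workers=MAX_PARALLEL_OPS) as pool:
    results = list(pool.map(add_host, hosts))

print(len(results))
```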
SDDC Manager UI: Support for Day 2 renaming of VCF cluster objects
To continue making the VCF on VxRail platform more accommodating to each organization’s governance policies and naming conventions, this release enables administrators to rename VCF cluster objects from within the SDDC Manager UI as a Day 2 operation.
New menu actions to rename the cluster are visible in context when operating on cluster objects from within the SDDC Manager UI. This is just the first step in a larger initiative to make VCF on VxRail even more adaptable to naming conventions across many other VCF objects in the future. Figure 4 shows the new in-context rename cluster menu option.
Figure 4. Day 2 Rename Cluster Menu Option in SDDC Manager UI
Support for assigning user defined tags to WLD, cluster, and host VCF objects in SDDC Manager
VCF on VxRail now incorporates SDDC Manager support for assigning and displaying user defined tags for workload domain, cluster, and host VCF objects.
Administrators now see a new Tags pane in the SDDC Manager UI that displays tags that have been created and assigned to WLD, cluster, and host VCF objects. If no tags exist or are assigned, or if changes to existing tags are needed, an Assign link lets an administrator launch into that object in vCenter, where tag management (create, delete, modify) can be performed. When tags are instantiated, VCF syncs them and allows administrators to assign and display them in the Tags pane in the SDDC Manager UI, as shown in Figure 5.
Figure 5. User-defined tags visibility and assignment, using SDDC Manager
Support for SDDC Manager onboarding within SDDC Manager UI
VCF on VxRail is a powerful and flexible hybrid cloud platform that enables administrators to manage and configure the platform to meet their business requirements. To help organizations make the most of their strategic investments and start operationalizing them quicker, this release introduces support for a new SDDC Manager UI onboarding experience.
The new onboarding experience:
- Focuses on Learn and plan and Configure SDDC Manager phases with drill down to configure each phase
- Includes in-product context that enables administrators to learn, plan, and configure their workload domains, with added details including documentation articles and technical illustrations
- Introduces a step-by-step UI walkthrough wizard for initial SDDC Manager configuration setup
- Provides an intuitive UI guided walkthrough tour of SDDC Manager UI in stages of configuration that reduces the learning curve for customers
- Provides opt-out and revisit options for added flexibility
Figure 6 illustrates the new onboarding capabilities.
Figure 6. SDDC Manager Onboarding and UI Tour Experience
VCF on VxRail lifecycle management enhancements
VCF integration with VxRail Retry API
The new VCF on VxRail release delivers new integrations with SDDC Manager and the VxRail Retry API to help reduce overall LCM performance time. If a cloud administrator has attempted to perform LCM operations on a VxRail cluster within their VCF on VxRail workload domain and only a subset of those nodes within the cluster can be upgraded successfully, another LCM attempt would be required to fully upgrade the rest of the nodes in the cluster.
Before VxRail Retry API, the VxRail Manager LCM would start the LCM from the first node in the cluster and scan each one to determine if it required an upgrade or not, even if the node was already successfully upgraded. This rescan behavior added unnecessary time to the LCM execution window for customers with large VxRail clusters.
The VxRail Retry API has made LCM even smarter. During an LCM update where a cluster has a mix of updated and non-updated nodes, VxRail Manager automatically skips right to the non-updated nodes only and runs through the LCM process from there until all remaining non-updated nodes are upgraded. This can provide cloud administrators with significant time savings. Figure 7 shows the behavior difference between standard and enhanced VxRail Retry API Behavior.
Figure 7. Comparison between standard and enhanced VxRail Retry API LCM Behavior
The VxRail Retry API behavior for VCF 4.5 on VxRail 7.0.400 has been natively integrated into the SDDC Manager LCM workflow. Administrators can continue to manage their VxRail upgrades within the SDDC Manager UI per usual. They can also take advantage of these improved operational workflows without any additional manual configuration changes.
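The skip-ahead idea behind the Retry API can be pictured with a small conceptual sketch (this is not the actual VxRail Manager logic): on a retry, select only the nodes that are not yet at the target version instead of rescanning every node.

```python
# Conceptual illustration of Retry-style node selection: skip nodes that
# already run the target version, preserving cluster order. This is a
# sketch, not the actual VxRail Manager implementation.

def nodes_to_update(node_versions, target):
    """Return the nodes still needing an upgrade, in cluster order."""
    return [node for node, version in node_versions if version != target]

cluster = [
    ("node-01", "7.0.400"),  # upgraded successfully on the first attempt
    ("node-02", "7.0.400"),
    ("node-03", "7.0.370"),  # failed or was skipped previously
    ("node-04", "7.0.370"),
]

print(nodes_to_update(cluster, "7.0.400"))
```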
Improved SDDC Manager prechecks
More prechecks have been integrated into the platform that help fortify platform stability and simplify operations. These are:
- Verification of valid licenses for software components
- Checks for expired NSX Edge cluster passwords
- Verification that the system is not in an inconsistent state caused by any prior failed workflows
- Additional host maintenance mode prechecks
- Determine if a host is in maintenance mode
- Determine whether CPU reservation for NSX-T is beyond VCF recommendation
- Determine whether DRS policy has changed from the VCF recommended (Fully Automated)
- Additional filesystem capacity and permissions checks
While VCF on VxRail already includes many core prechecks that monitor common system health issues, even more will be integrated into the platform with each new release.
Support for vSAN health check silencing
The new VCF on VxRail release also includes vSAN health check interoperability improvements. These improvements allow VCF to:
- Address common upgrade blockers due to vSAN HCL precheck false positives
- Allow vSAN pre-checks to be more granular, which enables the administrator to only perform those that are applicable to their environment
- Display failed vSAN health checks during LCM operations of domain-level pre-checks and upgrades
- Enable the administrators to silence the health checks
Display VCF configurations drift bundle progress details in SDDC Manager UI during LCM operations
In a VCF on VxRail context, configuration drift is the set of configuration changes that are required to bring upgraded BOM components (such as vCenter, NSX, and so on) into alignment with a new VCF on VxRail installation. These configuration changes are delivered by VCF configuration-drift LCM update bundles.
VCF configuration drift update improvements deliver greater visibility into what specifically is being changed, improved error details for better troubleshooting, and more efficient behavior for retry operations.
VCF Async Patch Tool support
VCF Async Patch Tool support offers both LCM and security enhancements.
Note: This feature is not officially part of this release itself, but it is newly available alongside it.
The VCF Async Patch Tool is a new CLI based tool that allows cloud administrators to apply individual component out-of-band security patches to their VCF on VxRail environment, separate from an official VCF LCM update release. This enables organizations to address security vulnerabilities faster without having to wait for a full VCF release update. It also gives administrators control to install these patches without requiring the engagement of support resources.
Today, VCF on VxRail supports the ability to use the VCF Async Patch Tool for NSX-T and vCenter security patch updates only. Once patches have been applied and a new VCF BOM update is available that includes the security fixes, administrators can use the tool to download the latest VCF LCM release bundles and upgrade their environment back to an official in-band VCF release BOM. After that, administrators can continue to use the native SDDC Manager LCM workflow process to apply additional VCF on VxRail upgrades.
Note: Using VCF Async Patch Tool for VxRail and ESXi patch updates is not yet supported for VCF on VxRail deployments. There is currently separate manual guidance available for customers needing to apply patches for those components.
Instructions on downloading and using the VCF Async Patch Tool can be found here.
VCF on VxRail hardware platform enhancements
Support for 24-drive All-NVMe 15th Generation P670N VxRail platform
The VxRail 7.0.400 release delivers support for the latest 15th Generation P670N VxRail hardware platform. This 2U1N single-CPU-socket model delivers an All-NVMe storage configuration of up to 24 drives for improved workload performance. Now that would be a powerful single-engine aircraft!
Time to come in for a landing…
I don’t know about you, but I am flying high with excitement about all the innovation delivered with this release. Now it’s time to take ourselves down for a landing. For more information, see the following additional resources so you can become your organization’s Cloud Ace.
Author: Jason Marques
Twitter: @vWhipperSnapper
Additional resources
VMware Cloud Foundation on Dell VxRail Release Notes
VxRail page on DellTechnologies.com