Blogs

Blog posts that relate to solutions for SUSE Containers as a Service on Dell hardware.

  • SUSE Rancher
  • Dell PowerFlex

SUSECON23 – That’s a Wrap!

Simon Stevens

Fri, 14 Jul 2023 15:25:36 -0000

Last week, I had the pleasure of attending SUSECON23, which was held in Munich and was the first “in-person” conference that SUSE had run post-pandemic. I was lucky enough to be invited to speak at one of the break-out sessions, alongside Gerson Guevara, who works in the Technical Alliances Team at SUSE. It is always a pleasure to meet interesting people you can learn from, and Gerson was a case in point! Without doubt, he is a great person and it was an honor to co-present the session with him.

Together, we presented a session on “Joint Initiatives & Solutions Between SUSE and Dell PowerFlex” to a packed room. We discussed the joint Dell PowerFlex and SUSE Rancher solutions that we have available today. We also gave a tantalising glimpse of what the future might hold, by discussing the early results from a proof of concept that our Engineering teams have been working on together for a few months.

Figure 1: Co-Presenting the joint Dell PowerFlex and SUSE session at SUSECON23, Munich, June 2023

What I find very interesting is that a lot of our joint customers remain blissfully unaware of the amount of collaborative work that the Dell PowerFlex and SUSE teams have done together over the years, so I thought it best to give a quick overview of what is already out there and available today.

SUSE and Dell Technologies already have a deep partnership, one that has been built over a 20+ year period. Our joint solutions can be found in sensors and in machines; they exist in the cloud, on-premises and at the edge, all of them built with a strong open source backbone. As such, Dell and SUSE are addressing the operational and security challenges of managing multiple Kubernetes clusters, whilst also providing DevOps teams with integrated tools for running containerized workloads. What many people outside Dell might not be aware of is that a number of SUSE products and tool sets are used by Dell developers when creating the next generation of Dell products and solutions.

When one looks at the wide range of Kubernetes management platforms that are available, SUSE are certainly amongst the leaders in that market today. SUSE completed its acquisition of Rancher Labs back in December 2020; by doing so, SUSE was able to bring together multiple technologies to help organizations. SUSE Rancher is a free, 100% open-source Kubernetes management platform that simplifies cluster installation and operations, whether they are on-premises, in the cloud, or at the edge, giving DevOps teams the freedom to build and run containerized applications anywhere. Rancher also supports all the major hosted Kubernetes distributions, including EKS, AKS and GKE, as well as K3s at the edge. It provides simple, consistent cluster operations, including provisioning, version management, visibility and diagnostics, monitoring and alerting, and centralized audit. Rancher itself is free and has a community support model; customers that absolutely need an enterprise level of support can opt for Rancher Prime, which includes full enterprise support and access to SUSE’s trusted private container registries.

Current Joint Dell PowerFlex-SUSE Rancher Solutions

Back to my earlier point about the collaboration between the SUSE and Dell PowerFlex Solutions Teams: we have been working together on joint solutions for several years now, and we are constantly updating our white papers to ensure that they remain up to date. To simplify things, we have consolidated several white papers into one, so that it not only describes how to deploy SUSE Rancher clusters with PowerFlex, but also how to then protect those systems using Dell PowerProtect Data Manager. The white paper is available on the Dell Info Hub and you can download it from here. It describes how to deploy Rancher in virtual environments, running on top of VMware ESXi, as well as deploying Rancher running on bare-metal nodes. Let me quickly run you through the various solutions that are detailed in the white paper:

Figure 2: Dell PowerFlex + Rancher Prime on top of VMware ESXi

As can be seen from Figure 2 above, the RKE2 cluster is running in a two-layer PowerFlex deployment architecture – that is to say, the PowerFlex storage resides on separate nodes from the compute nodes. This separation of storage and compute means that the architecture lends itself really well to Kubernetes environments, where there still tends to be a massive disparity between the number of compute nodes needed and the amount of persistent storage needed. PowerFlex can provide a consolidated software-defined storage infrastructure – think of scenarios where there are lots of clusters, running a mixture of both Kubernetes and VMware workloads, and where you want a shared storage platform to simplify storage operations across all workstreams.

Figure 2 also shows that the RKE2 clusters are deployed on top of VMware ESXi, to obtain the benefits of running each of the RKE2 nodes as VMs.

Figure 3: Dell PowerFlex + Rancher Prime on bare-metal compute nodes

The white paper also discusses deployment of Rancher clusters running on bare-metal HCI nodes – that is to say, deploying the Rancher cluster onto nodes that have SSDs in them. The paper talks through an option where SUSE SLES 15 SP3 is installed first, and then PowerFlex is installed to use the SSDs in the nodes to create a storage cluster. Then finally, the RKE2 cluster gets deployed, using the PowerFlex storage as the storage class for persistent volume claims. It is worth noting that “HCI-on-bare-metal” PowerFlex deployments are usually done using PowerFlex custom nodes or outside of PowerFlex Manager control, as we currently do not have a PowerFlex Manager template that deploys either SLES 15 SP3 or RKE2.

Figure 4: Persistent storage for SUSE Rancher clusters with PowerFlex CSI driver

Kubernetes clusters access persistent storage resources through API calls to the CSI driver for the storage platform being used. With SUSE Rancher, it is easy to deploy the PowerFlex CSI driver, as it can be installed with a single click from the SUSE Rancher Marketplace. Alternatively, the latest CSI driver is also available directly from the Dell CSM GitHub at: https://dell.github.io/csm-docs/docs/csidriver/
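Once the driver is installed, persistent volumes are provisioned against a StorageClass that points at the PowerFlex system. As a rough sketch only (the provisioner string and parameter names here are assumptions, so check the Dell CSM documentation linked above for the values that match your driver version), such a StorageClass might look like this:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerflex-sc
provisioner: csi-vxflexos.dellemc.com   # assumed PowerFlex CSI provisioner name
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  systemID: "<powerflex-system-id>"     # assumed parameter name
  storagepool: "<storage-pool-name>"    # assumed parameter name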

Figure 5: Rancher Cluster Data Protection Using Dell PowerProtect Data Manager

Finally, the white paper also discusses how to integrate SUSE Rancher-managed Kubernetes clusters with Dell PowerProtect Data Manager (PPDM) for data protection in one of two ways: by directly connecting to an RKE2 downstream single node with control plane and etcd roles, or through a load balancer, when there are multiple RKE2 nodes with control plane and etcd roles in an RKE2 downstream cluster.

Hence you can see that Dell and SUSE have worked incredibly closely to create a number of solutions, which not only give customers a choice in how they deploy their RKE2 clusters with PowerFlex (with or without VMware, using either two-layer or hyperconverged options) but also show how such solutions can be fully protected and restored using Dell PPDM.

A Glimpse into the Future…?

Back to the breakout session at SUSECON, everyone in the room was excited to discover what the “hush-hush” news from the Proof-of-Concept was all about! Suffice to say that what we were presenting is not on any product roadmaps, nor has it been committed to by either company. What we were able to show was an 8-minute “summary video” that was created in our joint PoC lab environment. The video explained how we were able to use PowerFlex as a storage class in SUSE Harvester HCI clusters. For those not in the know, SUSE Harvester provides small Kubernetes clusters for ROBO, near-edge and edge use cases. Even though Harvester can make use of the Longhorn storage that resides in the Harvester cluster nodes, we are hearing of use cases where hundreds of Harvester clusters all want to access a single shared-storage platform. Does that ring any bells with my observations above? Anyway, suffice to say, such solutions are still being looked at as potential use cases – in my opinion, it is not one that I would normally have associated with edge use cases, but having seen what was happening myself at SUSECON23 last week, I am happy to stand corrected and will continue to monitor this space going forward!

For the rest of SUSECON23, I shared my time between attending the various keynote sessions, manning the Dell booth, and talking with partners and other conference attendees. There was a good deal of interest in what Dell are doing in and around the container space, so it was good to be manning a booth from which we were able to show how our Dell CSM app-mobility module makes it simple to migrate Kubernetes applications between multiple Kubernetes clusters.

This was also the first time that I had attended an ‘in-person’ conference since before the pandemic, so it was genuinely fantastic to meet and be able to talk to people again. Sometimes you do not realize just how much you have missed things until you do them again! However, I was also reminded of something that I had not missed when it came to finally saying “Auf Wiedersehen” to Munich and embarking on my journey back to the UK: thanks to the incredible storms that swept Germany last Thursday, we eventually took off from Munich 4 hours late, on what I was reliably informed was the last flight that German ATC allowed to take off before they closed German airspace for the night. But when all is said and done… here’s to SUSECON24!!

Simon Stevens, Dell PowerFlex Engineering Technologist

Read Full Blog
  • Kubernetes
  • CSI
  • PowerStore
  • SUSE Rancher

Dell PowerStore and Unity XT CSI Drivers Now Available in the Rancher Marketplace

Henry Wong

Fri, 03 Feb 2023 11:39:04 -0000

I am excited to announce that the PowerStore CSI driver and the Unity XT CSI driver are now available in the Rancher Marketplace. Customers have always been able to deploy the CSI drivers on any compatible Kubernetes cluster through a series of manual steps and command lines. If you are using Rancher to manage your Kubernetes clusters, you can now seamlessly deploy the drivers to the managed Kubernetes clusters through the familiar Rancher UI.

Dell CSI drivers

The PowerStore CSI driver and the Unity XT CSI driver are storage providers for Kubernetes that provide persistent storage for containers. Containerized workloads such as databases often need to store data for long periods of time, and the data needs to follow the containers whenever they move between Kubernetes nodes. With Dell CSI drivers, database applications can easily request and mount storage from Dell storage systems as part of an automated workflow. Customers also benefit from the advanced data protection and data reduction features of Dell storage systems.
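A workload then requests that storage with an ordinary PersistentVolumeClaim against a StorageClass backed by the Dell CSI driver. A minimal sketch, with a hypothetical StorageClass name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-data
  namespace: demo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: dell-powerstore-block   # hypothetical StorageClass name
  resources:
    requests:
      storage: 50Gi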

SUSE Rancher

Rancher is a high-performing open-source Kubernetes management platform. For those who operate and manage multiple Kubernetes clusters on-premises and in the cloud, Rancher is an attractive solution because of its powerful features that unify the management and security of multiple Kubernetes clusters. Rancher can deploy and manage clusters running on on-premises infrastructure, such as VMware vSphere, and on cloud providers such as Azure AKS, Google GKE, and Amazon EKS. Rancher also enhances and simplifies security with centralized user authentication, access control, and observability. The integrated App Catalog provides easy access to third-party applications and simplifies the deployment of complex applications.

The benefits of deploying Dell CSI drivers through the Rancher App Catalog are:

  • The App Catalog is based on Helm, a Kubernetes package manager. Dell CSI drivers include Helm charts in the App Catalog to facilitate the installation and deployment process.
  • Both Dell and SUSE have validated the deployment process.
  • A single management UI manages all aspects of your Kubernetes clusters.
  • User authentication and access control are enhanced and centralized.
  • The deployment process is simplified, with fewer commands and an intuitive HTML5 UI.
  • Pre-defined configurations are supplied. You can take the default values or make any necessary adjustments based on your needs.
  • Monitoring and troubleshooting are easy. You can view the status and log files of the cluster components and applications directly in the UI.

How to deploy the CSI driver in Rancher

Let me show you a simple deployment of the CSI driver in Rancher here.

NOTE: Dell CSI drivers are regularly updated for compatibility with the latest Kubernetes version. Keep in mind that the information in this article might change in future releases. To get the latest updates, check the documentation on the Dell Github page (https://dell.github.io/csm-docs/docs).

1.  First, review the requirements of the CSI driver. On the Rancher home page, click on a managed cluster. Then, on the left side panel, go to Apps > Charts. In the filter field, enter dell csi to narrow down the results. Click on the CSI driver you want to install. The install page displays the driver’s readme file that describes the overall installation process and the prerequisites for the driver. Perform all necessary prerequisite steps before moving on to the next step.

These prerequisites include, but are not limited to, ensuring that the iSCSI software, NVMe software, and NFS software are available on the target Kubernetes nodes, and that FC zoning is in place.

2.  Create a new namespace in which the CSI driver software will be installed. On the left side panel, go to Cluster > Projects/Namespaces and create a new namespace: csi-powerstore for PowerStore or unity for Unity XT.

You can optionally define the Container Resource Limit if desired.

3.  The CSI driver requires the array connection and credential information. Create a secret to store this information for the storage systems. On the left side panel, go to Cluster > Storage > Secrets.

For PowerStore:

  • Create an Opaque (generic) secret using a key-value pair in the csi-powerstore namespace.
  • The secret name must be powerstore-config, with the single key name config. Copy the contents of the secret.yaml file to the value field. A sample secret.yaml file with parameter definitions is available here.
  • You can define multiple arrays in the same secret.

For Unity XT:

  • Create an Opaque (generic) secret using the key-value pair in the unity namespace.
  • The secret name must be unity-creds, with the single key name config. Copy the contents of the secret.yaml file to the value field. A sample secret.yaml file is available here.
  • You can define multiple arrays in the same secret.
  • The Unity XT CSI driver also requires a certificate secret for Unity XT certificate validation. The secrets are named unity-certs-0, unity-certs-1, and so on. Each secret contains the X509 certificate of the CA that signed the Unisphere SSL certificate, in PEM format. More information is available here.
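As an illustration of this step for PowerStore, the secret created through the UI is roughly equivalent to a manifest along these lines. Treat it strictly as a sketch: the key names inside the config block vary between driver versions, so copy the real structure from the sample secret.yaml referenced above.

apiVersion: v1
kind: Secret
metadata:
  name: powerstore-config          # name required by the driver
  namespace: csi-powerstore
type: Opaque
stringData:
  config: |
    arrays:
      - endpoint: "https://<powerstore-mgmt-ip>/api/rest"   # assumed endpoint format
        globalID: "<array-global-id>"
        username: "<username>"
        password: "<password>"
        skipCertificateValidation: true                     # assumed key name
        isDefault: true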

4.  Now, we are ready to install the CSI driver. Go to Apps > Charts and select the CSI driver. Click Install to start the guided installation process.

Select the appropriate namespace (csi-powerstore or unity) for the corresponding driver.

The guided installation also pre-populates the driver configuration in key/value parameters. Review and modify the configuration to suit your requirements. You can find detailed information about these parameters on the Helm Chart info page (click the View Chart Info button on the installation page). (A copy of the values.yaml file that the installation uses is available here for PowerStore and here for Unity XT.)

When the installation starts, you can monitor its progress in Rancher and observe the different resources being created and started. The UI also offers easy access to the resource log files to help troubleshoot issues during the installation.

5.  Before using the CSI driver to provision Dell storage, we need to create StorageClasses that define which storage array to use and its attributes. Persistent Volume Claims reference these StorageClasses to dynamically provision persistent storage.

To create StorageClasses for Dell storage systems, use the Import YAML function. If you use the Create function under Storage > StorageClasses, the UI does not offer the Dell storage provisioners in the drop-down menu. Copy and paste the contents of the StorageClass yaml file into the Import Dialog window. (Sample PowerStore StorageClasses yaml files are available here; sample Unity XT StorageClasses yaml files are available here.)
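As a rough sketch of what gets pasted into the Import Dialog for PowerStore (the parameter names here are assumptions, so rely on the sample files referenced above for version-matched definitions):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerstore-xfs
provisioner: csi-powerstore.dellemc.com   # PowerStore CSI provisioner
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  arrayID: "<array-global-id>"            # assumed parameter name
  csi.storage.k8s.io/fstype: "xfs"

Using WaitForFirstConsumer keeps volume creation aligned with pod scheduling, which is generally a sensible default for block storage.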

Congratulations! You have now deployed the Dell CSI driver in a Kubernetes Cluster using Rancher and are ready to provision persistent storage for the cluster applications.

Conclusion

Deploying and managing Dell CSI drivers on multiple Kubernetes clusters is made simple with Rancher. Dell storage systems are ideal storage platforms for containers to satisfy the need for flexible, scalable, and highly efficient storage. The powerful features of Rancher streamline the deployment and operation of Kubernetes clusters with unified management and security.

Resources

Author: Henry Wong, Senior Principal Engineering Technologist

Read Full Blog
  • APEX Private Cloud
  • SUSE Rancher
  • Terraform

Using Terraform to Deploy SUSE Rancher in an APEX Private Cloud Environment

Juan Carlos Reyes

Fri, 03 Feb 2023 11:39:04 -0000

Automating deployments and managing hardware through code is a beautiful thing. Not only does it free up time, it also enables environment standardization. Infrastructure as Code (IaC) manages and provisions infrastructure through machine-readable definition files rather than manual processes.

In this blog, I demonstrate how to use HashiCorp’s Terraform, a popular open-source Infrastructure-as-Code software tool, in an APEX Private Cloud environment to deploy a fully functional SUSE Rancher cluster. By doing so, infrastructure engineers can set up environments for developers in a short time. All of this is accomplished by leveraging vSphere, Rancher, and standard Terraform providers.

Note: The PhoenixNAP GitHub served as the basis for this deployment.

 

Pre-requisites

In this blog, we assume that the following steps have been completed:

  1. Network – The deployment consists of a three-node RKE2 cluster and an optional HAProxy load balancer. Assign three IPs and DNS names for the RKE2 nodes, and an IP and DNS name for the single load balancer.
  2. Virtual Machine Gold Image – This virtual machine template will be the basis for the RKE2 nodes and the load balancer. To create a SLES 15 SP4 template with the required add-ons, see the blog Using HashiCorp Packer in Dell APEX Private Cloud.
  3. Admin Account – Have a valid vCenter account with enough permissions to create and provision components.
  4. GitHub Repository – The GitHub repo contains all the files and templates needed to follow along. These are the Terraform files used to provision Rancher:
    • Main.tf file – Consumes the secrets, tokens, and certificates defined in the variables.tf file, declares the providers and resources that create the vSphere infrastructure, and runs the provisioners. This is where the Rancher integration is outlined.
    • Versions.tf – Specifies which versions of Terraform and of the Rancher and vSphere providers are required to run the code without syntax or compatibility errors.
    • Variables.tf – This file can be used for defining defaults for certain variables. It includes CPU, memory, and vCenter information.
    • Templates Folder – RKE2 clustering requires a configuration YAML file that contains the RKE2 token. This folder stores the Terraform templates that are used to create the RKE2 configuration files (a sketch of the rendered files appears later in this post). These files contain Subject Alternative Name (SAN) information and the secret required for subsequent nodes to join the cluster. Keeping the configuration in template form obfuscates the secret, making it safer to upload the code to a GitHub repo.
    • HAProxy Folder – This folder contains the certificate privacy enhanced mail (PEM) file, key, and configuration file for the HAProxy load balancer.
    • Files folder – The configuration files are stored after being created from the templates. You also find the scripts to deploy RKE2 and Rancher.

Creating the HAProxy Node

The first virtual machine defined in the Main.tf file is the HAProxy load balancer. The “vsphere_virtual_machine” resource has a standard configuration, assigning memory, CPU, network, and so on. The critical part comes when we start provisioning files onto the virtual machine cloned from the template. We use file provisioners to add the HAProxy configuration, certificate, and key files to the virtual machine.

Note: HashiCorp recommends using provisioners as a last-resort option, because they do not track the state of the objects they modify and they require credentials that can be exposed if not handled appropriately.

I used the following command to create a valid self-signed certificate in SLES 15. The name of the PEM file must be “cacerts.pem” because Rancher requires that name for the certificate to propagate properly.

openssl req -newkey rsa:2048 -nodes -keyout certificate.key -x509 -days 365 -out cacerts.pem -addext "subjectAltName = DNS:rancher.your.domain"

Next, we use a remote execution provisioner that outlines the commands to install and configure HAProxy in the virtual machine:

    inline = [
      "sudo zypper addrepo https://download.opensuse.org/repositories/server:http/SLE_15/server:http.repo",
      "sudo zypper --gpg-auto-import-keys ref",
      "sudo zypper install -y haproxy",
      "sudo mv /tmp/haproxy.cfg /etc/haproxy",
      "sudo mv /tmp/certificate.pem /etc/ssl/",
      "sudo mv /tmp/certificate.pem.key /etc/ssl/",
      "sudo mkdir /run/haproxy/",
      "sudo systemctl enable haproxy",
      "sudo systemctl start haproxy"
    ]

We add a standard openSUSE repo that gives access to HAProxy binaries compatible with SLES 15. Next, the HAProxy installation takes place and the critical files are moved to their correct locations. The last couple of systemctl commands enable and start the HAProxy service.

Here is the sample HAProxy configuration file:

global
        log /dev/log    daemon
        log /var/log    local0
        log /var/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon
 
        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private
        maxconn 1024
        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
         ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
        ssl-default-bind-options ssl-min-ver TLSv1.2 prefer-client-ciphers
         tune.ssl.default-dh-param 2048
        cpu-map  1 1
        cpu-map  2 2
        cpu-map  3 3
        cpu-map  4 4
 
defaults
        log     global
        mode    http
        option  httplog
        option   forwardfor
        option  dontlognull
        timeout connect 50000s
        timeout client  50000s
        timeout server  50000s
        retries 4
        maxconn 2000000
 
frontend www-http
        mode http
        stats enable
        stats uri /haproxy?stats
        bind *:80
        http-request set-header X-Forwarded-Proto http
        option http-server-close
        option forwardfor except 127.0.0.1
        option forwardfor header X-Real-IP
        # MODIFY host
        acl host_rancher hdr(host) -i rancher.apca1.apextme.dell.com
        acl is_websocket hdr(Upgrade) -i WebSocket
        acl is_websocket hdr_beg(Host) -i wss
        use_backend rancher if host_rancher
 
frontend www-https
        bind *:443 ssl crt /etc/ssl/certificate.pem alpn h2,http/1.1
        option http-server-close
        http-request set-header X-Forwarded-Proto https if { ssl_fc }
        redirect scheme https code 301 if !{ ssl_fc }
        option forwardfor except 127.0.0.1
        option forwardfor header X-Real-IP
        # MODIFY host
        acl host_rancher hdr(host) -i rancher.apca1.apextme.dell.com
        acl is_websocket hdr(Upgrade) -i WebSocket
        acl is_websocket hdr_beg(Host) -i wss
        use_backend rancher if host_rancher
 
frontend kubernetes
        # MODIFY IP
        bind 100.80.28.72:6443
        option tcplog
        mode tcp
        default_backend kubernetes-master-nodes
 
frontend supervisor_FE
        # MODIFY IP
        bind 100.80.28.72:9345
        option tcplog
        mode tcp
        default_backend supervisor_BE
 
backend rancher
        redirect scheme https code 301 if !{ ssl_fc }
        mode http
        balance roundrobin
        option httpchk HEAD /healthz HTTP/1.0
        # MODIFY IPs
        server rke-dev-01 100.80.28.73:80 check
        server rke-dev-02 100.80.28.74:80 check
        server rke-dev-03 100.80.28.75:80 check
 
backend kubernetes-master-nodes
        mode tcp
        balance roundrobin
        option tcp-check
        # MODIFY IPs
        server rke-dev-01 100.80.28.73:6443 check
        server rke-dev-02 100.80.28.74:6443 check
        server rke-dev-03 100.80.28.75:6443 check
 
backend supervisor_BE
        mode tcp
        balance roundrobin
        option tcp-check
        # MODIFY IPs
        server rke-dev-01 100.80.28.73:9345 check
        server rke-dev-02 100.80.28.74:9345 check
        server rke-dev-03 100.80.28.75:9345 check

To troubleshoot the configuration file, you can execute the following command:

haproxy -f /path/to/haproxy.cfg -c

Another helpful troubleshooting tip for HAProxy is to inspect the status page for more information about connections to the load balancer. This is defined in the configuration file as stats uri /haproxy?stats. Use a browser to navigate to the page http://serverip/haproxy?stats.

After HAProxy starts successfully, the script deploys the RKE2 nodes. Again, the initial infrastructure configuration is standard. Let’s take a closer look at the files config.yaml and script.sh that are used to configure RKE2. The script.sh file contains the commands that download and start the RKE2 service on the node. The script.sh file is copied to the virtual machine via the file provisioner and made executable in the remote-exec provisioner. In a separate file provisioner block, the config.yaml file is moved to a newly created rke2 folder, the default location where the rke2 service looks for such a file.

Here is a look at the script.sh file:

sudo curl -sfL https://get.rke2.io | sudo sh -
sudo systemctl enable rke2-server.service
n=0
until [ "$n" -ge 5 ]
do
   sudo systemctl start rke2-server.service && break  # substitute your command here
   n=$((n+1))
   sleep 60
done

Notice that the start service command is in a loop to ensure that the service is running before moving on to the next node.

Next, we make sure to add the host information of the other nodes to the current virtual machine’s hosts file.

The subsequent two nodes follow the same sequence of events but use the config_server.yaml file, which contains the first node’s API address. The final node has an additional step: using the rancher_install.sh file to install Rancher on the cluster.
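Before looking at that script, here is a rough sketch of what the rendered RKE2 configuration files might contain. The values are placeholders, and the key names follow the standard RKE2 configuration file format rather than the repo itself:

# config.yaml on the first node
token: <shared-rke2-token>          # the secret rendered from the template
tls-san:
  - rancher.your.domain             # SAN entries for the cluster certificates
  - <load-balancer-ip>

# config_server.yaml on the second and third nodes adds the first node's
# supervisor address so they can join the cluster:
# server: https://<first-node-ip>:9345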

Here is a look at the rancher_install.sh file:

echo "Create ~/.kube"
mkdir -p /root/.kube
echo "Grab kubeconfig"
while [ ! -f /etc/rancher/rke2/rke2.yaml ]
do
  echo "waiting for kubeconfig"
  sleep 2
done
echo "Put kubeconfig to /root/.kube/config"
cp -a /etc/rancher/rke2/rke2.yaml /root/.kube/config
echo "Wait for nodes to come online."
i=0
echo "i have $i nodes"
while [ $i -le 2 ]
do
   i=`/var/lib/rancher/rke2/bin/kubectl get nodes | grep Ready | wc -l`
  echo I have: $i nodes
  sleep 2s
done
echo "Wait for complete deployment of node three, 60 seconds."
sleep 60
echo "Install helm 3"
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
echo "Modify ingress controller to use-forwarded-headers."
cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        use-forwarded-headers: "true"
EOF
echo "Install stable Rancher chart"
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
/var/lib/rancher/rke2/bin/kubectl create namespace cattle-system
/var/lib/rancher/rke2/bin/kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem=/tmp/cacerts.pem
# Modify hostname and bootstrap password if needed
helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.your.domain \
  --set bootstrapPassword=admin \
  --set ingress.tls.source=secret \
  --set tls=external \
  --set additionalTrustedCAs=true \
  --set privateCA=true
 
/var/lib/rancher/rke2/bin/kubectl -n cattle-system create secret generic tls-ca-additional --from-file=ca-additional.pem=/tmp/cacerts.pem
echo "Wait for Rancher deployment rollout."
/var/lib/rancher/rke2/bin/kubectl -n cattle-system rollout status deploy/rancher

Before the installation begins, there is a step that waits for all the RKE2 nodes to be ready. This rancher_install.sh script follows the installation steps from the Rancher website. For this example, we are using an external load balancer, so we modified the ingress controller to use-forwarded-headers, as stated in the Rancher documentation. The other key parts of this script are the default bootstrap password and the TLS/CA flags assigned in the helm command. For the administrator password to be changed successfully, the bootstrap password must match the password used by the Rancher Terraform provider. The TLS and CA flags let the pods know that a self-signed certificate is being used and that they should not create additional internal certificates.

Note: The wait timers are critical for this deployment because they allow the cluster to be fully available before moving to the next step. Lower wait times can lead to the processes hanging and leaving uncompleted steps.

Navigate to the working directory, then use the following command to initialize Terraform:

terraform init

This command verifies that the appropriate versions of the project’s providers are installed and available.

Next, execute the ‘plan’ and ‘apply’ commands.

terraform plan

terraform apply -auto-approve

 

The deployment takes about 15 minutes. After a successful deployment, users can log in to Rancher and deploy downstream clusters (which can also be deployed using Terraform). This project also has a non-HAProxy version if users are interested in that deployment. The main difference is setting up manual round-robin load balancing within your DNS provider.

With this example, we have demonstrated how engineers can use Terraform to set up a SUSE Rancher environment quickly for their developers within Dell APEX Private Cloud.

Author: Juan Carlos Reyes

Read Full Blog
  • VxRail
  • SUSE Rancher

Find Your Edge: Running SUSE Rancher and K3s with SLE Micro on Dell VxRail

Jason Marques

Wed, 28 Sep 2022 10:26:37 -0000

Examples of the Dell Technologies and SUSE collaboration have already begun to bear fruit, with work done to validate SUSE Rancher and RKE2 on Dell VxRail. You can find more information on that in a Solution Brief here and blog post here. This initial example highlighted deploying and operating Kubernetes clusters in a core datacenter use case.

But what about providing examples of jointly validated solutions for near-edge use cases? More and more organizations are looking to deploy solutions at the edge, since that is increasingly where data is generated and analyzed. As a result, this is where the focus of our ongoing technology validation efforts recently moved.

Our latest validation exercise featured deploying SUSE Rancher and K3s with the SUSE Linux Enterprise Micro operating system (SLE Micro) and running them on Dell VxRail hyperconverged infrastructure. These technologies were installed in a non-production lab environment by a team of SUSE and Dell VxRail engineers. All the installation steps followed the SUSE documentation without any unique VxRail customization. This illustrates the seamless compatibility of these technologies, allowing for standardized deployment practices using the out-of-the-box capabilities of both VxRail and the SUSE products.

Solution Components Overview

Before jumping into the details of the solution validation itself, let’s do a quick review of the major components that we used.

  • SUSE Rancher is a complete software stack for teams that are adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes (K8s) clusters, including lightweight K3s clusters, across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.
  • K3s is a CNCF sandbox project that delivers a lightweight yet powerful certified Kubernetes distribution.
  • SUSE Linux Enterprise Micro (SLE Micro) is an ultra-reliable, lightweight operating system purpose built for containerized and virtualized workloads.
  • Dell VxRail is the only fully integrated, pre-configured, and tested HCI system optimized with VMware vSphere, making it ideal for customers who want to leverage SUSE Rancher, K3s, and SLE Micro through vSphere to create and operate lightweight Kubernetes clusters on-premises or at the edge.

Validation Deployment Details

Now, let’s dive into the details of the deployment for this solution validation.

First, we deployed a single VxRail cluster with these specifications:

  • 4 x VxRail E660F nodes running VxRail 7.0.370 version software
    • 2 x Intel® Xeon® Gold 6330 CPUs
    • 512 GB RAM
    • Broadcom Adv. Dual 25 Gb Ethernet NIC
    • 2 x vSAN Disk Groups:
      • 1 x 800 GB Cache Disk
      • 3 x 4 TB Capacity Disks
  • vSphere K8s CSI/CNS

After we built the VxRail cluster, we deployed a set of three virtual machines running SLE Micro 5.1. We installed a multi-node K3s cluster running version 1.23.6 with Server and Agent services, Etcd, and a ContainerD container runtime on these VMs. We then installed SUSE Rancher 2.6.3 on the K3s cluster.  Also included in the K3s cluster for Rancher installation were Fleet GitOps services, Prometheus monitoring and metrics capture services, and Grafana metrics visualization services. All of this formed our Rancher Management Server. 
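The installation followed the standard SUSE documentation, so the exact configuration used in the lab was not published. As a rough, hypothetical illustration only, a multi-node K3s server with embedded etcd is typically bootstrapped with an /etc/rancher/k3s/config.yaml along these lines (all values here are placeholders):

# First K3s server
token: <shared-cluster-token>
cluster-init: true                  # start the embedded etcd datastore
tls-san:
  - rancher-mgmt.example.local      # placeholder management hostname

# Additional servers join with:
# server: https://<first-server-ip>:6443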

We then used Rancher to deploy managed K3s workload clusters. In this validation, we deployed two managed K3s workload clusters from the Rancher Management Server. These managed workload clusters were a single-node K3s cluster and a six-node K3s cluster, all running on vSphere VMs with the SLE Micro operating system installed.

You can easily modify this validation to be more highly available and production ready. The following diagram shows how to incorporate more resilience.

Figure 1: SUSE Rancher and K3s with SLE Micro on Dell VxRail - Production Architecture

The Rancher Management Server stays the same, because it was already deployed on a highly available four-node VxRail cluster with three SLE Micro VMs running a multi-node K3s cluster. As a production best practice, managed K3s workload clusters should run on separate highly available infrastructure from the Rancher Management Server to maintain separation of management and workloads. In this case, you can deploy a second four-node VxRail cluster. For the managed K3s workload clusters, a minimum of three nodes provides high availability for the Etcd services and the workloads running on them. However, three nodes are not enough to also provide node separation between the Etcd services and those workloads. To remedy this, you can deploy a minimum six-node K3s cluster (as shown in the diagram with the K3s Kubernetes 2 Prod cluster).

Summary

Although this validation features Dell VxRail, you can also deploy similar architectures using other Dell hardware platforms, such as Dell PowerEdge and Dell vSAN Ready Nodes running VMware vSphere! 

For more information and to see other jointly validated reference architectures using Dell infrastructure with SUSE Rancher, K3s, and more, check out the following resource pages and documentation. We hope to see you back here again soon.

Author: Jason Marques

Twitter: @vWhippersnapper

Dell Resources

SUSE Resources

Read Full Blog
  • PowerProtect Data Manager
  • SUSE Rancher

Protect your SUSE Rancher managed RKE downstream Kubernetes workloads with Dell EMC PowerProtect Data Manager

Kumar Vinod Kumar Kumaresan

Mon, 19 Sep 2022 13:46:53 -0000

Together, We Stop at Nothing!

We have been continuously working to extend the level of support for Kubernetes with Dell EMC PowerProtect Data Manager, to protect Kubernetes workloads on different platforms.

Continuing on this path, we now protect SUSE Rancher managed Kubernetes workloads with PowerProtect Data Manager, taking advantage of our partnership with SUSE.

Kubernetes clusters and containers have become a popular option for deploying enterprise applications in the cloud and in on-premises environments. SUSE Rancher is a Kubernetes management platform that simplifies cluster installation and operations, whether they are on-premises, in the cloud, or at the edge, giving you the freedom to build and run containerized applications. PowerProtect Data Manager protects SUSE Rancher managed Kubernetes workloads and ensures high availability and consistent, reliable backup and restore for Kubernetes workloads during normal operations or in a disaster recovery situation.

Protect SUSE Rancher managed Rancher Kubernetes Engine (RKE) downstream workloads with PowerProtect Data Manager

PowerProtect Data Manager enables customers to protect, manage, and recover data for on-premises, virtualized, or cloud deployments. Using PowerProtect Data Manager, customers can discover, protect, and restore workloads in a SUSE Rancher managed Kubernetes environment to ensure that the data is easy to backup and restore.

PowerProtect Data Manager enhances the protection by sending the data directly to the Dell EMC PowerProtect DD series appliance to gain benefits from unmatched efficiency, deduplication, performance, and scalability. See the solution brief and this technical white paper for more details.

About SUSE Rancher and RKE

SUSE Rancher is an enterprise computing platform for running Kubernetes for on-premises, cloud, and edge environments. With Rancher, you can form your own Kubernetes-as-a-Service. You can create, upgrade, and manage Kubernetes clusters. Rancher can set up clusters by itself or work with a hosted Kubernetes provider. It addresses the operational and security challenges of managing multiple Kubernetes clusters anywhere. SUSE Rancher also provides IT operators and development teams with integrated tools for building, deploying, and running cloud-native workloads.

SUSE Rancher supports the management of CNCF-Certified Kubernetes distributions, such as Rancher Kubernetes Engine (RKE). RKE is a certified Kubernetes distribution for both bare-metal and virtualized servers.

Protecting data by integrating SUSE Rancher managed RKE downstream Kubernetes clusters with PowerProtect Data Manager

You can integrate PowerProtect Data Manager with SUSE Rancher managed Kubernetes clusters through the Kubernetes APIs to discover namespaces and their associated persistent resources, PersistentVolumeClaims (PVCs). PowerProtect Data Manager discovers the Kubernetes clusters using the IP address or fully qualified domain name (FQDN), and uses the discovery service account and the token kubeconfig file to integrate with the kube-apiserver.
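As a rough illustration of what that discovery service account could look like, here is a sketch with hypothetical names throughout; the PowerProtect Data Manager documentation defines the exact service account and RBAC it requires:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ppdm-discovery              # hypothetical name
  namespace: powerprotect
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ppdm-discovery-binding      # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin               # assumption; PPDM documents its own minimal role
subjects:
  - kind: ServiceAccount
    name: ppdm-discovery
    namespace: powerprotect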

PowerProtect Data Manager integrates with SUSE Rancher managed Kubernetes clusters for data protection in the following ways:

  • Directly connecting to the RKE downstream single node with controlplane and etcd roles.
  • Through an external load balancer, when there are multiple RKE nodes for high availability with controlplane and etcd roles in an RKE downstream cluster.

SUSE Rancher managed RKE downstream Kubernetes clusters integration with PowerProtect Data Manager

Adding the RKE downstream Kubernetes cluster to PowerProtect Data Manager as an asset source

Once the Kubernetes cluster is added as an asset source in PowerProtect Data Manager and the discovery is complete, the associated namespaces are available as assets for protection. PowerProtect Data Manager protects two types of Kubernetes cluster assets: namespaces and PVCs. Note that PPDM also protects the associated metadata for namespaces and cluster resources, including secrets, ConfigMaps, custom resources, RoleBindings, and so on.

During the discovery process, PowerProtect Data Manager creates the following namespaces in the cluster:

velero-ppdm: This namespace contains a Velero pod that backs up metadata and stages it to the target storage in bare-metal environments, and that performs PVC snapshot and metadata backup for VMware cloud native storage.

powerprotect: This namespace contains a PowerProtect controller pod that drives persistent volume claim snapshots and backups and sends the backups to the target storage using dynamically deployed cProxy pods.

Kubernetes uses persistent volumes to store persistent application data. Persistent volumes are created on external storage and then attached to a particular pod using PVCs. PVCs are included along with the other namespace resources in PowerProtect Data Manager backup and recovery operations. Dell EMC PowerStore, PowerMax, XtremIO, and PowerFlex storage platforms all come with CSI plugins to support containerized workloads running on Kubernetes.
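To make that concrete, here is a minimal sketch of how a pod attaches persistent storage through a PVC (names here are purely illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-data
  namespace: demo
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data                 # PVC backed by a Dell CSI StorageClass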

With this easy integration for data protection with PowerProtect Data Manager, Dell Technologies empowers Kubernetes admins to perform backup/recovery operations and ensure SUSE Rancher managed Kubernetes cluster workloads are available, consistent, durable, and recoverable.

For more details on how to protect SUSE Rancher managed Kubernetes workloads with PowerProtect Data Manager, see the white paper SUSE Rancher and RKE Kubernetes cluster using CSI Driver on Dell EMC PowerFlex.

Read Full Blog
  • APEX
  • data center

Empower DevOps with On-premises Automation for Multi-cloud World

Itzik Reich

Mon, 19 Sep 2022 13:46:54 -0000

Today, Dell Technologies is announcing support for every major hyperscaler and container orchestration platform on the market, as well as a new Dell Technologies Developer portal. The rise of DevOps is leading businesses and IT administrators to rethink how they develop and manage their infrastructure.
Being part of a DevOps team today requires the ability to write code that delivers end-to-end automation and management of IT operations. At the core of this new world are portable cloud-native applications, built from microservices and managed by Kubernetes, that can be deployed on-premises and in the cloud.


But even the most sophisticated modern architectures can be hindered by a lack of control over containers and Kubernetes deployments. For example, a recent TechTarget study on cloud native enablement suggests¹ that organizational silos and infrastructure challenges lead respondents to attribute their delays in cloud native acceleration roughly equally to scale, manual processes, and complexity. And with complexity comes reduced developer productivity and substandard business outcomes.

Graphic indicating opportunities to solve DevOps challenges.

Solving customer complexity and challenges

Dell Technologies collaborates with a broad ecosystem of public cloud providers to help our customers solve these challenges and place data and applications where it makes the most sense for their business needs. Our portfolio of DevOps-ready platforms supports DevOps teams in producing faster business outcomes with intelligent, automated, on-premises infrastructure that eliminates manual processes and accelerates IT’s ability to rapidly provision compute and storage resources. DevOps-ready platforms let customers run their Kubernetes orchestration in the public cloud or on-premises. These platforms support every major hyperscaler and container orchestration platform on the market, including Amazon EKS, Google Cloud Anthos, Microsoft Azure Arc, Red Hat OpenShift, SUSE Rancher and VMware Tanzu, and are based on Dell HCI integrated systems and modern enterprise storage platforms.

Expanding our DevOps-ready platforms to offer even more choice

Today, Dell Technologies is deepening its support of Amazon EKS Anywhere with the addition of Amazon EKS Anywhere on Dell PowerStore and PowerFlex. EKS Anywhere is a deployment option that enables organizations to create and operate Kubernetes clusters on-premises using VMware vSphere while making it possible to have connectivity and portability to AWS public cloud environments. Deploying EKS Anywhere on Dell Technologies infrastructure streamlines application development and delivery by allowing organizations to easily create and manage on-premises Kubernetes clusters. EKS Anywhere is also supported for Dell VxRail hyperconverged infrastructure.

Additionally, Dell is strengthening its partnership with SUSE, announcing support for SUSE Rancher 2.6 on VxRail to provide full lifecycle management support for clusters in Microsoft AKS, Google GKE and Amazon EKS Anywhere, giving customers freedom to mix and match solutions that best fit their business strategy. “SUSE is excited that our joint customers now have the ability to run SUSE Rancher and RKE2 on-premises on VxRail, Dell’s leading HCI platform,” said Rachel Cassidy, SUSE senior vice president of Global Channel & Alliances. “VxRail’s integrated, full stack automation and lifecycle management streamlines infrastructure operations, reducing complexity to enable DevOps teams to focus on application development across multi-cloud environments. The latest release of SUSE Rancher and RKE2 fortify IT environments by strengthening security and compliance integrations while providing full lifecycle management for hosted Kubernetes clusters.”

With the flexibility to run multiple container platforms on a single Dell DevOps-ready platform that automates cluster management, customers can achieve seamless connectivity to public clouds and finally realize how easy the adoption of multi-cloud container orchestration deployments can be within the parameters of IT processes and governance, all while enjoying the reliability, security, and world-class global support that comes with Dell infrastructure.

Being able to run traditional and cloud native applications on DevOps-ready platforms is also a key element to bringing together the traditional IT administration models that are often separated by function. Overall, these consistent, trusted platforms are attractive to the IT operators and/or DevOps teams who operate their own data centers for performance, regulation, security, compliance, and costs.

Capabilities of Dell Technologies' Developer Portal.

Dell Technologies developer portal is your DevOps playground

Dell Technologies has a history of supporting open ecosystems that put the customer first, and the latest way we are doing so is through an accessible development tool playground. To help organizations deliver applications and services faster, DevOps teams want easier access to open-source tools and products that can aid in the delivery of infrastructure-as-code and streamline CI/CD processes. Why not provide them through the infrastructure portfolio that they already trust and build on?

It is very exciting to announce a new destination for our DevOps organizations this week with the “Dell Technologies Developer” portal. This will serve as a one-stop shop for full-stack developers, DevOps engineers, Site Reliability Engineers, and basically any IT operator looking to automate infrastructure deployment and management. DevOps teams can simplify management of dev-test and production environments and accelerate the adoption of microservices/container-based architectures with enterprise reliability and security across many options. DevOps engineers will be able to script their operations to match the speed of development and at scale, with control, while accelerating the process. This is where the crossroads of traditional and modern innovation through code enables consumers to access qualified third-party automation tools, SDKs, GitHub navigation, and APIs across most of their existing and planned infrastructure platforms. Supporting Dell’s full portfolio from client to data center, the developer portal not only provides an exciting destination for the user but will also offer a robust community to interact with in the coming year.

A better way to empower DevOps

We are empowering DevOps teams to collaborate with their software and cloud native application developers more effectively by helping them cross the bridge between traditional on-premises operations and modern IT operations in a multi-cloud universe. With our broad ecosystem flexibility along with easier access to open-source tools, customers gain control over their multi-cloud strategy and simplify experiences for both IT Ops and developers. Learn more about DevOps-ready platforms and navigate your way through the Dell Technologies Developer portal.

Other resources:

Kubernetes and containers solutions and offers
Follow Itzik’s technologies blog space

About the Author: Itzik Reich

Itzik is a VP of Technologists for Dell Technologies. He runs a team of Infrastructure Solutions Group technologists who are the go-to when it comes to Storage, Servers, and Data Protection. Itzik also has a special focus on Dell’s container strategy.

 

 

Read Full Blog