
Dell Container Storage Modules 1.5 Release
Thu, 12 Jan 2023 19:27:23 -0000
Made available on December 20th, 2022, the 1.5 release of our flagship cloud-native storage management products, Dell CSI Drivers and Dell Container Storage Modules (CSM), is here!
See the official changelog in the CHANGELOG directory of the CSM repository.
First, this release extends support for Red Hat OpenShift 4.11 and Kubernetes 1.25 to every CSI Driver and Container Storage Module.
Avid customers may recall a few new additions to the portfolio that were made available in tech preview in the previous CSM release (1.4). Namely:
- CSM Application Mobility: Enables the movement of Kubernetes resources and data from one cluster to another, regardless of source and destination (on-prem, co-location, cloud) and of backend storage type (Dell or non-Dell)
- CSM Secure: Allows for on-the-fly encryption of PV data
- CSM Operator: Manages CSI and CSM as a single stack
Building on these three new modules, Dell Technologies is adding deeper capabilities and major improvements as part of today’s 1.5 release for CSM, including:
- CSM Application Mobility: Users can now schedule backups
- CSM Secure: Users can now “rekey” an encrypted PV
- CSM Operator: Support added for Dell’s PowerFlex CSI Driver, the Authorization Proxy Server, and the CSM Observability module for Dell PowerFlex and Dell PowerScale
For the platform updates included in today’s 1.5 release, the major new features are:
- It is now possible to set the Quality of Service of a Dell PowerFlex persistent volume. Two new parameters can be set in the StorageClass (bandwidthLimitInKbps and iopsLimit) to limit the consumption of a volume; see the sketch after this list. Watch this short video to learn how it works.
- For Dell PowerScale, when a Kubernetes node is decommissioned from the cluster, the NFS export created by the driver now ignores unresolvable hosts and cleans them up later.
- Last but not least, when your Kubernetes cluster runs on top of virtual machines backed by VMware, the CSI driver can mount Fibre Channel-attached LUNs.
This feature is named “Auto RDM over FC” in the CSI/CSM documentation.
The concept is that the CSI driver connects to both the Unisphere and vSphere APIs to create the respective objects.
When deployed with "Auto-RDM", the driver can only function in that mode; it is not possible to combine iSCSI and FC access within the same driver installation.
The same limitation applies to RDM usage in general. You can learn more about it at RDM Considerations and Limitations on the VMware website.
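Back to the PowerFlex QoS feature: here is a minimal sketch of what such a StorageClass could look like. Only bandwidthLimitInKbps and iopsLimit come from this release; the provisioner name matches the open-source csi-vxflexos driver, and the pool, system ID, and limit values are illustrative placeholders, so check the CSI PowerFlex documentation for the exact schema:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerflex-limited
provisioner: csi-vxflexos.dellemc.com
parameters:
  storagepool: pool1                 # illustrative placeholder
  systemID: "1234567890abcdef"       # illustrative placeholder
  bandwidthLimitInKbps: "10240"      # cap throughput per volume
  iopsLimit: "100"                   # cap IOPS per volume
reclaimPolicy: Delete
allowVolumeExpansion: true
Every persistent volume provisioned from this class inherits both limits.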
That’s all for CSM 1.5! Feel free to share feedback or send questions to the Dell team on Slack: https://dell-csm.slack.com.
Author: Florian Coulombel
Related Blog Posts

Use Go Debugger’s Delve with Kubernetes and CSI PowerFlex
Wed, 15 Mar 2023 14:41:14 -0000
Some time ago, I faced a bug where it was important to understand the precise workflow.
One of the beauties of open source is that the user can also take the pilot seat!
In this post, we will see how to compile the Dell CSI driver for PowerFlex with a debugger, configure the driver to allow remote debugging, and attach an IDE.
Compilation
Base image
First, it is important to know that Dell and Red Hat are partners, and all CSI/CSM containers are certified by Red Hat.
This comes with a couple of constraints, one being that all containers use the Red Hat UBI Minimal image as a base image and, to be certified, extra packages must come from a Red Hat official repo.
CSI PowerFlex needs the e4fsprogs package to format file systems in ext4, and that package is missing from the default UBI repo. To install it, you have these options:
- If you build the image from a registered and subscribed RHEL host, the repos of the server are automatically accessible from the UBI image. This only works with podman build.
- If you have a Red Hat Satellite subscription, you can update the Dockerfile to point to that repo.
- You can use a third-party repository.
- You can go the old-school route and compile the package yourself (the source of that package is in the UBI source-code repo).
Here we’ll use an Oracle Linux mirror, which gives us access to binary-compatible packages without requiring registration or a Satellite subscription.
The Oracle Linux 8 repo is:
[oracle-linux-8-baseos]
name=Oracle Linux 8 - BaseOS
baseurl=http://yum.oracle.com/repo/OracleLinux/OL8/baseos/latest/x86_64
gpgcheck=0
enabled=1
And we add it to the final image in the Dockerfile with a COPY directive:
# Stage to build the driver image
FROM $BASEIMAGE@${DIGEST} AS final
# install necessary packages
# alphabetical order for easier maintenance
COPY ol-8-base.repo /etc/yum.repos.d/
RUN microdnf update -y && \
...
Delve
There are several debugger options available for Go. You can use the venerable GDB, a native solution like Delve, or an integrated debugger in your favorite IDE.
For our purposes, we prefer to use Delve because it allows us to connect to a remote Kubernetes cluster.
Our Dockerfile employs a multi-stage build. The first stage (named builder) builds the driver from the golang image; we can add Delve to it with the directive:
RUN go install github.com/go-delve/delve/cmd/dlv@latest
And then compile the driver.
In the final image, which is the driver image itself, we copy in both binaries:
# copy in the driver
COPY --from=builder /go/src/csi-vxflexos /
COPY --from=builder /go/bin/dlv /
To get meaningful results from the debugger, it is important to disable compiler optimizations and inlining when building the code (that is what -N and -l do below).
This is done in the Makefile with:
CGO_ENABLED=0 GOOS=linux GO111MODULE=on go build -gcflags "all=-N -l"
After rebuilding the image with make docker and pushing it to your registry, you need to expose the Delve port on the driver container. You can do this by adding the following lines to the driver container of the controller Deployment in your Helm chart:
ports:
  - containerPort: 40000
Alternatively, you can use the kubectl edit -n powerflex deployment command to modify the Kubernetes deployment directly.
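Exposing the port is only half the story: the controller container must also launch the driver under Delve rather than executing it directly. Here is a minimal sketch of that container spec fragment, assuming the /dlv and /csi-vxflexos paths used in the Dockerfile above; the real chart may start the driver through a wrapper script, so adapt accordingly:
command: ["/dlv"]
args:
  - "exec"
  - "/csi-vxflexos"        # the driver binary copied in earlier
  - "--headless=true"      # no local terminal; serve the debugger API instead
  - "--listen=:40000"      # same port we exposed above
  - "--api-version=2"
  - "--accept-multiclient" # let the IDE detach and re-attach
Any arguments the driver itself needs would go after a -- separator.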
Usage
Assuming that the build has been completed successfully and the driver is deployed on the cluster, we can expose the debugger socket locally by running the following command:
kubectl port-forward -n powerflex pod/csi-powerflex-controller-uid 40000:40000
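Alternatively, if you prefer a terminal to an IDE, the stock Delve client can attach to the same forwarded socket:
dlv connect 127.0.0.1:40000
From there, the usual break, continue, and print commands work just as they do locally.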
Next, we can open the project in our favorite IDE and ensure that we are on the same branch that was used to build the driver.
I used GoLand here, but VS Code can do remote debugging too.
We can now connect the IDE to that forwarded socket, run the debugger live, and watch execution stop at a breakpoint on the CreateVolume call.
The full code is here: https://github.com/dell/csi-powerflex/compare/main...coulof:csi-powerflex:v2.5.0-delve.
If you liked this information and need more deep-dive details on Dell CSI and CSM, feel free to reach out at https://dell-iac.slack.com.
Author: Florian Coulombel

Why Canonicalization Should Be a Core Component of Your SQL Server Modernization (Part 2)
Wed, 12 Apr 2023 16:01:55 -0000
In Part 1 of this blog series, I introduced the Canonical Model, a fairly recent addition to the Services catalog. Canonicalization will become the north star where all newly created work is deployed and managed, and its simplified approach also allows for vertical integration and for solutioning an ecosystem during the design work of a SQL Server modernization effort. The stack is where the "services" run, starting with bare metal and going all the way up to the application, across seven layers.
In this blog, I'll dive further into the details and operational considerations for the 7 layers of the fully supported stack, using by way of example the product that makes my socks roll up and down: a SQL Server Big Data Cluster. The SQL BDC is absolutely not the only "application" your IT team would address; this conversation applies to any "top of stack application" solution. One example is persistent storage for databases running in a container. We need to solution for the very top (SQL Server), the very bottom (Dell Technologies infrastructure), and the many optional layers in between.
First, a Word About Kubernetes
One of my good friends at Microsoft, Buck Woody, never fails to mention a particular truth in his deep-dive training sessions for Kubernetes. He says, “If your storage is not built on a strong foundation, Kubernetes will fall apart.” He’s absolutely correct.
Kubernetes, or "K8s", is an open-source container-orchestration system for automating the deployment, scaling, and management of containerized applications, and it has been the catalyst for many new business ventures, startups, and open-source projects. A Kubernetes cluster consists of the components that make up the control plane plus a set of machines called nodes.
To get a good handle on Kubernetes, give Global Discipline Lead Daniel Murray’s blog a read, “Preparing to Conduct Your Kubernetes Orchestra in Tune with Your Goals.”
The 7 Layers of Integration Up the Stack
Let's look at the vertical integration one layer at a time. This process and solution conversation is very fluid at the start: facts, IT desires, best-practice considerations, and IT maturity are all on the table. For me, at this stage, there is zero product conversation. For my data professionals, this is where we get on a whiteboard (or virtual whiteboard) and answer these questions:
- Any data?
- Anywhere?
- Any way?
Answers here will help drive our layer conversations.
From tin to application, we have:
Layer 1
The foundation of any solid design of the stack starts with Dell Technologies Solutions for SQL Server. Dell Technologies infrastructure is best positioned to drive consistency up and down the stack, and it is supplemented by the company's subject matter experts, who work with you to make optimal decisions concerning compute, storage, and backup.
The requisites and hardware components of Layer 1 are:
- Memory, storage-class memory (PMEM), and, as a consideration for later, maybe a bunch of all-flash storage. Suggested equipment: PowerEdge.
- Storage and CI components. Considerations here include use cases that will drive decisions made later within the layers. Encryption and compression in the mix? Repurposing? HA/DR conversations are also potentially spawned here. Suggested hardware: PowerOne, PowerStore, PowerFlex. Other considerations: structured or unstructured? Block? File? Object? Yes to all! Suggested hardware: PowerScale, ECS.
- It's hard to argue against the huge importance of a solid backup and recovery plan. Suggested hardware: the PowerProtect Data Management portfolio.
- Dell Networking. How are we going to "wire up": converged, hyper-converged, or up the stack of virtualization, containerization, and orchestration? How are all those aaS'es going to communicate? These questions concern the stack relationship integration and are key to getting it right.
Note: All of Layer 1 should consist of Dell Technologies products with deployment and support services. Full stop.
Layer 2
Now that we’ve laid our foundation from Dell Technologies, we can pivot to other Dell ecosystem solution sets as our journey continues, up the stack. Let’s keep going.
Considerations for Layer 2 are:
- Are we sticking with physical tin (bare-metal)?
- Should we apply a virtualization consolidation factor here? ESXi, Hyper-V, KVM? Virtualization is "optional" at this point. Again, the answers are fluid right now and it's okay to say, "it depends." We'll get there!
- Do we want to move to open source, or do we want the comfort of a fully supported stack? IMO, I like a fully supported model, although it comes at a cost. Implementing consolidation economics, however, as I mentioned above with virtualization and containerization, means doing more with less.
Note: Layer 2 is optional (dependent upon future layers) and would be fully supported by Dell Technologies, VMware, or Microsoft, with services provided by Dell Technologies Services or VMware Professional Services.
Layer 3
Choices in Layer 3 help drive decisions, and your comfort level on the maturity curve, all the way back to Layer 1. Additionally, at this juncture, we'll start talking about subsequent layers and thinking about the orchestration of containers with Kubernetes.
Considerations and some of the purpose-built solutions for Layer 3 include:
- Software-defined everything, such as Dell Technologies PowerFlex (formerly VxFlex).
- Network and storage, such as the Dell Technologies VMware family (vSAN) and the Microsoft Azure family of on-premises servers (Edge, Azure Stack Hub, Azure Stack HCI).
As we walk through the journey to a containerized database world, this level is also where we need to start thinking about the CSI (Container Storage Interface) driver and where it will be supported.
Note: Layer 3 is optional (dependent upon future layers) and would be fully supported by Dell Technologies, VMware, or Microsoft, with services provided by Dell Technologies Services or VMware Professional Services.
Layer 4
Ah, we've climbed up four rungs on the ladder and arrived at the Operating System, where things get really interesting! (Remember the days when a server was just tin and an OS?)
Considerations for Layer 4 are:
- Windows Server. Available in a few different forms: Desktop Experience, Core, and Nano.
- Linux OS. Many choices, including Red Hat, Ubuntu, and SUSE, just to name a few.
Note: Do you want to continue down the supported-stack path? If so, Microsoft and Red Hat are the answers here in terms of where you'll reach for "phone-a-friend" support.
Option: We could absolutely stop at this point and deploy our application stack. That's perfectly fine to do; it is a proven methodology.
Layer 5
Container technology – the ability to isolate one process from another – dates back to 1979. How is it that I didn’t pick this technology when I was 9 years old? Now, the age of containers is finally upon us. It cannot be ignored. It should not be ignored. If you have read my previous blogs, especially “The New DBA Role – Time to Get your aaS in Order,” you are already embracing SQL Server on containers. Yes!
Considerations and options for Layer 5, the “Container Control plane” are:
- VMware VCF 4.
- Red Hat OpenShift (with our target of a SQL 2019 BDC, we need 4.3+).
- AKS (Azure Kubernetes Service) – on-premises with Azure Stack Hub.
- Vanilla Kubernetes (the original trunk/master).
Note: Containers are absolutely optional here. However, certain options in these layers will provide the runway for containers in the future. Virtualization of data and containerization of data can live on the same platform! Even if you are not ready currently, it would be good to set up for success now, so that you are ready to start with containers within hours if needed.
Layer 6
The Container Orchestration plane. We all know about virtualization sprawl; now we have container sprawl! Where are all these containers running? Which cloud are they running in? Which hypervisor? It's best to manage through a single pane of glass, understanding and managing "all the things."
Considerations for Layer 6 center on Azure Arc.
Note: As of this blog's publish date, Azure Arc is not yet GA; it's still in preview. No time like the present to start learning Arc's ins and outs! Sign up for the public preview.
Layer 7
Finally, we have reached the application layer in our SQL Server modernization. We can now install SQL Server, or any ancillary service offering in the SQL Server ecosystem. But hold on! There are a couple of options to consider: Would you like your SQL services to be managed and "Always Current"? For me, the answer would be yes. And remember, we are talking about on-premises data here.
Considerations for Layer 7:
- The application for this conversation is SQL Server 2019.
- The appropriate decisions in building your stack will lead you to Azure Arc Data Services (currently in preview); Kubernetes is a requirement here.
Note: With Dell Technologies solutions, you can deploy at your own rate, as long as your infrastructure is solid. Dell Technologies Services can move, consolidate, and/or upgrade old versions of SQL Server to SQL Server 2019.
The Fully Supported Stack
Considering all the choices and dependencies made at each layer in building and integrating the 7 layers up the stack, there is a fully supported stack available that includes services and products from:
- Dell Technologies
- VMware
- Red Hat
- Microsoft
There are also absolutely many open-source choices that your teams can make along the way, and that is perfectly acceptable. In the end, it comes down to who wants to support what, and when.
Dell Technologies Is Here to Help You Succeed
There are deep integration points for the fully supported stack. I can speak for all permutations representing the four companies listed above. In my role at Dell Technologies, I engage with senior leadership, product owners, engineers, evangelists, professional services teams, data scientists—you name it. We all collaborate and discuss what is best for you, the client. When you engage with Dell Technologies for the complete solution experience, we have a fierce drive to make sure you are satisfied, both in the near and long term. Find out more about our products and services for Microsoft SQL Server.
I invite you to take a moment to connect with a Dell Technologies Services expert today and begin moving forward with your fully supported stack and SQL Server modernization.