PowerProtect Data Manager – Protecting AWS EKS (Elastic Kubernetes Service)
Tue, 29 Nov 2022 22:22:38 -0000
NOTE: After seeing several questions and emails about the process, I want to clarify: the process described in this blog applies to PPDM 19.9. I recommend checking out the latest blog on EKS protection with PowerProtect Data Manager by Idan Kentor: https://infohub.delltechnologies.com/p/powerprotect-data-manager-how-to-protect-aws-eks-elastic-kubernetes-service-workloads/.
Recently I had the chance to deploy PowerProtect Data Manager 19.9 on AWS with a PowerProtect DD Virtual Edition, and I wanted to test the AWS EKS (Elastic Kubernetes Service) protection feature to see how it differs from other Kubernetes deployments.
The PowerProtect Data Manager deployment itself was super easy. Initiated from the AWS marketplace, it created a CloudFormation stack that deployed all of the needed services after asking for network and other settings. What I especially liked about it was that it deployed the PowerProtect DD as well, so I didn’t have to deploy it separately.
Deploying and configuring the EKS cluster and its Node-Group was easy, but the installation of AWS EBS CSI drivers was a bit challenging, so I decided to share the procedure and my thoughts so others could do it just as easily.
Before you begin, you should make sure that you have kubectl and AWS CLI installed on your computer.
I started by deploying the EKS cluster from the AWS management console, using version 1.21 (which was also the default). I then created a Node-Group as described in the AWS documentation. (This step involves attaching a role, but otherwise it's intuitive enough to manage without the documentation.)
It’s highly recommended to read all of the relevant documentation to understand the following steps. I’ll summarize what I did.
I know there are a lot of steps and it looks scary, but following them will make your life much easier and get you protecting EKS namespaces in no time!
1. Configure kubectl to allow you to connect to the EKS cluster:
aws eks --region <region-code> update-kubeconfig --name <eks-cluster-name>
2. Create a secret.yaml file on your computer, which will be used to configure the AWS EBS CSI drivers.
Add your credentials to the file itself. The required permissions are described here:
https://docs.aws.amazon.com/eks/latest/userguide/ebs-csi.html
The yaml structure and more details are available at the AWS EBS CSI driver git page here:
aws-ebs-csi-driver/secret.yaml at master · kubernetes-sigs/aws-ebs-csi-driver · GitHub
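As a quick sketch of that structure (the field names key_id and access_key mirror the example secret in the aws-ebs-csi-driver repository; verify against the linked page before use):

```shell
# Sketch of secret.yaml for the EBS CSI driver.
# Replace the placeholder values with your IAM user's access keys.
cat <<EOF > secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: kube-system
stringData:
  key_id: "my-access-key-id"
  access_key: "my-secret-access-key"
EOF
```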
3. Apply the secret:
kubectl apply -f secret.yaml
4. Install the EBS CSI driver:
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.2"
5. The default storage class on EKS is gp2. Because PowerProtect Data Manager does not support gp2, the default needs to be changed to ebs-sc, which works with the EBS CSI driver.
Install the EBS Storage Class:
kubectl apply -f ebs-sc.yaml
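The ebs-sc.yaml file itself can be minimal. Here is a sketch that works with the EBS CSI driver (the name ebs-sc and the WaitForFirstConsumer binding mode are common defaults; adjust as needed):

```shell
# Minimal ebs-sc.yaml: a StorageClass provisioned by the EBS CSI driver,
# annotated as the cluster default.
cat <<EOF > ebs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
EOF
```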
6. Apply the external snapshotter CRDs, the snapshot class, and the snapshot controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/examples/kubernetes/snapshot/specs/classes/snapshotclass.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
7. Change the default Storage Class to EBS:
kubectl patch storageclass gp2 -p "{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}"
kubectl patch storageclass ebs-sc -p "{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}"
8. Create a service account on the EKS cluster that allows PowerProtect Data Manager to connect to and discover the cluster:
kubectl apply -f ppdm-discovery.yaml
9. Create another one for the protection itself:
kubectl apply -f ppdm-rbac.yaml
Both of the previous yaml files can be found on any PowerProtect Data Manager at the following path: /usr/local/brs/lib/cndm/misc/rbac.tar.gz
At this point you should already have, or should create, a new namespace on your EKS cluster and have an application that you want to protect running on it.
10. List the secrets in the powerprotect namespace that was created by applying the previous yaml files:
kubectl get secret -n powerprotect
11. Get the relevant secret from the list that you got from the previous command (the name will change in every deployment, but should be in the following format):
kubectl describe secret ppdm-discovery-serviceaccount-token-45abc -n powerprotect
This will output a string with the secret that you need in order to register the EKS cluster in PowerProtect Data Manager.
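If you'd rather not copy the secret name by hand, a one-liner along these lines (assuming the discovery secret's name contains "disco", as it does when created by ppdm-discovery.yaml) prints the token directly:

```shell
# Find the discovery secret in the powerprotect namespace
# and print only its token value.
kubectl describe secret \
  $(kubectl get secret -n powerprotect | awk '/disco/{print $1}') \
  -n powerprotect | awk '/token:/{print $2}'
```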
12. Get the FQDN to register the EKS cluster (You’re looking for the Kubernetes control plane, and must remove the “https://”):
kubectl cluster-info
13. Get the EKS Cluster Certificate Authority from the AWS EKS cluster UI, and decode it from BASE64, for example on this website: https://www.base64decode.org/.
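If you prefer the command line to a website, any machine with the base64 tool can do the decoding. Here is a quick sanity check with a sample string (the file name below is hypothetical; paste the real CA data into a file of your choosing):

```shell
# Decode a BASE64 string to plain text; the same command turns the
# EKS certificateAuthority data back into a PEM certificate.
echo "SGVsbG8sIEVLUyE=" | base64 -d
# For the real certificate (hypothetical file name):
# base64 -d ca-base64.txt > eks-root.pem
```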
14. SSH to the PowerProtect Data Manager server, then create a new eks-root.pem file with the decoded BASE64 result (including the BEGIN and END CERTIFICATE lines).
15. Run the following command:
keytool -importcert -alias <your-eks-cluster-name> -keystore /etc/ssl/certificates/extserver/extserver.truststore -storepass extserver -file eks-root.pem
16. Connect to the PowerProtect Data Manager UI, and add a new Kubernetes Asset Source.
Use the FQDN from Step 12 (again, without the https://) and create new credentials with the Service Account Token that you got in Step 11.
After the EKS cluster is added as an Asset Source, you can protect the namespaces in your EKS cluster by creating a new Protection Policy. For more info, check out the interactive demos at the Dell Technologies Demo Center.
Author: Eli Persin
Related Blog Posts
Multi-cloud Protection with PowerProtect Data Manager
Thu, 14 Apr 2022 20:12:59 -0000
What I like most about PowerProtect Data Manager is that it supports the rising demand for data protection for all kinds of organizations. It's powerful, efficient, scalable, and, most importantly, simple to use. And what could be simpler than using the same product with the same user interface in any environment, including any supported cloud platform?
PowerProtect Data Manager is usually deployed on-prem to protect virtual machines running in VMware vSphere environments.
While PowerProtect Data Manager excels in protecting any on-prem machines and different types of technologies, such as Kubernetes, some organizations also have a cloud strategy where some or all their workloads and services are running on the cloud.
There are also organizations that use multiple cloud platforms to host and manage their workloads, and these resources need to be protected as well, especially in the cloud where there could be additional risk management and security considerations.
The good news is that PowerProtect Data Manager provides cloud and backup admins the same abilities and interface across all the supported cloud platforms: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
AWS users can use the AWS Marketplace to deploy “Dell EMC PowerProtect Data Manager and PowerProtect DD Virtual Edition” which will trigger an automated deployment using the AWS CloudFormation service.
In this deployment method you're asked to provide all the networking and security details up front, and then it does everything else for you, including deploying a DDVE instance that manages the backup copies (with deduplication!).
Once the CloudFormation stack is deployed, you can access PowerProtect Data Manager through any web browser, and then add and protect your cloud resources, just as if it were an on-prem deployment. Super intuitive and super easy!
I think the trickiest part of the deployment is making sure that all of the networking, firewall, and other security and policy restrictions allow you to connect to the PowerProtect Data Manager VM and to the DDVE.
Check out this great whitepaper that describes the entire process of deploying PowerProtect Data Manager on AWS.
For Microsoft Azure users, the process here is similar. You can deploy PowerProtect Data Manager using the Azure Marketplace service:
This whitepaper will take you through the exact steps required to successfully deploy PowerProtect Data Manager and PowerProtect DDVE on your Azure subscription.
Didn’t I say it’s really easy and works the same way in all the cloud platforms?
GCP users can use the GCP Marketplace to deploy their PowerProtect Data Manager:
This whitepaper describes the entire deployment process with detailed screenshots on GCP.
Now you can easily protect your multi-cloud resources with the same powerful protection solution!
Author: Eli Persin
PowerProtect Data Manager – How to Protect AWS EKS (Elastic Kubernetes Service) Workloads?
Thu, 17 Nov 2022 18:03:30 -0000
PowerProtect Data Manager supports the protection of a multitude of K8s distributions for on-prem as well as in the cloud (see the compatibility matrix). In this blog, I’ll show how to use PowerProtect Data Manager (or PPDM for short) to protect AWS Elastic Kubernetes Service (EKS) workloads.
I’ve been asked many times recently if PPDM supports protection of Amazon EKS workloads, and if so, how the configuration works. So, I thought it would be good to talk about that in a blog -- so here we are! In essence, the challenging piece (no issues, maybe challenges 😊) is the configuration of the EBS CSI driver, so I’ll cover that extensively in this blog. And because the deployment and configuration of the EBS CSI driver has changed recently, there is all the more reason to get this information out to you.
Deploying PowerProtect Data Manager and PowerProtect DD is pretty straightforward. You just launch the PowerProtect Data Manager installation from the marketplace, answer some network and other questions, and off you go. It creates an AWS CloudFormation stack that deploys all the required services of both PowerProtect Data Manager and PowerProtect DD. PowerProtect DD can be deployed separately or along with PPDM, and naturally, the newly deployed PowerProtect Data Manager can also leverage an existing PowerProtect DD.
Deploying and configuring the EKS cluster and Node groups is rather simple and can be done using the AWS management console, AWS CLI, or eksctl. For more information, the official Amazon EKS documentation is your friend.
It’s important to talk about the tools we need installed for managing Amazon EKS and to deploy and manage the EBS CSI driver:
- kubectl – Probably needs no introduction but it’s a command line tool to work with Kubernetes clusters.
- AWS CLI – A command line tool for working with AWS services. For installation instructions and further info, see Installing or updating the latest version of the AWS CLI.
- eksctl – A command line tool to create and manage EKS clusters. For more info, see Installing or updating eksctl.
Let’s look at some general steps before we go ahead and deploy the EBS CSI driver.
To get started
1. To configure AWS CLI, run the following command:
aws configure
2. List your EKS clusters:
aws eks --region <region-code> list-clusters
3. Configure kubectl to operate against your EKS cluster:
aws eks update-kubeconfig --name <your-eks-cluster-name>
Deploying the External Snapshotter
The final step before we can deploy the EBS CSI driver is to deploy the external snapshotter.
1. To deploy the snapshotter, execute the following commands:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
2. Make sure the snapshot controller pods are running:
kubectl get pods -n kube-system
EBS CSI Driver Deployment
And now for the main event, the configuration of the EBS CSI driver. There are two ways to go about it: deploying the EBS CSI Driver as an EKS add-on or as a self-managed driver. You can use either the AWS management console or AWS CLI (eksctl) to deploy the EBS CSI Driver add-on. The self-managed driver is installed and operated exclusively using kubectl.
The following procedure reflects my thoughts and experience for a quick and comprehensive configuration; there is more than one way to climb a mountain, as they say. Refer to the documentation for all possible ways.
Option 1: Self-managed EBS CSI Driver
1. Create or use an existing IAM user and map the required policy for the EBS CSI Driver to the user:
a. Create an IAM user:
aws iam create-user --user-name <user-name>
b. Create an IAM policy and record the Policy ARN:
aws iam create-policy --policy-name <policy-name> --policy-document https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/docs/example-iam-policy.json
c. Attach the policy to the user:
aws iam attach-user-policy --user-name <user-name> --policy-arn <policy-arn>
d. Create an access key and record the AccessKeyId and SecretAccessKey:
aws iam create-access-key --user-name <user-name>
2. Create a secret. Here we’re creating a secret and mapping it to an existing IAM user and its credentials (for example, the access keys recorded in the previous step):
kubectl create secret generic aws-secret --namespace kube-system --from-literal "key_id=<iam-user-access-key-id>" --from-literal "access_key=<iam-user-secret-access-key>"
3. Install the EBS CSI Driver:
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.12"
4. Make sure that the ebs-csi-controller and ebs-csi-nodes pods are running:
kubectl get pods -n kube-system
Option 2: EBS CSI Driver Add-on
1. Retrieve the EKS cluster OIDC provider:
aws eks describe-cluster --name <your-eks-cluster-name> --query "cluster.identity.oidc.issuer" --output text
2. Check if the OIDC provider of your cluster is not on the list of current IAM providers:
aws iam list-open-id-connect-providers
3. If the provider is not on the list, associate it by running the following command:
eksctl utils associate-iam-oidc-provider --cluster <your-eks-cluster-name> --approve
4. Create the IAM role. This also attaches the required policy and annotates the EBS CSI driver Service Account on the EKS cluster:
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster <your-eks-cluster-name> --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve --role-only --role-name <role-name>
5. Make sure that the aws-ebs-csi-driver is not installed:
aws eks list-addons --cluster-name <your-eks-cluster-name>
6. Get the AWS Account ID:
aws sts get-caller-identity --query "Account" --output text
7. Deploy the EBS CSI Driver add-on. Note that it deploys the default add-on version for your K8s version. Specify the AWS account ID retrieved in the previous step and the IAM role created in Step 4.
eksctl create addon --name aws-ebs-csi-driver --cluster <your-eks-cluster-name> --service-account-role-arn arn:aws:iam::<your-aws-account-id>:role/<role-name> --force
8. Make sure that the ebs-csi-controller and ebs-csi-nodes pods are running:
kubectl get pods -n kube-system
Storage Class Configuration
1. Create the Volume Snapshot Class YAML file:
cat <<EOF | tee snapclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
EOF
2. Create the Snapshot Class:
kubectl apply -f snapclass.yaml
3. Make sure that the Snapshot Class got created:
kubectl get volumesnapshotclass
4. Create the Storage Class YAML file:
cat <<EOF | tee ebs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
EOF
5. Create the Storage Class:
kubectl apply -f ebs-sc.yaml
6. Patch the gp2 storage class to remove the default setting:
kubectl patch storageclass gp2 -p "{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}"
7. Make sure that the EBS Storage Class got created and that it shows up as the default storage class:
kubectl get storageclass
Add EKS to PowerProtect Data Manager
Now, for the grand finale – adding our EKS cluster to PPDM. Follow these steps to gather some information and then register EKS to PPDM.
1. Get the K8s cluster control-plane endpoint:
kubectl cluster-info
2. To create a service account on the EKS cluster for PPDM discovery and operations, PPDM RBAC YAML files need to be applied.
a. Retrieve the rbac.tar.gz file from the PPDM appliance at the following location:
/usr/local/brs/lib/cndm/misc/rbac.tar.gz
b. On PPDM 19.12, you can download the archive from the PowerProtect Data Manager UI under System Settings > Downloads > Kubernetes, or directly using the following URL:
https://<your-ppdm-server>/k8s-binaries-download?filename=/usr/local/brs/lib/cndm/misc/rbac.tar.gz
Note that the link only works while you're logged in to the PPDM UI.
c. Extract the archive, navigate to the rbac directory, and apply the two YAML files using the following commands:
kubectl apply -f ppdm-discovery.yaml
kubectl apply -f ppdm-controller-rbac.yaml
d. If you’re using K8s 1.24 or later, you must manually create the secret for the PPDM discovery service account:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: ppdm-discovery-serviceaccount-token
  namespace: powerprotect
  annotations:
    kubernetes.io/service-account.name: "ppdm-discovery-serviceaccount"
EOF
e. Retrieve the secret key using the following command:
kubectl describe secret $(kubectl get secret -n powerprotect | awk '/disco/{print $1}') -n powerprotect | awk '/token:/{print $2}'
3. Retrieve the EKS cluster root CA:
eksctl get cluster <your-eks-cluster-name> -o yaml | awk '/Cert/{getline; print $2}'
Without further ado, let’s navigate to the PowerProtect Data Manager UI and register our EKS cluster as a Kubernetes Asset Source.
4. Navigate to Infrastructure -> Asset Sources.
5. Enable the Kubernetes Asset Source as needed and navigate to the Kubernetes tab.
6. Add the EKS cluster as a Kubernetes Asset Source:
A few other notes:
7. Use the FQDN you retrieved in Step 1. Make sure to remove the https:// prefix.
8. Specify port 443. Make sure to add tcp/443 to the EKS security group (inbound) and the PPDM security group (outbound).
9. Create new credentials with the Service Account Token from Step 2e.
10. Root Certificate:
a. On PPDM versions earlier than 19.12, follow these steps:
- Decode the EKS root CA from BASE64 using the following command:
eksctl get cluster <your-eks-cluster-name> -o yaml | awk '/Cert/{getline; print $2}' | base64 -d
- SSH to the PPDM server as the admin user and save the decoded root CA to a file, say eks-cert.txt. Make sure to include the BEGIN and END CERTIFICATE lines.
- Execute the following command:
ppdmtool -importcert -alias <your-eks-cluster-name> -file eks-cert.txt -t BASE64
b. On PPDM 19.12 and later, click Advanced Options on the same Add Kubernetes screen and scroll down. Specify the root certificate from Step 3.
11. Verify the certificate and click Save to register the EKS cluster as a Kubernetes Asset Source.
That’s it, now you can deploy your stateful applications on your EKS cluster and protect their namespaces by creating a new Protection Policy 👍🏻.
Feel free to reach out with any questions or comments.
Thanks for reading,
Idan
Author: Idan Kentor
idan.kentor@dell.com