Each layer in the DBaaS platform can be dynamically right-sized to meet the demands of the database workloads.
Because we tested only the general-purpose tier of SQL Managed Instance, we could scale an instance up but not out; read scale-out is available only at the business-critical service tier. We tested scaling up a managed instance by adding compute vCores and memory. The following figure shows the original compute capacity of sqlmi-07 in Azure Data Studio.
Figure 30. Original CPU and memory requests and limits on sqlmi-07
We could have changed the requests and limits in Azure Data Studio, but we opted to run the following command using the Azure CLI:
az sql mi-arc edit --cores-request 4 --cores-limit 8 --memory-request 8Gi --memory-limit 16Gi -n sqlmi-07 --k8s-namespace arc --use-k8s
According to the SQL Server logs, the database was unreachable for approximately 3 to 4 minutes during the scale-up operation because the sqlmi-07 pod was terminated and re-created. Because a general-purpose tier SQL MI runs as a single instance, this brief downtime was expected behavior.
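After the pod was re-created, we could read back the new requests and limits. The following commands are a minimal sketch of that check, assuming the same arc namespace and the <instance-name>-0 pod naming used elsewhere in this section:
# Show the instance specification, including the cores and memory requests and limits
az sql mi-arc show -n sqlmi-07 --k8s-namespace arc --use-k8s
# Inspect the re-created pod's container resources directly
kubectl describe pod sqlmi-07-0 -n arc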
We also tested increasing the size of the persistent volume claim (PVC) for a SQL MI. During the setup of the AKS hybrid workload cluster, we created a custom storage class named aks-hci-disk-custom for SQL MI consumption. This custom storage class required us to change the default values for fsType and volumeBindingMode according to Create custom storage class for disks. We also had to ensure that allowVolumeExpansion was set to true. The custom storage class YAML contents follow:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
  name: aks-hci-disk-custom
  resourceVersion: "12706"
  uid: 9d5b79c2-07bc-476c-9e95-25607a19a064
parameters:
  blocksize: "33554432"
  container: custom-config-container-1
  dynamic: "true"
  fsType: ext4
  group: clustergroup-dbaas-databases-1
  hostname: ca-44938282-1755-4ae5-baaf6b9ca50cf64e.lab.azsdemo.com
  logicalsectorsize: "4096"
  physicalsectorsize: "4096"
  port: "55000"
provisioner: disk.csi.akshci.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
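If a storage class was created without volume expansion enabled, the flag can be verified and, because allowVolumeExpansion is a mutable field, turned on afterward. The following kubectl commands are a minimal sketch against the storage class above; the JSON quoting follows the same PowerShell-style escaping used for the PVC patch later in this section:
# Confirm that volume expansion is enabled on the custom storage class
kubectl get storageclass aks-hci-disk-custom -o jsonpath='{.allowVolumeExpansion}'
# Enable it if the field is missing or false
kubectl patch storageclass aks-hci-disk-custom --type merge -p '{\"allowVolumeExpansion\": true}'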
Because we were working with general-purpose SQL MIs, resizing the storage caused a small outage while the pod was re-created to pick up the expanded volume. We used the Resize persistent volumes (PVC) documentation for reference and used kubectl to increase the storage for our SQL MI data drive. Note that decreasing the PVC size is not allowed.
Here are the steps we followed:
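# Identify the PVCs that belong to the managed instance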
$miName = "sqlmi-01"
kubectl get pvc -n arc-services-ns -o yaml | wsl grep $miName
app.kubernetes.io/instance: sqlmi-01
controller: sqlmi-01
name: data-eqje0479c4gp239jiminvona-sqlmi-01-0
app.kubernetes.io/instance: sqlmi-01
arc-sqlmi-orchestrator: sqlmi-01
controller: sqlmi-01
name: data-ha-eqje0479c4gp239jiminvona-sqlmi-01-ha-0
app.kubernetes.io/instance: sqlmi-01
controller: sqlmi-01
name: datalogs-eqje0479c4gp239jiminvona-sqlmi-01-0
app.kubernetes.io/instance: sqlmi-01
controller: sqlmi-01
name: logs-eqje0479c4gp239jiminvona-sqlmi-01-0
app.kubernetes.io/instance: sqlmi-01
arc-sqlmi-orchestrator: sqlmi-01
controller: sqlmi-01
name: logs-ha-eqje0479c4gp239jiminvona-sqlmi-01-ha-0
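# List the current capacity of each PVC for the instance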
kubectl get pvc -l app.kubernetes.io/instance=sqlmi-01 -n arc-services-ns -o=custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
NAME                                             CAPACITY
data-eqje0479c4gp239jiminvona-sqlmi-01-0         100Gi
data-ha-eqje0479c4gp239jiminvona-sqlmi-01-ha-0   15Gi
datalogs-eqje0479c4gp239jiminvona-sqlmi-01-0     50Gi
logs-eqje0479c4gp239jiminvona-sqlmi-01-0         20Gi
logs-ha-eqje0479c4gp239jiminvona-sqlmi-01-ha-0   5Gi
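# Scale the instance's StatefulSet down to zero replicas, then patch the data PVC with the new size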
kubectl scale statefulsets sqlmi-01 -n arc-services-ns --replicas=0
$storage = '{\"spec\":{\"resources\":{\"requests\":{\"storage\":\"100Gi\"}}}}'
kubectl patch pvc data-eqje0479c4gp239jiminvona-sqlmi-01-0 -n arc-services-ns --type merge --patch $storage
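# Scale back to one replica and re-create the pod so it mounts the resized volume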
kubectl scale statefulsets sqlmi-01 -n arc-services-ns --replicas=1
kubectl delete pod sqlmi-01-0 -n arc-services-ns
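Once the pod was running again, re-running the capacity query from earlier shows whether the new size has been applied, and kubectl describe reports any resize conditions still pending on the claim. A minimal sketch of that verification:
# Re-check the reported capacity of the instance's PVCs
kubectl get pvc -l app.kubernetes.io/instance=sqlmi-01 -n arc-services-ns -o=custom-columns=NAME:.metadata.name,CAPACITY:.status.capacity.storage
# Review events and conditions (for example, FileSystemResizePending) on the patched claim
kubectl describe pvc data-eqje0479c4gp239jiminvona-sqlmi-01-0 -n arc-services-ns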