We built a HammerDB v4.6 client container based on the Dockerfile provided in the HammerDB GitHub repository. We customized the Dockerfile to fit our requirements by enabling only the Microsoft SQL Server client libraries and removing references to the other RDBMS tests. We also added the Microsoft ODBC Driver 18 for SQL Server to support SQL Server 2022. Once the client container was built, we pushed it to a private Azure Container Registry in our Azure subscription.
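A minimal sketch of the kind of customization described above, assuming an Ubuntu base image; the base image, release URL, and layer ordering are illustrative assumptions, and the ODBC driver steps follow Microsoft's documented msodbcsql18 installation procedure:

```dockerfile
# Illustrative sketch only -- not the exact Dockerfile used in this test.
FROM ubuntu:22.04

# Install the Microsoft ODBC Driver 18 for SQL Server (documented procedure).
RUN apt-get update && apt-get install -y curl gnupg2 apt-transport-https \
 && curl -fsSL https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
 && curl -fsSL https://packages.microsoft.com/config/ubuntu/22.04/prod.list \
      > /etc/apt/sources.list.d/mssql-release.list \
 && apt-get update \
 && ACCEPT_EULA=Y apt-get install -y msodbcsql18 mssql-tools18

# Download and unpack HammerDB v4.6 (SQL Server client libraries only).
RUN curl -fsSL -o /tmp/hammerdb.tar.gz \
      https://github.com/TPC-Council/HammerDB/releases/download/v4.6/HammerDB-4.6-Linux.tar.gz \
 && mkdir -p /opt && tar -xzf /tmp/hammerdb.tar.gz -C /opt \
 && rm /tmp/hammerdb.tar.gz

WORKDIR /opt/HammerDB-4.6
```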
Then, we created a hammerdbpod.yaml file that defined the HammerDB pod specs for running in the dbaas-applications-1 workload cluster. This YAML file included the requests and limits for CPU and memory. The name and namespace attributes within the file were dynamically built from our test harness, referenced in Appendix A: SQL MI and HammerDB Test Harness details. Throughout our multiple test runs, the compute resource utilization never went above 60% for any HammerDB pod.
The contents of the hammerdbpod.yaml file follow.
- name: <hammerpod>
Note: The "command:" attribute was added to keep the container alive within the Kubernetes cluster long enough to start the HammerDB execution (7200 seconds = 2 hours).
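To make the fragment above concrete, a minimal sketch of such a pod spec follows; the image path, resource values, and namespace are illustrative assumptions (the name and namespace were generated by our test harness), and the sleep command implements the keep-alive behavior described in the note:

```yaml
# Illustrative sketch of hammerdbpod.yaml -- values are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: <hammerpod>            # generated by the test harness
  namespace: <test-namespace>  # generated by the test harness
spec:
  containers:
  - name: <hammerpod>
    image: <acr-name>.azurecr.io/hammerdb:4.6     # image in the private ACR
    command: ["/bin/bash", "-c", "sleep 7200"]    # keep pod alive for 2 hours
    resources:
      requests:
        cpu: "2"
        memory: 4Gi
      limits:
        cpu: "4"
        memory: 8Gi
```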
Each HammerDB pod had a one-to-one mapping to a SQL MI pod; as more SQL MI pods were added, their associated HammerDB pods were deployed alongside them. We also created a SQL Server 2022 VM to store reporting results and aggregation queries. All test-run datasets were returned to this SQL reporting instance over linked servers that pointed to the external IP address of each SQL MI pod. The data was later aggregated to obtain average and maximum TPM (transactions per minute).
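The linked-server and aggregation pattern above can be sketched in T-SQL as follows; the linked server name, login, IP placeholder, and the result-table schema are all hypothetical illustrations, not the actual reporting schema:

```sql
-- Illustrative sketch: register one SQL MI pod as a linked server
-- on the reporting instance (names and credentials are assumptions).
EXEC sp_addlinkedserver
    @server     = N'SQLMI01',
    @srvproduct = N'',
    @provider   = N'MSOLEDBSQL',
    @datasrc    = N'<external-ip>,1433';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'SQLMI01',
    @useself     = N'FALSE',
    @rmtuser     = N'<login>',
    @rmtpassword = N'<password>';

-- Pull one run's results into the local reporting table,
-- then aggregate across runs (hypothetical schema).
INSERT INTO dbo.TestResults (RunId, InstanceName, TPM)
SELECT RunId, InstanceName, TPM
FROM [SQLMI01].[tpcc].[dbo].[RunResults];

SELECT InstanceName,
       AVG(TPM) AS AvgTPM,
       MAX(TPM) AS MaxTPM
FROM dbo.TestResults
GROUP BY InstanceName;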
To drive the workload against all the SQL MIs simultaneously, we used HammerDB CLI automation. All the CLI commands can be found in Appendix B: HammerDB CLI configuration.
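For orientation, a typical HammerDB CLI sequence for the mssqls driver looks like the sketch below; the connection values, virtual-user count, and timing parameters are illustrative assumptions, and the actual commands we used are in Appendix B:

```tcl
# Illustrative HammerDB CLI sketch -- values are assumptions;
# see Appendix B for the actual configuration.
dbset db mssqls                          ;# target Microsoft SQL Server
diset connection mssqls_server <pod-external-ip>
diset connection mssqls_uid <login>
diset connection mssqls_pass <password>
diset tpcc mssqls_driver timed           ;# timed test driver script
diset tpcc mssqls_rampup 2               ;# ramp-up minutes
diset tpcc mssqls_duration 10            ;# test minutes
loadscript
vuset vu 32                              ;# virtual users
vucreate
vurun
```

Running one such script per HammerDB pod, started in parallel, drives all SQL MI targets simultaneously.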