Setup
In this section, we provision the server. The server is configured as follows:
OS: Ubuntu 22.04.4 LTS
Kernel version: 5.15.0-94-generic
Docker Version: Docker version 25.0.3, build 4debf41
ROCm version: 6.0.2
Server: Dell PowerEdge XE9680
GPU: 8x AMD Instinct MI300X Accelerators
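Before building anything, it is worth confirming that the host matches the configuration above. A minimal sketch (the ROCm version file path assumes a standard ROCm 6.0 package install; the guards simply skip tools that are not present):

```shell
# Report kernel and OS release (expect 5.15.0-94-generic / Ubuntu 22.04.4 LTS)
uname -r
grep PRETTY_NAME /etc/os-release

# Docker and ROCm versions, skipped if the tools are not installed
command -v docker >/dev/null && docker --version || true
[ -f /opt/rocm/.info/version ] && cat /opt/rocm/.info/version || true

# List the accelerators (expect eight MI300X entries)
command -v rocm-smi >/dev/null && rocm-smi --showproductname || true
```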
Next, clone the vLLM repository at the v0.3.2 tag and build the ROCm Docker image:

git clone -b v0.3.2 https://github.com/vllm-project/vllm.git
cd vllm
sudo docker build -f Dockerfile.rocm -t vllm-rocm:latest .
Then launch the container. The /dev/kfd and /dev/dri device nodes expose the accelerators to the container, and membership in the video group grants access to them:

sudo docker run -it \
--name vllm \
--network=host \
--device=/dev/kfd \
--device=/dev/dri \
--shm-size 16G \
--group-add=video \
--workdir=/ \
vllm-rocm:latest bash
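Once inside the container, a quick check that the accelerators are visible can save debugging later. A sketch, guarded so it prints a notice instead of failing when run outside the container:

```shell
# Inside the vllm-rocm container: list the visible GPUs
if command -v rocm-smi >/dev/null; then
    rocm-smi --showproductname   # expect eight MI300X entries
else
    echo "rocm-smi not found - run this inside the vllm-rocm container"
fi
```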
Finally, log in to Hugging Face so that the gated Llama 2 model weights can be downloaded:

huggingface-cli login
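With the login complete, the gated meta-llama checkpoints can be pulled when the model is first loaded. Note that Llama 2 Chat models expect prompts in Meta's chat template; a minimal sketch of assembling it in bash (the system and user messages here are only illustrative examples):

```shell
# Llama 2 chat prompt template, per Meta's published format
SYSTEM="You are a helpful assistant."
USER_MSG="What GPUs does the PowerEdge XE9680 support?"

PROMPT="<s>[INST] <<SYS>>
${SYSTEM}
<</SYS>>

${USER_MSG} [/INST]"

printf '%s\n' "$PROMPT"
```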