GPU support
H2O Driverless AI can run on CPU-only machines or on machines with both CPUs and GPUs. H2O Driverless AI supports the A30 GPUs used in the Dell APEX Private Cloud AI/ML performance option. Only one GPU is supported per instance. Image and NLP use cases in H2O Driverless AI benefit significantly from GPU usage. Model-building algorithms such as XGBoost (GBM/DART/RF/GLM), LightGBM (GBM/DART/RF), PyTorch (BERT models), and TensorFlow (CNN/BiGRU/ImageNet) can use GPUs.
NVIDIA’s Multi-Instance GPU (MIG) feature can be used to partition the GPUs, increase overall GPU utilization, and support several types of use cases and deployments with guaranteed quality of service. For more information about GPU partitioning recommendations, see the NVIDIA Multi-Instance GPU documentation and the NVIDIA Technical Brief.
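As a sketch of the MIG partitioning workflow described above, the commands below enable MIG mode and create GPU instances with nvidia-smi. The GPU index (0) and the choice of four 1g.6gb slices are assumptions for illustration on an A30; consult the NVIDIA MIG documentation for the profiles supported on your hardware.

```shell
# Enable MIG mode on GPU 0 (administrative privileges required;
# a GPU reset or reboot may be needed before the change takes effect)
sudo nvidia-smi -i 0 -mig 1

# List the MIG GPU instance profiles this GPU supports
nvidia-smi mig -lgip

# Create GPU instances (example: four 1g.6gb slices on an A30);
# -C also creates the corresponding default compute instances
sudo nvidia-smi mig -i 0 -cgi 1g.6gb,1g.6gb,1g.6gb,1g.6gb -C

# Verify the resulting MIG devices
nvidia-smi -L
```

Each MIG device then appears as a separate GPU to the workload scheduler, which is how a single A30 can back several isolated instances with guaranteed quality of service.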