We compared the performance of the TensorFlow benchmark running on a single application node (non-distributed training) with distributed training across two application nodes (one parameter server (PS) and two workers). The following figure shows the results:
The throughput results show that the training job distributed across two application nodes is almost 1.9 times faster than the training job on a single application node. Using the Intel-optimized TensorFlow framework distribution and applying the appropriate parameters when launching a training job enables ML practitioners to run their TensorFlow jobs efficiently on Intel Xeon Scalable processors.
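As a minimal sketch of how a PS/worker topology like the one above is described to TensorFlow, each pod exports a `TF_CONFIG` environment variable naming the cluster members and its own role before the training script starts. The hostnames, ports, and helper function here are hypothetical placeholders, not values from the benchmark deployment; in an OpenShift cluster they would typically be the service DNS names of the PS and worker pods.

```python
import json
import os

# Hypothetical cluster spec: one parameter server and two workers.
# In OpenShift, these endpoints would be pod/service addresses.
CLUSTER = {
    "ps": ["ps-0.training.svc:2222"],
    "worker": ["worker-0.training.svc:2222", "worker-1.training.svc:2222"],
}

def tf_config_for(task_type, task_index):
    """Build the TF_CONFIG JSON one pod would export for its own role."""
    return json.dumps({
        "cluster": CLUSTER,
        "task": {"type": task_type, "index": task_index},
    })

# Example: the first worker pod sets its own TF_CONFIG before launching
# the training script; TensorFlow reads it to join the cluster.
os.environ["TF_CONFIG"] = tf_config_for("worker", 0)
print(os.environ["TF_CONFIG"])
```

Each pod in the job sets a `TF_CONFIG` with the same `cluster` section but its own `task` entry, which is how TensorFlow's distributed runtime tells the parameter server and workers apart.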