
Dell Technologies and Nokia Pave the Way for Open Ecosystem Cloud RAN
Tue, 26 Sep 2023 20:17:54 -0000
Introduction
The advent of 5G technology has ushered in a new era of connectivity, promising faster speeds, lower latency, and the potential to transform industries across the board. Achieving the full potential of 5G requires collaboration between those who are truly driving the open ecosystem. Dell Technologies and Nokia have come together to create an open ecosystem for 5G networks, enabling Communication Service Providers (CSPs) and Enterprises to harness the power of 5G like never before.
The need for open ecosystems in 5G
5G networks incorporate a whole new level of complexity compared to previous generations of radio networks, including multi-layer cloud infrastructure. A hybrid solution combining purpose-built 5G RAN and Cloud RAN, with a combination of centralized and distributed deployments, can best serve the diverse requirements of different 5G use cases. The need for performance and cost-effectiveness implies the use of automation and targeted densification. Focused attention to security is also essential: the Zero Trust model used by Dell enforces trust across devices, users, networks, applications, infrastructure, and data with automation and orchestration. Open ecosystems drive innovation and scalability by fostering competition, reducing vendor lock-in, and enhancing cost efficiency.
Cloud RAN collaboration
- Nokia has unveiled a groundbreaking concept known as anyRAN, signaling a pivotal shift in how radio access networks (RANs) are built and evolve. Dell's latest generation of PowerEdge XR8000 servers plays a crucial role in this integration. These servers are known for their reliability, performance, and scalability, making them an ideal choice for hosting the cloud-native components of anyRAN. By leveraging Dell's infrastructure, Nokia can ensure that the network's foundation is stable and capable of handling the demands of 5G connectivity.
- Nokia and Dell Technologies are integrating and validating a solution combining Nokia’s 5G Cloud RAN software and a RAN SmartNIC L1 accelerator card with Dell’s purpose-designed telecom open infrastructure, including Dell PowerEdge servers, and Dell storage and switches.
- Dell Technologies and Nokia are working together to deploy research and development (R&D) and testing resources. Dell is using the Open Telecom Ecosystem Lab (OTEL) as the center for testing and validation, while Nokia focuses its work on its Nokia System Test Lab.
Nokia’s Core Networks NFVi platforms are undergoing meticulous testing and verification on the newest generation of Dell PowerEdge servers. The testing, verification, and certification process at Dell's OTEL is an essential step in the journey to deploy containerized services in real-world telecommunications networks. By selecting the latest generation of PowerEdge servers for this testing phase, Nokia is ensuring that NFVi 4.0 supports modern containerized requirements and is adaptable to quickly evolving cloud platforms.
Figure 1. PowerEdge XR8000
Ongoing evolution of the collaboration
Dell Technologies and Nokia have achieved significant progress since the initial announcement of this collaboration at MWC 2023. Both companies have continued to work closely to refine their offerings, address challenges, and push the boundaries of 5G technology.
Both companies hold high hopes for this collaboration, as they see it as the correct path to reduce deployment complexity, streamline processes, and expedite innovation, as demonstrated by the following quotes from leadership at Dell and Nokia:
“The ongoing evolution of this partnership underscores the commitment of Dell and Nokia to staying at the forefront of the 5G revolution.” Gautam Bhagra, Vice President, Strategic Business Development, Dell Telecom Systems Business.
“Our strategic collaboration with Dell is an important component of our innovative anyRAN approach, which brings communications service providers and enterprises full freedom of choice to mix and match purpose-built and cloud-based RAN solutions. Together we translate our collaborative advantage into a competitive advantage for our customers, in a dynamic technology landscape, where 5G meets Cloud.” Pasi Toivanen, SVP and Head of Partner Cloud RAN Solutions, Mobile Networks, Nokia.
Additional resources
Dell Technologies 5G Technologies - Telecommunication Solutions

Dell Technologies Shines in MLPerf™ Stable Diffusion Results
Wed, 06 Dec 2023 17:33:43 -0000
Abstract
The recent release of MLPerf Training v3.1 results includes the newly launched Stable Diffusion benchmark. At the time of publication, Dell Technologies leads the OEM market in this benchmark for training a generative AI foundation model, Stable Diffusion. With its Dell PowerEdge XE9680 server submission, Dell Technologies is differentiated as the only vendor with a Stable Diffusion score for an eight-way system. The time to converge using eight NVIDIA H100 Tensor Core GPUs is 46.7 minutes.
Overview
Generative AI workloads are being deployed at an unprecedented rate, driven by productivity gains and the increasing convergence of multimodal input. Creating content has become easier and more practical across various industries. Generative AI has enabled many enterprise use cases, and it continues to expand into new frontiers such as higher-resolution text-to-image, text-to-video, and other modality generation. These impressive AI tasks demand ever more compute. Some of the more popular generative AI workloads include chatbots, video generation, music generation, 3D asset generation, and so on.
Stable Diffusion is a deep learning text-to-image model that accepts input text and generates a corresponding image. The output is credible and appears realistic; occasionally, it can be hard to tell whether an image is computer generated. This workload matters because of the rapid expansion of use cases such as eCommerce, marketing, graphic design, simulation, video generation, applied fashion, web design, and so on.
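For orientation, the following minimal sketch generates an image from text. It assumes the Hugging Face diffusers library, a CUDA GPU, and the stabilityai/stable-diffusion-2-1 checkpoint, none of which are part of the MLPerf submission itself:

# Minimal text-to-image sketch (assumptions: Hugging Face diffusers is
# installed and a CUDA GPU is available; the checkpoint name is illustrative).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Generate one image from a text prompt and save it to disk.
image = pipe("a photograph of a rack server in a data center").images[0]
image.save("generated.png")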
Because these workloads demand intensive compute to train, measuring system performance during their use is essential. As an AI systems benchmark, MLPerf has emerged as a standard way to compare submitters, including OEMs, accelerator vendors, and others, on a like-for-like basis.
MLPerf recently introduced the Stable Diffusion benchmark in MLPerf Training v3.1. It measures the time to converge a Stable Diffusion workload to the expected quality targets. The benchmark uses the Stable Diffusion v2 model trained on the LAION-400M-filtered dataset. The original LAION-400M dataset has 400 million image-text pairs; a subset of approximately 6.5 million images is used for training in the benchmark. The validation dataset is a subset of 30,000 COCO 2014 images. The expected quality targets are FID ≤ 90 and CLIP ≥ 0.15.
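Expressed as code, the convergence gate is straightforward (a sketch only; computing the FID and CLIP metrics themselves is the job of the benchmark's reference implementation):

# Quality gate for the MLPerf Training v3.1 Stable Diffusion benchmark,
# as stated above. Metric computation is left to the reference code.
def has_converged(fid: float, clip_score: float) -> bool:
    return fid <= 90.0 and clip_score >= 0.15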
The following figure shows a latent diffusion model[1]:
Figure 1: Latent diffusion model
[1] Source: https://arxiv.org/pdf/2112.10752.pdf
Stable Diffusion v2 is a latent diffusion model that combines an autoencoder with a diffusion model trained in the latent space of the autoencoder. MLPerf Stable Diffusion focuses on the U-Net denoising network, which has approximately 865 million parameters. The benchmark deviates slightly from the v2 model, but these adjustments are minor and are intended to let submitters with compute constraints participate.
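To make that structure concrete, here is a heavily simplified sketch of one latent-diffusion training step. The tiny stand-in modules are illustrative only and bear no resemblance in scale to the real VAE encoder or the roughly 865-million-parameter U-Net:

# Simplified latent-diffusion training step (assumptions: PyTorch is installed;
# TinyEncoder and TinyUNet are toy stand-ins for the real VAE and U-Net).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in for the frozen VAE encoder that maps images to latents."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 4, kernel_size=8, stride=8)  # 8x downsampling
    def forward(self, x):
        return self.conv(x)

class TinyUNet(nn.Module):
    """Stand-in for the denoising U-Net (the real one also takes text embeddings)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(4, 4, kernel_size=3, padding=1)
    def forward(self, z, t):
        return self.net(z)  # the real U-Net conditions on the timestep t as well

encoder, unet = TinyEncoder().eval(), TinyUNet()
opt = torch.optim.AdamW(unet.parameters(), lr=1e-4)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # DDPM noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

images = torch.randn(2, 3, 256, 256)             # dummy image batch
with torch.no_grad():
    latents = encoder(images)                    # diffusion runs in latent space

t = torch.randint(0, T, (latents.size(0),))
noise = torch.randn_like(latents)
a = alphas_bar[t].view(-1, 1, 1, 1)
noisy = a.sqrt() * latents + (1 - a).sqrt() * noise  # forward process q(z_t | z_0)

opt.zero_grad()
pred = unet(noisy, t)                            # predict the injected noise
loss = F.mse_loss(pred, noise)                   # epsilon-prediction objective
loss.backward()
opt.step()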
The submission uses the NVIDIA NeMo framework, included with NVIDIA AI Enterprise, for secure, supported, and stable production AI. NeMo is a framework to build, customize, and deploy generative AI models. It includes training and inferencing frameworks, guardrail toolkits, data curation tools, and pretrained models, offering enterprises an easy, cost-effective, and fast way to adopt generative AI.
Performance of the Dell PowerEdge XE9680 server and other NVIDIA-based GPUs on Stable Diffusion
The following figure shows the performance of NVIDIA H100 Tensor Core GPU-based systems on the Stable Diffusion benchmark. It includes submissions from Dell Technologies and NVIDIA that use different numbers of NVIDIA H100 GPUs, ranging from eight GPUs (Dell submission) to 1,024 GPUs (NVIDIA submission). The results demonstrate that strong scaling is achievable on this workload with little scaling loss.
Figure 2: MLPerf Training Stable Diffusion scaling results on NVIDIA H100 GPUs from Dell Technologies and NVIDIA
End users can apply state-of-the-art compute to achieve faster time to value.
Conclusion
The key takeaways include:
- The newly released MLPerf Training v3.1 suite measures generative AI workloads such as Stable Diffusion.
- Dell Technologies is the only OEM vendor to have made an MLPerf-compliant Stable Diffusion submission.
- The Dell PowerEdge XE9680 server is an excellent choice for deriving value from image generation AI workloads in marketing, art, gaming, and so on. The benchmark results are outstanding for Stable Diffusion v2.
MLCommons Results
https://mlcommons.org/benchmarks/training/
The preceding graphs are MLCommons results for MLPerf IDs 3.1-2019, 3.1-2050, 3.1-2055, and 3.1-2060.
The MLPerf™ name and logo are trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.

Red Hat OpenShift - Windows compute nodes
Wed, 06 Dec 2023 10:35:35 -0000
Red Hat® OpenShift® Container Platform is an industry-leading Kubernetes platform that enables a cloud-native development environment together with a cloud operations experience, giving you the ability to choose where you build, deploy, and run applications, all through a consistent interface. Powered by the open source-based OpenShift Kubernetes Engine, Red Hat OpenShift provides cluster management, platform services for managing workloads, application services for building cloud-native applications, and developer services for enhancing developer productivity.
Support for Windows containers
OpenShift Container Platform enables you to host and run Windows-based workloads on Windows compute nodes alongside the traditional Linux workloads that are hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux compute nodes. For more information, see Red Hat OpenShift support for Windows Containers.
As a prerequisite for installing Windows workloads, the Windows Machine Config Operator (WMCO) must be installed on a cluster that is configured with hybrid networking using OVN-Kubernetes. The operator configures Windows compute nodes and orchestrates the process of deploying and managing Windows workloads on a cluster.
Open Virtual Network (OVN) is the only supported networking configuration for installing Windows compute nodes. OpenShift Container Platform uses the OVN-Kubernetes network plugin as its default network provider. You can configure the OpenShift Networking OVN-Kubernetes network plugin to enable Linux and Windows nodes to host Linux and Windows workloads, respectively. For more information, see About the OVN-Kubernetes network plugin.
Cluster architecture and components
Adding a Windows node
You will need an already installed cluster, built using the IPI installation method or the Assisted Installer. For more information about deploying an OpenShift cluster on Dell bare-metal servers, see the Red Hat OpenShift Container Platform 4.12 on Dell Infrastructure Implementation Guide.
Create a custom manifest file to configure hybrid OVN-Kubernetes networking during cluster deployment. The following file shows the required configuration:
cat cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
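To include this manifest at install time (a usage sketch assuming an IPI installation; <installation_directory> is a placeholder), generate the installation manifests, copy the file in, and then create the cluster:

openshift-install create manifests --dir <installation_directory>
cp cluster-network-03-config.yml <installation_directory>/manifests/
openshift-install create cluster --dir <installation_directory>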
To add the server to the cluster as a worker node, you need a bare-metal server with a Windows operating system. For the supported Windows versions, see the Red Hat OpenShift 4.13 support for Windows Containers release notes.
- Open ports 22 and 10250 for SSH and for log collection on the Windows server.
- Create an administrator user. The administrator user’s private key is stored in a secret and used to enable password-less SSH authentication to the Windows server.
- Install the Windows Machine Config Operator on the cluster.
- In the openshift-windows-machine-config-operator namespace, create the secret from the administrator user’s private key.
- Specify the IPv4 address or DNS name of the Windows instance and the administrator user name in the configmap, as sketched after this list.
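A minimal sketch of these two objects follows. The key path and IP address are illustrative placeholders, and the object names (cloud-private-key, windows-instances) follow the WMCO documentation pattern; verify them against the operator release notes for your version:

oc create secret generic cloud-private-key \
  --from-file=private-key.pem=${HOME}/.ssh/windows-node-key \
  -n openshift-windows-machine-config-operator

cat windows-instances.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: windows-instances
  namespace: openshift-windows-machine-config-operator
data:
  10.1.42.1: |-
    username=Administrator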
The WMCO watches for this secret and creates another user data secret with the data required to interact with the Windows server over SSH. After the SSH connection is established, the operator processes the Windows servers listed in the configmap, transfers files, and configures the nodes. The CSRs that are generated are auto-approved, and the Windows instance is added to the cluster.
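To confirm that the node joined, you can list Windows nodes by their standard operating-system label (an illustrative check):

# Windows nodes carry the standard kubernetes.io/os=windows label
oc get nodes -l kubernetes.io/os=windows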
Environment overview
OpenShift Container Platform is hosted on Dell PowerEdge R650 servers with hybrid OVN-Kubernetes networking enabled. The Dell-validated environment consisted of three compute nodes; the validation team added a Windows instance to the cluster as a fourth node. The following table shows the cluster version information:
OpenShift cluster version | 4.13.21
Kubernetes version        | 1.26.9
WMCO version              | 8.1.0+0.1699557880.p
Windows instance version  | Windows Server 2019 (version 1809)
References
Configuring hybrid networking - OVN-Kubernetes network plugin