
Edge Intelligence Trends in the Retail Industry
Tue, 07 Feb 2023 22:58:02 -0000
Retail and Edge Computing
In today's digital landscape, the integration of retail and edge computing is becoming increasingly important. Retail encompasses the sale of goods and services both online and in physical stores, while edge computing processes and analyzes data close to where it is collected rather than in a centralized location. Using edge computing in retail environments can improve customer experiences, optimize store operations, and support new technologies such as self-checkout and augmented reality systems. It also enables real-time data analysis, allowing a more personalized and seamless shopping experience for customers as well as cost reduction and increased revenue for retailers. In this post, we delve into the advantages of edge computing, how they align with modern AI trends in the retail industry, and Dell's positioning and solutions that help retail organizations use AI effectively to meet business needs.
Leading Retail Edge AI Trends in 2023
Focus on AI Use Cases with High ROI
Edge AI holds vast potential across a variety of industries, with high ROI in the form of automated repetitive tasks, increased efficiency, and revenue growth. In retail, Edge AI can analyze customer data and predict future purchasing patterns, leading to optimized product offerings and improved sales. Personalization through tailored marketing campaigns and personalized product recommendations can boost customer loyalty and repeat business. Edge AI can track point-of-sale (POS) and online shopping transactions and inventory levels in real time, identify discrepancies, and prevent stockouts, leading to cost savings and increased revenue. It can also be used to detect and prevent theft in retail environments, increasing profitability. The technology can be applied in smart shelves and robotics for inventory management and stocking shelves, resulting in increased efficiency and reduced labor costs.
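To make the inventory use case concrete, here is a minimal sketch, assuming hypothetical field names, shelf-sensor data, and thresholds, of how an edge application might compare POS-derived expectations against sensed stock to flag discrepancies and stockout risk:

```python
# Hypothetical sketch: flag inventory discrepancies and low stock at the edge.
# Data sources, field names, and thresholds are illustrative only.

LOW_STOCK_THRESHOLD = 5       # units remaining before a restock alert
DISCREPANCY_TOLERANCE = 2     # allowed drift between expected and sensed counts

def check_sku(sku, opening_count, units_sold, shelf_sensor_count):
    """Compare the expected on-shelf count (POS-derived) with the smart-shelf reading."""
    expected = opening_count - units_sold
    alerts = []
    if abs(expected - shelf_sensor_count) > DISCREPANCY_TOLERANCE:
        alerts.append(f"{sku}: discrepancy (expected {expected}, sensed {shelf_sensor_count})")
    if shelf_sensor_count <= LOW_STOCK_THRESHOLD:
        alerts.append(f"{sku}: low stock ({shelf_sensor_count} units), trigger restock")
    return alerts

# Example: three units unaccounted for and the shelf nearly empty.
print(check_sku("SKU-1042", opening_count=20, units_sold=14, shelf_sensor_count=3))
```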
Growth in Human and Machine Collaboration
Edge AI boosts human-machine collaboration in retail by automating tasks and powering virtual assistants, predictive analytics, sensors, and robots. These AI tools free up store staff for tasks requiring human skills, offer personalized information, optimize product offerings, track inventory, detect theft, and provide real-time data analysis and recommendations to improve the customer experience.
New AI Use Cases for Safety
Edge AI can improve safety in retail environments in several ways. Edge AI-powered cameras and sensors can provide real-time surveillance, helping to detect potential safety hazards such as spills or broken equipment. Edge AI can also support crowd management by analyzing foot-traffic data to identify patterns and optimize store layouts. In emergencies, Edge AI can detect potential incidents such as fires or gas leaks and send real-time alerts to emergency responders. It can monitor individual movements to detect falls or accidents and alert store staff or emergency responders, and it can track inventory levels in real time. Edge AI can also monitor temperature in areas such as refrigerated storage to ensure safe temperatures and prevent food-borne illnesses. Real-time alerting ensures quick and effective responses to safety incidents.
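As a simple illustration of the real-time alerting idea, the sketch below checks refrigeration readings against a safe range at the edge; the range and the notification hook are assumptions for illustration, not a specific product API:

```python
# Minimal sketch of edge-side temperature alerting for refrigerated storage.
# The safe range and the notification hook are illustrative assumptions.

SAFE_RANGE_C = (0.0, 4.0)   # example food-safety range for a chilled case

def evaluate_reading(sensor_id, temp_c, notify):
    """Raise an alert if a refrigeration reading leaves the safe range."""
    low, high = SAFE_RANGE_C
    if temp_c < low or temp_c > high:
        notify(f"Sensor {sensor_id}: {temp_c:.1f} C is outside the safe range {low}-{high} C")
        return False
    return True

# Example usage with a print-based notifier standing in for a real alerting channel.
evaluate_reading("chiller-03", 7.2, notify=print)
```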
IT Focus on Cybersecurity at the Edge
The retail industry is rapidly adopting Edge AI technologies to improve the customer experience and enhance operational efficiency. However, this increased reliance on Edge AI also creates new cybersecurity risks, as cyber criminals seek to exploit vulnerabilities in these systems. To address these challenges, retailers are increasing their investment in cybersecurity for Edge AI technologies. Key trends in Edge AI cybersecurity for retail include the implementation of advanced security protocols, such as encryption and multi-factor authentication, to protect sensitive customer data. Another trend is the use of artificial intelligence to detect and prevent cyber threats in real time by analyzing data from Edge AI devices and network traffic. As Edge AI becomes more widespread in the retail industry, the need for robust cybersecurity measures will only continue to grow, making it essential for retailers to stay ahead of the latest trends and best practices.
Connecting Digital Twins to the Edge
Edge AI can be utilized to create digital twins for various aspects of the retail environment, leading to improvements in store operations and customer experience. Retailers can create digital twins of store layouts to optimize product placement and simulate customer movement. Digital twins can also be created for inventory systems, smart shelves, and automated systems to optimize stock levels, predict demand patterns, and improve task efficiency and accuracy. Predictive maintenance can be performed by creating digital twins of equipment and infrastructure to avoid downtime. Edge AI also enables real-time monitoring of retail environments by creating digital twins and analyzing data, such as foot traffic and temperature, to inform better decision-making. Virtual reality applications can also be enhanced by creating digital twins of the store, providing virtual try-ons and product demonstrations to customers.
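As a simplified illustration of the digital twin concept, the sketch below keeps an in-memory model of a store shelf in sync with edge sensor events so stock levels can be queried and acted on without polling the physical shelf; the class, event format, and restock threshold are hypothetical:

```python
# Hypothetical sketch of a minimal shelf "digital twin" updated from edge sensor events.
from dataclasses import dataclass, field

@dataclass
class ShelfTwin:
    shelf_id: str
    stock: dict = field(default_factory=dict)   # SKU -> current unit count

    def apply_event(self, event):
        """Mirror a pick or restock event reported by shelf sensors or the POS."""
        sku = event["sku"]
        self.stock[sku] = self.stock.get(sku, 0) + event["delta"]

    def needs_restock(self, threshold=5):
        """Query the twin instead of the physical shelf."""
        return [sku for sku, count in self.stock.items() if count <= threshold]

twin = ShelfTwin("aisle-7-shelf-2", stock={"SKU-1042": 12})
twin.apply_event({"sku": "SKU-1042", "delta": -8})   # eight units picked
print(twin.needs_restock())                          # -> ['SKU-1042']
```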
Importance of Edge Computing
Edge computing is becoming a crucial technology as the amount of data generated from devices and sensors continues to grow. By decentralizing and distributing computing, edge computing offers several advantages, including real-time processing with lower latency, improved efficiency, enhanced security, cost-effectiveness, and increased scalability.
Benefits
- Low Latency: Edge computing enables near real-time processing for applications like self-driving cars, industrial automation, and IoT devices.
- Improved Efficiency: By processing data near its source, edge computing reduces data transmission, improving system efficiency and reducing data storage costs.
- Improved Security: Edge computing protects sensitive data by processing it near its source and analyzing it before transmitting it to a centralized location, reducing data breach risks.
- Cost-effective: Edge computing reduces costs by lessening the need for powerful servers and large data centers, and by reducing data transmission and storage costs.
- Increased Scalability: Edge computing allows decentralized, distributed computing, making it easier to scale systems as needed without infrastructure constraints.
Dell Solutions for Retail AI Edge Computing
Dell Technologies collaborates with numerous business partners to offer market-leading software integrated with its latest PowerEdge XR4000 infrastructure for retail AI edge computing. These comprehensive solutions are carefully curated and validated through Dell Validated Designs to support retailers in realizing their AI objectives and applications. In this post, we delve into three solutions that address the following areas: Manage and Scale AI at the Edge, Retail Loss Prevention, and Retail Analytics.
NVIDIA Fleet Command – Manage Configurations and Scale AI at the Edge
Edge AI deployment introduces new opportunities with real-time insights at the decision point, reducing latency and costs compared to data center and cloud transfer. However, bringing Edge AI to retail pipelines can be challenging due to resource limitations. Fleet Command is a hybrid cloud platform for managing large-scale AI, including edge devices, through a single web-based control plane. With Fleet Command and Dell EMC PowerEdge servers, IT administrators can remotely control AI deployments securely and efficiently, streamlining deployment and ensuring resilient AI across the network.
Adaptive Compute
Dell Technologies' systems management enables fast response to business opportunities through intelligent systems that collaborate and act independently to align outcomes with business goals, freeing IT to focus on innovation. Fleet Command simplifies AI management through centralization and one-touch provisioning, reducing the learning curve and accelerating the path to AI.
Autonomous Management
With autonomous management, smart systems act independently to keep outcomes aligned with business goals, requiring minimal operator intervention and freeing IT to focus on innovation. Fleet Command streamlines AI management with centralized, straightforward administration and one-touch provisioning.
Proactive Resilience
Dell EMC PowerEdge servers prioritize security throughout the infrastructure and IT setup, detecting potential risks. Fleet Command adds extra protection with integrated security features that secure application and sensor data, including self-healing capabilities that minimize downtime and reduce maintenance expenses.
Figure 1. Fleet Command Platform UI
RetailAI Protect by Malong Technologies – Retail Loss Prevention
The Retail Loss Prevention solution is driven by a Dell EMC PowerEdge server in the store and uses advanced product recognition technology from Malong RetailAI® Protect. The aim of the Retail Loss Prevention architecture is to prevent fraud while maintaining a seamless customer experience. It is an AI-powered system that can detect mis-scans and ticket switching in near real time, covering a wide range of stock keeping units (SKUs). The solution components are chosen for their compatibility with, and enhancement of, existing POS scanners. The Malong RetailAI Protect solution addresses retail loss in two ways: detecting ticket switching and detecting mis-scans. An overhead fixed-dome camera records an item with a suspect UPC barcode or an item that was not scanned. The video is processed on a GPU and fed into the Malong RetailAI model to predict the item's UPC. If the item was not scanned, the Malong RetailAI Protect system alerts the self-checkout (SCO) system after a set time interval. If the scanned UPC does not match the correct code, the system immediately raises an alert so that a retail associate can take appropriate action.
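The decision flow just described can be summarized in a short sketch; the function names, timeout value, and alert hooks are illustrative assumptions and do not represent the actual Malong RetailAI Protect API:

```python
# Illustrative sketch of the loss-prevention decision flow described above.
# Function names, timeout, and alert hooks are assumptions, not the Malong API.
import time

UNSCANNED_ALERT_DELAY_S = 10   # example grace period before flagging an unscanned item

def handle_item(camera_clip, scanned_upc, alert_sco, alert_associate, predict_upc):
    """Compare the vision model's UPC prediction with what the POS scanner reported."""
    predicted_upc = predict_upc(camera_clip)   # GPU inference on the overhead camera clip

    if scanned_upc is None:
        # Item seen by the camera but never scanned: wait, then alert the self-checkout.
        time.sleep(UNSCANNED_ALERT_DELAY_S)
        alert_sco(f"Unscanned item detected (appears to be UPC {predicted_upc})")
    elif scanned_upc != predicted_upc:
        # Ticket switching / mis-scan: the barcode does not match the recognized product.
        alert_associate(f"Scanned {scanned_upc} but item appears to be {predicted_upc}")

# Example usage with print-based alert hooks and a stub recognition model.
handle_item(camera_clip=b"...", scanned_upc="0012345", alert_sco=print,
            alert_associate=print, predict_upc=lambda clip: "0067890")
```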
Figure 2. Examples of mis-scanning and ticket switching
Deep North Video Analytics – Retail Analytics
Deep North Video Analytics is a leading-edge platform that performs real-time and batch processing of images captured by cameras typically mounted in the ceiling of a retail environment. The Deep North platform ingests the images and feeds them directly into the memory of GPU cards installed in a Dell PowerEdge or Dell VxRail node. Each camera stream is analyzed frame by frame, and the Deep North inferencing algorithms produce specific metadata. This metadata is then sent to the Deep North Analytics Cloud and converted into a dashboard that provides valuable information to the store owner.
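A heavily simplified view of that pipeline looks roughly like the following; the function names and metadata fields are hypothetical and are not Deep North's implementation:

```python
# Simplified, hypothetical sketch of a frame-by-frame video analytics pipeline.
# Function names and metadata fields are illustrative; this is not Deep North's code.

def process_stream(camera_id, read_frame, run_inference, publish_metadata):
    """Analyze one camera stream frame by frame and ship per-frame metadata upstream."""
    frame_index = 0
    while True:
        frame = read_frame()                 # next frame from the in-ceiling camera
        if frame is None:
            break                            # stream ended
        detections = run_inference(frame)    # e.g., people detected, zones entered
        publish_metadata({                   # forwarded to the analytics cloud / dashboard
            "camera_id": camera_id,
            "frame": frame_index,
            "people_count": len(detections),
            "detections": detections,
        })
        frame_index += 1

# Example with a two-frame stub stream, an identity "model," and print-based publishing.
frames = iter([["person"], ["person", "person"]])
process_stream("cam-01",
               read_frame=lambda: next(frames, None),
               run_inference=lambda frame: frame,
               publish_metadata=print)
```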
Figure 3. Visualizations in a Deep North Analytics Dashboard
Dell is your partner in your AI Edge for Retail Journey
As AI continues to evolve, keeping up with the design, development, deployment, and management of AI solutions can be a challenge for organizations lacking AI expertise. That's where Dell Technologies comes in, as your trusted partner in the AI journey. With a decade of experience as a leader in advanced computing, Dell offers industry-leading products, solutions, and expertise to empower your organization. Our specialized team of AI, HPC, and Data Analytics experts is dedicated to helping you stay ahead of the curve, with a focus on Edge AI for retail. Our experts can assist you in leveraging Edge AI to drive business outcomes, improve customer experiences, and increase operational efficiency. Trust Dell to help you stay at the forefront of AI innovation.
Customer Solution Center
The Customer Solution Center is a dedicated resource designed to provide customers with a comprehensive range of information, recommendations, and demonstrations of technologies and platforms that support AI. Our experienced staff are well-versed in the challenges faced by customers and offer valuable insight and guidance to help organizations leverage AI to its full potential.
AI and HPC Innovation Lab
The AI and HPC Innovation Lab is a state-of-the-art infrastructure staffed by an exceptional team of computer scientists, engineers, and Ph.D.-level experts. These specialists work in close collaboration with customers and the wider AI and HPC community to advance the field through early access to emerging technologies, performance optimization of clusters, benchmarking of applications, best practice development, and publication of industry thought leadership. By engaging with the Lab, organizations have direct access to Dell's leading experts, enabling them to tailor a customized solution for their unique AI or HPC needs.
Conclusion
Edge computing has become a crucial technology in the retail industry, allowing real-time data processing and analysis close to the information source. This leads to improved customer experiences, optimized store operations, and the implementation of new technologies such as self-checkout and augmented reality systems. As data generation continues to grow, edge computing offers low latency, improved efficiency, enhanced security, cost-effectiveness, and increased scalability. Edge AI is at the forefront of retail trends, offering high ROI in the form of intelligent automation and improved efficiency, human-machine collaboration, new AI use cases for safety, a focus on cybersecurity at the edge, and the creation of digital twins for improved intelligent decision making.
Related Blog Posts

Choosing a PowerEdge Server and NVIDIA GPUs for AI Inference at the Edge
Fri, 05 May 2023 16:38:19 -0000
Dell Technologies submitted several benchmark results for the latest MLCommons™ Inference v3.0 benchmark suite. An objective was to provide information to help customers choose a favorable server and GPU combination for their workload. This blog reviews the Edge benchmark results and provides information about how to determine the best server and GPU configuration for different types of ML applications.
Results overview
For computer vision workloads, which are widely used in security systems, industrial applications, and even in self-driving cars, ResNet and RetinaNet results were submitted. ResNet is an image classification task and RetinaNet is an object detection task. The following figures show that for intensive processing, the NVIDIA A30 GPU, which is a double-wide card, provides the best performance, with almost two times more images per second than the NVIDIA L4 GPU. However, the NVIDIA L4 GPU is a single-wide card that requires only 43 percent of the energy consumption of the NVIDIA A30 GPU, considering the nominal Thermal Design Power (TDP) of each GPU. This low energy consumption is a great advantage for applications that need lower power draw or that run in environments that are more challenging to cool. The NVIDIA L4 GPU is the replacement for the best-selling NVIDIA T4 GPU and provides twice the performance in the same form factor. Therefore, we see that this card is the best option for most Edge AI workloads.
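For context, the roughly 43 percent figure follows from the nominal TDPs of the two cards, about 72 W for the NVIDIA L4 and 165 W for the NVIDIA A30 (verify against the current NVIDIA specification sheets):

```python
# Back-of-the-envelope check of the TDP ratio quoted above.
# Nominal TDP values are taken from NVIDIA spec sheets; verify against current documentation.
L4_TDP_W = 72
A30_TDP_W = 165

ratio = L4_TDP_W / A30_TDP_W
print(f"L4 draws ~{ratio:.0%} of the A30's nominal TDP")   # ~44%, i.e., the ~43% cited above
```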
Conversely, the NVIDIA A2 GPU offers the lowest price (compared to the NVIDIA A30 GPU), power consumption (TDP), and performance level among the options available on the market. Therefore, if the application is compatible with this GPU, it has the potential to deliver the lowest total cost of ownership (TCO).
Figure 1: Performance comparison of NVIDIA A30, L4, T4, and A2 GPUs for the ResNet Offline benchmark
Figure 2: Performance comparison of NVIDIA A30, L4, T4, and A2 GPUs for the RetinaNet Offline benchmark
The 3D-UNet benchmark is the other image-related computer vision benchmark. It uses medical images for volumetric segmentation. We saw the same results for default accuracy and high accuracy. Moreover, the NVIDIA A30 GPU delivered significantly better performance than the NVIDIA L4 GPU. However, the same comparison of energy consumption, space, and cooling capacity discussed previously applies when considering which GPU to use for each application and use case.
Figure 3: Performance comparison of NVIDIA A30, L4, T4, and A2 GPUs for the 3D-UNet Offline benchmark
Another important benchmark is for BERT, which is a Natural Language Processing model that performs tasks such as question answering and text summarization. We observed similar performance differences between the NVIDIA A30, L4, T4, and A2 GPUs. The higher the value, the better.
Figure 4: Performance comparison of NVIDIA A30, L4, T4, and A2 GPUs for the BERT Offline benchmark
MLPerf benchmarks also include latency results, which measure the time that systems take to process requests. For some use cases, this processing time can be more critical than the number of requests that can be processed per second. For example, if a conversational algorithm or an object detection query that needs a real-time response takes several seconds, that delay can significantly degrade the experience of the user or application.
As shown in the following figures, the NVIDIA A30 and NVIDIA L4 GPUs have similar latency results. Depending on the workload, either GPU can provide the lowest latency. For customers planning to replace the NVIDIA T4 GPU or seeking a better response time for their applications, the NVIDIA L4 GPU is an excellent option. The NVIDIA A2 GPU can also be used for applications that require low latency because it performed better than the NVIDIA T4 GPU in single-stream workloads. The lower the value, the better.
Figure 5: Latency comparison of NVIDIA A30, L4, T4, and A2 GPUs for the ResNet single-stream and multistream benchmarks
Figure 6: Latency comparison of NVIDIA A30, L4, T4, and A2 GPUs for the RetinaNet single-stream and multistream benchmarks and the BERT single-stream benchmark
Dell Technologies also submitted results to benchmarks that help identify the most environmentally friendly configuration, because the data center's carbon footprint is a concern today. This concern is especially relevant because some edge locations have power and cooling limitations, so it is important to weigh performance against power consumption.
The following figure shows that the NVIDIA L4 GPU has equal or better performance per watt compared with the NVIDIA A2 GPU, even though it has higher power consumption. For Throughput and Perf/watt values, higher is better; for Power (watt) values, lower is better.
Figure 7: NVIDIA L4 and A2 GPU power consumption comparison
Conclusion
With measured workload benchmarks on MLPerf Inference 3.0, we can conclude that all the NVIDIA GPUs tested for Edge workloads have characteristics that address several use cases. Customers must evaluate size, performance, latency, power consumption, and price. Depending on the requirements of the application, one of the evaluated GPUs will be the better fit for the final use case.
Another important conclusion is that the NVIDIA L4 GPU can be considered as an exceptional upgrade for customers and applications running on NVIDIA T4 GPUs. The migration to this new GPU can help consolidate the amount of equipment, reduce the power consumption, and reduce the TCO; one NVIDIA L4 GPU can provide twice the performance of the NVIDIA T4 GPU for some workloads.
With these benchmark submissions, Dell Technologies demonstrates the breadth of the Dell portfolio, which provides infrastructure for virtually any type of customer requirement.
The following blogs provide analyses of other MLPerf™ benchmark results:
- Dell Servers Excel in MLPerf™ Inference 3.0 Performance
- Dell Technologies’ NVIDIA H100 SXM GPU submission to MLPerf™ Inference 3.0
- Empowering Enterprises with Generative AI: How Does MLPerf™ Help Support
- Comparison of Top Accelerators from Dell Technologies’ MLPerf™
References
For more information about Dell PowerEdge servers, go to the following links:
- Dell’s PowerEdge XR7620 for Telecom/Edge Compute
- Dell’s PowerEdge XR5610 for Telecom/Edge Compute
- PowerEdge XR4520c Compute Sled specification sheet
- PowerEdge XE2420 Spec Sheet
MLCommons™ Inference v3.0 results presented in this document are based on the following system IDs:
| ID | Submitter | Availability | System |
|---|---|---|---|
| 2.1-0005 | Dell Technologies | Available | Dell PowerEdge XE2420 (1x T4, TensorRT) |
| 2.1-0017 | Dell Technologies | Available | Dell PowerEdge XR4520c (1x A2, TensorRT) |
| 2.1-0018 | Dell Technologies | Available | Dell PowerEdge XR4520c (1x A30, TensorRT) |
| 2.1-0019 | Dell Technologies | Available | Dell PowerEdge XR4520c (1x A2, MaxQ, TensorRT) |
| 2.1-0125 | Dell Technologies | Preview | Dell PowerEdge XR5610 (1x L4, TensorRT, MaxQ) |
| 2.1-0126 | Dell Technologies | Preview | Dell PowerEdge XR7620 (1x L4, TensorRT) |

Table 1: MLPerf™ system IDs

Computing on the Edge: Other Design Considerations for the Edge – Part 1
Fri, 13 Jan 2023 19:46:50 -0000
In past blogs, the requirements for NEBS Level 3 certification were addressed, along with the even higher demands that come with Outside Plant (OSP) installation requirements. Now, additional design factors must be weighed to create a hardware solution that not only survives the environment at the edge but also provides a platform that can be effectively deployed there.
Ruggedized Chassis Design
The first design consideration that we’ll cover for an Edge Server is the Ruggedized Chassis. This is certainly a chassis that can stand up to the demands of Seismic Zone 4 testing and can also withstand impacts, drops, and vibration, right?
Not necessarily.
While earthquakes are violent and demanding but relatively short-duration events, the shock and vibration profile can differ significantly when the server is taken out from under the cell tower, beyond the base of the tower and into edge environments that might be encountered in Private Wireless or Multi-Access Edge Compute (MEC) deployments. Some vibration and shock impacts are tested in GR-63-CORE under the test criteria for Transportation and Packaging, but ruggedized designs need to go beyond this level of testing.
Figure 1. Portable Edge Compute Platforms
Consider, for example, the need for ruggedized servers in mining or military environments, where compute deployments can be more temporary in nature and often rely on portable cases, such as Pelican cases. These cases are subject to environmental stresses and can require ruggedized rails and upgraded mounting brackets on the chassis for those rails. For longer-lasting deployments, such enclosures can be less than ideal, and the equipment must meet all the requirements of a GR-3108 Class 2 device, plus perhaps some additional considerations.
Dell Technologies also tests our Ruggedized (XR-series) Servers to MIL-STD-810 and Marine testing specifications. In general, MIL-STD-810 temperature requirements are aligned with GR-63-CORE on the high side but test operationally down to -57°C (-70°F) on the low side. This reflects some extreme parts of the world where the military is expected to operate. But MIL-STD-810 also covers land, sea, and air deployments, which means that the non-operational (shipping) criteria are much more in-depth, as are acceleration, shock, and vibration. The criteria include scenarios such as crash survivability, where the server can be exposed to up to 40 G of acceleration. Of course, this tests not only the server but also the enclosure and mounting rails used in testing.
So why have I detoured into MIL-STD and Marine testing? For one, the extreme "dynamic" testing requirements, which are not seen in NEBS, are interesting in their own right. Second, creating a server that survives MIL-STD and Marine environments is complementary to NEBS and produces an even more durable product with applications beyond the cellular network.
Server Form Factor
Figure 2. Typical Short Depth Cell Site Enclosure
Another key factor in chassis design for the edge is the form factor. This involves understanding the physical deployment scenarios and legacy environments, leading to a server form factor that can be installed in existing enclosures without the need for major infrastructure improvements. For servers, 19-inch rackmount or 2-post mounting is common, with 1U or 2U heights. But the key driver in chassis design for compatibility with legacy telecom environments is short depth.
Server depth is not something covered by NEBS; supplemental documentation created by the telecom operators, typically reflected in RFPs, defines the depth required for installation into legacy environments. For instance, AT&T's Network Equipment Power, Grounding, Environmental, and Physical Design Requirements document states that "newer technology" deployed to a 2-post rack, which certainly applies to deployments like vRAN and MEC, "shall not" exceed 24 inches (609 mm) in depth. This disqualifies most traditional rackmount servers.
The key is deployment flexibility. Edge compute should be able to mount anywhere and adapt to the constraints of the deployment environment. For instance, in a space-constrained location, front-access maintenance is a necessary design requirement; these servers are often installed close to a wall or mounted in a cabinet with no rear access. In addition, supporting reversible airflow allows the server to adapt to the cooling infrastructure (if any) already installed.
Conclusion
While NEBS requirements focus on environmental and electrical testing, ultimately the design needs to account for the target deployment environment and meet the installation requirements of the intended edge locations.