


Self-Learning Series Part 1: Understanding NativeEdge

Joshua Margo

Fri, 13 Oct 2023 11:52:00 -0000



We are experiencing a fundamental design shift, driven by a perfect storm occurring at the edge. First, massive amounts of data are being created at the edge by sensors, robotics, video, and other devices, most of which did not exist until recently. 

Second, a new generation of technology has matured which enables us to derive value from that data in near real time. Technologies such as AI and machine learning, paired with small form factor computing and low latency 5G networking, enable us to capture, curate, analyze, and act faster than ever before. Multicloud has also matured to the point where companies can leverage any cloud platform for monitoring, reporting, and model training. 

Most importantly, unique challenges require a new approach to the edge. The diversity of hardware and environments makes testing, integrating, deploying, and managing hardware and associated software a critical design point. Edge application workloads are challenging because they must support diverse use cases like computer vision in manufacturing or inventory management in retail. Large-scale geo-distributed locations such as retail stores and distribution centers elevate business-level concerns surrounding security, support, and efficient distributed systems operations.

Addressing the Challenges at the Edge

The edge, by its very nature, has unique challenges. Together, data and the technologies that capture it are defining a new set of challenges, implications, and constraints that are fundamentally different than those involved with core datacenter and cloud models.

  • Due to environmental diversity, lifecycle management of hardware and its associated software becomes difficult, and large-scale edge deployments become a significant challenge. Complexities range from the type of network connections to the level of ruggedization and configurations. 
  • OT workloads at the edge need to support both legacy and next-generation workloads deployed in various forms, such as virtual machines (VMs), containers, and serverless designs. The technology underpinnings must be stable, secure, and highly available to meet the needs of these "edge-native" apps.
  • Distributed edge deployments, such as those found in retail outlets and distribution centers, elevate business-level concerns around security, support, and efficient distributed systems operations. Physical and logical security is crucial as the attack surface of an organization massively expands. Zero-trust security concepts must be applied from the supplier network to the production floor.
  • Managing these distributed systems in locations without technical personnel must be simple, scalable, and facilitate easy repairs. Systems must be fundamentally zero-touch once plugged in and powered on.
  • Secure operations, including the ability to deploy and secure workloads anywhere, and to centrally monitor and report on technical and business-level changes, is another critical concern at the edge. Application orchestration solutions designed for edge deployments must be able to deploy these operations workloads to the cloud of their choice.

Attempts to solve edge challenges with use-case-specific, bespoke solutions have resulted in technology silos that become operational nightmares as use cases and workloads increase over time.

These challenges are driving business and technology requirements towards a new approach—one that avoids these operational roadblocks.

The New Frontier of Your Edge Strategy

At Dell, we’re changing how we approach these challenges by creating a new management and orchestration platform for the edge.

This new approach allows us to tackle these edge challenges and avoid the case-by-case solutions that result in technology silos that are difficult to scale and manage. We understand that edge infrastructure management is a large undertaking for any organization.

There is a need to reimagine edge management operations whereby enterprises can orchestrate the entire lifecycle management of applications, anywhere and anytime. They need to be empowered to scale their edge operations with consistency and security for any use case. They must be able to simplify their operations, optimize their edge investment, and secure their distributed edge estate easily.

Simply extending traditional IT and datacenter-centric practices to manage your edge estate does not work. We need a new approach to address the unique challenges at the edge.

We’ll get you started by breaking down these challenges and their solutions to help improve application lifecycle management and elevate the efficiency of OT and IT teams and their collaborative productivity.

The New Approach is Ready for Action 

Dell NativeEdge is an edge operations software platform that helps businesses securely scale their edge management across their distributed edge estate. NativeEdge centralizes edge management across locations, automates operations, offers flexibility with its open design, enables zero-trust security, and provides multicloud connectivity. With NativeEdge, enterprises across various industries can securely power any edge application, anywhere, to achieve their specific business goals.

The platform also prioritizes partnerships and leverages a broad ecosystem of independent software vendors, system integrators, original equipment manufacturers (OEMs), and channel partners to deliver tailored solutions to customers through their preferred technology providers.

NativeEdge is designed to be cost-effective by offering subscription-based or software-as-a-service (SaaS) options.

The platform streamlines edge management across different industries, including but not limited to: retail, manufacturing, energy, digital cities, and healthcare. While it can be used to deploy and manage a small edge environment (a single edge site with just a few edge compute endpoints), it seamlessly scales up for deploying and operating more complex edge compute estates (multiple edge sites with many edge compute endpoints).


At the center of NativeEdge is the orchestrator, which supports edge operations such as application orchestration, fleet management, and lifecycle management. Packaged as Helm charts, the NativeEdge Orchestrator can be deployed anywhere a dedicated Kubernetes cluster exists. For example, the Kubernetes cluster can run on-premises inside a VM or on a bare-metal server. Once the NativeEdge Orchestrator is deployed, customers can easily add NativeEdge-enabled Devices into the edge estate with secure device onboarding and zero-touch provisioning.

To learn more about how to simplify edge operations at scale, click here to see an interactive flip-book.


With the industry’s broadest portfolio of edge infrastructure hardware and our industry-leading secure supply chain, we can digitally sign and certify hardware in the factory. The chain starts at first power-on, which automates the deployment and configuration of the edge infrastructure that is managed by NativeEdge while ensuring a zero-trust chain of custody.

Users can eliminate operational complexity at scale with centralized management using blueprint-based deployment, zero-touch provisioning, and automated onboarding and operations of infrastructure and applications from edge to multicloud.

This new paradigm eliminates supply chain risks and integration failures. It ensures that the entire solution is consistently installed properly and helps consolidate multiple applications and use cases into one architecture. Users can apply automated workflows simultaneously to thousands of devices across all locations.

To learn more about how to improve productivity and efficiency, click here to see an interactive flip-book.


Built on an open design, NativeEdge offers the flexibility to choose the independent software vendor (ISV) and cloud environment for edge application workloads. You can centrally and consistently deploy containerized and virtual applications using blueprints to work with your choice of IoT framework and OT vendor.

Make the most of your edge investments using an open design that works with software applications, IoT frameworks, multi-vendor operations technology solutions, and multicloud environments. Users can reduce proof-of-concept development time and deliver a consistent experience across multiple hardware form factors and price points.

To learn more about how to optimize your edge investment, click here to see an interactive flip-book.


All of these benefits mean absolutely nothing if they compromise an enterprise's security. The distributed nature of the edge and lack of technical staff make security and compliance the most business-critical pieces, determining the viability of any edge plan. 

The platform is built from the ground up with zero-trust security principles. We alleviate security fears by delivering a platform that ensures the integrity of edge hardware from design to deployment, including within the supply chain, to protect applications and data through hardened blueprints and digitally signed package validation. 

In many cases, local skilled resources are not available at the start of onboarding, which causes delays. NativeEdge only requires the skills needed to plug in and power on a device, and then automation takes care of everything else.

Zero trust is a security and networking paradigm that seeks to prevent violations of trust by users, applications, or devices.

  • Zero trust focuses on authenticating, authorizing, and protecting these individual users, applications, and devices, irrespective of their physical or network location.
  • Zero trust allows administrators to create users and assign role-based access control.
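The role-based access control idea above can be sketched in a few lines. This is a purely illustrative model of the zero-trust "deny by default" principle; the role names, permissions, and function are hypothetical and not part of any NativeEdge API.

```python
# Illustrative sketch of role-based access control (RBAC), one of the
# zero-trust concepts described above. All names here are hypothetical.

ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "operator": {"read", "deploy"},
    "admin": {"read", "deploy", "manage_users"},
}

def is_allowed(user_roles, action):
    """Grant an action only if some assigned role explicitly permits it
    (deny by default, regardless of where the request originates)."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(is_allowed(["operator"], "deploy"))      # True
print(is_allowed(["viewer"], "manage_users"))  # False
```

The key zero-trust property is that location never appears in the check: permission comes only from an explicit role assignment, never from being "inside" a trusted network.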


To learn more about how to secure with zero trust, click here to see an interactive flip-book.


Our goal for NativeEdge is to help customers securely scale their edge operations and to support any use case or combination of use cases, by enabling them to simplify their operations, optimize their investment, and secure their entire edge estate.

This new approach simplifies operations through integrated automation processes to streamline edge deployment and operations at scale, without relying on IT expertise in the field. NativeEdge does so with centralized management, zero-touch deployment and onboarding, and automated operations. 

Our strong history of industry technology partnerships at the edge has resulted in a strong edge ecosystem that can leverage the open, vendor-agnostic design of the platform, enabling customers to optimize their edge investment. We support existing and new edge use cases with an open design that works with users' choice of software applications, IoT frameworks, OT vendor solutions, and multicloud environments. We put the customers in the driver’s seat to control their edge, and to not get locked into closed or vertically integrated vendor ecosystems.

Additional Resources

To learn more about NativeEdge features and benefits, click on the following links:

This blog is a part of a self-learning series. For more information on NativeEdge, go to:


Self-Learning Series Part 2: Delivering Zero-Trust Security with NativeEdge

Joshua Margo

Tue, 17 Oct 2023 13:43:00 -0000



At the edge, devices are typically deployed in remote, less secure locations, making them vulnerable to physical tampering. Furthermore, as these devices move through the supply chain, they may be exposed to multiple parties, any one of which could be a malicious actor.

The distributed nature of the edge and lack of technical staff make security and compliance the most business-critical pieces, determining the viability of any edge plan. 

Managing hardware and software complexity across various form factors, network connections, levels of ruggedization, and configurations is a significant challenge that must be addressed for large-scale edge deployments.

This highlights the importance of ensuring that edge devices are secure, user-friendly, and straightforward to deploy.

The NativeEdge platform is built from the ground up with zero-trust security principles. We alleviate the security fears by delivering a platform that ensures the integrity of edge hardware from design to deployment, and along the supply chain to protect applications and data through hardened blueprints and digitally signed package validation.

Ensuring a Zero-Trust Chain of Custody

Our top priority is ensuring security from design to deployment and all along the supply chain to protect applications, data, and infrastructure across the edge estate using zero-trust security principles.

To address this need, Dell introduces NativeEdge secure device onboard (SDO), a solution that simplifies the deployment of NativeEdge-enabled Devices while ensuring robust security with zero-trust and zero-touch capabilities. Using NativeEdge, anyone can set up a NativeEdge-enabled Device by plugging in a network cable, powering on the device, and stepping away. Devices automatically onboard into the NativeEdge Orchestrator for zero-touch deployment across sites.

Delivering Zero-Trust Capabilities: Secure Supply Chain and Beyond

After SDO, the NativeEdge Orchestrator securely provisions the NativeEdge Operating Environment onto the NativeEdge-enabled Device. At this point, the device can accept deployment of applications from the NativeEdge Orchestrator.

Every shipment of NativeEdge-enabled Devices from the Dell manufacturing plant is secure and locked down. This is accomplished through the following measures:

  • Secure boot is enabled in BIOS, meaning that only Dell NativeEdge images such as Factory OS, NativeEdge Operating Environment, factory reset image, and so on can successfully boot.
  • The BIOS is password-protected and locked down.
  • Boot order is locked down.
  • Secured Component Verification (SCV) further protects PowerEdge R660 and R760 NativeEdge models.
  • iDRAC (for PowerEdge models) is disabled during onboarding.
  • A single network port is available for onboarding while all other ports are disabled.

Impact Management from Deployment to Onboarding

Secure operations, including the ability to deploy and secure workloads anywhere, and centrally monitor and report on technical and business-level changes, is another critical concern at the edge. Application orchestration solutions designed for edge deployments must be able to deploy these operations workloads to the cloud of their choice.

An important feature of NativeEdge security is Secured Component Verification (SCV). It ensures that devices are delivered and ready for deployment exactly as they were built by Dell manufacturing, extending the Dell Secure Supply Chain assurance process. We leverage a trusted platform module (TPM) chip to secure the hardware with integrated cryptographic keys. The TPM stores security certificates and secrets used to encrypt all management communication. It ensures that, as an edge device is onboarded to NativeEdge, the connection is highly secure, and that the device cannot be removed from its location and managed through any other means; it can only be managed through NativeEdge.
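The core idea behind digitally signed package validation can be shown in miniature: a device accepts a package only if its digest verifies against a signature made with a key the device trusts. This is a simplified stand-in only; real SCV uses X.509 certificates and hardware-backed TPM keys rather than the shared-secret HMAC sketched here, and all names are hypothetical.

```python
# Minimal sketch of signed-package validation: accept a package only if
# its digest verifies against a trusted key. An HMAC stands in for the
# certificate/TPM machinery used in practice.
import hashlib
import hmac

TRUSTED_KEY = b"factory-provisioned-secret"  # hypothetical; a TPM would guard this

def sign_package(payload: bytes) -> str:
    """Sign the SHA-256 digest of the package payload."""
    return hmac.new(TRUSTED_KEY, hashlib.sha256(payload).digest(), "sha256").hexdigest()

def verify_package(payload: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_package(payload), signature)

pkg = b"nativeedge-operating-environment.img"
sig = sign_package(pkg)
print(verify_package(pkg, sig))                # True
print(verify_package(pkg + b"tampered", sig))  # False
```

Any change to the payload after signing, however small, changes the digest and causes verification to fail, which is the property that lets the orchestrator detect tampering along the supply chain.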

Additionally, securing with zero trust reinforces the security of applications, data, and infrastructure at every layer:

  • Protecting hardware integrity with FDO-enabled devices
  • Fortifying data and applications, from edge to cloud
  • Authenticating, authorizing, and protecting individual users, applications, and devices, irrespective of their physical or network location
  • Allowing administrators to create users and assign role-based access control

Finally, zero trust requires tamper-proof edge hardware and software integrity. We need to verify that nothing has happened to a device, because at the edge you may not have the same level of security controls that you have inside your core data center, or even inside a regional data center; these sites typically have fewer access controls. With consistent management and control, and the ability to keep your edge infrastructure up to date, you can be assured that your edge estate is not increasing the attack surface of your IT infrastructure and operations.

Security Standards that Protect Your Data

Zero-trust security principles are at the core of NativeEdge, ensuring the integrity of edge hardware, applications, and data through hardened blueprints and digitally signed package validation. While onboarding new devices or applications, the platform extends continuous security across all connected resources, providing you with peace of mind.

NativeEdge empowers you to leverage the enormous benefits of edge computing, while ensuring the integrity and safety of your systems and data.


Dell NativeEdge helps businesses secure the data pipeline from data sources to the edge applications running locally, in data centers, or on the cloud. It combines advanced security measures such as encryption, user access control, private app catalog, network segmentation, and security orchestration. The edge platform also uses telemetry and analytics to proactively assess the security posture of the edge estate without relying on experts with audit capabilities to visit every site.

Dell NativeEdge protects your edge estate with zero-trust security principles. The edge operations software platform enables secure zero-touch onboarding coupled with a hardened and secure edge operating system, which is fundamental to the fidelity of your edge estate. With Dell NativeEdge, you can rest assured that the devices, users, network, applications, and data are continually attested and validated across your expanding edge estate.

To learn more about how to secure with zero trust, click here to see an interactive flip-book.

Additional Resources

To learn more about edge security essentials, click on the following links:

This blog is a part of a self-learning series. For more information on NativeEdge, go to:


Self-Learning Series Part 3: Using Automation to Scale and Streamline Operations

Joshua Margo

Sun, 05 Nov 2023 12:54:00 -0000



Edge devices offer businesses across industries the opportunity to elevate their operations in an unprecedented way. Each edge device that is added to operations comes with multiple management challenges.

The two main challenges businesses constantly need to address are:

  1. The resources needed to deploy edge devices are not always readily available
  2. The time needed to manage edge devices is not always feasible  

If the purpose of these edge devices is to deliver data and improve efficiency, the platform that manages them should match these goals.

Additionally, the struggle to keep IT and OT functioning seamlessly is compounded by the need for edge devices to be deployed, monitored, and updated without creating bottlenecks or unnecessary repetitive tasks.

Managing these distributed systems, especially in locations that don’t have technical personnel, must be simple, scalable, and easily repeatable. Systems must be fundamentally zero-touch once plugged in and powered on.

Therefore, eliminating operational complexity at scale requires a centralized management platform with zero-touch deployment and onboarding, and automated operations of infrastructure and applications from edge to multicloud.

Dell NativeEdge is the edge operations software platform that will help enterprises simplify their edge environments by automating edge operations and enforcing zero trust security.  

This blog explores how automation with NativeEdge helps simplify the operational processes, allowing for OT and IT to streamline tasks and increase edge device efficiency.

Imagine these possibilities with automation:

  • What if you could consolidate all siloed solutions and make it easier to manage and scale them using consistent, repeatable processes?
  • What if you could set up security controls across the edge one time, then enforce them automatically without IT intervention whenever you deploy more applications and devices?
  • What if you could orchestrate all your applications, third-party or home-grown, from a single catalog, across any number of devices or locations, using blueprint templates?
  • As your edge infrastructure expands, what if you could deploy and provision new devices automatically with all the required workloads?
  • What if you could also push out patches and upgrades consistently and at scale?

Dell NativeEdge makes all these possible.

Through the automation of routine and repetitive tasks like onboarding devices, orchestrating application workloads, and managing them at scale, recent analysis suggests the NativeEdge platform can speed up application lifecycle management at the edge by as much as 22 times compared to current processes.1 This means a large-scale edge implementation that might take 100 or more hours to deploy could be completed in under five hours with Dell NativeEdge. 
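The arithmetic behind the "under five hours" figure is simply the cited 100-hour baseline divided by the estimated 22x speedup:

```python
# Back-of-the-envelope check of the cited estimate: a 22x speedup
# applied to a 100-hour deployment.
baseline_hours = 100
speedup = 22

accelerated_hours = baseline_hours / speedup
print(round(accelerated_hours, 1))  # 4.5, i.e. "under five hours"
```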

Automation’s Impact on the Whole Lifecycle Management  

Reduce Human Intervention

NativeEdge dramatically simplifies operations through deeply integrated automation processes to streamline edge deployment and operations at scale without relying on IT expertise in the field. NativeEdge does so with centralized management, zero-touch deployment and onboarding, and automated operations.  

Error-free Processes

Automating the provisioning and deployment processes enables developers to request and access the necessary resources and environments without relying on manual intervention from IT operations. This self-service approach accelerates development cycles and reduces the time required to set up and configure new environments. 

Achieve Faster and More Reliable Software Delivery

Creating tools and workflows that automate tasks like deployment, testing, and monitoring helps reduce human intervention and ensures consistent and error-free processes. This aligns closely with the principles of DevOps implementation, where development and operations teams collaborate closely to achieve faster and more reliable software or hardware delivery. 

Simplified Operations

Through the Dell NativeEdge platform, automation simplifies edge operations by providing centralized control and management of distributed edge devices and infrastructure. This simplification leads to increased operational efficiency, reduced manual intervention, improved reliability, and better utilization of edge resources. These advantages are particularly crucial in edge computing scenarios, where resources are distributed across various remote locations and need to function reliably and with minimal human intervention.

Leveraging Infrastructure as Code (IaC)

Imagine you have edge devices on a fleet of boats. Without automation, updating the application version would mean sending a DevOps specialist to each boat, which would take ages and raise costs astronomically. And if something goes wrong with an edge device, how long would it take to diagnose the problem, and how long to repair it so the device is up and running properly?

NativeEdge leverages IaC to automate application provisioning, deployment, and lifecycle management on NativeEdge-enabled Devices as well as on other infrastructure with virtualized or containerized environments.

To understand how we can leverage IaC, let’s make sure we understand some basic terminology:

  • Infrastructure as code (IaC): The managing and provisioning of infrastructure through code instead of through a manual process. Using IaC, configuration files that contain your infrastructure specifications are created, which makes it easier to edit and distribute configurations.
  • Blueprint: a set of documented best practices, guidelines, and processes for implementing DevOps principles within an organization. A blueprint, in the context of automation, can be a valuable tool to facilitate the design, implementation, and management of automated processes.

Using blueprints is a powerful way to streamline infrastructure and application deployment, ensure consistency, and reduce the risk of errors in your software development and deployment process.

A common approach to creating and managing blueprints is IaC. For example, you might use Ansible for infrastructure provisioning, and configuration management tools like Puppet or Chef for application configuration.

Following our earlier example of updating an application on devices installed across a fleet of boats, you can leverage automation with Dell NativeEdge, and blueprints can facilitate the process. There are two routes to create blueprints:

  1. Internally write code or configuration files that define your blueprint. This code should specify how to set up and configure infrastructure components (servers, databases, load balancers) and application components (web servers, microservices, databases), and then upload it to NativeEdge.
  2. Alternatively, you can use the NativeEdge catalog which includes ready-to-use blueprints provided by Dell or written by independent software vendors (ISV).

Note: Components of a blueprint can often be reused in various contexts. For example, you can use the same blueprint to deploy similar microservices in different parts of your application.
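The blueprint idea can be sketched as a declarative specification applied uniformly to every device in a fleet, so no operator ever hand-configures an individual machine. The schema, device names, and `deploy` function below are illustrative inventions, not a real NativeEdge blueprint format.

```python
# Hypothetical sketch of blueprint-driven deployment: one declarative
# spec is applied to every device in a fleet. All names are invented.

blueprint = {
    "app": "vessel-telemetry",
    "version": "2.1.0",
    "runtime": "container",
    "config": {"replicas": 1, "port": 8443},
}

fleet = ["boat-001", "boat-002", "boat-003"]

def deploy(device: str, spec: dict) -> str:
    # A real orchestrator would push the spec to the device's agent;
    # here we just record the intended state for illustration.
    return f"{device}: {spec['app']}=={spec['version']} deployed"

results = [deploy(d, blueprint) for d in fleet]
print(results[0])  # boat-001: vessel-telemetry==2.1.0 deployed
```

Because the blueprint, not the operator, carries the configuration, rolling the same update to three devices or three thousand is the same loop; only the fleet list grows.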

Once you choose the blueprint you would like to use, NativeEdge provides an option to deploy the updated application on all the devices running the old version, across the entire fleet of boats, with just a few clicks. You don’t need to know how to create the VM recipe, run a playbook, or install anything manually. All you need to know is how to click Install; the rest is automated.

All these features in NativeEdge allow for a simplified operational process to update the edge device’s application on the fleet of boats in less time and with less technical expertise on hand. Similarly, we can apply these benefits to retail stores, manufacturing factories, or smart cities.


NativeEdge can manage your entire application lifecycle through automation tools. It helps deploy apps on any infrastructure, including public and private clouds. It is a reliable DevOps tool to speed up the building, deployment, and management of software, apps, and microservices without sacrificing operational efficiency or security.

1 Estimated: Based on a 2023 study of edge operations by GLG Research on behalf of Dell Technologies and estimates from a test deployment of NativeEdge (avg. of 100 responses from IT practitioners).

Additional Resources

To learn more about NativeEdge Application Orchestration, click on the following link:

This blog is a part of a self-learning series. For more information on NativeEdge, go to:


Self-Learning Series Part 4: Explore the Open Design and Platform Architecture

Joshua Margo

Sun, 19 Nov 2023 14:53:00 -0000



Edge has a unique set of challenges that require a new way of architecting to solve them. Edge computing is a distributed computing paradigm where data processing is performed closer to the data source or "edge" of the network, rather than relying solely on centralized cloud servers. 

An open design fosters a culture of innovation and collaboration. It promotes flexibility and a more future-proof approach to edge computing. However, it's essential to carefully evaluate the specific requirements of the edge computing environment and choose the approach that best aligns with the organization's goals and constraints.

In this blog, we will help you understand how to get the most out of edge investments using an open design that works with software applications, IoT frameworks, multi-vendor operations technology solutions, and multicloud environments of your choice. This will allow you to consolidate technology silos and deliver a consistent management experience across devices with connectivity out of the box. 

A Unique Set of Challenges

When edge computing lacks an open design, it can face several challenges, including:

  • Vendor Lock-In—Without open standards and interoperability, organizations may become locked into a specific vendor's proprietary solutions. This limits flexibility, hinders innovation, and leads to higher costs.
  • Lack of Ecosystem—A closed system can stifle competition, reducing options and potentially raising prices.
  • Security Concerns—Closed, proprietary systems may lack transparency, making it more difficult to assess and improve security.
  • Scalability—Scalability is critical for edge computing, as the number of edge devices and their diversity can vary widely. Closed systems are more rigid and make it difficult to scale.

As a result, closed systems may limit the ability of developers and organizations to innovate and create customized edge computing solutions.

What Is Multicloud by Design?

Multicloud by design, also known as a multi-cloud strategy or multi-cloud architecture, is an intentional approach to utilizing multiple cloud service providers for various aspects of an organization's computing needs. In this strategy, a company deliberately chooses to use two or more cloud platforms, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) to meet specific business requirements.

While multicloud offers numerous benefits, it also introduces complexities in terms of management, orchestration, and security. Organizations need to plan their multi-cloud strategy carefully, including workload placement, data synchronization, network configurations, and security measures, to ensure a successful and efficient implementation. Specialized tools and services designed for managing multi-cloud environments can assist in these efforts.

Watch the following video on how to optimize your edge investment:

A New Way of Architecting

Built on an open design, Dell NativeEdge offers the flexibility to choose the ISV applications and cloud environments for your edge application workloads. You can centrally and consistently deploy containerized and virtual applications using blueprints to work with your choice of IoT frameworks and OT vendors. Like everything else from Dell, NativeEdge is multicloud by design, enabling you to deploy applications across any new or existing environment.

Here are a few advantages of using an open design system:

  • Flexibility—Open architectures allow organizations to choose from a variety of hardware, software, and services. This flexibility is particularly important in the dynamic edge computing environment, where the diversity of devices and use cases can vary.
  • Avoiding Vendor Lock-In—With open designs, organizations are less likely to become locked into a single vendor's proprietary solutions. This reduces the risks associated with vendor dependency and enables businesses to switch or integrate different technologies more easily.
  • Cost-Effectiveness—Open design often leads to cost-effective solutions. Open-source software and standards can reduce licensing fees and minimize the need for expensive proprietary hardware, helping organizations optimize their budgets.
  • Scalability—Open architectures are typically designed with scalability in mind, making it easier to expand edge computing solutions as requirements grow or change.
  • Security and Transparency—Open-source projects are transparent, allowing users to inspect the source code for security vulnerabilities. Community review and contributions help identify and address security issues promptly.
  • Ecosystem Growth—An open design fosters a broader ecosystem of complementary software and hardware solutions, enhancing the availability of tools and services that can be integrated into the edge computing environment.

Edge Partner Ecosystem

We are working with partners to co-engineer and develop solutions that include software, partner intellectual property, products, and services.  Dell also has some of the biggest, longest-standing partnerships in the industry with companies like Microsoft, Intel, and VMware.

When market-leading companies team up to create and offer validated, proven reference architectures, we can help you mitigate risk and accelerate your time to revenue.

As an example, with NativeEdge, the Dell Validated Design for Manufacturing Edge using Telit Cinterion can be implemented and brought to market more quickly, allowing for faster deployment, lower costs, increased security, and more reliable and repeatable outcomes based on the blueprints implemented. This allows for:

  • Quicker data collection and analysis when deployed on-premises
  • Increased integration of information from existing assets across all NativeEdge-enabled Devices
  • Simpler configuration
  • Simplified connection of devices

By removing the complexity of deployment and adding the element of application-level lifecycle management, NativeEdge reduces the amount of physical touch required and creates a repeatable deployment process at scale.

Dell Technologies will continue to foster partnerships to develop open software that enables interoperability and ease of operations while avoiding being locked into expensive, proprietary technologies that limit your ability to innovate and create. For more information, visit our Edge Ecosystem.

Watch the following video: Power management company optimizes edge investments for success


Make the most of edge investments by using an open design that works with your choice of software applications, IoT frameworks, multi-vendor operations technology (OT) solutions, and multicloud environments. This enables you to deploy applications across new or existing environments, with NativeEdge supporting each edge use case.

Dell Technologies will enable its strong existing ecosystem of edge partners to leverage the open, vendor-agnostic design, allowing customers to optimize their edge investments. This puts the customer in the driver’s seat to control their edge.

To learn more about how to simplify edge operations at scale, click here to see an interactive flip-book.

Additional Resources

To learn more about NativeEdge Application Orchestration, click on the following links:

This blog is part of a self-learning series. For more information on NativeEdge, go to:

  • NativeEdge
  • computer vision

Delivering an AI-Powered Computer Vision Application with NVIDIA DeepStream and Dell NativeEdge

Nati Shalom

Mon, 20 May 2024 08:37:34 -0000



The Dell NativeEdge platform, with its latest 2.0 update, brings to the table an array of features designed to streamline edge operations. From centralized management to zero-touch deployment, it ensures that businesses can deploy and manage their edge solutions with unprecedented ease. The addition of blueprints for key independent software vendors (ISVs) and AI applications gives users the ability to get a fully automated end-to-end stack from bare metal to production-grade vertical services in retail, manufacturing, energy, and other industries. In essence, it brings the best of both worlds—an open platform that is not bound to any specific ISV or cloud provider while preserving the simplicity of a vertical solution.

This blog describes the specific integration of NativeEdge with NVIDIA DeepStream to enable developers to build AI-powered, high-performance, low-latency video analytics applications and services.

Introduction to NVIDIA DeepStream

NVIDIA DeepStream is a comprehensive streaming analytics toolkit designed to facilitate the development and deployment of AI-powered applications. It is built on the GStreamer framework and is part of NVIDIA’s Metropolis platform. Its main features include:

DeepStream SDK

  • Input sources—DeepStream accepts streaming data from various sources, including USB/CSI cameras, video files, or RTSP streams.
  • AI and computer vision—It utilizes AI and computer vision to analyze streaming data and extract actionable insights.
  • SDK components—The core SDK includes hardware accelerator plugins that leverage accelerators such as VIC, GPU, DLA, NVDEC, and NVENC for compute-intensive operations.
  • Edge-to-cloud deployment—Applications can be deployed on embedded edge devices running the Jetson platform or on larger edge or datacenter GPUs like the T4.
  • Security protocols—It offers security features like SASL/Plain authentication and two-way TLS authentication for secure communication between edge and cloud.
  • CUDA-X integration—It builds on top of NVIDIA libraries from the CUDA-X stack, including CUDA, TensorRT, and NVIDIA Triton Inference Server, abstracting these libraries in DeepStream plugins.
  • Containerization—Its applications can be containerized using NVIDIA Container Runtime, with containers available on NGC, NVIDIA’s GPU cloud registry.
  • Programming flexibility—Developers can create applications in C, C++, or Python, and for those preferring a low-code approach, DeepStream offers Graph Composer. 
  • Real-time analytics—It is optimized for real-time analytics on video, image, and sensor data, providing insights at the edge.
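Conceptually, a DeepStream application is a GStreamer pipeline whose stages hand frames from decode to inference to rendering. The sketch below composes a gst-launch-style pipeline description as a plain string; the element names (nvstreammux, nvinfer, nvdsosd) follow DeepStream's documented plugins, but exact properties vary by SDK release, so treat this as an illustration of the pipeline shape rather than a drop-in command.

```python
# Illustrative only: DeepStream apps are GStreamer pipelines. We compose a
# gst-launch-style description as a string; element names follow DeepStream's
# documented plugins, but properties vary by SDK release.

def build_pipeline(source_uri: str, config_path: str,
                   width: int = 1280, height: int = 720) -> str:
    """Chain source -> batcher -> inference -> overlay -> sink."""
    stages = [
        f"uridecodebin uri={source_uri}",            # file, RTSP, or camera source
        f"nvstreammux name=mux batch-size=1 width={width} height={height}",
        f"nvinfer config-file-path={config_path}",   # primary detector
        "nvvideoconvert",
        "nvdsosd",                                   # draw bounding boxes and labels
        "fakesink",                                  # swap for a real sink on device
    ]
    return " ! ".join(stages)

print(build_pipeline("rtsp://camera.local/stream", "config_infer_primary.txt"))
```

On a device with the SDK installed, a string like this could be handed to GStreamer's parser; in C, C++, or Python applications the same stages are built element by element instead.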

The key benefit of the platform is that it is optimized for NVIDIA’s hardware, providing efficient video decoding and AI inferencing. It can also handle multiple video streams in real time, making it suitable for large-scale applications.

NativeEdge Integration with NVIDIA DeepStream

Deploying an AI application at the edge involves configuring and managing potentially many versions of hardware drivers, applying specific NVIDIA configuration to the containerization platform, and deploying the DeepStream stack with specific AI inferencing models. NativeEdge uses a blueprint model to automate the operational aspect of this integration. This blueprint is delivered as part of the NativeEdge solution catalog. It streamlines the entire deployment process in a way that is consistent with other solutions in the NativeEdge portfolio.

A Deeper Look Into the NativeEdge Blueprint for NVIDIA DeepStream

DeepStream is packaged as a cloud-native container, and as such it relies on a container platform being available at the edge as a prerequisite. NativeEdge enables two methods to deliver workloads at the edge: a packaged virtual machine (VM), which provides full isolation, or a bare-metal container, which maximizes performance. Once the VM or container is provisioned on the edge, NativeEdge pulls the relevant stack, configures the GPU passthrough, and starts running the target model to enable the inferencing process.

Deployment Configuration

The deployment configuration steps allow the user to select the GPU target, deployment mode, and the actual artifacts without having to customize the automation blueprint for each case.

GPU Target

Select the GPU targets for the solution. The target can range from A2 to L4, depending on the required footprint and performance. You can find a comparison table that provides guidance on the capabilities of each GPU here.

GPU configuration

Deployment Mode

The deployment mode input specifies how the blueprint should configure the DeepStream container. There are three deployment modes currently supported: demo, custom model, and developer.

Blueprint deployment modes


Demo

This mode deploys the DeepStream container and immediately starts a Triton inferencing pipeline based on an embedded demonstration video file.

Artifacts used in this mode:

  • The base DeepStream container
  • An archive file containing pre-built Triton inferencing models

Custom Model

This mode deploys the DeepStream container along with a customer’s bespoke pipeline configuration. In addition, the customer has the option to automatically run the pipeline as soon as the DeepStream container is deployed, without any further user intervention.

Artifacts used in this mode:

  • The base DeepStream container
  • An archive file containing the customer’s pipeline configuration plus any other files or data required
  • An archive file containing pre-built Triton inferencing models (optional)


Developer

This mode deploys the DeepStream container and forces it to run in the background, so that a developer can log on to the host and access the container for work such as development or testing.

Artifacts used in this mode:

  • The base DeepStream container
  • An archive file containing pre-built Triton inferencing models (optional)

Irrespective of which deployment type is chosen, the user also needs to supply a secret artifact, which contains information such as the endpoint of the customer’s artifact repository and the credentials required to download from it.

Deployment bundle Configuration
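To make the configuration inputs concrete, here is a rough sketch of what validating them could look like. The field names and rules below are illustrative assumptions, not NativeEdge's actual input schema:

```python
# Illustrative sketch of validating the deployment inputs described above.
# Field names and rules are assumptions, not NativeEdge's real input schema.
from dataclasses import dataclass
from typing import Optional

VALID_MODES = {"demo", "custom model", "developer"}

@dataclass
class DeploymentInputs:
    deployment_mode: str
    gpu_target: str                       # e.g. "A2" or "L4"
    artifact_endpoint: str                # from the secret artifact
    artifact_credentials: str             # from the secret artifact
    models_archive: Optional[str] = None  # pre-built Triton models (optional)

    def validate(self) -> None:
        if self.deployment_mode not in VALID_MODES:
            raise ValueError(f"unknown deployment mode: {self.deployment_mode!r}")
        if not (self.artifact_endpoint and self.artifact_credentials):
            raise ValueError("secret artifact must supply endpoint and credentials")
        if self.deployment_mode == "demo" and not self.models_archive:
            raise ValueError("demo mode requires the pre-built models archive")

DeploymentInputs("demo", "L4", "https://repo.example", "token", "models.tgz").validate()
```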

Deploy the DeepStream Solution

To deploy the solution, select the target list of devices and the specific DeepStream solution from the NativeEdge catalog and execute the install workflow.

The install workflow parses the blueprint and auto-generates the execution graph that automates the entire deployment process based on the provided configuration and environment setup.

Deploying DeepStream solution in NativeEdge Orchestrator

This process automates the entire event flow, from setting up the endpoint configuration to deploying the inferencing model. The result is full end-to-end automation of the stack, which allows the user to start using the system immediately after the process completes, without any additional human intervention.

Automation event flow
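The idea of parsing a blueprint into an execution graph can be illustrated with a topological sort over task dependencies. The task names below are hypothetical, and the real NativeEdge workflow engine is far richer, but the ordering principle is the same:

```python
# Illustrative sketch: a blueprint as tasks plus dependencies, turned into an
# execution order via topological sort. Task names are hypothetical; the real
# NativeEdge workflow engine is not shown here.
from graphlib import TopologicalSorter

blueprint = {
    "provision_runtime": [],                       # VM or bare-metal container
    "configure_gpu_passthrough": ["provision_runtime"],
    "pull_deepstream_container": ["provision_runtime"],
    "fetch_model_artifacts": ["pull_deepstream_container"],
    "start_inference_pipeline": ["configure_gpu_passthrough",
                                 "fetch_model_artifacts"],
}

# Each task appears only after all of its prerequisites.
order = list(TopologicalSorter(blueprint).static_order())
print(order)
```

Independent tasks (here, GPU configuration and container pull) can also run in parallel, which is one reason graph-based orchestration scales better than a linear script.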

Use the DeepStream Solution

Once the installation from the previous step is completed successfully, we can review any relevant outputs (or capabilities) from the blueprint.

For example, if the solution is deployed with the “demo” deployment type, an RTSP feed is automatically created, which can be used by remote clients to view the output of the DeepStream demo application.

This can be seen in the following figure:

Deployment of the solution and inferencing at the edge

If the “custom model” deployment type is chosen, any output produced from the DeepStream application is configured by the customer themselves. In other words, the custom pipeline could potentially create an RTSP stream, in which case a client could use a similar approach to view the stream. Alternatively, the pipeline could define a video file output instead, configured to output to the persistent storage folder that is configured by NativeEdge.


According to IDC, inferencing at the edge is projected to grow at double the rate of training by 2026. This projection is in line with the anticipated expansion of edge computing use cases, as illustrated in the following figure:

Growth of inferencing opportunities in edge market

Dell NativeEdge is the first edge orchestration engine that automates the delivery of NVIDIA AI Enterprise software. In general, and specifically with DeepStream, NativeEdge simplifies the deployment and management of inferencing applications at the edge.

Through this integration, customers have the capability to implement their custom applications, which leverage popular frameworks, on NVIDIA AI accelerators that are compatible with Dell NativeEdge. This is complemented by the ability to incorporate their development infrastructure using the NativeEdge API or CI/CD processes. Additionally, NativeEdge provides support for orchestrating cloud services through Infrastructure as Code (IaC) frameworks like Terraform, Ansible, CloudFormation, or Azure ARM, allowing customers to manage their edge and associated cloud services using the same automation framework.

Integration with ServiceNow enables IT personnel to oversee NativeEdge Endpoints in a manner that is similar to other data center resources, utilizing the ServiceNow CMDB. This integration simplifies edge operations and supports more rapid and flexible release cycles for inferencing services to the edge, thereby helping customers keep pace with the speed of AI developments.


  • IoT
  • NativeEdge

Litmus and Dell NativeEdge - A Powerful Duo for Improving Industrial IoT Operations

Nati Shalom

Wed, 08 May 2024 15:18:51 -0000



Edge AI plays a significant role in the digital transformation of the Industrial Internet of Things (IIoT). It improves efficiency, productivity, and decision-making processes in the following areas:

  • Predictive maintenance—AI algorithms can analyze data from sensors and other connected devices to predict equipment failures before they happen.
  • Anomaly detection—AI can identify abnormal patterns or anomalies in data collected from various sensors.
  • Operations optimization—AI algorithms can optimize industrial processes by analyzing data and adjusting parameters in real time.
  • Supply chain optimization—AI can optimize supply chain processes by analyzing data from inventory levels, demand forecasting, and logistics.
  • Quality control—AI-powered vision systems and machine learning algorithms can be implemented in manufacturing quality control. These systems can identify defects or deviations from quality standards, ensuring that only high-quality products reach the market.
  • Energy management—AI can analyze energy consumption patterns and optimize energy usage in industrial settings.
  • Continuous improvement—AI facilitates continuous improvement by learning from data over time.
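As a concrete illustration of the anomaly-detection item above, a minimal sliding-window deviation check can flag sensor readings that depart sharply from recent history. This is a toy sketch with made-up thresholds, not a production IIoT algorithm:

```python
# Toy sketch of edge anomaly detection: flag readings more than k standard
# deviations from a trailing window. Window and threshold are illustrative.
import statistics

def detect_anomalies(readings, window=10, k=3.0):
    """Return indices of readings far outside the trailing window's spread."""
    anomalies = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mean = statistics.fmean(hist)
        spread = statistics.pstdev(hist) or 1e-9  # guard against zero spread
        if abs(readings[i] - mean) > k * spread:
            anomalies.append(i)
    return anomalies

temps = [70.1, 70.3, 69.9, 70.2, 70.0, 70.4, 69.8, 70.1, 70.2, 70.0, 95.5, 70.1]
print(detect_anomalies(temps))  # [10] -- the temperature spike is flagged
```

Running logic like this on the edge device itself, rather than in the cloud, is what allows the real-time predictive-maintenance and quality-control responses described above.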

Figure 1. Industrial IoT 4.0

This blog demonstrates the benefits of integrating edge solutions on top of NativeEdge, using Litmus as an example.

Industrial IoT Edge AI with NativeEdge and Litmus

Dell NativeEdge serves as a platform for deploying and managing edge devices and applications at the edge. One notable addition to NativeEdge’s latest version is the ability to deliver an end-to-end solution on top of the platform that includes PTC, Litmus, Telit Cinterion, Centerity, and others. This capability allows users to get consistent and simple management from bare-metal provisioning to a full-blown solution that is fully automated.

Figure 2. Introducing Edge Solutions on top of NativeEdge

Introduction to Litmus

Litmus is an industrial IoT platform that helps businesses collect, analyze, and manage data from IIoT devices. Dell NativeEdge is an edge operations software platform that helps businesses securely deploy and manage edge infrastructure and applications from edge to cloud.

Litmus includes two main parts:

  • Litmus Edge Manager
  • Litmus Edge

Litmus Edge Manager

Litmus Edge Manager serves as a central management console for configuring, monitoring, and managing Litmus Edge deployments.

Figure 3. Litmus Edge Manager

Litmus Edge

Litmus Edge is an industrial edge computing platform designed for edge inferencing locally in real time. It facilitates edge and IoT device management, supports various industrial protocols, enables analytics and machine learning at the edge, and emphasizes security measures.

Figure 4. Litmus Edge platform

Litmus Edge provides a flexible solution for organizations to optimize data processing, enhance device connectivity, and derive insights directly at the edge of their industrial IoT deployments through a simple no-code user experience.

Figure 5. No-Code Editor for Edge Inferencing

Deploying the Litmus Solution on NativeEdge

First, deploy Litmus Edge. Multiple Litmus Edge instances can be deployed on multiple NativeEdge Endpoints. Each Litmus Edge connects to equipment such as robotic arms and CNC machines. The following image shows the blueprint that provisions the Litmus Edge VM from a Litmus Edge image.

The following figure shows the Litmus Edge topology on NativeEdge. We can see the NativeEdge Litmus VM provisioned as well as the binary Litmus image and their dependencies.

We can also see that there is an SDP node, where data is streamed to and persisted. 

Figure 6. Litmus Edge blueprint topology

The second blueprint provisions the Litmus Edge Manager VM that can connect to multiple Litmus Edges on multiple NativeEdge Endpoints.

The following figure shows the Litmus Edge Manager topology on NativeEdge. The Litmus Edge Manager can also be provisioned on vSphere. We can see the NativeEdge Litmus Manager VM provisioned as well as the binary Litmus manager image and their dependencies.

Figure 7. Litmus Edge Manager blueprint topology

Let us look at how a NativeEdge user interacts with Litmus Edge. From the NativeEdge App Catalog, choose to deploy Litmus Edge Manager or Litmus Edge (or both), and go to the deployment inputs customization.

Figure 8. NativeEdge App Catalog

In the deployment inputs, you can customize the IP address and hostname used to access the Litmus Edge Manager, as well as the number of vCPUs to allocate to the Litmus Edge Manager VM.

Figure 9. Litmus Edge deployment inputs
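Inputs like these can be sanity-checked before submission. The sketch below validates an IP address, hostname, and vCPU count; the field names and limits are assumptions for illustration, not the actual NativeEdge or Litmus input schema:

```python
# Illustrative pre-submission checks for deployment inputs like those shown
# above. Field names and limits are assumptions, not the real input schema.
import ipaddress
import re

HOSTNAME_LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$", re.IGNORECASE)

def check_inputs(ip: str, hostname: str, vcpus: int, max_vcpus: int = 16):
    """Return a list of validation errors (empty means the inputs look sane)."""
    errors = []
    try:
        ipaddress.ip_address(ip)
    except ValueError:
        errors.append(f"invalid IP address: {ip!r}")
    if not all(HOSTNAME_LABEL.match(label) for label in hostname.split(".")):
        errors.append(f"invalid hostname: {hostname!r}")
    if not 1 <= vcpus <= max_vcpus:
        errors.append(f"vCPU count {vcpus} outside 1..{max_vcpus}")
    return errors

print(check_inputs("192.168.10.20", "litmus-edge-01", 4))  # []
print(check_inputs("999.1.1.1", "bad_host!", 0))           # three errors
```

Catching malformed inputs at this stage, before the orchestrator provisions anything, is one of the ways blueprint-driven deployment reduces failed rollouts.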

After executing the deployment, we can see in the following figure that we provisioned multiple Litmus Edges. We can provision a fleet of Litmus Edges that are connected to and managed by one Litmus Edge Manager.

Figure 10. Litmus Edge deployment


Dell NativeEdge provides fully automated, secure device onboarding from bare metal to cloud. As a DevEdgeOps platform, NativeEdge also gives the ability to validate and continuously manage the provisioning and configuration of those device endpoints in a secured manner. This reduces the risk of failure or security breaches due to misconfiguration or human error by detecting those potential vulnerabilities earlier in the pre-deployment development process.

The introduction of NativeEdge Orchestrator enables customers to have consistent and simple management of integrated solutions across their entire fleet of new and existing devices, supporting external services, VxRail, and soon other cloud infrastructures. The separation between the device management and solution is the key to enabling consistent operational management between different solution vendors and cloud infrastructures.

The specific integration between NativeEdge and Litmus provides a full-blown IIoT management platform from bare metal to cloud. It also simplifies the ability to process data at the edge by introducing edge AI inferencing through a simple no-code interface.

The solution framework allows vendors to use Dell NativeEdge as a generic edge infrastructure framework, addressing fundamental aspects of device fleet management. Vendors can then focus on delivering the unique value of their solution, be it predictive maintenance or real-time monitoring, as demonstrated by the Litmus use case.



  • AI
  • NativeEdge

Will AI Replace Software Developers?

Nati Shalom

Thu, 02 May 2024 09:38:01 -0000



Over the past year, I have been actively involved in generative artificial intelligence (Gen AI) projects aimed at assisting developers in generating high-quality code. Our team has also adopted Copilot as part of our development environment. These tools offer a wide range of capabilities that can significantly reduce development time. From automatically generating commit comments and code descriptions to suggesting the next logical code block, they have become indispensable in our workflow.

A recent study by McKinsey quantifies the level of productivity gain in the following areas:

Figure 1. Software engineering: speeding developer work as a coding assistant (McKinsey)

This study shows that “The direct impact of AI on the productivity of software engineering could range from 20 to 45 percent of current annual spending on the function. This value would arise primarily from reducing time spent on certain activities, such as generating initial code drafts, code correction and refactoring, root-cause analysis, and generating new system designs. By accelerating the coding process, generative AI could push the skill sets and capabilities needed in software engineering toward code and architecture design.” One study found that software developers using Microsoft’s GitHub Copilot completed tasks 56 percent faster than those not using the tool. An internal McKinsey empirical study of software engineering teams found that those trained to use generative AI tools rapidly reduced the time needed to generate and refactor code, and engineers also reported a better work experience, citing improvements in happiness, flow, and fulfilment.

What Makes the Code Assistant (Copilot) the Killer App for Gen AI?

The remarkable progress of AI-based code generation owes its success to the unique characteristics of programming languages. Unlike natural language text, code adheres to a structured syntax with well-defined rules. This structure enables AI models to excel in analyzing and generating code.

Several factors contribute to the swift evolution of AI-driven code generation:

  • Structured nature of code–Code follows a strict format, making it amenable to automated analysis. The consistent structure allows AI algorithms to learn patterns and generate syntactically correct code.
  • Validation tools–Compilers and other development tools play a crucial role. They validate code for correctness, ensuring that generated code adheres to language specifications. This continuous feedback loop enables AI systems to improve without human intervention.
  • Repeatable work identification–AI excels at identifying repetitive tasks. In software development, there are numerous areas where routine work occurs, such as boilerplate code, data transformations, and error handling. AI can efficiently recognize and automate these repetitive patterns.
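The validation-loop point can be made concrete: a generated snippet can be checked by the language's own parser, turning failures into feedback for the generator without human review. A toy sketch (real coding assistants also compile, run tests, and lint):

```python
# Toy sketch of the automated validation loop: candidate code is checked by
# the language's own parser, and failures become feedback for the generator.
# Real assistants also compile, run tests, and lint.
import ast
from typing import Optional

def syntax_feedback(candidate: str) -> Optional[str]:
    """Return None if the snippet parses, else a short error message."""
    try:
        ast.parse(candidate)
        return None
    except SyntaxError as err:
        return f"line {err.lineno}: {err.msg}"

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b)\n    return a + b\n"

print(syntax_feedback(good))  # None
print(syntax_feedback(bad))   # a short message about the syntax error
```

Because checks like this run mechanically and instantly, a model can generate, test, and retry many candidates in a tight loop, which is exactly why code has proved such fertile ground for generative AI.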

From Coding Assistant to Fully-Autonomous AI Software Engineer

Devin, the world’s first AI software engineer, was created by Cognition, an applied AI lab focused on reasoning and end-to-end software agents.

Devin possesses remarkable capabilities in software development in the following areas:

  • Complex engineering tasks–With advances in long-term reasoning and planning, Devin can plan and execute complex engineering tasks that involve thousands of decisions. Devin recalls relevant context at every step, learns over time, and even corrects mistakes.
  • Coding and debugging–Devin can write code, debug, and address bugs in codebases. It autonomously finds and fixes issues, making it a valuable teammate for developers.
  • End-to-end app development–Devin builds and deploys apps from scratch. For example, it can create an interactive website, incrementally adding features requested by the user and deploying the app.
  • AI model training and fine-tuning–Devin sets up fine-tuning for large language models, demonstrating its ability to train and improve its own AI models.
  • Collaboration and communication–Devin actively collaborates with users. It reports progress in real-time, accepts feedback, and engages in design choices as needed.
  • Real-world challenges–Devin tackles real-world GitHub issues found in open-source projects. It can also contribute to mature production repositories and address feature requests. Devin even takes on real jobs on platforms like Upwork, writing and debugging code for computer vision models.

The Devin project is a clear indication of how fast we move from simple coding assistants to more complete engineering capabilities.

Will AI Replace Software Developers?

When I asked this question recently during a Copilot training session that our team took, the answer was “No”, or more precisely, “Not yet”. The common thinking is that AI provides a productivity enhancement tool that saves developers from spending time on tedious tasks such as documentation, testing, and so on. This could have been true yesterday, but as project Devin shows, AI already goes beyond simple assistance to full development engineering. We can rely on the experience from past transformations to learn a bit more about where this is all heading.

Learning from Cloud Transformation: Parallels with Gen AI Transformation

The advent of cloud computing, pioneered by AWS approximately 15 years ago, revolutionized the entire IT landscape. It introduced the concept of fully automated, API-driven data centers, significantly reducing the need for traditional system administrators and IT operations personnel. However, beyond the mere shrinking of the IT job market, the following parallel events unfolded:

  • Traditional IT jobs shrank significantly–Small to medium-sized companies can now operate their IT infrastructure without dedicated IT operators. The cloud’s self-service capabilities have made routine maintenance and management more accessible.
  • Emergence of new job titles: DevOps, SRE, and more–As organizations embraced cloud technologies, new roles emerged. DevOps engineers, site reliability engineers (SREs), and other specialized positions became essential for optimizing cloud-based systems.
  • The rise of SaaS startups–Cloud computing lowered the barriers to entry for delivering enterprise-grade solutions. Startups capitalized on this by becoming more agile and growing faster than established incumbents.
  • Big tech companies’ accelerated growth–Tech giants like Google, Facebook, and Microsoft swiftly adopted cloud infrastructure. The self-service nature of APIs and SaaS offerings allowed them to scale rapidly, resulting in record growth rates.

Impact on Jobs and Budgets

While traditional IT jobs declined, the transformation also yielded positive outcomes:

  • Increased efficiency and quality–Companies produced more products of higher quality at a fraction of the cost. The cloud’s scalability and automation played a pivotal role in achieving this.
  • Budget shift from traditional IT to cloud–Gartner’s IT spending reports reveal a clear shift in budget allocation. Cloud investments have grown steadily, even amidst the disruption caused by the introduction of cloud infrastructure, as shown in the following figure:

Figure 2. Cloud transformation’s impact on IT budget allocation

Looking Ahead: AI Transformation

As we transition to the era of AI, we can anticipate similar trends:

  • Decline in traditional jobs–Just as cloud computing transformed the job landscape, AI adoption may lead to the decline of certain traditional roles.
  • Creation of new jobs–Simultaneously, AI will create novel opportunities. Roles related to AI development, machine learning, and data science will flourish.

Short Term Opportunity

Organizations will allocate more resources to AI initiatives. The transition to AI is not merely an evolutionary step; it is a strategic imperative.

According to research conducted by ISG on behalf of Glean, generative AI projects consumed an average of 1.5 percent of IT budgets in 2023. These budgets are expected to rise to 2.7 percent in 2024 and further increase to 4.3 percent in 2025. Organizations recognize the potential of AI to enhance operational efficiency and bridge IT talent gaps. Gartner predicts that generative AI impacts will be more pronounced in 2025. Despite this, worldwide IT spending is projected to grow by 8 percent in 2024, as organizations continue to invest in AI and automation to drive efficiency. The White House budget proposes allocating $75 billion for IT spending at civilian agencies in 2025. This substantial investment aims to deliver simple, seamless, and secure government services through technology.

The impact of AI extends far beyond the confines of the IT job market. It permeates nearly every facet of our professional landscape. As with any significant transformation, AI presents both risks and opportunities. Those who swiftly embrace it are more likely to seize the advantages.

So, what steps can software developers take to capitalize on this opportunity?

Tips for Software Developers in the Age of AI

In the immediate term, developers can enhance their effectiveness when working with AI assistants by acquiring a combination of the following technical skills:

  • Learn AI basics–I recommend starting with AI Terms 101 and following the leading AI podcasts. I have found this useful for keeping up to date in this space and picking up tips from industry experts.
  • Use coding assistant tools (Copilot)–Coding assistant tools are definitely the low-hanging fruit and probably the simplest step to get into the AI development world. There is a growing list of tools that are available and can be integrated seamlessly into your existing development IDE. The following provides a useful reference to The Top 11 AI Coding Assistants to Use in 2024.
  • Learn machine learning (ML) and deep learning concepts–Understanding the fundamentals of ML and deep learning is crucial. Familiarize yourself with neural networks, training models, and optimization techniques.
  • Data science and analytics–Developers should grasp data preprocessing, feature engineering, and model evaluation. Proficiency in tools like Pandas, NumPy, and scikit-learn is beneficial.
  • Frameworks and tools–Learn about popular AI frameworks such as TensorFlow and PyTorch. These tools facilitate model building and deployment.
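For readers starting out, the preprocessing and evaluation fundamentals above can be illustrated without any framework at all. This toy sketch shows min-max feature scaling and a seeded train/test split in plain Python; in practice you would reach for scikit-learn, Pandas, or NumPy:

```python
# Framework-free sketch of two fundamentals mentioned above: min-max feature
# scaling (preprocessing) and a seeded train/test split (model evaluation).
import random

def min_max_scale(values):
    """Rescale values linearly into the [0, 1] range."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant input
    return [(v - lo) / span for v in values]

def train_test_split(rows, test_ratio=0.25, seed=42):
    """Shuffle deterministically, then split into train/test partitions."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_ratio))
    return rows[:cut], rows[cut:]

data = [3.0, 7.5, 1.5, 9.0, 6.0]
print(min_max_scale(data))                    # values rescaled into [0, 1]
train, test = train_test_split(list(range(8)))
print(len(train), len(test))                  # 6 2
```

Understanding what the libraries do under the hood, as here, makes it much easier to debug them when they are applied to real data.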

More skilled developers will need to learn how to create their own “AI engineers”, which they will train and fine-tune to assist them with user interface (UI), backend, and testing development tasks. They could even run a team of “AI engineers” to write an entire project.

Will AI Reduce the Demand for Software Engineers?

Not necessarily. As in the case of cloud transformation, developers with AI expertise will likely be in high demand. Those who cannot adapt to this new world are likely to fall behind and face the risk of losing their jobs.

It would be fair to assume that the scope of work, post-AI transformation, will grow rather than stay stagnant. As an example, we will likely see products adding more “self-driving” capabilities, where they can run more complete tasks without the need for human feedback, or enable close-to-human interaction with the product.

Under this assumption, the scope of new AI projects and products is going to grow, and that growth should balance the declining demand for traditional software engineering jobs. 


As a history enthusiast, I often find parallels in the past that can serve as a guide to our future. The industrial era witnessed disruptive technological advancements that reshaped job markets. Some professions became obsolete, while new ones emerged. As a society, we adapted quickly, discovering new growth avenues. However, the emergence of AI presents unique challenges. Unlike previous disruptions, AI simultaneously impacts a wide range of job markets and progresses at an unparalleled pace. The implications are indeed profound.

Recent research by Nexford University on How Will Artificial Intelligence Affect Jobs 2024-2030 reveals some startling predictions. According to a report by the investment bank Goldman Sachs, AI could potentially replace the equivalent of 300 million full-time jobs. It could automate a quarter of the work tasks in the US and Europe, leading to new job creation and a productivity surge. The report also suggests that AI could increase the total annual value of goods and services produced globally by 7 percent. It predicts that two-thirds of jobs in the US and Europe are susceptible to some degree of AI automation, and around a quarter of all jobs could be entirely performed by AI.

The concerns raised by Yuval Noah Harari, a historian and professor at the Department of History of the Hebrew University of Jerusalem, resonate with many. The rapid evolution of AI may indeed lead to significant unemployment. 

However, when it comes to software engineers, we can assert with confidence that regardless of how automated our processes become, there will always be a fundamental need for human expertise. These skilled professionals perform critical tasks such as maintenance, updates, improvements, error corrections, and the setup of complex software and hardware systems. These systems often require coordination among multiple specialists for optimal functionality.

In addition to these responsibilities, computer system analysts play a pivotal role. They review system capabilities, manage workflows, schedule improvements, and drive automation. This profession has seen a surge in demand in recent years and is likely to remain in high demand.

In conclusion, AI represents both risk and opportunity. While it automates routine tasks, it also paves the way for innovation. Our response will ultimately determine its impact.


Read Full Blog
  • AI
  • NativeEdge
  • DevEdgeOps

Unlocking the Power of AI-Assisted DevEdgeOps Automation

Joshua Margo

Wed, 27 Mar 2024 18:13:00 -0000



In today's digital landscape, the expansion of edge computing has transformed how data is processed and managed. However, with this evolution comes the challenge of managing and maintaining numerous edge deployments efficiently. DevEdgeOps is a shift-left approach that moves operational tasks to an earlier stage, facilitating collaboration between IT and OT and streamlining edge operations. By integrating AI-assisted techniques into DevEdgeOps practices, organizations can unlock a range of benefits, from increased productivity to improved operational efficiency.

“The automation process using infrastructure as code (IaC) is complex, especially when it comes to highly distributed edge environments. It must be balanced with the requirements of edge operating environments, which are different than IT. Gen AI and Copilot-based edge automation development tools can reduce the development process of that automation code and help to meet the requirements of edge operations workloads,” says Nati Shalom, Fellow at Dell NativeEdge, introducing the topic in his blog Edge-AI trends in 2024.

A recent McKinsey study indicates a potential improvement of up to 56 percent in productivity. DevEdgeOps advocates reducing that complexity through a shift-left approach, where production issues can be identified earlier, during the development phase.


Figure 1. The benefits of using Gen AI as a coding assistant (Copilot) (Source: McKinsey)

Simplifying Complex Tasks with Automation

One of the primary advantages of leveraging AI in DevEdgeOps is the ability to automate and optimize complex tasks associated with edge operations. Traditional methods of managing edge environments often involve manual interventions, which are time consuming and error prone. AI-powered automation tools can streamline these processes by intelligently analyzing data patterns, predicting potential issues, and automating corrective actions. This reduces the burden on IT teams, minimizes the risk of downtime, and improves system reliability.
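The analyze-predict-remediate loop described above can be sketched in a few lines. This is a hypothetical illustration only: the CPU metric, threshold, and action names are assumptions for the example, not NativeEdge or Centerity APIs.

```python
# Hypothetical detect -> predict -> remediate loop for an edge node.
# Metric values, threshold, and action names are illustrative assumptions.

def moving_average(values, window=3):
    """Smooth a metric stream so decisions track trends, not single spikes."""
    return [sum(values[max(0, i - window + 1): i + 1]) /
            len(values[max(0, i - window + 1): i + 1])
            for i in range(len(values))]

def plan_actions(cpu_history, threshold=85.0):
    """Return corrective actions when the smoothed CPU load trends high."""
    smoothed = moving_average(cpu_history)
    if smoothed[-1] > threshold:
        return ["drain-workload", "restart-service"]
    return []

print(plan_actions([40, 45, 50]))   # -> []
print(plan_actions([80, 90, 95]))   # -> ['drain-workload', 'restart-service']
```

Acting on a smoothed trend rather than a raw reading is what keeps automated remediation from flapping on transient spikes.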

A case in point was the recent winner of the Dell Hackathon, Rachel Shalom, from the NativeEdge team. In her article DevOps Made Easy with Gen AI, she explores the integration of Gen AI into DevOps practices, simplifying and optimizing the development process. By leveraging Gen AI's capabilities, developers can automate tasks such as code generation, testing, and deployment, reducing time-to-market and enhancing efficiency. Through Rachel’s real-world example, she illustrates how Gen AI streamlines DevOps workflows, empowers teams to focus on innovation, and fosters collaboration between development and operations teams.

Regarding data and modeling insights, Rachel writes “You might be wondering, why not use Copilot or a commercial GPT for queries right off the bat? We gave that a shot, but it fell short of our specific need to generate configuration files. This was mainly because our proprietary internal data was not familiar to the model, leading us to the necessity of fine-tuning with a private GPT-3.5.”

A Proactive Approach to Edge Management

AI-assisted DevEdgeOps enables organizations to adopt a proactive approach to edge management. By leveraging predictive analytics and machine learning algorithms, businesses can anticipate and prevent potential issues before they escalate into critical failures. This proactive approach enhances system resilience and enables organizations to allocate resources more effectively, which optimizes operational costs.

Rapid Development and Deployment

AI-driven DevEdgeOps facilitates rapid development and deployment of edge applications. Traditional development processes often struggle to keep pace with the dynamic nature of edge computing, resulting in delays and inefficiencies. By harnessing Gen AI capabilities such as Copilot-based development tools, organizations can accelerate the development life cycle, reduce time-to-market, and stay ahead of the competition. This enhances agility and allows businesses to capitalize on emerging opportunities more effectively.

Reducing Costs with Automation

In addition to operational benefits, AI-assisted DevEdgeOps can also drive significant cost savings for organizations. The McKinsey study referenced earlier highlights the potential for a 56 percent improvement in productivity through the adoption of AI-driven automation. By automating repetitive tasks, optimizing resource utilization, and minimizing downtime, businesses can achieve substantial cost reductions while maximizing the return on investment (ROI) of their edge investments.

Furthermore, AI-assisted DevEdgeOps fosters innovation by empowering organizations to focus on value-added activities rather than repetitive mundane operational tasks. By automating routine maintenance, troubleshooting, and provisioning activities, IT teams can devote more time and resources to innovation-driven initiatives that drive business growth and competitive advantage. This enhances organizational agility and fosters a culture of continuous improvement and innovation.


The benefits of automating edge operations with AI-assisted DevEdgeOps are undeniable. By leveraging AI capabilities to streamline processes, enhance proactive management, accelerate development, and drive cost savings, organizations can unlock the full potential of their edge deployments. As the digital landscape continues to evolve, embracing AI-driven automation in DevEdgeOps will be essential for organizations looking to stay competitive, agile, and resilient in the face of the ever-changing demands.


Read Full Blog
  • NativeEdge

How can Agile Transformation Lead to a One-Team Culture?

Nati Shalom

Thu, 22 Feb 2024 09:47:46 -0000



Many blogs cover the Agile process itself; however, this blog is not one of them. Instead, I want to share the lessons learned from working in a highly distributed development team across eleven countries. Our teams ranged from small startups post-acquisition to multiple teams from Dell, and we had an ambitious goal to deliver a complex product in one year! This journey started when Dell’s Project Frontier leaped to the next stage of development and became NativeEdge.  

This blog focuses on how Agile transformation enabled us to build a one-team culture. The journey is ongoing as we get closer to declaring success. The Agile transformation process is a constant iterative process of learning and optimizing along the way, of failing and recovering fast, and above all, of committed leadership and teamwork.

That said, I feel we have reached an important milestone at the one-year mark of this journey, which makes it worth sharing.

Why Agile?

Agile methodologies were originally developed in the manufacturing industry with the introduction of Lean methodology by Toyota. Lean is a customer-centric methodology that focuses on delivering value to the customer by optimizing the flow of work and minimizing waste. The evolution of these principles into the software industry is known as Agile development, which focuses on rapid delivery of high-quality software. Scrum is a part of the Agile process framework and is used to rapidly adjust to changes and produce products that meet organizational needs.

Lean Manufacturing Versus Agile Software Delivery

The fact that a software product doesn’t look like a physical device doesn’t make the production and delivery process as different as many tend to think. The increasing prevalence of embedded software in physical products further blurs the line between these two worlds.

Software product delivery follows similar principles to the Lean manufacturing process of any physical product, as shown in the following table:

Lean manufacturing | Agile software development
Supply chain | Features backlog
Manufacturing pipeline | CI/CD pipeline
Stations | Pods, cells, squads, domains
Assembly line | Build process
Goods | Product release

Agile addresses the need of organizations to react quickly to market demands and transform into a digital organization. It encompasses two main principles:

  1. Project management–Large projects are better broken into smaller increments with minimal dependencies, enabling parallel development, rather than run as one large project that is serialized through dependencies. The latter is a waterfall process, where one missed milestone or dependency can cause a reset of the entire program.
  2. Team structure–The organizational structure should be broken into self-organizing teams that align with the product architecture. These teams are often referred to as squads, pods, or cells. Each team needs the capability to deliver its specific component of the architecture, as opposed to a tier-based approach where teams are organized by skill, such as a product management team, UI team, or backend team.

What Could Lead to an Unsuccessful Agile Transformation?

Many detailed analyses show why Agile transformation fails. However, I would like to suggest a simpler explanation. Despite the similarities between manufacturing and software delivery, as outlined in the previous section, many software companies don’t operate with a manufacturing mindset.

Software companies that operate with a manufacturing mindset are companies where their leadership measures their development efficiency just as they measure other business KPIs, such as sales growth. They understand that their development efficiency directly impacts their business productivity. This is obvious in manufacturing, but for some reason, it has become less obvious in software. When you measure your development efficiency at the top leadership level and even board level, all the rest of the agile transformation issues that are reported in the failure analysis, such as resistance to change, become just symptoms of that root cause. It is, therefore, no surprise that companies like Spotify have been successful in this regard. Spotify has even published a lot of its learning and use cases, as well as open-source projects such as Backstage, which helped them differentiate themselves from other media streaming companies, just as Toyota did when they introduced Lean.

Lessons from a Recent Agile Transformation Journey

Changing a culture is the biggest challenge in any Agile transformation project. As many researchers have noted, Agile transformation requires a big cultural transformation including team structure. Therefore, it is no surprise that this came up as the biggest challenge in the Doing vs being: Practical lessons on building an agile culture article by McKinsey & Company.

Figure 1. Exhibit 1 from McKinsey & Company article: Doing vs being: Practical lessons on building an Agile culture

Our challenge was probably at the top of the scale in that regard, as our team was built out of a combination of people from all around the world. Our challenge was to create a one-team agile culture that would enable us to deliver a new and complex product in one year.

Getting to this one-team culture is tough, because it works in many ways against human nature, which is often competitive.

One thing that helped us go through this process was the fact that we all felt frustration and pain when things didn’t work. We also had a lot to lose if we failed. At this point, we realized that our only way out of this would be to adopt Agile processes and team structures. The pain that we all felt was a great source of motivation that drove everyone to get out of their comfort zone and be much more open to adopting the changes that were needed to follow a truly Agile culture.

This wasn’t a linear process by any means and involved many iterations and frustrating moments until it became what it is today. For the sake of this blog, I will spare you from that part and focus on the key lessons that we took to implement our specific Agile transformation journey.

Key Lessons for a Successful Agile Transformation

Don’t Re-invent the Wheel

There are many lessons and processes that were already defined on how to implement Agile methodologies. Many of the lessons were built on the success of other companies. So, as a lesson learned, it’s always better to build on a mature baseline and use it as a basis for customization rather than trying to come up with your own method. In our case, we chose to use the Scrum@Scale as our base methodology.

Define Your Custom Agile Process That Is Tailored to Your Organization’s Reality

As one can expect, out-of-the-box methodologies don’t consider your specific organizational reality and challenges. It is therefore very common to customize generic processes to fit your own needs. We chose to write our own guidebook, which summarizes our version of the agile roles and processes. I found that the process of writing our ‘Agile guidebook’ was more important than the book itself. It created a common vocabulary, cleared out differences, and enabled team collaboration, which later led to a stronger buy-in from the entire team.

Test Your Processes Using Real-World Simulation

Defining Agile processes can sometimes feel like an academic exercise. To ensure that we weren’t falling into this trap, we took specific use cases from our daily routine and tested them against the process that we had just defined. We measured how much those processes got clearer or better than the existing ones, and only if we all felt that we had reached a consensus did we make it official.

Restructure the Team Into Self-Organizing Teams

This task is easier said than done. It represents the most challenging aspect, as it necessitates restructuring teams to align with the skills required in each domain. Additionally, we had to ensure that each domain maintained the appropriate capacity, in line with business priorities. Flexibility was crucial, allowing us to adapt teams as priorities shifted.

In this context, it was essential that those involved in defining this structure remained unbiased and earned the trust of the entire team when proposing such changes. As part of our Agile process, we also employed simulations to validate the model’s effectiveness. By minimizing dependencies between teams for each feature development, we transformed the team structure. Initially, features required significant coordination and dependency across teams. However, we evolved to a point where features could be broken down without inter-team dependencies, as illustrated in the following figure:

Figure 2. Organizing teams into self-organizing domains teams. Breaking large features into smaller increments (2-4 sprints each) likely fits better into the domain structure than large features

Invest in Improving the Developer Experience (DevX)

Agile processes require an agile development environment. One constant challenge I've experienced in this regard is that many organizations fail to put the right investment and leadership attention into this area. When that happens, you won't gain the speed and agility you were hoping to get from the entire Agile transformation. In manufacturing terms, it's like investing in robots to automate the manufacturing pipeline but leaving humans to pass the work between them: those humans could never keep up with the rest of the supply chain, and the problem only gets worse as the supply (feature development) gets faster. Your development speed is largely determined by how far your development processes are automated. To reach that level of automation, you need to invest continuously in the development platform. The challenge is that the ratio between developers and DevOps engineers can be as high as 20:1, which quickly turns DevOps into the next bottleneck. Platform engineering can be the solution: in the shift-left model, much of the ongoing responsibility for feature development and testing automation moves to the development team, while the "DevOps" team focuses mostly on delivering and evolving a self-service development platform that enables developers to do this work without having to become DevOps experts themselves.

Keep the ‘Eye on the Ball’ With Clear KPIs

Teams can easily get distracted by daily pressures, causing focus to drift. Keeping discipline on those Agile processes is where a lot of teams fail, as they tend to take shortcuts when the delivery pressure grows. KPIs allow us to keep track of things and ensure that we’re not drifting over time, keeping our ‘eye on the ball’ even when such a distraction happens. There are many KPIs that can measure team effectiveness. The key is to pick the three that are the most important at each stage, such as stability of the release, peer review time, average time to resolve a failure, and test coverage percentage.
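Tracking a small set of KPIs like the ones named above can be as simple as averaging a few fields per sprint. A minimal sketch follows; the record fields and values are hypothetical, not drawn from a specific tool.

```python
# Hypothetical sprint records; field names and values are illustrative.
from statistics import mean

sprint_records = [
    {"review_hours": 6, "fix_hours": 10, "coverage_pct": 71},
    {"review_hours": 4, "fix_hours": 7,  "coverage_pct": 78},
    {"review_hours": 3, "fix_hours": 5,  "coverage_pct": 83},
]

def kpi_summary(records):
    """Average the three chosen KPIs so drift is visible sprint over sprint."""
    return {
        "avg_peer_review_hours": mean(r["review_hours"] for r in records),
        "avg_time_to_resolve_hours": mean(r["fix_hours"] for r in records),
        "avg_test_coverage_pct": mean(r["coverage_pct"] for r in records),
    }

print(kpi_summary(sprint_records))
```

The point is not the arithmetic but the discipline: the same three numbers, computed the same way, reviewed at the same cadence, even when delivery pressure grows.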

Don’t Try It at Home Without a Good Coach

As leaders, we often tend to be impatient and opinionated towards the ‘elephant memory’ of our colleagues. Trying to let the team figure out this sort of transformation all by themselves is a clear recipe for failure. Failure in such a process can make things much worse. On the other hand, having a highly experienced coach with good knowledge of the organization and with the right preparation was a vital facilitator in our case. We needed two iterations to come closer together. The first one was used mostly to get the ‘steam out’, which allowed us to work more effectively on all the rest of these points during the second iteration.


As I close my first year at Dell Technologies and reflect on all the things that I’ve learned, especially for someone who’s been in startups all of his career, I never expected that we could accomplish this level of transformation in less than a year. I hope that the lessons from this journey are useful and hopefully save some of the pain that we had to go through to get there. Obviously, none of this could have been accomplished without the openness and inclusive culture of the entire team in general and leadership specifically within Dell’s NativeEdge team. Thank you!


Read Full Blog
  • edge
  • NativeEdge

Edge AI Integration in Retail: Revolutionizing Operational Efficiency

Nati Shalom

Mon, 12 Feb 2024 11:43:11 -0000



Edge AI plays a significant role in the digital transformation of retail warehouses and stores, offering benefits in terms of efficiency, responsiveness, and enhanced customer experience in the following areas:

  • Real-time analytics—Edge AI enables real-time analytics for monitoring and optimizing warehouse management systems (WMS). This includes tracking inventory levels, predicting demand, and identifying potential issues in the supply chain. In the store, real-time analytics can be applied to monitor customer behavior, track product popularity, and adjust pricing or promotions dynamically based on the current context using AI algorithms that analyze this data and provide personalized recommendations.
  • Inventory management—Edge AI can improve inventory management by implementing real-time tracking systems. This helps in reducing stockouts, preventing overstock situations, and improving the overall supply chain efficiency. On the store shelves, edge devices equipped with AI can monitor product levels, automate reordering processes, and provide insights into shelf stocking and arrangement.
  • Optimized supply chain—Edge AI assists in optimizing the supply chain by analyzing data at the source. This includes predicting delivery times, identifying inefficiencies, and dynamically adjusting logistics routes for both warehouses and stores.
  • Autonomous systems—Edge AI facilitates the deployment of autonomous systems, such as autonomous robots, conveyor belts, robotic arms, automated guided vehicles (AGVs), and collaborative robotics (cobots). Autonomous systems in the store can include checkout processes, inventory monitoring, and even in-store assistance.
  • Predictive maintenance—In both warehouses and stores, Edge AI can enable predictive maintenance of equipment. By analyzing data from sensors on machinery, it can predict when equipment is likely to fail, reducing downtime and maintenance costs.
  • Offline capabilities—Edge AI systems can operate offline, ensuring that critical functions can continue even when there is a loss of internet connectivity. This is especially important in retail environments where uninterrupted operations are crucial.
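The predictive-maintenance item above boils down to flagging equipment when a sensor metric trends past a wear threshold rather than waiting for failure. Here is a hedged, stdlib-only sketch; the vibration values and threshold are illustrative assumptions, not real sensor specifications.

```python
# Hypothetical predictive-maintenance check: flag a machine for service
# when recent vibration readings all exceed a wear threshold.

def needs_service(readings, threshold=7.0, trend_len=3):
    """Flag only when the last `trend_len` readings all exceed the threshold,
    so a single noisy spike does not trigger a work order."""
    recent = readings[-trend_len:]
    return len(recent) == trend_len and all(r > threshold for r in recent)

conveyor_vibration = [5.1, 5.3, 7.2, 7.5, 7.9]   # mm/s, rising wear trend
print(needs_service(conveyor_vibration))          # -> True
print(needs_service([5.0, 5.2, 7.1, 5.1, 5.0]))  # -> False
```

Requiring a sustained trend instead of one reading is what turns raw sensor data into a maintenance signal worth acting on.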

The Operational Complexity Behind the Edge-AI Transformation

The scale and complexity of Edge-AI transformation in retail are influenced by factors such as the number of edge devices, data volume, AI model complexity, real-time processing requirements, integration challenges, security considerations, scalability, and maintenance needs.

The Scalability and Maintenance Challenge

A mid-size retail organization is composed of tens of warehouses and hundreds of stores spread across different locations. In addition to that, it needs to support dozens of external suppliers that also need to become an integral part of the supply chain system. To enable Edge-AI retail, it will need to introduce many new sensors, devices, and systems that will enable it to automate a large part of its daily operation. This will result in hundreds of thousands of devices across the stores and warehouses.
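The back-of-the-envelope arithmetic behind "hundreds of thousands of devices" is worth making explicit. The per-site counts below are illustrative assumptions, not figures from the blog.

```python
# Rough fleet-size estimate for a mid-size retailer; all counts are
# illustrative assumptions chosen to show the order of magnitude.

warehouses, stores = 30, 400
devices_per_warehouse = 2_000   # sensors, AGVs, cameras, scanners
devices_per_store = 500         # shelf sensors, cameras, POS, signage

total = warehouses * devices_per_warehouse + stores * devices_per_store
print(total)   # -> 260000
```

Even with conservative per-site counts, the fleet lands in the hundreds of thousands, which is why per-device manual management does not scale.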


Figure 1. The Edge-AI device scale challenge

The scale of the transformation depends on the number of edge devices deployed in retail environments. These devices could include smart cameras, sensors, RFID readers, and other Internet of Things (IoT) devices. The ability to scale the Edge-AI solution as the retail operation grows is an essential factor. Scalability considerations involve not only the number of devices but also the adaptability of the overall architecture to accommodate increased data volume and computational requirements.

Breaking Silos Through Cloud Native and Cloud Transformation

Each device comes with its proprietary stack, making the overall management and maintenance of such a diverse and highly fragmented environment extremely challenging. To address that, Edge-AI transformation also includes the transformation to a more common cloud-native and cloud-based infrastructure. This level of modernization is massive and costly and cannot happen in one go.


Figure 2. Cloud native and cloud transformation break the device management silos challenges

This brings the need to handle the integration with existing systems (brownfield) to enable smoother transformation. This often involves integration with existing retail systems, such as point-of-sale systems, inventory management software, and customer relationship management tools.

NativeEdge and Centerity Solution to Simplify Retail Edge-AI Transformation

Dell NativeEdge serves as a generic platform for deploying and managing edge devices and applications at the edge of the network. One notable addition in the latest version of NativeEdge is the ability to deliver an end-to-end solution on top of the platform that includes PTC, Litmus, Telit, Centerity, and so on. This capability allows users to get consistent, simple management from bare-metal provisioning to a fully automated, full-blown solution.


Figure 3. Using NativeEdge and Centerity as part of the open edge solution stack

In this blog, we demonstrate the benefits behind the integration of NativeEdge and Centerity that simplify the retail Edge-AI transformation challenges.

Introduction to Centerity

Centerity CSM² is a purpose-built monitoring, auto-remediation, and asset management platform for enterprise retailers that provides proactive wall-to-wall observability of the in-store technology stack. The key part of the Centerity architecture is the Centerity Manager, which is responsible for collecting all the data from the edge devices into a common dashboard.


Figure 4. Centerity retail management and monitoring

Using NativeEdge and Centerity to Automate the Entire Retail Operation

The following are the architecture choices made to address the Edge-AI transformation challenges, with Dell NativeEdge as the edge platform and Centerity as the asset management and monitoring layer for both the retail warehouse and store. In this case, we have two sites: one representing a warehouse where we connect to the customer’s existing environment running on VMware infrastructure, and a retail store running in a different location.

Note: The Centerity Proxy (customer site-1 in the following figure) is used to aggregate multiple remote devices through a single network connection.


Figure 5. Using NativeEdge and Centerity to fully automate and manage a retail warehouse and store

Since the store is often limited by infrastructure capacity, we will use a gateway to aggregate the data from all the devices. For this purpose, we will use a NativeEdge Endpoint as a gateway and install the Centerity monitoring agent on it. The monitoring agent will act as a proxy that on one hand connects to the individual devices in the store and, on the other hand, sends this information back to the Centerity Manager to aggregate all this information into one control plane. In this case, the warehouse runs on a private cloud based on VMware and represents a central data center. Since we have more capacity on this environment, we will collect the data directly from the device to the manager without the need for a proxy agent. The architecture is also set to enable future expansion to public clouds such as AWS and GCP.
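The gateway pattern described above, where a proxy collects per-device readings and forwards one aggregated batch upstream, can be sketched as follows. The device names, readings, and payload shape are assumptions for illustration, not Centerity APIs.

```python
# Hypothetical edge-proxy sketch: poll in-store devices, then bundle the
# readings into one payload for the central manager. All names are
# illustrative assumptions, not a real Centerity interface.

def collect(devices):
    """Poll each in-store device once; a real agent would run on a timer."""
    return [{"device": name, "metric": read()} for name, read in devices.items()]

def aggregate(samples, site="store-001"):
    """Bundle individual readings into one upstream payload, reducing
    per-device WAN connections to a single call to the manager."""
    return {"site": site, "samples": samples, "count": len(samples)}

devices = {
    "camera-01": lambda: {"status": "up"},
    "printer-01": lambda: {"status": "up", "paper_pct": 40},
}
payload = aggregate(collect(devices))
print(payload["count"])   # -> 2
```

The design choice here mirrors the architecture in the blog: capacity-limited stores get a proxy that fans in device traffic, while the better-provisioned warehouse can report to the manager directly.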

Step 1: Use NativeEdge for zero-touch secure on-boarding of the edge infrastructure

Secure device onboarding—In this step, we will onboard three different edge compute classes (PowerEdge, OptiPlex, and Gateway) to represent a warehouse facility with a diverse set of devices. NativeEdge treats each of these devices as a separate ECE instance and thus provides a consistent management layer for all the devices, regardless of their compute class.


Figure 6. Zero-touch provisioning of edge infrastructure from BareMetal to cloud

Step 2:  Deploy Centerity solution on top of NativeEdge infrastructure

This phase is broken down into two parts: first, provisioning the Centerity Manager, which is the main component; second, provisioning the edge proxy on the target store and warehouse.

Step 2.1: Deploy and manage the Centerity Manager on VMware (Site 2)

To do that:

  1. Choose the on-prem Centerity server catalog item from the NativeEdge solution tab. The full Centerity server installation starts on the VMware private cloud (external infrastructure, not a NativeEdge Endpoint).
  2. Use the deployment output to fetch the newly created Centerity server endpoint, credentials, and so on.

Step 2.2: Deploy and manage the Centerity Edge proxy (agent) on NativeEdge Endpoints

To install Centerity Edge proxy collector on each warehouse:

  1. Choose the Centerity Collector or Edge proxy catalog item.
  2. Select the target environment and deploy the proxy on all the selected sites. The installation runs in parallel on all sites.
  3. Fill in the relevant deployment inputs and install the deployment.
  4. NativeEdge starts the fulfillment phase and runs all operations.
  5. A CentOS VM is installed and configured for each warehouse; the edge proxy agent/collector is installed on it and connected to the server.
  6. Execute day-2 operations, such as updating one of the warehouses using a security update check or a custom workflow. 
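The parallel fan-out in step 2 of the list above can be sketched with a thread pool. `deploy_proxy()` is a hypothetical stand-in for the orchestrator's real fulfillment logic, not a NativeEdge API.

```python
# Sketch of deploying the edge proxy to all sites in parallel.
# deploy_proxy() is an illustrative placeholder for the real fulfillment step.

from concurrent.futures import ThreadPoolExecutor

def deploy_proxy(site):
    """Placeholder for provisioning the Centerity edge proxy on one site."""
    return f"{site}: proxy deployed"

sites = ["warehouse-east", "warehouse-west", "store-001"]
with ThreadPoolExecutor(max_workers=len(sites)) as pool:
    results = list(pool.map(deploy_proxy, sites))

for line in results:
    print(line)
```

Running the sites concurrently rather than sequentially is what keeps rollout time roughly flat as the fleet grows.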

The following blueprint automates the deployment of the Centerity agent on a NativeEdge Endpoint. It launches a virtual machine (VM) on the remote device which is configured to connect to the Centerity Manager. It also optimizes the VM to support AI workload by enabling GPU passthrough.


Figure 7. Create an AI optimized VM on the target device

NativeEdge can execute the above blueprint simultaneously on all the devices. The following figure shows the result of executing this blueprint on three devices.


Figure 8. Deploy the Edge Proxy on all the stores in one bulk

Step 3: Connect the retail and logistic devices to Centerity

In this step, we will configure and set up the devices and connect them to the Centerity monitoring service. Note that this step is done directly in the Centerity management console, not through the NativeEdge console.

In this case, we chose the following endpoints within the logistic center or warehouse.

  • Tablet type – Dell Windows 11
  • Mobile terminal type – Zebra TC52
  • API based devices – SES (Digital signage)
  • Printer – Bixolon (Log based)
  • Agentless based devices – Security camera


Figure 9. Centerity Management connected to the edge device managed by NativeEdge

Step 4: Managing and monitoring the retail warehouse and store

In this step, we will manage the retail warehouse and store through the monitoring of the devices that we connected to the system in the previous step. This will include the following set of operations:

  • Device monitoring
  • Inventory tracking (if applicable)
  • Failure alerts
  • Auto remediation (if applicable)
  • Operational and business SLA dashboards
  • Reports
  • Generating events for proactive operational support
  • Updating and maintaining system software for compliance
  • Break/fix workflows

Monitoring and managing retail devices

Figure 10. Monitoring and managing retail devices


Dell NativeEdge provides fully automated, secure device onboarding from bare metal to the cloud. As a DevEdgeOps platform, NativeEdge also provides the ability to validate and continuously manage the provisioning and configuration of those devices in a secure way. This minimizes the risk of failure and security breaches due to misconfiguration or human error; those potential vulnerabilities can be detected earlier, in the pre-deployment development process. The NativeEdge Orchestrator gives customers consistent, simple management of built-in solutions across their entire fleet of new and existing devices. The separation between device management and the solution is key to enabling consistent operational management across different solution vendors as well as cloud infrastructure. In addition, the ability to integrate with the retailer's existing infrastructure (VMware in this example) as well as cloud-native infrastructure simultaneously ensures a smoother transformation to a modern Edge-AI-enabled infrastructure.

The integration between NativeEdge and Centerity in this use case enables customers to deliver full-blown retail management that integrates with both their legacy devices and their new AI-enabled devices. According to recent studies, this level of end-to-end monitoring and automation can reduce maintenance overhead and potential downtime by 57 percent.

Moving to a fully automated and monitored retail warehouse and store brings a significant TCO saving

Figure 11. Moving to a fully automated and monitored retail warehouse and store brings a significant TCO saving

It is also worth noting that the open solution framework provided by NativeEdge allows partners such as Centerity to use Dell NativeEdge as a generic edge infrastructure framework, addressing fundamental aspects of device fleet management. Vendors can then focus on delivering the unique value of their solution, be it predictive maintenance or real-time monitoring, as demonstrated by the Centerity use case in this blog.


  • NativeEdge

Streaming for Edge Inferencing; Empowering Real-Time AI Applications

Nati Shalom

Tue, 06 Feb 2024 10:17:30 -0000


Read Time: 0 minutes

In the era of rapid technological advancements, artificial intelligence (AI) has made its way from research labs to real-world applications. Edge inferencing, in particular, has emerged as a game-changing technology, enabling AI models to make real-time decisions at the edge. To harness the full potential of edge inferencing, streaming technology plays a pivotal role, facilitating the seamless flow of data and predictions. In this blog, we will explore the significance of streaming for edge inferencing and how it empowers real-time AI applications.

The Role of Streaming

Streaming technology is a core component of edge inferencing, as it allows continuous, real-time transfer of data between devices and edge servers. Streaming can take various forms, such as video streaming, sensor data streaming, and audio streaming, depending on the specific application's requirements. This real-time data flow enables AI models to make predictions and decisions on the fly, enhancing the overall user experience and system efficiency.
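As a minimal illustration of on-the-fly decision making over a stream, the following Python sketch flags sensor readings that deviate from a sliding-window mean as they arrive, rather than waiting for a later batch. The values, window size, and threshold are invented for illustration; they are not part of any SDP API.

```python
from collections import deque

def sensor_stream():
    """Simulated sensor feed; in practice this would be a network or device source."""
    for value in [12.1, 12.3, 12.2, 18.9, 12.4, 12.2]:
        yield value

def detect_anomalies(stream, window=3, threshold=4.0):
    """Flag readings that deviate from a sliding-window mean as they arrive."""
    recent = deque(maxlen=window)
    for value in stream:
        if len(recent) == window:
            mean = sum(recent) / window
            if abs(value - mean) > threshold:
                yield value  # the decision is made on the fly, not in a later batch
        recent.append(value)

anomalies = list(detect_anomalies(sensor_stream()))
print(anomalies)
```

The generator never materializes the whole stream, which is the property that keeps latency low in a real streaming pipeline.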

Typical Use Cases

Streaming for edge inferencing is already transforming various industries. Here are some examples:

  • Smart cities—Edge inferencing powered by streaming technology can be used for real-time traffic management, crowd monitoring, and environmental sensing. This enables cities to become more efficient and responsive.
  • Healthcare—Wearable devices and IoT sensors can continuously monitor patients, providing early warnings for health issues and facilitating remote diagnosis and treatment.
  • Retail—Real-time data analysis at the edge can enhance customer experiences, optimize inventory management, and provide personalized recommendations to shoppers.
  • Manufacturing—Predictive maintenance using edge inferencing can help factories avoid costly downtime by identifying equipment issues before they lead to failures.

Dell Streaming Data Platform

The Dell Streaming Data Platform (SDP) is a comprehensive solution for ingesting, storing, and analyzing continuously streaming data in real time. It provides a single platform that consolidates real-time, batch, and historical analytics for improved storage and operational efficiency.

The following figure shows a high-level overview of the Streaming Data Platform architecture streaming data from the edge to the core.

Figure 1. SDP high-level architecture

Using SDP for Edge Video Ingestion and Inferencing

The key advantages of SDP, in the context of the edge, are its low footprint and its ability to handle long-term data storage. Because the platform is built on Kubernetes, scaling storage is a matter of adding nodes. By autotiering storage upon ingestion, SDP allows unlimited historical data retrieval for analysis alongside real-time streaming data, enabling business insights far into the future.

One key advantage of SDP at edge deployment is its capability of supporting real-time inferencing for edge AI/ML applications as live data is ingested into SDP. Data insights can be obtained without delay while such data is also persistently stored using SDP’s innovative tiered-storage design that provides long-term storage and data protection.

For computer vision use cases, SDP provides plugins for GStreamer, the popular open-source multimedia processing framework, enabling easy integration with GStreamer video analytics pipelines. See the GStreamer and GStreamer Plugins documentation for details.

Figure 2. Edge Inferencing with SDP

Optimized for Deep Learning at Edge

Using SDP for visual embedding of computer vision

Figure 3. Using SDP for visual embedding of computer vision

SDP is also optimized to process video streaming data at the edge by adding frame detection. Saving video frames, combined with an integrated vector database, makes it possible to handle video processing at the edge.

SDP is optimized for deep learning by providing semantic embedding of ingested data, especially unstructured data such as images and videos. As unstructured data is ingested into SDP, it is processed by an embedding pipeline that leverages pretrained models to extract semantic embeddings from the raw data and persists those embeddings in a vector database. This semantic embedding of raw data enables advanced data management capabilities as well as support for GenAI applications. For example, these embeddings can provide domain-specific context for GenAI queries using Retrieval-Augmented Generation (RAG).
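To make the retrieval step of RAG concrete, here is a toy sketch in pure Python. The three-dimensional vectors and frame names are invented stand-ins; in SDP the embeddings would come from a pretrained model and be stored in the integrated vector database.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector database" of frame embeddings (hypothetical names and values).
vector_db = {
    "forklift_frame_001": [0.9, 0.1, 0.0],
    "empty_aisle_frame_002": [0.1, 0.9, 0.1],
    "forklift_frame_003": [0.8, 0.2, 0.1],
}

def retrieve(query_embedding, k=2):
    """Return the k most similar stored items -- the retrieval step of RAG."""
    ranked = sorted(vector_db,
                    key=lambda name: cosine(query_embedding, vector_db[name]),
                    reverse=True)
    return ranked[:k]

top_matches = retrieve([0.85, 0.15, 0.05])
print(top_matches)
```

The retrieved items would then be passed as domain-specific context to the generative model.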

Optimized for Edge AI

As AI/ML applications are becoming more widely adopted at the edge, SDP is ideally suited to support these applications by enabling real-time inference at the edge. Compared to traditional edge AI applications where data is transported to the cloud or core for inferencing, SDP can provide real-time inference right at the edge when live data is ingested so that inference latency can be greatly reduced.

In addition, SDP embraces rapidly emerging deep learning and GenAI applications by providing advanced data semantic embedding extraction and embedding vector storage, especially for unstructured multimedia data such as audio, image, and video data.

Unstructured data embedding vector generation

Figure 4. Unstructured data embedding vector generation

Long-Term Storage

SDP is designed with an innovative tiered-storage architecture. Tier-1 provides high-performance local storage, while tier-2 provides long-term storage. Specifically, tier-1 data storage is provided by replicated Apache BookKeeper backed by local storage to guarantee data durability once data is ingested into SDP. Asynchronously, data is flushed to tier-2 long-term storage such as Dell PowerScale to provide unlimited data storage and data protection. Alternatively, on NativeEdge, replicated Longhorn storage can also be used as long-term storage for SDP. With long-term data storage, analytics applications can consume unbounded data to gain insights over a long period of time.
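The tiering behavior described above can be sketched in a few lines of Python. This is a simplified model, not SDP's API: tier-1 stands in for replicated BookKeeper, tier-2 for PowerScale or Longhorn, and the flush is an explicit call here where the real system flushes asynchronously.

```python
class TieredStore:
    """Sketch of tiered storage: durable tier-1 ingest, later flush to tier-2."""

    def __init__(self):
        self.tier1 = []   # fast local storage (bounded in practice)
        self.tier2 = []   # unlimited long-term storage

    def ingest(self, event):
        self.tier1.append(event)       # acknowledged once durable in tier-1
        return len(self.tier1) - 1

    def flush(self):
        self.tier2.extend(self.tier1)  # move older data to long-term storage
        self.tier1.clear()

    def read(self, offset):
        # Readers see one continuous stream across both tiers.
        combined = self.tier2 + self.tier1
        return combined[offset]

store = TieredStore()
store.ingest("frame-0")
store.ingest("frame-1")
store.flush()
store.ingest("frame-2")
print(store.read(0), store.read(2))   # historical and fresh data, same API
```

The key point is the last line: an analytics application reads one unbounded stream without knowing which tier currently holds each record.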

The following figure illustrates the long-term storage architecture in SDP.

SDP Long-Term Storage

Figure 5. SDP long-term storage

Cloud Native

SDP is fully cloud native in its design, with a distributed architecture and autoscaling. SDP can be readily deployed on any Kubernetes environment in the cloud or on-premises. On NativeEdge, SDP is deployed on a K3s cluster by the NativeEdge Orchestrator. In addition, SDP can be easily scaled up and down as the data ingestion rate and application workload vary, which makes it flexible and elastic in different NativeEdge deployment scenarios. For example, SDP can leverage Kubernetes to autoscale its stream segment stores to adapt to changing data ingestion rates.
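As a sketch of that elasticity idea, the hypothetical rule below sizes the number of segment stores from the ingest rate. The function name, per-store capacity figure, and bounds are invented for illustration; they are not SDP parameters.

```python
import math

def desired_segment_stores(ingest_mbps, capacity_mbps_per_store=100,
                           min_stores=1, max_stores=10):
    """Hypothetical scale rule: enough stores to cover the ingest rate, within bounds."""
    needed = math.ceil(ingest_mbps / capacity_mbps_per_store)
    return max(min_stores, min(max_stores, needed))

print(desired_segment_stores(250))    # moderate load needs several stores
print(desired_segment_stores(40))     # light load stays at the floor
print(desired_segment_stores(5000))   # heavy load is clamped at the ceiling
```

A real Kubernetes autoscaler applies the same shape of rule, with hysteresis to avoid flapping between replica counts.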

Automating the Deployment of SDP on Dell NativeEdge

Dell NativeEdge is an edge operations software platform that helps customers securely scale their edge operations to power any use case. It streamlines edge operations at scale through centralized management, secure device onboarding, zero-touch deployment, and automated management of infrastructure and applications. With automation, open design, zero-trust security principles, and multicloud connectivity, NativeEdge empowers businesses to attain their desired outcomes at the edge.

Dell NativeEdge provides several features that make it ideal for deploying SDP on the edge, including:

  • Centralized management—Remotely manage your entire edge estate from a central location without requiring local, skilled support.
  • Secure device onboarding with zero-touch provisioning—Automate the deployment and configuration of the edge infrastructure managed by NativeEdge, while ensuring a zero-trust chain of custody.
  • Zero-trust security enabling technologies—From secure component verification (SCV) to secure operating environment with advanced security control to tamper-resistant edge hardware and software integrity, NativeEdge secures your entire edge ecosystem throughout the lifecycle of devices.
  • Lifecycle management—NativeEdge allows complete lifecycle management of your fleet of edge devices as well as applications.
  • Multicloud app orchestration—NativeEdge provides templatized application orchestration using blueprints. It also provides the flexibility to choose the ISV application and cloud environments for your edge application workloads.

Deploying SDP as a Cloud-Native Service on Top of a NativeEdge-Enabled Kubernetes Cluster

In a previous blog, we provided insight into how we can turn our Edge devices into a Kubernetes cluster using NativeEdge Orchestrator. This step creates the foundation that allows us to deploy any edge service through a standard Kubernetes Helm package.

Deploying SDP solution on NativeEdge-enabled Kubernetes

Figure 6. Deploying SDP solution on NativeEdge-enabled Kubernetes

The Deployment Process

SDP is built as a cloud native service. The deployment of SDP includes a set of microservices as well as Ansible playbooks to automate the configuration management of those services.

The main blueprint that deploys the SDP app is shown in the following figure. It is a TOSCA node definition of an Application Module type, and it invokes an Ansible playbook to configure and start the deployment process.

Deploying SDP App

Figure 7. Deploying SDP App

For the deployment, SDP is one of the available services we can choose from the NativeEdge Catalog under the Solutions tab. In the following figure, an HA SDP service is deployed on top of a Kubernetes cluster.

Select the SDP service from the NativeEdge Catalog

Figure 8. Select the SDP service from the NativeEdge Catalog

In the following figure, as part of the deployment process, we can provide input parameters. In this case, we provide configuration parameters that can vary from one edge location to another. We use the same blueprint definition with different deployment parameters to adjust various edge requirements, like different location requirements, different configurations, and so on.

Deploy the SDP blueprint and enter the inputs

Figure 9. Deploy the SDP blueprint and enter the inputs

The SDP service is deployed, as shown in the following figure.

Figure 10. Create an SDP instance by performing the install workflow

Benefits of Streaming Services

Streaming is a critical part of any edge inferencing solution and provides the following benefits:

  • Reduced latency—Streaming ensures that data is processed when it is generated. This minimal delay is crucial for applications where even a few milliseconds can make a significant difference, such as when autonomous vehicles need to react quickly to changing road conditions.
  • Enhanced privacy—By processing data at the edge, streaming minimizes the need to send sensitive information to the cloud for processing. This enhances user privacy and security, which is a critical consideration in applications like healthcare and smart homes.
  • Improved scalability—Streaming can efficiently handle large volumes of data generated by edge devices, making it a scalable solution for applications that involve multiple devices or sensors.
  • Real-time decision making—Streaming enables AI models to make decisions in real time, which is vital for applications like predictive maintenance in industrial settings or emergency response systems.
  • Cost efficiency—By performing inferencing at the edge, streaming reduces the need for continuous cloud processing, which can be costly. This approach optimizes resource utilization and cost savings.
  • Adaptability—Streaming is flexible and adaptable, making it suitable for a wide range of applications. Whether it is processing visual data from cameras or analyzing sensor data, streaming can be customized to meet specific needs.

NativeEdge Support for Edge AI-Enabled Streaming Through SDP Integration

NativeEdge comes with built-in support for SDP, an edge-optimized streaming solution geared specifically to edge use cases such as video inferencing.

NativeEdge is a great choice for edge AI-enabled streaming because:

  • It is optimized for edge AI data processing
  • It has long-term storage
  • It is cloud native
  • It has a low footprint

Optimized for NativeEdge

SDP lifecycle management is fully automated by the NativeEdge Orchestrator. SDP is available in the NativeEdge Catalog under the Solutions tab. To deploy an instance of SDP on NativeEdge, a customer simply selects SDP from the catalog and triggers the deployment. The SDP blueprint deploys an SDP cluster on NativeEdge Endpoints. Once SDP is deployed, its ongoing day-2 operations are also managed by the NativeEdge Orchestrator, providing a seamless experience for customers.

NativeEdge also comes with a fully optimized stack for handling AI workloads through support for integrated accelerators using GPU passthrough, SR-IOV, and so on.

Support for Custom Streaming Platform

NativeEdge provides an open platform that easily plugs into your custom streaming platform through the support of Kubernetes and Blueprints, which is an integrated part of NativeEdge Orchestrator.


For more information, see the following:

  • NativeEdge

From Bare-Metal Edge Devices to a Full-Blown Kubernetes Cluster

Nati Shalom

Tue, 02 Jan 2024 09:45:00 -0000


Read Time: 0 minutes

Deploying a Kubernetes cluster on the edge involves setting up a lightweight, efficient Kubernetes (K8s) environment suitable for edge computing scenarios. Edge computing often involves deploying clusters in remote or resource-constrained locations, such as remote data centers or on-premises hardware in locations with limited connectivity.

This blog describes the steps for deploying an edge-optimized Kubernetes cluster on Dell NativeEdge.

Step 1: Select an Edge-Optimized Kubernetes Stack

Our Kubernetes stack comprises a Kubernetes controller, storage, and a virtual IP (also known as a load balancer). We have chosen open-source components as our first choice for obvious reasons.

Standard Kubernetes has a relatively high resource footprint that does not fit low-cost edge use cases, which is why MicroK8s, K3s, K0s, KubeVirt, Virtlet, and Krustlet have emerged as smaller-footprint variants of Kubernetes.

We have chosen K3s as our Kubernetes cluster, Longhorn for storage, and Kube-VIP for our cluster networking.

Edge-Optimized Kubernetes Stack

Figure 1. Edge-Optimized Kubernetes Stack

The following sections provide a quick overview of each element in the stack.

Edge-Optimized Kubernetes Cluster

Edge is a constrained environment that is often limited by resource capacity. 

K3s is a lightweight, certified Kubernetes distribution designed for resource-constrained environments, including edge computing scenarios. It is an excellent choice for deploying Kubernetes clusters at the edge due to its reduced resource requirements and simplified installation process.

K3s Key Features:

  • Minimal resource usage—K3s is designed to have a small memory and CPU footprint. It can run on devices with as little as 512MB of RAM and is suitable for single-node setups.
  • Reduced dependencies—K3s eliminates many of the dependencies that are present in a full Kubernetes cluster, resulting in a smaller installation size and simplified management. It uses SQLite as the default database, for example, instead of etcd.
  • Lightweight images—K3s uses lightweight container images, which further reduces its overall size. It includes only the necessary components to run a Kubernetes cluster.
  • Single binary—K3s is distributed as a single binary, making it easy to install and manage. This binary includes both the server and agent components of a Kubernetes cluster.
  • Highly compressed artifacts—K3s uses highly compressed artifacts, including container images and binary files to reduce disk space usage.
  • Reduced network overhead—K3s can operate in network-constrained environments, making it suitable for edge computing scenarios.   
  • Efficient updates—K3s is designed to handle updates efficiently, ensuring that the cluster stays small and doesn't accumulate unnecessary data.

Edge-Optimized Storage

Longhorn is an open-source, cloud-native distributed storage system for Kubernetes. It is designed to provide persistent storage for containerized applications in Kubernetes environments.

Longhorn Key Features:

  • Distributed block storage—Longhorn offers distributed block storage that can be used as persistent storage for applications running in Kubernetes pods. It uses a combination of block devices on worker nodes to create distributed storage volumes.
  • Data redundancy—Longhorn incorporates data redundancy mechanisms such as replication and snapshots to ensure data integrity and high availability. This means that even if a node or volume fails, data is not lost.
  • Kubernetes-native—Longhorn is designed specifically for Kubernetes and integrates seamlessly with it. It is implemented as a custom resource definition (CRD) within Kubernetes, making it a first-class citizen in the Kubernetes ecosystem.
  • User-friendly UI—Longhorn provides a user-friendly web-based management interface for users to easily create and manage storage volumes, snapshots, and backups. This simplifies storage management tasks.
  • Backup and restore—Longhorn offers a built-in backup and restore feature, enabling users to take snapshots of their data and restore them when needed. This is crucial for disaster recovery and data protection.
  • Cross-cluster replication—Longhorn has features for replicating data across different Kubernetes clusters, providing data availability and disaster recovery options.
  • Lightweight and resource-efficient—Longhorn is resource-efficient and lightweight, making it suitable for various environments, including edge computing, where resource constraints may exist.
  • Open source and community-driven—Longhorn is an open-source project with an active community, which means it receives regular updates and improvements.
  • Cloud-native storage solutions—It is well-suited for stateful applications, databases, and other workloads that require persistent storage in Kubernetes, offering a cloud-native approach to storage.

Kube-VIP (Load Balancer)

Kubernetes Virtual IP (Kube-VIP) is an open-source tool for providing high availability and load balancing within Kubernetes clusters. It manages a virtual IP address associated with services, ensuring continuous access to services, load balancing, and resilience to node failures.

Kube-VIP Key Features:

  • Virtual IP (VIP)—Kube-VIP manages a virtual IP address, which is associated with a Kubernetes service. This IP address can be used to access the service, and Kube-VIP ensures that the traffic is directed to healthy pods and nodes.
  • High availability—Kube-VIP supports high-availability configurations, allowing it to function even when nodes or control plane components fail. It can automatically detect node failures and reroute traffic to healthy nodes.
  • Load balancing—Kube-VIP provides load-balancing capabilities for services, distributing incoming traffic among multiple pods for the same service. This helps distribute the load evenly and improve the service's availability.
  • Support for various load-balancing algorithms—Kube-VIP supports multiple load-balancing algorithms, such as round-robin, least connections, and more, allowing you to choose the most suitable strategy for your services.
  • Integration with Kubernetes—Kube-VIP is designed to work seamlessly with Kubernetes clusters and leverages Kubernetes resources to configure and manage the virtual IP and load balancing.
  • Customizable configuration—Kube-VIP provides configuration options to fine-tune its behavior based on specific cluster requirements.
  • Support for multiple load-balancer implementations—Kube-VIP can be used with different load-balancer implementations, including Border Gateway Protocol (BGP) and other network load-balancing solutions.
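The two most common strategies from the list above can be sketched in a few lines of Python. The pod names are placeholders, and a real balancer also tracks connection lifetimes; this only shows how each algorithm picks a backend.

```python
import itertools

backends = ["pod-a", "pod-b", "pod-c"]   # placeholder service endpoints

# Round-robin: hand out backends in a fixed rotating order.
_cycle = itertools.cycle(backends)

def round_robin():
    return next(_cycle)

# Least connections: pick the backend with the fewest active connections.
active = {b: 0 for b in backends}

def least_connections():
    target = min(active, key=active.get)
    active[target] += 1        # a real balancer decrements on connection close
    return target

rr_order = [round_robin() for _ in range(4)]
lc_order = [least_connections() for _ in range(3)]
print(rr_order, lc_order)
```

Round-robin ignores load entirely, while least-connections adapts to uneven request durations; Kube-VIP lets you choose between such strategies per service.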

Step 2: Automating the Deployment of Edge Kubernetes on Dell NativeEdge

To automate the deployment of Edge Kubernetes on NativeEdge, we need to automate the deployment of all three components of our edge architecture.

For that purpose, we use the NativeEdge Orchestrator blueprint. The blueprint provides the automation scheme for each component and allows us to compose a solution offering an end-to-end automation of the entire cluster on all its components.

Automating the Kubernetes Cluster Deployment

Figure 2. Automating the Kubernetes Cluster Deployment

Step 3: Deployment and Configuration

The following snippets show the blueprint for each of the three components that were previously described. A blueprint is a form of infrastructure as code (IaC) written in YAML format. Each blueprint uses a different automation plugin that fits each unit.

The first snippet shows the provisioning of a virtual IP address (VIP) that serves as the cluster entry point to the outside world. As with any load-balancer, it provides a single VIP address for all three nodes in the cluster. In this case, we chose a fabric plugin (SSH script) to automate the installation and configuration of that VIP service (scripts/

VIP Blueprint Snippet

Figure 3. VIP Blueprint Snippet

The second snippet shows the provisioning of the K3s cluster. It comes in multiple configuration flavors: a single node, or a three- or five-node HA cluster. We first provision the first node and then, in the case of a multi-node cluster, provision the rest of the nodes. All of the nodes form a cluster, resulting in an HA solution.

K3s Blueprint Snippet

Figure 4. K3s Blueprint Snippet

The third snippet shows the provisioning of Longhorn, a cloud-native HA distributed block storage system for Kubernetes. It is optional; the user can decide, using inputs, whether to add HA storage to the cluster. Longhorn creates replicas of the data on other nodes' volumes in the cluster, so if a node fails, the other replicas remain available.

Storage (Longhorn) Helm Chart Blueprint

Figure 5. Storage (Longhorn) Helm Chart Blueprint

After you connect all of the components and create the HA Kubernetes cluster, you have a topology of three Kubernetes nodes (in the case of a three-node cluster), plus Kube-VIP as the VIP entry point to the cluster and a Longhorn storage component, as shown in the following topology diagram.

Automation Topology

Figure 6. Automation Topology

This process takes a few minutes, and then you have an HA Kubernetes cluster.


The discovery phase is responsible for maintaining the list of available edge devices. The result of the discovery is a list of environment entries, each containing the relevant device asset-management information.

This list is used as an input to the deployment phase and lets the user select the designated devices that are used for the cluster.

NativeEdge Discovery

Figure 7. NativeEdge Discovery

In the previous snippet, we see the available NativeEdge Endpoints that the user can choose from to form a cluster. The user can choose either one or three NativeEdge Endpoints to create an HA cluster. An odd number of endpoints is needed for the cluster leader-election algorithm: it prevents multiple leaders from being elected, a condition known as the split-brain problem. Consensus algorithms rely on an odd number of voters so that one node can always win a strict majority of the votes.
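The majority rule can be shown in a short Python sketch. The node names are invented, and real consensus protocols such as Raft add terms and heartbeats on top of this basic idea.

```python
from collections import Counter

def elect_leader(votes):
    """Return a leader only if some node has a strict majority; else no leader.

    With an odd number of voters, two candidates can never tie with half
    the votes each, which is what prevents a split-brain outcome.
    """
    tally = Counter(votes)
    candidate, count = tally.most_common(1)[0]
    if count > len(votes) / 2:
        return candidate
    return None   # no majority: stay leaderless rather than risk two leaders

# Three endpoints voting: "node-1" wins with 2 of 3 votes.
leader = elect_leader(["node-1", "node-1", "node-2"])

# With an even cluster, a 2-2 tie is possible and no leader can be chosen.
tie = elect_leader(["node-1", "node-1", "node-2", "node-2"])
print(leader, tie)
```

The even-sized example is exactly the failure mode the odd-endpoint requirement avoids.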

Workflow Execution

Workflow execution is the phase where we map the automation plan described in the blueprint into a chain of tasks, each calling the relevant infrastructure resource APIs needed to establish our cluster.
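The mapping from a declarative blueprint to an ordered task chain can be sketched with Python's standard-library topological sorter. The task names and dependencies below are hypothetical, loosely mirroring the cluster components in this blog; they are not the actual blueprint contents.

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (hypothetical names).
blueprint_tasks = {
    "provision_vip": set(),
    "install_k3s_server": set(),
    "join_k3s_agents": {"install_k3s_server"},
    "install_longhorn": {"join_k3s_agents"},
    "expose_cluster": {"provision_vip", "install_longhorn"},
}

# Resolve an execution order where every task runs after its dependencies.
order = list(TopologicalSorter(blueprint_tasks).static_order())
print(order)
```

An orchestrator runs independent tasks (such as the VIP and the first K3s server here) in parallel; the topological order only constrains dependent ones.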

The user starts by deploying the K3s blueprint from the application catalog, as shown in the following figure.

NativeEdge Workflow Execution

Figure 8. NativeEdge Workflow Execution

In the following figure, we can see the deployment progress bar at 61 percent complete. The workflow deploys all the necessary cluster resources: the K3s components, Kube-VIP, and Longhorn.

NativeEdge Solution Deployments

Figure 9. NativeEdge Solution Deployments

Upon deployment completion, NativeEdge shows the Deployment Capabilities and Outputs, as seen in the following figure. This list includes important information such as the K3s cluster endpoint to access the cluster.

The Deployment Capabilities and Outputs display also includes events or logs of the deployment execution, where the user can view various steps of the deployment execution.

Deployment Details

Figure 10. Deployment Details

Final Notes

Edge devices can vary significantly in terms of networking capability, resource level, hardware capabilities, operating systems, and functional role, leading to fragmentation in the edge computing ecosystem.

Edge AI is a catalyst event that leads to even more significant edge device fragmentation. It requires specialized hardware accelerators like GPUs, Neural Processing Units (NPUs), or Tensor Processing Units (TPUs) to efficiently run deep learning models. Different manufacturers produce these accelerators, leading to a variety of hardware platforms and architectures. In addition to that, many organizations, especially in industries such as automotive, healthcare, and industrial IoT, develop custom edge AI solutions tailored to their specific requirements.

Kubernetes Reduces the Edge Fragmentation Complexity

Using Kubernetes at the edge can help reduce device fragmentation complexity through:

  • Abstraction—Kubernetes provides an abstraction of hardware differences.
  • Containerization—Kubernetes provides a lightweight, portable workload execution framework that can run consistently across various edge devices, regardless of the underlying operating system or hardware.
  • Resource management—Kubernetes provides resource management features that allow you to allocate CPU and memory resources to containers.
  • Edge clusters—Kubernetes can be set up to manage clusters of edge devices distributed across different locations, leveraging a fabric or mesh topology architecture.
  • Rolling updates and version control—Kubernetes supports rolling updates and version control of containerized applications.
  • Avoid vendor lock-in, the right Kubernetes for the job—Evolving extensions or variants of Kubernetes may be better suited for the edge, including MicroK8s, K3s, K0s, KubeVirt, Virtlet, and Krustlet.

Having said that, setting up a Kubernetes cluster on edge devices can be a complex task.

NativeEdge provides a built-in blueprint that automates the entire process through a single API call. 

It is also important to note that in this specific example, we refer to a specific edge Kubernetes stack. The provided blueprint can be easily extended to fit your specific environment or your choice of Kubernetes stack.  

  • Intel
  • zero trust
  • NativeEdge

Dell NativeEdge Speeds Edge Deployments with FIDO Device Onboard (FDO)

Jeroen Mackenbach, Bradley Goodman, Jenna Tartaglino, Joe Caisse

Tue, 26 Sep 2023 19:15:00 -0000


Read Time: 0 minutes

Edge computing is generally defined as “a distributed computing paradigm that brings computation and data storage closer to the sources of data.1” The goal of this approach is to improve response times and save bandwidth.

Beyond this definition, edge computing is critical for enterprises to drive innovation and business outcomes. Existing approaches to the edge have led to technology silos, unscalable operations, poor infrastructure utilization, and inflexible legacy ecosystems. The massive proliferation of diverse edge devices has also increased exposure to cyberattacks. Dell has addressed these challenges with the new NativeEdge solution, a key feature of which is the ability to deploy edge devices swiftly and securely. At the root of this capability is FIDO Device Onboard (FDO), an open standard defined by technology leaders within the FIDO Alliance to automatically and securely onboard devices within edge deployments as diverse as retail, manufacturing, and energy. The FDO implementation used by Dell is based on the open-source implementation that has been contributed to the Linux Foundation Edge project by Intel.

The integration of the FIDO Device Onboard (FDO) with the Dell NativeEdge solution helps organizations to deploy and manage infrastructure at the edge by utilizing zero-trust principles and a streamlined supply chain to secure the edge environment at scale. “Intel developed and contributed the base technology that became FDO. Our work with Dell and the FIDO Alliance is a great example of the power of collaboration to address the continuously evolving threat landscape faced by our edge customers,” said Sunita Shenoy, Senior Director, Edge Technology Product Management at Intel.

“Edge computing is transforming industries, and we are delighted that FDO is a key component in Dell's innovative NativeEdge platform,” said Andrew Shikiar, executive director and CMO of the FIDO Alliance. See the press release here: FIDO Device Onboard (FDO) Certification Program is Launched to Enable Faster, More Secure Deployments of Edge Nodes and IoT Devices

In this blog, we will look at the edge challenges and three key elements that seek to address them: first, Dell's NativeEdge solution (described here); second, the FIDO Device Onboard (FDO) standard; and last, the Linux Foundation Edge open-source software implementation of FDO (described here).

Business Challenges at the Edge

Recent years have seen a significant shift towards the edge, as more companies deploy devices that increase the demand for more data and analytics. By deploying devices to the edge, companies can reduce latency, improve the speed of data processing, and enhance security. Further, deploying devices at the edge can also help reduce bandwidth consumption and minimize the costs that are associated with transmitting large amounts of data to the cloud. The deployment of devices at the edge has therefore become a crucial component of modern technology infrastructure, enabling businesses to improve their operational efficiency and deliver better customer experiences.

The Dell NativeEdge Solution

The NativeEdge operations software platform enables organizations to securely deploy and manage infrastructure at the edge. NativeEdge supports a wide range of NativeEdge Endpoints. It uses zero-trust principles, combined with a holistic factory integration approach and application orchestration, to create a secure edge environment. It can start small with a single device and scale out as needed, and it can be deployed centrally or globally, regardless of network connectivity challenges, absence of technical staff, or facility environment.

Driving Improved Return on Investment at the Edge

In an internal Dell analysis2 consisting of return-on-investment modeling together with nearly a hundred Dell customer interviews, and a third-party environmental consultant review for methodology validation, Dell examined the potential economic impact of running NativeEdge across 25 facilities of a composite manufacturing company. 

The study found that after three years, the company could expect to see the following benefits: 

  • Up to 132-percent return on investment for Dell NativeEdge platform costs 
  • An average of 20-minute time saving per month for every edge infrastructure asset managed with NativeEdge 
  • Savings on transportation costs by decreasing the need for site-support dispatches, helping to reduce travel time, and eliminating up to 14 metric tons of carbon dioxide emissions 

Key Elements of Dell NativeEdge

NativeEdge Functionality at a Glance

As these figures show, NativeEdge is designed to address the major aspects of managing an edge system. The first two of these aspects are closely linked: zero-touch provisioning (also known as onboarding) paired with zero-trust security, a key tenet of which is, "Never trust, always verify."

Automating the Onboarding Process with FIDO Device Onboard (FDO)

Traditionally, the installation of edge devices has been a cumbersome and time-consuming process. Edge installers, who could be individuals such as retail store managers or factory plant managers, may lack the expertise to manage complex edge devices and operating system installations. This highlights the importance of ensuring that edge devices are user-friendly and straightforward to deploy, as mistakes in manual onboarding can lead to security issues as well as service outages.

With NativeEdge, anyone can easily set up a NativeEdge Endpoint by simply plugging in a network cable, powering on the device, and stepping away. By leveraging the FIDO Alliance's open standard, FIDO Device Onboard Specification 1.1, Dell ensures an installation process that is as streamlined as possible. The FIDO Alliance is a standards organization with over 250 members that was formed in 2012 with the goal of "simpler, stronger authentication."

Leaders in technology from the FIDO Alliance (including Intel, Amazon, Google, Qualcomm, and Arm) created FDO. It is an open specification that defines an approach which combines 'plug and play'-like simplicity with the highest levels of security. It fully aligns with the zero-trust security framework in that neither the edge device nor the platform onto which it is being onboarded are trusted before onboarding takes place. FDO extends zero trust from the installation point back to the manufacturer.   

How FIDO Device Onboard (FDO) Works

Provisioning with FDO

The following steps are aligned with the numbers in the figure:

  1. At the manufacturing stage of the device (or later if preferred), the FDO software client is installed on the device. A trusted key (sometimes called an IDevID or LDevID) is also created inside the device to uniquely identify it. This key may be built into the silicon processor (or an associated Trusted Platform Module, known as a TPM) or protected within the file system. Other FDO credentials are also placed in the device. A digital proof of ownership, known as the Ownership Voucher (represented as the orange/black key shape in the figure), is created outside the device. This self-protected digital document can be transmitted as a text file. The Ownership Voucher allows the owner of the device to identify themselves during the onboarding process.  
  2. The device passes its way through the supply chain (for example, from distributor to VAR). The Ownership Voucher file follows a parallel path. 
  3. Once the target cloud or platform is selected by the device owner, the Ownership Voucher is sent to that cloud/platform. In turn, the Ownership Voucher is registered with the Rendezvous Server (RV). The RV acts in a comparable way to a Domain Name System (DNS) service. 
  4. When the time for device onboarding comes, the device is connected to the network and powered on. After the device boots up, it uses the Rendezvous Server (RV) to find its target cloud/platform. On-premises and cloud-based RVs can be programmed into the device. 
  5. Based on the information provided by the RV, the device contacts the cloud/platform. The device uses its trusted key to uniquely identify itself to the cloud/platform, and in return the cloud/platform identifies itself as the device owner using the Ownership Voucher. Next, the device and owner perform a cryptographic trick called a key exchange to create a secured, encrypted tunnel between them. 
  6. The cloud/platform can now download credentials and software agents over this encrypted tunnel (or whatever else is needed for correct device operation and management). FDO allows any kind of credential to be downloaded, so that solution owners do not have to change their existing solution when they adopt FDO.
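As a rough illustration, the flow above can be sketched in a few lines of Python. This is purely a toy: real FDO uses certificate chains, signed vouchers, and standardized key-exchange suites, and every name here (the manufacturer key, the owner URL, and so on) is invented for illustration.

```python
# Toy sketch of FDO onboarding steps 1-6, stdlib only -- NOT real FDO crypto.
import hashlib
import hmac
import secrets

# 1. Manufacturing: the device gets a unique secret key; an Ownership
#    Voucher (here just an HMAC-signed GUID) is created outside the device.
device_secret = secrets.token_bytes(32)
device_guid = secrets.token_hex(16)
mfg_key = secrets.token_bytes(32)
voucher = {"guid": device_guid,
           "sig": hmac.new(mfg_key, device_guid.encode(), hashlib.sha256).hexdigest()}

# 3. The owner registers the voucher with a rendezvous service (a dict
#    standing in for the DNS-like RV server).
rendezvous = {device_guid: "https://owner.example/onboard"}

# 4. On power-up, the device looks up its owner via the RV.
owner_url = rendezvous[device_guid]

# 5. Device and owner derive a shared session key (a toy stand-in for
#    FDO's key exchange: both sides contribute randomness, and the
#    device secret keys the result).
device_nonce = secrets.token_bytes(16)
owner_nonce = secrets.token_bytes(16)
session_key = hashlib.sha256(device_secret + device_nonce + owner_nonce).digest()

# 6. Credentials flow over the "encrypted tunnel" keyed by session_key;
#    here we just authenticate a payload with it.
payload = b"agent-config-v1"
mac = hmac.new(session_key, payload, hashlib.sha256).hexdigest()
print(owner_url)
```

The point of the sketch is the division of roles: the voucher travels outside the device, the rendezvous service only performs the introduction, and no trust exists until both sides prove themselves.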

Finally, having finished the FDO process, the device contacts its management platform, which is the platform that manages it for the rest of its lifecycle. FDO then lies dormant, although it can be re-awakened if needed, such as if the device is sold or repurposed. 

Dell NativeEdge FDO End-to-End Integration

Dell has integrated FDO into many elements of its NativeEdge solution, from its secure manufacturing facilities to the Dell Digital Locker used to store Ownership Vouchers to the NativeEdge Orchestrator. A full and detailed description of how FDO has been dovetailed into NativeEdge is available here.

The following diagram shows the FDO process applied within the NativeEdge environment. 

FDO Process Diagram

The numbered steps in the diagram are explained in detail in the following steps:

  1. In the procurement process, the user selects the device configuration and places an order in the Dell store.
  2. The Dell store receives the order and sends information to the Dell manufacturing facility.
  3. The Dell manufacturing facility builds the device and creates the Ownership Voucher.
  4. The following sub-steps occur simultaneously:
    1. The Dell manufacturing facility transfers the Ownership Voucher to the end user. This credential is passed through the supply chain, allowing the device owner to verify the device, and also giving the device a mechanism to verify its owner.
    2. The Dell manufacturing facility ships the NativeEdge Endpoint device to the user.
  5. The Ownership Voucher is delivered to the Edge Orchestrator that will control the device.
  6. The Edge Orchestrator now holds the device Ownership Voucher.  
  7. Non-IT-skilled staff unbox the device, cable it to the network, and power it on.
  8. Once connected to the network, the device contacts the Rendezvous Service configured in the device.
  9. The Rendezvous Service provides information to the device about which orchestrator it belongs to. The Rendezvous Server (which may be part of the NativeEdge Orchestrator or a separate system) is a service that acts as a rendezvous point between a newly powered-on device and the owner onboarding service.
  10. Once the device connects to the NativeEdge Orchestrator that holds its Ownership Voucher, it starts the Secure Component Verification (SCV) process, and if it passes, it starts the registration and onboarding. This secure onboarding process includes device and ownership identification as well as component validation. SCV is part of Dell Supply Chain Security (described here).
  11. Once the onboarding is finished, the device is automatically provisioned with the deployment of pre-defined templates and blueprints that have been assigned to the device.
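To illustrate the ownership check in step 10, here is a toy Python sketch. Real NativeEdge relies on FDO's signed Ownership Voucher plus Secure Component Verification; this stdlib-only version, with invented names such as `MFG_KEY` and the device GUIDs, only conveys the idea of matching a connecting device against a validly signed voucher.

```python
# Toy sketch: the orchestrator accepts a device only if its identity
# matches a voucher whose signature checks out. NOT real FDO/SCV crypto.
import hashlib
import hmac

MFG_KEY = b"factory-signing-key"  # invented stand-in for the factory's key

def make_voucher(guid):
    """Manufacturing side: sign the device GUID into a voucher."""
    sig = hmac.new(MFG_KEY, guid.encode(), hashlib.sha256).hexdigest()
    return {"guid": guid, "sig": sig}

def verify_device(voucher, claimed_guid):
    """Orchestrator side: check signature validity and GUID match."""
    expected = hmac.new(MFG_KEY, voucher["guid"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(voucher["sig"], expected) and \
        voucher["guid"] == claimed_guid

voucher = make_voucher("ne-endpoint-42")
print(verify_device(voucher, "ne-endpoint-42"))   # genuine device
print(verify_device(voucher, "impostor-device"))  # rejected
```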

Implementing FDO with the Linux Foundation Edge Open-Source Implementation

Software implementations of FDO consist of several functional elements, which are highlighted in the following generic FDO tool diagram. 

FDO with Linux

The numbered steps in the diagram are described in further detail as follows:

  1. The FDO client is placed on the device. 
  2. The Manufacturing Tool installs the device credentials and creates the Ownership Voucher. 
  3. The Rendezvous Server can be run in the cloud or on-premises.
  4. The FDO Platform Software Development Kit (SDK) is integrated into the target cloud or on-premise platform. 
  5. A Reseller tool can be used by the supply chain ecosystem to extend the Ownership Voucher’s cryptographic key. 
  6. Additionally, tools provide initial network access for the device (not shown). 
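The Reseller tool's voucher extension (element 5) can be pictured as appending entries to a signature chain, where each supply-chain hop signs over the previous chain head. The following toy Python sketch uses stdlib HMACs in place of FDO's public-key signatures; the hop names and keys are invented.

```python
# Toy sketch of Ownership Voucher extension along the supply chain.
# Real FDO chains public-key signatures; HMAC is used here purely
# so the example is self-contained.
import hashlib
import hmac

def extend_voucher(voucher, hop_name, hop_key):
    """Append a new chain entry signed over the previous chain head."""
    prev = voucher["chain"][-1]["sig"] if voucher["chain"] else voucher["guid"]
    sig = hmac.new(hop_key, (prev + hop_name).encode(),
                   hashlib.sha256).hexdigest()
    voucher["chain"].append({"hop": hop_name, "sig": sig})
    return voucher

voucher = {"guid": "device-1234", "chain": []}
voucher = extend_voucher(voucher, "distributor", b"dist-key")
voucher = extend_voucher(voucher, "var", b"var-key")
print([entry["hop"] for entry in voucher["chain"]])  # ['distributor', 'var']
```

Because each entry signs over its predecessor, the final owner can walk the chain back to the manufacturer, which is what lets FDO extend zero trust through the supply chain.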

Companies have a range of options when implementing the FDO software. They can develop the software themselves directly from the specification, use one of the commercially available implementations of FDO (for example, Red Hat), or they can use the Linux Foundation Edge implementation (described here). 

The FDO software within the Linux Foundation Edge has been developed and contributed by Intel, one of the authors of the FDO specification. The code is a mixture of C and Java (depending on which part of the FDO system is being implemented). It offers client software for Intel processors as well as others, including Arm.

NativeEdge - Delivering on the Edge Promise

With NativeEdge, Dell set a simple but critical goal: allow customers to deploy edge solutions quickly and securely and then manage them effectively throughout their lifetime. As with all simple goals, the challenge is in developing a solution that fully delivers on the promise. With NativeEdge, Dell has taken full advantage of FIDO Device Onboarding (FDO) together with the Linux Foundation Edge FDO project code to build on top of an industry onboarding technology that fully supports Dell's mission to simplify deployment and management at the edge while delivering the highest levels of security. NativeEdge is now available for customers to deploy at scale. 


2  Based on internal analysis, May 2023. The internal analysis consisted of internal modeling, customer interviews, and third-party environmental consultant review for methodology validation. 

  • AI
  • Edge
  • NativeEdge
  • DevEdgeOps

DevEdgeOps Defined

Nati Shalom

Mon, 25 Sep 2023 07:30:48 -0000


Read Time: 0 minutes

What is a DevEdgeOps Platform?

As the demand for edge computing continues to grow, organizations are seeking comprehensive solutions that streamline the development and operational processes in edge environments. This has led to the emergence of DevEdgeOps platforms, with specialized processes, tools, and frameworks designed to support the unique requirements of developing, deploying, and managing applications in edge computing architectures.

Edge Operations Shift Left

Figure 1: DevEdgeOps platform, as part of the Shift Left movement, focuses on pushing the operational challenges of edge computing to the development stage.

Shift Left refers to the practice of moving activities that were traditionally performed at the production stage to earlier in the development lifecycle. It is often applied in software development and DevOps to integrate testing, security, and other considerations earlier in the development process. Similarly, in the world of edge computing, we are moving operational tasks to an earlier stage, just as Shift Left did for DevOps. We call this new approach DevEdgeOps. 

A DevEdgeOps platform facilitates collaboration between developers and operations teams, addressing challenges like network connectivity, security, scalability, and edge deployment management. 

In this blog post, we introduce edge computing, its use cases, and architecture. We explore DevEdgeOps platforms, discussing their features and impact on edge computing development and operations.

Introduction to Edge Computing

Edge computing is a distributed computing paradigm that brings data processing and analysis closer to the source of data generation, rather than relying on centralized cloud or datacenter resources. It aims to address the limitations of traditional cloud computing, such as latency, bandwidth constraints, privacy concerns, and the need for real-time decision-making.

To learn more about the use cases, unique challenges, typical architecture and taxonomy refer to my previous post: Edge Computing in the Age of AI: An Overview

Figure 2: Innovation and momentum building at the edge. Source: Edge PowerPoint slides, Dell Technologies World 2023


DevEdgeOps is a term that combines development and operations (DevOps) with edge computing. It refers to the practices, methodologies, and tools used for managing and deploying applications in edge computing environments while leveraging the principles of DevOps. In other words, it aims to enable efficient development, deployment, and management of applications in edge computing environments, combining the agility and automation of DevOps with the unique requirements of edge deployments.

DevEdgeOps Platform

A DevEdgeOps platform provides developers and operations teams with a unified environment for managing the entire lifecycle of edge applications, from development and testing to deployment and monitoring. These platforms typically combine essential DevOps practices with features specific to edge computing, allowing organizations to build, deploy, and manage edge applications efficiently.

Key Features of DevEdgeOps Platforms

  • Centralized edge application management—DevEdgeOps platforms provide centralized management capabilities for edge applications. They offer dashboards, interfaces, and APIs that allow operations teams to monitor the health, performance, and status of edge deployments in real-time. These platforms may also include features for configuration management, remote troubleshooting, and log analysis, enabling efficient management of distributed edge nodes.
  • Integration with edge infrastructure—DevEdgeOps platforms often integrate with edge infrastructure components such as edge gateways, edge servers, or cloud-based edge computing services. This integration simplifies the deployment process by providing seamless connectivity between the development platform and the edge environment, facilitating the deployment and scaling of edge applications.
  • Edge-aware development tools—DevEdgeOps platforms offer development tools tailored for edge computing. These tools assist developers in optimizing their applications for edge environments, providing features such as code editors, debuggers, simulators, and testing frameworks specifically designed for edge scenarios.
  • CI/CD pipelines for edge deployments—DevEdgeOps platforms enable the automation of continuous integration and deployment processes for edge applications. They provide pre-configured pipelines and templates that consider the unique requirements of edge environments, including packaging applications for different edge devices, managing software updates, and orchestrating deployments to distributed edge nodes.
  • Edge simulation and testing capabilities—DevEdgeOps platforms often include simulation and testing features that help developers validate the functionality and performance of edge applications in various scenarios. These features simulate edge-specific conditions such as low-bandwidth networks, intermittent connectivity, and edge device failures, allowing developers to identify and address potential issues proactively.

Final Words

The emergence of new edge use cases that combine cloud-native infrastructure and AI introduces an increased operational complexity and demands more advanced application lifecycle management. Traditional management approaches may no longer be sustainable or efficient in addressing these challenges.

In my previous post, How the Edge Breaks DevOps, I referred to the unique challenges that the edge introduces and the need for a generic platform that will abstract the complexity associated with those. In this blog, I introduced DevEdgeOps platforms that combine essential DevOps practices with features specific to edge computing. I also described the set of features that are expected to be part of this new platform category. By embracing these approaches, organizations can effectively manage operational complexity and fully harness the potential of edge computing and AI.

  • AI
  • NativeEdge
  • edge inferencing

Inferencing at the Edge

Nati Shalom, Jeff White

Wed, 28 Feb 2024 13:03:00 -0000


Read Time: 0 minutes

Inferencing Defined

Inferencing, in the context of artificial intelligence (AI) and machine learning (ML), refers to the process of classifying or making predictions based on the input information. It involves using existing knowledge or learned knowledge to arrive at new insights or interpretations.

Figure 1. Inferencing use case – real-time image classification

The Need for Edge Inferencing 

Data growth, driven by data-intensive applications and the ubiquitous sensors that enable real-time insight, is roughly three times faster than the growth of access-network capacity. This drives data processing to the edge to keep pace and to reduce cloud cost and latency. S&P Global Market Intelligence estimates that by 2027, 62 percent of enterprise data will be processed at the edge.

Figure 2. Data growth driven by sensors, apps, and real-time insights driving AI computation to the edge

How Does Inferencing Work?

Inferencing is a crucial aspect of various AI applications, including natural language processing, computer vision, graph processing, and robotics.

The process of inferencing typically involves the following steps:

Figure 3. From training to inferencing

  1. Data input—The AI model receives input data, which could be text, images, audio, or any other form of structured or unstructured data.
  2. Feature extraction—For complex data like images or audio, the AI model may need to extract relevant features from the input data to represent it in a suitable format for processing.
  3. Pre-trained model—In many cases, AI models are pre-trained on large datasets using techniques like supervised learning or unsupervised learning. During this phase, the model learns patterns and relationships in the data.
  4. Applying learned knowledge—When new data is presented to the model for inferencing, it applies the knowledge it gained during the training phase to make predictions and classifications or generate responses.
  5. Output—The model produces an output based on its understanding of the input data.
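The five steps above can be condensed into a toy Python example: a hypothetical pre-"trained" linear model with fixed weights classifies a brightness feature vector. The weights, features, and labels are all invented for illustration and stand in for knowledge a real model would learn during training.

```python
# Minimal sketch of the inference pipeline: input -> features ->
# learned weights -> output. A toy model, not a real trained network.
import math

WEIGHTS = [0.8, -0.5]  # stand-in for parameters learned during training
BIAS = -0.3

def extract_features(pixels):
    # Step 2. Feature extraction: reduce raw input to mean and spread.
    mean = sum(pixels) / len(pixels)
    spread = max(pixels) - min(pixels)
    return [mean, spread]

def infer(pixels):
    # Step 4. Apply learned knowledge: weighted sum + sigmoid -> score.
    features = extract_features(pixels)
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    score = 1 / (1 + math.exp(-z))
    # Step 5. Output: a label plus a confidence score.
    return ("bright", score) if score > 0.5 else ("dark", score)

label, score = infer([0.9, 0.8, 0.95, 0.85])  # step 1: data input
print(label)  # bright
```

A production model differs only in scale: the feature extractor and weighted sum become layers of a neural network, but the data flow from input to output is the same.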

Edge Inferencing

Inference at the edge is a technique that enables data-gathering from devices to provide actionable intelligence using AI techniques rather than relying solely on cloud-based servers or data centers. It involves installing an edge server with an integrated AI accelerator (or a dedicated AI gateway device) close to the source of data, which results in much faster response time. This technique improves performance by reducing the time from input data to inference insight, and reduces the dependency on network connectivity, ultimately improving the business bottom line. Inference at the edge also improves security as the large dataset does not have to be transferred to the cloud. For more information, see Edge Inferencing is Getting Serious Thanks to New Hardware, What is AI Inference at the Edge?, and Edge Inference Concept Use Case Architecture.

In short, inferencing is the process of an AI model using what it has learned to give us useful answers quickly. This can happen at the edge or on a personal device, which maintains privacy and shortens response time.


Computational challenges

AI inferencing can be challenging because edge systems may not always have sufficient resources. To be more specific, here are some of the key challenges with edge inferencing:

  • Limited computational resources—Edge devices often have less processing power and memory compared to cloud servers. This may limit the complexity and size of AI models that can be deployed at the edge.
  • Model optimization—AI models may need to be optimized and compressed to run efficiently on resource-constrained edge devices while maintaining acceptable accuracy.
  • Model updates—Updating AI models at the edge can be more challenging than in a centralized cloud environment, as devices might be distributed across various locations and may have varying configurations.

Operational challenges

Handling a deep learning process involves continuous data pipeline management and infrastructure management. This leads to the following questions:

  • How do I manage the acquisition of models onto the edge platform, how do I stage a model, and how do I update it?
  • Do I have sufficient computational and network resources for the AI inference to execute properly?
  • How do I manage the drift and security (privacy protection and adversarial attack) of the model?
  • How do I manage the inference pipelines, insight pipelines, and datasets associated with the models?

Edge Inferencing by Example

To illustrate how inferencing works, we use TensorFlow as our deep learning framework.

TensorFlow is an open-source deep learning framework developed by the Google Brain team. It is widely used for building and training ML models, especially those based on neural networks.

The following example illustrates how to create a deep learning model in TensorFlow. The model takes a set of images and classifies them into separate categories, for example, sea, forest, or building.

We can create an optimized TensorFlow Lite version of that model with post-training quantization. The edge inferencing works using TensorFlow Lite as the underlying framework and the Google Edge Tensor Processing Unit (TPU) as the edge device.

This process involves the following steps:

  1. Create the model.
  2. Train the model.
  3. Save the model.
  4. Apply post-training quantization.
  5. Convert the model to TensorFlow Lite.
  6. Compile the TensorFlow Lite model using the Edge TPU compiler for Edge TPU devices, such as the Coral Dev Board (a Google development platform that includes the Edge TPU) or the TPU USB Accelerator (which adds Edge TPU capabilities to existing hardware by simply plugging in the USB device).
  7. Deploy the model at the edge to make inferences.

Figure 4. Image inferencing example using TensorFlow and TensorFlow Lite

You can read the full example in this post: Step by Step Guide to Make Inferences from a Deep Learning at the Edge | by Renu Khandelwal | Towards AI
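The post-training quantization applied in step 4 maps float32 weights to int8 values using a scale and zero-point. The following stdlib-only Python sketch shows that arithmetic directly; in practice the conversion is performed by TensorFlow's TFLiteConverter rather than code like this, and the weight values here are invented.

```python
# Sketch of int8 affine quantization: q = round(w / scale) + zero_point.
# Illustrates the arithmetic behind post-training quantization only.
def quantize(weights):
    # Assumes the weights span a nonzero range (hi > lo).
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0                 # map the range onto 256 bins
    zero_point = round(-128 - lo / scale)     # align lo with int8 minimum
    q = [max(-128, min(127, round(w / scale) + zero_point))
         for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate float weights for comparison.
    return [(v - zero_point) * scale for v in q]

weights = [-1.0, -0.2, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)
```

The quantized weights occupy a quarter of the memory of float32, and the worst-case rounding error stays below one quantization step (the scale), which is why accuracy typically degrades only slightly.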


Inferencing is like a magic show, where AI models surprise us with their clever responses. It's used in many exciting areas like talking to virtual assistants, recognizing objects in pictures, and making smart decisions in various applications.

Edge inferencing allows us to bring the AI processing closer to the source of the data and thus gain the following benefits:

  • Reduced latency—By performing inferencing locally on the edge device, the time required to send data to a centralized server and receive a response is significantly reduced. This is especially important in real-time applications where low latency is crucial, such as autonomous vehicles/systems, industrial automation, and augmented reality.
  • Bandwidth optimization—Edge inferencing reduces the amount of data that needs to be transmitted to the cloud, which helps optimize bandwidth usage. This is particularly beneficial in scenarios where network connectivity might be limited or costly.
  • Privacy and security—For certain applications, such as those involving sensitive data or privacy concerns, performing inferencing at the edge can help keep the data localized and minimize the risk of data breaches or unauthorized access.
  • Offline capability—Edge inferencing allows AI models to work even when there is no internet connection available. This is advantageous for applications that need to function in remote or offline environments.


What is AI Inference at the Edge? | Insights | Steatite

Step by Step Guide to Make Inferences from a Deep Learning at the Edge | by Renu Khandelwal | Towards AI

  • AI
  • Edge
  • NativeEdge

Edge Computing in the Age of AI: An Overview

Nati Shalom

Wed, 27 Sep 2023 05:19:01 -0000


Read Time: 0 minutes

Introduction to Edge Computing

Edge computing is a distributed computing paradigm that brings data processing and analysis closer to the source of data generation, rather than relying on centralized cloud or datacenter resources. It aims to address the limitations of traditional cloud computing, such as latency, bandwidth constraints, privacy concerns, and the need for real-time decision-making.

Figure 1: Innovation and momentum building at the edge. Source: Edge PowerPoint slides, Dell Technologies World 2023

Edge Computing Use Cases

Edge computing finds applications across various industries, including manufacturing, transportation, healthcare, retail, agriculture, and digital cities. It empowers real-time monitoring, control, and optimization of processes. This enables efficient data analysis and decision-making at the edge as it complements cloud computing by providing a distributed computing infrastructure. 

Here are some common examples:

  • Industrial internet of things (IIoT)—Edge computing enables real-time monitoring, control, and optimization of industrial processes. It can be used for predictive maintenance, quality control, energy management, and overall operational efficiency improvements.
  • Digital cities—Edge computing supports the development of intelligent and connected urban environments. It can be utilized for traffic management, smart lighting, waste management, public safety, and environmental monitoring.
  • Autonomous vehicles—Edge computing plays a vital role in autonomous vehicle technology. By processing sensor data locally, edge computing enables real-time decision-making, reducing reliance on cloud connectivity and ensuring quick response times for safe navigation.
  • Healthcare—Edge computing helps in remote patient monitoring, telemedicine, and real-time health data analysis. It enables faster diagnosis, personalized treatment, and improved patient outcomes.
  • Retail—Edge computing is used in retail for inventory management, personalized marketing, loss prevention, and in-store analytics. It enables real-time data processing for optimizing supply chains, improving customer experiences, and implementing dynamic pricing strategies.
  • Energy management—Edge computing can be employed in smart grids to monitor energy consumption, optimize distribution, and detect anomalies. It enables efficient energy management, load balancing, and integration of renewable energy sources.
  • Surveillance and security—Edge computing enhances video surveillance systems by enabling local video analysis, object recognition, and real-time threat detection. It reduces bandwidth requirements and enables faster response times for security incidents.
  • Agriculture—Edge computing is utilized in precision farming for monitoring and optimizing crop conditions. It enables the analysis of sensor data related to soil moisture, weather conditions, and crop health, allowing farmers to make informed decisions regarding irrigation, fertilization, and pest control.

These are just a few examples, and the applications of edge computing continue to expand as technology advances. The key idea is to process data closer to its source, reducing latency, improving reliability, and enabling real-time decision-making for time-sensitive applications.

The Challenges with Edge Computing

Edge computing brings numerous benefits, but it also presents a set of challenges that organizations need to address. The following image highlights some common challenges associated with edge computing:

Figure 2: Common challenges with edge computing. Source: Edge PowerPoint slides, Dell Technologies World 2023

Edge Computing Architecture Overview

The following diagram represents a typical edge computing architecture and its associated taxonomy.

Figure 3: A typical edge architecture

A typical edge computing architecture consists of several components working together to enable data processing and analysis at the edge. Here are the key elements you would find in such an architecture:

  • Edge devices—These are the devices deployed at the network edge, such as sensors, IoT devices, gateways, or edge servers. They collect and generate data from various sources and act as the first point of data processing.
  • Edge gateway—An edge gateway is a device that acts as an intermediary between edge devices and the rest of the architecture. It aggregates and filters data from multiple devices, performs initial pre-processing, and ensures secure communication with other components.
  • Edge computing infrastructure—This includes edge servers or edge nodes deployed at the edge locations. These servers have computational power, storage, and networking capabilities. They are responsible for running edge applications and processing data locally.
  • Edge software stack—The edge software stack consists of various software components installed on edge devices and servers. It typically includes operating systems, containerization technologies (such as Docker or Kubernetes), and edge computing frameworks for deploying and managing edge applications.
  • Edge analytics and AI—Edge analytics involves running data analysis and machine learning algorithms at the edge. This enables real-time insights and decision-making without relying on a centralized cloud infrastructure. Edge AI refers to the deployment of artificial intelligence algorithms and models at the edge for local inference and decision-making. The next section: Edge Inferencing describes the main use case in this regard.
  • Connectivity—Edge computing architectures rely on connectivity technologies to transfer data between edge devices, edge servers, and other components. This can include wired and wireless networks, such as Ethernet, Wi-Fi, cellular networks, or even specialized protocols for IoT devices.
  • Cloud or centralized infrastructure—While edge computing emphasizes local processing, there is often a connection to a centralized cloud or data center for certain tasks. This connection allows for remote management, data storage, more resource-intensive processing, or long-term analytics. Those resources are often broken down into two tiers – near and far edge:
    • Far edge: Far edge refers to computing infrastructure and resources that are located close to the edge devices or sensors generating the data. It involves placing computational power and storage capabilities in proximity to where the data is produced. Far edge computing enables real-time or low-latency processing of data, reducing the need for transmitting all the data to a centralized cloud or datacenter. 
    • Near edge: Near edge, sometimes referred to as the "cloud edge" or "remote edge," describes computing infrastructure and resources that are positioned farther away from the edge devices. In the near edge model, data is typically collected and pre-processed at the edge and then transmitted to a more centralized location, such as a cloud or datacenter, for further analysis, storage, or long-term processing.
  • Management and orchestration—To effectively manage the edge computing infrastructure, there is a need for centralized management and orchestration tools. These tools handle tasks like provisioning, monitoring, configuration management, software updates, and security management for the edge devices and servers.
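To make the data flow between these components concrete, here is a minimal Python sketch of the device-to-gateway-to-edge-node pipeline described above. The class and method names are illustrative only, not part of any NativeEdge API:

```python
from dataclasses import dataclass, field

@dataclass
class EdgeDevice:
    """Sensor or IoT device: the first point of data generation."""
    name: str

    def read(self) -> float:
        return 21.5  # stand-in for a real sensor reading

@dataclass
class EdgeGateway:
    """Aggregates and filters readings from many devices."""
    devices: list

    def collect(self) -> list:
        # Initial pre-processing: drop obviously invalid readings.
        return [r for r in (d.read() for d in self.devices) if r > 0]

@dataclass
class EdgeNode:
    """Edge server: runs local analysis, forwards only summaries."""
    gateway: EdgeGateway
    cloud_uplink: list = field(default_factory=list)

    def process(self) -> float:
        readings = self.gateway.collect()
        avg = sum(readings) / len(readings)
        # Only the aggregate crosses the network to the cloud tier,
        # keeping the raw data (and the latency) local.
        self.cloud_uplink.append(avg)
        return avg

node = EdgeNode(EdgeGateway([EdgeDevice("t1"), EdgeDevice("t2")]))
print(node.process())  # average of the local readings
```

Only the aggregate leaves the site; the raw readings stay local, which is the core latency and bandwidth argument for edge architectures.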

It is important to note that while the components and configurations of edge solutions may differ, the overall objective remains the same: to process and analyze data at the edge to achieve real-time insights, reduced latency, improved efficiency, and better overall performance.

Edge Inferencing

Data growth, driven by data-intensive applications and ubiquitous sensors that enable real-time insight, is outpacing access network capacity by a factor of three. This pushes data processing to the edge to keep up with the pace and to reduce cloud cost and latency. IDC estimates that by 2027, 62% of enterprise data will be processed at the edge!

Inference at the edge is a technique that enables data gathering from devices to provide actionable intelligence using AI techniques, rather than relying solely on cloud-based servers or data centers. It involves installing an edge server with an integrated AI accelerator (or a dedicated AI gateway device) close to the source of data, which results in much faster response times.1 This technique improves performance by reducing the time from input data to inference insight and reduces the dependency on network connectivity, ultimately improving the business bottom line.2 Inference at the edge also improves security, as large datasets do not have to be transferred to the cloud.3
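As a rough illustration (the "model" below is a stand-in threshold check, not a real accelerator API), inference at the edge means scoring data where it is produced and sending only actionable results upstream:

```python
# Sketch: inference at the edge. A lightweight model runs next to the
# data source; only actionable results, not raw data, leave the site.
# The model and threshold here are illustrative stand-ins.

def edge_infer(frame: list, threshold: float = 0.8) -> bool:
    """Stand-in for an accelerated model: flags a frame as anomalous."""
    score = max(frame)  # pretend "model" output
    return score > threshold

frames = [[0.1, 0.2], [0.4, 0.9], [0.3, 0.1]]
to_cloud = [i for i, f in enumerate(frames) if edge_infer(f)]
print(to_cloud)  # only frame 1 crosses the network
```

Two of the three frames never leave the edge, which is where the latency, bandwidth, and security benefits described above come from.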

Figure 4: Edge computing in the age of AI: An overview

Final Notes

Edge computing in the age of AI marks a significant paradigm shift in how data is processed, and insights are generated. By bringing AI to the edge, we can unlock real-time decision-making, improve efficiency, and enable innovations across various industries. While challenges exist, advancements in hardware, software, and security are paving the way for a future where intelligent edge devices are an integral part of our interconnected world. 

The inferencing market alone is expected to overtake training, with the highest growth at the edge, driving competition across the data center, near edge, and far edge.

For more information on how edge inferencing works, refer to the next post in this series: Inferencing at the Edge

Figure 5: Reference slide on edge type definitions





Dell NativeEdge Platform Empowers Secure Application Delivery

Jeroen Mackenbach

Tue, 08 Aug 2023 14:31:00 -0000




With an ever-evolving digital landscape and most edge use cases built around brownfield applications, IT operations have become challenging for many organizations, particularly when bringing workloads to the enterprise edge.

These edge operational challenges include:

  • Security of data and assets—Many of these assets have no user or identity awareness.
  • Proliferation of solution silos—Many solutions have a bespoke implementation.
  • Supporting distant locations—Many of these locations have no skilled IT staff.
  • Latency requirements—Many of these locations have limited bandwidth or are even completely disconnected.
  • Fragmented technology landscape—Many of these solutions have been implemented over years of technology evolution.
  • Environmental constraints—Many of these solutions require extended temperature, vibration, and shock resilience and have use-case specific regulatory requirements.

The edge lives outside data centers, in the real world where we live. It sits where data is captured, close to devices or endpoints, to generate immediate and actionable insights.

We are experiencing a perfect storm of innovation driven by an explosion of data (IoT, telemetry, video, and streaming data), technology capabilities (multicloud, AI/ML, heterogeneous computing, software-defined, and 5G), and the resulting business challenges (security, compliance, productivity, and customer experience).

Security at these locations requires a different approach:

  • Security breaches can have a major effect on human well-being as they often impact essential infrastructure and services, such as power grids, housing, retail, transportation, schools, and hospitals.
  • These failures can have a direct impact on everyday business operations and equipment, such as point of sales, advanced optical inspection (AOI), overall equipment effectiveness (OEE), energy efficiency, telco base station monitoring, and patient care.
  • Edge infrastructure requires the highest level of data security. Network devices are often located at dark sites without Internet access, yet they handle highly confidential data, such as patient records, which are bound to compliance and regulatory constraints.

Dell is committed to assisting customers with the simplification of edge operations as the demand for secure and efficient application delivery has become paramount. The Dell NativeEdge platform leverages the power of edge computing to revolutionize application delivery in a secure environment.

NativeEdge provides a unique set of assets in an edge operations software platform which allows IT operations to deliver application orchestration, multicloud connectivity, zero-touch onboarding, a zero-trust security approach, and infrastructure management.


Application Orchestration

NativeEdge provides a standardized framework for defining and deploying applications. This simplifies the management and scalability of complex edge environments while ensuring consistency and reliability in application orchestration.
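The idea of a standardized application definition can be sketched as follows. The blueprint schema here is hypothetical, purely to illustrate validating a declarative definition before deployment; it is not NativeEdge's actual blueprint format:

```python
# Sketch of a standardized application definition, validated before
# deployment. The schema is hypothetical and only illustrates the idea
# of a consistent, declarative blueprint for edge applications.

REQUIRED = {"name", "image", "replicas"}

def validate_blueprint(bp: dict) -> list:
    """Return a list of problems; an empty list means deployable."""
    problems = [f"missing field: {k}" for k in sorted(REQUIRED - bp.keys())]
    if bp.get("replicas", 1) < 1:
        problems.append("replicas must be >= 1")
    return problems

bp = {"name": "vision-app", "image": "registry.local/vision:1.2", "replicas": 2}
print(validate_blueprint(bp))  # [] -> consistent definition, safe to deploy
```

Because every application is described in the same declarative form, the same validation and deployment logic applies at every site, which is what makes large fleets manageable.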

Zero-Touch Provisioning

NativeEdge zero-touch provisioning is a feature that allows for the automatic and seamless deployment of NativeEdge Endpoints (OptiPlex, gateways, and PowerEdge devices) without manual intervention. It enables quick and effortless setup by leveraging settings preconfigured at order and manufacturing time, eliminating the need for on-site configuration and reducing deployment time and effort.
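Conceptually, the zero-touch flow looks something like the sketch below: a device identifies itself at first boot, and the control plane applies a profile preconfigured at order and manufacturing time. All names are illustrative, not part of the NativeEdge API:

```python
# Sketch of a zero-touch provisioning handshake. The device presents
# its identity at first boot; the control plane matches it against a
# profile created during order/manufacturing, so no one configures
# anything on site. Names and fields are illustrative only.

PRECONFIGURED = {  # populated during order/manufacturing, not on site
    "SN-001": {"site": "store-42", "role": "gateway"},
}

def onboard(serial: str) -> dict:
    profile = PRECONFIGURED.get(serial)
    if profile is None:
        raise PermissionError(f"unknown device {serial}: onboarding refused")
    # No on-site configuration: the matched profile is applied as-is.
    return {"serial": serial, **profile, "status": "provisioned"}

print(onboard("SN-001")["status"])  # provisioned, with zero manual steps
```

Note that an unknown serial is refused outright, which is the provisioning-time face of the zero-trust approach: a device is onboarded only if its identity was established before it ever reached the site.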


Multicloud Connectivity

NativeEdge multicloud capabilities allow NativeEdge Endpoints to connect and integrate with multiple cloud platforms. These capabilities enable organizations to leverage various cloud services and resources, such as storage, computing power, and analytics, across different cloud providers, which enhances flexibility and scalability in edge computing deployments.

Infrastructure Management

NativeEdge infrastructure management capabilities provide a comprehensive set of tools and features that enable centralized control and monitoring of NativeEdge Endpoints. These include remote device management, software updates, configuration management, and performance monitoring—all of which enhance efficiency and simplify the management of edge computing infrastructure.

Zero Trust

Zero trust is a security framework, defined in National Institute of Standards and Technology Special Publication (NIST SP) 800-207, that challenges the traditional perimeter-based approach. It assumes that no user or device should be inherently trusted, requiring continuous verification and authentication of every access request. It aims to improve cybersecurity by minimizing risks and enforcing strict access controls regardless of location or network. A zero-trust solution starts with the seven pillars of security as defined by the Department of Defense (DoD): device trust, user trust, transport and session trust, data trust, and software trust, plus two cross-cutting layers that provide visibility and analytics, and automation and orchestration. Across these pillars, the DoD model defines 45 capabilities, which break down further into 152 zero-trust activities.
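The "never trust, always verify" rule at the heart of NIST SP 800-207 can be sketched as a per-request policy check that denies by default. The trust signals below are illustrative stand-ins for the pillars, not a real policy engine:

```python
# Sketch of the zero-trust rule: no request is inherently trusted, and
# every access is verified against several trust signals (pillars) and
# denied by default. The signal names are illustrative only.

def authorize(request: dict) -> bool:
    checks = (
        request.get("user_authenticated", False),   # user trust
        request.get("device_compliant", False),     # device trust
        request.get("session_encrypted", False),    # transport/session trust
    )
    # Deny by default: every pillar must pass, on every single request.
    return all(checks)

req = {"user_authenticated": True, "device_compliant": True,
       "session_encrypted": True}
print(authorize(req))                                 # all checks pass
print(authorize({**req, "device_compliant": False}))  # one failed pillar -> denied
```

The key design choice is that the check runs on every request, not once at the network perimeter; a single failed signal, such as a non-compliant device, is enough to deny access.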

Zero-Trust Security pillars


NativeEdge is a powerful and secure edge computing application delivery solution that combines features like zero-touch provisioning, multicloud capabilities, and robust infrastructure management. It provides seamless edge, core, and cloud deployment, integration with multiple cloud platforms, and centralized control, which brings scale to edge operations.

Watch the overview video:

Video on securing edge with zero trust

Curious to know more about NativeEdge capabilities? See Edge Security Essentials: Edge Security and How Dell NativeEdge Can Help, or visit the Dell Technologies Solutions Info Hub for NativeEdge.
