Dell NativeEdge Speeds Edge Deployments with FIDO Device Onboard (FDO)
Tue, 26 Sep 2023 19:15:00 -0000
Edge computing is generally defined as “a distributed computing paradigm that brings computation and data storage closer to the sources of data.”1 The goal of this approach is to improve response times and save bandwidth.
Beyond this definition, edge computing is critical for enterprises to drive innovation and business outcomes. Existing approaches to the edge have led to technology silos, unscalable operations, poor infrastructure utilization, and inflexible legacy ecosystems. The massive proliferation of diverse edge devices has also increased exposure to cyberattacks. Dell has addressed these challenges with the new NativeEdge solution, a key feature of which is the ability to deploy edge devices swiftly and securely. At the root of this capability is FIDO Device Onboard (FDO), an open standard defined by technology leaders within the FIDO Alliance to automatically and securely onboard devices within edge deployments as diverse as retail, manufacturing, and energy. The FDO implementation used by Dell is based on the open-source implementation that has been contributed to the Linux Foundation Edge project by Intel.
The integration of the FIDO Device Onboard (FDO) with the Dell NativeEdge solution helps organizations to deploy and manage infrastructure at the edge by utilizing zero-trust principles and a streamlined supply chain to secure the edge environment at scale. “Intel developed and contributed the base technology that became FDO. Our work with Dell and the FIDO Alliance is a great example of the power of collaboration to address the continuously evolving threat landscape faced by our edge customers,” said Sunita Shenoy, Senior Director, Edge Technology Product Management at Intel.
“Edge computing is transforming industries and we are delighted that FDO is a key component in Dell's innovative NativeEdge platform,” said Andrew Shikiar, executive director and CMO of the FIDO Alliance. See the press release here: FIDO Device Onboard (FDO) Certification Program is Launched to Enable Faster, More Secure, Deployments of Edge Nodes and IoT Devices.
In this blog, we will look at the challenges at the edge and three key elements that seek to address them: firstly, Dell’s NativeEdge solution (described here); secondly, the FIDO Device Onboard (FDO) standard; and lastly, the Linux Foundation Edge open-source software implementation of FDO (described here).
Business Challenges at the Edge
Recent years have seen a significant shift towards the edge, as more companies deploy devices that increase the demand for more data and analytics. By deploying devices to the edge, companies can reduce latency, improve the speed of data processing, and enhance security. Further, deploying devices at the edge can also help reduce bandwidth consumption and minimize the costs that are associated with transmitting large amounts of data to the cloud. The deployment of devices at the edge has therefore become a crucial component of modern technology infrastructure, enabling businesses to improve their operational efficiency and deliver better customer experiences.
The Dell NativeEdge Solution
The NativeEdge operations software platform enables organizations to securely deploy and manage infrastructure at the edge. NativeEdge supports a wide range of NativeEdge Endpoints. It uses zero-trust principles, combined with a holistic factory integration approach and application orchestration, to create a secure edge environment. It can start small with a single device and scale out as needed, and it can be deployed centrally or globally, regardless of network connectivity challenges, absence of technical staff, or facility environment.
Driving Improved Return on Investment at the Edge
In an internal Dell analysis2 combining return-on-investment modeling, nearly a hundred Dell customer interviews, and a third-party environmental consultant review to validate the methodology, Dell examined the potential economic impact of running NativeEdge across 25 facilities of a composite manufacturing company.
The study found that after three years, the company could expect to see the following benefits:
- Up to 132-percent return on investment for Dell NativeEdge platform costs
- An average time saving of 20 minutes per month for every edge infrastructure asset managed with NativeEdge
- Savings on transportation costs by decreasing the need for site-support dispatches, helping to reduce travel time, and eliminating up to 14 metric tons of carbon dioxide emissions
Key Elements of Dell NativeEdge
As these figures show, NativeEdge is designed to address the major aspects of managing an edge system. Two of these aspects are closely linked: zero-touch provisioning (also known as onboarding) and zero-trust security, a key tenet of which is “Never trust, always verify.”
Automating the Onboarding Process with FIDO Device Onboard (FDO)
Traditionally, the installation of edge devices has been a cumbersome and time-consuming process. Edge installers, who could be individuals such as retail store managers or factory plant managers, may lack the expertise to manage complex edge devices and operating system installations. This highlights the importance of ensuring that edge devices are user-friendly and straightforward to deploy, as mistakes in manual onboarding can lead to security issues as well as service outages.
With NativeEdge, anyone can easily set up a NativeEdge Endpoint by simply plugging in a network cable, powering on the device, and stepping away. By leveraging the FIDO Alliance’s open standard known as FIDO Device Onboard Specification 1.1, Dell assures a streamlined installation process that is as easy as possible. The FIDO Alliance is a standards organization with over 250 members that was formed in 2012 with the goal of “simpler, stronger authentication.”
Leaders in technology from the FIDO Alliance (including Intel, Amazon, Google, Qualcomm, and Arm) created FDO. It is an open specification that defines an approach which combines 'plug and play'-like simplicity with the highest levels of security. It fully aligns with the zero-trust security framework in that neither the edge device nor the platform onto which it is being onboarded are trusted before onboarding takes place. FDO extends zero trust from the installation point back to the manufacturer.
How FIDO Device Onboard (FDO) Works
The following steps are aligned with the numbers in the figure:
- At the manufacturing stage of the device (or later if preferred), the FDO software client is installed on the device. A trusted key (sometimes called an IDevID or LDevID) is also created inside the device to uniquely identify it. This key may be built into the silicon processor (or the associated Trusted Platform Module, known as a TPM) or protected within the file system. Other FDO credentials are also placed in the device. A digital proof of ownership, known as the Ownership Voucher (represented as the orange/black key shape in the figure), is created outside the device. This self-protected digital document can be transmitted as a text file. The Ownership Voucher allows the owner of the device to identify themselves during the onboarding process.
- The device passes its way through the supply chain (for example, from distributor to VAR). The Ownership Voucher file follows a parallel path.
- Once the target cloud or platform is selected by the device owner, the Ownership Voucher is sent to that cloud/platform. In turn, the Ownership Voucher is registered with the Rendezvous Server (RV). The RV acts in a comparable way to a Domain Name System (DNS) service.
- When the time for device onboarding comes, the device is connected to the network and powered on. After the device boots up, it uses the Rendezvous Server (RV) to find its target cloud/platform. Both on-premises and cloud-based RVs can be programmed into the device.
- Based on the information provided by the RV, the device contacts the cloud/platform. The device uses its trusted key to uniquely identify itself to the cloud/platform, and in return the cloud/platform identifies itself as the device owner using the Ownership Voucher. Next, the device and owner perform a cryptographic key exchange to create a secure, encrypted tunnel between them.
- The cloud/platform can now download credentials and software agents over this encrypted tunnel (or whatever else is needed for correct device operation and management). FDO allows any kind of credential to be downloaded, so that solution owners do not have to change their existing solution when they adopt FDO.
Finally, with the FDO process complete, the device contacts its management platform, which manages it for the rest of its lifecycle. FDO then lies dormant, although it can be reawakened if needed, for example if the device is sold or repurposed.
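The flow above can be sketched end to end in code. The following is a simplified model, not the real FDO protocol: it substitutes toy HMAC proofs and a small-prime Diffie-Hellman exchange for FDO's signature-based attestation and ECDH, and all names and data structures are illustrative.

```python
import hashlib, hmac, secrets

# Toy Diffie-Hellman group (2**127 - 1 is a Mersenne prime); real FDO
# uses ECDH and asymmetric device keys rather than a shared HMAC secret.
P, G = (1 << 127) - 1, 5

def manufacture(guid):
    """Step 1: install a device secret and emit an Ownership Voucher."""
    secret = secrets.token_bytes(32)
    device = {"guid": guid, "secret": secret}
    # Toy voucher: carries the verification key directly; a real voucher
    # holds a signed chain of ownership instead.
    voucher = {"guid": guid, "verify_key": secret}
    return device, voucher

def register(rendezvous, voucher, owner_url):
    """Step 3: the owner registers its voucher with the Rendezvous Server."""
    rendezvous[voucher["guid"]] = owner_url

def onboard(device, rendezvous, owners):
    """Steps 4-6: DNS-like lookup, mutual authentication, key exchange."""
    owner_url = rendezvous[device["guid"]]
    voucher = owners[owner_url]["voucher"]
    # The owner challenges the device; the device proves it holds the key
    # installed at manufacture (toy HMAC in place of a signature).
    nonce = secrets.token_bytes(16)
    proof = hmac.new(device["secret"], nonce, hashlib.sha256).digest()
    expected = hmac.new(voucher["verify_key"], nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(proof, expected):
        raise ValueError("device identity check failed")
    # Ephemeral key exchange: both sides derive the same tunnel key.
    a = secrets.randbelow(P - 2) + 2          # device's ephemeral secret
    b = secrets.randbelow(P - 2) + 2          # owner's ephemeral secret
    device_shared = pow(pow(G, b, P), a, P)
    owner_shared = pow(pow(G, a, P), b, P)
    assert device_shared == owner_shared
    # Credentials and agents would now flow over this encrypted tunnel.
    return hashlib.sha256(device_shared.to_bytes(16, "big")).digest()

device, voucher = manufacture("dev-001")
rv = {}
owners = {"https://orchestrator.example": {"voucher": voucher}}
register(rv, voucher, "https://orchestrator.example")
tunnel_key = onboard(device, rv, owners)
```

Note how the Rendezvous Server in this sketch is nothing more than a GUID-to-URL lookup table, which is exactly the DNS-like role the text describes.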
Dell NativeEdge FDO End-to-End Integration
Dell has integrated FDO into many elements of its NativeEdge solution, from its secure manufacturing facilities, through the Dell Digital Locker used to store Ownership Vouchers, to the NativeEdge Orchestrator. A full and detailed description of how FDO has been dovetailed into NativeEdge is available here.
The following diagram shows the FDO process applied within the NativeEdge environment.
The numbered steps in the diagram are explained in detail in the following list:
- In the procurement process, the user selects the device configuration and places an order in the Dell store.
- The Dell store receives the order and sends information to the Dell manufacturing facility.
- The Dell manufacturing facility builds the device and creates the Ownership Voucher.
- The following sub-steps occur simultaneously:
  - The Dell manufacturing facility transfers the Ownership Voucher to the end user. This credential is passed through the supply chain, allowing the device owner to verify the device, and also giving the device a mechanism to verify its owner.
  - The Dell manufacturing facility ships the NativeEdge Endpoint device to the user.
- The Ownership Voucher is delivered to the Edge Orchestrator that will control the device.
- The Edge Orchestrator now holds the device Ownership Voucher.
- Staff without IT skills unbox the device, cable it to the network, and power it on.
- Once connected to the network, the device contacts the Rendezvous Service configured in the device.
- The Rendezvous Service provides information to the device about which orchestrator it belongs to. The Rendezvous server (which may be part of the NativeEdge Orchestrator or a separate system) is a service that acts as a rendezvous point between a newly powered-on device and the owner onboarding service.
- Once the device connects to the NativeEdge Orchestrator that holds its Ownership Voucher, it starts the Secure Component Verification (SCV) process and, if verification passes, proceeds with registration and onboarding. This secure onboarding process includes device and ownership identification as well as component validation. SCV is part of Dell Supply Chain Security (described here).
- Once the onboarding is finished, the device is automatically provisioned with the deployment of pre-defined templates and blueprints that have been assigned to the device.
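The gated sequence above (voucher held, SCV passed, then registration and blueprint deployment) can be sketched as a small check. All names and data structures here are illustrative, not the actual NativeEdge API.

```python
# Hypothetical model of the gated onboarding steps described above: a
# device is rejected unless the orchestrator holds its Ownership Voucher
# and the components found on the device match the factory manifest.
def onboard_endpoint(device, orchestrator):
    voucher = orchestrator["vouchers"].get(device["guid"])
    if voucher is None:
        return "rejected: no Ownership Voucher held for this device"
    # SCV: compare the components discovered on the device against the
    # manifest captured at the factory.
    if sorted(device["components"]) != sorted(voucher["component_manifest"]):
        return "rejected: SCV component mismatch"
    # Registration and onboarding succeed; provision assigned blueprints.
    deployed = list(orchestrator["blueprints"].get(device["guid"], []))
    return {"status": "provisioned", "blueprints": deployed}

orchestrator = {
    "vouchers": {"ne-42": {"component_manifest": ["nic", "tpm", "ssd"]}},
    "blueprints": {"ne-42": ["retail-pos-stack"]},
}
device = {"guid": "ne-42", "components": ["tpm", "nic", "ssd"]}
result = onboard_endpoint(device, orchestrator)
```

The key design point mirrored here is that provisioning is strictly downstream of verification: no blueprint is deployed to a device whose identity and components have not been checked.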
Implementing FDO with the Linux Foundation Edge Open-Source Implementation
Software implementations of FDO consist of several functional elements, which are highlighted in the following generic FDO tool diagram.
The numbered steps in the diagram are described in further detail as follows:
- The FDO client is placed on the device.
- The Manufacturing Tool installs the device credentials and creates the Ownership Voucher.
- The Rendezvous Server can be run in the cloud or on-premise.
- The FDO Platform Software Development Kit (SDK) is integrated into the target cloud or on-premise platform.
- A Reseller tool can be used by the supply chain ecosystem to extend the Ownership Voucher’s cryptographic key chain.
- Additionally, tools provide initial network access for the device (not shown).
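The Reseller tool's voucher extension (element 5) can be modeled as an append-only chain. This toy uses plain hashes where real FDO uses signed voucher entries, and all names are illustrative.

```python
import hashlib

# Toy Ownership Voucher chain: each supply-chain hop appends an entry
# binding the next owner's key, so the final owner can verify an
# unbroken chain back to the manufacturer. Real FDO entries are signed
# structures; hashes stand in for signatures here.
def new_voucher(guid, manufacturer_key):
    head = hashlib.sha256(f"{guid}:{manufacturer_key}".encode()).hexdigest()
    return {"guid": guid, "entries": [head]}

def extend(voucher, next_owner_key):
    """Reseller tool: chain the next owner's key onto the voucher."""
    prev = voucher["entries"][-1]
    entry = hashlib.sha256(f"{prev}:{next_owner_key}".encode()).hexdigest()
    voucher["entries"].append(entry)

def verify(voucher, guid, manufacturer_key, owner_keys):
    """Recompute the chain; any tampered hop breaks every later entry."""
    expect = hashlib.sha256(f"{guid}:{manufacturer_key}".encode()).hexdigest()
    chain = [expect]
    for key in owner_keys:
        expect = hashlib.sha256(f"{expect}:{key}".encode()).hexdigest()
        chain.append(expect)
    return chain == voucher["entries"]

# A voucher following the distributor-to-VAR path described earlier.
v = new_voucher("dev-001", "mfg-pubkey")
extend(v, "distributor-pubkey")
extend(v, "var-pubkey")
```

Because each entry is derived from the one before it, substituting a rogue key anywhere in the supply chain invalidates every subsequent entry, which is what lets the voucher travel as a plain file yet remain tamper-evident.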
Companies have a range of options when implementing the FDO software. They can develop the software themselves directly from the specification, use one of the commercially available implementations of FDO (for example, Red Hat), or they can use the Linux Foundation Edge implementation (described here).
The FDO software within the Linux Foundation Edge has been developed and contributed by Intel, one of the authors of the FDO specification. The code is a mixture of C and Java (depending on which part of the FDO system is being implemented). It offers client software for both Intel processors and others, including Arm.
NativeEdge - Delivering on the Edge Promise
With NativeEdge, Dell set a simple but critical goal: allow customers to deploy edge solutions quickly and securely and then manage them effectively throughout their lifetime. As with all simple goals, the challenge is in developing a solution that fully delivers on the promise. With NativeEdge, Dell has taken full advantage of FIDO Device Onboard (FDO) together with the Linux Foundation Edge FDO project code to build on an industry onboarding technology that fully supports Dell’s mission to simplify deployment and management at the edge while delivering the highest levels of security. NativeEdge is now available for customers to deploy at scale.
1 https://en.wikipedia.org/wiki/Edge_computing
2 Based on internal analysis, May 2023. The internal analysis consisted of internal modeling, customer interviews, and third-party environmental consultant review for methodology validation.
Related Blog Posts
Dell NativeEdge Platform Empowers Secure Application Delivery
Tue, 08 Aug 2023 14:31:00 -0000
Introduction
With an ever-evolving digital landscape and most edge use cases built around brownfield applications, IT operations have become a challenging matter for many organizations, particularly when bringing workloads to the enterprise edge.
These edge operational challenges include:
- Security of data and assets—Many of these assets have no user or identity awareness.
- Proliferation of solution silos—Many solutions have a bespoke implementation.
- Supporting distant locations—Many of these locations have no skilled IT staff.
- Latency requirements—Many of these locations have limited bandwidth or are even completely disconnected.
- Fragmented technology landscape—Many of these solutions have been implemented over years of technology evolution.
- Environmental constraints—Many of these solutions require extended temperature, vibration, and shock resilience and have use-case specific regulatory requirements.
Edge lives outside data centers in the real world where we live. It is located where data is captured close to devices or endpoints, to generate immediate and actionable insights.
We are experiencing a perfect storm of innovation driven by an explosion of data (IoT, telemetry, video, and streaming data), technology capabilities (multicloud, AI/ML, heterogeneous computing, software-defined, and 5G), and the resulting business challenges (security, compliance, productivity, and customer experience).
Security that is required at these locations needs a different approach:
- Security breaches can have a major effect on human well-being as they often impact essential infrastructure and services, such as power grids, housing, retail, transportation, schools, and hospitals.
- These failures can have a direct impact on everyday business operations and equipment, such as point of sales, advanced optical inspection (AOI), overall equipment effectiveness (OEE), energy efficiency, telco base station monitoring, and patient care.
- Edge infrastructure requires the highest level of data security. Devices are often located at dark sites without Internet access, and the data they hold, such as patient records bound by compliance and regulatory constraints, requires the highest level of confidentiality.
Dell is committed to assisting customers with the simplification of edge operations as the demand for secure and efficient application delivery has become paramount. The Dell NativeEdge platform leverages the power of edge computing to revolutionize application delivery in a secure environment.
NativeEdge provides a unique set of assets in an edge operations software platform which allows IT operations to deliver application orchestration, multicloud connectivity, zero-touch onboarding, a zero-trust security approach, and infrastructure management.
Application Orchestration
NativeEdge provides a standardized framework for defining and deploying applications. This simplifies the management and scalability of complex edge environments while ensuring consistency and reliability in application orchestration.
Zero-Touch Provisioning
NativeEdge zero-touch provisioning is a feature that allows for the automatic and seamless deployment of NativeEdge Endpoints (OptiPlex, Gateways, and PowerEdge devices) without manual intervention. It enables quick and effortless setup by leveraging order and manufacturing preconfigured settings, eliminating the need for on-site configuration, and reducing deployment time and effort.
Multicloud
NativeEdge multicloud capabilities allow NativeEdge Endpoints to connect and integrate with multiple cloud platforms. It enables organizations to leverage various cloud services and resources, such as storage, computing power, and analytics, across different cloud providers, which enhances flexibility and scalability in edge computing deployments.
Infrastructure Management
NativeEdge infrastructure management capabilities provide a comprehensive set of tools and features that enable centralized control and monitoring of NativeEdge Endpoints. It includes functions such as remote device management, software updates, configuration management, and performance monitoring—all of which enhance efficiency and simplify the management of edge computing infrastructure.
Zero Trust
Zero trust is a security framework, described in National Institute of Standards and Technology Special Publication (NIST SP) 800-207, that challenges the traditional perimeter-based approach. It assumes that no user or device should be inherently trusted, requiring continuous verification and authentication of every access request. It aims to improve cybersecurity by minimizing risks and enforcing strict access controls regardless of location or network. A zero-trust solution starts with the seven pillars of security as defined by the Department of Defense (DoD): device trust, user trust, transport and session trust, data trust, and software trust, plus two layers that provide visibility and analytics, and automation and orchestration. Together, the pillars comprise 45 capabilities, which break down into 152 zero-trust activities.
Conclusion
NativeEdge is a powerful and secure edge computing application delivery solution that combines features like zero-touch provisioning, multicloud capabilities, and robust infrastructure management. It provides seamless edge, core, and cloud deployment, integration with multiple cloud platforms, and centralized control, which brings scale to edge operations.
Curious to know more about NativeEdge capabilities? See Edge Security Essentials: Edge Security and How Dell NativeEdge Can Help, or visit Dell.com/NativeEdge and Dell Technologies Solutions Info Hub for NativeEdge.
Delivering an AI-Powered Computer Vision Application with NVIDIA DeepStream and Dell NativeEdge
Sun, 19 May 2024 10:28:59 -0000
The Dell NativeEdge platform, with its latest 2.0 update, brings to the table an array of features designed to streamline edge operations. From centralized management to zero-touch deployment, it ensures that businesses can deploy and manage their edge solutions with unprecedented ease. The addition of blueprints for key Independent Software Vendors (ISVs) and AI applications gives users the ability to get a fully automated end-to-end stack, from bare metal to a production-grade vertical service, in retail, manufacturing, energy, and other industries. In essence, it brings the best of both worlds: an open platform that is not bound to any specific ISV or cloud provider while preserving the simplicity of a vertical solution.
This blog describes the specific integration of NativeEdge with NVIDIA DeepStream to enable developers to build AI-powered, high-performance, low-latency video analytics applications and services.
Introduction to NVIDIA DeepStream
NVIDIA DeepStream is a comprehensive streaming analytics toolkit designed to facilitate the development and deployment of AI-powered applications. It is built on the GStreamer framework and is part of NVIDIA’s Metropolis platform. Its main features include:
- Input sources—DeepStream accepts streaming data from various sources, including USB/CSI cameras, video files, or RTSP streams.
- AI and computer vision—It utilizes AI and computer vision to analyze streaming data and extract actionable insights.
- SDK components—The core SDK includes hardware accelerator plugins that leverage accelerators such as VIC, GPU, DLA, NVDEC, and NVENC for compute-intensive operations.
- Edge-to-cloud deployment—Applications can be deployed on embedded edge devices running the Jetson platform or on larger edge or data center GPUs such as the NVIDIA T4.
- Security protocols—It offers security features like SASL/Plain authentication and two-way TLS authentication for secure communication between edge and cloud.
- CUDA-X integration—It builds on top of NVIDIA libraries from the CUDA-X stack, including CUDA, TensorRT, and NVIDIA Triton Inference Server, abstracting these libraries in DeepStream plugins.
- Containerization—Its applications can be containerized using NVIDIA Container Runtime, with containers available on NGC, NVIDIA’s GPU cloud registry.
- Programming flexibility—Developers can create applications in C, C++, or Python, and for those preferring a low-code approach, DeepStream offers Graph Composer.
- Real-time analytics—It is optimized for real-time analytics on video, image, and sensor data, providing insights at the edge.
The key benefit of the platform is that it is optimized for NVIDIA’s hardware, providing efficient video decoding and AI inferencing. It can also handle multiple video streams in real time, making it suitable for large-scale applications.
NativeEdge Integration with NVIDIA DeepStream
Deploying an AI application at the edge involves configuring and managing potentially many versions of hardware drivers, applying specific NVIDIA configuration to the containerization platform, and deploying the DeepStream stack with specific AI inferencing models. NativeEdge uses a blueprint model to automate the operational aspect of this integration. This blueprint is delivered as part of the NativeEdge solution catalog. It streamlines the entire deployment process in a way that is consistent with other solutions in the NativeEdge portfolio.
A Deeper Look Into the NativeEdge Blueprint for NVIDIA DeepStream
DeepStream is packaged as a cloud-native container and as such it relies on a container platform being available at the edge as a prerequisite. NativeEdge enables two methods to deliver workloads at the edge: a packaged virtual machine (VM), which provides full isolation, or a bare-metal container, which maximizes performance. Once the VM or container is provisioned on the edge, NativeEdge pulls the relevant stack, configures the GPU passthrough, and starts running the target model to enable the inferencing process.
Deployment Configuration
The configuration steps allow the user to select the GPU target, deployment mode, and the actual artifacts without having to customize the automation blueprint for each case.
GPU Target
Select the GPU targets for the solution. The target can range from the NVIDIA A2 to the L4, depending on the required footprint and performance. A comparison table that provides guidance on each GPU's capabilities can be found here.
Deployment Mode
The deployment mode input specifies how the blueprint should configure the DeepStream container. There are three deployment modes currently supported: demo, custom model, and developer.
Demo
This mode deploys the DeepStream container and immediately starts a Triton inferencing pipeline based on an embedded demonstration video file.
Artifacts used in this mode:
- The base DeepStream container
- An archive file containing pre-built Triton inferencing models
Custom Model
This mode deploys the DeepStream container along with a customer’s bespoke pipeline configuration. In addition, the customer has the option to automatically run the pipeline as soon as the DeepStream container is deployed, without any further user intervention.
Artifacts used in this mode:
- The base DeepStream container
- An archive file containing the customer’s pipeline configuration plus any other files or data required
- An archive file containing pre-built Triton inferencing models (optional)
Developer
This mode deploys the DeepStream container and forces it to run in the background, so that a developer can log on to the host and access the container for work such as development or testing.
Artifacts used in this mode:
- The base DeepStream container
- An archive file containing pre-built Triton inferencing models (optional)
Irrespective of which deployment type is chosen, the user also needs to supply a secret artifact, which contains information such as the endpoint of the customer’s artifact repository and the credentials required to download from it.
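The per-mode artifact lists above can be captured in a small validator. Mode and artifact names follow the descriptions in this post; the function itself is an illustrative sketch, not the actual blueprint logic.

```python
# Required and optional artifacts per deployment mode, as listed above.
REQUIRED = {
    "demo":         {"deepstream_container", "triton_models"},
    "custom_model": {"deepstream_container", "pipeline_config"},
    "developer":    {"deepstream_container"},
}
OPTIONAL = {
    "demo":         set(),
    "custom_model": {"triton_models"},
    "developer":    {"triton_models"},
}

def validate(mode, artifacts):
    """Return (missing, unknown) artifact names for the chosen mode.

    Every mode also requires the secret artifact carrying the artifact
    repository endpoint and download credentials.
    """
    needed = REQUIRED[mode] | {"secret"}
    missing = needed - set(artifacts)
    unknown = set(artifacts) - needed - OPTIONAL[mode]
    return sorted(missing), sorted(unknown)

# A demo deployment that forgot the Triton model archive:
missing, unknown = validate("demo", ["deepstream_container", "secret"])
```

Encoding the rules as data rather than branching logic makes it easy to add a new deployment mode without touching the validation code.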
Deploy the DeepStream Solution
To deploy the solution, select the target list of devices and the specific DeepStream solution from the NativeEdge catalog and execute the install workflow.
The install workflow parses the blueprint and auto-generates the execution graph that automates the entire deployment process based on the provided configuration and environment setup.
The event flow is automated end to end, covering everything from setting up the endpoint configuration to the actual deployment of the inferencing model. The user can start using the system immediately after the process completes, without any additional human intervention.
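The idea of parsing a blueprint into an execution graph can be illustrated with a topological sort over step dependencies. The step names below are hypothetical stand-ins, not the actual blueprint contents.

```python
from graphlib import TopologicalSorter

# A hypothetical blueprint expressed as step -> steps it depends on.
# The orchestrator derives an execution order in which each step runs
# only after everything it depends on has completed.
blueprint = {
    "configure_endpoint":        [],
    "provision_runtime":         ["configure_endpoint"],
    "configure_gpu_passthrough": ["provision_runtime"],
    "pull_deepstream_stack":     ["provision_runtime"],
    "deploy_inference_model":    ["pull_deepstream_stack",
                                  "configure_gpu_passthrough"],
}

order = list(TopologicalSorter(blueprint).static_order())
```

Steps with no mutual dependency (here, pulling the stack and configuring GPU passthrough) have no forced ordering between them, which is what lets a real orchestrator run such steps in parallel.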
Use the DeepStream Solution
Once the installation from the previous step is completed successfully, we can review any relevant outputs (or capabilities) from the blueprint.
For example, if the solution is deployed with the “demo” deployment type, an RTSP feed is automatically created, which can be used by remote clients to view the output of the DeepStream demo application.
This can be seen in the following figure:
If the “custom model” deployment type is chosen, any output produced by the DeepStream application is configured by the customer themselves. In other words, the custom pipeline could create an RTSP stream, in which case a client could use a similar approach to view it. Alternatively, the pipeline could define a video file output instead, writing to the persistent storage folder provided by NativeEdge.
Conclusion
According to IDC, inferencing at the edge is projected to grow at double the rate of training by 2026. This projection is in line with the anticipated expansion of edge computing use cases, as illustrated in the following figure:
Dell NativeEdge is the first edge orchestration engine that automates the delivery of NVIDIA AI Enterprise software. In general, and specifically with DeepStream, NativeEdge simplifies the deployment and management of inferencing applications at the edge.
Through this integration, customers have the capability to implement their custom applications, which leverage popular frameworks, on NVIDIA AI accelerators that are compatible with Dell NativeEdge. This is complemented by the ability to incorporate their development infrastructure using the NativeEdge API or CI/CD processes. Additionally, NativeEdge provides support for orchestrating cloud services through Infrastructure as Code (IaC) frameworks like Terraform, Ansible, CloudFormation, or Azure ARM, allowing customers to manage their edge and associated cloud services using the same automation framework.
Integration with ServiceNow enables IT personnel to oversee NativeEdge Endpoints in a manner that is similar to other data center resources, utilizing the ServiceNow CMDB. This integration simplifies edge operations and supports more rapid and flexible release cycles for inferencing services to the edge, thereby helping customers keep pace with the speed of AI developments.
References
- Recent posts for: “DeepStream” | NVIDIA Technical Blog
- Building Intelligent Video Analytics Apps Using NVIDIA DeepStream 5.0 (Updated for GA) | NVIDIA Technical Blog
- Inferencing at the Edge | Dell Technologies Info Hub
- Announcing Dell NativeEdge 2.0: Reimagining Edge Operations | Dell USA
- Managing Video Streams in Runtime with the NVIDIA DeepStream SDK | NVIDIA Technical Blog
- DeepStream SDK | NVIDIA Developer | NVIDIA Developer