Design Guide—Delivering Edge AI with NVIDIA Fleet Command: Introduction
In the periods leading up to major innovations in the IT sector, there is frequently a burst of new concepts, buzzwords, and marketing hype before the changes solidify. It took many years before there was consensus on the term “cloud computing” to describe a model based on automation and self-service provisioning of infrastructure and IT services. Now we frequently encounter the terms “public cloud” and “private cloud” to refer to variants of that model without the need to define what they mean.
Today, the IT industry is on the verge of an innovation at the intersection of the Internet of Things (IoT) and data analytics. Instead of having to move IoT data across a WAN for processing, a new model is emerging that is built on infrastructure systems deployed outside of traditional data-center environments. Many IT professionals are evaluating new hardware and software products that can advance the state of the art for edge computing. Typically, when innovations come to market, there are comparisons with existing operating models. For example, with momentum now building for a distributed model of edge computing, some of the questions surfacing are “Will all analytics move to the edge?” and “Will the edge eat the cloud?”
Computer technology shifts are rarely zero-sum games. It is highly unlikely that every dollar spent on edge computing displaces a dollar invested in cloud computing. Some IoT use cases are best suited to cloud-centric computing, others to edge-centric computing, and still others to hybrid architectures. Investments in both edge and cloud computing technologies will grow for the foreseeable future, although spending on new distributed edge solutions is likely to accelerate more quickly.
We have seen enough pilot projects to understand the range of potential edge-computing solutions. A “one size fits all” approach will not work for this diverse market. Most organizations need a strategy that embraces different types of edge implementations and multiple cloud options to provide the best services at the lowest cost. Organizations can move beyond the pilot phase by finding the most cost-effective architecture for a specific set of application requirements while considering an “edge-first” strategy for workload placement. If this approach seems overly complex, new multicloud applications with robust technology-management capabilities are coming to market. However, the industry press offers little clear guidance on finding the right approach for a given use case.
In this guide, we present some recent findings from our edge computing solutions engineering proof-of-concept labs involving several specific use cases. We include the prerequisites, hardware setup, software used, and sizing details for these use cases.
Our lab setup included servers and networking from Dell Technologies, along with NVIDIA Fleet Command, edge deployment management software from NVIDIA.