Although media attention to edge computing has exploded in the past year, edge computing is not a recent innovation. Researchers in the early to mid-2000s studied the projected growth in the number and types of network-connected “smart” devices and the associated trends in worldwide Internet use, and they alerted the IT industry that a new model for data processing would be required as the IoT era developed. While forecasts for the total number of IoT devices deployed in this decade reached the tens of billions, the growth in available Internet bandwidth held relatively steady at roughly 15 percent per year. The need for change was apparent. Until this boom in IoT, consumer media streaming was the only use case that demanded significant Internet bandwidth. The IT industry quickly learned that its established model of sending data from centralized data centers to devices would not meet the latency, bandwidth, or time-to-response needs of this new Internet of Things. Any application design that collected and concentrated data from billions of devices for transport to relatively few centralized data centers would intensify competition for already limited Internet bandwidth. The IoT industry needed alternative models for handling the large volumes of data produced by edge devices.