In recent years, advancements in technology – especially in robotics, automation, and artificial intelligence – have fueled many predictions about the future. These predictions run the gamut, but the most attention-grabbing, and the most fun to contemplate, are the techno-dystopian and techno-utopian ones. They go something like this:
Which of these predictions resonates with you? That mainly depends on whether you lean more towards pessimism or optimism. Unfortunately, these prognostications draw our attention away from the here and now: our double-time march towards an increasingly digital reality. This is often referred to as Digital Transformation, and it is placing tremendous strain on many organizations. That strain is felt most acutely in the expanding labor required to manage and utilize an organization's rapidly growing portfolio of digital assets. A simple realization often flies under the radar:
In the coming years, we will likely face the greatest skilled labor shortage in history.
OK, so what is the path forward? Before we can answer that, we need to take a quick look at the path already traveled.
The advancements in computer technology since Gordon Moore first posited his now-famous observation about the doubling of transistors have been something to behold. Exponential improvements in compute, memory, and storage, along with vast improvements in bandwidth, have created an environment where incredible gains through software are being realized. Add to this the power of the internet to interconnect just about everything, and you realize that Marc Andreessen was correct when he wrote, “In short, software is eating the world.”
Nearly as impressive as the technological gains themselves has been their deflationary effect. The incredible reduction in the cost per unit of compute, memory, storage, and bandwidth, along with their miniaturization, has allowed software to do more and do it in more places, even in the palm of your hand. Many discrete devices from the not-so-distant past are now just apps on a smartphone. Many interactions previously done in person are now just part of the mobile computing landscape. These advancements have been a boon for business, and because of this, most companies have embraced Digital Transformation.
All of these advances in software haven’t come without drawbacks. Today, we have a complex landscape of layered software, filled with pitfalls and obligations, that puts a heavy strain on organizations.
Now add to this already complex landscape some new design and operational patterns:
But it doesn’t end there. Software typically exists for one purpose: to receive, process, store, retrieve, and analyze data. And just as data has grown massively over the years, the measurement terminology used in discussions about data has rapidly progressed. We hardly hear about megabytes, gigabytes, and terabytes anymore; now petabytes, exabytes, and even zettabytes are commonly used. Creative terms to describe large stores of data – such as data warehouse, data mart (typically a subset of a data warehouse), and even data lake – are frequently thrown about. Even with the vast improvements in bandwidth, it has become harder and harder to move data.
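To put that progression of units in perspective, here is a small illustrative sketch (using decimal SI prefixes, where each unit is 1,000 times the previous one); the unit list and helper function are only for illustration:

```python
# Decimal (SI) byte units, each 1,000x the previous one.
UNITS = ["bytes", "kilobytes", "megabytes", "gigabytes",
         "terabytes", "petabytes", "exabytes", "zettabytes"]

def to_bytes(value, unit):
    """Convert a value in the given unit to a raw byte count."""
    return value * 1000 ** UNITS.index(unit)

# A single zettabyte equals a billion terabytes:
assert to_bytes(1, "zettabytes") == to_bytes(1_000_000_000, "terabytes")
```

The jump from terabytes to zettabytes is nine orders of magnitude, which is why moving data at today's scale strains even vastly improved bandwidth.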
At some point, the term data gravity entered the lexicon to express the idea that the placement of data affects the placement of software. In addition, data has become more and more valuable, due in part to the need for large datasets to train machine learning models.
When it comes to this valuable data, two acronyms you never want to hear are DL (data loss) and DU (data unavailability) – an indication that the protection and availability of data has become a vast area of interest in itself. Data is encrypted, replicated, duplicated, cloned, snapshotted, backed up, and so on.
Today’s IT environment – one of layered software, unfathomable amounts of data, and the vast infrastructure necessary to make it all possible – is a challenging one, to say the least.
All of this must be managed, maintained, upgraded, protected, secured, and repaired. Traditionally, this has fallen to human operators who need to perform thousands of operations over the course of a year.
Early on, they did this with CLIs (command-line interfaces), then with UIs (user interfaces) and more recently, with APIs (application programming interfaces), each with its own strengths and weaknesses.
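One key difference among these interfaces is how readily they lend themselves to automation: a CLI or UI operation is typically performed once per system by a human, while an API can be driven by a script across an entire fleet. The sketch below illustrates this with a hypothetical `Array` class standing in for a real management API:

```python
class Array:
    """Hypothetical storage array exposing a management API (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.snapshots = []

    def snapshot(self, label):
        """Take a point-in-time snapshot and return its identifier."""
        self.snapshots.append(label)
        return f"{self.name}:{label}"

# With a CLI or UI, an operator repeats this task by hand on each system;
# with an API, one short script covers the whole fleet in a single pass.
fleet = [Array(f"array-{i:02d}") for i in range(100)]
results = [a.snapshot("nightly") for a in fleet]
assert len(results) == 100
```

The same pattern applies to any repeated operation – upgrades, configuration changes, health checks – which is why APIs, despite their own weaknesses, are the natural foundation for the automation discussed below.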
It is impossible to scale out human operators the way we scale out software and hardware. Consider for a moment just the rapid expansion of data and its increasing value to organizations. Even if we exclude all other factors (such as cost), there may not be enough human operators to manage it, never mind the rest of the IT environment. There is a real risk that human operators, or lack thereof, will become the bottleneck.
Add to this the fact that with each manual operation, the risk of human error is ever present. As complexity increases, and with it the number of operations, so does the risk. Again, what’s the path forward, then?
Well, humans need to get out of the business of day-to-day operations and into higher-level, more creative activities. These are the activities in which they can leverage the general problem-solving abilities they possess as humans, the knowledge they have gained through technical training, and the wisdom they have won through experience. This means that day-to-day operations must be transitioned to automation. But isn’t automation hard?
No, automation is a journey.
This white paper presents a six-level framework to help the reader along that journey.