The most dynamic of our three characters. Technology can provide tremendous benefits to any organization that uses it effectively. As discussed in the introduction, many of these benefits have been realized through software, resulting in a complex landscape of layered software. This is not the only source of complexity in the age of digital business, but it can be the source of many obstacles during the automation journey. Then again, a journey without obstacles wouldn’t be much of a journey. It would be more like a stroll.
There is an endless number of considerations when it comes to technology, but we examine a few of them below:
Data
As every aspect of the business comes online, everything can now be measured through data, including IT. Data has become an important and strategic asset for any organization. It has even been referred to as the new oil, or as a capital asset. Data must be considered at every step of the automation journey.
A few things to consider:
- It is useful to think of data in terms of the DIKW pyramid, where Data forms the foundation layer. As Data is processed and context is added, we transition to the Information layer. As relationships are established and questions become answers, we transition to the upper two levels (Knowledge and Wisdom).
- Data needs to be stored and protected. All the tasks associated with this must be automatable, so pick your data storage and data protection components wisely.
- Data will grow, so data storage and data protection requirements must be anticipated and not reacted to. Analytics are a must.
- As higher levels of automation are achieved, the information about your system will become nearly as important as the information within your system.
- Data has weight, so any decision to move data must be thought through.
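The DIKW transitions described above can be sketched in a few lines of code. Everything here is illustrative: the disk-usage readings, host name, and 90 percent threshold are assumptions invented for the example, not figures from the text.

```python
# Minimal sketch of the DIKW transitions using hypothetical disk-usage
# readings; all values and names below are illustrative assumptions.

# Data: raw measurements with no context.
data = [71, 74, 78, 83, 87]

# Information: the same values with context added (what, where, when).
information = [
    {"metric": "disk_used_pct", "host": "db01", "day": d, "value": v}
    for d, v in enumerate(data, start=1)
]

# Knowledge: relationships established and a question answered --
# here, the average growth trend across the observed days.
daily_growth = (information[-1]["value"] - information[0]["value"]) / (len(information) - 1)

# Wisdom: acting on the knowledge, e.g. expanding storage before a
# hypothetical 90% threshold is crossed (anticipating, not reacting).
days_until_full = (90 - information[-1]["value"]) / daily_growth
print(f"growth: {daily_growth:.1f} pct/day, ~{days_until_full:.2f} days to 90%")
```

Note how each step climbs one level of the pyramid: the raw list alone answers no question, while the final calculation supports a decision about when to act.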
Compute
No matter how complex the layered software environment becomes, it eventually comes back to hardware-based computation. There have been many developments in compute that affect today’s landscape.
Packaging and consumption model:
- Before VMware introduced ESX, most organizations thought of compute as a physical server. VMware changed that forever and now the virtual machine stands in for the physical server.
- Before AWS introduced EC2, most organizations thought of compute as something to be bought or leased. AWS changed that forever and now renting compute by the hour (or minute) is a reality.
- Since the release of EC2, AWS (and other public cloud vendors) have introduced new services that package compute in many new ways.
- The recent popularity of container platforms has also changed the way compute is packaged.
Location:
- The increased usage of public cloud services has changed where compute takes place. This impacts the next section (Connectivity) in particular.
- The amount of data that is generated far away from the datacenter has steadily increased over the years. The cost and latency impact of moving all this data to a centralized location can be enormous. The need to move compute closer to the source of the data has become a major consideration and so, the term edge computing entered the lexicon.
- With the advent of machine learning, new location-centric considerations entered the discussion. Training has different compute and data requirements than inference. Where each should be done is an ongoing question.
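A back-of-envelope calculation makes the data-movement point above concrete. The figures here (device count, daily output, transfer price, link speed) are all illustrative assumptions, not numbers from the text.

```python
# Back-of-envelope sketch of why hauling edge data to a central
# location gets expensive; every figure is an illustrative assumption.

devices = 1_000                  # assumed edge devices
gb_per_device_per_day = 5        # assumed daily output per device (GB)
egress_cost_per_gb = 0.09        # assumed transfer price ($/GB)
link_mbps = 1_000                # assumed uplink bandwidth (megabits/s)

daily_gb = devices * gb_per_device_per_day            # total GB per day
monthly_cost = daily_gb * 30 * egress_cost_per_gb     # transfer cost per month
transfer_hours = daily_gb * 8_000 / link_mbps / 3_600 # hours on the wire per day

print(f"{daily_gb} GB/day, ${monthly_cost:,.0f}/month, {transfer_hours:.1f} h/day in transit")
```

Even with these modest assumptions, the link is busy for nearly half of every day, which is exactly the pressure that pushes compute toward the edge.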
A few things to consider:
- With new packaging come additional components that need to be automated.
- With new consumption models come new decisions about when and how computation should happen. Automation must be able to address this.
- With new locations come new decisions about where computation should happen. Automation must also be able to address this.
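The when-and-where decisions above are exactly the kind of thing automation must encode. The sketch below shows one hypothetical placement policy; the rules, thresholds, and location names are invented for illustration and are not prescribed by the text.

```python
# Hypothetical placement policy for an automated scheduler: decide
# where a job should run from its data location and latency budget.
# Rules and thresholds are illustrative assumptions only.

def place_job(data_location: str, data_gb: float, latency_ms_budget: float) -> str:
    """Return a placement decision for a compute job."""
    # Tight latency budgets keep compute at the edge, near the data source.
    if latency_ms_budget < 50:
        return "edge"
    # Data has weight: large datasets are processed where they already live.
    if data_gb > 1_000:
        return data_location
    # Everything else can use rented, per-hour public cloud capacity.
    return "public-cloud"

print(place_job("on-prem", data_gb=20, latency_ms_budget=10))      # edge
print(place_job("on-prem", data_gb=5_000, latency_ms_budget=200))  # on-prem
print(place_job("edge", data_gb=20, latency_ms_budget=500))        # public-cloud
```

In practice such a policy would weigh many more factors (cost, compliance, capacity), but the point stands: once the rules are machine-encoded, the placement decision itself becomes automatable.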
Connectivity
When most people read the word ‘connectivity’ in this context, they immediately think of reachability through a traditional TCP/IP network. This is correct but insufficient when thinking about connectivity for automation purposes. Automation connectivity goes beyond the ability to establish a connection using a combination of protocols and port numbers (such as TCP/443, the well-known HTTPS port). It means that automation software has access to each action to be performed or aspect to be queried, and that the response data can be interpreted.
A few things to consider:
- If a component only provides access through a UI, then that component lacks automation connectivity. No matter how elegant the UI, it is an automation dead-end.
- If a component provides access through a CLI, then it isn’t an automation dead-end, but it is more cumbersome and increases risk.
- If response data from a CLI was designed for human operators (that is, text) and doesn’t have a machine-readable option, then it adds to complexity and increases risk.
- APIs are the best option today, especially those that are considered RESTful.
- If a component does provide an API, but that API doesn’t cover 100 percent of the required actions, then automation connectivity is limited, which may make it impossible to reach the upper levels of the framework.
- If policies that govern a system are human-centric and not interpretable by the system itself, then automation connectivity is limited. This may make it impossible to reach autonomy Levels 4 and 5.
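The contrast between human-oriented and machine-readable responses can be shown directly. Both strings below are hypothetical outputs of a fictional storage CLI, invented for illustration; no real product is being quoted.

```python
import json

# Two hypothetical responses from the same fictional storage component:
# one written for human eyes, one machine-readable (JSON).
human_output = "Volume vol01 is ONLINE, 42% used (capacity 2.0 TB)"
machine_output = '{"volume": "vol01", "state": "online", "used_pct": 42, "capacity_tb": 2.0}'

# The machine-readable form needs no fragile text parsing: one call
# yields typed fields an automation tool can act on directly.
vol = json.loads(machine_output)
if vol["state"] == "online" and vol["used_pct"] > 40:
    print(f"{vol['volume']}: consider expansion")

# The human-oriented form forces brittle string surgery that breaks
# the moment the tool's wording or layout changes.
used_pct = int(human_output.split("%")[0].rsplit(" ", 1)[-1])
```

Both snippets recover the same number, but only the first survives a cosmetic change to the output format, which is why text-only CLIs add complexity and risk while APIs with structured responses do not.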