For the past two decades, IT has steadily adopted DevOps for application development and delivery. DevOps features tight integration of software and infrastructure teams, allowing responsive creation, customization, and management of increasingly sophisticated application stacks. It also shortens time to value and makes scaling more reliable through capabilities such as continuous integration/continuous deployment (CI/CD). When large numbers of systems must be deployed with always-limited IT resources, the fastest way to deliver tested, stable systems is to avoid numerous configuration iterations. Fusing developers and IT operations into unified DevOps teams working on much smaller, faster integration cycles enabled quicker response, innovation, and deployment.
Initial adoption of ML and AI proceeded without a similar model. The drawbacks of leaving platform tasks to a separate IT team are amplified in ML and AI projects, which must move and evolve more quickly than user-facing software. For example, continuous testing is straightforward in standard web and enterprise applications because the underlying elements are relatively stable and consistent; ML systems, by contrast, depend on more volatile elements such as data, features, and models, and therefore require continuous training and validation. No single standard configuration yields the best performance in ML, because shifting environments and changing pipelines require someone with domain expertise, typically the data scientist, to keep the system optimized.
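To make the continuous-training point concrete, here is a minimal, hypothetical sketch (not from any specific library) of why ML pipelines cannot rely on one-time validation: when the incoming data distribution drifts away from what the model was trained on, the pipeline must detect the shift and retrain. The "model" below is deliberately trivial (the training mean), and the names `train`, `drift_detected`, and the `threshold` value are illustrative assumptions.

```python
import random
import statistics

def train(data):
    """Fit a trivial stand-in model: remember the training mean."""
    return statistics.mean(data)

def drift_detected(model, batch, threshold=0.5):
    """Flag drift when a new batch's mean moves past the threshold."""
    return abs(statistics.mean(batch) - model) > threshold

random.seed(0)
training_data = [random.gauss(0.0, 1.0) for _ in range(500)]
model = train(training_data)

# Simulate a production stream whose distribution gradually shifts upward,
# the kind of volatility that stable web applications rarely face.
stream = [[random.gauss(shift, 1.0) for _ in range(100)]
          for shift in (0.0, 0.1, 1.5)]

for i, batch in enumerate(stream):
    if drift_detected(model, batch):
        # In a real pipeline this would trigger retraining plus re-validation
        # before the new model is promoted to production.
        model = train(batch)
        print(f"batch {i}: drift detected, retrained")
    else:
        print(f"batch {i}: within tolerance")
```

A real pipeline would use proper statistical drift tests and a held-out validation step, but the control loop, monitor, detect, retrain, revalidate, is the part that standard application CI/CD does not cover and that keeps the data scientist in the loop.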