During the early rise of the Internet, client/server architectures took center stage. Organizations deployed servers dedicated to specific workloads, which were simpler to manage and more cost-effective than mainframes. As companies came to rely on increasingly large sets of data, data centers grew in prominence, fueling the growth of data center solution providers. This infrastructure decentralized computing power and brought it closer to developers and users. Applications became UI-driven rather than command-line driven, making it easier for business users to consume technology.
As this approach gained popularity and became ubiquitous, organizations used it to host a wide variety of back-end and front-end capabilities. They raced to establish an online presence with web applications such as e-commerce engines, making it simpler for customers to obtain products and services. As applications and data grew to support business functions, so did the demand for server, storage, and networking hardware. Often, this equipment was dedicated to individual departments, specific applications, and management services such as backups, disaster recovery (DR), and monitoring. The power and cooling required increased substantially with the number of servers running in data centers. Organizations found themselves with clusters of servers that were not used to their full capacity, and this low consolidation led to expenses for unused space, power, and cooling. This scenario became known as “server sprawl,” and the remedy was virtualization.