The term “cloud computing” has been around for quite a while, at least as far back as 2006, when Amazon started promoting its “elastic compute cloud,” now known as EC2. The Wikipedia entry indicates that the term “cloud computing” and some of its core concepts were in use long before that, at least since 1996. In IT, 20 years is a long, long time, plenty of time for entire technologies to come and go.

There is probably no other segment of IT that has seen more innovation and evolution in the last 20 years than cloud computing, and in that time even the meaning of the word “cloud” has evolved. Back in 2007, if an enterprise was doing something “in the cloud,” it was almost certainly using EC2 to host virtual machines, or using a public cloud SaaS application like Salesforce.com or Dropbox. Cloud computing meant using the public internet to access IT services hosted in someone else’s data center. In fact, a popular meme defines it this way: “The cloud is just someone else’s computer.” That sort of definition is amusing but ultimately nonsensical. What happens when an Amazon employee uses EC2? Is it not cloud in that case? Of course it is.

Cloud Computing

So what is “cloud computing,” then? Going back to the Wikipedia definition:

“Cloud computing is an information technology (IT) paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet.”

“Cloud” is not about who owns the hardware; it is an “IT paradigm,” that is, a way of thinking about IT services and a framework for delivering them. And after 20 years of experience, our way of thinking about cloud computing and the frameworks we use to deliver it have changed considerably. So what does cloud computing look like today?

Cloud computing may be an IT paradigm, but it is above all a paradigm for data center architecture and operations, and it involves several key technologies organized in specific ways.

  • Standardized commodity server hardware
  • Standardized server OS with hypervisor, typically Linux, occasionally Windows
  • Server virtualization
  • Software-defined storage
  • Software-defined networking
  • Monitoring software
  • IT orchestration software
  • Accounting and billing software
  • Self-service software including APIs and web sites

These are the components necessary to provide an EC2-like cloud service, and note that nothing requires them to run on “someone else’s computer.” Cloud computing is simply a way to architect and run a data center more effectively, whether that data center is yours or someone else’s.
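To make the self-service layer at the bottom of that list concrete, here is a minimal sketch of what an EC2-like provisioning API can look like from the inside. Everything here, the MiniCloudAPI class, its method names, and the image and instance-type strings, is a hypothetical illustration, not any real cloud provider’s API.

```python
import uuid


class MiniCloudAPI:
    """Toy self-service API: provision and deprovision virtual machines.
    Hypothetical sketch for illustration, not a real provider's interface."""

    def __init__(self):
        self.instances = {}  # instance_id -> metadata

    def run_instance(self, image, instance_type):
        """Provision a new virtual machine and return its generated ID."""
        instance_id = f"i-{uuid.uuid4().hex[:8]}"
        self.instances[instance_id] = {
            "image": image,
            "type": instance_type,
            "state": "running",
        }
        return instance_id

    def terminate_instance(self, instance_id):
        """Deprovision a running instance."""
        self.instances[instance_id]["state"] = "terminated"


cloud = MiniCloudAPI()
vm = cloud.run_instance(image="ubuntu-22.04", instance_type="small")
print(cloud.instances[vm]["state"])  # prints "running"
```

The point of the sketch is the shape of the interaction: a user (or a program, via the API) asks for a resource and gets it in seconds, with no human in the loop; the orchestration, billing, and monitoring layers hang off that same request path.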

Benefits of Cloud Computing

Why would cloud providers (or enterprises, for that matter) build and operate their data centers this way? This approach to data center architecture and operations provides three key benefits over more traditional legacy data centers.

  • Agility - Because everything is “software-defined,” including servers, storage, and networking, cloud data centers are far more agile and adaptable to changing business requirements. Instead of requiring smart, expensive people to install and configure hardware like servers and disk subsystems, software-defined resources can be provisioned and configured through software, in particular IT orchestration software.

  • Reduced operating costs - The extensive virtualization and automation made possible by software-defined everything makes data center operations more reliable and less expensive. Common, repeated tasks like provisioning, patching, and reconfiguration of servers, networking, and storage can be managed by the orchestration software, essentially the automated version of the classic data center runbook.

  • Better application scalability and availability - Because the hardware and server environments are standardized, and because they can all be provisioned and configured with software, properly architected applications can scale better and be far more available, suffering less scheduled and unscheduled downtime. Software-defined networking provides load balancing in front of redundant application servers, allowing new, patched servers to be provisioned and unpatched or failed servers to be deprovisioned, all without affecting application availability.

Getting better agility, scalability, and reliability, all while reducing operating costs, sounds like a pretty good deal, and it’s why many CIOs are thinking, “I need to get me some of that.” And why wouldn’t you? The public cloud providers offer excellent services, but they only make sense for some applications and IT workloads. Others are better off staying in your on-prem data center, where you have better cost control, lower network latency, and simpler regulatory compliance. If you’re going to run your own data center anyway, moving to cloud-oriented data center architecture and operations makes all the sense in the world, and it brings us to the idea of a “private cloud,” which I’ll cover in another blog post.



