Tuesday, August 10, 2010

Despite the promise of cloud, are we treating virtual servers like physical ones?

RightScale recently shared some great data about Amazon EC2 usage that described how cloud computing is evolving, or at least how their portion of that business is progressing. At first glance, it certainly sounds as if things are maturing nicely.

However, a couple of things they reported caused me to question whether this trend is as rosy as it first seems, or whether IT is actually falling into a bit of a trap in the way it's starting to use the public cloud. I'll explain:

Cloud servers are increasing in quantity, getting bigger, and living longer, but…

The RightScale data showed that, comparing June 2009 with June 2010, there are now more customers using their service, and each of those customers is launching more and more EC2 servers. (I did see a contradictory comment about this from Antonio Piraino of Tier1 Research, but I'll take the RightScale info at face value for the moment.)

Not only has the number of cloud customers increased, but customers are also using bigger servers (12% used "extra large" server sizes last June, jumping to 56% this June) and using those servers longer (3.3% of servers were still running after 30 days in June 2009, versus 6.3% this June).

CTO Thorsten von Eicken acknowledged in his post that “of course this is not an exact science because some production server arrays grow and shrink on a daily basis and some test servers are left running all the time.” However, he concluded that there is a “clear trend that shows a continued move of business critical computing to the cloud.”

These data points, and the commentary around them, were interesting enough to catch the attention of folks like Forrester analyst James Staten and CNET blogger James Urquhart on Twitter, and Ben Kepes from GigaOM picked it up as well. IDC analyst Matt Eastwood, who (as he said) knows "a thing or two about the server market," was intrigued by the thread about the growing and aging of cloud servers, too, noting that average selling prices (ASPs) are rising.

Matt's comments especially got me thinking about what parallels the usage of cloud servers might have with the way the on-premise, physical server market progressed. If people are starting to use cloud servers longer, perhaps IT is doing what it does with physical boxes inside its four walls -- moving more constant, permanent workloads onto those servers.

Sounds like proof that cloud computing is gaining traction, right? Sure, but it caused me to ask this question:

As cloud computing matures, will "rented" server usage in the cloud start to follow the usage pattern of "owned," on-premise server usage?

And, more specifically:

Despite all the promises of cloud computing, are we actually just treating virtual servers in the cloud like physical ones? Are we simply using cloud computing as another type of static outsourcing?

One potential explanation for the RightScale numbers is that we are simply in the early stages of this market and we in IT operations are doing what we know best in this new environment. In other words, now that some companies have tried using the public cloud (in this particular case, Amazon EC2) for short-term testing and development projects, they’ve moved some more “production”-style workloads to the cloud. They’re transplanting what they know into a new environment that on the surface seems to be cheaper.

These production apps have very steady demand, rather than the highly variable usage patterns that folks such as Joe Weinman from AT&T described in his Cloudonomics posts as ideal for the cloud. That, after all, matches the increase in longer-running servers that von Eicken wrote about.

And that seems like a bad thing to me.

Why?

Because moving applications that have relatively steady, consistent workloads to the cloud means that customers are missing one of the most important benefits of cloud computing: elasticity.

Elasticity is the component that makes a cloud architecture fundamentally different from just an outsourced application. It is also the component of the cloud computing concept that can have the most profound economic effect on an IT budget and, in the end, a company's business. If you only pay for what you use and can handle big swings in demand by having additional compute resources automatically provisioned when required and decommissioned when not, you don't need those resources sitting around doing nothing the rest of the time -- regardless of whether they are on-premise or in the cloud.
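
To put rough numbers on that, here is a back-of-the-envelope sketch. The hourly rate, demand figures, and hours are all made up for illustration; they are not actual EC2 pricing.

```python
# Hypothetical elasticity math: a workload that spikes to 20 servers
# but averages only 4 over the month. All numbers are illustrative.
HOURLY_RATE = 0.50       # assumed $/server-hour, not a real price
HOURS_PER_MONTH = 720

peak_servers = 20        # capacity needed only during the daily spike
average_servers = 4      # average demand across the whole month

# Static approach: provision for the peak and leave it all running.
static_cost = peak_servers * HOURLY_RATE * HOURS_PER_MONTH

# Elastic approach: pay only for the server-hours actually consumed.
elastic_cost = average_servers * HOURLY_RATE * HOURS_PER_MONTH

print(f"Provisioned for peak: ${static_cost:,.0f}/month")   # $7,200
print(f"Pay for what you use: ${elastic_cost:,.0f}/month")  # $1,440
```

The gap between those two numbers is the elasticity payoff, and it only exists when demand actually swings.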

In fact, this ability to automatically add and subtract the computing resources that an application needs has been a bit of a Holy Grail for a while. It’s at the heart of Gartner’s real-time infrastructure concept and other descriptions of how infrastructure is evolving to more closely match your business.
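
Mechanically, that Holy Grail boils down to a control loop that watches demand and adds or removes capacity. Here is a minimal conceptual sketch with a simulated workload; the thresholds, per-server capacity, and demand curve are all invented for illustration and stand in for whatever monitoring and provisioning hooks you actually have.

```python
# Conceptual auto-scaling loop over a simulated two-day workload.
CAPACITY_PER_SERVER = 10   # assumed units of demand one server can handle
SCALE_UP = 0.75            # add a server above 75% utilization
SCALE_DOWN = 0.25          # remove one below 25% utilization
MIN_SERVERS = 2

def simulated_demand(hour):
    # Pretend demand spikes during business hours (purely illustrative).
    return 18 if 9 <= hour % 24 <= 17 else 3

servers = MIN_SERVERS
for hour in range(48):
    demand = simulated_demand(hour)
    utilization = demand / (servers * CAPACITY_PER_SERVER)
    if utilization > SCALE_UP:
        servers += 1                                  # "launch" an instance
    elif utilization < SCALE_DOWN and servers > MIN_SERVERS:
        servers -= 1                                  # "terminate" an instance
    print(f"hour {hour:2d}: demand={demand:2d}, servers={servers}")
```

A steady workload never trips either threshold, which is exactly why it gains so little from this machinery.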

Except that maybe the data say that this isn't what's actually happening.

Falling into a V=P trap?

My advice for companies trying out cloud-based services of any sort is to think about what they want out of this. Don’t fall into a V=P trap: that is, don’t think of virtual servers and physical servers the same way.

Separating servers from hardware by making them virtual, and then relocating them anywhere and everywhere into the cloud, gives you new possibilities. The time, effort, and knowledge it's going to take to simply outsource an application may seem worth it in the short term, but many of the public cloud's benefits are simply not going to materialize if you stop there -- and lower cost is probably one of them. Over the long haul, a steady-state app may not actually benefit from using a public cloud. The math is the math: be sure you've figured out your reasoning and end game before agreeing to pay month after month after month.
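
To make the "month after month" point concrete, here is a crude rent-versus-own sketch for a flat, always-on workload. Every number below is hypothetical and stands in for your real hardware, hosting, and cloud rates; the point is only that a steady workload gets no elasticity discount, so the comparison is raw price against raw price.

```python
# Rent-vs-own sketch for a steady workload (all numbers are made up).
CLOUD_RATE = 0.50        # assumed $/server-hour in the cloud
HOURS_PER_MONTH = 720
SERVER_PURCHASE = 3000   # assumed up-front cost of an equivalent physical box
SERVER_OPEX = 150        # assumed monthly power/space/admin per box

servers = 4              # constant demand, nothing to scale up or down
months = 36              # a typical depreciation horizon

cloud_total = servers * CLOUD_RATE * HOURS_PER_MONTH * months
owned_total = servers * (SERVER_PURCHASE + SERVER_OPEX * months)

print(f"Renting for {months} months: ${cloud_total:,.0f}")  # $51,840
print(f"Owning for {months} months:  ${owned_total:,.0f}")  # $33,600
```

Your own rates may flip the result the other way; the exercise is simply to run it before you sign up to pay indefinitely.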

Instead, I'd suggest looking for applications with requirements you can't meet with the part of your infrastructure that's siloed and static today, even if someone else is running it. And definitely take a peek at the math that Joe Weinman did on the industry's behalf, or at other sources, as you are deciding.

Of course, who am I to argue with what customers are actually doing?

It may turn out that people aren’t actually moving production or constant-workload apps at all. There may be an as-yet-undescribed reason for what RightScale’s data show, or a still-to-be-explained use case that we’re missing.

And if there is, I'm eager to hear it. We should all be flexible and, well, elastic enough to accept that explanation, too.

1 comment:

Dawn Richcreek said...

This is a great article and it will be exciting to see how people continue to use virtualized environments in the future. At Diskeeper Corporation, we have developed a product that optimizes virtual environments called V-locity. I am mentioning this because you mentioned that people are treating virtual environments the same way they treat traditional servers. Just like traditional servers, virtual environments will need to be maintained in order to keep up with speed and performance, especially as we see more signs of virtualization sprawl. As we grow into more complex cloud systems, these kinds of practices will be essential to maintain virtual environments.