Friday, March 13, 2009

Ex-IDC analyst John Humphreys believes cloud computing, like the Big Dig, will 'take time'

In the last post, I interviewed John Humphreys, formerly the resident virtualization guru at IDC, now with the virtualization and management group within Citrix. John characterized Citrix as moving beyond criticism that they aren't doing enough with their XenSource acquisition and, in fact, taking the bull by the horns -- offering XenServer for free and focusing on aspects of heterogeneous management.

That first crack at being able to manage diverse types of virtualization in the data center is certainly needed. It's one of the first steps down a path that hasn't been well-trodden so far (and it has been especially ignored by the virtualization vendors themselves). OK, but how might that all fit into the industry conversation around cloud computing? Glad you asked...

Jay Fry, Data Center Dialog: John, I asked your old buddy Al Gillen at IDC how he thought virtualization connected to (and was distinct from) cloud computing. I'd love to get your thoughts on that, too.

John Humphreys, Citrix: I see virtualization as the foundation to any "cloudy" infrastructure -- public or private. In order for any cloud to live up to the promises, it must be able to deliver services that are isolated and abstracted from the underlying infrastructure. Clearly virtualization delivers on both of those requirements in spades!

In my opinion, the opportunity is to build workflow and policies on top of the virtualized infrastructure. Today those workflows are built at the siloed application or VM level, but I believe it will require policies and workflows that exist at the IT service level. To me, execution on this sort of vision will take a long-term, multi-year commitment.

DCD: The big virtualization vendors -- Citrix, VMware, and Microsoft -- have all also talked about their cloud computing visions. While I like that VMware talks about both internal and external clouds, they seem to think that everyone will be virtualizing 100% of their servers no questions asked, and that no other virtualization technologies will exist in a data center. That, to me, puts them (and their vision) out of sync with the reality of a customer's actual data center. What's your take on this?

John Humphreys: First and foremost, I agree -- I simply don't see customers going 100% virtual any time soon. There are too many "cultural barriers" or concerns in place to do that.

The second point I'd make is that to me, cloud is still years away from becoming a mainstream reality. Just as a point of comparison: x86 virtualization, dating from VMware's start, is over 10 years old, and now approximately 20% of servers are being virtualized each year. These things take time.

Finally, I'd point out that in the near- to mid-term, the ability to federate between internal and external clouds is a huge hurdle for the industry.

Concepts like "cloudbursting" are appealing but today are architecturally dependent. In addition to the technical ability to move services between internal and external data centers, security and regulatory impacts to cloud are "hazy" at best.

DCD: You've now had a chance to view the virtualization market from two different angles -- as an analyst at IDC and now as an executive working for a vendor in the space. What about the space do you see now that you didn't catch before in your analyst role?

John Humphreys: The move for me has been really eye-opening and educational from a lot of different perspectives. I think the one that most drew me to the role is the level of complexity that vendors must deal with in making any decision or product changes.

In the analyst realm, the focus is exclusively on strategy. When you jump over the fence, the strategy is still critical, but it is the details in the execution of that strategy that ultimately will define the success or failure of any move. That means not only being able to define a successful strategy but being able to communicate it to the organization, getting the sales teams to support the moves, coordinating the infrastructure changes that must occur internally, addressing supply chain issues, working with partners, etc.

As an analyst, I knew I was only seeing the first piece of the cycle, so I made the move so I could experience "the rest of the story."

Being from Boston, I see a metaphor in the Big Dig and the Chunnel projects. Being an analyst is like planning the Chunnel project, while being part of a technology vendor is like planning and executing the Big Dig. The Big Dig planners had to worry about 300 years of infrastructure and needed to put all sorts of contingency plans in place to ensure successful execution. That "be prepared" requirement for flexibility appeals to me.

DCD: What's the most interesting thing that you see going on in this space right now?

John Humphreys: I see some very interesting business models being developed that leverage the cloud computing concept. I believe the industry is on the cusp of seeing a host of new ideas being introduced. And, perhaps contrary to others, I believe the down economy is a perfect incubator as expectations over the near term are lowered, giving these start-ups the opportunity to more fully develop these new and great ideas and business models. I expect we'll start to see the impact of all this innovation in the next 3-5 years.


Data center change: A Big Dig?

Thanks to John for the interview. I know John meant the Big Dig analogy in the context of getting an organization that's building a product to do something big -- and how you have to make sure you're taking into account all the existing structures. However, I'm thinking it's a good one for data centers in general making transitions to a new model. Here's what I mean:

Your goal is to change how your data center runs to drastically cut your costs and improve how stuff gets done. There's a lot of infrastructure technology from Citrix, VMware, Cassatt, and a lot of others that can help you: virtualization, policy-based automation, management tools, the works. But there's all your critical infrastructure that's already in place and (mostly) working that you have to be careful not to disrupt. It's a big job to work around it and still make progress. Kinda like they had to do with the Big Dig.

But, hey, I'm not from Boston, so maybe the analogy breaks down for IT projects. I kind of hope so, actually, since cost overruns and big delays certainly aren't what we're all aiming for. In IT, you certainly have a greater ability to do smaller, bounded projects that show real return -- and still make notable, tangible improvements to running your business. Those of you who lived through the Big Dig probably know better than I on how close of a match this is.

On the hybrid public/private cloud capabilities, I think John's on target. The industry conversations about this capability (moving computing from inside your data center, out to the cloud, and then back again) are reaching a fever pitch, but there are a few things that have to get solved before this is going to work. (Here's my recent post on hybrid clouds if you want to dive deeper on the topic.) But it's certainly one of the models that IT ops is going to want to have at its disposal in the future.

The approach we're taking at Cassatt is to help people think about how they might do "cloudbursting" by starting the work on creating a cloud internally first. At the same time, customers often begin experimenting with external cloud services. That early experience on both sides of the internal/external cloud divide will be a big help. (And, we've built our software with an eye toward eventually making cloud federation a reality.)

There is one thing I might quibble with John on that I didn't during the interview -- his supposition that virtualization is going to be at the core of any "cloudy" infrastructure. My take is that while separating the software and applications from the underlying hardware is a key part of creating a more dynamic infrastructure, you can still make headway here without having to virtualize everything.

In fact, we've heard a great deal of interest around the internal cloud computing approach, especially when it leverages what someone already has running in their data center -- physical or virtual. Virtualization can be a useful component, but being 100% virtualized is not a requirement. I was pretty critical of VMware's assumptions around this topic in a previous post. The Citrix approach that John walks through above is definitely describing a more realistic, heterogeneous world, but still has some assumptions you'll want to be careful of if you're looking into it.

So, if you are starting on (or are in the messy middle of) your own data center Big Dig -- exploring virtualization, the cloud, and all the infrastructure impact that those might have -- feel free to leave a comment here on how it's going.

If you can get past the Jersey barriers, of course.