Friday, January 30, 2009

How one editor cuts through the cloud computing hype

Earlier in the week, I posted the first part of an interview I did with Derrick Harris, editor of the late On-Demand Enterprise site. He offered a few thoughts of his own about the state of cloud computing before heading off to his new gig with GigaOM.

Highlights from Derrick's interview so far: he thinks the table is set in 2009 for some major strides in cloud computing, despite the economic downturn. Why? Cloud computing's value prop is, plain and simple, impressive. Not that it won't be rough out there for customers and vendors this year. Derrick wasn't willing to go too far with predictions for that very reason, and I can't say I blame him, especially with two competing forces currently dominating the cloud computing landscape: the unflappable vendor hype machine, counterbalanced by the bleak reality of a super-cautious, recession-inspired IT spending environment.

In the concluding part of the interview, I asked Derrick to talk a bit about how, as a journalist, he weeds through everything people tell him -- especially on a topic as popular as cloud computing -- to help IT management readers find the really useful nuggets.

Jay Fry, Data Center Dialog: There's obviously some criticism out there that all the cloud stuff is way, way overhyped. Do you feel it is? How do you avoid falling into that trap as a journalist?

Derrick Harris: Over-scrutinized maybe, but definitely not overhyped. As I might have made clear, I'm a firm believer in the cloud. The problem as I see it is that it's still so early, but some commentators seem to expect cloud solutions to be as mature and robust as their legacy counterparts. Pardon the cliche, but we need to give the seeds some time to grow.

I also see an issue with too much coverage, especially as it relates to the outer edges of what is considered cloud computing. I wrote several blogs over the past few months bemoaning how everything is labeled as cloud computing, even if it is little more than SaaS or consumer Web services (e.g., iPhone apps). This leads to unnecessary terminology saturation, as I don't think too many consumers consider what they're doing to be cloud computing. I try to avoid this trap by constantly reminding myself that my focus is on commercial use (for the most part), as well as by drawing distinct lines between cloud computing and cloud services.

DCD: What part of being a journalist following this space is the most interesting?

Derrick Harris: I have two favorite parts. One is watching Amazon continue to expand the scope of its cloud operations into areas unfathomed when EC2 was first announced, and the other is speaking with start-ups about why their solutions are the next big thing. Across the board, everyone is just so excited, which makes it fun to cover.

DCD: What part do you dread?

Derrick Harris: Journalistically, I dread having to decide where to draw the line between what is cloud and what is not. Every publication has its editorial scope, and as more vendors (and, increasingly, managed hosting providers) try to leverage cloud computing's good will, it becomes difficult to decide what's cloud and what's something else -- and then figure out where, if anywhere, that something else belongs within a given site.

DCD: What happened with On-Demand Enterprise? Was the bad economy the publication's death knell, or is it a commentary about the level of interest in this topic (which would seem odd, given that all of the other pubs are currently covering the same topics very aggressively)?

Derrick Harris: A lack of advertising revenue is a surefire killer of any publication, so the economy definitely bears some of the blame. Of course, that explanation ignores other factors that no doubt played into the decision to suspend publication, among them the increased competition to which you alluded. Both the company and I knew it was somewhat risky to change from the established GRIDtoday to the, essentially, new On-Demand Enterprise, but we also knew it was the right decision. Unfortunately, a confluence of factors just overpowered our best intentions.

DCD: In our Cassatt 2008 data center survey, respondents said they got most of their information about the data center energy efficiency topic from vendors rather than "independent" industry experts. How do you think IT folks are getting their information these days? How is this changing and how rapidly? What should/can industry publications (or even vendors, since, apparently people do listen to them for some things) do to help them?

Derrick Harris: I hope soon that IT folks all will be getting their information from GigaOM (kidding, but not really). Seriously, though, I think for many topics, including energy efficiency, vendors are an abundant source of information (albeit biased information) because news publications are too busy debating whether a particular paradigm can work. I understand that a key role of the press is to question authority (in this case, IT vendors), but IT users are results-oriented people, and if information about how they can best leverage cloud computing is more useful to them than a commentary on why cloud computing will never work, they will seek out what they need regardless of the source.

I see tried and true areas like storage and networking covered very well by the IT press, and I think this is rapidly becoming the case with technologies like virtualization or energy efficiency, and whole paradigms like cloud computing. Outside forces are making certain things even more important to readers, and as publications become more comfortable with these technologies, they can provide the kinds of information that readers crave. From a publication's point of view, it needs to make sure a trend is for real before it starts publishing best practices and acquiring granular knowledge.


Thanks to Derrick for the interview. I suppose I should update our blogroll to remove Derrick's old On-Demand Enterprise links, but I think I'll wait until he kicks off his new role at GigaOM. Oh, and Derrick, I hope they spring for a new photo for you.

One final comment: following up on one of the topics Derrick and I discussed, Cassatt is putting the finishing touches on a new data center survey to see how attitudes and plans about data center operations, cloud computing, and energy efficiency have changed since last year. We'll share that as soon as we have all the data in. Results from the Cassatt 2008 survey are summarized here (no registration required), or available in more detail in this white paper (quick registration required). Change has been the watchword in politics and the economy over the past 12 months; will data center operations be any different?

Tuesday, January 27, 2009

Derrick Harris interview: Cloud computing 'blows away' grid's value prop

If you watch the cloud space, you probably noticed that the bad economy took down one of the more rational voices in the conversation a few weeks back. On Dec. 19, On-Demand Enterprise (once known as GRIDtoday) froze its content and sent its editorial staff packing. The depressing blurb saying as much now sits atop all of the site's pages.

Derrick Harris, the editor of that publication, was one of the casualties. Derrick covered the move to on-demand computing in the virtual pages of that pub, helping clear away some of the fog for readers. He had been providing a sort of measuring stick for the expansion and shifts in the cloud discussion, and along the way was never afraid to call vendors on what they were doing. Needless to say, it was a bummer for the community to have that pub collapse.

The good news: I have it on good authority that you'll be hearing from Derrick again soon. In fact, he's due to resurface with the good folks over at GigaOM shortly. In the meantime, I thought I'd get Derrick's thoughts on the cloud computing space that he has covered pretty thoroughly for so long. (How's that for turning the tables a bit on one of our journalist friends?)

To give you an idea of his healthy skepticism, here's a bit from one of his posts last summer: "Personally, I have had my fill of cloud computing for a while. Depending on who you ask, it's either great or it's terrible, it's either the biggest IT innovation since the PC or it's an elaborate ruse designed to both take your money and leave you and your data as vulnerable as a newborn. As I've stated on numerous occasions, the truth -- presently and in the real world -- resides somewhere in the middle."

I also like this snippet (from the same post), which, whether he meant it that way or not, is a great summary of the State of the Cloud, which seems to still be holding true at the moment: "Long story short: I'm tired of writing about cloud computing every week (I really am), but I cannot stand to see the propagation of misconceptions without at least voicing a rational opinion in the name of clarification. Enterprise-wise, cloud computing is not ready for primetime just yet, but there are plenty of reasons that make it an attractive option, with production-ready in-house versions being a near-term reality. In the meantime, big companies will use it for testing purposes, small companies might use it for real, and the cloud providers will continue to hone their offerings." Three cheers for rational discourse.

With that, here's the first part of my interview with Derrick:

Jay Fry, Data Center Dialog: From your time as managing editor at GRIDtoday and then as it became On-Demand Enterprise, you've had a chance to watch (and bring attention to) the rise of cloud computing. What convinced you that this "cloud" stuff was going to be something worth covering and something impactful?

Derrick Harris: I think what most convinced me that cloud computing is worth covering is how it parallels -- and then blows away -- the value propositions touted by grid vendors a few years ago. When I first started at GRIDtoday, there was much talk about capacity on demand, utility-style access, etc., as they related to grid computing, but the reality was that these capabilities only benefited certain application types, and then only grid-enabled applications. As cloud computing (both internal and external) started to take shape, this capacity-on-demand value prop was expanded to reach many more applications, flexibility was increased, and, generally, far less work and infrastructural change was required to take advantage of these benefits.

DCD: How important do you think cloud computing is going to be? What sort of impact do you think it will have on enterprises?

Derrick Harris: I think it's going to be very important, eventually becoming all but ubiquitous -- there are just too many cost savings and competitive advantages to be had by moving certain operations to the cloud. (That said, I'm not about to venture a guess as to when this ubiquity will be reality.) As far as impact goes, I think Werner Vogels says it best when he talks about eliminating "undifferentiated heavy lifting" and focusing human and financial resources on a company's strengths. That could mean unprecedented levels of productivity.

DCD: What are some of the most interesting/compelling components of what's going on in the cloud space right now? What’s headed in the right direction?

Derrick Harris: I think one of the dead-on trends right now (and not just because I'm talking to someone from Cassatt) is the incorporation of policy-based power management into internal cloud offerings. This really melds the on-demand, hands-free promise of cloud computing with the harsh realities that companies need to save money wherever they can and, in some areas, additional power is tough to come by. It also allows internal vendors to compete with external providers on the cost-savings and green fronts. The increased availability of external clouds supporting Windows would seem to be a big deal, as well.

DCD: What's headed toward a dead-end?

Derrick Harris: I think it's too early to call anything a dead end just yet, as most offerings are in their early stages, but I do think large vendors need to be wary of making their external cloud platforms too cumbersome and/or too proprietary. It is the simplicity and openness of cloud computing that are so compelling.

DCD: What's the most surprising trend, innovation, or happening that you’ve seen recently?

Derrick Harris: I don't know that it's surprising, but I am very intrigued by the notion of using humans as cloud resources. Amazon Web Services' Mechanical Turk probably is the most prominent example of this, but I've also seen it in the QA testing area with uTest. Cloud computing is all about evolving how we compute, so I always am interested when I see unique ways of defining "computing."

DCD: Any predictions you'd care to make for 2009 or cloud computing or enterprise IT in general?

Derrick Harris: Not really, other than that I think the table has been set for 2009 to be a big year for cloud players – particularly when it comes to attracting enterprise users. Most everyone has put their stake in the ground, some have really solidified their offerings, and there is plenty of user interest.

DCD: What effect do you think this lovely economy will have on the adoption of cloud computing? I've heard comments both ways.

Derrick Harris: Well, vendors and some analysts have said cloud computing will be a savior of IT budgets in this nightmarish economy. Logically, this forecast makes a lot of sense, and companies willing to give cloud a try could reap substantial rewards. However, I think the reality might be that many organizations will see the economy as another reason to naysay cloud computing, feeling that failure could be fatal so it's better to just maintain the status quo. I don't mean to skirt around the question, but I just think it's too early to tell.

If there is a surefire bright spot, it is start-ups. Innovation certainly hasn't died, and with credit and funding becoming scarce, cloud computing will continue to be a great way for start-ups (particularly Web-based start-ups) to keep their IT costs manageable so they can grow the end product. Success stories already are piling up.

Next time: More from Derrick on the perils of being a high-tech journalist in this economy.

Sunday, January 25, 2009

Public/private 'hybrid' cloud computing: Sooner or later?

It's good to know that amid bank and technology stock prices cratering, and joblessness hitting 15-year highs here in California, and the feel-good sobriety of Barack Obama's inauguration, the tech industry is still up for a good debate. The debate on the legitimacy of internal clouds (called private clouds by many) raged on last week in the blogosphere, Twitter, and even on the official sites of industry publications and an analyst or two. The internal cloud topic continues to strike a chord.

But as all of this discussion swirled around, some of the ground started shifting a bit, too. If internal clouds are interesting to IT, then a hybrid of internal clouds and external clouds must be really interesting, right? I say maybe not quite yet. Read on.

Listening to strong voices in the internal cloud dialog

First off, here are some of the places you should have been reading last week (consider this a "how to" for catching up on some recent conversations about internal clouds): Rich Miller's Data Center Knowledge, James Urquhart's Wisdom of Clouds, Elastic Vapor, Chuck's Blog, Rational Survivability, among others. The Cloud Connect event in Mountain View was probably ground zero for all this.

But it didn't stop there. Gartner's Tom Bittman weighed in on private clouds as part of discussing Cisco's "unified computing" announcements (a subject for another day). "There is huge industry energy pushing in the direction that will make internal computing more real-time, on demand, adaptive, dynamic, unified," Tom said. "...What was custom will become packaged, and we will see a growth both in the numbers of cloud computing providers and in the number of organizations that feel they are building 'private clouds' to be used only by their internal customers." A pretty strong endorsement, there.

One of the other Gartner bloggers, Dan Sholler, revealed this "shocking truth about 'private clouds'" among others: "Private clouds (or whatever we end up calling them) are about combining the thing that infrastructure folks are doing with the things that the app folks are doing."

Here's where things got interesting: the concept of hybrid cloud computing

Throughout these discussions, many pondered the future of cloud computing. In his thorough, well-articulated piece on the reason private clouds make sense, Chuck Hollis of EMC envisioned a future phase of cloud computing that includes "federated service providers that provide customers choice." It's not hard to extrapolate from there to include your internal systems as one of those choices. But, says Chuck, "it's early days indeed. Near as I can tell, there are only a few service providers who've built environments from the ground up to [receive] virtualized applications and [information] -- and provide back to IT the control they need."

Then, this weekend, John Foley of InformationWeek published 10 predictions about cloud computing in 2009 (I guess I wasn't the final Top 10 list after all!). "IT departments will create public-private hybrid clouds. They'll use virtualization, APIs, and platforms like Elastra's Cloud Server to devise cloud-like environments in their own data centers that work seamlessly with public cloud services. ...Some experts talk of 'private clouds' as an alternative to public clouds, but hybrid clouds mix the best of both worlds."

So, in addition to arguing about whether internal clouds are something most large enterprises will pursue in the near term (for what it's worth, we are working with some significant organizations, both commercial and public sector, that are already partway through internal cloud implementations, even if that's not exactly the term they're using), this conversation also moved forward to the next step: hybrid clouds. That is, as John explained, the possibility of using both public and private. Both external and internal cloud computing. Simultaneously.

Why hybrid cloud computing is even more interesting than internal clouds. And farther off.

"Regular" cloud computing is about leveraging outside compute resources "by the drink." Internal cloud computing is about applying those same concepts to the infrastructure you already have in your data center. Internal cloud computing, though, is a reaction to the current (but not necessarily permanent) inadequacies of using cloud services. These are inadequacies like having to rewrite apps in order to run them using cloud-based resources, or not being able to handle security or compliance sufficiently. Or like the lock-in that can result from using many of the services available today.

Now, if you could get your internal cloud up and running AND iron out the devil-in-the-details issues with external cloud computing, a hybrid between the two does, indeed, sound like the best of both worlds. You would have an internal infrastructure that uses your existing hardware, software, and networks very efficiently by dynamically balancing your computing demand with your supply. And when even your internal supply is inadequate (or doesn't meet your policies for some reason), you could lean on the cloud providers to help you through a sudden, unexpected spike in demand from customers -- like, say, everyone trying to watch the aforementioned inauguration from the laptops in their offices, without receiving the message I got from CNN's site, which went something like: "Congratulations, you made it, but so did everyone else. You are now in line for the next available spot for our live Internet video feed." (I stuck it out, by the way.)

With hybrid clouds, you'll need some way to make it OK to move workloads out to the cloud and back again. You'll need a policy engine so your systems can figure out when to do this, and with what. And those are just a couple of serious requirements off the top of my head, for starters.

And I guess that's my point. The 451 Group and others have called this hybrid cloud capability the Holy Grail of computing. I've heard our CEO, Bill Coleman, talk about a world where (eventually) your systems constantly check in with your set of external compute providers, compare what they can supply with what you have running internally, and then have your infrastructure direct your apps to run here, there, or everywhere, depending on how prices or other conditions look on any given day/hour/minute. (This is where "follow-the-moon" computing comes in -- picture an option where your workloads run wherever computing is cheapest in the world at any given moment.)
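
To make the follow-the-moon idea concrete, here's a toy sketch of the core placement decision: given a list of providers and their current prices (both the names and the numbers are invented for illustration), pick the cheapest one. A real policy engine would weigh compliance, latency, and migration cost too, but the basic comparison looks like this:

```shell
# Hypothetical provider price list (location, $/CPU-hour) -- invented numbers.
cat > /tmp/prices.txt <<'EOF'
us-east 0.10
eu-west 0.08
ap-east 0.12
EOF

# Pick the cheapest location for the next batch of work:
# sort numerically on the price column, take the top row, print the name.
sort -k2 -n /tmp/prices.txt | head -n 1 | awk '{print $1}'
# prints: eu-west
```

The interesting engineering is everything around that one-liner: keeping the price feed current and actually moving the workloads.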

But like tooling up to make hybrid cars, this is not something that's going to appear overnight in your IT department. That's why I'm betting we'll see some pretty robust implementations of both cloud-style architectures running inside your data center and outside of it in the cloud before you see anyone start mixing the two.

It's worth quoting Dan Sholler at Gartner again at this point: "[A]s with all innovations, the chatter will die down as we all get our hands dirty implementing this in practice." If you're seeing evidence one way or the other, I'd love to hear about it.

Friday, January 23, 2009

The last of the year-end Top 10 lists

This year I sent most of my holiday cards out with a January postmark on them. Maybe it's only fitting, then, that I waited until today to post a year-end list of sorts here at Data Center Dialog. I figured I'd let everyone else have fun pontificating first, and then, after the, um, clouds had cleared, I'd publish a 2009 List of Lists for the data center/IT ops crowd.

So, here are some of the items I thought were worth noting from the What-the-Heck-Happened-in-2008 Lists, plus commentary on the How-in-the-World-Do-I-Make-Sure-Cloud-Computing-is-on-my-List-in-2009? Predictions:

Cassatt Data Center Dialog List of Lists for IT Ops Folks Thinking about Clouds

1. For a list of the things that went on in this space last year, check out the 2008 "Cloudies." John Willis put together this first annual awards list back in December. John noted a couple of the bloggers on our blogroll (congrats James Urquhart and Reuven Cohen) for their influence and probably for their prolific output about topics in this space. They are great people to follow if you want to stay in the middle of the cloud computing discussion. (I'm also proud to say we ranked alongside 3Tera as a "Best Private Cloud Vendor.")

2. Thoughts on the cloud computing trend: From Dave Rosenberg of The Register’s "A crack in the madness of clouds: Sanity check 09?": "What will we see in 2009? Sadly not a miraculous understanding [of the term "cloud"], but instead a glimmer of hope that the cloud can live up to the hype." That would make my year, certainly. Expanding from there: "...Does this [cloud computing] concept apply to a corporate data center? Absolutely. Internal clouds will come to fruition as companies uncomfortable with the security or offsite nature of internet clouds start to figure out ways to achieve a high – if not infinite – level of scale internally."

3. A bit of advice that we can all use, now and again: "How to be less stupid in 2009" from Dennis Howlett writing Irregular Enterprise at ZDNet. Two things from this one seemed valuable. First, a bit about IT and the business side of things working together: "If you're a geek, teach the suites something new. If you're a suit, then understand at least some of these guys know more than you do about running a business." The IT equivalent of "Can't we all just get along?"

4. And, also from Dennis, a bit about the cloud: "Don't get stuck in the cloud. There are plenty of people pitching cloud computing as the next big thing and maybe they're right. But recognize that this is only one face of enterprise computing. Just cuz we can do it doesn't always mean we should and despite the column inches to the contrary, Google and Amazon are not the center of the universe." Dennis starts to say no one's really going to adopt a cloud model, but backs off a bit. "I don't see too many CFOs giving up their precious financial apps to some cloud provider anytime soon. But...however you want to define The Cloud, we've been doing it for years. Think instead of _aaS. Service based computing offers the opportunity to move away from capex to opex. Given the state of the global economy that should be a no brainer. Even so, pick your targets carefully." Totally agree.

5. Network World rattled off their "Hot Technologies for 2009" and cloud computing made the list, as you'd expect. Green IT made the list, too. Some comments I liked from author Neal Weinberg: "As we arrive at 2009, cloud computing is the technology creating the most buzz. Cloud technology is in its infancy, however, and enterprises would be wise to limit their efforts to small, targeted projects until the technology matures and vendors address a variety of deal-breaking problems." He also notes the rise of private clouds, but I can’t say I feel he got the definition right. His pragmatism is dead on, though.

6. Green IT, Neal says while highlighting that topic for the same Network World article, is about a new worldview, especially with the economic downturn taking hold. "Can you afford to be green? Can you afford not to?" he asks. "...[G]reen IT doesn't stop at the data-center door, and companies can't just pass the buck to facilities managers. IT departments can and should undertake a number of green initiatives -- which won't break the bank either." Neal lists a number of pretty broad but basic actions, starting with one of our favorites: "Power down unused servers or desktops." As hesitant as IT can be about this one, there's a lot to be gained from investigating this approach, especially in dev/test environments. As I've commented before, progress has been slow on green IT, but the benefits are there.

7. Martin Veitch from CIO Mag (in the U.K., as you'll see from the spelling) talked with John Mahoney of Gartner for his list of "New Year revolutions." The two things I pulled out of that discussion: "Start taking cloud computing seriously. Cloud computing uptake offers a completely new way of provisioning IT so it might pay to work on understanding it now, especially as cost saving is a key promise of the model. ‘Some organizations are already well down the road but the mainstream will be still in the "trough of disillusionment,"' Mahoney says. 'The cloud is susceptible to the conclusion that this is not terribly productive in the short term. Our strong feeling is that this is not something you can leave until later.'" His ending suggestion: build mini-cloud applications to get "hands-on" time with this key technology. Again, pragmatism.

8. Mark Fontecchio pointed me to TechTarget's just-published SearchDataCenter Products of the Year 2008. At least I'm not the only one publishing lists in late January! The technologies they chose include lots of new approaches that change the way you run your data center. (Even if I am partial to ours, instead of some of the things they chose.)

9. "The 11 Stupidest Moments in Tech for 2008" from Mark Sullivan at PC World. Some classics here. And not a word about cloud computing, you'll be happy to hear. From Microsoft's Jerry Seinfeld-filled randomness to Princess Leia reporting live for CNN on election night, this is just a fun read. Though, the comment about the Twitter-spread rumor of Steve Jobs' death isn't so funny seen in light of recent revelations about his health.

10. Finally, where is this all headed? I'll leave you with a couple comments from our own CEO, Bill Coleman, and what he sees for 2009 as picked up by SYS-CON Media. "Prediction #1: In 2009, it's all about 'the cloud.' Save money, move fast, and...ooops. Expect some problems. But remember, it's only Cloud 1.0. There's still work to do. Oh, and virtualization alone is not enough to make it all work." He goes on to make more predictions for this year about Google, social networking as our "platform for life," and identity.

Happy New Year. Or maybe I'll make a resolution to be early for something: Happy Groundhog Day!

Wednesday, January 21, 2009

Business Agility: Using internal cloud computing to create a computer animation render farm in less than a day

As I mentioned in my last post, cloud computing is all about the "-ilities" of computing environments. In this post, I want to spend some time talking about internal cloud computing and how it can dramatically enhance a company's business and IT agility.

Many of the customers I speak with have the same problem with their IT infrastructure. Specifically, they believe they have plenty of excess compute, storage, and network capacity in their existing IT environment to host additional applications, but they just can't get access to it. Unfortunately, due to the stove-piped approach often taken in allocating hardware to applications, this excess capacity is locked up, awaiting the day when the planned peak load comes along.

What they would like instead is to be able to dynamically allocate resources in and out of the "hot" applications as needed. This reduces the need for idle hardware in fixed silos, cutting power/cooling costs (the idle resources can be powered down) and shrinking the overall number of spare resources, since the excess capacity is shared among the apps -- and not all of the apps will hit their peak capacity at the same time.
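
A quick back-of-the-envelope illustration of that last point, with made-up hourly load numbers: siloed hardware has to be sized for the sum of each app's individual peak, while a shared pool only has to cover the peak of the combined load, which is smaller whenever the peaks don't coincide.

```shell
# Two apps' load over three hours (invented numbers); app A peaks at 80,
# app B at 70, but never in the same hour.
printf '10 70\n80 20\n40 40\n' |
awk '{ if ($1 > pa) pa = $1        # track app A peak
       if ($2 > pb) pb = $2        # track app B peak
       s = $1 + $2
       if (s > pc) pc = s }        # track peak of the combined load
     END { print "sum of peaks:", pa + pb    # silo sizing: 150
           print "peak of sum:", pc }'       # shared-pool sizing: 100
```

In this toy case, a shared pool needs a third less capacity than the silos would -- and that gap grows with the number of apps you pool.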

With an internal cloud computing infrastructure, that is exactly what you can do. I'll walk you through a little example of how we supported a completely new application in our lab in less than a day. First, a little background: it turns out that a few of us here at Cassatt are computer animation hobbyists, and one of us (Bob Hendrich) was trying to figure out where he could render a short film he was working on with his kids for school. He could always pay for time on an existing render farm (ResPower, for example), but as a test, we wanted to see how long it would take us to use Cassatt Active Response as an internal cloud computing infrastructure and re-purpose a portion of our test/IT resources into an ad hoc render farm for rendering a Blender movie (you can check out a few seconds of a test render we did here).

In our case, as we already use Cassatt Active Response to manage our internal cloud computing environment, we didn't have to install our product. If we had needed to do an install, we probably would have tacked on a few days to set up a small environment, with most of that time going to racking and cabling the compute nodes (if you want to see more on how that's done, check this out). Anyway, as we already had a Cassatt-controlled environment set up, the first task was setting up a machine with all the software required for a render node. This step allowed us to configure/debug the render software and capture a baseline image for use within Cassatt Active Response.

A quick sidebar for those who may not know much about Active Response: it captures a single "golden" image for a given service and then replicates that image as necessary to turn up as many instances of that service as desired. Since we wanted to build out a service made up of an arbitrary number of instances, we had to start by capturing the "golden" image and then let the Cassatt software handle the replication. (Active Response not only does the image replication but also handles most of the required instance-level configuration, like hostnames and IPs, for you.)

OK, back to the main thread. To set up the image host, we snagged an existing CentOS 5.3 image and kickstarted it onto a spare box in the lab (elapsed time so far: 45 minutes).

Once CentOS was installed, we had to install/update the following to support the distributed Blender environment.

• nfs-utils
• Python
• Blender 2.48
• JRE 1.6
• Distriblend or FarmerJoe (both open source render farm utilities)

In addition to these software updates, we needed to export a directory via NFS into which the client nodes could dump their rendered files. The render software works like this: a single load manager distributes parts of a frame to the machines available in the farm. That means all the nodes need a commonly accessible file system so they can coalesce the different parts of the frame back into a single coherent frame. In our case, we just shared a disk off of the singleton distribution node to act as the collation point.
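To make the collation idea concrete, here's a rough sketch of how a distribution node might decide which frames are complete in that shared NFS directory. The file-naming scheme (`frame_NNNN.partXofY.png`) is invented for illustration; the real render utilities (Distriblend/FarmerJoe) have their own conventions.

```python
# Hypothetical sketch: each render node drops its piece of a frame into the
# shared NFS export; the distribution node considers a frame done once every
# part has arrived.
import os
import re
from collections import defaultdict

PART_RE = re.compile(r"frame_(\d+)\.part(\d+)of(\d+)\.png$")

def complete_frames(shared_dir):
    """Return the frame numbers whose parts have all landed in shared_dir."""
    parts = defaultdict(set)   # frame number -> set of part numbers seen
    expected = {}              # frame number -> total parts expected
    for name in os.listdir(shared_dir):
        m = PART_RE.match(name)
        if not m:
            continue  # ignore anything that isn't a frame part
        frame, part, total = (int(g) for g in m.groups())
        parts[frame].add(part)
        expected[frame] = total
    return sorted(f for f, got in parts.items() if len(got) == expected[f])
```

Once a frame shows up as complete, the load manager can stitch its parts together and hand out the next chunk of work.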

The final step in creating the golden image was to configure the required daemon processes to start on boot. With this completed, we ran cccapture to peel the image off the image host and store it in the Cassatt Active Response Image Matrix (that's Cassatt's fancy name for the place where images, both goldens and instances, get stored). (Elapsed time: 3 hrs.)

With the image in Active Response, we moved on to building out the two tiers needed to host the render farm (one for the distribution service and one for the rendering service). In Cassatt terms, the tier is where the user configures all of the policy they want enforced: things like how many nodes must be booted to provide service, what approach to use to grow/shrink capacity as a function of demand, how hardware should be handled on failure, and what, if any, network requirements the service has.

The first tier we created was for the singleton service that manages the render distribution. This tier was set up to allow only a single node, per the render software's architecture. In Cassatt-speak, that's a min=one, target=one, max=one tier with no dynamic service-level agreement (SLA) configured.

With that tier created, we moved on to the tier that would house the render nodes. In our case, we wanted Active Response to manage the allocation of nodes based on the incoming load on the system. To do this, we configured our growth management policy as Load Balanced Allocation (LBA), using SNMP load1Average as the LBA control value. We then set the tier minimum to one node, set the maximum to six nodes (the most we wanted to use in the render farm), and set the tier to idle off after three minutes of inactivity.

An aside here for anyone who doesn't know what LBA is or what an idle timer is used for. As we kick the renders off overnight (they take hours to complete), we want to save power and have the nodes all shut down automatically when the render is complete. LBA will scale a tier up or down automatically based on the load against the tier, but it will only ever shrink a tier to the user-defined minimum size (in our case, one). As we didn't want even a single node running for multiple hours due to power consumption (we're a pretty green bunch here at Cassatt), we set up an idle timer in the tier policy that says to go ahead and shut down even the last node if it sits idle for a specified time period (in our case, three minutes).
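The policy just described can be boiled down to a toy decision function. This is not Cassatt's actual algorithm (and the thresholds are invented for illustration), but it captures the behavior: grow while the averaged load is high and the tier is under max, shrink toward min as load falls, and let the idle timer power off even the last node.

```python
# Toy model of the tier policy: min=1, max=6, idle-off after 3 minutes.
def desired_nodes(current, load_avg, idle_seconds,
                  min_nodes=1, max_nodes=6,
                  grow_above=2.0, shrink_below=0.5,
                  idle_timeout=180):
    """Return how many nodes the tier should be running."""
    if load_avg > grow_above and current < max_nodes:
        return current + 1          # overloaded: boot another render node
    if load_avg < shrink_below and current > min_nodes:
        return current - 1          # underloaded: return a node to the pool
    if current == min_nodes and idle_seconds >= idle_timeout:
        return 0                    # idle timer fired: power off the last node
    return current
```

Run through the overnight scenario and you get exactly the behavior described: nodes boot while frames are queued, drain off as load drops, and the final node shuts down three minutes after the render finishes.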

OK, back to the render tier config. We next specified that the tier only grab four-CPU nodes or better to maximize performance of the render (the software is tuned to get the biggest bang for the buck on multi-CPU machines). Networking in our case was not an issue; we just used the Active Response default network, so there wasn't any network-specific policy to enter. With the tier definitions completed, it was time to allocate nodes into the tiers and activate them so we could take the first test render for a spin. As we had set the minimums to one for both tiers, only two nodes were allocated and the services were brought online.

This concept of allocation may be a little foreign to folks not familiar with the cloud computing paradigm, so I'll explain. With a cloud computing infrastructure in place, the user doesn't manage specific hardware and its associated image (today's traditional model), but rather manages the service images and the available hardware separately. The infrastructure then handles all the work of binding the desired image(s) to the required hardware as needed by the service. This loose coupling is at the heart of cloud computing's dramatic business agility enhancements; the same hardware can be used by different services as user policy dictates (where that policy can be schedule-, demand-, or priority-based). (Elapsed time: 4 hrs.)
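A simplified sketch of that binding step may help. The data model here is invented (it is not the Active Response API): hardware sits in a free pool, and the infrastructure pulls qualifying nodes into a tier and binds the tier's golden image to each one.

```python
# Hypothetical allocator: grab up to `count` nodes from the free pool that
# meet the tier's hardware requirement, and bind the tier's image to them.
def allocate(free_pool, tier, count):
    """Pull up to `count` qualifying nodes from the free pool into the tier."""
    granted = []
    for node in list(free_pool):
        if len(granted) == count:
            break
        if node["cpus"] >= tier["min_cpus"]:
            free_pool.remove(node)
            node["image"] = tier["golden_image"]   # bind the service image
            granted.append(node)
    return granted  # fewer than `count` means the tier is resource-limited
```

Note that when the pool runs out of matching hardware, the function returns fewer nodes than requested; that's the moment Active Response would flag the tier with a warning state, as happened with our fourth render node below.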

Now the cool part. With the two tiers up and running, we handed a job to the render distribution node; the render tier immediately picked up four frames to render and went to pretty much 100% on all four CPUs. Load1Average went through the roof, as depicted on the graph on the tier page. We had set the LBA parameters to monitor every 15 seconds and average over 60 seconds. Within just a couple of minutes, Active Response booted a second node, as the service was deemed overloaded. The second node, once booted, immediately grabbed frames from the distribution node to render, and it also pegged on CPU utilization. A couple more minutes passed and the Cassatt software booted a third node to try to address the service's load spike. As the render tier had a max of six instances, the system continued to try to get another piece of hardware to activate a fourth render node, but since none of the hardware available in the free pool matched the requirement, it was unsuccessful. (Active Response let us know that the tier was resource-limited by declaring it in a warning state in the UI. Had we set up email notifications, we would also have gotten an email to that effect.)

Load1Average stayed pretty high, as expected, until the nodes started running out of work. At that point, load1Average started dropping and when it got below the configured min value, the tier first dropped out of being in a warning state (it no longer needed nodes). A minute later, the tier started shutting down and returning nodes to the free pool. Once all the frames were rendered, Active Response shut the remaining node down after it was below min for the three minutes we configured (but kept that node allocated to the tier as the min was set to one).

Now, the really, really cool part. If we had access to another 100 machines, we would not have to do anything else to use them except create a wider render tier. We would use the same image, and Active Response would handle the instance creation just as it did for the initial six, but up to whatever tier max we set. Literally, in 15 minutes we could create another tier and run the test again with 100 nodes, and the system would use as many nodes as it needed to complete the job. In addition, in an environment of 100 nodes, we could use those nodes for other jobs or other applications; if we set the priority of the render tier above the others, the render tier could steal nodes in use by lower-priority tiers, and those tiers would get the nodes back when our render was done. We would not have to touch anything to make this happen at runtime; Active Response would simply be enforcing the policy we set up.

Well, I thought I'd close this post out with a bit of a recap, as we covered a fair amount of ground. An internal cloud computing infrastructure is the enabler for a substantial increase in business and IT agility within your organization. By decoupling the physical resources (hardware, storage, and networking) from the specific applications they support (the application images), it allows the infrastructure to provide capacity on demand for any managed application, as required by the current load against that application (no more having to guess at the peak load and then keeping all that spare capacity sitting idle, waiting for the rainy day to come). In addition, you as the user can update the allocation policy as needed to keep the allocation of resources in line with the importance of the application(s) being managed.

As an example, we took a new application (a render farm) and hosted it in Cassatt Active Response (the internal cloud computing infrastructure for our example). We were able not only to get the application up and running in less than a day, but to host it on existing IT/lab resources over the weekend and give them back by Monday morning for their normal uses. In other words, we hosted this new application simply by using the spare cycles already available in our IT infrastructure, rather than purchasing new hardware for the purpose, as is typically the approach taken today.

Next week, we're going to spend some time talking about disaster recovery (DR) and how Active Response, acting as your internal cloud computing infrastructure, can provide DR capabilities that to date you probably thought were unattainable.

Monday, January 19, 2009

Are internal clouds bogus?

Two articles in the past week have questioned the legitimacy of one of the industry's favorite new terms: internal cloud computing. I thought it was a good time to weigh in on whether or not internal clouds are legit or just a bunch of vendor marketing hype that data center managers should ignore.

Andrew Conry-Murray at InformationWeek wrote that "there's no such thing as a private cloud." Specifically, Drew said: "I don't think the notion of a private cloud makes any sense. In my mind, a key component of the definition of a cloud is that you, the enterprise, don’t have to host and run the infrastructure. All that stuff is supposed to be out of your hands."

Eric Knorr of InfoWorld had a similar comment, but different reasoning. He heard pitches about internal clouds from vendors and interpreted what he heard as a hypefest from Dell, IBM, and others (I bet he'd throw Cassatt in that category, too, though we haven’t talked with him). Their pitches, he said, reeked of the unachievable, "automagic" IT nirvana he'd heard about for years, but never seen delivered.

In general, I say "amen." Picking up on buzzword-compliant terminology without a way to deliver on it is a waste of everyone's time. But does that mean the "internal cloud" term is a bust? Nope. Here's how we look at the internal cloud debate here at Cassatt:

A cloud by any other name

Drew's issue is not whether or not the idea of more dynamic capabilities is a good one. In fact, he finds some of the technology pitched as "private clouds" to be pretty compelling (enough, he says, to continue following some of the vendors pitching it to him).

Instead, he's drawn a line in the sky and said that if it ain't a service coming from outside your data center, it ain't cloud computing. That's fine. Having clear definitions is important, especially while a given technology or idea is careening up the hype curve.

However, here's what we're seeing: Cassatt has been talking about how our software delivers the benefits of cloud-style architectures using your existing computing infrastructure since the company's inception in 2003 (arguably more eloquently at some times than at others). Yet, as Craig Vosburgh noted in his posting here a few weeks back, the idea has had other names along the way. The problem we've seen is that none of those terms had a Trout & Ries-style "position" in people's heads. IT ops folks had no category to put us into. In fact, most of the terms ("utility computing," "on-demand computing," "autonomic computing") have been greeted with a profound glazing of eyes at best, and extreme skepticism at worst, often accompanied by "I'm hoping virtualization will do that for me."

The understanding of cloud computing (and the market) is evolving

Within the past few months, however, we have noticed a pretty interesting shift as the cloud hype has accelerated. When we use cloud computing as a reference point in trying to explain the kind of thing our software is capable of, suddenly people have a mental model to discuss the dynamic, on-demand nature of it. They've heard the cloud computing idea. And they like it. And that gives us a starting point for a real discussion.

Of course, they don’t necessarily like everything about cloud computing today. Not everything can be migrated to run in Google or Amazon’s cloud immediately. In fact, we are asked how we can help folks get the benefits of what Drew and others call "real" cloud computing, but to do so for their existing apps and infrastructure.

Another example: we noticed the same change in talking with people in the exhibit halls at the recent VMworld and Gartner Data Center conferences. In previous years, we struggled a bit to make clear to someone walking up to the Cassatt booth what it is we do and how that might benefit their day-to-day job, data center operations, and bottom line. Often we'd have to hand them some product lit for deeper study.

However, starting at VMworld, we've talked to booth visitors about how we can help them create a cloud-style architecture from their existing IT systems. Click. Not that suddenly everyone can rattle off our product data sheets, but there's a logical starting point. And I don’t think it was just the glowing green swizzle sticks we were giving away.

Even more interesting: customers are coming to the realization that virtualization is not going to solve everything for them. It's just a starting point of the change in store for their approach to IT operations.

So maybe it's a matter of semantics and nomenclature. But it certainly feels to me like a step in the evolution of this market. And these things always do tend to evolve. Example: the same publication, InformationWeek, that ran Drew's article that was skeptical about internal clouds also ran an article by John Foley a few months back, saying that private clouds were, in fact, "taking shape." Last week, John also posted a blog entry about his meeting with our CEO, Bill Coleman, and our discussion about – you guessed it – private clouds. So the evolution of the conversation continues.

Skepticism about what automation can do today

Now, Eric's comments in InfoWorld seem to be less about the term "private" or "internal" cloud and more about the tendency for IT vendors to pitch capabilities and results from their fabulous products that are beyond belief.

He's usually right. Shame on us vendor types. Customers and journalists should be especially careful when promises are coming from big, established vendors who actually have a lot to lose economically from dramatic innovation and optimization in how your data center capital is being used. You do the math: on-demand data center infrastructures mean LESS capital spending, at least in the short term. Don’t tell me the hardware guys at Dell or IBM think that's in their best interest. Appirio made similar observations in their blog predicting the fall of private clouds. (By the way, I disagree with Appirio's premise that organizations that will get benefit from an internal cloud are "few and far between." I'm betting that if you have several hundred servers, you'll benefit from breaking down your application silos and pooling your resources.)

I'll tell Eric, however, that despite how "left at the altar" he might feel about utility-style computing, he shouldn't give up hope. There's real, honest-to-goodness innovation showing up in this space. And, we see actual end users, not just cloud providers, investigating it. For what it's worth, since we started using the "internal cloud" term, we've seen a strong uptick in the number and quality of inquiries about our software. Customers are looking to solve a real problem.

Try a self-funding, incremental test of internal clouds

And, knowing that Eric's reaction is not atypical, we (for one) suggest that customers take a very incremental approach. We suggest picking an app or two (only) for initial evaluation, probably in your dev/test environment. Use that to see what's really doable today.

Then, use the savings from your first, small implementation to fund any further roll-out. That way, if it doesn't save you anything or improve things in IT ops, you don't proceed with any more of an implementation.

But, if (somehow) your test of internal clouds actually delivers, you'll have both the proof and a plan to move ahead by leaps and bounds.

No matter what you call it.

(You can register for Cassatt's white paper on how to get started with internal cloud computing here.)

Friday, January 16, 2009

The 'Bio Data Center': working with Bull on data center efficiency in Europe

Back in November, I mentioned that we had been working with some big customers to improve the way they run many of the largest data centers in North America. With Tuesday's partnership announcement with Bull, Cassatt is expanding that focus to Europe as well.

If you didn't see the announcement, here's a very short recap: Bull has been working on a strategy to help its European customer base improve the efficiency of their data centers. They call it the "Bio Data Center." Sounds a bit esoteric, but the concept is very similar to what Gartner calls a real-time infrastructure.

In short, they are working on ways to improve IT operations processes, data center architecture, and power consumption. Which, of course, sounded very familiar to a lot of what we're trying to do.

So, after a bit of investigation (including some pretty serious technical evaluations), Bull is putting Cassatt squarely in the center of their efforts. The joint work will give European organizations a way to get access to the Cassatt Active Response software without having to wait for us to build out a European field organization. (Having lived in London for 3 years while with BEA, I'm hoping that a Cassatt presence in Europe comes sooner rather than later, but I'll be patient for the moment.)

The partnership means Bull will distribute our software in Europe and other select geographies. Our software, as a reminder, uses policy-based management to control and optimize the diverse data center resources that support your applications (hardware, software, virtual machines, networks) without requiring you to change any of the pieces.

The idea is to take incremental steps that drastically cut your operations costs (our models show most folks can cut 30% pretty easily for starters) with the goal of simultaneously making your infrastructure able to match your business much more effectively.

To help get started, customers in Europe will be able to turn to Bull for strategic guidance and implementation help. We're already working with them on detailed training so their people on the ground in France, Germany, the U.K., and other parts of Europe can be as up-to-speed on our stuff as they are on the issues that data center managers in Europe are facing.

That's something Bull is bringing to the party: direct experience in those European data centers, which I always found to be at different stages of IT evolution than their American counterparts.

Here are a couple examples of what Bull and Cassatt see "over there":
- A much more activist role of government in creating "green tape" (as the 451 Group calls it) -- government regulations around energy efficiency in data centers
- A more pervasive adoption of methodical, structured processes like ITIL. Many data center managers over there make us over here look like cowboys shooting from the hip.
- A slower adoption and healthy skepticism of cloud computing
- The recession has not hit markets like France with the immediacy and profound impact on IT budgets that it has here in the States. Yet.

We look forward to collaborating with Bull in the coming months and years. I'll post updates as our work across The Pond progresses. If you're a European organization interested in finding out more, drop us a line.

Monday, January 12, 2009

'Bottle the magic': 451 Group suggests clouds to beat the recession

The 451 Group is brimming with personality. I know, that's kind of a weird commentary about an IT analyst firm, but they, of all the groups I interact with in the IT operations space, don't check their quirks, sense of humor, and appreciation for irony at the door. And that's a good thing.

In reading their recently published year-end analyses and predictions about cloud computing, eco-efficient IT, and virtualization, however, I think they ran into something that forced them to keep some of their trademark commentary in check: there was a lot of stuff to write about.

Anyone watching the emergence of the cloud computing market in the past year has to feel a bit out of breath. The speed with which things appeared, changed, and changed again, was stunning ("a faster rise than the Web itself," notes the 451 Group's William Fellows and Antonio Piraino). In fact, their 2008 wrap-up and 2009 look forward, as a result, actually read a bit more like a list of the many vendors and trends that entered (or will) enter the scene. Not much space for pithy comments, though they managed to squeeze in a few nonetheless. (Besides, they have the rest of the year for that, right?)

Between gulps of air, though, Fellows and Piraino did manage to sift through the many pieces and parts and articulate some interesting tidbits and trends. I'll highlight a couple here.

An internal cloud is part of a continuum, which includes grid and utility computing. A lot of the conversations in 2008 were about what all this cloud stuff actually is. The 451 Group noted that internal clouds were one of the things they hadn't predicted to appear in 2008, but had shown up just the same. They called cloud for enterprises "grid done right" (right before pronouncing "grid is dead"). This perspective doesn't surprise me, given their strong background in following the grid computing market.

They did catch the subtle differences between grid and cloud computing: "Where grid concerns a fixed resource pool, the cloud implies a flexible pool; grid provisions capacity where the cloud provisions services; grid is science and the cloud is business." They saw a sweet spot of activity from their customer base (which I know has a good sampling from the huge financial firms for starters) in "the creation of shared internal resources, or clouds, whether or not they are sold internally as that." They even suggest selling an internal cloud project under some other name if the hype is just too thick for your organization to handle. Bottom line is that in 2009, "[c]reating internal clouds is now on the agenda for early adopters."

"Cloud is a way of using technology, not a technology in itself…and economic imperatives will drive it." Fellows and Piraino noted in their 2009 cloud preview that virtualization and automation are change agents in all this (that virtualization comment echoes sentiments from IDC's Al Gillen). "Accelerated consolidation, virtualization and automation strategies will be the building blocks of shared infrastructures," they say, "delivered as a self-service utility with retail discipline. This internal cloud is the logical end point of these combined activities." And, they added, the recessionary forces at work mean that "the opportunity to move spending from cap ex to op ex can't be underestimated."

Here's the good news I read into that, from another part of that same report: the cultural and organizational barriers that have been the biggest inhibitors for adopting shared infrastructures may have finally met their match. The 451 Group says that the recession, painful as it is, could actually be the thing that gets these barriers to fall. (It only took a global financial meltdown -- is that saying something about our industry?) Companies will do whatever they need to in order to "weather the current storm." Says Fellows in a different report on this same topic: "given the uncertain conditions and focus on cost, we think the cloud computing economic model will win out."

As for 2009 predictions, what's their tip for beating the recession/credit crunch? "Start to use cloud computing." Pretty clear, if you ask me.

Key challenge for 2009: how do you "bottle the magic" of providers like Amazon and Google? The key challenge enumerated in all of this, though, is to "bottle the magic" that makes folks like Amazon and Google "so cost effective and so elastic" and apply it to any type of cloud implementation -- inside or outside your data center. Now, Gartner talks about how you actually can run your IT systems for half what Amazon can do for you with EC2. Fellows and Piraino sound like extreme skeptics on that point: "The question is whether IT groups can deliver. ...Is it cheaper to do it in-house? Are existing management tools adequate? No."

Now, they missed pointing out that even "the magic" has some pretty big flaws. It's cost effective, yes, for the applications that can live within their constraints. It's elastic in a way not seen before in how companies own and use compute resources, but it is still very coarse-grained in what/how/when users can change. It's still waiting for a move toward policy-based automation to control the compute resources, enabling much less manual effort.

The 451 Group's further comments on a do-it-yourself approach are typically and justifiably what I'd call "skeptical until proven otherwise." For cloud control, they say, don't bet on things suddenly being less complex. "Virtualization and fungible resources introduce new management requirements out of reach of most of today's tools."

So what tools might those be? Well, they predicted that vendors like Scalent, DynamicOps, Q-Layer, and other private cloud enablers are likely to continue to get traction and attention. (The last company they named has certainly wasted no time.) In a separate report focusing on Cassatt's use of policy-based automation to support internal cloud computing, Fellows notes something that might help us stand out from this crowd: "unlike other vendors, [Cassatt] uses existing infrastructure, doesn't alter existing software, doesn't introduce new layers or agents." We certainly think those items are key, especially with rough economic waters threatening every organization from all sides.

(As an aside, Fellows did get a chance to use his fondness to point out market ironies in that Cassatt report. "It's not a little ironic," he noted, "that Cassatt's entire value proposition since it was formed has been targeted more or less on the anticipated benefits of internal cloud computing -- although, it wasn't called cloud computing until recently.")

What’s next? The "holy grail" is federating internal and external clouds. Fellows and Piraino set up a pretty big challenge for 2009 as well. "…[T]he Holy Grail of cloud computing may very well be the ability to seamlessly bridge both private clouds (datacenters) and remote cloud resources like Amazon...(EC2) in a secure and efficient manner." Their summary on this? "It will take time." Yup. No argument there. Lots of steps are being taken, but lots of work to do still.

So you can track down the source material (if you're a 451 Group client), I've listed and linked below to the three reports referred to in this post. The 451 folks also do a cloud blog you should check out.

- 2008 review -- Cloud computing, William Fellows and Antonio Piraino, Dec. 8, 2008
- 2009 preview -- Cloud computing, William Fellows and Antonio Piraino, Dec. 19, 2008
- For Cassatt, the cloud is the next driver of datacenter efficiency, William Fellows, Nov. 18, 2008

Wednesday, January 7, 2009

IDC's Al Gillen: Is a bad economy good for cloud computing?

Today's post is Part 2 of our recent interview with Al Gillen, program vice president of system software at IDC.

In the first part of our interview with Al that I posted yesterday, Al noted that virtualization is still on the rise, but more because of use cases like HA and DR than the traditional drive to simply consolidate servers. He saw systems management taking a much more important role, and virtualization serving as a "proof of concept" for cloud computing. The move to cloud computing is a transition that he says we will be continuing to talk about and work through "for the next 15 years." To me that’s a good indication of the impactful, fundamental change possible with the cloud. And the work that's still ahead of us.

In today's post, I asked him a bit more about cloud computing and one of the questions that's on everyone's minds: despite all the excitement on the topic (and, yes, the cloud computing PR train continues unabated in the new year, if my inbox is any indication), how will a dire economy affect all of this? His answer: don't expect people to suddenly make radical changes in dangerous times. They'll do what makes sense, after seeing some relevant proof points.

Here's the final excerpt from the interview:

Jay Fry, Data Center Dialog: Here's the "$700-billion question": I've heard two different schools of thought about how the current economic, um, tailspin will affect cloud computing. Do you see it speeding things along because of the chance to spend less (and get more flexibility), or slowing things down as IT becomes more cautious with any and all spending?

Al Gillen, IDC: This is a tough call. Cloud sounds good, but in reality, unless cloud computing really offers customers a seamless way to expand their resources, and does so at a lower cost, it will not be a short-term solution. Even if it does achieve these benefits in the first instantiation, cloud computing still has to overcome conservative concerns about things like data security, integrity, privacy, availability, and more. Corporate IT managers didn't get to their current job positions by taking excessive risks with company assets.

DCD: How long do you think the move toward cloud computing will take? What are the most important drivers and what's necessary for people to get comfortable with it?

Al Gillen: As I noted [in Part 1 of this interview], I believe that we will be talking about the transition to cloud computing for the next 15 years. Let's consider x86 virtualization, a market segment that VMware commercialized. VMware has been around for a decade now, and has had products in the market for about 8 years. Despite VMware's phenomenal success and billion-plus [dollar] revenue run rate [per year], the x86 server market remains only lightly penetrated by virtualization software. How much longer will it take for x86 server virtualization to become pervasive? Certainly yet another 5 years, at a minimum. It is unlikely that "cloud" can turn the industry upside down any faster.

DCD: You've heard talk from us and I'm sure others as well about internal cloud computing as a way to be able to help folks get the benefits of cloud computing without a lot of the current negatives -- and to do so using the heterogeneous IT resources they already have. What role do you see for internal cloud computing?

Al Gillen: Internal cloud computing represents a wonderful opportunity to get comfortable with the concept, and frankly, has the distinct potential to allow customers to lower their overall IT expense and raise their "greenness" using resources they have in place today.

On that note, I'd like to thank Al for being our first interview guinea pig. Despite the huge amount of interest around cloud computing and many vendor wishes to the contrary, what we hear from customers matches a lot of what Al says. The move toward running your data center differently comes in incremental steps, and those steps can't be rushed.

However, this sour economic climate is a great reason to champion IT projects and approaches that can save money and deliver results immediately. Who doesn't want to find a better, cheaper way to do things right now? (That's why we try to start customer conversations with the payback and results requirements -- what is it you need to accomplish with your infrastructure management? Many people start with our IT ops and server power savings calculators to give them an initial picture.)

If you're interested in any of the research supporting Al's interview comments, you can get it (provided you're an IDC client) at their website. Frank Gens also produces IDC eXchange, an IDC blog with other, unrestricted commentary (including a lot about cloud computing and their predictions for IT in 2009) that Ken Oestreich pointed out to me. Al also noted that they have a Jan. 20 webcast to review those 2009 IT predictions that's open to the public. "We will talk about virtualization and cloud there, among other topics," he said.

Tuesday, January 6, 2009

Interview with IDC's Al Gillen: in 2009, virtualization will help bridge to the cloud

When I first started this blog a few months back, I told you we'd do our best to bring you a cogent collection of useful ideas, opinions, and other relevant tidbits from those steeped in the world of running data centers. So far, that's generally taken the form of blogging from industry events (like my posts from the Gartner Data Center Conference in Vegas in early December) or commentary on IT ops topics sparked by an article or event. Our chief engineer, Craig Vosburgh, even got in on the act last month, launching the first in a series of posts scrutinizing the necessary components of cloud computing, relying on his experience in this space (and his highly tuned B.S. detector, I might add) to help cut through the hype.

And now for something completely different.

In that first blog post, I promised we'd feature give-and-take interviews with people you should know, in and around the world of IT. We're starting the new year off with a bang, in fact: today's post is the first of these interviews -- an interview with Al Gillen, program vice president of system software at IDC. When we talked to Al about Cassatt's "internal cloud" capabilities announcement in November, he had some interesting observations about virtualization, cloud computing, and where the industry is headed.

I recently had a chance to ask him about some of these comments in a little more detail, and I'm posting his responses today and tomorrow. By the way, Al was just recently promoted to run the Enterprise Virtualization Software research team at IDC following the departure of John Humphreys to Citrix. Nicely done, Al.

In today's excerpt, Al notes some shifts in the reasons people are virtualizing servers. In 2009, he says, it won't all be about consolidation. But, as the shift toward virtualization continues, the need for a way to manage virtual systems (in parallel with physical servers, I might add) becomes much more urgent. He believes this move toward virtualization is actually a bridge to cloud computing.

Here’s an excerpt of that interview:

Jay Fry, Cassatt Data Center Dialog: Congrats on the expanded role at IDC, Al. When we talked last you mentioned the industry was starting to see the "secondary impact of virtualization" -- one of which being a necessary intersection between systems management and virtualization. Can you explain a little about what you are seeing out there?

Al Gillen, IDC: The first wave of virtualization adoption was heavily weighted toward consolidation and server footprint reduction. Those use cases were a natural fit for virtualization, and to be honest, there is still lots of consolidation to take place in the industry. But users find out very quickly that server consolidation does nothing to simplify the management of the operating systems and layered software, and in some cases can make it more complex, especially if there is any workload balancing going on across the virtualized servers. Therefore, organizations that did not have a strong management solution in place before adopting virtualization will find that such a solution is their next most important acquisition.

DCD: How do you see the virtualization market changing in 2009?

Al Gillen: Our survey data is finding that consolidation is no longer the stand-out favorite use case for virtualization. To be sure, consolidation is not going away, but other objectives are growing in popularity as the primary reason for deploying a virtualized solution. High availability and disaster recovery, for example, are emerging as strong use cases today. The other factor in play here is that a lot of the largest consolidation opportunities -- not all of them, but many of them -- already have either been consolidated or are committed to a consolidation strategy and to a given vendor. As a result, the "green field" for large-scale consolidations is becoming less green and less big. However, the opportunity for larger numbers of smaller-scale consolidations is still enormous.

DCD: How will the economy affect how organizations are adopting virtualization?

Al Gillen: The economic conditions are likely to drive customers more quickly toward virtualization, simply because IT departments, which often are seen by management as a cost center, will be charged with reducing expenses while still delivering the services the business needs to be successful. Being charged with "doing more with less" has been the mandate that helped many of the transitional technologies we now consider mainstream get a first foothold. Can you say Linux? x86? Or Windows?

DCD: You probably guessed I was going to ask a cloud computing question. How do you think what's going on with virtualization intersects with or is different from cloud computing?

Al Gillen: There is no question that virtualization will serve as the bridge to bring customers from today's physical data center to the compute resource that will serve as tomorrow's data center. Virtualization serves several roles in this transition, including a "proof of concept" that you really can pool multiple smaller machines into a larger resource and begin to allocate and consume the resulting superset of resources in an intelligent, scalable, and reliable manner. But virtualization also will be the lubricant that makes it possible to slide software stacks off the hardware foundations to which the software used to be permanently attached. That said, cloud will be adopted very much on a transitional basis, and will penetrate different verticals and different platforms at different rates. I believe we will be talking about the transition to cloud computing for the next 15 years.

Next time: more from Al on cloud computing and the impact of the economic meltdown.