Wednesday, September 30, 2009

Multiple virtualization vendors in one IT shop? If so, the management challenge changes

A survey published at The Hot Aisle this week purports to show a shift in the virtualization market that we've heard about from Microsoft and Citrix more and more over the past few months: people are adopting more than one virtualization technology in their environments.

VMware has had an impressive run as a near-monopoly in the server virtualization space for the past several years. And, the competitors have been saying that any day now they will begin chipping away at VMware's dominance. Truth is, data points making that case have seemed few and far between. Now either that's changing, or data supporting this view is getting more visible.

Having more hypervisor vendors is likely to have an impact on how all this stuff gets managed. But first, some data:

Only one hypervisor? Not really.

Just before VMworld, I saw some figures (source long since forgotten) indicating that Microsoft's installed base for Hyper-V was growing quickly. Andi Mann from Enterprise Management Associates (EMA) pointed me to survey results his firm has collected as far back as 2006 showing that having multiple virtualization vendors is actually pretty commonplace. According to Andi's 2008 research, 90% of organizations (large & small) have multiple virtualization vendors, with 2.4 being the average number of vendors. "Only 2% of all enterprises are dealing with a simple, homogeneous virtualization environment comprised of one platform, one technology, and one vendor," says Andi.

And, this week's survey at The Hot Aisle from the Enterprise Strategy Group (ESG) adds fuel to the fire: 44% of respondents to the poll use two or more hypervisors in their IT environment. 16% use three or more.

Why more than one hypervisor?

Why would organizations want so many types of server virtualization in house? ESG's Stephen O'Donnell blogged that he didn't think this was a result of maturing products from Citrix and Microsoft, but rather because of licensing moves by vendors and license management practices by customers. ESG was also running a second poll while I was viewing their survey results, asking "Why is your organization running more than one hypervisor (e.g., Citrix/Xen, KVM, Microsoft Hyper-V, VMware ESX)?" The most common answer when I looked (24%) said that pressure from people like Oracle and Microsoft is pushing them to support multiple hypervisors. 13% say it's just departments doing their own thing. Both contribute, I'm betting.

Another reason is likely to be acquisitions, something that's becoming more common with the downturn. Sure, you didn’t set out from the start with more than one hypervisor vendor, but mergers tend to mean you get plenty more than you bargained for. Why should server virtualization be any different?

We asked similar questions at the beginning of the year in the last (Cassatt) Data Center Survey. We found that as much as IT wanted to standardize on one hypervisor, they believed that it wasn't going to be possible. I always figured our data was a bit ahead of the curve (Cassatt's database -- the source of the survey responders -- was always filled with early adopters). These other existing and more recent data points might say that things are, indeed, moving in this direction.

What having multiple hypervisors means for management

Whatever the reason, if this is indeed the reality -- that there is a broadening set of hypervisor vendors in the enterprise -- it means management of virtualization is going to need to change. Up until now that conversation has been primarily about detailed management of a VMware-specific stack. With a broader set of virtualization players involved in server virtualization, cross-virtualization management tools become a lot more important.
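To make "cross-virtualization management" concrete: such a tool essentially wraps each vendor's API behind one common interface, so operations teams script against a single pane of glass instead of three vendor consoles. The sketch below is purely illustrative -- none of these class names, methods, or VM names correspond to any real product's API:

```python
from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    """Common interface that each hypothetical vendor-specific driver implements."""

    @abstractmethod
    def list_vms(self) -> list[str]: ...

    @abstractmethod
    def start_vm(self, name: str) -> None: ...

class ESXAdapter(HypervisorAdapter):
    """Stand-in for a driver that would talk to VMware's management APIs."""
    def __init__(self):
        self._vms = {"web01": "stopped"}
    def list_vms(self):
        return sorted(self._vms)
    def start_vm(self, name):
        self._vms[name] = "running"

class HyperVAdapter(HypervisorAdapter):
    """Stand-in for a driver that would talk to Hyper-V's management APIs."""
    def __init__(self):
        self._vms = {"db01": "stopped"}
    def list_vms(self):
        return sorted(self._vms)
    def start_vm(self, name):
        self._vms[name] = "running"

class CrossHypervisorManager:
    """Single pane of glass: fan inventory queries out across all registered adapters."""
    def __init__(self):
        self._adapters = {}
    def register(self, vendor: str, adapter: HypervisorAdapter):
        self._adapters[vendor] = adapter
    def inventory(self) -> dict[str, list[str]]:
        return {vendor: a.list_vms() for vendor, a in self._adapters.items()}

mgr = CrossHypervisorManager()
mgr.register("vmware", ESXAdapter())
mgr.register("hyperv", HyperVAdapter())
print(mgr.inventory())
```

The real work, of course, lives in the vendor-specific adapters, which would have to call out to each vendor's actual management interfaces; the point is simply that the tooling above the adapter layer stays vendor-neutral.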

Why do I say that?

Sure, it seems to make logical sense. But again: data. When we asked about how people wanted to manage virtualization in our early 2009 survey, Cassatt heard pretty clearly that people wanted one management stack. And that wasn't just for all of their virtualization technology, but for their physical environments as well. Clearly, customers weren't doing that yet, but that's what they said they wanted. Silos were bad; integrated management was good. Maybe that world is getting closer now. In any case, you can read more details on those results here.

I'll be interested to see (as more data appears) how much serious market share movement is underway -- and how much of it is wishful thinking by VMware's competitors. The real indicator, then, might well be an uptick in interest, discussion, and implementation of broader management tools.

Tuesday, September 22, 2009

Making cloud computing work: customers at 451 Group summit say costs, trust, and people issues are key

A few weeks back, the 451 Group held a short-but-sweet Infrastructure Computing for the Enterprise (ICE) Summit to discuss "cloud computing in context." Their analysts, some vendors, and some actual customers each gave their own perspective on how the move to cloud computing is going -- and even what's keeping it from going.

The customers especially (as you might expect) came up with some interesting commentary. I'm always eager to dig into customer feedback on cloud computing successes and roadblocks, and thought some of the tidbits we heard at the event were worth recounting here.

(A side note: if you're interested in more cloud-related customer comments, you can look at some previous Data Center Dialog posts, including this one recounting questions overheard about internal clouds from a few months back.)

Clouds under the radar

As a way to set the stage, 451 Group analyst and ICE practice lead Rachel Chalmers compared cloud computing’s adoption to that of Linux in the late '90s, and to the beginnings of server virtualization in dev/test environments. "There was a lot of VMware [being used in IT] before CIOs even knew it was there. It only belatedly comes to the attention of the architects." Chalmers was clear that adoption is underway ("Customers are already using cloud," she said), and emphasized how under-the-radar adoption like this can really help. "They come pre-evangelized," said Chalmers. That means there are a lot fewer people to convince when it comes time to make the case to roll this out widely.

Customers: Some hesitate to call it cloud

Yuvi Kochar, CTO from the Washington Post Company, saw the work that his teams were doing as shared services, and said, "I hesitate to call this work a cloud." He acknowledged that they were enabling elasticity and cost accounting, but that some of what they were working on was still "hand-wired." What was the thing that pushed Kochar over the edge to move toward dynamic, shared IT services (even if he can't bring himself to actually call it "cloud")? Cost. "We want to move everything to a variable cost model," Kochar said. Another customer, Ljubomir Buturovic, VP & chief scientist from Pathworks Diagnostic, said they told their server vendor they would be buying no more hardware, and they haven't. Again: cost was the driver -- in this case, capital costs.

Cloud: It's (still) not for the faint of heart

I thought some of the most useful insights of the customer panel were from Jim Houghton, now co-founder and CTO of Adaptivity. Houghton reflected back on his experiences a few years ago getting what could now be termed a private cloud -- then called utility computing -- off the ground at Wachovia and Bank of America. He characterized the work as something which eventually turned into a $100-million project to stabilize a "chaotic environment" in IT and "took out $500 million in op ex."

Houghton noted that he and his employers at the time "learned a lot of hard lessons along the way." For example, the mechanisms for truly automated provisioning and handling dynamic shifts in demand are "really important," but really hard, especially since the tools at the time he started (the early to mid-2000s) were still quite immature. Metering what you're using is critical, he said, but "it's hard to meter when you're separate from the physical environment." Overall, said Houghton, cloud-style dynamic infrastructure is "not for the faint of heart."
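Houghton's metering point is worth making concrete. The aggregation itself is trivial -- roll raw usage samples up into billable units per consumer, as in the illustrative sketch below (the sample format and tenant names are invented, not from any product). The hard part he's describing is upstream of this: collecting trustworthy samples at all when workloads float across physical hosts.

```python
from collections import defaultdict

# Hypothetical usage samples: (tenant, vCPUs allocated, hours observed)
samples = [
    ("finance", 4, 2.0),
    ("finance", 2, 1.5),
    ("marketing", 8, 0.5),
]

def meter_cpu_hours(samples):
    """Roll raw samples up into billable vCPU-hours per tenant."""
    totals = defaultdict(float)
    for tenant, vcpus, hours in samples:
        totals[tenant] += vcpus * hours
    return dict(totals)

print(meter_cpu_hours(samples))  # finance: 11.0 vCPU-hours, marketing: 4.0
```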

Funny. I heard Donna Scott of Gartner make the same observation about creating a real-time infrastructure back in December. We've come a long way, but there's more to do, for sure.

Biggest pain: impact on the people and the organization

Houghton identified a couple of best practices for making the shift to a dynamic, shared infrastructure to support your applications: for example, workloads that are entirely self-contained and in which you have access to the source code make excellent candidates for these types of deployments. However, he said, you need to truly understand the workload characteristics. That last bit is something I've heard many large IT organizations lament as a huge problem -- profiling what they currently have and how it runs.

However, he said, the more painful thing is the change in the operational model and its impact on a company's organization and the people who work in it. I've heard this many times over the past several years, from customers, analysts, and even vendors. It's hard to overstate the impact that the cloud operating model will have on personnel.

"It gets to be a very touchy subject for clients," said Houghton. "It's hard to do the business cost model without talking about 'these 20 people will be out of a job.'" Or at least about who can be moved to focus on different things than they're focused on today.

Need to move beyond just virtualization

Houghton also made a comment that infrastructure that's "mildly virtualized may be efficient, but is not all that dynamic." I have to agree: there is a lot more needed to make a private cloud-style infrastructure fly than loading up on a bunch of virtual servers. In fact, that point was made pretty strongly when Dan Kusnetzky lined up his fellow 451 Group analysts to highlight some of the cloud computing inhibitors. William Fellows conveniently rattled off some of his (and the industry's) favorites: security, SLA support, corporate governance, interoperability, vendor viability, job security, and misaligned business models. To name a few.

These are issues we've heard before, for sure, and are at the top of the list of things to be addressed in order for cloud computing to be viable day-to-day in an IT shop. I think it's good news that we're hearing more and more about the manageability side of the issue.

Can I drive your Mercedes while you're not using it?

One of the strongest objections to an internal cloud, or really a shared infrastructure of any sort, still boils down to what's called "server hugging." Houghton gave an amusing explanation of the mentality by putting it this way: "Just because I have 4 Mercedes and I can only drive 1 at a time, doesn’t mean I'm going to let you drive the other 3." In his case, the team of coworkers and vendors pitching this new approach had to put in "years of work" to "build up the trust. What you have to say in response is, 'You'll get everything you wanted, plus a lot more.'" And, of course, you have to back it up by delivering on your promises with great cost savings, excellent service levels, and an improved ability to respond to new requirements from the business.

Are we making progress on cloud computing?

Chalmers noted that in many cases a move to cloud computing doesn't feel like progress at all. "All we're doing is moving the headaches somewhere else. But," she said, those management headaches "still need to be solved."

Fellows noted that there really isn't any definition of success so far. Early adopters of grid, utility computing, and virtualization have been the ones in his experience to be most aggressive in working toward cloud-style environments. "It’s a logical end-point for any of those [earlier] activities," said Fellows.

In fact, said Chalmers, "very often when we see an early adopter of cloud that's successful, it's because they understood HPC [high-performance computing] and putting everything under control in the data center."

So, are we making progress toward incorporating cloud computing in today's IT environment?

Houghton from Adaptivity made the point that "the economic malaise has put a lot of power back into the CTO's hands." It's a chance to use this power to instigate more sweeping changes to how IT operates than any time in recent memory. But people are being judicious with that power.

Appropriately, Chalmers probably did the best job of putting cloud computing in that context: "In this world of financial crisis, the acid test of any technology is: 'Does anyone care enough to sign a purchase order?'" Clearly, some do. (And some fraction of those are on stage at events like this talking about what they've done and learned.) And, as the industry matures the management capabilities and works out the kinks that many of these customers noted, others will start to feel that it's time to sign on the dotted line, too. And hopefully join them on stage.

Wednesday, September 16, 2009

7 ways Twitter improves an IT conference. And 2 ways it makes things worse.

This week, VMware announced that the presentations from VMworld 2009 were available for download. And they, of course, used Twitter to do so -- a much-used source of "data center dialog," if I do say so myself.

It's been a few weeks since VMworld, but I'm amazed by the engagement still going on with that show via Twitter (check it out for yourself at #vmworld). As Andi Mann from EMA pointed out prior to the event, the VMware folks seem to have this conference tweet-o-rama thing down pretty well.

Which got me thinking: since we're all learning about what to do and not do with Twitter in real time, it might be worth assessing what worked at VMworld -- and IT shows in general -- tweetwise. And, of course, it's always fun to list the things that didn’t work at all (free advice: let’s all try not to do those things next time).

Before I launch into this, I'll note that some of these tweet-enabled scenarios were planned methodically by the show organizers. It's a big part of what's called marketing these days. Other Twitter uses, however, were definitely not in VMware's plans and probably annoyed the organizers to no end. But such is the world of Twitter. If you could control it, it wouldn't be nearly as interesting a phenomenon. Nor as powerful.

So, here are the 7 things I thought worked out really well if you happened to be on Twitter during this show (and, perhaps, will be helpful at many other IT shows like it):

1. Pre-show Build-Up Using Anything and Everything: VMware themselves, I think, did a masterful job of building excitement for the event on Twitter. And the things they used didn't have to be inherently exciting. They showed off the hands-on lab set-up. They showed off the conference bag. They teased the band headlining their party (and many on Twitter teased right back when they learned it was Foreigner). Each of these items was mostly inconsequential, but was an excuse to connect with potential attendees to convince them to come. Or remind people to sign up or even just to plan their on-line calendars.

The best part: an interesting behind-the-scenes look at the event set-up.

The worst part: random folks each saying "Just made my plane reservations to VMworld! W00t!" Either way, it was hard not to be engaged and, yes, looking forward to the event in some way.

2. Crowd-Sourced, Spontaneous Event Idea Generation and Organization: Aside from the now-very-common tweet-ups that have become pretty easy to plan, I watched a 5K fun-run over the Golden Gate Bridge get suggested, accepted, planned, and organized on Twitter in the weeks prior to the show. Requests for volunteers to help organize went out the same way, and runners goaded non-runners into joining with just a few sarcastic tweets. The only downside? A few of the runners returned to the exhibit hall post-run in their commemorative t-shirts prior to taking a commemorative shower. But that's not Twitter's fault.

3. A Way to Deal with Unexpected Logistical Snags: Many of the most popular sessions at VMworld (which, as you might guess, had "cloud computing" in the title) were fully booked pretty far in advance of the show in the pre-show reservation and agenda tool. Very frustrating. But they announced the magical "clearing of waitlists" as they were able via Twitter. And, on-site, the organizers were able to communicate about the hands-on labs when they crashed and were subsequently restored on the first day.

4. An Ad Hoc Meeting Planner for Attendees: We attendees used Twitter to find people we knew were going to be around somewhere/sometime during the week, and alert the world to our presence in general -- or even our specific location. I found CNET blogger and Cisco cloud guru James Urquhart blogging in a random hallway, exactly where he tweeted he'd be. People ID'd me from my Twitter avatar picture and made business connections. I found people I'd interacted with only in 140-character bursts, but never met (for example, it was going to be hard to recognize @beaker without his squirrel disguise, until someone sent a twitpic of him from the exhibition floor). And a few people were even sharing reviews of different parts of the big gala party as it was happening, presumably sending the twitpics and tweets with the hand not holding onto their cocktail.

5. A Way to Start Conversations to be Continued on the Show Floor: A couple vendors had booth staffers with a significant Twitter following. They used the event as a way to encourage folks to come by their booth and continue their on-line discussions. Notice I didn't give kudos for "using it to promote your booth & giveaways.” Sure, vendors used it for that, but keep reading. You get black marks for using Twitter that way.

On the positive side, Microsoft had fluorescent t-shirts identifying their tweeters, a great way to open a conversation with them. Twitter is a way to have a ready-made intro for talking to someone you're following (or vice versa). Suddenly the event is full of almost-friends and conversations can pick up where they may have left off on-line -- or take off from scratch pretty rapidly.

6. How to Get Around the Rules of the Show (AKA "The Rebuttal"): Sure, Twitter is a way to enable those not at the show to "listen in," participate, comment, etc. That's been well documented since Twitter showed up on the scene. But VMworld also featured "The Rebuttal" from none other than the Microsoft contingent. Sure, they weren't allowed to show their competitive products on the show floor, but they weren't shy about tweeting their thoughts throughout the keynotes and providing some reality checks of the hosts' spin machine. I'm not sure I agreed with all of their snarky on-show-floor commentary countering the VMware hype, but I definitely read it. It also brought up this odd situation: Microsoft as underdog. That's a bit amusing, when you think about it. Twitter's often seen as a great equalizer or content meritocracy, meaning people you've never heard of can get their two cents in. Microsoft proved the big guys can, too.

7. The Continuation of the At-the-Show, “In the Club” Feeling Long after the Event Is Over: I'm still checking (and contributing to) the #VMworld hashtag two weeks after they finished sweeping the final blinking give-away pens out of Moscone. Sure, the message flurry is nothing like it was during the event, but it has kept going. The post-event content was slower, but filled with commentary (like this one...and my previous post) and (guess what) free publicity for the organizers. Sure, VMware used the after-show tweets to publicize what the virtual twitterati were saying about them and the event (especially the good stuff), but they also used it to shape the commentary, and remind attendees that they had a great time. All that sure beats the traditional post-show survey.

OK, so those were the things that Twitter seemed to improve. What didn’t work?

1. The Twitter Analyst NDA: VMware caused another phenomenon and debate: analysts attending their Monday pre-show pre-briefings were told that the event was under non-disclosure. And, yes, that meant no tweeting. There were a few meta-tweets about not tweeting, but most analyst attendees didn't use Twitter much at all on Monday. So when Tuesday's main-stage show began, the backlog of messages and content that the analysts had been stewing over for 24 hours flew by with wild abandon.

The meta-tweets sparked another discussion that I saw several people join in over the following few days: how much information was OK to disclose? Tweeting that you were at an event getting a pre-briefing under NDA meant you were acknowledging that the "secret briefing" event was happening, and that you were merely not talking about the specific content. Some other industry events and briefings are requiring that no tweets even mention that a briefing was taking place. This war has a lot of skirmishes to work out still. NDAs and Twitter require a lot of work on the side of both the briefer and the briefee to get it right.

To be clear, the NDA actually worked as VMware wanted it to, but I'm not sure if that qualifies as a good thing. Watch this space.

2. "Come by Our Booth" Tweets: Perhaps the most useless and annoying tweets of the week were those described by one person as tweets that went something a little like this: "Hey there! Be sure to come by Booth # ___ to talk to ____ [vendor name] about ________ [product being sold] and have a chance to win a ______ [expensive techie gadget]." These messages are antithetical to how Twitter can best be used. To me, they said, "Move along, skip that booth; no conversation to be had there."

Ending up with more to do at a conference

All told, I feel like I got much more out of this event than I had previously, but also walked away with a feeling that there was even less time to get everything I wanted done during the week. The connections I made were stronger and more robust, though, and I have to admit it's because of Twitter. I'm interested to hear what others who attended thought -- and what people who've attended other tweet-enabled shows have seen that works great. Or badly.

But of course, the real question is: when do I start watching the #VMworld2010 hashtag?

Wednesday, September 9, 2009

VMworld '09 proves VMware is no Foreigner to big ambitions

Last week, VMware played host to over 12,000 guests and one '70s/'80s rock band at its annual VMworld event. At its most basic, the show (minus the concert part) was a great place to get some hands-on experience with VMware technology -- the labs were packed all week (despite a bumpy start).

But I always look at these events as a measuring stick for the ambitions of the host. It was no surprise to me that this year was a summation and a reiteration that VMware wants it all. And it did a pretty clear job of communicating the company's belief that it can deliver.

The truth, of course, is usually a bit divergent from the official PR messages (and nearly always a bit later in arriving). Despite any, um, Head Games that it might be playing, did VMworld give any meaningful clues as to how successful they are likely to be? I think it did.

The star of the show: Is VMware Hot Blooded or As Cold As Ice?

Dan Kusnetzky, now making his home at The 451 Group, used his blog to enumerate a number of things about VMware that may seem obvious, but weren't necessarily the things that VMware went out of its way to reiterate during VMworld.

Dan's comments boil down to these:

· VMware is trying to convince the world of a major underlying assumption: that your IT environment will be pretty homogeneous. Dan reiterates that this isn't likely to be true: "Most large organizations have mainframes, midrange machines, storage servers, and network servers in their datacenters," said Dan. "VMware acts as if either these established mainframe or midrange systems are not there or are going away. Neither are likely to disappear even if industry standard systems are increasingly important."

· VMware is now large enough that much innovation is coming from other, smaller players (he listed Cassatt [now part of CA], Surgient, and others as examples).

· VMware often tramples on the smaller members of its ecosystem.

· VMware is enabling a "good enough" delivery in many existing markets (like HA/Failover) that is impacting those markets, to VMware's benefit and other players' detriment.

· The term “cloud” is getting used for, well, everything. (No argument on this one from any corner, I'd imagine.)

I think most of Dan's observations are true for a couple reasons: at the moment, VMware is a functional monopoly in server virtualization, and they've been in that role for longer than anyone thought would be the case. There are stats now starting to appear that Citrix XenSource and (especially) Microsoft’s Hyper-V are finally starting to make customer inroads. The resulting behavior from VMware, having been alone at the top for so long, is to talk about the world the way they want it to be.

The result of all this? They are getting a big piece of the business where their capabilities are "good enough," and they are a bit predatory when considering what capabilities they want to deliver and what they want to leave for partners. That's not surprising: it's an ecosystem that's a bit out of whack because of their dominant market position.

It's Urgent: Despite their dominance, VMware needs to be successful higher in the stack

VMware's current money-maker -- the hypervisor -- is headed to a price war that will eat into the revenue and margins they've enjoyed. So, they are working on ways to move up the stack. Toward this end, they are extending into management capabilities with vCenter, something I heard Gartner bugging them to do as far back as three years ago. They have extended the work in this space by announcing a focus on helping create and manage internal clouds, then external clouds, then hybrid clouds -- a vision that matches what industry watchers expect.

However, VMware's need to find something "sticky" to keep customers deeply connected to their technology prompted one of their more interesting recent announcements: the acquisition of SpringSource. The stream of people leaving Paul Maritz's keynote when SpringSource CEO Rod Johnson came onstage was far more attributable to waiting lists for the next sessions and bad keynote clock management than to the content.

It Feels Like the First Time…or at least like it did at BEA

It took attending VMworld for me to put some of the pieces together about the SpringSource deal: namely, that the execs behind it have done things like this before. Seeing VMware COO Tod Nielsen and CMO Rick Jackson onstage reminded me of days past when they (and I) worked together at BEA. (Rod Johnson was a favorite BEAWorld keynote speaker during that time, I might add.) BEA was, at that point, trying to do much the same thing VMware is working to do now: create a platform that its customers would build their applications on top of. BEA did some parts of that really well, some not so well, but the thing I took away from last week was a reminder that these guys have been here before. I heard repeated comments from VMware that knowing much more about the application is going to be of greater and greater importance going forward. It's no accident.

There are other things, though, that make me question how far they can get. One of those is their overly homogeneous worldview, and their hope that if they say things often enough, they will become true. If their adoption rates were to continue unabated, I might even admit that contradicting them would be a bad bet. However, feedback I heard from large customers while at Cassatt and from some of my early sales interactions here at CA says that the easily virtualized servers are going to be (if they aren't already) taken care of pretty soon. Now comes the hard part -- the rest of the servers.

This attempt to keep the virtualization efforts going beyond the easy pickings is probably why they repeated from stage that with the performance work they've done recently, there's no reason a customer should worry about virtualizing virtually everything. That's certainly a (Double) Vision I'd ask for much more evidence on before believing sight unseen.

There were some other questions, too, like why Fastscale was acquired by EMC -- not its VMware subsidiary. It may be a signal of a much larger plan afoot by VMware's parent company that's much broader than the more homogeneous approach of VMware. But we'll have to wait to see how that one plays out.

One of the 140 industry analysts at the show said he didn't really feel there was much new announced at this year's VMworld. I had the same impression leaving Paul's keynote. However, maybe that's a point in VMware's favor. VMware is now at a stage where it's filling in the holes. Its vision -- and even slides -- weren't drastically different from last year's. Having worked in and around the internal/private cloud story for a number of years now, I didn't see their story as original or groundbreaking. However, it didn't need to be. The context and the story have already been set.

Will VMware be the, er, Juke Box Hero that helps the cloud go mainstream?

Instead of breaking new ground, I see VMware trying to go mainstream with what some innovators have been talking about for a number of years. In that respect, it's exciting to see. In other respects, caution is still warranted. (Customers, I'm sure, don't need to be told to approach vendors -- especially those with competitors trailing behind by a few steps -- with skepticism.) As always, they are wise to keep in mind VMware's underlying assumptions and make sure to use VMware's technology for whatever aligns with their vision -- not just because of the elegant story VMware lays out. And where customers and VMware don't align, there's a whole ecosystem of partners and would-be competitors willing to help you out.

Though I bet they won't do as good a job helping you relive the music of your youth while making IT infrastructure decisions. VMware has that nailed.

Now if I can just get "I Want to Know What Love Is" out of my head...