
Tuesday, September 7, 2010

VMworld 2010: Cloud "in excess"? Some thoughts on "What You Need"

VMware threw another great VMworld event this year. If you didn’t attend, you missed another step in VMware’s evolution toward being a very mainstream, enterprise-focused software vendor. They are at the stage where they are reaching beyond what they grew up doing and trying to expand into something broader and different.

Last year, in my book, was a bit light on news (aside from explaining their early plans for SpringSource) and more about describing these big ambitions. This year, however, was about trying to look the part and making sure they had credible solutions and stories to tell their enterprise customers, thanks to a few well-timed acquisitions.

Oh, and I guess I should mention the cloud. VMware certainly did.

Ironically, VMware had INXS play at their big party – a band that ended its show that night with the song “Don’t Change.” Fellow punster Greg Schulz pointed out on his StorageIO blog that the song title may have felt diametrically opposed to the message VMware was trying to deliver all week. Maybe the title should have been “It’s Time for a Change and We Call It the Cloud.”

But, actually, I think INXS had it right. For more details on why, I put together an INXS-themed list of my take-aways for customers who have now left Moscone and find themselves back at their day jobs.

Here’s some help figuring out What You Need:

New Sensation: All the talk of cloud certainly came to a head. VMware definitely talked about it “in excess” at the event. Paul Maritz had lots to say, both in his keynote and in his panel of service providers, about the announcements, including the much-leaked and much-anticipated vCloud Director product (formerly known as Project Redwood). In his CIO.com blog about the show, industry watcher Bernard Golden said he saw the cloud discussion accelerate very seriously, noting not only that VMware and its partners were taking the next steps to make cloud “more consumable in real-world environments,” but also that at the show there was a “palpable feeling that cloud computing represents the next platform shift in computing…but on a different software construct that abstracts and makes agile the previous generation of hardware.”

Things to watch out for? Bernard mentioned a favorite of mine that I bring up in any Cloud 101 discussion I have: the “one thing that wasn’t discussed much was the process and organizational challenge caused by implementing a cloud computing environment.” It’s a good thing to have help with (and, yes, that reminds me that I did get a chance to meet up with several of our just-joined 4Base consulting folks during the week).

Listen Like Thieves: Or, maybe: here are some suggestions on how to foil those thieves, especially when it comes to your IT environment. VMware acknowledged an important angle that customers have been talking about for a long time: security is a big issue for both virtualization and cloud computing. They bought TriCipher and announced vShield offerings, showing their interest in delivering solutions in this space. In fact, it was an action-packed week on the security front: CA Technologies also acquired Arcot Monday as the show was getting started.

Don’t Change: Back to my comment about how much change should be a part of your IT operations theme song. Look at it this way: before heading to San Francisco last week, those of you in IT had been worried about a big, complex set of management and operations issues. Spending a week hearing about the newest virtualization and cloud deliverables doesn’t change the reality of what you go back to. Don’t toss out your view of what’s important; instead, use those requirements to evaluate everything you heard last week.

Never Tear Us Apart: There is a virtualized world of IT, but there continues to be a physical world as well. Andi Mann (from CA Technologies) commented in IT World Canada that even though IT is now deploying more new virtual servers than physical ones, companies' infrastructures are still only about 30% virtualized.

Meaning, of course, that you still need to manage and optimize both parts -- the virtual and the physical. Together. The virtual world that VMware had you looking at closely for 4 days is not the sum total of your environment, so don’t forget to consider management, automation, and control capabilities that understand that, too.

Devil Inside: VMworld continues to serve as an industry gathering about virtualization – and now cloud. That’s very much to their benefit. It does provide an excellent meeting place for customers and vendors, but one in which VMware is very much in control of the discussion and topics. My suggestion: push back. I made some suggestions before the event about how to sort through the flood of announcements coming. Now’s the time to figure out what’s real from VMware and the other vendors. Make sure you have real opportunities to get hands-on. It’s the only way you’re going to find out what’s ready for prime time, what makes good economic sense for your business model as an enterprise or a service provider, and what does and doesn’t work at this point.

No matter how you look at it, the event was a Kick. (For a good summary of the event in addition to Bernard Golden's, take a look at this Network World article from Jon Brodkin and this blog post from CA's Stephen Elliot. For a take on "Why VMworld was Underwhelming," read Derrick Harris's GigaOM Pro write up [subscription required].) I’m glad to have used the event to meet up with the San Francisco Cloud Club folks once again. It’s a very cloud-savvy and interesting crew.

And, before this year’s VMworld begins to Disappear from view, I’ll offer a tip of the hat and add my thanks to the folks at VMworld for hosting us all. It was certainly in their best interest to put on the show, and now it’s up to the rest of us to make sure it was in ours as well.

Thursday, August 26, 2010

Back to school -- for the cloud? Try not to forget the multiple paths for virtualization & cloud

Summer vacation is really a bad idea.

At least, that’s what TIME Magazine reported a few weeks back. Despite our glorified, nostalgic memories of endless hours on the tire swing above the old water hole (or, more likely, trying to find towel space on a lounge chair by the gym’s overcrowded pool), apparently kids forget stuff when they aren’t in school.

So, now that everyone’s headed back to the classroom and hitting the books again, they’ve got to jog their memories on how this learning stuff worked.

Luckily, as working adults who think about esoteric IT topics like virtualizing servers and actually planning cloud computing roll-outs, we can say this is never an issue. Right? Anyone? Bueller? Bueller?

However, with VMworld imminent and people returning from vacations, it’s a good time to reiterate what I’ve been hearing from customers and others in the industry about how this journey around virtualization and cloud computing goes.

Some highlights (take notes if you’d like; there might be a short quiz next period):

Rely on the scientific method. You’re going to hear lots of announcements at VMworld next week. (In fact, many folks jumped the gun and lobbed their news into the market this week.) In any case, be a good student and diligently take notes. But then you should probably rely a bit on the scientific method. And question authority. Know what you need, or at least what you think you need, to accomplish your business goal. Look at any and all of the vendor announcements through that lens. You’ll probably be able to eliminate about two-thirds of what you hear next week from VMware and all its partners (and, of course, I realize that probably includes us at CA Technologies, too). But that last third is worth a closer look. And some serious questions and investigation.

The answers aren’t simply listed in the back of your textbook. Meaning what? Well, here's one thing for starters: just because you’re knee-deep in virtualization doesn’t mean you’re automagically perfectly set up for cloud computing. Virtualization is certainly a key technology that can be really useful in cloud deployments, but as I've noted here before, it’s not sufficient all by itself. The NIST definition of cloud computing (and the one I use, frankly) doesn’t explicitly mention virtualization. Of course, you do need some smart way to pool your computing resources, and 15,000 VMworld attendees can’t be wrong…right? (Go here for my write-up on last year’s VMworld event.) But, just keep that in mind. There’s more to the story.

In fact, there may be more than one right answer. There isn’t one and only one path to cloud computing. My old BEA cohort Vittorio Viarengo had a piece in Forbes this week talking about virtualization as the pragmatic path to cloud. It can be. I guess it all depends on what that path is and where it goes. It just may not be ideally suited for your situation.

On the “path to cloud computing,” to borrow Vittorio’s term, there are two approaches we’ve heard from folks:

Evolution: No, Charles Darwin isn’t really a big cloud computing guru (despite the beard). But many companies are working through a step-by-step evolution to a more dynamic data center infrastructure. They work through consolidation & standardization using virtualization. They then build upon those efforts to optimize compute resources. As they progress, they automate more, and begin to rely on orchestration capabilities. The goal: a cloud-style environment inside their data center, or even one that is a hybrid of public and private. It’s a methodical evolution. This method maps to infrastructure maturity models that folks like Gartner talk about quite a bit.

Revolution: This is not something you studied in history class involving midnight rides and red coats. If organizations have the freedom (or, more likely, the pressure to deliver), they can look at a more holistic cloud platform approach that is more turn-key. It’s faster, and skips or obviates a lot of the steps mentioned in the other approach by addressing the issues in completely different ways. The benefit? You (a service provider or an end user organization) can get a cloud environment up and running in a matter of weeks. The downside? Many of the processes you’re used to will be, well, old school. You have to be OK with that.

Forrester’s James Staten explained ways to deliver internal clouds using either approach in his report about why orgs aren’t ready for internal clouds in the first place. Both the evolutionary and the revolutionary approaches are worthy of more detail in an additional post or two in the near future, I think. But the next logical question – how do you decide what approach to take? – leads to the next bit of useful advice I’ve heard:

When in doubt, pick ‘C’. Even customers picking a more evolutionary approach won’t have the luxury of a paint-by-numbers scenario. Bill Claybrook’s recent in-depth Computerworld article about the bumpy ride that awaits many trying to deliver private clouds underscores this. “Few, if any, companies go through all of the above steps/stages in parallel,” he writes. “In fact, there is no single ‘correct’ way to transition to a private cloud environment from a traditional data center.”

So, the answer may not only be a gradual evolution to cloud by way of increasing steps of virtualization, automation, and orchestration. And it may not only be a full-fledged revolution. Instead, you want to do what’s right for each situation. That means the co-existence of both approaches.

How do you decide? It’s probably a matter of time. Time-to-market, that is. In situations where you have the luxury of a longer, more methodical approach, the evolutionary steps of extending virtualization, automation, and standardization strategies are probably the right way to go. In situations where there is a willingness, eagerness, or, frankly, a need to break some glass to get things done, viva la revolution! (As you probably can guess, the CA 3Tera product falls into this latter category.)

Learn from the past. Where people have gotten stuck with things like virtualization, you’ll need to find ways around those sticking points. Sometimes that help will come from tools from folks like VMware themselves, or from broader management tools from players like, oh, say, CA Technologies or a number of others. Sometimes that help will need to be in the form of experts. As I previously posted, we’ve just brought a few of these experts onboard with the 4Base Technologies acquisition, and I bet there will be a few consulting organizations in the crowd at VMworld. Just a hunch.

Back to Claybrook’s Computerworld article for a final thought: “[O]ne thing is very clear: If your IT organization is not willing to make the full investment for whatever part of its data center is transitioned to a private cloud, it will not have a cloud that exhibits agile provisioning, elasticity and lower costs per application.”

And that’s enough to ruin anyone’s summer vacation. See you at Moscone.

If you are attending VMworld 2010 and are interested in joining the San Francisco Cloud Club members for drinks and an informal get-together on Wednesday evening before INXS, go here to sign up.

Tuesday, October 27, 2009

IT management nirvana? Smells like virtual and physical control

I was very amused by the headline on Denise Dubie's Network World article this week about CA's big multi-product announcement. It noted that CA and other management vendors were working toward IT management "nirvana" -- a state that IT has been pretty far away from. Especially when virtualization gets involved.

So, what's the main difference between where we are now and what she described? "Now" might be described (with a little help from a certain '90s grunge band of the same name) as "Come As You Are," "Nevermind," or something equally dire.

"IT management nirvana," on the other hand, requires a coherent way to control your IT environment, regardless of whether you're talking about physical or virtual components. The good news: I think CA is addressing that combined requirement pretty well, and this week's announcements help.

One of the things I've liked about the CA story that I've heard since I joined is the way in which the company directly addresses the day-to-day, pragmatic interests of users. Breaking down management silos is one of the keys to that. It's also worth noting that a bunch of other aspects of IT control -- big, grown-up, important concepts -- are front and center. Management, governance, automation, and security are all in the first paragraph of CA's press release regarding virtualization management -- topics that were certainly neither highlighted nor addressed by many industry players even just six months ago.

Getting to "IT management nirvana" may not be exactly only about pragmatism, but nevertheless, I think it's some of the practical parts of this week's announcements that are worth noting. In particular:

· Single pane of glass for physical & virtual management. The root-cause analysis capabilities of the new CA Spectrum Service Assurance product take both the physical & virtual into account. It's designed to offer one place to display the impact of the physical & virtual IT infrastructure on the services it supports. Mike Vizard's CTO Edge write-up about this announcement gives you a peek into what the product's interface looks like, in case you're curious.

· Provisioning across physical & virtual infrastructures. Enabling application configuration management and dynamic resource provisioning is hard. Being able to do that across internal physical, virtual, and even external cloud environments is really hard. CA Spectrum Automation Manager has now added this, plus another neat little trick: rapid physical-to-virtual and virtual-to-virtual server provisioning.

· Cross-virtualization management. Add broad support for virtualization technologies like VMware vSphere, Citrix Xen, IBM LPAR, and Sun Solaris to the items above, and you have another reason the announcement is worth noting. There are a few more holes to fill (Microsoft Hyper-V comes to mind), but it makes a great cross-VM story. What CA Spectrum Automation Manager now has in its cross-virtualization support is beyond what we were doing at Cassatt. You can see some earlier discussion of across-multiple-hypervisor management in this post.

· For VMware lovers out there, there's even more. A couple of the other products (you can find details about which ones in the CA press release) help you discover more about your VMware environment and its performance issues, including database performance before, during, and after VMware VMotion migrations. In fact, there are a bunch more virtualization features scattered across many product lines that work in conjunction with your VMware technology. (Again, it's kind of nice to have answers to a lot of the broad management questions that customers are asking by virtue of an extensive product portfolio.)

Making a bet that integrated management is better

You'll notice that CA is putting its weight behind the concept that a unified management capability is more efficient and more powerful for an IT organization. That's not unexpected given the breadth of management capabilities that CA can offer a customer, and it's also in line with the complex environments that large customers actually have. But is it how they want to manage things?

Mike Vizard, again in his CTO Edge article, weighed in with one view: "At the moment, customers seem to be favoring [an] integration strategy between existing systems management tools and providers of dedicated virtual machine management tools. Over the long haul, odds are good that customers will ultimately want to see more convergence of these tools rather than continuing to pay separate licensing fees for both," he wrote.

Delivering tools that can provide this convergence, and seeing customers have great success with them, is the thing that will tip the balance. And in the end, that's probably a balance that will favor the customer. Vizard said something similar back in September when CA announced the deal to acquire NetQoS, postulating that "it will also ultimately prove a lot less expensive [to have unified management tools] than managing a whole slew of point products."

All that certainly sounds more like IT management nirvana (and certainly more harmonious and under control than the aforementioned namesake band from Seattle often was). Hopefully, these tools and the others they inspire in the marketplace will get us a step closer.

And, if any of my Nirvana references don't quite hit the mark: All Apologies. Blame Denise. Or her headline writer.

Wednesday, October 14, 2009

Running with scissors? Or maybe trimming the risks out of virtualization instead

What's riskier: standing at the top of a hill in a thunderstorm while holding a golf club...or commuting to work? Skydiving...or flying to Pamplona and then taking part in the Running of the Bulls? OK, now for the really tough one: juggling knives...or implementing virtualization in production?

Before you answer, you should be warned that humans are quite bad at assessing relative risks. TIME Magazine had a cover story a number of years back on that very subject. The problem in a nutshell, scientists say, is that we're "moving through the modern world with what is, in many respects, a prehistoric brain."

Deploying virtualization doesn't sound all that dangerous, especially compared with some of the scarier items above (like, say, juggling knives when you lack knife-juggling skills). If that was your answer, you probably haven't been in an IT shop recently. OK, maybe it's not the same as spending the night in the polar bear cage at the zoo, but it's not without risks. And since risk is a four-letter word in large IT shops that handle mission-critical systems, it's worth figuring out how to get the benefits while minimizing potential problems.

The IT Process Institute survey that VMware and my employer, CA, sponsored and released a few weeks back (actual survey available here) was put together to identify what some of the more mature IT shops are doing to deal with the risks that virtualization introduces.

The survey itself, which talked to 323 different IT organizations, is a bit daunting to wade through, so I pulled out some interesting tidbits worth highlighting here:

People's sights are set higher than just server consolidation, and they are being aggressive about it: 72% are aggressively virtualizing production servers, but only 19% are using virtualization just to consolidate servers. The bigger focus is on pursuing high availability and disaster recovery. And nearly another third are shooting for dynamic resource optimization.

If you use virtualization in production, you are going to have to change operating procedures and controls. The survey found that those organizations with a strong foundation of process controls and procedures were likely to only need to modify the controls they already have in place. That's good news for some of the bigger IT shops and their IT ops staff. However, the more complex things you try to do with virtualization, the survey found, the more modifications should be considered. Kind of straightforward, but worth repeating.

Many mature virtualization users have at some point limited the release of virtualization in production until training requirements and management procedures were taken care of. Maybe it's just a phase everyone has to go through, but it seems many have slowed things down to err on the side of caution. The survey shows, however, that many IT organizations have now reached what it calls "a level of confidence needed to aggressively virtualize business critical systems, including those that are in scope for regulatory compliance." That's impressive, actually, and is a big change from a few years ago.

Here's where the running with scissors part comes in

The study identified a bunch of virtualization bogeymen -- things that seriously worry the IT shops working to deploy virtualization. Some of those worries included:

· It makes a mess (technology-wise). Also known as virtual sprawl (a term that VMware was very sensitive about when we started using it a few years back at Cassatt). This can also hinder compliance efforts.
· Things can hide. There are potential issues with discovery tools not tuned to work with virtual systems.
· Too much of a good thing. There is license compliance risk if virtual servers can appear too easily. You might also exceed available resource capacity.
· Putting all your eggs in one basket. Well-meaning administrators can inadvertently make things riskier by stacking critical apps together on a single machine, where one fault can take them all down.
· It makes a mess (organization-wise). Aggressive adoption probably means specialized training and new organization structures.
· A perfect target. Security is a big concern among survey respondents, who worry about the hypervisor as a new layer of technology that can be attacked.

Those probably all sound familiar. The interesting point is that the survey said they all added up to this: "putting virtualized systems into production without a well-reasoned set of operational controls creates an unacceptable level of production and compliance risk."

OK, time to hit these problems head-on, then.

The survey's recommendations for reducing virtualization risk

So what's a good way to start addressing those risks (besides hiding the scissors)? The survey has three sets of recommendations. I've noted where in the survey to find them so you won't have to dig through it yourself:

· 11 practices for those organizations with "baseline maturity" (generally doing server consolidation-type things with business critical systems). The focus for those orgs is on host access, configuration controls, VM provisioning, and improvements to capacity & performance management. See page 14 of the survey for the exhaustive list.
· 25 practices for "highly mature, but static" uses of virtualization (generally looking at HA & DR issues). There the suggestions are about configuration standardization, approved build images for provisioning, and using a "trust but verify" approach for changes. It takes all of page 17 of the survey to list these suggestions.
· For the brave ones doing "highly mature, but dynamic" things with their virtualization, the research suggested 12 items around configuration discovery & tracking, change approvals, capacity management, and the overall process maturity needed to support automation. See page 22 for this list.

Some virtualization management suggestions

One capability that's "highly desirable" is a coordinated view across your physical and virtual environments, according to today's Computerworld article from Beth Schultz about "Getting a Grip on Multivendor Virtualization." CA's Stephen Elliot was quoted in the article talking about some of this survey's findings. "A lot of customers are recognizing that virtualization is great, and works wonders," said Elliot, "but certain environments will not be virtualized and so they need to figure out how to manage and automate both worlds together."

I've posted before on why the automation side of the equation is important, as have Lori MacVittie and others. The report chimes in here, too: "Many view automation used to manage dynamic virtual resources as a prerequisite for tapping internal and external cloud computing resources." But that's a subject for another day.

The Computerworld article also has some good comments about the importance of being able to manage across multiple virtualization vendors' environments, something that has also been discussed here, but was outside the more process-oriented scope of this particular survey.

"The key thing that pleasantly surprised us [in the study] is that customers right now...are thinking more proactively about the need to manage their virtual infrastructure," Elliot said in an interview with Jeffrey Burt of eWeek. "Just because they've got new innovations [in their data centers] doesn't mean that their need for management just disappears."

That, after all, would be pretty risky.

Wednesday, September 30, 2009

Multiple virtualization vendors in one IT shop? If so, the management challenge changes

A survey published at The Hot Aisle this week points to a shift in the virtualization market that we've heard about from Microsoft and Citrix more and more over the past few months: people are adopting more than one virtualization technology in their environments.

VMware has had an impressive run as a near-monopoly in the server virtualization space for the past several years. And, the competitors have been saying that any day now they will begin chipping away at VMware's dominance. Truth is, data points making that case have seemed few and far between. Now either that's changing, or data supporting this view is getting more visible.

Having more hypervisor vendors is likely to have an impact on how all this stuff gets managed. But first, some data:

Only one hypervisor? Not really.

Just before VMworld, I saw some figures (source long since forgotten) indicating that Microsoft's installed base for Hyper-V was growing quickly. Andi Mann from Enterprise Management Associates (EMA) pointed me to survey results from his firm going as far back as 2006 showing that having multiple virtualization vendors is actually pretty commonplace. According to Andi's 2008 research, 90% of organizations (large & small) have multiple virtualization vendors, with 2.4 being the average number of vendors. "Only 2% of all enterprises are dealing with a simple, homogeneous virtualization environment comprised of one platform, one technology, and one vendor," says Andi.

And, this week's survey at The Hot Aisle from the Enterprise Strategy Group (ESG) adds fuel to the fire: 44% of respondents to the poll use two or more hypervisors in their IT environment. 16% use three or more.

Why more than one hypervisor?

Why would organizations want so many types of server virtualization in house? ESG's Stephen O'Donnell blogged that he didn't think this was a result of maturing products from Citrix and Microsoft, but rather because of licensing moves by vendors and license management practices by customers. ESG was also running a second poll while I was viewing their survey results, asking "Why is your organization running more than one hypervisor (e.g., Citrix/Xen, KVM, Microsoft Hyper-V, VMware ESX)?" The most common answer when I looked (24%) was that pressure from people like Oracle and Microsoft is pushing them to support multiple hypervisors. 13% say it's just departments doing their own thing. Both contribute, I'm betting.

Another reason is likely to be acquisitions, something that's becoming more common with the downturn. Sure, you didn’t set out from the start with more than one hypervisor vendor, but mergers tend to mean you get plenty more than you bargained for. Why should server virtualization be any different?

We asked similar questions at the beginning of the year in the last (Cassatt) Data Center Survey. We found that as much as IT wanted to standardize on one hypervisor, they believed that it wasn’t going to be possible. I always figured our data was a bit ahead of the curve (Cassatt's database -- the source of the survey respondents -- was always filled with early adopters). These other existing and more recent data points might say that things are, indeed, moving in this direction.

What having multiple hypervisors means for management

Whatever the reason, if this is indeed the reality -- that there is a broadening set of hypervisor vendors in the enterprise -- it means management of virtualization is going to need to change. Up until now that conversation has been primarily about detailed management of a VMware-specific stack. With a broader set of virtualization players involved in server virtualization, cross-virtualization management tools become a lot more important.

Why do I say that?

Sure, it seems to make logical sense. But again: data. When we asked about how people wanted to manage virtualization in our early 2009 survey, Cassatt heard pretty clearly that people wanted one management stack. And that wasn't just for all of their virtualization technology, but for their physical environments as well. Clearly, customers weren't doing that yet, but that's what they said they wanted. Silos were bad; integrated management was good. Maybe that world is getting closer now. In any case, you can read more details on those results here.

I'll be interested to see (as more data appears) how much serious market-share movement is underway -- and how much of it is wishful thinking by VMware's competitors. The real indicator, then, might well be an uptick in interest, discussion, and implementation of broader management tools.

Wednesday, September 9, 2009

VMworld '09 proves VMware is no Foreigner to big ambitions

Last week, VMware played host to over 12,000 guests and one '70s/'80s rock band at its annual VMworld event. At its most basic, the show (minus the concert part) was a great place to get some hands-on experience with VMware technology -- the labs were packed all week (despite a bumpy start).

But I always look at these events as a measuring stick for the ambitions of the host. It was no surprise to me that this year was a summation and a reiteration that VMware wants it all. And it did a pretty clear job of communicating the company's belief that it can deliver.

The truth, of course, is usually a bit divergent from the official PR messages (and nearly always a bit later in arriving). Despite any, um, Head Games that it might be playing, did VMworld give any meaningful clues as to how successful they are likely to be? I think it did.

The star of the show: Is VMware Hot Blooded or As Cold As Ice?

Dan Kusnetzky, now making his home at The 451 Group, used his blog to enumerate a number of things about VMware that may seem obvious, but weren't necessarily the things that VMware went out of its way to reiterate during VMworld.

Dan's comments boil down to these:

· VMware is trying to convince the world of a major underlying assumption: that your IT environment will be pretty homogeneous. Dan reiterates that this isn't likely to be true: "Most large organizations have mainframes, midrange machines, storage servers, and network servers in their datacenters," said Dan. "VMware acts as if either these established mainframe or midrange systems are not there or are going away. Neither are likely to disappear even if industry standard systems are increasingly important."

· VMware is now large enough that much innovation is coming from other, smaller players (he listed Cassatt [now part of CA], Surgient, and others as examples).

· VMware often tramples on the smaller members of its ecosystem.

· VMware is enabling a "good enough" delivery in many existing markets (like HA/Failover) that is impacting those markets, to VMware's benefit and other players' detriment.

· The term “cloud” is getting used for, well, everything. (No argument on this one from any corner, I'd imagine.)

I think most of Dan's observations are true for a couple of reasons: at the moment, VMware is a functional monopoly in server virtualization, and they've been in that role for longer than anyone thought would be the case. There are stats now starting to appear that Citrix XenSource and (especially) Microsoft’s Hyper-V are finally starting to make customer inroads. The resulting behavior from VMware, having been alone at the top for so long, is to talk about the world the way they want it to be.

The result of all this? They are getting a big piece of the business where their capabilities are "good enough," and they are a bit predatory when considering what capabilities they want to deliver and what they want to leave for partners. That's not surprising: it's an ecosystem that's a bit out of whack because of their dominant market position.

It's Urgent: Despite their dominance, VMware needs to be successful higher in the stack

VMware's current money-maker -- the hypervisor -- is headed toward a price war that will eat into the revenue and margins they've enjoyed. So, they are working on ways to move up the stack. Toward this end, they are extending into management capabilities with vCenter, something I heard Gartner bugging them to do as far back as three years ago. They have extended the work in this space by announcing a focus on helping create and manage internal clouds, then external clouds, then hybrid clouds -- a vision that matches what industry watchers expect.

However, VMware's need to find something "sticky" to keep customers deeply connected to their technology prompted one of their more interesting recent announcements: the acquisition of SpringSource. The stream of people heading out of Paul Maritz's keynote when SpringSource CEO Rod Johnson came onstage was attributable much more to waiting lists for the next sessions and bad keynote clock management than to the content.

It Feels Like the First Time…or at least like it did at BEA

It took attending VMworld for me to put some of the pieces together about the SpringSource deal: namely, that the execs behind this have done things like this before. Seeing VMware COO Tod Nielsen and CMO Rick Jackson onstage reminded me of days past when they (and I) worked together at BEA. (Rod Johnson was a favorite BEAWorld keynote speaker during that time, I might add.) BEA was, at that point, trying to do much the same thing as VMware is working to do now: create a platform that its customers would build their applications on top of. BEA did some parts of that really well, some not so well, but the thing I took away from last week was a reminder that these guys have been here before. I heard repeated comments from VMware that knowing much more about the application is going to be of greater and greater importance going forward. It’s no accident.

There are other things, though, that make me question how far they can get. One of those things is their overly homogeneous worldview, and their hope that if they say things often enough they will be true. Now, if their adoption rates were to continue unabated, I might even be willing to admit that contradicting them would be a bad bet. However, feedback I heard from large customers while at Cassatt and from some of my early sales interactions here at CA says that the easily virtualized servers are going to be (if they aren't already) taken care of pretty soon. Now comes the hard part – the rest of the servers.

This attempt to keep the virtualization efforts going beyond the easy pickings is probably why they repeated from the stage that with the performance work they've done recently, there's no reason a customer should worry about virtualizing virtually everything. That's certainly a (Double) Vision I'd ask for much more evidence on before believing sight unseen.

There were some other questions, too, like why Fastscale was acquired by EMC -- not its VMware subsidiary. It may be a signal of a much larger plan afoot by VMware's parent company that's much broader than the more homogeneous approach of VMware. But we'll have to wait to see how that one plays out.
One of the 140 industry analysts at the show said he didn't really feel there was much new announced at this year's VMworld. I had the same impression leaving Paul's keynote. However, maybe that's a point in VMware's favor. VMware is now at a stage where it's filling in the holes. Its vision -- and even slides -- weren't drastically different from last year's. Having worked in and around the internal/private cloud story for a number of years now, I didn't see their story as original or groundbreaking. However, it didn't need to be. The context and the story have already been set.

Will VMware be the, er, Juke Box Hero that helps the cloud go mainstream?

Instead of breaking new ground, I see VMware trying to go mainstream with what some innovators have been talking about for a number of years. In that respect, it's exciting to see. In other respects, caution is still warranted. (Customers, I'm sure, don't need to be told to approach vendors -- especially those with competitors trailing behind by a few steps -- with skepticism.) As always, they are wise to keep in mind VMware's underlying assumptions and make sure to use VMware's technology for whatever aligns with their vision -- not just because of the elegant story VMware lays out. And where customers and VMware don't align, there's a whole ecosystem of partners and would-be competitors willing to help you out.

Though I bet they won't do as good a job helping you relive the music of your youth while making IT infrastructure decisions. VMware has that nailed.

Now if I can just get "I Want to Know What Love Is" out of my head...

Wednesday, March 25, 2009

Virtualization complexity is not going away, so plan for reality

Earlier this year, Cassatt ran our second annual Data Center Survey, getting responses from several hundred data center professionals in our database. Last year's survey (register to download it here) focused mainly on data center energy efficiency. This year, we asked those same questions again -- and then some. Since the issues in the data center have shifted, so have our questions. We hit a broad set of topics -- from virtualization to cloud computing to the impact of the economy on data center efficiency projects.

We’ll be publishing the overall results in a couple weeks, but I have been sifting through the survey data and found some interesting tidbits about virtualization that I thought were worth noting, especially in light of the press announcement we made today (more on that in a moment).

VMware is still the runaway leader, and virtualization is ready for primetime

No news there. More than 81% of our respondents have VMware currently deployed in their environment. But it certainly didn't stop there. Citrix and non-Citrix derivatives of Xen were installed in nearly 23% of organizations. Microsoft Hyper-V topped 17%. Other virtualization technologies were listed as well, including Parallels Virtuozzo Containers, Solaris Containers/Zones, IBM LPARS, and a smattering of others.

We also asked where virtualization was being used. The answer: everywhere. Almost 66% of respondents said they were using virtualization not only in development and test environments but also in production. That's up 3% from our 2008 survey. This year, only 13% were using virtualization solely in dev/test. Virtualization, as if we didn't already know, is all over the place and being relied on to support important applications.

Standardizing on one virtualization technology? IT ops folks don't see it happening

We asked our survey-takers to characterize how they plan to support virtualization in their data centers: did they expect to standardize on just one technology or support multiple? Only 23% said that they plan to support one and only one virtualization technology. Painful as it might be from a management standpoint, nearly 68% of respondents acknowledged that their data centers would have hypervisors from multiple vendors. They either:

· Are already supporting multiple virtualization technologies -- and don't expect that to change (23%)
· Are planning to support multiple virtualization technologies in the future (17%) or
· Hope to support only one virtualization technology, but fully expect that others will appear over time (28%)

Sounds like they have a pretty good handle on what's coming at them.

Managing physical and virtual systems with the same management stack?

Many people seemed to be planning to manage both their virtual servers and their more traditional physical server environments with the same tool set. In fact, 41% told us exactly that. Well, that would be nice, wouldn't it? That answer seems to be a bit of wishful thinking, especially if you look at what people are doing right now (like what tools they're using). Still, it's one of the things we at Cassatt want to make more and more possible, so we applaud their foresight on this. Meanwhile, nearly 29% hadn't really made a strategic choice yet about how to manage these mixed physical and virtual environments, which tells me that customers are still thinking about this one. This also puts a big chunk of the organizations out there behind the eight ball, for sure, given how far adoption of virtualization has come. No firm management strategy? Yikes.

How you can plan for all this complexity: prepare for reality

All of these data points lead me back to the design principle we try to keep in mind when creating Cassatt products: reality. If you read today's announcement, we talked about delivering our new GA-level support for Parallels Virtuozzo Containers. This is in addition to the VMware and Xen support we already have in Cassatt Active Response, and the support for controlling physical servers, too (running Linux, Solaris, Windows, and AIX so far).

Most of the discussions about dynamic data centers or internal clouds that you hear from VMware and others don't bring up any of the real-world diversity that exists in IT shops. Maybe this is one of the reasons that some end users are highly skeptical of the whole concept of more flexible, shared, dynamic infrastructures. You can't talk about improving how someone runs their data center if you're not talking about the actual data center that they have to live with every day. (To be fair, as John Humphreys mentioned in my recent interview with him, Citrix at least acknowledged in its recent announcements that having a tool that manages both Xen and Hyper-V would be useful. It's a start, for sure.)

What we're continuing to hear directly from customers and through surveys like the one I previewed a bit here is that virtualization complexity is here to stay. It can't be ignored. Instead, it needs to be taken into account: you'll have applications that use VMware. You'll acquire someone (or be acquired by someone) running Xen. Or Microsoft Hyper-V. Or Virtuozzo. Or (deep breath) all of the above. It's not what you'd like, nor something that's going to make your life as an IT operations professional easier in the short run. But it's reality.

And, when you add in the fact that you still have real, actual, physical servers running in your environment that need care and feeding to support applications, you have a physical and virtual management challenge that warrants an approach that smooths the running of IT and reduces expenses -- rather than the opposite.

There's more about Cassatt Active Response 5.3, out today, in our press release, and overview product info is on our product pages, if you're interested.

And, incidentally, there are a couple of other interesting points worth highlighting from the survey data that I'll try to write about in upcoming posts prior to the whole report going public.

Wednesday, March 11, 2009

John Humphreys, now at Citrix, sees virtualization competition shifting to management

I'm sure when John Humphreys left IDC and joined Citrix last year he had to endure lots of barbs about joining "the dark side" from his analyst compatriots. Of course, his new vendor friends were probably saying the exact opposite, yet much the same thing: no more ivory tower, John; it's time to actually apply some of your insights to a real business -- after all, he now had a bunch of real, live customers that want help solving data center software infrastructure and IT operations issues.

Regardless of comments from either peanut gallery, John did indeed vacate Speen Street and trade his role running IDC's Enterprise Virtualization Service for a spot at Citrix. He's now focused on the overall strategy and messaging for their Virtualization and Management Division, the group built from the XenSource acquisition, which is actively working on virtualization alternatives to VMware -- and more.

I thought John's dual perspectives on the market (given his current and previous jobs) would make for a worthy Data Center Dialog interview, especially given Citrix's recent news that they will offer XenServer for free. And, there's a dual connection with Cassatt, too: long before they were acquired by Citrix, Cassatt and XenSource had worked together on making automated, dynamic data center infrastructure that much closer to reality (in fact, if you poke around our Resource Center or Partner page, you'll find some old webcasts and other content to that effect). And, we worked with John quite a lot in his IDC days.

In the first part of the interview that I'm posting today, I asked John about the virtualization market, criticism that Citrix has been getting, and what they have in store down the road.

Jay Fry, Data Center Dialog: John, from your "new" vantage point at Citrix, how do you see the virtualization market changing in 2009?

John Humphreys, Citrix: I see the basis for competition in virtualization changing significantly in 2009. In recent years, the innovation around virtualization has been squarely focused on the virtualization infrastructure itself with things like motion, HA, workload balancing, etc. The pace of innovation and the rate of customer absorption of recent innovations have slowed. This was inevitable as the demand curve for new innovations around any technology eventually flattens.

Rather than competing to offer new features or functions on top of the base virtualization platform, I see the companies adding management capabilities that extend across multiple platforms. It's only through this ability to manage holistically that customers will truly be able to change the operational structure of today's complex IT environments.

DCD: How do you think Citrix will play into those changes?

John Humphreys: In a sentence, we are working to drive these changes in the virtualization marketplace.

Specifically, the company recently announced that the full XenServer platform would be freely available for download to anyone. This is a truly enterprise-class product -- not just a gimmicky download trial. It has live migration and full XenCenter management for free, with no limits on the number of servers or VMs a customer can host.

At the same time, we introduced a line of management products (branded Citrix Essentials) that are designed to provide advanced virtualization management across both XenServer and Hyper-V environments. Initially, we have focused on providing tools for integration and automation of mixed virtualization platform environments with capabilities like lab management, dynamic provisioning, and storage integration. Over time you will see more automation capabilities with links back to business policies and infrastructure thresholds.

DCD: What are your customers saying are the most important things that they are worrying about right now? How influenced by the economy are those priorities?

John Humphreys: Cost cutting. Pure and simple. We see the rapid economic decline setting the agenda for at least 2009, and it was a major factor in the decision to offer XenServer for free. In tough economic times, well-documented cost-saving measures like server consolidation are relied upon even more, and being the low-cost provider of an enterprise-class virtualization solution provides Citrix with an opportunity to get XenServer in the hands of millions of customers.

DCD: I've seen some press coverage about disappointment regarding the XenSource acquisition by Citrix. The main complaint seems to be that Citrix isn’t being as aggressive as it should be in the space and VMware seems to be adding fuel to the fire by implying that Microsoft will eventually block out Citrix. What are your thoughts on how Citrix is handling XenSource and the competitive environment?

John Humphreys: I've heard those criticisms as well. I think what you have seen already in 2009 is that Citrix has become a lot more aggressive with XenServer. We think we have a distinct position in the marketplace and a unique opportunity. What you saw Citrix announce on February 23rd was the first opportunity to tell the full virtualization story. The free platform download (with motion and management) and the ability to add cross-platform advanced management capabilities is truly unique. We like where we are positioned going forward...and from the early indications, the market likes our position as well.

DCD: Why did you make the jump to the "dark side" of working for a vendor? Is the grass actually greener?

John Humphreys: I always knew I wanted to combine strategy and execution, as to me that is where the magic happens.

...

Up Next: In the second part of this interview, I ask John about his thoughts on (you guessed it) cloud computing, the pros and cons of what Citrix, VMware, and Microsoft have planned in that space, and how much different things look now that he's not an industry analyst with IDC.

Thursday, March 5, 2009

IDC: Downward Directions for IT in 2009 leave room for cloud computing uptick

IDC's 44th annual Directions conference in San Jose this week may be the longest running IT conference in the world, but it didn't pull any punches on the economy. From John Gantz's opening keynote through every track session I attended, the analysts recounted what anyone running a data center knows all too well: IT spending is pulling way back. IDC wisely course-corrected its 2009 spending prediction at the end of last year, and it used some of those revisions at the conference to show how far -- and how fast -- things have headed down. As I was sitting in the audience, I started to wonder if even those revisions were deep enough. Only cloud computing escaped the dour forecast (more on that in a minute).

Here's a quick summary of the key points I took away from the conference, focusing on IDC's take on the macro-level IT environment, the impact of the economy on running a data center, and -- the lone bright spot -- how cloud computing figures into all this. On that last point, let's just say Frank Gens, the day's cloud presenter, was positively giddy to be the one guy who got to deliver good news. The highlights:

The economy has us in a dark, dark place -- but IT is needed now more than ever

John Gantz, IDC's chief research officer, summed up the effect of the economy at the start of the day: "I don't think we've been here before. We're in new territory. We're in the dark" because we don't have a very good handle on what the economy's going to do next. Gantz noted that IDC has ratcheted down IT spending predictions for this year to nearly flat over 2008 (up only 0.5%). That doesn't take into account any effect from the Obama stimulus package (or those from other governments elsewhere in the world). IDC told their analysts not to try to quantify stimulus package impact, said Gantz, but to assume they "won't make things worse." Let's hope. One positive note: 2010's growth rate looks positively robust, but of course that's because it's building on the catastrophe that 2009 is working out to be.

However, says Gantz, the bad economy is not slowing down the increase in mobile Internet users or the adoption of non-traditional computing devices, nor is it putting the brakes on the amount of data being gathered or the number of user interactions per day (predicted to increase to 8.4 times the current rate in the next 4 years). And that's all something for IT to deal with.

So, said Gantz, amid this "extinction event," there are incredible new demands for management. "The economic crisis changes everything and it changes nothing. We have a new, new normal." The current situation merely forces the issue on seeing and doing things differently. "If everything is crashing down around you," said Gantz, "now is a good time to take a risk. Now is a period of opportunity." He noted companies like Hyatt, GE, RIM, FedEx, HP, and IBM had all been started in recessions (I've also written about great innovations during previous downturns).

What opportunities did he see in particular right now? Gantz noted enterprise social media, IT outsourcing, virtualization software, and Internet advertising (really). Of particular note: virtualization management software. Which has a big impact on IDC's view of what's happening in the data center...

The move to more modular, pay-as-you-go data centers -- with warnings about virtualization management

Michelle Bailey, presenting her content from IDC's Data Center Trends program, seemed very concerned about how hard and complex managing a data center had become, and believed that we're going to see customers making moves to simplify things out of necessity.

The recession, said Bailey, "changes the decision on where to hold the [data center] assets." Its main impact is to push data center managers to move "from a fixed price model to a variable pricing model," to move costs from cap ex to op ex.

Virtualization has had a huge impact so far, and will continue to do so, according to Matt Eastwood, IDC group vice president for enterprise platforms. In fact, there will be more VMs than physical servers deployed in 2009. "It will be the cross-over year," said Eastwood.

However, that drives big, big concerns on how data center managers are going to cope, said Bailey. "The thing I worry about the most with virtualization is the management consequences. There's no way to manage this with the processes and tools in place today." In fact, Bailey is so worried that she thinks this "virtualization management gap" might stall the virtualization market itself as users search for management solutions. "I’m worried that customers may have gone too far and may have to dial it back," she said. "The challenge in the server virtualization world is that people aren't used to spending a lot of money on systems management tools."

When we at Cassatt talk to customers about this, we've found that they know there is a virtual management problem and are actively trying to address it. The approach we talk to these customers about is having a coherent strategy for managing all of your data center components based upon the application service levels you need, regardless of whether the compute resources are physical or virtual. Having a separate management stack for each virtualization vendor and another one for their physical systems is not appealing, to say the least.

Another of Bailey's most important points was that there isn't just one type of data center -- there are actually three:

1. Enterprise-style data centers focus on SLAs, cost containment, and are dealing with space issues.
2. Hosting/outsourcer data centers focus on doing what's necessary to meet customer demand.
3. Web 2.0/telco-style data centers are all about cost efficiency and growth.

Trying to compare how you run your data center with one that has a different set of goals is not productive and will get you focused on the wrong things -- and result in more of a mess.

She did say, however, no matter what type of data center you are running, to look at doing things in a much more modular way, as a way to simplify. Bailey called "massively modular" the blueprint for the future data center. This helps break down big problems into smaller, more manageable ones, and ensures that you don't have to be absolutely correct in your 20-year vision for your data center. She sees things like containerized data centers becoming more standardized and less proprietary, making this modular approach more complementary than disruptive to what data centers are already doing. And, with power and cooling still a huge problem for data centers, IT ops and facilities need help from both a more modular approach and the "pretty sophisticated" power management tools that exist. (I like to think that she was thinking of us at this point in her presentation.)

Cloud computing is on track to move to the mainstream -- and show actual growth despite the economy

Bailey had a healthy dose of cloud computing skepticism in her break-out presentation: "Anything that has money attached to it can't be [in the cloud] for another 10 years," she said, clearly paving the way for big organizations with security, compliance, and lock-in concerns to give this cloud model a try, but to do so within their own data centers as an internal cloud.

In his keynote on cloud computing, Frank Gens acknowledged a lot of the concerns that companies have been expressing about going to an external cloud; he was, however, very upbeat. "The idea of cloud is of very, very high interest to CIOs in the market right now," he said. Last year IDC predicted that 2009 would be "the year of moving from the sandbox to the mainstream," said Gens. "We are certainly on that path right now."

Why? Maybe not for the reasons you might think (cost). Gens corroborated comments from Gartner's Tom Bittman at their Data Center Conference back in December: the No. 1 reason that people want to move to the cloud is that "it's fast" to do so.

This new cloud model hasn't yet bulldozed the old model for IT, according to IDC, for reasons we've heard (and Michelle Bailey mentioned above): deficiencies in security, performance, and availability, plus problems integrating with in-house IT. Gens sees cloud computing beginning the move across Geoffrey Moore's chasm toward mainstream adoption as a result of a couple of things: performance-level assurances and being able to connect back to on-premise systems.

"Service level assurances are going to be critical for us to move this market [for cloud computing] to the mainstream," said Gens. And, customers want the ability to do hybrid public/private cloud computing: "They want a bridge and they want it to be a two-way bridge" between their public and private clouds.

And, despite all the economic negativity, IDC painted a pretty rosy picture for cloud computing, noting that it's where the new IT spending growth would be happening. Gens described it as the beginning of the move to a more dynamic deployment of IT infrastructure, and part of an expanding portfolio of options for the CIO.

"We’re right where we were when the PC came along or when the Internet first came out," said Gens. As far as directions go, that's pretty much "up."

Up next: comments on Nicholas Carr's closing keynote at IDC Directions San Jose. Slides from the IDC presentations noted above are available for IDC customers in PDF format in their event archives at www.IDC.com.