Wednesday, October 21, 2009

Scientists v. Cowboys: How cloud computing looks from Europe

Is Europe following the U.S. on cloud computing...or vice versa?

While I was over in Berlin for a chunk of the summer, I had a chance to connect up with some of the discussions going on in Europe around cloud computing. It's true, high tech information these days knows no international boundaries. Articles that originally run in North American IT pubs are picked up wholesale by their European counterparts. New York Times articles run everywhere. Tweets fly across oceans. And a lot of technology news is read directly from the U.S. sources, websites, communities, and the like.

However, homegrown European publications are brimming with cloud computing, too. I found references to cloud in the Basel airport newsrack and the Berlin U-Bahn newsstands, all from local European information sources (and some of their reporters are excellent). European-based and -focused bloggers are taking on the topic as well; take a look at blogs like http://www.saasmania.com/ and http://www.nubeblog.com/. Even http://www.virtualization.info/, one of the best news sources on (you guessed it) virtualization, is run by Alessandro Perilli out of Italy. And, of course, there are big analyst contingents from the 451 Group (hello, William Fellows), Gartner, Forrester, and many others in various European enclaves.

The real question, though, isn't just how Europeans are getting their cloud computing information (though I'm betting quite a few marketing folks are watching that topic closely). The real question is what European customers are doing about cloud computing now and how they want to adopt this new operating model.

First things first: there are many, many European cloud companies

One strong indication that cloud is going to accelerate in Europe: the start-up ecosystem is growing. Just in the past few months I've run across folks like Berlin-based Zimory; Symetriq (from the city that their website and my memory tout as "beautiful and stimulating," Edinburgh, Scotland); Barcelona-based Abiquo -- just to name a few.

This week, EuroCloud launched in 7 European countries as an industry organization to promote cloud computing and software-as-a-service (SaaS). Part of the group's raison d'ĂȘtre (do you like how I worked in a European language?) is to show that there's a vibrant group of companies ready, willing, and able to help get cloud computing going. Another is to help those same companies work through issues that could be putting limits on cross-border growth -- language and legal differences. Phil Wainewright wrote a good ZDNet blog on the EuroCloud launch.

Cloud adoption in Europe v. America: "It's the little differences"

But back to customer adoption.

When I ran European marketing program(me)s for BEA out of London in the early part of this decade, I got a real flavor for some fundamental differences in how the Europeans approach IT compared with their American counterparts. I'm betting this is going to hold true for the cloud computing uptake as well.

America: The Cowboys

Americans, true to a stereotype or two, seem to be the cowboys of new technology adoption. They shoot first, ask questions later. Good thing, too. This approach gives start-ups a way to get some early customers and evangelists who seem to enjoy the thrill of sticking their necks out. Mind you, there are plenty of organizations in the States where that's not the case (and the bad economy is probably putting a damper on some of them now), but I've seen lots of examples of this willingness to take a bit of a risk in hopes of a big pay-off.

Europe: The Scientists

What I saw while living and working in Europe was that despite their many cultures and distinct national personalities, Europeans on the whole are much more methodical and measured in their consideration of anything new. At least when it comes to IT purchases. Maybe it's why they come up with things like ITIL. They are much more about process. They want to make sure all the issues they will run into have been considered. Security? Governance? Compliance? Management? Disaster recovery? I hope you've thought those all through, Mr. Vendor. And, if you have, and can deliver solid answers, you're in luck. These guys will go big for whatever it is. However, getting all those pieces in place can take a while, so you'd better plan on that.

Cloud computing in Europe: the waiting game?

So, where does that put us regarding cloud computing?

I'm sure there are some good adoption stats from IDC or someone similar that answer the question (or give us specific predictions). I know what the European start-ups and other vendors of cloudy wares are hoping: that the folks in Europe are willing to take a leap sooner rather than later. And, they hope that they will take that leap with a European-based company.

But there's another possibility, one less hopeful that the hype is going to convince anyone, and one that matches my experience from my BEA days: they'll wait. I've seen them do it. As is more their style, the European IT buyers might wait for some of the big players who will pave the way and make things safe for them. Or, maybe some will split the difference and get help from an innovator with big backing like Zimory (which has funding from Deutsche Telekom).

The economy is a bit of a wildcard if you're trying to place your bets. Earlier in the year, while working at Cassatt, I reported some commentary from our partner Bull, who said that the bad economy really hadn't affected them yet. I'm hoping to revisit that in an upcoming post, along with additional info I'm seeing on where things stand now.

(If events are any indication, things are looking good. Recent Cloud Camps in Frankfurt, Munich, and northeast England have all seemed to get a lot of attention.)

Either way, it will be interesting. I'd love to hear your take on the hotbeds of European cloud activity, which European vendors are doing unique and innovative work, and where and how customers are starting their cloud computing implementations of one sort or another.

In the meantime, here are some suggested folks to follow on Twitter to stay in touch with the cloud computing goings-on in Europe:

@mastermark @rjudas @zimory @vtri @abiquo @nubeblog @saasmania @raesmaa @stephenmann @wif @bryanglick

Wednesday, October 14, 2009

Running with scissors? Or maybe trimming the risks out of virtualization instead

What's riskier: standing at the top of a hill in a thunderstorm while holding a golf club...or commuting to work? Skydiving...or flying to Pamplona and then taking part in the Running of the Bulls? OK, now for the really tough one: juggling knives...or implementing virtualization in production?

Before you answer, you should be warned that humans are quite bad at assessing relative risks. TIME Magazine had a cover story a number of years back on that very subject. The problem in a nutshell, scientists say, is that we're "moving through the modern world with what is, in many respects, a prehistoric brain."

Deploying virtualization doesn't sound all that dangerous, especially compared with some of the scarier items above (like, say, if you lack knife-juggling skills). If that was your answer, you probably haven't been in an IT shop recently. OK, maybe it's not the same as spending the night in the polar bear cage at the zoo, but it's not without risks. And since risk is a four-letter word in large IT shops that handle mission-critical systems, it's worth figuring out how to get the benefits while minimizing potential problems.

The IT Process Institute survey that VMware and my employer, CA, sponsored and released (actual survey available here) a few weeks back was put together to try to identify what some of the more mature IT shops are doing to deal with worries about risk that virtualization introduces.

The survey itself, which talked to 323 different IT organizations, is a bit daunting to wade through, so I pulled out some interesting tidbits worth highlighting here:

People's sights are set higher than just server consolidation. And they are being aggressive. 72% are aggressively virtualizing production servers, but only 19% are using virtualization just to consolidate servers. The bigger focus is on pursuing high availability and disaster recovery. And, nearly another third are shooting for dynamic resource optimization.

If you use virtualization in production, you are going to have to change operating procedures and controls. The survey found that those organizations with a strong foundation of process controls and procedures were likely to only need to modify the controls they already have in place. That's good news for some of the bigger IT shops and their IT ops staff. However, the more complex things you try to do with virtualization, the survey found, the more modifications should be considered. Kind of straightforward, but worth repeating.

Many mature virtualization users have at one point limited the release of virtualization in production until training requirements and management procedures were taken care of. Maybe it's just a phase everyone has to go through, but it seems many have slowed things down to err on the side of caution. The survey shows, however, that many IT organizations have now reached what it calls "a level of confidence needed to aggressively virtualize business critical systems, including those that are in scope for regulatory compliance." That's impressive, actually, and is a big change from a few years ago.

Here's where the running with scissors part comes in

The study identified a bunch of virtualization bogeymen -- things that seriously worry the IT shops working to deploy virtualization. Some of those worries included:

· It makes a mess (technology-wise). Also known as virtual sprawl (a term that VMware was very sensitive about when we started using this a few years back at Cassatt). This can also hinder compliance efforts.
· Things can hide. There are potential issues with discovery tools not tuned to work with virtual systems.
· Too much of a good thing. There is license compliance risk if virtual servers can appear too easily. You might also exceed available resource capacity.
· Putting all your eggs in one basket. Well-meaning administrators can inadvertently make things riskier by stacking critical apps together on one faulty machine.
· It makes a mess (organization-wise). Aggressive adoption probably means specialized training and new organization structures.
· A perfect target. Security is a big concern for survey respondents, who worry about the hypervisor as a new layer of technology that can be attacked.

Those probably all sound familiar. The interesting point is that the survey said they all added up to this: "putting virtualized systems into production without a well-reasoned set of operational controls creates an unacceptable level of production and compliance risk."

OK, time to hit these problems head-on, then.

The survey's recommendations for reducing virtualization risk

So what's a good way to start addressing those risks (besides hiding the scissors)? The survey has three sets of recommendations. I've noted where in the survey to find them so you won't have to dig through it yourself:

· 11 practices for those organizations with "baseline maturity" (generally doing server consolidation-type things with business critical systems). The focus for those orgs is on host access, configuration controls, VM provisioning, and improvements to capacity & performance management. See page 14 of the survey for the exhaustive list.
· 25 practices for "highly mature, but static" uses of virtualization (generally looking at HA & DR issues). There the suggestions are about configuration standardization, approved build images for provisioning, and using a "trust but verify" approach for changes. It takes all of page 17 of the survey to list these suggestions.
· For the brave ones doing "highly mature, but dynamic" things with their virtualization, the research suggested 12 items around configuration discovery & tracking, change approvals, capacity management, and the overall process maturity needed to support automation. See page 22 for this list.

Some virtualization management suggestions

One of the suggestions that's "highly desirable" is a coordinated view between your physical and virtual environment, according to today's Computerworld article from Beth Schultz about "Getting a Grip on Multivendor Virtualization." CA's Stephen Elliot was quoted in the article talking about some of this survey's findings. "A lot of customers are recognizing that virtualization is great, and works wonders," said Elliot, "but certain environments will not be virtualized and so they need to figure out how to manage and automate both worlds together."

I've posted before on why the automation side of the equation is important, as have Lori MacVittie and others. The report chimes in here, too: "Many view automation used to manage dynamic virtual resources as a prerequisite for tapping internal and external cloud computing resources." But that's a subject for another day.

The Computerworld article also has some good comments about the importance of being able to manage across multiple virtualization vendors' environments, something that has also been discussed here, but was outside the more process-oriented scope of this particular survey.

"The key thing that pleasantly surprised us [in the study] is that customers right now...are thinking more proactively about the need to manage their virtual infrastructure," Elliot said in an interview with Jeffrey Burt of eWeek. "Just because they've got new innovations [in their data centers] doesn't mean that their need for management just disappears."

That, after all, would be pretty risky.

Monday, October 5, 2009

BusinessWeek's Hamm: Recession harms Silicon Valley's ability to contribute, but helps cloud computing

Believe it or not, there are still people who get paid to watch and report on the ins and outs of Silicon Valley. They see a lot of what's going on and probably think that those of us in the business are alternately drivers of a pretty interesting part of the global economy -- and in need of therapy. (Example? How about Larry Ellison's most recent anti-cloud computing rant and the flurry of commentary that followed.) But such dynamics come with the territory.

Steve Hamm of BusinessWeek is one of the guys who has been on the case for years. I met him through my work with Cassatt and have followed his work about the goings-on in the Valley closely. With the economy still struggling to find its footing (certainly on the job front) but the buzz still at full volume about cloud computing, I thought some of his thoughts would be a useful contribution to the dialog here. He's been in the middle of some heated discussions on these topics. Plus, I find it's always interesting to interview the interviewers now and again.

If you don't know Hamm, he has been writing about the tech industry for 20 years, first in Silicon Valley and now in New York City. At BusinessWeek, he covers innovation, globalization, and leadership. He's also the author of two books, The Race for Perfect, about innovation in portable computing, and Bangalore Tiger, about the rise of the Indian tech industry.

Jay Fry, Data Center Dialog: You've been watching Silicon Valley and its cast of characters quite closely for a while. You've also had a chance to watch the economic downturn's impact on the Valley. What do you think will be of lasting importance in the way this recession is playing out compared with previous ones -- or even compared with early predictions about how this one would proceed?

Steve Hamm, BusinessWeek: Because it's so cheap to create Web 2.0 companies, and because most of the services they create are free to the consumer, this recession has had no discernible impact on creativity and usage growth in social networking and social media. However, I believe that the funding squeeze, paucity of IPOs, and risk-aversion among venture capitalists have caused a slowdown in innovation in other areas that need attention -- green technologies, for instance. So while this recession hasn't had a devastating effect on the Valley and innovation, like the dot-com bust and '02 recession, it has harmed the Valley's ability to contribute as much as it could to solving the world's systemic problems.

DCD: Your BusinessWeek cover story back in December featured some big names (like Andy Grove) questioning whether Silicon Valley has the big, long-term thinking these days to deliver major, technological breakthroughs, rather than just incremental improvements. Do you think 2009 has provided evidence one way or the other?

Steve Hamm: I don't think much has changed. While IBM continues to invest aggressively in fundamental science-oriented breakthroughs, HP has narrowed its scope. Start-ups seeking money to do deep, long-term, transformative work continue to have difficulty getting funding.

DCD: Last month you blogged about the effect that consolidation is having on the IT industry and Silicon Valley entrepreneurship in general. In your interviews for the article (with senior Valley execs Bill Coleman and Craig Conway), you heard opposing views: that consolidation will be very tough for vendors -- or that the possibility of acquisition will still inspire start-ups to invent, despite a mostly dry IPO market. "I'm looking for some smart and brave CIOs to begin to experiment with start-up technologies again," you said, "--and get the innovation stream flowing again." Have you seen these brave CIOs yet? What do you think the increasing consolidation in hardware, software, and IT service providers means?

Steve Hamm: I found a few brave CIOs this year. They include the people who were aggressively embracing the combination of cloud computing and mobility, whom I wrote about in my story, "How Cloud Computing Will Change Business." They include: Donagh Herlihy, chief information officer of Avon Products, and Todd Pierce, vice-president for information technology at Genentech North America. These guys aren't taking big risks on unproven technology. They see ready-for-primetime technology that's really useful for them and deploy it widely and quickly.

I think the continued consolidation of enterprise technology providers gives IT purchasers fewer choices, less negotiating power, and less innovation.

DCD: Have you seen any truly innovative business concepts or technologies appear on the scene despite (or because of) the downturn?

Steve Hamm: I think one of the potentially most powerful technologies that has advanced in spite of the downturn is using mobile phones for banking in emerging nations. There are a lot of pilot programs underway. Once this stuff gets widespread, it has the potential to transform the economies of poor nations.

DCD: What was the thing that caused you to think that cloud computing was worthy of BusinessWeek coverage?

Steve Hamm: It's the newest iteration of Internet computing, and will make it easier and cheaper to put the full power of computing in everybody's hands.

DCD: In June's special report about cloud computing, you mentioned that this "may be the largest growth opportunity since the Internet boom," but that it's going to take a while. Some have speculated that cloud computing might be the innovation that comes out of this downturn. From what you've learned covering the topic, what's your view on the evolution of cloud computing and what its impact might be? Has it lived up to its billing so far?

Steve Hamm: I think it has lived up to its billing so far, though there still is a long way to go. Adoption of SaaS by businesses has accelerated during the downturn, as companies look for ways to do more things with technology while avoiding capital spending as much as possible.

One of the most important things to get done by the industry is to ensure that individuals and companies can easily shift from one cloud service provider to another without undue stress. No lock-in! Efforts are starting along these lines, but they could get bogged down by the desire of companies to gain profit advantages through using proprietary technologies.

DCD: How difficult is it to pick up a relatively technical topic and write something for the BusinessWeek audience? After that June special report on cloud computing, for example, I saw criticism from industry insiders that some of the examples you used were not actually examples of cloud computing. How do you approach complicated, nuanced, or -- like cloud computing -- ill-defined technical issues?

Steve Hamm: I don't write about the plumbing of technology, so it's not hard to make something accessible to our smart and aware BusinessWeek readers. The critiques of my story were silly. My critics didn't seem to be able to register the fact that the story was about the melding of cloud computing and mobility. Also, they were hung up on narrow technical definitions while I was writing about broad shifts in the computing landscape. My advice: Get a life.

DCD: What's the relationship you generally have with the technical press and industry analysts? How much do you rely on them to vet technologies or trends before they get your attention?

Steve Hamm: I don't read trade publications very often, though I respect the best of them. I do talk to industry analysts frequently and respect many of them. Important technology shifts usually come to my attention through meetings with the innovators themselves. They (actually, their PR people) know what I'm interested in and e-mail me asking for meetings.

DCD: When I was last at your offices, the outlook for BusinessWeek (and journalism's current business model) seemed pretty bleak to me. How are things looking for BusinessWeek at the moment? Do you think there's a solution to "reset" the business of journalism that can work financially? How do you think you and your peers will come through this all?

Steve Hamm: BusinessWeek is being sold by McGraw-Hill and its future is very uncertain. The business model for serious business journalism, especially in print, is under assault. The search for truth by professionals is very expensive, and isn't supported by current print or even online revenue trends. I think that eventually new business models will emerge that support serious journalism, but I don't see them yet. If you think of something that will save journalism, please send me a note.


Thanks to Steve for spending the time to do this interview. Many of the issues discussed here will obviously continue to have a big impact on businesses -- and the IT and data centers supporting them. One of the more interesting angles to think about is that decisions being made now, under the duress of a deep recession, could play out over a vastly different backdrop as economic conditions change. Cloud computing has the chance to help shrink the time it takes a business to respond as the economic climate shifts, but only if an organization's IT department has made significant progress toward adopting this new operational model. And, of course, cloud computing needs to do what it has promised for each individual customer.

In any case, I'm pretty sure that all of this will continue to give Steve and BusinessWeek plenty of things to cover for as long as they can navigate these same rough waters themselves.

You can find Steve Hamm's Globespotting blog here.

Wednesday, September 30, 2009

Multiple virtualization vendors in one IT shop? If so, the management challenge changes

A survey published at The Hot Aisle this week points to a shift in the virtualization market that we've heard about from Microsoft and Citrix more and more over the past few months: people are adopting more than one virtualization technology in their environments.

VMware has had an impressive run as a near-monopoly in the server virtualization space for the past several years. And, the competitors have been saying that any day now they will begin chipping away at VMware's dominance. Truth is, data points making that case have seemed few and far between. Now either that's changing, or data supporting this view is getting more visible.

Having more hypervisor vendors is likely to have an impact on how all this stuff gets managed. But first, some data:

Only one hypervisor? Not really.

Just before VMworld, I saw some figures (source long since forgotten) showing that Microsoft's installed base for Hyper-V was growing quickly. Andi Mann from Enterprise Management Associates (EMA) pointed me to survey research his firm has done as far back as 2006 showing that having multiple virtualization vendors is actually pretty commonplace. According to Andi's 2008 research, 90% of organizations (large & small) have multiple virtualization vendors, with 2.4 being the average number of vendors. "Only 2% of all enterprises are dealing with a simple, homogeneous virtualization environment comprised of one platform, one technology, and one vendor," says Andi.

And, this week's survey at The Hot Aisle from the Enterprise Strategy Group (ESG) adds fuel to the fire: 44% of respondents to the poll use two or more hypervisors in their IT environment. 16% use three or more.

Why more than one hypervisor?

Why would organizations want so many types of server virtualization in house? ESG's Stephen O'Donnell blogged that he didn't think this was a result of maturing products from Citrix and Microsoft, but rather because of licensing moves by vendors and license management practices by customers. ESG was also running a second poll while I was viewing their survey results, asking "Why is your organization running more than one hypervisor (e.g., Citrix/Xen, KVM, Microsoft Hyper-V, VMware ESX)?" The most common answer when I looked (24%) said that pressure from people like Oracle and Microsoft is pushing them to support multiple hypervisors. 13% say it's just departments doing their own thing. Both contribute, I'm betting.

Another reason is likely to be acquisitions, something that's becoming more common with the downturn. Sure, you didn’t set out from the start with more than one hypervisor vendor, but mergers tend to mean you get plenty more than you bargained for. Why should server virtualization be any different?

We asked similar questions at the beginning of the year in the last (Cassatt) Data Center Survey. We found that as much as IT wanted to standardize on one hypervisor, they believed that it wasn’t going to be possible. I always figured our data was a bit ahead of the curve (Cassatt's database -- the source of the survey responders -- was always filled with early adopters). These other existing and more recent data points might say that things are, indeed, moving this direction.

What having multiple hypervisors means for management

Whatever the reason, if this is indeed the reality -- that there is a broadening set of hypervisor vendors in the enterprise -- it means management of virtualization is going to need to change. Up until now that conversation has been primarily about detailed management of a VMware-specific stack. With a broader set of virtualization players involved in server virtualization, cross-virtualization management tools become a lot more important.

Why do I say that?

Sure, it seems to make logical sense. But again: data. When we asked about how people wanted to manage virtualization in our early 2009 survey, Cassatt heard pretty clearly that people wanted one management stack. And that wasn't just for all of their virtualization technology, but for their physical environments as well. Clearly, customers weren't doing that yet, but that's what they said they wanted. Silos were bad; integrated management was good. Maybe that world is getting closer now. In any case, you can read more details on those results here.

I'll be interested to see (as more data appears) how much serious market share movement is underway -- and how much of it is wishful thinking by VMware's competitors. The real indicator, then, might well be an uptick in interest, discussion, and implementation of broader management tools.

Tuesday, September 22, 2009

Making cloud computing work: customers at 451 Group summit say costs, trust, and people issues are key

A few weeks back, the 451 Group held a short-but-sweet Infrastructure Computing for the Enterprise (ICE) Summit to discuss "cloud computing in context." Their analysts, some vendors, and some actual customers each gave their own perspective on how the move to cloud computing is going -- and even what's keeping it from going.

The customers especially (as you might expect) came up with some interesting commentary. I'm always eager to dig into customer feedback on cloud computing successes and roadblocks, and thought some of the tidbits we heard at the event were worth recounting here.

(A side note: if you're interested in more cloud-related customer comments, you can look at some previous Data Center Dialog posts, including this one recounting questions overheard about internal clouds from a few months back.)

Clouds under the radar

As a way to set the stage, 451 Group analyst and ICE practice lead Rachel Chalmers compared cloud computing’s adoption to that of Linux in the late '90s, and to the beginnings of server virtualization in dev/test environments. "There was a lot of VMware [being used in IT] before CIOs even knew it was there. It only belatedly comes to the attention of the architects." Chalmers was clear that adoption is underway ("Customers are already using cloud," she said), and emphasized how under-the-radar adoption like this can really help. "They come pre-evangelized," said Chalmers. That means there are a lot fewer people to convince when it comes time to make the case to roll this out widely.

Customers: Some hesitate to call it cloud

Yuvi Kochar, CTO from the Washington Post Company, saw the work that his teams were doing as shared services, and said, "I hesitate to call this work a cloud." He acknowledged that they were enabling elasticity and cost accounting, but that some of what they were working on was still "hand-wired." What was the thing that pushed Kochar over the edge to move toward dynamic, shared IT services (even if he can't bring himself to actually call it "cloud")? Cost. "We want to move everything to a variable cost model," Kochar said. Another customer, Ljubomir Buturovic, VP & chief scientist from Pathwork Diagnostics, said they told their server vendor they would be buying no more hardware, and they haven't. Again: cost was the driver -- in this case, capital costs.

Cloud: It's (still) not for the faint of heart

I thought some of the most useful insights of the customer panel were from Jim Houghton, now co-founder and CTO of Adaptivity. Houghton reflected back on his experiences a few years ago getting what could now be termed a private cloud -- then called utility computing -- off the ground at Wachovia and Bank of America. He characterized the work as something which eventually turned into a $100-million project to stabilize a "chaotic environment" in IT and "took out $500 million in op ex."

Houghton noted that he and his employers at the time "learned a lot of hard lessons along the way." For example, the mechanisms for truly automated provisioning and handling dynamic shifts in demand are "really important," but really hard, especially since the tools at the time he started (early to mid-2000s) were still quite immature. Metering what you're using is critical, he said, but "it's hard to meter when you're separate from the physical environment." Overall, said Houghton, cloud-style dynamic infrastructure is "not for the faint of heart."

Funny. I heard Donna Scott of Gartner make the same observation about creating a real-time infrastructure back in December. We've come a long way, but there's more to do, for sure.

Biggest pain: impact on the people and the organization

Houghton identified a couple best practices for making the shift to a dynamic, shared infrastructure to support your applications: for example, workloads that are entirely self-contained and in which you have access to the source code make excellent candidates for these types of deployments. However, he said, you need to truly understand the workload characteristics. That last bit is something I've heard many large IT organizations lament as a huge problem -- profiling what they currently have and how it runs.

However, he said, the more painful thing is the change in the operational model and its impact on a company's organization and the people that work in it. I've heard this many times over the past several years, from customers, analysts, and even vendors. You shouldn't underestimate the impact that the cloud operating model will have on personnel.

"It gets to be a very touchy subject for clients," said Houghton. "It's hard to do the business cost model without talking about 'these 20 people will be out of a job.'" Or at least who can be moved to focus different things than they're focused on today.

Need to move beyond just virtualization

Houghton also made a comment that infrastructure that's "mildly virtualized may be efficient, but is not all that dynamic." I have to agree: there is a lot more needed to make a private cloud-style infrastructure fly than loading up on a bunch of virtual servers. In fact, that point was made pretty strongly when Dan Kusnetzky lined up his fellow 451 Group analysts to highlight some of the cloud computing inhibitors. William Fellows conveniently rattled off some of his (and the industry's) favorites: security, SLA support, corporate governance, interoperability, vendor viability, job security, and misaligned business models. To name a few.

These are issues we've heard before, for sure, and are at the top of the list of things to be addressed in order for cloud computing to be viable day-to-day in an IT shop. I think it's good news that we're hearing more and more about the manageability side of the issue.

Can I drive your Mercedes while you're not using it?

One of the strongest objections to an internal cloud, or really a shared infrastructure of any sort, still boils down to what's called "server hugging." Houghton gave an amusing explanation of the mentality by putting it this way: "Just because I have 4 Mercedes and I can only drive 1 at a time, doesn’t mean I'm going to let you drive the other 3." In his case, the team of coworkers and vendors pitching this new approach had to put in "years of work" to "build up the trust. What you have to say in response is, 'You'll get everything you wanted, plus a lot more.'" And, of course, you have to back it up by delivering on your promises with great cost savings, excellent service levels, and an improved ability to respond to new requirements from the business.

Are we making progress on cloud computing?

Rachel Chalmers of The 451 Group noted that in many cases a move to cloud computing doesn't feel like progress at all. "All we're doing is moving the headaches somewhere else. But," she said, those management headaches "still need to be solved."

Fellows noted that there really isn't any definition of success so far. Early adopters of grid, utility computing, and virtualization have been the ones in his experience to be most aggressive in working toward cloud-style environments. "It’s a logical end-point for any of those [earlier] activities," said Fellows.

In fact, said Chalmers, "very often when we see an early adopter of cloud that's successful, it's because they understood HPC [high-performance computing] and putting everything under control in the data center."

So, are we making progress toward incorporating cloud computing in today's IT environment?

Houghton from Adaptivity made the point that "the economic malaise has put a lot of power back into the CTO's hands." It's a chance to use this power to instigate more sweeping changes to how IT operates than at any time in recent memory. But people are being judicious with that power.

Appropriately, Chalmers probably did the best job of putting cloud computing in that context: "In this world of financial crisis, the acid test of any technology is: 'Does anyone care enough to sign a purchase order?'" Clearly, some do. (And some fraction of those are on stage at events like this talking about what they've done and learned.) And, as the industry matures the management capabilities and works out the kinks that many of these customers noted, others will start to feel that it's time to sign on the dotted line, too. And hopefully join them on stage.

Wednesday, September 16, 2009

7 ways Twitter improves an IT conference. And 2 ways it makes things worse.

This week, VMware announced that the presentations from VMworld 2009 were available for download. And they, of course, used Twitter to do so -- a much-used source of "data center dialog," if I do say so myself.

It's been a few weeks since VMworld, but I'm amazed by the engagement still going on with that show via Twitter (check it out for yourself at #vmworld). As Andi Mann from EMA pointed out prior to the event, the VMware folks seem to have this conference tweet-o-rama thing down pretty well.

Which got me thinking: since we're all learning about what to do and not do with Twitter in real time, it might be worth assessing what worked at VMworld -- and IT shows in general -- tweetwise. And, of course, it's always fun to list the things that didn’t work at all (free advice: let’s all try not to do those things next time).

Before I launch into this, I'll note that some of these tweet-enabled scenarios were planned methodically by the show organizers. It's a big part of what's called marketing these days. Other Twitter uses, however, were definitely not in VMware's plans and probably annoyed the organizers to no end. But such is the world of Twitter. If you could control it, it wouldn't be nearly as interesting a phenomenon. Nor as powerful.

So, here are the 7 things I thought worked out really well if you happened to be on Twitter during this show (and, perhaps, will be helpful at many other IT shows like it):

1. Pre-show Build-Up Using Anything and Everything: VMware themselves, I think, did a masterful job of building excitement for the event on Twitter. And the things they used didn't have to be inherently exciting. They showed off the hands-on lab set-up. They showed off the conference bag. They teased the band headlining their party (and many on Twitter teased right back when they learned it was Foreigner). Each of these items was mostly inconsequential, but was an excuse to connect with potential attendees to convince them to come. Or remind people to sign up or even just to plan their on-line calendars.

The best part: an interesting behind-the-scenes look at the event set-up.

The worst part: random folks each saying "Just made my plane reservations to VMworld! W00t!" Either way, it was hard not to be engaged and, yes, looking forward to the event in some way.

2. Crowd-Sourced, Spontaneous Event Idea Generation and Organization: Aside from the now-very-common tweet-ups that have become pretty easy to plan, I watched a 5K fun-run over the Golden Gate Bridge get suggested, accepted, planned, and organized on Twitter in the weeks prior to the show. Requests for volunteers to help organize went out the same way, and runners goaded non-runners into joining with just a few sarcastic tweets. The only downside? A few of the runners returned to the exhibit hall post-run in their commemorative t-shirts prior to taking a commemorative shower. But that's not Twitter's fault.

3. A Way to Deal with Unexpected Logistical Snags: Many of the most popular sessions at VMworld (which, as you might guess, had "cloud computing" in the title) were fully booked pretty far in advance of the show in the pre-show reservation and agenda tool. Very frustrating. But they announced the magical "clearing of waitlists" as they were able via Twitter. And, on-site, the organizers were able to communicate about the hands-on labs when they crashed and were subsequently restored on the first day.

4. An Ad Hoc Meeting Planner for Attendees: We attendees used Twitter to find people we knew were going to be around somewhere/sometime during the week, and alert the world to our presence in general -- or even our specific location. I found CNET blogger and Cisco cloud guru James Urquhart blogging in a random hallway, exactly where he tweeted he'd be. People ID'd me from my Twitter avatar picture and made business connections. I found people I'd interacted with only in 140-character bursts, but never met (for example, it was going to be hard to recognize @beaker without his squirrel disguise, until someone sent a twitpic of him from the exhibition floor). And a few people were even sharing reviews of different parts of the big gala party as it was happening, presumably sending the twitpics and tweets with the hand not holding onto their cocktail.

5. A Way to Start Conversations to be Continued on the Show Floor: A couple of vendors had booth staffers with a significant Twitter following. They used the event as a way to encourage folks to come by their booth and continue their on-line discussions. Notice I didn't give kudos for "using it to promote your booth & giveaways." Sure, vendors used it for that, but keep reading. You get black marks for using Twitter that way.

On the positive side, Microsoft had fluorescent t-shirts identifying their tweeters, a great way to open a conversation with them. Twitter is a way to have a ready-made intro for talking to someone you're following (or vice versa). Suddenly the event is full of almost-friends and conversations can pick up where they may have left off on-line -- or take off from scratch pretty rapidly.

6. How to Get Around the Rules of the Show (AKA "The Rebuttal"): Sure, Twitter is a way to enable those not at the show to "listen in," participate, comment, etc. That's been well documented since Twitter showed up on the scene. But VMworld also featured "The Rebuttal" from none other than the Microsoft contingent. Sure, they weren't allowed to show their competitive products on the show floor, but they weren't shy about tweeting their thoughts throughout the keynotes and providing some reality checks of the hosts' spin machine. I'm not sure I agreed with all of their snarky on-show-floor commentary countering the VMware hype, but I definitely read it. It also brought up this odd situation: Microsoft as underdog. That's a bit amusing, when you think about it. Twitter's often seen as a great equalizer or content meritocracy, meaning people you've never heard of can get their two cents in. Microsoft proved the big guys can, too.

7. The Continuation of the At-the-Show, “In the Club” Feeling Long after the Event Is Over: I'm still checking (and contributing to) the #VMworld hashtag two weeks after they finished sweeping the final blinking give-away pens out of Moscone. Sure, the message flurry is nothing like it was during the event, but it has kept going. The post-event content was slower, but filled with commentary (like this one...and my previous post) and (guess what) free publicity for the organizers. Sure, VMware used the after-show tweets to publicize what the virtual twitterati were saying about them and the event (especially the good stuff), but they also used it to shape the commentary, and remind attendees that they had a great time. All that sure beats the traditional post-show survey.

OK, so those were the things that Twitter seemed to improve. What didn’t work?

1. The Twitter Analyst NDA: VMware caused another phenomenon and debate: analysts attending their Monday pre-show pre-briefings were told that the event was under non-disclosure. And, yes, that meant no tweeting. There were a few meta-tweets about not tweeting, but most analyst attendees didn't use Twitter much at all on Monday. So when Tuesday's main-stage show began, the backlog of messages and content that the analysts had been stewing over for 24 hours flew by with wild abandon.

The meta-tweets sparked another discussion that I saw several people join in over the following few days: how much information was OK to disclose? Tweeting that you were at an event getting a pre-briefing under NDA meant you were acknowledging that the "secret briefing" event was happening, and that you were merely not talking about the specific content. Some other industry events and briefings are requiring that no tweets even mention that a briefing is taking place. This war still has a lot of skirmishes left to fight. NDAs and Twitter require a lot of work on the part of both the briefer and the briefee to get right.

To be clear, the NDA actually worked as VMware wanted it to, but I'm not sure if that qualifies as a good thing. Watch this space.

2. "Come by Our Booth" Tweets: Perhaps the most useless and annoying tweets of the week were those described by one person as tweets that went something a little like this: "Hey there! Be sure to come by Booth # ___ to talk to ____ [vendor name] about ________ [product being sold] and have a chance to win a ______ [expensive techie gadget]." These messages are antithetical to how Twitter can best be used. To me, they said, "Move along, skip that booth; no conversation to be had there."

Ending up with more to do at a conference

All told, I feel like I got much more out of this event than I had previously, but also walked away with a feeling that there was even less time to get everything I wanted done during the week. The connections I made were stronger and more robust, though, and I have to admit it's because of Twitter. I'm interested to hear what others who attended thought -- and what people who've attended other tweet-enabled shows have seen that works great. Or badly.

But of course, the real question is: when do I start watching the #VMworld2010 hashtag?

Wednesday, September 9, 2009

VMworld '09 proves VMware is no Foreigner to big ambitions

Last week, VMware played host to over 12,000 guests and one '70s/'80s rock band at its annual VMworld event. At its most basic, the show (minus the concert part) was a great place to get some hands-on experience with VMware technology -- the labs were packed all week (despite a bumpy start).

But I always look at these events as a measuring stick for the ambitions of the host. It was no surprise to me that this year was a summation and a reiteration that VMware wants it all. And it did a pretty clear job of communicating the company's belief that it can deliver.

The truth, of course, is usually a bit divergent from the official PR messages (and nearly always a bit later in arriving). Despite any, um, Head Games that it might be playing, did VMworld give any meaningful clues as to how successful they are likely to be? I think it did.

The star of the show: Is VMware Hot Blooded or As Cold As Ice?

Dan Kusnetzky, now making his home at The 451 Group, used his blog to enumerate a number of things about VMware that may seem obvious, but weren't necessarily the things that VMware went out of its way to reiterate during VMworld.

Dan's comments boil down to these:

· VMware is trying to convince the world of a major underlying assumption: that your IT environment will be pretty homogeneous. Dan reiterates that this isn't likely to be true: "Most large organizations have mainframes, midrange machines, storage servers, and network servers in their datacenters," said Dan. "VMware acts as if either these established mainframe or midrange systems are not there or are going away. Neither are likely to disappear even if industry standard systems are increasingly important."

· VMware is now large enough that much innovation is coming from other, smaller players (he listed Cassatt [now part of CA], Surgient, and others as examples).

· VMware often tramples on the smaller members of its ecosystem.

· VMware is enabling a "good enough" delivery in many existing markets (like HA/Failover) that is impacting those markets, to VMware's benefit and other players' detriment.

· The term “cloud” is getting used for, well, everything. (No argument on this one from any corner, I'd imagine.)

I think most of Dan's observations are true for a couple of reasons: at the moment, VMware is a functional monopoly in server virtualization, and they've been in that role for longer than anyone thought would be the case. Stats are now starting to appear suggesting that Citrix's XenServer and (especially) Microsoft's Hyper-V are finally making customer inroads. The resulting behavior from VMware, having been alone at the top for so long, is to talk about the world the way they want it to be.

The result of all this? They are getting a big piece of the business where their capabilities are "good enough," and they are a bit predatory when considering what capabilities they want to deliver and what they want to leave for partners. That's not surprising: it's an ecosystem that's a bit out of whack because of their dominant market position.

It's Urgent: Despite their dominance, VMware needs to be successful higher in the stack

VMware's current money-maker -- the hypervisor -- is headed for a price war that will eat into the revenue and margins they've enjoyed. So, they are working on ways to move up the stack. Toward this end, they are extending into management capabilities with vCenter, something I heard Gartner bugging them to do as far back as three years ago. They have extended the work in this space by announcing a focus on helping create and manage internal clouds, then external clouds, then hybrid clouds -- a vision that matches what industry watchers expect.

However, VMware's need to find something "sticky" to keep customers deeply connected to their technology prompted one of their more interesting recent announcements: the acquisition of SpringSource. The people streaming out of Paul Maritz's keynote when SpringSource CEO Rod Johnson came onstage were much more attributable to waiting lists for the next sessions and bad keynote clock management than to the content.

It Feels Like the First Time…or at least like it did at BEA

It took attending VMworld for me to put some of the pieces together about the SpringSource deal: namely, that the execs behind it have done things like this before. Seeing VMware COO Tod Nielsen and CMO Rick Jackson onstage reminded me of days past when they (and I) worked together at BEA. (Rod Johnson was a favorite BEAWorld keynoter during that time, I might add.) BEA was, at that point, trying to do much the same thing as VMware is working to do now: create a platform that its customers would build their applications on top of. BEA did some parts of that really well, some not so well, but the thing I took away from last week was a reminder that these guys have been here before. I heard repeated comments from VMware that knowing much more about the application is going to be of greater and greater importance going forward. It's no accident.

There are other things, though, that make me question how far they can get. One of those is their overly homogeneous worldview, and their hope that if they say things often enough they will become true. Now, if their adoption rates continue unabated, I might even be willing to admit that contradicting them would be a bad bet. However, feedback I heard from large customers while at Cassatt and from some of my early sales interactions here at CA says that the easily virtualized servers are going to be (if they aren't already) taken care of pretty soon. Now comes the hard part -- the rest of the servers.

This attempt to keep the virtualization efforts going beyond the easy pickings is probably why they repeated from stage that with the performance work they've done recently, there's no reason a customer should worry about virtualizing virtually everything. That's certainly a (Double) Vision I'd ask for much more evidence on before believing sight unseen.

There were some other questions, too, like why Fastscale was acquired by EMC -- not its VMware subsidiary. It may be a signal of a larger plan afoot at VMware's parent company, one that's much broader than the more homogeneous approach of VMware itself. But we'll have to wait to see how that one plays out.
One of the 140 industry analysts at the show said he didn't really feel there was much new announced at this year's VMworld. I had the same impression leaving Paul's keynote. However, maybe that's a point in VMware's favor. VMware is now at a stage where it's filling in the holes. Its vision -- and even slides -- weren't drastically different from last year's. Having worked in and around the internal/private cloud story for a number of years now, I didn't see their story as original or groundbreaking. However, it didn't need to be. The context and the story have already been set.

Will VMware be the, er, Juke Box Hero that helps the cloud go mainstream?

Instead of breaking new ground, I see VMware trying to go mainstream with what some innovators have been talking about for a number of years. In that respect, it's exciting to see. In other respects, caution is still warranted. (Customers, I'm sure, don't need to be told to approach vendors -- especially those with competitors trailing behind by a few steps -- with skepticism.) As always, they are wise to keep in mind VMware's underlying assumptions and make sure to use VMware's technology for whatever aligns with their vision -- not just because of the elegant story VMware lays out. And where customers and VMware don't align, there's a whole ecosystem of partners and would-be competitors willing to help you out.

Though I bet they won't do as good a job helping you relive the music of your youth while making IT infrastructure decisions. VMware has that nailed.

Now if I can just get "I Want to Know What Love Is" out of my head...