Wednesday, January 27, 2010

EMA’s Mann: enable the ‘responsible cloud’ while allowing cowboys enough rope to experiment

I remember back in the '90s when the World Wide Web was being described as the Wild Wild Web. For me, that phrase always conjured up jarring visions of gunslingers waiting patiently for Netscape Navigator to download page after page of underlined, blue hyperlinks.

Fast forward to 2010 and the next big wave of IT change -- cloud computing -- is being described in much the same way. Andi Mann, vice president at industry analyst group Enterprise Management Associates (EMA), recently authored a report (“The Responsible Cloud”) that used the same kind of frontier metaphor to describe the situation enterprises find themselves in with cloud computing today.

Andi's report not only centered on data describing the state of cloud computing in larger enterprises, but also emphasized that many key management disciplines are both critical for cloud and still not in place. Both Carl Brooks of SearchCloudComputing and Denise Dubie of Network World wrote good summaries of some of the EMA report’s findings (CA, my employer, by the way, was one of the report’s sponsors).

To dig into what some of the specific survey findings might mean, Andi spent a few moments to do this Data Center Dialog interview. Read on for Andi’s thoughts on the speed of cloud adoption, why he was so surprised about some of EMA’s data about virtualization, respondents’ strong bias against public clouds, and just how you can make the most of those IT cowboys.
Jay Fry, Data Center Dialog: Andi, the EMA Responsible Cloud study you just published has some great data about customers’ current thoughts on some of the most important issues about cloud computing at the moment (hint to readers: it’s not the definition). First, a couple specific questions about your report. You reported that 11% of your surveyed companies are intending to implement cloud computing in the next 12 months. Does that feel like a lot, a meager number, or just about right given where you feel things are at the moment?

Andi Mann, Enterprise Management Associates: Given that we are really still in an early-adopter period with cloud computing, I think 11% seems just right considering we were looking only at mid- to very large-sized enterprises. Of course, there is a lot of hype around, with predictions of 20, 30, 40% or more of enterprises switching to cloud, but these seem over-optimistic to me. Cloud is a fundamental shift in how we build and deliver IT services, and especially with these larger organizations, it would be surprising to see an overwhelming adoption just yet. With economic growth still slow, that would mean enterprises dumping their sunk costs in existing IT, which realistically is simply not going to happen overnight.

DCD: One of the big on-going debates (still, unfortunately) is about the reality of private clouds. Your survey numbers say that they are not only real, but are favored by a significant number of customers (75%). Some industry watchers have called private clouds a stepping stone to the public cloud; others see it as an end in and of itself. Others call it purely self-serving vendor hype. What is your take on your results – and the debate?
Andi Mann: The results did not surprise me. In fact, it seems to me that a lot of predictions about public cloud are based on impossibly idealistic notions of IT, not on the realities of a day-to-day, in-the-trenches view. Enterprises like the idea of cloud, but are looking for a balance between the ideal and the achievable. So they will look to get the benefits of cloud computing – the economies, the self-service, the agility and flexibility – from their own existing investments.
Over time they will likely branch out into public clouds in some way, but IT never gets rid of anything – mainframes, COBOL, client-server. Heck, I know some very large organizations still running Windows NT and even DOS on mission-critical systems today. I have no doubt it will be the same with cloud – private and public cloud will both simply extend the IT we already have, not replace it.

DCD: I like your comment that cloud computing, “like any ‘new frontier,’” has “too many cowboys, and not enough sheriffs.” The whole focus of your report is the “responsible” cloud and your prescriptive maturity model for how to get there. What is so “irresponsible” about how many orgs are approaching the cloud at the moment, and how do you suggest that be fixed? What’s going to lead to more sheriffs in IT? Will we get there quickly enough?
Andi Mann: The issue I see with many “irresponsible” cloud deployments is that they are lacking important management discipline – monitoring, audit, change detection, data management, and more. As a direct result, they are open to some pretty crippling problems, like data loss, downtime, poor performance, audit failure, and more – which might be fine for a Web 2.0 startup, but is unacceptable for a global financial enterprise.
As cloud use becomes increasingly mission-critical in major organizations, I do believe we will see repercussions for this lack of management, including loss of business, damaged credibility, prosecution, fines and more. When that starts to happen, the disciplines our research highlights – performance monitoring, event management, reporting, security, automation, problem diagnosis, alerting, and more – become more than nice-to-haves; they are mission-critical in their own right.
DCD: Coming from CA, I’m obviously a big proponent of the management disciplines you talk a lot about in your report and how to bring those to the cloud. However, I’ll play devil’s advocate for the moment: isn’t there something good about breaking the rules sometimes to get huge changes accomplished? I’ve heard lots of anecdotal evidence to suggest that cloud computing is following the same adoption trend as many disruptive innovations before it: adopters often get started by ignoring the rules (developers paying for EC2 instances on their credit cards and the like). If IT were to always stick to the tried-and-true, innovation would never have a chance. How should companies look at those trade-offs?
Andi Mann: That is actually a really good point. It is important to see that “irresponsible cloud” is not always bad per se, or at least not forever. My colleague Julie Craig actually pointed this out in the report, saying that the “cowboys” of IT, just like in the Wild West, have an important role in opening up new frontiers like cloud. Embodying many of the positive aspects of the pioneering spirit of the Wild West cowboy – adventurous, innovative, self-sufficient – they take risks so that they and others can eventually reap the rewards.
But there is a time and a place for them, and once they start to threaten the integrity of mission-critical systems and data, they must be reined in. Otherwise organizations will start to face data breaches, compliance and audit failures, service outages, staff costs, and more that will damage their business, rather than enable it.
DCD: You talk a bit in your survey about “rogue deployments” of cloud computing. What should even the most process-driven IT department learn from some of these rogue cloud efforts? Is there a good way to pull some of the rogue efforts back into the mainstream that you’ve seen?
Andi Mann: This comes down to gaining visibility into cloud deployments – such as through application mapping, network monitoring, change detection, or simply asking – and then allowing the cowboys enough rope to experiment, to learn what works and what doesn’t, to build skills and knowledge, to fix broken processes and streamline failing ones, and even to make some mistakes, as long as they are not "playing" with mission-critical systems.
Over time, these cowboys can become important contributors in a new cloud group within IT, with a mandate to experiment and go outside the box – starting with non-disruptive systems and experimental applications, but over time taking on mission-critical systems too. Certainly stomping them out is not the answer – instead, by finding out what they are doing and why, you will learn what is broken and how to fix it.
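
[To make Andi’s “finding out what they are doing” step a bit more concrete, here’s a rough sketch of one way to do that reconciliation: diff what IT officially tracks (a CMDB export, say) against what the cloud provider says is actually running. The file names and column names are hypothetical stand-ins, not any particular tool’s format. -ed.]

    # A minimal sketch of surfacing "rogue" cloud instances: reconcile a
    # sanctioned-inventory export (e.g., from a CMDB) against an instance
    # list pulled from a cloud provider's console or billing export.
    # File names and column layouts below are hypothetical.
    import csv

    def load_ids(path, column):
        """Read one column of instance IDs from a CSV export into a set."""
        with open(path, newline="") as f:
            return {row[column].strip() for row in csv.DictReader(f) if row.get(column)}

    sanctioned = load_ids("cmdb_export.csv", "instance_id")            # what IT knows about
    observed = load_ids("cloud_provider_export.csv", "instance_id")    # what is actually running

    rogue = observed - sanctioned   # running, but unknown to IT
    stale = sanctioned - observed   # tracked, but no longer running

    print(f"{len(rogue)} unsanctioned instance(s): {sorted(rogue)}")
    print(f"{len(stale)} stale CMDB record(s): {sorted(stale)}")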

DCD: Your data about virtualization adoption shows VMware at the top, as expected. However, both Microsoft Hyper-V and Oracle VM (with the Sun xVM numbers added in) are not so very far behind. That seems really surprising to me. What’s your take?
Andi Mann: Yes, that was surprising to me too. VMware leading all is not a shock, of course. I have been seeing Hyper-V gaining strength in virtualization too, so I am not too surprised Microsoft is a strong second choice, especially given the massive deployment of Windows and the strong leaning among respondents toward private, on-premise cloud.
However, I am very surprised that Citrix XenServer in particular is behind Oracle VM/Sun xVM (although the gap between them is within the survey’s margin of error). I think Oracle VM and Sun xVM are well behind even other Xen variants as general virtualization platforms. They especially need to grow their limited ecosystem of management support, particularly in essential areas of cloud management like automation and service management, before they are even close to parity with ESX, Hyper-V, or XenServer. Yet with Sun’s stellar pedigree throughout the Internet boom, Oracle’s market dominance in applications, and a formidable stack that runs from hardware all the way up, perhaps it stands to reason that Oracle has a bigger opportunity in cloud than I and many others (including Larry Ellison, at least in public [prior to their recent Sun briefing, anyway -ed.]) previously thought.
DCD: What were the other big shockers for you as you put together the report?

Andi Mann: For me, a few things really stand out. The first is the importance that these respondents put on management. Common wisdom (which frequently is neither) says that cloud is flexible, open, unrestricted, and unencumbered by processes or disciplines like ITIL or ISO or ITSM. However, the reality is the exact opposite, with a majority of organizations actually believing that most management disciplines are very important, with many cited as being even more important to cloud than to traditional IT.

The second big shocker was the impact cloud can have on maturity, with the data showing not just a correlation between maturity and cloud deployment, but even suggesting that cloud deployment actually leads to increased IT operational maturity.
Third is the high number of respondents that cited difficulty or cost of implementation and lack of flexibility and agility as their most critical problems in cloud computing. Again, this runs contrary to the common wisdom that cloud is easy to deploy and flexible to run.
Finally, perhaps the biggest shock is the total beatdown that real enterprises give to the idea of public cloud – with 69% of respondents completely rejecting public cloud options, and only 9% choosing to adopt a purely public cloud approach (compared to 46% adopting a purely private cloud approach), it really stands out as a pariah in the choice of deployment models.
DCD: When customers are looking to select cloud computing technologies, your numbers say that an existing relationship with a vendor is the least important of all selection criteria. So, all bets seem to be off with the cloud, eh? Any insights on why?
Andi Mann: This is not entirely unusual. In fact, we have seen the same result in desktop virtualization. There are a lot of things more important than a single supplier or a relationship with an account manager. Ultimately, these are nice to have, but if you cannot ensure uptime, security, performance, or other absolutely critical values, a supplier relationship will not stop you going out of business. So enterprises are rightly focusing on what is really important. Vendor relationships will come into play, but only after all other requirements are satisfied.
So virtualization and other vendors that have aspirations to be cloud technology providers had best make sure they bring their ‘A’ game – they will not get on the team by reputation alone.
DCD: Many surveys about cloud computing have pointed to security, compliance, technology maturity, and even potential vendor lock-in as barriers to success. Interestingly, yours is one of the few (only?) I’ve seen where “human/political issues” tops the list. From my experience, organizations don’t always realize that is going to be a problem, but it ends up being a huge one, so I think you’re spot on. Any color on this? And did you hear anything about how organizations are currently looking to address this hurdle?
Andi Mann: I think a lot of people, including analysts, can forget that IT is a combination of people, process, and technology. We saw in our research into virtualization on multiple occasions that the human issues were actually the biggest hurdles to success – especially skills, resourcing, and time, but also departmental politics – so we included this in our cloud survey as well.
Especially in new technologies, the technology and process can be difficult, but ultimately it seems the people issues are the most difficult to overcome. Again, you can start to address these issues with controlled ‘skunkworks’ – using deployments in non-critical areas not just to build skills, but also to establish repeatable procedures and to show how to be successful, so you can at least get a head start on overcoming skills, resourcing, and political issues when the time comes to deploy cloud for critical systems and applications.

Thanks for the additional insights and in-depth commentary on the EMA report and survey results, Andi.
If anyone would like the report, but is not an EMA customer, you can purchase the report here.

Monday, January 11, 2010

CA buys Oblicore: contracts & SLAs are pivotal for cloud computing

One of the most interesting aspects of cloud computing is that the conversation about what IT has to deliver shifts to what’s really important: the service you’re trying to provide.

With the cloud, no longer is it sufficient to simply talk about the ins and outs of your technology stack; in fact, the idea is for many of those underlying details to drop down into the mists of irrelevance.

So far, cloud providers have been pretty careful about sticking their necks out, barely offering any sort of SLA that you can take to the bank. The result? "Best efforts" by the cloud vendors to keep things running, but downtime situations similar to what salesforce.com users faced last week.

However, as cloud computing matures, this has to change. And as it does, being able to track and manage the service level obligations you have with providers – internal and external – will be crucial.

What CA’s acquisition of Oblicore could mean for cloud computing

This is one of the reasons that CA’s just-announced acquisition of service level management innovator Oblicore could be so intriguing: the company has set the bar for managing business-relevant SLAs.

The Oblicore technology is best known for its ability to take technical information about what’s going on in your IT environment and correlate that with the business-level information held in your service level contracts. The company’s name is an appropriate summary: service level “obligations” are at the “core” of your business.

The business-level information is a way for Oblicore’s customers – from CIOs to managers of the IT services you’re buying from external vendors – to get a real handle on what they or their service providers are truly delivering. Oblicore has a bunch of integrations with technical third-party tools (including CA products, thanks to our partnership with them in customer engagements, as well as many others from the likes of HP, IBM, BMC, Oracle, SAP, and Microsoft) that can give customers the detailed input needed for comparison. And adding Oblicore to CA’s existing service management solutions should be a nice complement.
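
To make that idea tangible, here’s a minimal sketch – emphatically not Oblicore’s actual implementation – of what “correlating technical data with contract obligations” boils down to: measured service levels get rolled up and compared against the targets and penalty terms written into the contract. The metrics, thresholds, and penalty figures below are illustrative assumptions.

    # A minimal sketch (not Oblicore's implementation) of comparing measured
    # service levels against contracted obligations. Thresholds, metrics, and
    # penalty figures are illustrative assumptions, not real contract terms.
    from dataclasses import dataclass

    @dataclass
    class Obligation:
        service: str
        metric: str
        target: float              # e.g., 99.9 (% uptime) or 500 (ms response time)
        higher_is_better: bool
        penalty_per_breach: float  # hypothetical contract penalty

    def is_met(ob: Obligation, measured: float) -> bool:
        """Check one measured value against its contracted target."""
        return measured >= ob.target if ob.higher_is_better else measured <= ob.target

    contract = [
        Obligation("CRM portal", "monthly uptime %", 99.9, True, 10_000.0),
        Obligation("CRM portal", "avg response ms", 500.0, False, 2_500.0),
    ]
    # In practice these numbers would come from monitoring and event-management tools.
    measurements = {"monthly uptime %": 99.83, "avg response ms": 420.0}

    for ob in contract:
        met = is_met(ob, measurements[ob.metric])
        status = "met" if met else f"BREACHED (penalty ${ob.penalty_per_breach:,.0f})"
        print(f"{ob.service} / {ob.metric}: target {ob.target} -> {status}")

The interesting part, of course, is everything this sketch leaves out: pulling those measurements automatically from the monitoring stack and tying them back to the actual contract language is exactly the hard problem the Oblicore technology addresses.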

More details and roadmaps will become clear as we work with the Oblicore team (all of whom are joining CA, by the way) on integration plans. But having their contract management capabilities strengthens CA’s service management offerings today – and makes a lot of sense going forward as we broaden our cloud computing offerings.

EMA and Gartner on the importance of service management in the cloud

Lisa Erickson-Harris, research director at analyst firm Enterprise Management Associates (EMA), fleshed this out a bit in the press release:

“EMA believes cloud computing trends will further increase the demand for service-based management including service level and service value management. Customers are demanding solutions like Oblicore that can offer business-oriented service management including details related to the service contract, collaboration during the negotiation process, and analytics to represent service delivery results in a dashboard format. Oblicore's strengths are well-suited to bridge the IT/business gap by capitalizing on existing management data, business-driven service definition and analytics to demonstrate service delivery results at each service lifecycle phase.”

This IT/business gap that Lisa mentions was one of the key points Donna Scott and Ronni Colville made in their Gartner Data Center Conference keynote last month in Las Vegas. Their advice for IT? “Until you become a driver in the alignment with the business, you are not going to be critical to the business – nor to the CEO."

“You’ve got to know what you have,” said Colville. “You have to have visibility to your services” in business terms. Scott reiterated the same point: “You have to know what you have and what’s effective” so you can “figure out whether they are the appropriate investments.”

Oblicore’s top-down, business-level approach (as opposed to the bottom-up technical details, which are useful if translated and given context, but can be useless by themselves) can be a big help here. If you want to manage business value, a contract-based view of service levels is an innovative and effective way to start.

For more on these topics, there are some interesting blog entries on service management and cloud computing at the ca.com community site.

Broader implications for CA’s recent cloud-related acquisitions?

I have always thought of CA as a bit of an acquisition machine. That’s certainly been the company’s reputation (and not always to its benefit). However, this is the first CA acquisition I’ve been part of from the inside, and I’m hoping those on the outside will start to see a strategic thoughtfulness in CA’s recent moves, especially in the area of cloud computing.

To recap some of the interesting ones of late, CA has picked up Cassatt, NetQoS, and now Oblicore. Add to this the organic development underway and there’s certainly plenty worth watching as we move through 2010. (What? There’s organic development at CA? To which I answer: surprise! Yes, indeed, there is.) More comments to come as things happen.

The Oblicore acquisition press release from CA can be found here.

Wednesday, January 6, 2010

Watching cloud computing trends for 2010: the vision, customer reality, & downstream impact

We’ve crossed into a new decade (or not, if you’re a numbers purist), and it seems to be an appropriate time for a little reflection, and maybe a chance to get some feel for where things are headed in 2010 for IT operations, especially as they look at what cloud computing is going to mean for them.

Last year, I rattled off a Top 10 list of Top 10 lists. This year, I, for one, am suffering from a bit of Year-End Top 10 Prediction Fatigue, so I’ll hold off on that for the moment. Instead, I thought I’d check the stats from this Data Center Dialog blog as a way to get a bead on things that people have been interested in here. That way I’m not just pulling commentary out of thin air. Plus, it’s more scientific that way, right?

So, to mark Data Center Dialog’s a-bit-more-than-one-year anniversary, here's a look back at the most popular posts over the past 6 months. My guess is it gives some indication of what people will be considering for the initial months of 2010 as well.


Beyond definitions: looking for vision…and then practical cloud considerations

It’s probably no surprise that the most popular new post here was also the one that explained a bit about the biggest news story to involve us (now former) Cassatt folks: the acquisition of the Cassatt assets and expertise by CA in June. I provided a bit of commentary on the acquisition just before taking an extended few weeks off in Berlin prior to starting my current gig at CA. It’s not too far-fetched to predict that in the months ahead there will be lots more details to talk about regarding what we’ve all been doing at CA since then. (That’s an easy prediction, for sure.)
Another popular post was “Fumble! What not to do at a cloud computing conference.” The endlessly repeated ploy of starting panel sessions at cloud computing events with the question, “So what is cloud computing?” finally took its toll on me in November. The result was a bit of a (popular) rant about why the people working on cloud computing need to move on to much more useful questions. At least that’s the only way I’m going to be able to sit through another cloud computing conference.
Judging by the numbers, I think you’re with me.

Not that talking about definitions was bad. 2009 was a year in which the definitions of cloud computing (public/private, internal/external, hybrid, and the like) came into focus as the discussion evolved throughout the year. To prove the point, the most popular entry of the last 6 months was the same entry that was the most popular of the first 6 months of the year: Are internal clouds bogus? That post was followed closely by one that described the shifts in the discussion toward hybrid clouds – and the speed with which the combination of public and private cloud computing was likely to become a reality (answer: it’ll take a bit; there are some missing pieces still). In fact, my highlight blog entry that tracked the evolution of the private cloud from the front row seat I’d had was also a favorite.

So, yes, there was a place for the definitional conversation. But real-world information about what customers are doing now was in great demand (and still is, say the stats). This pragmatism is heartening and it propelled reader interest in the entry I did on the 451 Group’s cloud computing customer panel at their ICE conference, alongside a post from earlier in the year listing actual customer questions that our field team had been getting about private clouds. There’s nothing quite like getting things from the horse’s mouth.

Here’s something that was perhaps part of that same trend about getting in better sync with reality: measurement of what is actually going on in data centers (even when it’s showing a trend toward upholding long-established patterns of inefficiency) was also of interest. I saw that as good news, especially since we also had lots of interest in our post from earlier in 2009 discussing the fact that many data center managers don’t actually know what their servers are doing. The first step to a solution is understanding what the problem is, right?

Notable Data Center Dialog interviews: Steve Hamm of BusinessWeek, Bill Coleman, and Mark Bramfitt

Some of the Data Center Dialog interviews (a feature I started at the beginning of 2009 with Al Gillen of IDC) were a few of the most popular posts in the second half of the year. The most read interview? It was one in which I turned the tables on a member of the so-called mainstream media and interviewed him: BusinessWeek’s Steve Hamm had some interesting insights on Silicon Valley in general. It didn’t hurt that he linked to the interview from his blog, too, of course. Interestingly, he has now done what many journalists are doing out of necessity -- changed careers. Steve noted via Twitter a few weeks back that he’s now at IBM.

Also interesting to our readers were the conversations I published about two folks well-known in the world of IT management talking about their Next Big Things. Bill Coleman, my former CEO, gave his take on where cloud computing is now (just Version 1.0, he said) and what he’s working on after Cassatt. Mark Bramfitt talked about his move from a leading role in PG&E’s data center energy efficiency programs to private industry in a two-part interview just published at the end of December. Both Bill and Mark included some candid thoughts on what’s gone well and not so well in their previous roles.

The longer-term implications of cloud computing
We also saw interest in some of the posts pondering what cloud computing might mean to the industry as a whole. Will it mean less will be spent on IT, or, in fact, help accelerate growth? And what about the oft-noted bogeyman of automation? Will the cloud finally mean that automation takes center stage without being cast as the human-hating Skynet from the Terminator flicks? That topic generated some interest for sure.

And, of course, Twitter…
And, as you might expect, our readers were in alignment with the rest of the industry (world?) in its interest in Twitter in the past few months. I used VMworld as a case study of 7 ways that IT conferences can be improved by using Twitter – and 2 ways it makes them worse. That one seemed especially popular with folks who found us via – you guessed it – Twitter.

So what does this all mean for 2010? I have no idea. But I’d bet a couple of these trends will continue to be important. The discussion around how private, public, and hybrid cloud computing will work will certainly continue. I’m expecting, however, that it will become more focused on the day-to-day practicalities that end-user IT departments need to know.

I’ll do my best to make sure I continue to interview folks of interest in the industry with useful perspectives that will benefit IT operations and those doing big thinking about the many ins and outs of cloud computing.

And, Data Center Dialog will continue to be a place to get a pulse on topics at the forefront of the way data centers – and IT in general -- are being run and managed. As customers continue their focus on cloud computing, this blog will too. Thanks for being part of the dialog.

Monday, December 28, 2009

PG&E’s Bramfitt: data centers can be efficient and sustainable, but we must think beyond our current horizons

Before heading off for a little vacation, I posted the first part of a two-part interview with Pacific Gas & Electric’s Mark Bramfitt. Mark is best known in data center circles (you know, the kind of people that hang around this site and others like it) as the face of the Bay Area utility’s efforts to drive data center energy efficiency through innovative incentive programs. Now that I’m back, I’m posting part 2 of that discussion.

In the first part of the interview, Mark talked about what’s worked well and what hasn’t in PG&E's data center energy efficiency efforts, the impact of the recession on green IT projects, and how cloud computing is impacting the story.

After the interview appeared, Forrester analyst Doug Washburn wondered on Twitter if PG&E might kill the incentive programs Mark had previously been guiding. The thought is a natural one given how integral Mark has been to the program. From my interview, it didn’t sound so far-fetched: Mark himself thought cost pressures might cause PG&E to “de-emphasize…some of the industry leadership activities undertaken by PG&E…we may not be able to afford to develop new programs and services if they won’t deliver savings.”

In the final part of the interview, posted below, Mark gets a little philosophical about energy efficiency metrics, highlights what’s really holding back data center efficiency work, and doles out a final bit of advice about creating "inherently efficient" IT infrastructure as he departs PG&E.

Jay Fry, Data Center Dialog: Mark, we’ve talked previously about the fact that a lot of the data center efficiency problem comes down to split incentives – often IT doesn’t pay or even see the power bill and the facilities guy doesn’t have control over the servers that are creating the problem. In addition, facilities is usually disconnected from the business reason for running those servers. How serious is this problem right now? What effective or creative ways have you seen for dealing with it?

Mark Bramfitt, PG&E: While I think we can all cite the split incentive issue as a problem, it’s not the only obstacle to success and it gets perhaps a bit too much air time. As much as I’d like to say that everyone’s focus on energy efficiency is making the split incentive obstacle moot, our experience is that there are only two ways that we get past that.

The first is when the IT management side simply has to get more done with less spend, leading them to think about virtualizing workloads. The second is when a facility runs out of capacity, throwing both IT and facility managers into the same boat, with energy efficiency measures often saving the day.

DCD: Many of the industry analyst groups (the 451 Group, Forrester, Gartner, etc.) have eco-efficient IT practices or focus areas now, a definite change from a few years ago. Is this a sign of progress and how can industry analyst groups be helpful in this process?


Mark Bramfitt: I can’t argue that the focus on energy-efficient IT is anything but a good thing, but I’ve identified a big resource gap that hampers the ability of utilities to drive efficiency programs. We rely on engineering consultants who can accurately calculate the energy savings from implementing facility and IT improvements, and even here in the Bay Area, we have a hard time finding firms that have this competency – especially on the IT equipment side.

In my discussions with utilities across the U.S., this is probably the single biggest barrier to program adoption – they can’t find firms who can do the calculations, or resources to appropriately evaluate and review them.

So, I think there’s a real opportunity for IT service providers – companies that both sell solutions and services – to step into this market. I don’t know that analyst firms can do much about that.

DCD: I think it’s fair to say that measurement of the power and ecological impact of data centers has started and really does matter. In the (pre-CA) survey Cassatt did at the beginning of this year on the topic, we found that there were definitely areas of progress, but that old data center habits die hard. Plus, arguments among the various industry and governmental groups like the Green Grid and Energy Star over how and what to measure (PUE, DCiE, EUE) probably don’t help. How do you think people should approach measurement?


Mark Bramfitt: We’ve delivered a consistent message to both customers and to vendors of metering and monitoring systems that we think quantifying energy use can only have a positive impact on driving people to manage their data centers better. Metering and monitoring systems lead people to make simple changes, and can directly measure energy savings in support of utility incentive programs.

We also like that some systems are moving beyond just measurement into control of facility and IT equipment, and to the extent that they can do so, we can provide incentive funding to support implementation.

The philosophical distinctions being made around what metrics are best are understandable – the ones we have now place an emphasis on driving down the “overhead” energy use of cooling and power conditioning equipment, and have nothing to say about IT power load. I believe that the industry should focus on an IT equipment “utilization index” rather than holding out for the ideal efficiency metric, which is probably not conceivable given all of the different IT workloads extant in the marketplace.
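
[A quick aside from me on the arithmetic behind those metric names. PUE and DCiE in the sketch below follow the Green Grid definitions; the “utilization index” is just one plausible reading of Mark’s idea – an average of measured server utilization – not a formula he is prescribing, and the numbers are made up for illustration. -ed.]

    # Worked numbers for the facility-level metrics mentioned above, plus one
    # way a simple "utilization index" could be computed. PUE and DCiE follow
    # the Green Grid definitions; the utilization index here is an illustrative
    # average of per-server CPU utilization, not an industry-standard formula.
    total_facility_kw = 1500.0   # power entering the data center
    it_equipment_kw = 900.0      # power reaching IT equipment (servers, storage, network)

    pue = total_facility_kw / it_equipment_kw            # Power Usage Effectiveness; lower is better
    dcie = (it_equipment_kw / total_facility_kw) * 100   # Data Center infrastructure Efficiency (%); higher is better

    cpu_utilization = [0.08, 0.12, 0.65, 0.04, 0.22]     # sampled per-server utilization (hypothetical)
    utilization_index = sum(cpu_utilization) / len(cpu_utilization)

    print(f"PUE:  {pue:.2f}")                             # ~1.67
    print(f"DCiE: {dcie:.1f}%")                           # ~60%
    print(f"Utilization index: {utilization_index:.0%}")  # ~22% -- what the IT load actually does

Note how the first two numbers say nothing about whether those 900 kW of servers are doing useful work, which is exactly Mark’s point about needing a utilization-oriented measure.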

DCD: One of the major initiatives you had was working with other utilities to create a coalition pushing for incentives similar to those PG&E has been offering. I’ve heard Data Center Pulse is helping work on that, too. How would you characterize where we are with this process currently? What’s next?

Mark Bramfitt: The PG&E-sponsored and -led Utility IT Energy Efficiency Coalition [also mentioned in part 1 of Mark’s interview] now has almost 50 members, and I would say it has been a success in its core mission – to provide a forum for utilities to share program models and to discuss opportunities in this market space.

I think it’s time for the Coalition, in whatever form it takes in the future, to expand into offering the IT industry a view of which utilities offer programs and how to engage with them, as well as a place for IT and other companies to list their competencies.

I’ll be frank, though, in saying that I don’t know whether PG&E will continue to lead this effort, or if we need to think about another way to accomplish this work. I’ve had conversations with a lot of players to see how to maintain the existing effort as well as to extend it into new areas.

DCD: Watching this all from PG&E is certainly different from the perspective of vendors, IT departments, or facilities folks. Any comments on seeing this process from the vantage point you’ve had?

Mark Bramfitt: Utilities primarily look at the data center market as a load growth challenge: how can we provide new energy delivery capacity to a segment that is projected to double every 5 years? There’s already immense national and global competition for locations where 20 or even 100 MW loads can be accommodated, and where customers want to build out facilities in months, not years.

PG&E’s answer to that is to actively work with customers to improve energy efficiency, extending our ability to accommodate new load growth without resorting to energy supply and delivery capacity that is expensive to build and not our best choice from an environmental sustainability perspective.

My larger view is that IT can deliver tremendous environmental benefits as it affects broad swaths of our activities – improving delivery systems, for example, and in my line of work enabling “smart grid” technologies that can improve utility operation and efficiency. But to get there, we need to have IT infrastructure that is inherently efficient and thereby sustainable, and we have a huge opportunity space to make that happen.

DCD: Do you have any advice you’d care to leave everyone with?

Mark Bramfitt: Advice? Every major achievement I’ve seen in this space has been due to people expanding their vision and horizons. It’s IT managers taking responsibility for energy costs even if they don’t roll up in their budget. It’s IT companies supporting efficiency measures that might in some ways be at cross-purposes with their primary business objectives. And it’s utilities that know that their mission can’t just be about delivering energy; they need to support customers and communities in new ways.



Thanks, Mark, for taking the time for the interview, and best of luck with the new venture.

Wednesday, December 16, 2009

As Bramfitt departs PG&E, where will the new focus for data center energy efficiency efforts be?

If you were in the audience at the Silicon Valley Leadership Group’s Data Center Energy Efficiency Summit earlier this year, you were probably there (among other things) to hear Mark Bramfitt from Pacific Gas & Electric (PG&E). Mark has been the key figure in the Bay Area utility’s efforts to drive improvements in how data centers are run to cut energy costs for the data center owners and to reduce their burgeoning demand for power.

But, Mark had a surprise for his audience. During his presentation, he announced he was leaving PG&E.

“The reaction from the crowd was impressive…and for good reason,” said John Sheputis, CEO of Fortune Data Centers, in a story by Matt Stansberry at SearchDataCenter. “Mark is an excellent speaker, a very well-known entity in the Valley, and among the most outspoken people I know of regarding the broader engagement opportunities between data centers and electricity providers,” Sheputis said. “No one has done more to fund efficiency programs and award high tech consumers for efficient behavior.”

Mark has stayed on with PG&E for the months since then to help with the transition. Before he moves on to his Next Big Thing at the end of 2009, I thought I’d ask him for his thoughts on a few relevant topics. In the first part of the interview that I’m posting today, Mark talks about what’s worked well and what hasn’t in PG&E's data center energy efficiency efforts, the impact of the recession on green IT projects, and how cloud computing is impacting the story.

Jay Fry, Data Center Dialog: Mark, you’ve become synonymous with PG&E’s data center efficiency work and if the reaction to the announcement that you’ll be leaving that role at the SVLG event is any indication, you’ll be missed. Can you give some perspective on how things have changed in this area during your time on the project at PG&E?

Mark Bramfitt, PG&E: First, I truly appreciate the opportunity to offer some clarity around PG&E’s continued focus on this market, as well as my own plans, as I feel I’ve done something of a disservice to both the IT and utility industry nexus by not being more forthright regarding our plans.

My team and I have treated the data center and IT energy efficiency market as a start-up within PG&E’s much larger program portfolio, and we’ve seen a great growth curve over the past four years – doubling our accomplishments in 2008 compared to 2007, for example. We’ve built an industry-leading portfolio of programs and services, and I expect PG&E will continue to see great engagement from our customers in this space.

That being said, utilities in California are under tremendous pressure to deliver energy efficiency as cost effectively as possible, so some of the industry leadership activities undertaken by PG&E may have to be de-emphasized, and we may not be able to afford to develop new programs and services if they won’t deliver savings.

My personal goal is to see 20 or more utilities follow PG&E’s lead by offering comprehensive efficiency programs for data centers and IT, and I think I can best achieve that through targeted consulting support. I’ve been supporting utilities essentially in my “spare” time, in part through the Utility IT Energy Efficiency Coalition, but there are significant challenges to address in the industry, and I think my full-time focus as a consultant will lead to broader success.

DCD: Why are you leaving PG&E, why now, and what will you be working on?

Mark Bramfitt: It may sound trite, or worse, arrogant, but I want to amplify the accomplishments we’ve made at PG&E over the past few years, using my knowledge and skills to drive better engagement between the utility and IT industries in the coming years. PG&E now has a mature program model that can be executed well in Northern California, so I’d like to spend my time on the bigger challenge of driving nationwide activities that will hopefully yield big results.

DCD: You had some big wins at some big Silicon Valley data centers: NetApp being one of those. Can you talk about what came together to make some of those possible? What should other organizations focus on to get them closer to being able to improve their data center efficiency as well?

Mark Bramfitt: Our “big hit” projects have all been new construction engagements where PG&E provides financial incentives to help pay for the incremental costs of energy efficiency improvements – for measures like air- or water-side economizers, premium-efficiency power conditioning and delivery equipment, and air flow isolation measures.

We certainly think our financial support is a big factor in making these projects work, but my project managers will tell you that the commitment of the project developer/owner is key. The design team has to want to work with the technical resource team PG&E brings to the table, and be open to spending more capital to realize expense savings down the road.

DCD: You made some comments onstage at the Gartner Data Center Conference last year, saying that “It’s been slow going.” Why do you think that’s been, and what was most disappointing to you about this effort?

Mark Bramfitt: I don’t have any real disappointments with how things have gone – we’re just very focused on being as successful as we can possibly be, and we are introspective in thinking about what we could do better.

I’d characterize it this way: we’ve designed ways to support on the order of 25 energy efficiency technologies and measures, absolutely leading the utility industry. We’ve reached out to dozens of VARs and system integrators, all of the major IT firms, every industry group and customer association, made hundreds of presentations, delivered free training and education programs, the list goes on.

What has slowed us down, I think, is that the IT industry and IT managers had essentially no experience with utility efficiency programs three years ago. It simply has taken us far longer than we anticipated to get the utility partnership message out there to the IT community.

DCD: The green IT hype was pretty impressive in late 2007 and early 2008. Then the economic crisis really hit. How has the economic downturn affected interest in energy efficiency projects? Did it get lost in the crisis? My personal take is that it certainly didn’t get as much attention as it would have otherwise. Maybe the recession caused companies to be more practical about energy efficiency topics, but I’m not sure about that. What are your thoughts?

Mark Bramfitt: I don’t see that the message has been lost, but certainly the economy has affected market activity.

PG&E is not seeing the level of new data center construction that we had in ’07 and ’08, but the collocation community tells me demand is exceeding supply by 3-to-1. They just can’t get financing to build new facilities.

On the retrofit side, we’re seeing interest in air flow management measures as the hot spot, perhaps because customers are getting the message that the returns are great, and it is an easy way to extend the life and capacity of existing facilities.

DCD: The other topic that’s taken a big share of the IT and facilities spotlight in the last year has obviously been cloud computing. How do you see the efficiency and cloud computing conversations playing together? Is the cloud discussion helping or hindering the efficiency discussion inside organizations in your opinion?

Mark Bramfitt: I’ve talked to some thought leaders on cloud computing and many seem to think that highlighting the potential energy efficiency advantages of shared services has merit. But with regard to our program delivery, the intersection has really been about how to serve collocation owners and tenants, rather than on the broader topic of migration to cloud services.



Be sure to come back for Part 2 of the interview. We'll cover a few other topics of note, including Mark’s thoughts on the philosophical differences over measurement approaches, the single biggest barrier to data center efficiency program adoption, and even a little bit of parting advice from Mark as he gets ready to leave PG&E.

Monday, December 14, 2009

IT metrics: a snore? Not when cloud computing is involved

Amazon’s recent announcement about the availability of spot pricing for EC2 instances got me thinking a bit about the role measurement is going to play in cloud computing. In fact, I sat in on a few sessions at the Gartner Data Center Conference earlier this month focused exclusively on metrics.

A snore? Actually not.

The increasingly dynamic world of cloud computing is going to require (in fact, completely rely upon) a robust and trustworthy set of metrics for how things are going. So much so, in fact, that I’d expect a bit of a renaissance in this space.


And while Amazon’s market-leading moves point to all sorts of requirements to measure costs, availability, and other aspects of their EC2 offering, many IT shops are thinking much more locally: how to get going on their own private compute clouds. IT metrics & measurement are going to get more and more important for those IT shops, too. Getting the right metrics working in a way that helps you make IT decisions is a really important step. Sometimes you’ll be impressed with all sorts of useful data about your infrastructure. Just as often, you aren’t going to like what those metrics report back.
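
As a quick illustration of the kind of measurement the spot-pricing model invites, here’s a back-of-the-envelope sketch that blends on-demand and spot hours into a single rate you could hold up against the cost of running the same load internally. The rates and hour counts are made-up placeholders, not actual EC2 prices.

    # A back-of-the-envelope sketch of blended cloud cost tracking. All rates
    # and instance-hour counts are hypothetical placeholders, not real prices.
    usage = [
        ("on-demand", 1200, 0.34),   # pool, instance-hours, assumed $/hour
        ("spot",      2600, 0.12),
    ]

    total_hours = sum(hours for _, hours, _ in usage)
    total_cost = sum(hours * rate for _, hours, rate in usage)
    blended_rate = total_cost / total_hours

    for pool, hours, rate in usage:
        print(f"{pool:>9}: {hours:>5} hrs @ ${rate:.2f}/hr = ${hours * rate:,.2f}")
    print(f"  blended: ${blended_rate:.3f}/hr across {total_hours} hrs (${total_cost:,.2f} total)")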

Here are a few additional thoughts about using metrics to help manage IT, using the Gartner metrics sessions from Vegas as a jumping off point:

An interesting conundrum for metrics: will they help align IT & business – or make things worse?

Gartner analyst Milind Govekar spent an entire session at Gartner’s conference (“Improving Your IT Business through Metrics”) talking about ways to manage IT, and ways to use metrics to that end. Not that this is new. In fact, one of the sessions that Gartner seems to run every year is something about how misaligned IT and the business are…or how best to get them aligned…or how they are moving closer. But I heard a few interesting wrinkles this time around on how measurement should/could be used:

· Without a certain level of IT operational maturity, investment in infrastructure – and measurement of that infrastructure -- is not going to yield great results. Govekar pointed out that the majority of the customers they talk to are about halfway through a journey toward more dynamic, adaptable IT. And that’s about the point at which organizations can see a measurable pay-off from the investments. “That’s where you [the IT department] start to make a difference as a trusted service provider,” Govekar said. So, that also means you should expect the journey up to that point to have some bumps. Several analysts noted a need to improve process maturity to make true IT operations progress. And, of course, Gartner conveniently gives you a way to measure where your particular shop is on that journey.

· Metrics can and should be important, but be careful. Choose wisely. Govekar put it very succinctly in his session when he said, “Be careful with metrics: they drive people to do good things; they drive people to do bad things.” As we (hopefully) learned from Wall Street last year, people optimize around what they are measured and compensated for. Make sure it’s the right thing.

· Organizational patience is key. Govekar listed senior management commitment, persistence, and patience as very important to efforts to use metrics to drive IT change. An organization without an interest in continual improvement is not likely to be able to grapple with what the metrics will reveal – and the cultural changes that are necessary to get to that more dynamic IT they think they want.

· You have to be working at a level that business people will care about in order to have any chance of getting the ear of the CEO. The Gartner conference keynote from Donna Scott and Ronni Colville focused on this practical truth: if you can’t talk to your business folks about ways that IT can help them improve what they’re trying to do, you’re going to have a difficult path to getting your job done. And, your CIO is not likely to be seen as a full partner in the business. While “technology may be used to help innovation,” said Scott, “people and processes are the backbone of innovation, not technology.” Good point. All those virtual machines may sound cool, but not necessarily to everyone. And whatever stats, facts, and figures you do end up with, don’t expect them to be as meaningful to the business set as they might be to the IT folks.

In one of the more interesting sessions about metrics, however, Gartner began to paint a picture of some of the innovative things you can do with them

Gartner analyst Joseph Baylock had some intriguing thoughts in his keynote about ways to use patterns from your infrastructure & operations data to improve IT’s sensitivity to the business. And, if the fancy brochures I found in the lobby are any indication, it sounds like Gartner is going to be spending much more time and effort thinking this through.

Baylock talked about creating an IT operations model that seeks out patterns, applies them to models, and then adapts accordingly. What could this mean for IT? Probably nothing quite yet. However, for companies with performance-driven cultures, Gartner saw this as a way to move from using lagging indicators for managing IT (and, hence, your business) to using leading indicators.

So, not only will you want to be able to measure what you're using at any given time (from Amazon or whomever else externally – and from your own internal infrastructure), but perhaps you’ll want to start to get smart about sifting through the mounds of data to make good choices.

Sounds to me like it’s worth keeping an eye (or two) open -- and on this topic -- after all.

Tuesday, December 8, 2009

Cloud, the economy, and short-term thinking: highlights from Gartner's Data Center Conference

There was a lot to take in at the Gartner Data Center Conference this year, and unfortunately I had a very abbreviated amount of time actually on the ground in Vegas to do it. What was the most notable thing? In my opinion, the headline was how much Gartner has strengthened its feelings about cloud computing in 12 months.

And not just any 12 months, mind you, but 12 months darkened by the worst economic downturn of our lives. Of course, they did talk plenty about the impact the economy has had, especially on the IT operations psyche and approach, but cloud computing won out, hands down.
From what I gathered, Gartner now sees cloud computing (especially private clouds) and the real-time infrastructures that support them as something big customers should take a look at.

This certainly isn’t an earth-shattering change. Last year at this time, Tom Bittman’s keynote was about how the future of IT looked a lot like a private cloud. This year, the keynotes and sessions throughout the week definitely gave the impression that cloud computing has evolved and improved this past year, and Gartner has moved its thinking as well. The cloud computing discussion felt more mainstream, more central to many of the presentations. It certainly played a starring role in more of them.
Private clouds received quite a bit of positive airtime, and a near-unanimous vote of confidence from Gartner analysts – something that I didn’t feel was true at last year’s event. This year, Bittman was one of a larger chorus, rather than one of the few vocal Gartner private cloud advocates.

Dave Cappuccio’s keynote on “Infrastructure & Operations: Charting a Course for the Coming Decade” emphasized Gartner’s enthusiasm around private clouds with this prediction: of the money being spent on cloud computing over the next 3-5 years, “we think private cloud services will be 70-80% of the investment.” That sounds pretty serious to me.

Cloud computing focus soars, even though budgets don’t

The audience in the room for the opening keynotes (polled as usual with handheld devices they scattered throughout the room) considered cloud computing to be the 3rd biggest data center challenge for 2010 – in a world in which they also expect flat IT operations budgets. The No. 1 challenge was “space, power, & cooling” and No. 2 was that perennial issue, “linking IT & business.” In fact, 6% of people in the room put cloud computing as their top funded item for next year (compared with 9% for virtualization). That’s impressive.
Gartner believes the bad economy is forcing short-term thinking on infrastructure & operations projects
But don't think that just because cloud computing progressed, there's nothing dramatically different from last year. There is: the impact of the economy on the behavior of IT operations.
One of the most depressing quotes of the conference was not from Jeffrey Garten of the Yale School of Management (though I thought it might be). He actually had a pretty interesting talk -- in plain English -- about what has gone well and what has gone badly in the attempts to stave off a full global financial meltdown that went into high gear at the end of 2008.
Instead, it was how Donna Scott and Ronni Colville characterized IT’s response.
Is the best offense a good defense? Really?

Because of the bad economy, according to Scott and Colville, any talk about data center or infrastructure “transformation” is right out the window. Instead, they said, the kinder, gentler term “restructuring” is in vogue. I guess that descriptor implies short-term ROI, which is at least pragmatic, since projects with long-term pay-outs are off the table completely. Organizations, said Scott, are instead switching to fast (OK, rushed) IT restructuring moves now.

So, hooray for the focus on ROI, but the shorter time horizon isn’t necessarily good news. It means less thoughtful analysis and planning, and a focus only on what can be done immediately. The unfortunate result? “A lot of optimization decisions are made in silos,” Scott said, when in fact sharing those efforts (things like moving to shared IT services), could have a much broader impact. Gee, thanks, bad economy.

In fact, many of the comments I heard from analysts gave the distinct impression that IT is (and should be) on the defensive. “Promises of IT transformation,” said Milind Govekar during his talk on IT metrics, “are taking a back seat to immediate goals of cost optimization, increasing quality or reducing risk.” Govekar noted that, in fact, “cloud hype” is a part of the problem; it “threatens IT budgets and control.”

Comments like these from Govekar and others felt a bit like the analysts were hearing (and in some cases recommending) that IT should circle the wagons, implying that cloud computing was threatening the status quo in IT. Well, if all of its promises come true, it certainly could.
But not if IT operations makes the first move. How? I liked the bit of advice I saw come from Carl Claunch’s keynote: pilot everything. Private clouds, public clouds, hybrid clouds. If you want to know how to do it, the only way to learn is to get started.

For the record, I seem to recall recounting some very similar advice from last year’s conference. Nevertheless, from what I saw and heard this year, Gartner’s research and the evidence from attendees show increased interest and adoption in cloud computing and real-time infrastructures, despite hiccups, bumps, and bruises from the Great Recession.

That in and of itself is impressive.