Hi, Craig Vosburgh here. Starting today, I'll be one of the authors contributing to the Cassatt Data Center Dialog blog. Given my role here at Cassatt as Chief Engineer, my posts will be on the technical side of things, with an emphasis on real-world implications and the occasional peek into how we do what we do. Well, with the intro out of the way, on to today's post...
In my 20+ years in the tech sector I haven't seen an emerging technology attract as many buzzwords as what is now being termed "Cloud Computing." Normally, I'd want to stay away from using the term du jour, as the past few years have shown that if you wait just a few days someone will likely create a new name to further muddy the waters. In this case, however, I think the industry may finally have found the one that'll stick for the transformations underway at companies around the globe as they re-invent their computing infrastructures and IT processes to better serve their needs.
Let's start by grounding ourselves in some of the terms being used so we're all on the same page before we begin this discussion (or at least closer to the same page). If you've been around the space for any time, you've likely heard people throw around terms like Autonomic Computing, Adaptive Computing, On Demand Computing, Virtualization, Utility Computing and the newcomers to the stage, Cloud Computing and Internal or Private Clouds. All of these terms are in one way or another trying to describe a transformation of a company's IT infrastructure from today's rigid stovepipes to one that efficiently and dynamically addresses the company's evolving IT needs.
Over the past few years, many people have begun to recognize that they can't continue to manage their IT infrastructure with 10- to 20-year-old practices as those practices have resulted in too much sprawl, too much complexity, too much management cost, too much power usage and too little utilization.
As we move forward, we have to figure out how to unlock the available IT capacity (most companies believe they are running at 10-20% utilization on their existing infrastructure) and leverage that excess capacity to increase the company's agility in reacting to the ever-changing business climate. As one of my co-workers, Steve Oberlin, points out, there is only so much you can save by optimizing efficiency within IT, since IT is usually a fixed percentage of a company's overall budget. To the extent companies look only to increase IT efficiency, they limit the benefit to their bottom line.
If, instead, companies focus not only on increasing efficiency but also on increasing their internal IT's ability to respond to new company products with their existing IT investment, they will allow for growth of the top line at a much faster pace. I believe that over the next three to five years Internal Cloud Computing (or Private Clouds as Gartner refers to them) will be the vehicle to allow companies to unlock their IT infrastructure to enable (rather than hinder) their new business ventures.
Now, before we jump into just the efficiency/agility topic (more on that in an upcoming post), let's broaden the discussion a bit to include all of the other "-ilities" we need to consider as a part of a Cloud Computing solution. Off the top of my head some of the big ones that come to mind are: Scalability, Availability, Recoverability, Usability, Interoperability, Extensibility, Swizzleability (more on what I mean by that later), Affordability (as in minimizing the huge amount of money spent to purchase, maintain and power all of this IT infrastructure) and the already discussed Agility.
Over the next few months I'm going to spend time talking about each one of these "-ilities" and how Cloud Computing can help address problems in each specific area. In addition, I'm going to talk about some real-world problems and how they have been solved today by our currently available Cassatt Active Response product line (turns out that here at Cassatt we've been thinking about and working on this transformation for five years now and we have some pretty cool stuff to show for it. I think it's time to show it off).
I hope you'll come back and check in on me and my musings over the next few months as I explore this thing called Cloud Computing.
Tuesday, December 16, 2008
Killing comatose servers: OK, but how?
Posted by Jay Fry at 3:19 PM
One of the best sources of data center energy efficiency guidance available today is the Uptime Institute. Not only do they run some really focused, useful events on the topic, but their fearless leader, Ken Brill, is very visible and very direct with his recommendations. His recent article in Forbes took on one of those dirty little secrets in IT: there are a lot of servers in your data center doing absolutely nothing.
Given how serious a problem the current growth rate of data center energy usage will become, Ken and Uptime have given serious thought to how to curb it. Topping that list was the directive that served as the title of his article: "Kill comatose computers."
Call them orphan servers, comatose servers, idle servers, or whatever, Brill calls them "corporate enemy No. 1. Unless you have a rigorous program of removing obsolete servers at the end of their lifecycle," he writes, "it is very likely that between 15% and 30% of the equipment running in your data center is comatose. It consumes electricity without doing any computing."
Yikes. Numbers tossed around by Paul McGuckin at the recent Gartner Data Center Conference are similarly high. The solution? Brill has a simple answer: "This dead equipment needs to be unplugged and removed."
No arguments so far from me. In fact, doing work like this is one of the steps we recommend toward revamping and improving how you run your data center. The more intriguing question is one Ken also asks: why hasn’t this already happened?
The answer, unfortunately, is that most shops have worked long and hard on the steps for standing up servers, installing new components, and the like, probably because adding things to the data center is always done with some sort of time or business pressure. People are watching and they want their stuff up now. Rarely is someone breathing down your neck to unplug something. In the frenetic everyday life of an IT ops person, the decommissioning bit is the part that can wait while you handle the urgent fire of the day.
The problem is that after today's crisis comes tomorrow's. And though the orphan servers are using up power to keep them running and air conditioning to keep them cool, removing them isn't a priority. But as more and more data centers start to hit the wall for power capacity, that's going to have to change.
So, what do you do? As Ken Brill points out in Forbes, "after weeks or months pass and employees turn over, the details of what can be removed will be forgotten, and it becomes a major research project to identify what is not needed."
Unfortunately, identifying orphan servers is not something that will take care of itself. Here are some of the things we've seen customers focus on to help solve this problem (a hedged sketch of the sifting step follows the list):
- Enable some sort of detailed monitoring on your servers
- Determine what time period is appropriate to watch for changes, based on your business and what the servers are likely to be doing
- Watch usage, users, processes, and other statistics that will be helpful in making decisions later
- Sift through the mountains of data you collect with someone who can translate it into useful information
- Engage the end users in the process
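To make the "sift through the mountains of data" step a bit more concrete, here's a minimal sketch of the kind of logic involved. It is purely illustrative -- the thresholds, field names, and data shape are my own assumptions, not Cassatt product behavior:

```python
# Hypothetical illustration: flag servers that look comatose based on the
# monitoring samples collected above. Thresholds and field names are assumptions.
from statistics import mean

# Each server maps to a list of samples gathered over the observation window.
samples = {
    "web-042": [{"cpu_pct": 1.2, "logins": 0, "app_procs": 0},
                {"cpu_pct": 0.8, "logins": 0, "app_procs": 0}],
    "db-007":  [{"cpu_pct": 35.0, "logins": 2, "app_procs": 12},
                {"cpu_pct": 28.5, "logins": 1, "app_procs": 11}],
}

CPU_IDLE_PCT = 5.0      # assumed: sustained average CPU below this looks idle
MIN_OBSERVATIONS = 2    # assumed: don't judge a server on too little data

def comatose_candidates(samples):
    """Return servers whose whole observation window shows no real activity."""
    candidates = []
    for host, history in samples.items():
        if len(history) < MIN_OBSERVATIONS:
            continue  # not enough data to decide yet
        avg_cpu = mean(s["cpu_pct"] for s in history)
        never_used = all(s["logins"] == 0 and s["app_procs"] == 0 for s in history)
        if avg_cpu < CPU_IDLE_PCT and never_used:
            candidates.append(host)
    return candidates

print(comatose_candidates(samples))  # -> ['web-042']
```

The last bullet is still the important one: a list like this is a set of candidates to take to the application owners, not a list of things to unplug on sight.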
We've found that organizations sometimes want to do these steps themselves, sometimes they don't. When customers ask Cassatt to help with these steps, it's often because they are in need of the expertise or tested tools and processes for finding out what their servers are doing. It's something they often don't have internally.
The other thing we generally bring to the process is experience we've had working with some very large customers. Through our recently announced Cassatt Active Profiling Service, we've helped customers identify orphan servers, recommended candidates for virtualization, found candidates for power management, and located other servers that they could start to use as a free pool of resources to support a move toward setting up a sort of "internal cloud" architecture.
I guess that's the good news: with the data you get from a project like this, you can start to really make some significant changes to the way you manage your IT infrastructure. So, not only can you follow Ken Brill's advice and begin to kill off those comatose servers and save yourself a great deal of power, but you can also arm yourself with some unexpectedly useful information.
For example, if you know what your servers are doing (and not doing) at different times of the day, the month, and the quarter, you can use that information to start to set up some automation to manage your infrastructure based on those profiles. You can set up shared services to allocate or pull back servers for whichever applications have the highest priority at any given time, based on the policies you define.
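As a rough sketch of what such a profile could look like (again, an illustration with assumed field names and a made-up threshold, not a description of how any particular product does it), you could bucket utilization samples by hour of day and see where a server goes quiet:

```python
# Hypothetical sketch: build an hour-of-day utilization profile from
# timestamped samples and report the hours when a server could be reclaimed.
from collections import defaultdict
from datetime import datetime
from statistics import mean

samples = [  # (timestamp, cpu_pct) pairs from whatever monitoring you enabled
    ("2008-12-15T09:00", 62.0), ("2008-12-15T21:00", 3.0),
    ("2008-12-16T09:00", 58.0), ("2008-12-16T21:00", 2.5),
]

QUIET_PCT = 5.0  # assumed threshold for "this hour is a power-management window"

def hourly_profile(samples):
    by_hour = defaultdict(list)
    for ts, cpu in samples:
        by_hour[datetime.fromisoformat(ts).hour].append(cpu)
    return {hour: mean(vals) for hour, vals in sorted(by_hour.items())}

profile = hourly_profile(samples)
quiet_hours = [h for h, cpu in profile.items() if cpu < QUIET_PCT]
print(profile)       # {9: 60.0, 21: 2.75}
print(quiet_hours)   # [21] -- a candidate window for powering down or reallocating
```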
But I'm getting ahead of myself. The first step, then, is to find out what the servers in your data centers are doing. Then, if you don't like what they're doing, you can actually do something about it.
Thursday, December 11, 2008
Marking green IT's progress: 'it's been slow going'
Posted by Jay Fry at 10:22 PM
Two things got me thinking about the state of green IT today. First, with word "leaking" out about President-elect Obama's choice for energy secretary, I started wondering what sort of, um, change might be ahead in how the government will try to shape the energy footprint of data centers. From the energy-efficient data center work we've been involved with over the past 18 months, I'm hoping any actions build on the work that Andrew Fanara, the EnergyStar folks, the EPA, the DoE, and the like have been pushing.
Second, I saw mention of an IDC report yesterday at GreenComputing, predicting it will be a good year for green IT, despite the global economic worries. Energy efficiency work in IT will sneak in the data center's back door as "cost cutting," according to Frank Gens of IDC, since companies will adopt new technologies only if they have a speedy payback.
Both of those things led me to this question: what kind of progress (or lack thereof?) in data center energy efficiency have we seen in the past 12-18 months?
The Gartner Data Center Conference last week provided some good fodder for a report card of sorts. In fact, they had analyst Paul McGuckin, PG&E energy-efficiency guru Mark Bramfitt, and VMware data center expert Mark Thiele onstage for a panel that pretty much summed it all up in my mind. Some highlights:
"It's been slow going." Bramfitt kicked off his comments acknowledging that the incentive programs he's been running in Northern California for PG&E (the benchmark for many other programs nationwide) have only crept along, even though he has 25 things he can pay end users for. Most, said Bramfitt, are related to data center air conditioning, and not related to IT. Yet.
Despite the slow start, Bramfitt's asked his bosses for $50 million to hand out over the next three years, and was legitimately proud of a $1.4 million check he was presenting to a customer this week (he didn't name them during the panel, but I'm betting it was NetApp based on this story). VMware's Thiele noted that PG&E can actually be pretty flexible in dreaming up ways to encourage data center energy savings. He mentioned creating two completely new incentive programs in conjunction with PG&E in his current job and with previous employers.
People are realizing that energy efficiency is more about data center capacity than cost. Bramfitt noted that the real value of the incentive check that utilities can provide back to end users is not the money. It's the ability to "wave that check around saying, 'Look, PG&E paid me to do the right thing.'" More interestingly, though, "the people in my programs understand that energy efficiency has value for capacity planning and growth," Bramfitt said. "It's a capacity issue. It's not financial."
IT and facilities are only just starting to work together. Gartner's McGuckin said these two groups are still pretty separate. That matches data we published from our energy-efficiency survey from earlier this year. "I'm convinced," said McGuckin, "that we're not going to solve the energy problem in the data center without these two groups coming together."
Thiele talked about an idea that I've heard of being tried by a few companies: a "bridging-the-gap" person -- a data center energy efficiency manager -- who sits between IT and facilities. Thiele has someone doing exactly this working with him on VMware's R&D data centers. This is someone who looks at the data center as a system, an approach that, based on what our customers tell us, makes a lot of sense. "So far it has been really, really well received," he said.
Mythbusting takes a long time. We started a page on our website of server power management myths back in late summer 2007. One of the first objections we encountered then was that it's bad to turn off servers. We still hit this objection over a year later. And I expect we will continue to for a while yet (old habits die hard!). At one point in the Gartner panel, McGuckin playfully asked Thiele about a power management comment he made: "You're stepping on another myth -- you're not suggesting shutting down servers, are you?"
"I am," Thiele said.
To help Thiele make his point, McGuckin then quoted Intel saying that servers can handle 3,000 power cycles, which translates to an 8-10 year life cycle for most machines if you turn them off once a day (Thiele noted that most get replaced in 3-5 years anyway). We're glad to help these guys do a bit of mythbusting, but I can tell you, it’s not a short-term project. "Things I considered to be gospel two years ago are being heard by people as new today," said Thiele. Having Gartner and other thought leaders continue to add their voice to this discussion can't hurt.
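The arithmetic behind that lifecycle figure is simple enough to check on the back of an envelope (the 3,000-cycle rating is the number quoted in the panel; the rest is just division):

```python
# Back-of-the-envelope math behind the power-cycling "myth"
rated_power_cycles = 3000          # the figure McGuckin attributed to Intel
cycles_per_day = 1                 # shut each server down once a day
years_to_exhaust = rated_power_cycles / (cycles_per_day * 365)
print(f"{years_to_exhaust:.1f} years to use up the rating")   # ~8.2 years
# Thiele's point: most servers are replaced in 3-5 years anyway.
```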
The best thing to do? Nibble at the problem. In the end, suggested Thiele, you need to work on energy efficiency in stages. Maybe that's a metaphor for the progress green IT is making overall. "Nibble at this," he said. "Don’t try to eat the whole whale at once. Pick a couple things you can get some traction on. Get some quick wins."
I know we've certainly adjusted our approach to help customers start small and even set their projects up as self-funding whenever possible. Quick, high-return "nibbles" are probably the best single way to make the case for energy-efficient IT -- and to ensure that next year shows a lot broader success throughout the industry than we've seen in the year that's gone by.
Monday, December 8, 2008
'Real-time infrastructure': where do you start?
Posted by Jay Fry at 4:35 PM
I saw a lot of coverage last week on Tom Bittman's keynote at the Gartner Data Center Conference in Vegas about how IT infrastructure is going to evolve (summary: it's looking very cloud-like). Only a few of the articles (like Derrick Harris' blog) picked up some of the angles Donna Scott presented the next day to another packed room -- more of an "OK, I'm interested, now how do I get started?" session about creating a real-time infrastructure. Given all the hype about how companies should be creating next-generation architectures and the like, I thought her focus on what's being done and what could be done today was worthy of some highlights.
First off, Donna Scott gave credit where credit was due: a big driver toward real-time infrastructure (RTI) has been (and will continue to be) virtualization. It's the first step out of the moving airplane, hopefully with the parachute on. Virtualization allows IT operations to see that it's possible to de-couple hardware and software, to do things differently, and to get seriously beneficial results.
Cost wasn't the main driver for these initial steps toward a real-time infrastructure, according to the in-room poll that Donna took. Thirty-five percent said the reason was agility instead. But it's not yet full-steam ahead to get that agility benefit, either. "There's no such thing as data-center-wide RTI," Scott said. "It only exists in pockets at this point."
So what's holding RTI up?
The two highest results from her polling told a tale that we here at Cassatt have heard before: it's not the technology that's the problem. Management process maturity (29%) and organizational/cultural barriers (28%) were what held organizations back from adopting RTI. In fact, in Scott's poll "unproven technology" was holding back only 10% of the audience. To put this in context from our own experience, Cassatt's largest customers have needed the most assistance with handling internal politics, organizational objections, and modifications to the way they manage their IT operations as they move through an implementation. Some of this assistance comes from us, but a lot of the broader change-management expertise can come from the partners we work with. Donna summed it up nicely: "Don't ignore the people and cultural issues."
Donna's polling also found that organizations in the room had started server virtualization projects as their initial foray into creating a real-time infrastructure (56% chose this option). Makes sense. Three other areas that got some interest as RTI starting points:
1. Disaster recovery -- sharing and re-configuration (11%)
2. Loosely coupled server high-availability -- replacement of failed nodes (7%)
3. Dynamic capacity expansion -- typically for specific applications (7%)
Scott pointed to a way to get the "best of both worlds" and begin making steps toward RTI: combine server virtualization and a dynamic resource broker (what she called a "service governor"). In fact, her assumption is that "there will be many service governors in your organization," often including specialized ones that know specific types of environments very well.
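To give a feel for what a service governor does at its simplest, here's a toy sketch of a priority-based resource broker. It's an illustration only -- the policy, names, and data shapes are my assumptions, not Gartner's definition or any vendor's implementation:

```python
# Toy "service governor": allocate a shared pool of servers to services
# in priority order, based on each service's current demand.
services = [
    # (name, priority: lower number = more important, servers_needed_right_now)
    ("order-processing", 1, 6),
    ("reporting",        2, 4),
    ("dev-test",         3, 5),
]
free_pool = ["srv%02d" % i for i in range(1, 11)]   # 10 idle servers

def allocate(services, free_pool):
    """Hand out servers by priority; lower-priority services take what's left."""
    allocations = {}
    pool = list(free_pool)
    for name, _prio, needed in sorted(services, key=lambda s: s[1]):
        granted, pool = pool[:needed], pool[needed:]
        allocations[name] = granted
    return allocations, pool

allocations, leftover = allocate(services, free_pool)
print(allocations)  # order-processing gets 6, reporting 4, dev-test 0
print(leftover)     # []
```

A real governor would, of course, also reclaim capacity as demand falls and enforce service levels, which is where the rest of the RTI discussion comes in.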
So where do you start?
Donna went on to offer one of the first categorizations of service governors to date. Her slide listed application-centric service governors, infrastructure-centric service governors, and other flavors as well. Who's providing them? Vendors like IBM, CA, DataSynapse, VMware, and, yes, Cassatt, too. She put us in a couple of categories, in fact.
"All vendors have something to offer," Scott said, but each has its own perspective on how to deliver value for customers. (Cassatt's take, by the way, is to be as vendor-neutral as possible, while providing dynamic control over the broadest set of hardware, software, and virtualization as we can.) "Most of the innovation in this space," Scott said, "is happening with the smaller suppliers."
She wrapped up her comments with a bit of a warning and a challenge: some organizations are eager to give real-time infrastructure a shot (and these are obviously the type of organizations that become Cassatt customers), figuring that the resulting rewards will easily outstrip the risks.
"RTI is not for the faint of heart," said Scott. "This is really cutting-edge stuff, so you could be breaking new ground."
I suppose it goes without saying, but if you're interested in someone to help break that new ground with you, I think we could recommend that someone.
Saturday, December 6, 2008
If David Letterman worked for Gartner
Posted by Jay Fry at 10:28 AM
If David Letterman worked for Gartner, Wednesday's "Top 10 Disruptive Technologies Affecting the Data Center" keynote from Carl Claunch would probably have been a bit more like engineer/humorist Don McMillan's routine at the Gartner Data Center Conference. Carl's Top 10 list certainly would have been funnier mashed-up with McMillan's good-natured attack on death-by-PowerPoint. But maybe that wasn't the vibe Carl was going for.
Or maybe I've been in Vegas too long.
In any case, Derrick Harris from On-Demand Enterprise gave a good run-down of Carl's keynote (including the Top 10 list itself) and other key presentations from the week. I thought I'd add a few specific comments on the cloud computing and green IT items (#2 and #10, respectively) from what Carl said on stage. And, for good measure, I can't help but add in a few of McMillan's "time-release comedy" nuggets.
How do you know a cloud provider can deliver what they promise? Since you can't spend countless hours asking cloud service providers a sufficiently long string of questions about how they run their production environment for high reliability, you have to fall back on one of two approaches, Claunch said. One is contractual remedies. That one's inadequate. The second (and only currently useful) approach is to base your decision on reputation, track record, and brand. As Claunch said, "there's a leap of faith here. It's early days."
What is a virtual server? The waiter or waitress who never shows up.
(You guessed right. That wasn't in Carl's keynote. McMillan wrote that one especially for us virtualization-immersed conference goers.)
Huge swings in the scale of compute resources needed to support your apps would pretty much force you to try using an external cloud provider. Claunch didn't think organizations will have the ability to support their apps when compute requirements suddenly grow by 10, 100, or 1,000x. I'm not sure I agree: it's what companies are having to do now. Of course, it's also what leads to massive over-provisioning of servers that end up sitting idle for most of the year. But if you set up your servers in a pool that any of your apps can dip into, automatically provisioned (and de-provisioned when appropriate), all using priorities and policies you set, you actually can do this yourself. Several of the organizations Cassatt is working with now are doing exactly this.
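Here's a hedged sketch of the kind of policy loop I'm describing -- the thresholds and names are invented for illustration, not lifted from any product:

```python
# Illustrative auto-scaling policy: grow an app's footprint from a free pool
# when load spikes, and give servers back when the spike passes.
def rebalance(app_servers, free_pool, load_per_server, target=0.6, slack=0.3):
    """Return updated (app_servers, free_pool) based on current load per server."""
    if load_per_server > target and free_pool:
        app_servers.append(free_pool.pop())          # provision one more node
    elif load_per_server < slack and len(app_servers) > 1:
        free_pool.append(app_servers.pop())          # de-provision back to the pool
    return app_servers, free_pool

app, pool = ["a1", "a2"], ["f1", "f2", "f3"]
for load in (0.9, 0.85, 0.4, 0.2, 0.2):              # a demand spike, then quiet
    app, pool = rebalance(app, pool, load)
    print(len(app), "app servers,", len(pool), "in the free pool")
```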
I definitely agreed with his next point, though:
With cloud computing, it's all about service. As Carl said, "All that matters to you is the service boundary." In most respects, you don't really care what combination of hardware, software, or virtual machines is being used to run your applications, so long as they run to your satisfaction. "If they [the cloud provider] can figure out how to do it with rubber bands reliably, that’s fine," Claunch said. "It doesn't matter to you."
What is enterprise storage? McMillan told us, of course, that "enterprise storage" is a closet on Star Trek. May your data centers live long and prosper.
About the Green IT topic: more than a third of the audience would be willing to pay a premium if a new bit of technology is "green." 26% of the people answering Claunch's in-room poll said they'd "buy green" only where it saves money, space, defers data center construction, and the like. That much I expected. What I didn't expect was the 34% who said they'd pay more for technology if it's green. In this economy. Seven percent even said they'd pay a "substantial" green premium.
Finally, statistics are everywhere at conferences like this. Some are intriguing. Some are useless. You can never quite tell immediately which is which, though. (The goofy Gartner game-show music while the voting and tallying is happening certainly doesn't help.) The Gartner analysts, though, helped provide context whenever they did the in-room polling by rattling off how the results had changed from the past few times they'd done similar polls or what they'd heard from end-user inquiries.
In honor of useless statistics, then, I'll leave you with one that McMillan pointed out that left 'em rolling in the aisles. If 44% of marriages end in divorce, that must mean that 56% end in death. That really doesn't seem to leave you with any good options, does it?
And Gartner should breathe a sigh of relief: I think Letterman likes his "day" job.
Thursday, December 4, 2008
What this recession means for your data center operations
Posted by Jay Fry at 5:03 PM
Apparently the recession isn't deterring too many people from talking about how to improve data center operations. Attendance here at Gartner's 27th annual Data Center Conference is supposedly down only 10% since last December's conference (when, some experts now say, this whole nasty recession actually began). At least, that's the official word on conference attendance. At least one of our neighboring vendors in the exhibit hall here was a bit skeptical of that count. It was hard for me to get an accurate assessment: being right next to the bar kept a pretty big crowd streaming by us during the evening events. Ah, Vegas.
But after these ~1,800 data center managers finish their week here, what advice for dealing with data center operations in the current economic, um, free fall will they have to take home? (Other than, "you probably shouldn't bet your entire remaining '08 capital budget on black.") Here are some tidbits from a few of the analyst sessions I picked up:
Asset management. According to Donna Scott in her "IT Operations Management Scenario: Trends, Directions, and Market Landscape" session, many IT organizations don't have a good handle on what they have. Cassatt's experiences with customers back this up. Once you have this info, you can move on to the next item...
Get rid of things you don't use. As Scott also said, there are probably things in your data center you don't use. Do a bit of "license harvesting." Don't pay for licenses you don't need. Analyze the usage of the software and hardware in the data center and try to do some consolidation to reduce costs. We'd suggest using extra machines as a free pool of resources which can be set up to handle big demand spikes from any of your applications.
Policy-based automation to reduce your labor costs. Things like your build processes or failure replacement/re-provisioning can be automated, according to Scott. We often find that automation can be a scary word to IT ops folks, but in an economy like this one, it might be just the thing you're looking for. Or worth a test-drive at least.
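As one hedged example of what the "failure replacement/re-provisioning" piece can look like in principle (a sketch with invented names and a stubbed-out health check, not a recipe from Gartner or an excerpt from our software):

```python
# Sketch of automated failure replacement: when a node stops passing health
# checks, pull a spare from the free pool and give it the failed node's role.
def healthy(node):
    # Stub health check; in practice this would be a ping, an agent heartbeat,
    # or an application-level probe.
    return node.get("ok", True)

def replace_failures(active, free_pool):
    for node in list(active):
        if not healthy(node):
            active.remove(node)
            if free_pool:
                spare = free_pool.pop()
                spare["role"] = node["role"]   # re-provision with the failed node's role/image
                active.append(spare)
                print(f"replaced {node['name']} with {spare['name']} as {spare['role']}")
    return active, free_pool

active = [{"name": "web-01", "role": "web", "ok": True},
          {"name": "web-02", "role": "web", "ok": False}]
free_pool = [{"name": "spare-01"}]
replace_failures(active, free_pool)   # -> replaced web-02 with spare-01 as web
```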
Don't forget about longer term, big impact items like data center consolidation. OK, it may take five years, but it's a pretty strategic move that can yield "enormous" savings, according to Scott. Bill Malik cautioned in a Q&A panel today, however, that if you don't re-engineer your operations processes for your new, consolidated data center when doing this, you run the risk of not seeing those returns. (To me, this one isn't exactly low-hanging fruit.)
Some other ideas from Paul McGuckin and Donna Scott's keynote
Tuesday's second keynote was about "The Data Center Scenario: Planning for the Future" and Paul and Donna provided another list of ideas to deal with the Aging Data Center Dilemma: "The dilemma becomes," according to McGuckin, "how do you live with the data centers you have" if this recession means no money for building more or even expansion?
Their suggestions:
- Build only what you need
- What if you're out of power in your facility? "Virtualization and consolidation are your friend," said McGuckin.
- Locate and decommission orphan servers and storage. "You need to find these and get rid of them," said McGuckin. It may be 1% of your servers. It may be 12%. It's probably not 30%, he said, but it's worth it. (And if you need help identifying those orphans, let us know.)
- Target five-year-old servers for replacement
- Move test systems (they are imaged elsewhere) and 50% of your Web servers off your UPS.
- Implement a power-capping and load-shedding strategy. Some servers that are less important need to be turned off, McGuckin said. (Something else Cassatt knows a bit about.)
- Cold-aisle containment is "almost magic," said McGuckin, "allowing you to cool a lot more stuff with the equipment you already have in your data center."
By the way, how bad is it out there?
Not surprisingly, cost pressures have jumped to the top of the list of concerns for those running data centers, if the results here at the Gartner show are any indication. In the in-room polling Donna Scott did during her "IT Operations Management Scenario: Trends, Directions, and Market Landscape" session, she found the top three concerns for infrastructure and operations were cost reduction/management (48%), demonstrating business value (42%), and the pressure to move/change even faster (36%). Previously 7x24 availability concerns were at the top of this list. It would seem that it is, indeed, the economy, stupid.
A more positive sign, though, was the bell curve Donna got when she asked about people's infrastructure and operations budgets in 2009 in another in-room poll. Sure, 30% of the audience saw a 1-10% whack coming to their budgets, but 29% thought things would stay the same, and some non-zero number of lucky dogs are actually getting a budget increase.
Maybe it's that last group that's been filling the MGM Grand's high-rollers room.
Wednesday, December 3, 2008
From Gartner's Data Center Conf: Tom Bittman says the future looks a lot like a private cloud
Posted by Jay Fry at 5:38 PM
Here's an amusing tidbit that gives you some indication what Gartner's Tom Bittman thinks the future holds for IT infrastructure and operations: he had so many "cloud" images in his keynote PowerPoint presentation here in Vegas on Tuesday that he actually had to go out and take photos of clouds himself. He needed to have enough distinctly different images that simply using clip art was getting Gartner into copyright issues (lawyers do care where you grab clip art from, I guess).
Bittman's presentation and comments kicked off the 27th annual Gartner Data Center Conference. Since Cassatt is here as a silver sponsor anyway, I thought I'd take the opportunity to give you a quick run-down of some of the highlights as they happen. Or at least as soon after they happen as is reasonable in a town like Las Vegas.
Back to Tom's opening comments: in laying out the future of infrastructure and operations, he described them as the "engine of cloud computing." In other words, pretty central to what is happening in this space. Cloud computing, whether provided by independent vendors today or delivered by you inside your own data center, requires an evolution in IT operations and management. Here are some interesting comments he made on stage:
"The future of infrastructure and operations looks an awful lot like a private cloud." With the advent of cloud-style computing, "it's now about who can share things best," according to Bittman. But sharing is not something IT -- or end users, for that matter -- have been used to doing. Or have liked doing. Certainly virtualization has gone a long way toward getting people to think about their IT resources in a new way. In fact, he pointed out how interrelated virtualization, cloud computing, and Gartner's concept of a "real-time infrastructure" actually are. That real-time infrastructure story is one that Bittman, Donna Scott, and the Gartner folks blazed the trail with seven years ago and Tom noted that examples are starting to appear. Google and Amazon have some of this figured out. Tom gave them credit for understanding how to do shareable infrastructure, but not for understanding quality of service requirements for applications anywhere close to the way the IT people in his audience do.
"If you fully utilize your own equipment, Amazon [EC2] will cost you twice as much." Meaning this: the cloud computing benefit that is most interesting is not price. In most enterprises, if you are doing a good job of running your IT ops, you can do it for less than Amazon can. Instead, Bittman contended, it is the low barrier to entry that's so intriguing. And the concept of elasticity. The cloud providers have a lot of work to do managing elasticity, though, and it's one of the reasons Cassatt talks to organizations about using our software to do that -- for their internal resources.
"[In cloud computing] you don't ask how it's done, you ask for a result." This is one of the beauties of cloud computing, inside or outside your four walls. It's about masking the underlying infrastructure from you in a way that's financially beneficial and very flexible. In discussing this, Tom explained the new "meta operating system" required in the data center (or in your cloud service provider's data center) to make this all possible. He talked about VMware's future vision of something like this announced at VMworld (Ken Oestreich provided a good commentary on the pros/cons of that) and Microsoft's newly announced Azure.
But, Bittman said, "a meta operating system is not enough. We need to have something that can automate this to service levels. In cloud computing, today this is very simplistic. There's very little you can control." In fact, this is a sign of the immaturity of the cloud offerings.
The thing that can fill this gap is what Tom and Gartner call a "service governor" (and, is the category in which Donna Scott placed Cassatt Active Response in her Wednesday RTI presentation). A "service governor is what's going to take advantage of the meta OS -- whatever's under the covers -- and make it do useful work," Bittman said.
Bittman also offered a warning. The path that cloud computing doesn't want to follow is that of client/server. People went around central IT and the result was skyrocketing costs and little integration. Failures will be rampant with cloud computing, he said, unless IT is involved, no matter how well-meaning those do-it-yourselfers from the business side of the house are.
His advice: have a plan. Be proactive when it comes to cloud computing, but have a blueprint about where you're going, especially in these economic times. Or, if I may add my own suggestion, at least hedge your bets by starting a side business selling cloud photography to IT departments, analysts, and vendors. We could even use a few ourselves.
Next up: more highlights from Vegas, baby.
Bittman's presentation and comments kicked off the 27th annual Gartner Data Center Conference. Since Cassatt is here as a silver sponsor anyway, I thought I'd take the opportunity to give you a quick run-down of some of the highlights as they happen. Or at least as soon after they happen as is reasonable in a town like Las Vegas.
Back to Tom's opening comments: in laying out the future of infrastructure and operations, he described them as the "engine of cloud computing." In other words, pretty central to what is happening in this space. Cloud computing as provided by independent vendors today or as delivered by you inside your own data center all require an evolution in IT operations & management. Here are some interesting comments he made on-stage:
"The future of infrastructure and operations looks an awful lot like a private cloud." With the advent of cloud-style computing, "it's now about who can share things best," according to Bittman. But sharing is not something IT -- or end users, for that matter -- have been used to doing. Or have liked doing. Certainly virtualization has gone a long way toward getting people to think about their IT resources in a new way. In fact, he pointed out how interrelated virtualization, cloud computing, and Gartner's concept of a "real-time infrastructure" actually are. That real-time infrastructure story is one that Bittman, Donna Scott, and the Gartner folks blazed the trail with seven years ago and Tom noted that examples are starting to appear. Google and Amazon have some of this figured out. Tom gave them credit for understanding how to do shareable infrastructure, but not for understanding quality of service requirements for applications anywhere close to the way the IT people in his audience do.
"If you fully utilize your own equipment, Amazon [EC2] will cost you twice as much." Meaning this: the cloud computing benefit that is most interesting is not price. In most enterprises, if you are doing a good job of running your IT ops, you can do it for less than Amazon can. Instead, Bittman contended, it is the low barrier to entry that's so intriguing. And the concept of elasticity. The cloud providers have a lot of work to do managing elasticity, though, and it's one of the reasons Cassatt talks to organizations about using our software to do that -- for their internal resources.
"[In cloud computing] you don't ask how it's done, you ask for a result." This is one of the beauties of cloud computing, inside or outside your four walls. It's about masking the underlying infrastructure from you in a way that's financially beneficial and very flexible. In discussing this, Tom explained the new "meta operating system" required in the data center (or in your cloud service provider's data center) to make this all possible. He talked about VMware's future vision of something like this announced at VMworld (Ken Oestreich provided a good commentary on the pros/cons of that) and Microsoft's newly announced Azure.
But, Bittman said, "a meta operating system is not enough. We need to have something that can automate this to service levels. In cloud computing, today this is very simplistic. There's very little you can control." In fact, this is a sign of the immaturity of the cloud offerings.
The thing that can fill this gap is what Tom and Gartner call a "service governor" (and is the category in which Donna Scott placed Cassatt Active Response in her Wednesday RTI presentation). "A service governor is what's going to take advantage of the meta OS -- whatever's under the covers -- and make it do useful work," Bittman said.
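Gartner didn't publish pseudocode for a service governor, but the idea boils down to a policy-driven control loop sitting above whatever meta OS owns the raw resources: watch the service level the business asked for, and tell the layer underneath to grow or shrink capacity. The sketch below is a hypothetical illustration only; the MetaOS interface, names, and thresholds are invented for the example and aren't anyone's actual product API.

# Hypothetical sketch of a "service governor": compare a measured service
# level against its target and ask the meta OS (whatever is under the covers)
# for more or less capacity. All names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class ServiceLevelObjective:
    max_response_ms: float   # the result the business asked for
    min_instances: int = 1
    max_instances: int = 20

class MetaOS:
    """Stand-in for whatever provisions capacity underneath the governor."""
    def measured_response_ms(self, service: str) -> float: ...
    def instance_count(self, service: str) -> int: ...
    def add_instance(self, service: str) -> None: ...
    def remove_instance(self, service: str) -> None: ...

def govern_once(service: str, slo: ServiceLevelObjective, meta_os: MetaOS) -> None:
    """One pass of the governor's loop; in practice this runs continuously."""
    latency = meta_os.measured_response_ms(service)
    count = meta_os.instance_count(service)
    if latency > slo.max_response_ms and count < slo.max_instances:
        meta_os.add_instance(service)       # service level at risk: grow the pool
    elif latency < 0.5 * slo.max_response_ms and count > slo.min_instances:
        meta_os.remove_instance(service)    # comfortably under target: give capacity back

The point of the sketch is the division of labor: the governor only knows about service levels and policy, while how an instance actually gets provisioned is the meta OS's problem.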
Bittman also offered a warning. The path that cloud computing doesn't want to follow is that of client/server. People went around central IT and the result was skyrocketing costs and little integration. Failures will be rampant with cloud computing, he said, unless IT is involved, no matter how well-meaning those do-it-yourselfers from the business side of the house are.
His advice: have a plan. Be proactive when it comes to cloud computing, but have a blueprint about where you're going, especially in these economic times. Or, if I may add my own suggestion, at least hedge your bets by starting a side business selling cloud photography to IT departments, analysts, and vendors. We could even use a few ourselves.
Next up: more highlights from Vegas, baby.
Monday, December 1, 2008
Forrester-Cassatt webcast poll: enterprises not cloudy -- yet
Posted by Jay Fry at 9:32 PM
There are lots of great resources out there if you're interested in cloud computing (no, really?). Some are a little more caught up in the hype, some less so. The trick is distinguishing between the two. We dug up some interesting stats in our recent webcast with Forrester Research that we thought were worth highlighting and adding to the conversation, hopefully tending toward the "less hype/more useful" side of the equation. But you be the judge.
First, some quick background on the Cassatt-Forrester webcast: it was a live event featuring Cassatt's chief scientist Steve Oberlin and James Staten, principal analyst at Forrester Research, held Nov. 20. We had both speakers give their take on aspects of cloud computing. James gave a really good run-down of his/Forrester's take on what defines cloud computing, the positives and negatives organizations will run into, plus discussed examples of what a few companies are actually doing today. He cited a project from the New York Times in which they made 11 million newspaper articles from 1851-1922 available as PDFs -- for a total cost of $240 using Amazon EC2 and S3 (and, no, I didn't forget a few zeroes. The actual cost was indeed $240).
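It's worth spelling out the arithmetic behind that $240, since it's what makes the example so striking. Only the total comes from the webcast; the instance count, runtime, and hourly rate below are the commonly reported approximations for the project, so treat them as assumptions.

# Rough reconstruction of the New York Times archive-conversion bill.
# Only the ~$240 total is from the webcast; instances, hours, and rate are
# the commonly reported approximations, used here for illustration.

instances = 100                 # approximate number of EC2 instances used
hours = 24                      # roughly a day of processing
rate_per_instance_hour = 0.10   # 2008 EC2 small-instance price, USD

compute_cost = instances * hours * rate_per_instance_hour
print(f"~{instances} instances x {hours} hours x ${rate_per_instance_hour:.2f}/hour "
      f"= ${compute_cost:.0f} of EC2 time")   # ~$240, before modest S3 charges

Standing up the equivalent capacity internally for a one-off job would obviously have cost far more than that, which was the point.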
James gave an overview of platform-as-a-service and infrastructure-as-a-service offerings out there today, plus advice to IT Ops on how you should experiment with the cloud -- including a suggestion to build an internal cloud inside the four walls of your data center. All in all, he left you with the feeling that cloud computing is indeed real, but really was asking IT ops folks what they're doing about it. (Translation: you can't afford to just sit on your hands on this one...)
Steve Oberlin's comments used James' thoughts about building an internal cloud as a jumping-off point. He explained a bit about how Cassatt could help IT build a cloud-style set-up internally using the IT resources they already have in their data centers. The main concept he talked about was having a way to use internal clouds to get the positives of cloud computing, but to do so incrementally. No "big bang" approach. And, how to help customers find a way to get around the negatives that cause concern about today's cloud offerings.
And that's where some of the interesting stuff comes in.
On the webcast, we asked some polling questions to get a feel for where people were coming from on the cloud computing topic. Some of the results:
To most, cloud computing is a data center in the clouds. There are many definitions of what cloud computing actually is. OK, that's no surprise. For the webinar attendees, it wasn't just virtualized server hosting (though 35% said it was). It wasn't just SaaS (though 49% said that was their definition). It wasn't just a virtualized infrastructure (the answer for 53%). By far the largest chunk -- 78% of the webinar attendees -- said it was an entire "data center in the clouds." (Attendees could pick more than one answer, which is why those percentages add up to well over 100.) And that was before James Staten offered his definition (one Steve liked, too): "a pool of highly scalable, abstracted infrastructure, capable of hosting end-customer applications, that is billed by consumption."
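The "billed by consumption" clause is the part of that definition that most separates a cloud from plain hosting: nothing is bought up front, and the bill is just metered usage times a published rate. A trivial illustration, with a made-up price card and made-up usage figures:

# Minimal illustration of consumption-based billing: metered usage times a
# published rate, with no up-front capacity purchase. All rates and usage
# figures here are invented for the example.

rates = {                        # hypothetical price card, USD
    "instance_hours": 0.10,      # per instance-hour
    "storage_gb_months": 0.15,   # per GB-month stored
    "transfer_out_gb": 0.17,     # per GB transferred out
}

usage = {"instance_hours": 1200, "storage_gb_months": 500, "transfer_out_gb": 80}

bill = sum(quantity * rates[item] for item, quantity in usage.items())
print(f"This month's bill: ${bill:.2f}")   # 120.00 + 75.00 + 13.60 = 208.60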
Most people haven't even started cloud computing yet. Of all the data we gathered from the webinar, this was the one that most starkly showed where people are with cloud computing. Or rather, where they aren't. We asked "what is your company currently using cloud computing for?" 70% replied that they are not using it yet. James said Forrester's most recent research echoes these results. So, there's a long way to go. Some were starting to experiment with websites/microsites, rich Internet applications, and internal applications of some sort. Those were all single-digit percentage responses. So what was the most frequently selected application type, aside from "none"? Grid/HPC applications (9%).
Security, SLAs, and compliance: lots and lots of hurdles for cloud computing. We asked about the most significant hurdles that webinar attendees' organizations faced with cloud computing. The answers were frankly not very surprising: they are the same things we've been hearing from large companies and government agencies for months now. 76% cited security as the cloud's biggest obstacle for their organization. 62% said service levels and performance guarantees. 60% said compliance, auditing, and logging. As if to underscore all of this, nobody clicked the "cloud computing has no hurdles" button. Not that we expected them to, but, hey, we're optimistic.
By the way, we don't take these results to mean that all is lost for cloud computing. On the contrary, it's these negatives and hurdles that people see that we think we've got some solutions for at Cassatt. In any case, these simple polls from our webcast just scratch the surface and beg for some follow-up research. In the meantime, if you're interested in hearing the webcast firsthand, you can listen to and watch a full replay here (registration required). If you'd like the slides, drop us an e-mail at info@cassatt.com.
Feel free to post your thoughts on these results.