All of this week's hubbub about Cassatt's future has certainly kept me plenty busy, but I thought I'd take a break from all that to publish some of the feedback we received about data center efficiency projects from our 2nd Annual Data Center Survey. Data center efficiency is a topic that's near and dear to our hearts, but is not always at the top of data center discussions (and in a week swirling with speculation, doubly so). Often it's the technology du jour that grabs the spotlight instead, even when the end goal is, in fact, to make things run better. Here's hoping we can help change that.
(Oh, and in case you're scouring this post for hints or tips about what's happening with Cassatt, I'll let you know up front that you're likely to be disappointed. Unless, of course, you manage to decode all of the many double-secret messages I've encrypted within this post. Riiiiight.)
You did a survey about data center efficiency? Why?
If there's one thing that I hope came through in the many discussions about Cassatt this week, it's that everything we do (and have done with the organizations we've worked with throughout our history) boils down to this: improving data center efficiency. The interesting thing is how fundamental optimizing data center operations is for a number of the topics that are front-and-center for IT today. Yes, we have products in this area, but that's not the only reason to focus here. Data center efficiency is a driver for cloud computing. It's at the core of the energy efficiency and green IT work. And it's something that the economic downturn is demanding of IT departments.
Now, I'm not claiming we knew how serious and prolonged the recession would be when we were coming up with our survey questions, but now actually seems like an ideal time to talk about data center optimization. And our numbers support this: only 5.5% of our respondents said they aren't pursuing a data center efficiency project.
So, with that in mind, here are some of the interesting things for the other 94.5% of you that we unearthed when talking to data center managers in our database:
Data centers: where everyone is "above average"
It's always fun to ask questions you know are going to lead to amusing results. When we asked how people rate their data center(s) in terms of IT operations efficiency, 41.2% said they were "average," 38.5% said "better than average," and 7% said "very efficient." That leaves only a little more than 10% who admitted they were "worse than average" or "poor."
OK, folks, maybe this whole conversation should start with a session on how to honestly assess where you are. Everyone obviously listens to too much Garrison Keillor. Having seen this "overestimation" problem in a lot of end user IT departments, we created a profiling service to give customers an accurate baseline for improving operations, so IT doesn't have to rely on a gut feel that they're "doing pretty well." You're probably not.
So, what data center efficiency projects are going on in IT?
Definitely virtualization. And virtualization. Oh, and virtualization. That (and server consolidation) accounted for 42.4% of the data center efficiency projects underway. A few data center consolidation (12.7%) and energy efficiency (11.8%) projects were thrown in for good measure. (By the way, we have some '09 energy efficiency project data that I'll post later alongside a comparison with last year's survey results on that topic).
This trend toward data center consolidation was underscored in a separate question in which almost three-quarters of the respondents said they were moving toward fewer, more efficient data centers. But there are exceptions. 16% are actually expanding the number of data centers they are using. Some of the audience polls at the December Gartner Data Center Conference showed similar trends -- in both directions. One size definitely doesn't fit all.
Everything isn't going to be virtualized, though
I previously posted a bunch of the virtualization-specific survey results that we received. Here's one additional bit of data: just how pervasive will virtualization eventually be? 4% of the respondents expect to virtualize 100% of their server environment, while another 26% figured they'd be between 75% and 100% virtualized.
The interesting answer, though, is the one at the bottom end of the scale: despite all of the inroads virtualization is making (and it is everywhere, there's no denying it), 41% of data center managers we talked to said they would be virtualizing less than half of their servers. There was even a stodgy 6% saying they won't be virtualizing at all, thank you very much. I'd say that a big chunk of respondents know what they're talking about, too, from actual virtualization experience: 43% have completed some virtualization projects, but still have more to do. These numbers continue to tell me that heterogeneous physical and virtual resources will remain the norm in big, enterprise data centers.
Why pursue a data center efficiency project? Economics, but not the economy
When we asked why folks were working on data center efficiency, one of the options was "current economic conditions." I figured this would be one of the big drivers. Who wouldn't at this point? In fact, it was not. The biggest reason for a data center efficiency project was that "there will be specific economic benefit, regardless of external economic conditions" (so said 35.8%). That implies that these data center efficiency projects are not a short-term fad, but in fact, likely to be an on-going activity. The only way to really tell will be to ask the question again in next year's survey. I'm hoping we get the chance to ask, one way or another.
The second biggest reason (32.1%) was capacity constraints on IT infrastructure (power, space, etc.). This matches what we've seen from our customers and prospects. The ones with the most urgency have consistently been the ones coming to us searching for a solution -- sometimes temporary, sometimes more permanent -- to a data center capacity issue (one organization wasn't able to add even a single additional server to a specific facility because they were out of electrical capacity). Only a little over 12% answered the "why" question by saying they were reacting to a bit of organizational arm-twisting, in the form of a corporate mandate to improve data center efficiency. Looks like many don't need that kind of incentive from above. They know an important issue when they see it.
Anything else worth noting…like, say, cloud computing?
And since no self-respecting data center survey would be complete without asking cloud computing questions, I'll post the answers we received on that topic shortly as well. No matter how things end up with Cassatt, I figure that continuing to post this data (you can see previous posts from the past few weeks here and here) could provide some useful insights for IT ops and the industry at large.
And keep the dialog going.
P.S. If you're looking for some more interesting data center (in)efficiency statistics, James Governor over at RedMonk passed on some dramatic ones from IBM that I link to here. For starters, 78% of data centers were built before the dotcom era, and one of James' sources figures supply chain waste from data center inefficiency is around $40 billion. More fuel for the fire showing why these projects are so important for data center operations.
Tuesday, April 21, 2009
Why are end users skeptical about cloud computing? Maybe because vendors control the info flow
Posted by Jay Fry at 10:38 PM
Sifting through the results of the Cassatt 2nd Annual Data Center Survey, I found something that seemed worth special comment, especially in light of all the cloud computing FUD stirred up by the McKinsey report released at the Uptime Institute event in New York last week.
The McKinsey report said that cloud computing, in so many words, isn't worth the money. I can understand delivering a contrarian message as a counterbalance to the extreme, positive cloud hype currently underway. Or as John Foley of InformationWeek called it, a "much-needed reality check." Fine. But, I (and a good number of other folks -- see the link list at the end of this post) think it was off base in a lot of ways.
But that wasn't the interesting part to me. Instead, I thought it was interesting how easy it was for a single report to hit the New York Times (and a few other places, obviously) and then ignite a "negativity storm" about the perils of cloud computing. Only a few days before, the industry was in a full-fledged love fest with the concept of clouds. (And, probably will be again: the vSphere 4 announcements from VMware this week are working hard to build the "love" for cloud computing -- especially private clouds -- back up.)
But back to my question: why is it so easy to launch a FUDfest around cloud computing?
Here's one possible answer: it's because the end users are not comfortable with the information they are getting about cloud computing (public/private, internal/external, hybrid -- all of it). Why? Because the biggest source of information about this new approach to data center operations and IT in general is (surprise, surprise) the big vendors who are trying to sell them this stuff.
What makes me say this? We asked end users about it. One of the questions we included in the Cassatt 2009 Data Center Survey was: "From what sources do you get data or guidance regarding cloud computing?" Respondents were allowed to select all answers that applied. The answers, in order of popularity were:
· System vendors (e.g. Dell, HP, IBM, Sun): 46% [who would have thought at the time that we should have included Oracle in this list?!]
· Analysts (e.g. Forrester, Gartner, IDC, 451 Group): 43%
· Industry events (summits, conferences, etc.): 42%
· Industry publications/websites/blogs (e.g. TechTarget, Computerworld): 39%
· Software/IT management vendors (e.g. BMC, CA, Cassatt, VMware, others): 33%
· Colleagues or peers: 32%
· None of the above: 21%
· Independent bloggers: 11%
· Other industry organizations: 10%
· Other: 2%
So topping the list -- above industry analysts or even their peers -- were vendors. Data center folks who answered our survey said vendors (and system vendors specifically) provided them with their key cloud computing information.
(By the way, this survey question was inspired by one we asked in last year's survey. In 2008, we asked where people got data or guidance on data center energy efficiency. The findings last year: 49% said system vendors, 43% said power & cooling vendors. Same story, different topic. Mark Fontecchio at TechTarget did a good post-Earth Day write-up on last year's findings you can read for comparison.)
Back to 2009 and cloud computing, though. I am heartened that IT ops folks do seem to spend due diligence time with industry websites and resources, and even manage to attend some of the bevy of cloud computing events making the rounds. And obviously, our results come from asking questions of the Cassatt database, a self-selecting lot, and should be viewed in that light.
But, I think there's something here. You could say that the news media love opposing views, and the McKinsey report was tuned to be just that. However, I'd argue that the skepticism by end users (and those writing on their behalf) was already in place. IT has been burned before by promises of the "next big thing." Especially when there is even a slight inkling that this next big thing is being pushed down their throats by anyone whose incentives don't line up with theirs. (OK, that's pretty much anyone selling something, so that's a little extreme, I admit.) We probably shouldn't underestimate, however, the impact of knowing that some very large vendors may not be here tomorrow (e.g. Sun being gobbled by Oracle), thanks to both the maturity of the high-tech industry and the weak economy.
The result? If those vendors are how you get your "reliable" information about a completely new way to run your data center, you'd probably be wise to be a bit skittish. And branch out a bit.
So, what should IT do to "branch out" and gain a little confidence about cloud computing? Even though I'm on the vendor side of the table, there are a couple things I'd suggest for end users that I think would be beneficial for them (and, frankly, for vendors, too):
· Make sure when pulling together supposedly reliable source information that it actually is relevant to your situation -- and actually valid. Some of the McKinsey report talks about SMBs being the only place clouds can be viable. Huh? (Again, check out some of the links below that help do the math on that.)
· Before making blanket statements about what's possible or not, try things out. You may find the cloud approach your peer companies took is exactly wrong for you. Or a no-brainer that you should have already piloted. You won't know until you’ve tried it, at least in a small, controlled way.
· If you have tried things out, even if only as a test or pilot, share that with the industry. We're all learning from each other, given how fast things move.
· Do continue to get information from vendors, but ask really tough questions. We're happy when people do that to us at Cassatt, and in fact treat that as a great way to qualify someone as a real potential customer. If you're not asking the hard questions, you're not serious about cloud computing. (The corollary: if we -- or whomever you are asking -- can't answer 'em, ask someone else who can.)
· Encourage the industry journalists/analysts to go beyond what people say in their press releases and report on what's really going on out there, good and bad. Many have this approach built into their DNA and are trying to do exactly this, but even the best ones need end user help to succeed. Be a source for them. What goes round, comes round, after all.
If you're interested in what people have been saying about the McKinsey report, here's a sampler.
Some basics to kick it off:
NY Times
Forbes
Data Center Knowledge
General commentary, ranging from mildly supportive with caveats to denouncing the whole thing (and lots in between):
InformationWeek (John Foley)
Appirio
Cloudiquity
Carpathia Hosting
RightScale
Avastu
Rough Type
ZDNet (Andrew Nusca)
Many Niches
InformationWeek (Michael Hickins)
Elastic Vapor
Tech Crunch
IT World
James Hamilton, AWS
GigaOm
Booz Allen Hamilton
CloudPundit (Lydia Leong of Gartner)
Mosso (Lew Moorman)
CIO Magazine (Bernard Golden)
Cloud Avenue (Krishnan Subramanian)
GoGrid
Whew. And most of this was before the Oracle-Sun deal announcement and the VMware vSphere news hit. (Though I am continuing to add to this as I find new commentary.) You would probably be able to make a pretty strong argument that information overload also plays a big role in end user cloud computing skepticism. If you're spending all your time keeping track of how the industry is morphing day-to-day, you're going to have trouble keeping your data center going, too.
Thursday, April 16, 2009
The Great Internal Cloud Debate: Where are we now?
Posted by Jay Fry at 9:48 AM
In case you haven't been spending 24x7 keeping track of the industry chatter on the internal cloud and/or private cloud issue, I thought I'd point you to some recent relevant discussions. And maybe highlight what sounds something like a consensus that seems to be building about how this concept will affect (and even benefit) IT, shocking though that may be to you.
One of the most methodically thought-through and extensively discussed sets of definitions for clouds-that-aren't-really-what-everyone-meant-by-cloud-computing-in-the-first-place that I've seen recently was proposed by Chris Hoff (@Beaker on Twitter), which came complete with visual aids (thank you for that, Hoff, actually). Hoff's original point was to try to add some clarity to the "vagaries of cloudcabulary" as he described it -- and to show why using the HIPPIE (Hybrid, Public, Private, Internal, and External) terms for clouds interchangeably (as, ahem, I've kinda been doing myself around here) really doesn't help matters.
In cloud computing, there are lots of hairs to split on where the physical (er, or virtual) compute resource is located, who owns it, who manages it, who can access it -- and how. And, it turns out that after much debate, the private cloud term is the one that seems to be the squishiest. Hoff ended up with something that a lot of people liked (read his post, updates, and the comments to get the full picture), but I'm betting that the precision with which his definitions have been sculpted will be lost on many. He acknowledges that, too, in saying "I don't expect people to stop using [the] dumbed down definitions" and points specifically to comparing "private" clouds to "internal" ones as a prime offender.
So when is an internal cloud private? Or vice versa?
Since this internal v. private cloud distinction isn't one that we've really been making on this blog up to this point, I think it's worth explaining what we mean by each in light of the issues Hoff raised.
When we talk "internal clouds" here, we are mainly talking about using what you already have in your data center to create a dynamic resource pool, managed through policy-based automation software like what Cassatt provides. That means we are, for the most part, ignoring the status of a lot of the other key issues that Hoff discusses in our initial conversations. It's not that management and access (to name a few) aren't important, but they are topics that we add to the discussion along the way with customers. They are just not necessarily the first words out of our mouths.
Why?
Because we're trying to highlight what we think is the most important value to Cassatt customers: being able to leverage the concept of cloud computing by using what you already own inside your data center. In beginning this discussion about improving the efficiency of the data center resources an organization already has, the "internal cloud" moniker seems a fair, if somewhat imprecise, starting point. But you have to start somewhere.
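To make that "dynamic resource pool, managed through policy-based automation" idea a little more concrete, here's a minimal sketch of the kind of logic involved. To be clear, this is not Cassatt product code -- the Policy and ResourcePool names, fields, and rebalancing rule are all hypothetical -- but it shows servers you already own being provisioned into, and harvested back out of, a shared pool based on priority and demand.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Hypothetical service-level policy for one application."""
    app_name: str
    priority: int      # lower number = more important
    min_servers: int   # never run with fewer than this
    max_servers: int   # never grab more than this

@dataclass
class ResourcePool:
    """Servers you already own, treated as one shared, dynamic pool."""
    idle: list = field(default_factory=list)      # powered-down / unassigned servers
    assigned: dict = field(default_factory=dict)  # app_name -> list of servers

    def rebalance(self, policies, demand):
        """Grow or shrink each app's slice of the pool, most important apps first."""
        for p in sorted(policies, key=lambda pol: pol.priority):
            servers = self.assigned.setdefault(p.app_name, [])
            target = max(p.min_servers, min(p.max_servers, demand.get(p.app_name, 0)))
            while len(servers) < target and self.idle:
                servers.append(self.idle.pop())   # provision an idle server into the app
            while len(servers) > target:
                self.idle.append(servers.pop())   # harvest it back into the pool

# Toy run: ten existing servers, two apps, and today's demand
pool = ResourcePool(idle=[f"server-{i:02d}" for i in range(10)])
policies = [Policy("web-store", 1, 2, 6), Policy("batch-reports", 2, 1, 8)]
pool.rebalance(policies, demand={"web-store": 6, "batch-reports": 5})
print(pool.assigned)   # web-store gets its 6 first; batch-reports gets what's left
```

The point of the sketch is the same point we make with customers: the pool is built from what's already on the floor, and policies -- not people -- decide which servers are on, off, or repurposed at any given moment.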
Of course, after heading down that path a bit with a customer, the "private cloud" term may be the one that actually makes the most sense to describe what they are doing or working toward. It may be that the customer's ideal set-up includes both internal and external resources (I'm talking location here), and may need to be used by people and resources inside/outside the company, but still need to be trusted and integrated sufficiently to be considered part of that company's internal compute infrastructure. Hybrid cloud situations could definitely fall into this category, as they begin to move from the realm of PowerPoint to that of reality. And in all those cases, we should absolutely use the private cloud term.
So, we'll endeavor to be more precise in what we mean. Thanks for the pointers in the right direction, Hoff.
And, by the way, private cloud computing is suddenly everywhere
Having just said that there is a distinction between how someone uses private and internal clouds as a label, I am forced to note that the IT press and analyst communities seem to have latched onto the "private cloud" term much more aggressively, regardless of any distinctions. Or maybe those publishing recently have been following the debate (some certainly have on Twitter). I'll let you decide. In any case, here are a couple write-ups on private (and internal) clouds worth noting of late:
· InformationWeek’s Charlie Babcock covered “Why ‘Private Cloud’ Computing Is Real – And Worth Considering” pretty thoroughly. He argues that even though no single piece of an internal cloud architecture may look like a breakthrough, "private clouds represent a convergence of trends holding great promise for enterprise computing," enabling users to tap computing power without a lot of know-how. If your IT guys can master virtualization, he says, you'll master the private cloud (despite virtualization not being a requirement, it seems to be a good measuring stick). And, notes Charlie, "internal clouds aren't just a more efficient way of maintaining old data center practices." Instead, you have to rethink how you do things. Craig Vosburgh did a whole Data Center Dialog post about that topic if you're interested.
· Forrester's James Staten explained his view on how to "Deliver Cloud Benefits Inside Your Walls." While James does use both the internal and private cloud nomenclature, his first report in their "private cloud" series published April 13, 2009, puts Forrester's stake in the ground on the topic. While their definition is a little too virtualization- and developer-based for my tastes, I can't disagree with James that "the end result looks a lot like organic IT" -- the term Forrester has been using for a dynamic, utility-style data center since 2002.
· "Private Cloud Computing Is Real -- Get Over It," said Gartner's Tom Bittman in one in a series of blog posts on the topic. Tom has been pretty clear and pragmatic in his posts on this topic. Whether the name is precisely accurate is not the important point, he says. Instead, it's the idea. And he's in the middle of writing a bunch of research that, if his blog posts are any indication, will put Gartner's full weight behind the concept.
· Also from InformationWeek: what GE is doing with their private cloud, and what private cloud tools are hitting the market. (Yep, Cassatt got a quick mention.)
· 451 Group/Tier 1 Research explained that “The Sky’s the Limit: How Cloud Computing Is Changing the Rules” in their recent webcast. William Fellows recounted BT, Betfair, and Bechtel examples of how real customers are using private (and even hybrid) clouds in this webcast, created from a new report of theirs. Customer examples like this (and the ones in the InformationWeek articles) are great to see.
So where are we now?
To borrow a phrase, we've come a long way, baby. Frankly, even from when Cassatt started using the "internal cloud" term aggressively in customer-facing conversations in the middle of 2008 or when I first blogged on the topic here ("Are internal clouds bogus?"), there's been a notable change in both the quality and quantity of discussions.
On the quality side of things: the conversation is no longer about whether this concept makes sense, but instead about who is doing it and the distinctions necessary for other companies to really get there. (Our own example: our recent webcast was explicitly about the steps toward creating an internal cloud.) This qualitative step forward is a good sign that the hype is starting to get outpaced by a little bit of real-world execution.
As for quantity, let's just say that my Google alerts for "private clouds" and "internal clouds" are going crazy. For fun (more or less), I set up "dueling Google alerts" on these two specific phrases a few months back. Some days they are more weighted toward one term, some days toward the other ("private clouds" won today, 8 mentions to 6). But the reality is that if I didn't have Google limiting their appearance in my inbox to only once a day, I wouldn't be able to keep my head above the, well, clouds.
Tuesday, April 7, 2009
Webcast polls: Fast progress on internal clouds, but org barriers remain
Posted by Jay Fry at 10:59 AM
Today's free advice: you should never miss out on the opportunity to ask questions of end users. Surprise, surprise, they tell you interesting things. And, yes, even surprise you now and again. We had a great opportunity to ask some cloud computing questions last week, and found what looks like an interesting acceleration in the adoption -- or at least consideration -- of internal clouds.
As you've probably seen, Cassatt does an occasional webcast on relevant data center efficiency topics and we like to use those as opportunities to take the market's temperature (even if we are taking the temperature of only a small, unscientific sample). Back in November, we asked attendees of the webcast Cassatt did with James Staten of Forrester (covering the basics of internal clouds) some very, well, basic questions about what they were doing with cloud computing. The results: enterprises weren't "cloudy" -- yet. 70% said they had not yet started using cloud computing (internal or external).
Last Thursday we had another webcast, and again we used it as an opportunity to ask IT people what they actually are doing with internal clouds today. As expected, end users have only just started down this path and they are being conservative about the types of applications they say they will put into an internal cloud at this point. But you'd be surprised how much tire-kicking is actually going on.
This is a bit of a change from what we heard from folks back in the November Forrester webcast. In last week's webcast we were a little more specific in our line of questioning, focusing our questions on internal clouds, but the answers definitely felt like people are farther along.
Some highlights:
The webcast itself: tips on how to create an internal cloud from data center resources you already have. If you didn't read about it in my post prior to the event, we had Craig Vosburgh, our chief engineer and frequent contributor to this blog, review what an internal cloud was and the prerequisites your IT operations team must be ready for (or at least agree upon) before you even start down the path of creating a private cloud. He previewed some of what he said in the webcast in a posting here a few months back ("Is your organization ready for an internal cloud?"). The second part of the webcast featured Steve Oberlin, Cassatt chief scientist and blogger in his own right, covering 7 steps he suggests following (based on direct Cassatt customer experiences) to actually get an internal cloud implementation going.
On to the webcast polling question results:
IT is just beginning to investigate internal cloud computing, but there's significant progress. The biggest chunk of respondents by far (37%) were those who were just starting to figure out what this internal cloud thing might actually be. Interestingly, 17% had some basic plans in place for a private cloud architecture and were beginning to look into business cases. 7% had started to create an internal cloud and 10% said they were already using an internal cloud. Those latter two numbers surprised me, actually. That's a good number of people doing serious due diligence or moving forward fairly aggressively.
One word about the attendee demographics before I continue: people paying attention to or attending a Cassatt webcast are going to be more likely than your average bear to be early adopters. Our customers and best prospects are generally large organizations with very complex IT environments -- and IT is critical to the survival of their business. And, I'm sure that we got a biased sampling because of the title of our webcast ("How to Create an Internal Cloud from Data Center Resources You Already Have"), but it's still hard to refute the forward progress. Another interesting thing to note: we had more registrations and more attendees for this webcast than the one featuring Forrester back in November. I think that's another indication of the burgeoning interest level in the topic (and certainly not a ding at Forrester or their market standing -- James Staten did a bang-up job on the November one).
Now, if it makes the cloud computing naysayers feel any better, we did get 10% of the respondents to the first polling question saying they had no plans to create an internal cloud. And, there was another 20% who didn't know what an internal cloud was. We were actually glad to have that last group at the event; hopefully they had a good feel for some basic terminology by the end of the hour.
IT organizational barriers are the most daunting roadblocks for internal clouds. At the end of Craig's section of the webcast, he recapped all the prerequisites that he mentioned and then turned around and asked the audience what they thought their organization's biggest hurdles were from the list he provided. Only one of the technical issues he mentioned even got votes. Instead, 45% of the people said their organization's "willingness to make changes" was the biggest problem. A few (17%) also mentioned problems with the willingness to decouple applications and services from their underlying compute infrastructure -- an issue that people moving to virtualization would be having as well. 5% weren't comfortable with the shifts in IT roles that internal clouds would cause.
So, despite the 17% that said they had the prerequisites that Craig mentioned well in hand, this seems to be The Big Problem: how we've always done things. Getting a whole bunch of very valuable benefits still has to overcome some pretty strong organizational and political inertia.
IT isn't sure what its servers are doing. One of the 7 steps Steve mentioned was to figure out what you already have before trying to create an internal cloud out of it. Sounds logical. However, by the look of things from our recent survey work and in this webcast poll, this is a gaping hole. Only 9% said they had a minute-by-minute profile of what their servers were doing. Instead, they either only had a general idea (41%), they knew what their servers were originally set up to do but weren't sure that was still the case (24%), or they didn't have a really good handle on what their servers were doing at all (18%). Pretty disturbing, and as Steve mentioned on the webcast, it's important to get this information in hand before you can set up a compute cloud to handle your needs. (We found this problem so prevalent with our customers that Cassatt actually created a service offering to help.)
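If you want to start closing that gap yourself, the first step doesn't require anything exotic. Here's a rough sketch -- my own illustration, not Cassatt's profiling service -- of a once-a-minute utilization sampler built on the open-source psutil library. The one-minute interval and CSV output are arbitrary choices; you'd run something like this per server and roll the results up.

```python
import csv
import time

import psutil  # third-party library: pip install psutil

def profile_server(minutes=60, out_path="server_profile.csv"):
    """Sample CPU, memory, and network once a minute to build a usage baseline."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_pct", "mem_pct", "bytes_sent", "bytes_recv"])
        for _ in range(minutes):
            cpu = psutil.cpu_percent(interval=60)  # blocks and averages over the minute
            mem = psutil.virtual_memory().percent
            net = psutil.net_io_counters()
            writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                             cpu, mem, net.bytes_sent, net.bytes_recv])
            f.flush()  # keep the file current in case the run is cut short

if __name__ == "__main__":
    profile_server(minutes=5)  # short run just to sanity-check the output
```

A week or two of data like this, per server, is often enough to separate the machines doing real work from the ones that were set up for a project nobody remembers.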
Test and development apps are the first to be considered for an internal cloud. In the final polling question (prompted by a question I posed on Twitter), we asked "What application (or type of application) was being considered to move to an internal cloud first?" And, despite the data nightmare that would ensue, we decided to let the answer be free-form. After sifting through and trying to categorize the answers, we came up with roughly 3 buckets of responses. People were interested in trying out an internal cloud approach first with:
· Development/test or non-mission-critical apps
· Web servers, often with elastic demand
· New, or even just simple, apps
While a few people also said back-up or DR applications (you can read about one approach to DR using an internal cloud in a previous post) and some pointed to specific apps like electronic health records, most were looking to try something with minimal risk. A very sensible approach, actually. It matches the advice Steve gave everyone, to be honest (step #3: "Start small").
For those who missed the webcast, we've posted the slides on our site (a quick registration is required). Helpful hint when you get the deck: check out the very last slide. It's a great summary of the whole thing, subtly entitled "The 1 Slide You Need to Remember about Creating Internal Clouds." The event recording will also be posted shortly (to make sure you are notified when it is, drop us an e-mail at info@cassatt.com).
In the meantime, let us know both what you thought was useful (or not) about the webcast content, and also what topic we should cover in the follow-on webcast that we've already started planning. Hybrid clouds, anyone?
Wednesday, April 1, 2009
A test: Applying an internal cloud to disaster recovery
Posted by Jay Fry at 10:02 PM
Amidst the talk about improving data center efficiency, a lot of things are on the table. You can move to virtualization, add automation, even try painting the roof (seriously...I heard Mike Manos of Microsoft talk about that one in his AFCOM keynote last year). There's usually a sacred cow, however. And usually that cow turns out to be one of the biggest culprits of inefficiency in the entire IT department.
Disaster recovery has been one of those sacred cows.
Being the iconoclasts we are here at Cassatt, we thought we should hit this bastion of IT ops conservatism head-on. Sure, the data center folks have many good reasons why they need to do things the way they do currently. And, sure, those same guys and gals have their necks in the noose if the word "disaster" ever appears describing their IT systems and it is not very quickly and relatively seamlessly followed by the word "recovery." We're talking about continuity of operations for business-critical IT systems, something that very likely could make or break a company (especially if things go horribly wrong with your data center and, say, the economy is already in the dumpster).
However, we wanted to apply the internal cloud concepts of resource sharing, policy-based automated provisioning, and energy efficiency (as in, don't have stuff on when it's not needed) to disaster recovery. So we did. We even found a couple willing users to try it out. I thought I'd explain our approach along with the before and after comparisons.
What disaster recovery approaches usually look like now
Data center disaster recovery solutions today vary, but they usually require either a full duplicate set of servers to be dedicated as back-ups in case of a failure, or an outsourced service that guarantees the same within something like two hours, at costs somewhere in the neighborhood of $5,000 per system. Oh, and those servers (no matter where they are located) need to be on, consuming power and cooling 24x7. There are less immediate and less responsive ways people have their DR set up, too (think trucks, tapes, and several days to restart). Usually, the more reliable the set-up, the more wasteful. We thought we should tackle one of the most wasteful approaches.
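To put a rough number on that waste, here's a back-of-envelope calculation. The $5,000-per-system figure comes from the paragraph above; the server count, power draw, and electricity price are purely illustrative assumptions, so plug in your own.

```python
# Back-of-envelope cost of a dedicated, always-on DR setup (illustrative numbers).
systems            = 100     # servers duplicated for DR (assumed)
service_per_system = 5_000   # $ per system (figure cited above)
watts_per_server   = 400     # assumed average draw, incl. cooling overhead
price_per_kwh      = 0.10    # assumed $ per kWh

hours_per_year = 24 * 365
power_cost   = systems * watts_per_server / 1000 * hours_per_year * price_per_kwh
service_cost = systems * service_per_system

print(f"Power and cooling for idle DR gear: ${power_cost:,.0f}/year")  # ~$35,000
print(f"Dedicated DR service cost:          ${service_cost:,.0f}")     # $500,000
```

The exact totals aren't the point. The point is that all of that spend sits idle until the day you hope never comes -- which is exactly the kind of inefficiency an internal cloud approach goes after.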
How we set up our test to show an internal cloud handling disaster recovery
We set up a very small test environment for this internal cloud approach to disaster recovery in conjunction with one of our customers. We placed two servers under Cassatt Active Response control and called this soon-to-fail environment the "Apples" data center (you'll see where this is going shortly, and, no, it has nothing to do with your iPod). We put another two servers -- of slightly different configuration -- under a different Cassatt controller in what we called the "Oranges" data center.
We helped the customer set up the Cassatt Active Response management consoles to control the clouds of (two) servers, plus the related OSs (Solaris in this case), application software, and networks. We helped them create service-level policies and priorities for the applications under management. The underlying stateless infrastructure management handled by Cassatt was synchronized with data and storage technologies to make sure that the customer's applications not only had servers to run on, but also had the data users required, despite the disaster. In this example, we worked with NetApp's SnapMirror product to handle the mirroring of the data between the Apples and the Oranges data centers.
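To make the idea of "service-level policies and priorities" a little more concrete, here's a minimal sketch of what such definitions might look like. This is purely illustrative: the application names, priority numbers, and fields are hypothetical, and this is not Cassatt Active Response's actual configuration format.

```python
# Hypothetical policy definitions -- illustrative only, not Cassatt Active Response's
# real configuration format. Each application declares a priority (lower = more
# important) and the minimum hardware it needs, so a controller can pick best-fit
# servers during a fail-over.
APPLICATIONS = [
    {"name": "order-entry",   "priority": 1, "min_cores": 8, "min_ram_gb": 16},
    {"name": "billing",       "priority": 2, "min_cores": 4, "min_ram_gb": 8},
    {"name": "internal-wiki", "priority": 3, "min_cores": 2, "min_ram_gb": 4},
]

# The fail-over site's server pool (think of the two Sun SPARC T2000s at "Oranges").
ORANGES_SERVERS = [
    {"name": "oranges-01", "cores": 8, "ram_gb": 32},
    {"name": "oranges-02", "cores": 8, "ram_gb": 32},
]
```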
Disaster is declared: the internal cloud begins to beg, borrow, and steal servers
Time for the actual disaster now. Here's what happened:
· We declared the Apples data center "dead." A human notified the Cassatt software managing Apples that it was OK to kick off the disaster recovery process.
· Because Cassatt Active Response was managing the cloud of IT resources, it knew what applications were running in the Apples data center and that, because of the disaster, they needed to be moved to the Oranges data center to keep the business running.
· Cassatt checked to see what hardware was available at the Oranges site. It saw 2 Sun SPARC T2000s.
· Since the hardware configuration at the Oranges site was different from the Apples site, the Cassatt software used the priorities the customer had set up to bring up the most important applications on the best-fit hardware (a rough sketch of that best-fit logic follows this list).
· With these priorities, Cassatt provisioned the available hardware at the Oranges site with the operating systems, applications, and necessary networking configurations. (The Oranges systems were off when the "disaster" began, but if they had been running other apps, those apps would have been gracefully shut down and the servers "harvested" for the more critical disaster recovery activities.)
· The applications that were once running at the Apples site came up, despite the hardware differences, on the Oranges site, ready to support the users.
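If you're curious how the priority-driven, best-fit placement in the steps above might work, here's a rough sketch in Python. It assumes the hypothetical policy structures from the earlier snippet (and includes a small inline example so it runs on its own). The real Cassatt software does far more -- provisioning OS images, networks, and gracefully harvesting lower-priority servers -- but the core placement idea is straightforward:

```python
def place_applications(applications, servers):
    """Assign the most important applications to the best-fitting available servers.

    Hypothetical illustration of priority-based fail-over placement; not the
    actual Cassatt Active Response algorithm.
    """
    placements = {}
    available = list(servers)

    # Bring up the most important applications first.
    for app in sorted(applications, key=lambda a: a["priority"]):
        # Servers that satisfy this application's minimum requirements.
        candidates = [s for s in available
                      if s["cores"] >= app["min_cores"]
                      and s["ram_gb"] >= app["min_ram_gb"]]
        if not candidates:
            # With fewer resources at the fail-over site, lower-priority apps
            # simply stay down until capacity is freed up or added.
            placements[app["name"]] = None
            continue
        # "Best fit": the smallest server that still meets the requirements,
        # leaving bigger boxes free for applications that genuinely need them.
        best = min(candidates, key=lambda s: (s["cores"], s["ram_gb"]))
        available.remove(best)
        placements[app["name"]] = best["name"]
    return placements


if __name__ == "__main__":
    apps = [
        {"name": "order-entry", "priority": 1, "min_cores": 8, "min_ram_gb": 16},
        {"name": "billing", "priority": 2, "min_cores": 4, "min_ram_gb": 8},
    ]
    servers = [
        {"name": "oranges-01", "cores": 8, "ram_gb": 32},
        {"name": "oranges-02", "cores": 8, "ram_gb": 32},
    ]
    print(place_applications(apps, servers))
    # {'order-entry': 'oranges-01', 'billing': 'oranges-02'}
```

Note the empty-candidates branch: that's essentially the reduced-capacity case described a couple of paragraphs below, where the priorities determine which applications get the limited hardware and which ones wait.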
Sample disaster recovery complete. The internal cloud approach worked: the apps were back up in under an hour. An hour isn't appropriate for every type of app or business requirement, but for many others it would work just fine.
If there had been fewer server resources in the fail-over site, by the way, Cassatt Active Response would have applied the priorities the customer set up to enable the best possible provisioning of the applications onto the reduced amount of hardware.
This was only a small test, but…
Yep, these are only very limited tests, but they start to show what's possible with an internal cloud. The key difference from traditional DR is this: the customers doing these tests used the compute resources they already had in their data centers. With an internal cloud approach, the very logic that runs your data center day to day (and much more efficiently, I might add) is the same thing that brings it back to life after a disaster.
The other thing that makes this interesting: by using a cloud-style approach, this disaster recovery scenario can also be easily adapted to planned events, such as application migrations, or to moving current workloads onto the most energy-efficient server resources available. I'm sure there are a few other scenarios that customers will think of that we haven't.
We'll keep you updated as customers begin to deploy these and other internal cloud examples. And I promise next time we won't be comparing Apples to Oranges. (Yeah, that one hurt to write as well.)
Update: A week or so back, Craig Vosburgh did a technical overview of how a DR scenario might benefit from an internal cloud-based approach similar to the one described above. You can read that here.