Sunday, May 31, 2009

Old habits die hard: the bad news on data center energy efficiency

Despite the batch of pretty good news I reported in my previous post about the trends we see in how data center managers are approaching energy efficiency from 2008 to 2009, there is some bad news. Isn't there always?

But before we get too gloomy, it should be noted that much of the media, vendor, and customer discussion about energy efficiency over the past few years seems to have paid off in getting the word out. As I discussed last time, more IT folks have "green" initiatives to leverage, and more are measuring their power consumption. Sure, that sometimes means that what they measure is pretty inefficient or problematic ("Um, you know we're out of power in our New York data center?"), but at least they're measuring.

The conversation has led many (including my own esteemed colleague and Cassatt chief scientist Steve Oberlin) to point out that the current cloud computing discussions can absolutely be seen as one way that data centers -- or at least the organizations that own them -- can become more green.

The "green IT" messages just might be getting through, but...

All this discussion seems to mean that organizations like the EPA's Energy Star program, the Uptime Institute, the Green Grid, and others are getting through to people who run data centers. For example, in this year's Cassatt Data Center Survey, slightly more (63.2% v. 61.4% previously) know that the EPA recommends turning off servers when they aren't in use. Even though it's just a slight gain, I'll take it.

However, our survey also uncovered enough backsliding from 2008 to 2009 that I'm still forced to point out that some old habits die hard. Even if some of those habits are hurting the operation of your data center. In fact, some of the concepts that Cassatt has been strongly advocating are meeting some very stiff opposition. Chalk up a few points for operational inertia.

Example: many would consider shutting off idle servers, but the percentage has dropped

Here's an example of one case where the "business-as-usual" approach is still holding on pretty firmly: one of the simplest use cases for Cassatt Active Response has been to use our software to shut down servers when they become idle and then use our policy-based automation to turn them back on when needed. However, powering up and down servers has long been seen as a no-no in IT operations, despite a wave of sources (including the EPA and the Green Grid) recently advocating this approach.
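The basic shape of that use case -- watch for idle servers, power them down, and bring them back under policy when demand returns -- can be sketched in a few lines. This is purely illustrative pseudocode of the general idea, not Cassatt Active Response itself; the names, thresholds, and the `demand` parameter are all invented for the example.

```python
# Illustrative sketch of policy-based server power management.
# Thresholds, names, and the notion of "demand" are assumptions for
# this example, not details of any real product.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    cpu_util: float      # recent average CPU utilization, 0.0-1.0
    powered_on: bool = True

IDLE_THRESHOLD = 0.05    # below 5% utilization, treat the server as idle
MIN_ACTIVE = 2           # policy floor: never go below this many servers

def apply_power_policy(servers, demand):
    """Power down idle servers; power servers back up when demand requires."""
    active = [s for s in servers if s.powered_on]
    # Power down: idle servers beyond what demand and the floor require.
    for s in sorted(active, key=lambda s: s.cpu_util):
        if len(active) <= max(demand, MIN_ACTIVE):
            break
        if s.cpu_util < IDLE_THRESHOLD:
            s.powered_on = False      # in practice: an IPMI/iLO power-off
            active.remove(s)
    # Power up: bring servers back when demand exceeds the active pool.
    for s in servers:
        if len(active) >= demand:
            break
        if not s.powered_on:
            s.powered_on = True       # in practice: wake-on-LAN or IPMI
            active.append(s)
    return active
```

The point of the sketch is the policy loop itself: the decision to power machines up or down is made continuously by software against stated rules, not by a person filing a change ticket.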

Given the long-standing skepticism about fiddling with server power, our 2009 survey's result actually seems impressive: 55% of the folks who responded to our survey (IT operations, data center managers, and the like) could justify turning off servers. Seems significant, right? Well, it is, actually, if you have a good feel for the underlying conservatism of those charged with keeping the data center running. The sad news from our perspective, however, is that this number is down slightly from last year's figure (59%), showing that the entrenched management ideas are still very strong. That’s despite some pretty good savings estimates. (Our savings calculators conservatively show Active Power Management-driven cost reductions starting at around 23%, with potential for closer to 50% in many cases.)

Similarly, the "deal breakers" for server power management remain similar to what they were last year: application availability is the most important, with impact on physical reliability and application stability tied for second. This year's numbers do show a surge in worries about the physical reliability of machines (36.3% to 42.5%) and in the potential application downtime that people perceive server power management might cause (45.3% to 51.4%). But if you end up with a heat-induced outage like the one reported today, suddenly some proactive server shut-downs to avert a literal data center meltdown may not seem so scary.

So, any signs that the conservatism in IT operations groups can change?

Actually, yes. Respondents seemed to have grown more comfortable with determining ROI around the topic of server power management. (And, no, I don't think our aforementioned savings calculators can be credited with that.) Also, despite an increase in skepticism regarding using automation for server power management, 36.7% still said they would be OK using automation to power manage a majority of servers in their dev/test environments -- exactly the kind of advice we have been giving prospective customers. Interestingly, 27.2% even said they'd do this for low priority production servers.

Though the IT/facilities gap remains, it is shrinking

By the way, one of the issues alluded to in nearly all writings on the topic of data center energy efficiency -- the alignment gap between IT and facilities -- is still there. But the gap is closing. When asked how integrated facilities and IT planning are, 29.7% said there was "no gap" and they were "tightly aligned," while 37.0% said there was a "small gap" and that they speak with their counterparts in the other organization "somewhat."

Where was the improvement? Last year, 32.2% said they had either a "significant gap" in which IT and facilities touch base "infrequently" or a "large gap," meaning they "don't interact at all." This year those numbers dropped to 20.3%. Maybe these organizations are being brought together by smart companies looking for answers to their data center energy problems. Or maybe these guys are just taking the advice of Ken Brill of the Uptime Institute or various analysts and doing something as simple as taking their IT and facilities counterparts to lunch. I'm happy either way. It's amazing what a little communication can do.

So, are the 'experts' succeeding in being heard about data center energy efficiency?

One of the odd things we noticed in last year's survey was that, despite there being a great deal of independent, unbiased expert advice out there regarding data center energy efficiency, respondents got most of their information on the topic from entrenched system and power/cooling vendors. You know, the ones with a big stake in keeping the status quo. (Of course, it could be argued that these vendors have a good perspective on what's needed. However, these vendors just don't have the economic incentives to push radical change.)

And 2009? Same thing. Expert bloggers and media websites on the topic (like TechTarget's SearchDataCenter and others) did get significant mention. Peers and industry analysts did well, too. But the big guys are still the ones that folks are going to for guidance.

There was a curveball this year, too, however. The Uptime Institute, the Green Grid, the EPA, and even the Silicon Valley Leadership Group (which put on a great event about "green IT" last June) all fared unexpectedly worse than last year when we asked where folks go for data center energy efficiency guidance. Hmmmm.

Big problems take a long time to fix, so don’t expect instant improvement

The take-away from all this? I look at it this way: the data center power problem is a big, long-term one. The processes and approaches that have gotten us all into this situation are big, long-term ones. And, therefore, the solutions are going to need to be pretty fundamental, and, as a result, they will also take a long time to implement. So, we need to be in this for the long haul.

The good news, as I see it, is that the 2009 Cassatt Data Center Survey suggests that the magnitude of the problem is starting to be seen and understood. IT and facilities groups should be given credit for the initial actions they seem to be taking in the areas that they have under their control. It's a great start. Now, if you combine these small initial steps with a bit of economic pressure (which the outside world is adding to the mix quite effectively on its own right now), who knows? Maybe even some of these outdated habits will fall by the wayside.

If you'd like a copy of the 2009 Cassatt Data Center Survey results, e-mail me at

Wednesday, May 27, 2009

Measurement matters: survey finds improvements in data center energy efficiency, but...

When we ran our 2008 Cassatt Data Center Energy Efficiency survey, the topic was really at the forefront of the industry conversation. Green IT was everywhere you looked. It was a perfect time to ask questions (and get some intriguing answers) about how data center managers were approaching their power issues.

Fast-forward a year to 2009.

As we planned out the questions for our 2nd annual Data Center Survey, the hard part was figuring out what not to ask. We expanded our survey questions to ask about cloud computing, virtualization management, data center efficiency projects, and even how people are finding and dealing with orphan servers -- changes that we felt matched the important data center trends to keep tabs on.

Even if it is unable to keep up with the kind of buzz being generated by things like the cloud computing hype train, the data center energy efficiency issue remains critical. Last year's survey provided us with a lot of insights into what people were doing and thinking of doing around green IT -- and even how they felt about "radical ideas" like turning off servers they weren't using (59% said they could justify it, by the way).

So, you'll be happy to hear that we kept the energy efficiency questions in this year's survey. The trend data we now have from asking those same questions a second year in a row was worth the extra effort. (And, in the end, Cassatt's 2009 survey was only 4 questions longer than the 2008 one. We even had almost 100 more respondents this year.)

Here are a few insights I see from comparing the data center energy efficiency results from Cassatt's 2009 survey with those from 2008:

More awareness of the data center energy efficiency problem

All of the talk about energy efficiency has had some effect. More of our respondents have a corporate "green" initiative this year (46.9% in 2009, 40.4% last year). More have a handle on how energy-efficient their data centers are than they did last year: only 4% said they "didn't know" how they were doing on energy efficiency this year. Last year's figure was almost quadruple that (15.7%). I think it's safe to say that some people have been doing their homework.

Data center managers think their data centers are more efficient this year

Even better, respondents think their data centers are more efficient now. 30.8% believed their data centers to be "very efficient" or "better than average" in '09. That number was only 19.1% 12 months ago. Only 13.6% see themselves as "worse than average" or "poor" this year, down from 21.9% in 2008. There's still an amusing reporting bias that I've mentioned before in which way too many people believe they are average or above, but it looks like people believe they are getting better at data center energy efficiency. Is this optimism warranted yet? It might be; read on.

More good news: measurement is improving

For those of you who have been beating the drum to measure, measure, measure, our survey has some good news: people are. More people are measuring power consumption of their data center server environments this year than did last year, and they are measuring it with more granularity. Specifically, 19.7% said they don't measure power consumption, a drop from the 28.3% who said that last year. More people are measuring power consumption at more detailed levels this year: more by individual server, more by power distribution units (PDU), and more by server racks -- and fewer are just measuring at the server room level. That's all steady movement in the right direction.
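To make the granularity point concrete, per-server readings are the raw material: once you have those, the rack and room numbers fall out of a simple roll-up. The readings and rack names below are invented for illustration; in practice the numbers would come from metered PDUs or server management interfaces.

```python
# A minimal sketch of rolling per-server power readings up to the
# coarser granularities the survey asks about (server -> rack -> room).
# All readings and names here are invented for the example.
from collections import defaultdict

# (server, rack) -> watts; in practice from PDU or IPMI sensors
readings = {
    ("web01", "rack-A"): 220,
    ("web02", "rack-A"): 240,
    ("db01",  "rack-B"): 410,
    ("db02",  "rack-B"): 395,
}

def rollup(readings):
    """Aggregate per-server watt readings into rack and room totals."""
    per_rack = defaultdict(int)
    for (server, rack), watts in readings.items():
        per_rack[rack] += watts
    room_total = sum(per_rack.values())
    return dict(per_rack), room_total

racks, room = rollup(readings)
# racks == {"rack-A": 460, "rack-B": 805}; room == 1265
```

Measuring only at the room level would give you the 1265-watt number and nothing else; per-server data lets you find the idle machines that are worth turning off.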

But measurement uncovers a bigger problem

There's one potentially painful side effect of the increased measurement that's been reported in our survey, though: the problem of data center power capacity has gotten worse.

The data center power capacity problem is the topic that we here at Cassatt have found to be the main driver in kick-starting any sort of energy efficiency project. I've heard similar dire stats from Gartner and others, but here's what we found: 44.7% have a data center using 75% or more of its power capacity. That's way up from 28.6% last year. The percentage of those at more than 90% of power capacity at their most constrained data center more than doubled since last year (15.9% v. 7.1%). There is an urgent need for these organizations to do one of two things: find a way to extend the life of their existing data centers, or figure out how, in this economy, to build another data center. It's not a comfortable spot to be in, and unfortunately it seems that more and more data center managers are finding that out. I'm also betting that some of the public cloud vendors are getting calls from these same organizations, looking for a Plan B. Or C.

And, it is exactly these power, space, and/or cooling constraints, as you would expect, that were once again the top reason for pursuing a data center energy efficiency project (noted by 34.6% of respondents this year). Pursuing this kind of project because of "environmental responsibility," however, dropped, falling from 22.1% to 14.0%. Maybe folks are tiring of the "green" message that they've been hit over the head with for the past 18 months. Or, maybe reality has stepped in: "current economic conditions" was picked by 5.3% as the reason for pursuing energy efficiency projects for their data centers this year, too (a new choice in the 2009 survey). Green is nice, but IT has to react when the business (and its budget) is threatened.

So what green IT projects are they working on?

Almost exactly the same proportion as last year (63%) said that they are planning -- or are actually working on -- a data center energy efficiency project. The difference is that a few more of them are actually in progress this year (42.6%, up from 39.0% in 2008), rather than just having grandiose plans for IT "greenness" sitting on a shelf somewhere to keep their executives happy.

So what energy-efficiency strategies are people actually implementing? This year's list is nearly the same as last year's mix; everything is within a few percentage points of what we reported about projects being pursued last year: server consolidation/virtualization was noted by 73.3%, up from its already impressive numbers last year (69.2%). Some of the other projects, in order of popularity, are: storage consolidation/virtualization, more power-efficient servers, consolidate data centers, improve cooling, and server power management software.

We allowed respondents to tell us if they were working on multiple data center energy efficiency projects. Many were. People, on average, had exactly the same number of projects this year as in '08: 3.4 per respondent. That's proof that people continue to confront this complex problem in multiple ways.

As for the future, virtualization continues to lead, but 34.0% were also considering purchasing "more power-efficient servers." That's an interesting answer, given the capital spending freezes in place in many IT organizations. Also, one of the things Cassatt has always advocated remained on the list, too: over 20% of the respondents are considering server power management software of some sort.

Good news/bad news: progress, but a complex problem

As positive as a lot of the data above sounds, there's still a pretty big uphill climb that remains. The path we're on won't let data centers (or the power grid) sustainably support the business computing that industry is expected to generate even within the next few years. The economy might slow that demand some this year, but probably not for long (at least, I hope that's the case, from a personal and macroeconomic point of view). I noted the slow progress on green IT back in a December post, and I'll delve into a couple more examples of it in my next post.

In the meantime, kudos to the IT ops and data center professionals who are taking on the data center power problem head-on. And, now that they've gotten good at this measurement thing, maybe it's smart to consider some more, easy steps for data center folks to work on next. After all, I bet they're looking for another project to tackle. In their spare time. (I can always dream, right?)

Wednesday, May 20, 2009

Internal or external clouds? Both, say end users (or "Cloud computing data you won't hear about in Vegas")

Some people have all the luck. Some get to spend the better part of a week in the gilded gaming halls of Vegas, staving off hundred-degree heat with extreme air conditioning, waving their slot machine credit vouchers in one hand and their free drinks in the other, while supposedly learning a thing or two about cloud computing.

Ah, Vegas trade shows. The goal is to survive without leaving too much of your hard-earned cash in any of the fancy, flashing gizmos, nor doing any other things that can't be properly accounted for on a company expense report.

The lucky ones, by my estimation, are those of us who get to stay home, though. OK, that's probably just sour grapes on my part. But even from afar, watching the Interop, Enterprise Computing Summit, and Forrester IT Forum goings-on via Twitter, streaming keynotes, etc., I have a couple items I thought I'd add to the discussion: here is some new, real data from our 2009 Cassatt Data Center Survey that gives a snapshot about what end users are actually thinking and doing about cloud computing.

For those of you just rolling in from a night of Cirque du Soleil and all-night roulette, here's a quick summary:

- Organizations are most likely pursuing both internal AND external clouds, rather than just one or the other.
- Oh, and yes, people's definitions of the cloud are still all over the map.
- Many people haven't looked into cloud computing yet, but there are quite a few working on it right now. (No, really? Maybe that's one reason that everyone's converging on Vegas this week.)
- "I'm not sure" and "don't know yet" were really popular answers to a lot of questions. At the rate things are changing, though, I don't expect that to be the case for very long.
- The external cloud computing hurdles we've all been hearing about remain: security, service levels, and compliance, in that order, for starters.
- Despite some of the industry commentary about the importance of time-to-market, cost is the key driver for moving to an external cloud.
- Cost is also the top driver for people to pursue an internal cloud, though agility and on-demand elasticity follow close behind.
- Energy efficiency wasn't the primary driver behind any of these moves.

If you have a few minutes before returning to the blackjack tables, here's a bit more discussion on each of these points:

Everyone (still) has more than one basic definition of cloud computing

To start, here's what our respondents said cloud computing is. The funny thing (sort of) is that we let people select more than one answer, or even write in one of their own. The average number of answers given was 2.1. So either we did a bad job of giving them possible answers (which, given how fast descriptions are shifting, is certainly worth entertaining) or even the people who know what cloud computing is think it can be a bunch of different things. The responses:

- Internet-based services (49.1%)
- A data center in the cloud (42.9%)
- SaaS (39.4%)
- A virtualized infrastructure (36.6%)
- Don’t know or something else (20.6%)
- Virtual server hosting (18.1%)

How end users are using (and plan to use) cloud computing: SaaS and IaaS

We asked how (or if) their business was using cloud computing today -- and what they were using clouds for. The main answers were software-as-a-service (SaaS) (16.1%), compute or storage infrastructure (15.4%), e-mail/messaging/collaboration (12.6%), and websites/microsites (12.3%).

However, the two biggest responses show things are still revving up: 20.4% said they are "evaluating" cloud computing. 35.8% haven’t looked into it yet. And a stubborn (but expected) 11.2% say they will never use cloud computing.

In talking about the future, however, planned use of the cloud, well, puffs up quite a bit. Every option we gave saw an increase between what end users are doing today and what they plan to do in the cloud in the future. Infrastructure for compute or storage (which we'd call IaaS) jumps to 26.6% and SaaS also leaps to 24.8% when we asked about future plans. Delivering rich Internet applications appeared near the top, too, with 16.3%.

But, as a result of either the economy’s pressure or just uncertainty about what cloud computing is or can do, 40.4% said they don’t know what they will do on this topic. 8.5% specifically said they do plan to use cloud computing, but they don’t know what for. And only 7.1% said they will never use cloud computing.

The dark clouds around cloud computing: a reason to work within your current infrastructure

Much has been written and discussed over the problems that are holding IT back from using cloud computing. So, of course, we asked our respondents about the most significant hurdles that their organization faces with cloud computing. The answers weren't surprising, including the 32.9% that aren't sure what their problems with clouds would be yet. That matches the fact that many haven't started on cloud implementations. However, whether people have started or not, only 3.1% said that there weren't any hurdles. That's healthy realism. The hurdles:

- Security (50.0%)
- Service levels and performance guarantees (39.5%)
- I'm not sure (32.9%)
- Compliance, auditing, & logging (30.1%)
- Platform/architecture lock-in (18.9%)
- Doesn't improve use of existing infrastructure (11.2%)
- Narrow provider offerings (8.4%)
- Something else (5.9%)
- No hurdles (3.1%)

"Lock-in" and "narrow provider offerings" weren't seen as big deals yet, which makes sense. This is a very early market and people who are experimenting are forgiving and also have pretty specific needs they are trying to meet. As things mature for cloud computing and end users start to deploy a broader range of applications in the cloud, I'd expect those issues to become more important. Be warned, though, that maturity could come more quickly than expected, leading to some messy entanglements that will need to be worked out.

The rest of the issues listed above emphasize many of the reasons we at Cassatt have been talking to organizations (especially the large ones with very complex IT systems) about internal cloud computing. The goal we talk about is to use a company's existing IT infrastructure as a cloud of compute resources, dynamically allocated by policy, while maintaining use of the security, compliance, and other systems already in place. The idea is to get the most out of a cloud-style architecture without sacrificing the things a large IT organization is required to have in place.

Users aren't choosing between internal and external clouds -- they want both

In the survey, we also asked for respondents to characterize their organization's approach to internal and external cloud computing. The answers:

- We are investigating internal AND external cloud computing (27.1%)
- We are investigating internal cloud computing only (10.0%)
- We are investigating external cloud computing only (5.4%)
- We are not investigating any type of cloud computing (24.3%)
- I don't know (33.2%)

Again, a big chunk of "don't knows," but a significant number who know what their approach is say they are looking into both internal and external cloud computing. This is important, I think. End users want the best of both worlds. From our experience, the public/private cloud dividing line is likely to blur into a broad hybrid or federated cloud discussion with each organization pretty quickly, but each org will still have to start somewhere specific. The result: both internal and external cloud projects are forging ahead. For more discussion on the state of the internal/external cloud debate and definitions, check out some of my earlier posts (posts on how legit internal clouds actually are, the reasons hybrid clouds will take a while to mature, and the current state of the internal/external cloud debate are all related).

Cost is actually the driver, not agility -- so far

Gartner's Tom Bittman, Cassatt's own Steve Oberlin, and others over the past few months have said that, in fact, cost isn't going to be the big driver in the move to the cloud. Instead, the move is being pushed by a need for agility, for ease in getting started or even making changes, and to be responsive to what the business needs. That may still be the case long-term, and both Tom and Steve provide some compelling numbers and arguments why this is likely to be true. However, it's not true yet, according to our survey.

We asked about the most compelling element offered by external cloud computing for each responder's organization, and reducing costs ended up on top:

- Reductions in operating costs (27.1%)
- Overall improvement in ability to respond to business demands (agility) (18.2%)
- Other (16.1%)
- Benefits of a large data center without owning capital (13.2%)
- Immediate response to capacity changes (elasticity) (11.8%)
- Easy/cheap to get started (6.8%)
- Improved energy efficiency (4.3%)
- Transparent costs/metered billing (2.5%)

The same was true when we asked what elements would compel an organization to pursue internal cloud computing. Cost was on top. However, the ability to respond to the business and to have the infrastructure respond to dynamic changes (a key attribute of what Tom and Gartner call a "real-time infrastructure") was also pretty highly ranked:

- Reductions in current operating costs (43.8%)
- Overall improvement in ability to respond to business demands (agility) (37.5%)
- Immediate response to capacity changes (elasticity) (35.0%)
- I don't know (27.6%)
- Improved energy efficiency (25.4%)
- Use of existing capital to improve IT operations (21.2%)
- Benefits of external cloud services without needing to go outside our organization/processes (16.3%)
- Transparent costs/metered billing (15.2%)
- It's not compelling to my organization (9.5%)
- Other (1.4%)

It is interesting to note that fewer than 10% of respondents said internal clouds were not compelling to their organization.

As with any survey (and as I've mentioned in posting some of the other survey results over the past few weeks on virtualization and data center efficiency), this one is subject to the biases of our questions and of our pool of respondents, in this case people who have been following Cassatt's early moves in this market.

Nevertheless, it's interesting to see that, at least for the group of forward-looking IT folks we surveyed, internal clouds are on the radar, right alongside external clouds.

As they say in Vegas, maybe it's time to double down.

Background note on our survey: For this survey, which Cassatt did earlier this year, we had over 300 on-line respondents from our database, mainly directors, managers, and others directly involved with the IT infrastructure and data center operations from a variety of organizations, large and small. 45% were from entities employing more than 5,000 people, and 18.5% were from organizations larger than 50,000 people. A bit over half worked for organizations with between 1 and 5 data centers; 13.9% said they have more than 20. The respondents themselves mainly said (64.7% of them, anyway) that they had 1 to 5 data centers in their direct sphere of influence. This was our second annual Cassatt Data Center Survey. Last year's survey focused on energy efficiency topics; this year covered a broader variety of topics, including cloud computing, IT operations and data center efficiency, and we also revisited energy efficiency (more on that in a future post). If you'd like a copy of the raw survey results, e-mail me at

Thursday, May 7, 2009

Why Star Trek can teach us a thing or two about internal cloud computing

In honor of the opening of the new Star Trek (sort-of reboot) movie, the slightly whimsical blog topic for today is a bizarre little connection I noticed between what our Cassatt Data Center Dialog blog normally talks about and how the Starship Enterprise manages to "boldly go." Yes, even if you don't know a tribble from a warp core breach, I'm saying there just might be something to be learned about internal cloud computing from Star Trek.

And if not, at least this post is a good excuse to use the word fizzbin in a sentence in a work setting.

I realize that I am reaching a bit here, but I figured if Star Trek can continue to be a cultural phenomenon after 40 years, it's worth thinking about why. It certainly is right up there with Star Wars and Dilbert as one of IT folks' all-time favorite entertainment (and quote) sources. And since even Wil Wheaton (aka @wilw on Twitter and wunderkind Wesley Crusher on The Next Generation) has given the new movie a big thumbs up, I'm feeling like we're poised for a full-fledged Trekfest for the next few weeks. I might as well join in.

Actually, technology copies Star Trek

Newsweek used their May 4 cover story to ask what continues to make Star Trek so compelling. Sure, green-skinned Orion slave girls have their appeal to a certain audience. But, the Trek ratio of technobabble to green aliens heavily favors the gibberish. One of the Newsweek articles was by former Star Trek: The Next Generation writer Leonard Mlodinow who, after a trial by fire on the show's writing staff, figured out that the point of the show was not to see how you could "put real science in the science fiction," as he was attempting.

Instead, Star Trek lets you imagine something new that exists in the future, as a target for us to strive for. "I had had it backward," wrote Mlodinow. "The fun in Star Trek didn't come from copying science but from having science copy it." Captain James T. Kirk would certainly have recognized all those Motorola RAZRs had one of his slingshots around the sun landed him in 2006, for example.

Which brings me to cloud computing.

The Enterprise's internal cloud: now that's a real-time infrastructure

It seems to me that the minds that came up with -- and got us comfortable with how to use -- the phaser, universal translator, replicator, holodeck, and transporter, also came up with on-premise cloud computing.

Here's why I say that: in the 79 episodes of the original '60s series, and even in the Next Generation version from the '90s, a very seamless computing infrastructure ran the Enterprise. Their futuristic, on-board data center (in addition to inspiring Google, it seems) provided Kirk or Picard's crews with endless, on-demand data; instantaneous holographic simulations despite a few missing variables; and even complex calculations sufficient to bring whales, some very odd wardrobe choices, and at least one hideous toupee 300 years into the future in a stolen Klingon Bird of Prey.

Any real-world IT person would notice that the computing infrastructure and underlying architecture that enabled all this wasn't really mentioned that much. It was asked to do a whole lot (including speech recognition of nervous Scottish engineers), without ever interrupting the flow of any given episode with lame interludes where we had to watch the red-shirted Enterprise IT staff (you had to know that pun was coming) provision servers to each new task that Spock or Data threw at the system.

Of course, most IT folks have been tuning out the representations of "realistic" computing in TV and movies since Matthew Broderick used his home modem to kick off a quick game of "global thermonuclear war" in the '80s. So, I forgive you if you didn't spend much time thinking about this. But I digress.

Sure, there were a few Star Trek computer security problems that occasionally locked the crew on course for the Andromeda galaxy or kept them out of their own bridge, but they rarely had what you or I would call IT operations issues. I recall a few minor run-ins with Iconian probes and sentient nanites, but there really was no drama about the ship's computing infrastructure.

And that's exactly how it should be. This is what we're all working toward. Everyone gets the computing power they need, when they need it. (Which is usually right as the Borg are bearing down on them, blabbering on about resistance being futile and all.) The crew had computer resources allocated to whatever priorities they requested without people manually messing with the IT operations. Reads just like one of our Cassatt white papers. Or an explanation of Gartner's real-time infrastructure concept. Hmmm.

Now, I always had the impression that the Enterprise was actually a giant flying mainframe, which may be the case, but the idea of internal cloud computing has its roots there anyway. You have policy-based automation carving up and allocating available resources based upon priorities ("We're venting plasma down here!") and service levels you set ("I'm a doctor, not a bricklayer!").

Now that's an IT future filled with Roddenberryesque hope and optimism. And one that may sound far-fetched, but as data centers continue their path toward greater and greater complexity, we need computers to do what they are good at and help us mere mortals manage things. We need automation and policy management to take things beyond what we can handle in our own brains (or even Spock's brain) and take things where no one has gone before.

OK, so even if you aren't buying any of this so far, here's another thing to think about. Is there another technology more in need of a futuristic proof point, more in need of a little validation? Fast-forwarding to the 23rd century to see how this cloud computing stuff all works out -- and gets used -- is great fun for two reasons.

One, with the Enterprise, we get to see an organization whose "business" is seamlessly supported by their "IT" -- a situation that most 21st century folks would indeed qualify as science fiction. But it's something to aim for.

And, two, we get to fast-forward through all the cloud computing hype to the point in time where it's just about making stuff work. We can skip both the market noise and the chaotic mess that's sometimes supporting today's applications -- and get on with the living long and prospering.

Anyway, enjoy the movie. I expect a flurry of 120-minute "team lunches" at cineplexes just down the road from IT departments around the planet. I'll be the guy in the costume.

P.S. Just in case you think I'm a little too immersed in this 'Star Trek' thing, be thankful that at least I'm not doing Klingon opera like these guys (thanks, @digiphile).

(Update: There was a fun front-page San Francisco Chronicle article over the weekend by Benny Evangelista that recounted a number of the technologies that 'Star Trek' has inspired over the years, some successfully, some not so much. Two of note: a sort of medical tricorder that doesn't require needles to learn things about your blood and a universal translator-type device in use in Iraq since 2003. I'm still waiting for the transporter, myself.)