Tuesday, March 31, 2009

No April Fools' Day joke: Data center managers don't know what their servers are doing

Given the arrival of my favorite Silicon Valley holiday, I'd like to brush aside some of the content in this post as a big April Fools' Day ruse. Unfortunately, it's not.

Here are the facts: according to a couple folks who should know, data centers are buying new equipment before making good use of what they already have. OK, maybe that's not new news, but we've just added another couple scary bits of data ourselves. According to a new survey we did here at Cassatt, not only are data center managers making poor use of their equipment, they don't even know what some of that equipment is doing.

Sounds like a major disconnect. Here are some specifics from a couple big analyst firms and early feedback from our 2009 Cassatt Data Center Survey:

Gartner: Organizations struggle to quantify their data center capacity problems

Gartner analyst Rakesh Kumar published a paper (ID #G00165501) at the beginning of the month in which he slapped some pretty direct zingers (for Gartner, anyway) in his "key findings":

· 50% of data centers will face power, cooling, and floor space constraints within 3 years. (Our forthcoming survey, by the way, saw similar problems. 46% said their data center is within 25% of its maximum power capacity.)
· Data centers use "inefficient, ad hoc approaches" rather than "continuous-improvement, process-driven" methods to get a handle on these problems (how's that for a nice way to scold the IT guys?).
· Most organizations can't quantify their capacity problems.

Now, we know that the processes, like the technology, in use in today's data centers (especially for big organizations) have been cobbled together over time. Those ad hoc approaches Rakesh mentioned don't surprise me. But not being able to even quantify the problem seems like an issue that has to get solved immediately.

Forrester: the bad economy means it's time to improve IT operations...or else

Forrester's Glenn O'Donnell published some similar points in a recent NetworkWorld article. Given the rocky economy, companies are making IT investments in anything that improves operational discipline, provided there are speedy results, according to Glenn. He places the operational IT budget at around 75% of the total that companies spend on IT (also known as the "keeping the lights on" money). However, "30% to 50% of the energy we expend is wasted," he said, blaming "inefficient processes and poor decisions." And by energy, he means time and effort. And time = money. So he's talking about...money. The business world equivalent of Darwin's natural selection will not be kind to companies wasting money in the current economic climate, says Glenn. He points to "mean time to resolution" (MTTR) as a gauge of how a company is doing in its data center operational efficiency. One solution: "We must demonstrate...MTTR improvements with pilots of process and automation."
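(Just as an aside for readers who want to track this themselves: here's a minimal sketch of the MTTR calculation, assuming you can pull incident open/close timestamps out of your ticketing system. The data below is made up for illustration.)

```python
from datetime import datetime

# Hypothetical incident log: (opened, resolved) timestamps pulled from a ticketing system.
incidents = [
    (datetime(2009, 3, 2, 9, 15), datetime(2009, 3, 2, 11, 45)),
    (datetime(2009, 3, 10, 22, 0), datetime(2009, 3, 11, 1, 30)),
    (datetime(2009, 3, 20, 14, 5), datetime(2009, 3, 20, 14, 50)),
]

# Mean time to resolution = total time spent resolving incidents / number of incidents.
total_hours = sum((resolved - opened).total_seconds() / 3600.0
                  for opened, resolved in incidents)
mttr_hours = total_hours / len(incidents)
print(f"MTTR: {mttr_hours:.1f} hours across {len(incidents)} incidents")
```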

Cassatt survey: people don't know what their servers are doing

Back for a moment to Rakesh's Gartner report. "Most organizations," Rakesh writes, "struggle with quantifying the scale and technical nature of their data center capacity problems because of organizational problems, and because of a lack of available information." There's the rub. You need real, actual data before you can do anything about it. And -- as our customers have been telling us -- that's not easy to come by.

Our new Cassatt 2009 Data Center Survey (due out in the next few weeks) shows how acute the problem is. The survey will show that over 75% of data center managers have only a general idea of the current dynamic usage profile of their servers. A couple of other somewhat disturbing stats we found:

· 7% said they don't have a very good handle on what their servers are doing
· 20% know what their servers were originally provisioned to do, but aren't certain that those machines are actually still involved in those tasks
· Only a bit more than 16% of those in IT ops have a detailed, minute-by-minute profile of what activity is being performed, the users involved, granular usage stats, interdependencies, and the like
· More than 20% of respondents thought that between 10% and 30% of their servers were "orphans" (servers that are on, but doing absolutely nothing). Incidentally, the proportion of true orphans we have routinely found in our investigations with big customers is right around 11%. (For more on the "orphan" server topic, see my previous post.)

Getting the information that IT ops needs

OK, so that's all pretty dire. I'm interested, though, in how we help end-user IT ops teams make progress in the face of this. I know from working with customers on data center efficiency projects that this lack of data doesn't cut it: it's much better to come to them with a solution. Or at least some suggestions.

I'm sure others have come up with different ways to solve this, too, but we at Cassatt realized we had to create a way to profile what a customer's environment is actually doing over time. So, as a step in the process of using our software to create an internal compute cloud -- complete with automated service-level management policies for their applications, VMs, and servers -- we put together an Active Profiling Service. It combines monitoring software with the know-how of our experts who understand the ins and outs of data centers. The result: a look at what your data center is doing today and recommendations about what to do with that information.
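For a sense of what the first step of that kind of profiling looks like, here's a minimal sketch that summarizes raw utilization samples per server. It assumes you already collect per-server CPU readings from whatever monitoring tools you have in place; the data layout and host names are invented for illustration, and this is not the actual Active Profiling Service implementation.

```python
from statistics import mean

# Hypothetical per-server CPU utilization samples (percent), gathered at regular
# intervals over a few weeks by whatever monitoring tool is already in place.
utilization_samples = {
    "web-01":   [55, 60, 48, 70, 65],
    "batch-07": [2, 1, 3, 2, 1],
    "app-12":   [15, 12, 80, 78, 10],
}

def build_profile(samples):
    """Summarize each server's observed activity over the sampling window."""
    return {
        host: {"avg": round(mean(vals), 1), "peak": max(vals), "low": min(vals)}
        for host, vals in samples.items()
    }

for host, stats in build_profile(utilization_samples).items():
    print(host, stats)
```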

(If you're interested, we can show you some sample Cassatt Active Profiling Service reports: ping us at info@cassatt.com. Also, Steve Oberlin and Craig Vosburgh will be walking through some aspects of this in a webcast this week.)

Once you have the data: some data center optimization suggestions

Once you have some of the crucial profile data, what are some useful optimization suggestions?

Glenn from Forrester points to Harley Davidson as someone who is headed in the right direction through a combination of "process, automation, hiring good people, and a determination to discard the destructive practices of the past."

Randy Ortiz of Data Center Journal suggests a little something called the "Data Center and IT Ops Diet." When you are on a diet, Randy notes, "you carefully examine what you take in and how much you burn off. The Data Center and IT Ops diet [which he describes as driven by the economic downturn] provides you with the necessities only: availability and efficiency. There is no room for large projects with long-reaching ROIs."

And what about Rakesh of Gartner? He has similar advice: before jumping into some new data center build-out, a "continuous process of data center improvement [should] be established" (in fact, his whole paper is called, appropriately enough, "Continuously Optimize Your Data Center Capacity Before Building or Buying More," which is what got me started on this whole rant in the first place). "Too often," Rakesh writes, "the data center improvements are considered a project" and not a long-term, on-going process. He suggests continually optimizing IT infrastructure because "sprawl in the infrastructure creates sprawl in the data center." The tactics he lists for consideration are pretty basic: consolidation, virtualization, and tossing out older hardware.

These are great starts. We're seeing customers do them all. And many of those customers we are working with are implementing these ideas as an integral part of a data center optimization project that also includes working with our software and an active profiling engagement (we do, of course, as a result, suffer from a sampling bias). However, what we've been helping these customers do is relevant here: they can identify and decommission specific orphan servers -- the actual ones that are sitting around doing nothing. They can find the specific candidates for virtualization, where server utilization is very low, and workloads are such that they can be stacked together with other workloads on fewer boxes. They can locate and set up intelligent server power management -- identifying hardware for which workloads are very cyclical, enabling servers to be completely shut down to save power during off hours. And, from all this can also come recommendations for ways to optimize your overall IT operations, including setting up policy-based IT infrastructure automation and an internal compute cloud. You could even use some of the decommissioned servers as spare capacity.
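To illustrate how the profile data might feed those decisions, here's a rough sketch that sorts servers into the buckets described above (orphans, virtualization candidates, power-management candidates). The thresholds are arbitrary assumptions for illustration, not recommendations from Cassatt or the analysts quoted.

```python
def classify_server(avg_util, peak_util, off_hours_util):
    """Bucket a server based on its observed utilization profile.

    All arguments are CPU utilization percentages taken from the profiling data.
    The thresholds below are illustrative only; tune them to your environment.
    """
    if peak_util < 2:
        return "orphan candidate: powered on but doing essentially nothing"
    if avg_util < 10 and peak_util < 40:
        return "virtualization candidate: low, steady load that could be stacked"
    if off_hours_util < 5 and peak_util > 50:
        return "power-management candidate: cyclical load, power down off-hours"
    return "leave as-is for now"

# Example usage with made-up profile numbers:
print(classify_server(avg_util=0.5, peak_util=1, off_hours_util=0.5))   # orphan candidate
print(classify_server(avg_util=6, peak_util=25, off_hours_util=4))      # virtualization candidate
print(classify_server(avg_util=30, peak_util=85, off_hours_util=2))     # power-management candidate
```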

As cool as any of this may sound, though, there's no need to get carried away. Start simply. Find out what your servers are doing. Or start experimenting with optimization in a limited corner of your environment where the stakes aren't very high.

But start.

The industry that spawns some of the most creative April Fools' Day jokes shouldn't be one itself.

If you're interested in getting a preview copy of our second annual Data Center Survey results under non-disclosure, let me know at jay.fry@cassatt.com.

Wednesday, March 25, 2009

Virtualization complexity is not going away, so plan for reality

Earlier this year, Cassatt ran our second annual Data Center Survey, getting responses from several hundred data center professionals in our database. Last year's survey (register to download it here) focused mainly on data center energy efficiency. This year, we asked those same questions again -- and then some. As the issues in the data center have shifted, so have our questions. We hit a broad set of topics -- from virtualization to cloud computing to the impact of the economy on data center efficiency projects.

We’ll be publishing the overall results in a couple weeks, but I have been sifting through the survey data and found some interesting tidbits about virtualization that I thought were worth noting, especially in light of the press announcement we made today (more on that in a moment).

VMware is still the runaway leader, and virtualization is ready for primetime

No news there. More than 81% of our respondents have VMware currently deployed in their environment. But it certainly didn't stop there. Citrix and non-Citrix derivatives of Xen were installed in nearly 23% of organizations. Microsoft Hyper-V topped 17%. Other virtualization technologies were listed as well, including Parallels Virtuozzo Containers, Solaris Containers/Zones, IBM LPARS, and a smattering of others.

We also asked where virtualization was being used. The answer: everywhere. Almost 66% of respondents said they were using virtualization in not only development and test environments but also in production. That's up 3% from our 2008 survey. This year, only 13% were using virtualization solely in dev/test. Virtualization, as if we didn't already know, is all over the place and being relied on to support important applications.

Standardizing on one virtualization technology? IT ops folks don't see it happening

We asked our survey-takers to characterize how they plan to support virtualization in their data centers: did they expect to standardize on just one technology or support multiple? Only 23% said that they plan to support one and only one virtualization technology. Painful as it might be from a management standpoint, nearly 68% of respondents acknowledged that their data centers would have hypervisors from multiple vendors. They either:

· Are already supporting multiple virtualization technologies -- and don't expect that to change (23%)
· Are planning to support multiple virtualization technologies in the future (17%) or
· Hope to support only one virtualization technology, but fully expect that others will appear over time (28%)

Sounds like they have a pretty good handle on what's coming at them.

Managing physical and virtual systems with the same management stack?

Many people seemed to be planning to manage both their virtual servers and their more traditional physical server environments with the same tool set. In fact, 41% told us exactly that. Well, that would be nice, wouldn't it? That answer seems to be a bit of wishful thinking, especially if you look at what people are doing right now (like what tools they're using). Still, it's one of the things we at Cassatt want to make more and more possible, so we applaud their foresight on this. Meanwhile, nearly 29% hadn't really made a strategic choice yet about how to manage these mixed physical and virtual environments, which tells me that customers are still thinking about this one. This also puts a big chunk of the organizations out there behind the eight ball, for sure, given how far adoption of virtualization has come. No firm management strategy? Yikes.

How you can plan for all this complexity: prepare for reality

All of these data points bring me back to the design principle we try to keep in mind when creating Cassatt products: reality. If you read today's announcement, we talked about delivering our new GA-level support for Parallels Virtuozzo Containers. This is in addition to the VMware and Xen support we already have in Cassatt Active Response, and the support for controlling physical servers, too (running Linux, Solaris, Windows, and AIX so far).

Most of the discussions about dynamic data centers or internal clouds that you hear from VMware and others don't bring up any of the real-world diversity that exists in IT shops. Maybe this is one of the reasons that some end users are highly skeptical of the whole concept of more flexible, shared, dynamic infrastructures. You can't talk about improving how someone runs their data center if you're not talking about the actual data center that they have to live with every day. (To be fair, as John Humphreys mentioned in my recent interview with him, Citrix at least acknowledged in their recent announcements that having a tool that manages both Xen and Hyper-V would be useful. It's a start, for sure.)

What we're continuing to hear directly from customers and through surveys like the one I previewed a bit here is that virtualization complexity is here to stay. It can't be ignored. Instead, it needs to be taken into account: you'll have applications that use VMware. You'll acquire someone (or be acquired by someone) running Xen. Or Microsoft Hyper-V. Or Virtuozzo. Or (deep breath) all of the above. It's not what you'd like, nor something that's going to make your life as an IT operations professional easier in the short run. But it's reality.

And, when you add in the fact that you still have real, actual, physical servers running in your environment that need care and feeding to support applications, you have a physical-and-virtual management challenge that warrants an approach that smooths the running of IT and reduces expenses -- rather than the opposite.

There's more about Cassatt Active Response 5.3, out today, in our press release, and overview product info is on our product pages, if you're interested.

And, incidentally, there are a couple of other interesting points worth highlighting from the survey data that I'll try to write about in upcoming posts prior to the whole report going public.

Tuesday, March 24, 2009

Webcast: Using what you already have to create a cloud

If there's one thing customers hate, it's a great idea that comes with a caveat. Especially a caveat that says something like: "In order to benefit from said great idea, you are required to tear everything out and start all over." That sort of behavior usually gets you kicked out of data center and IT operations meetings. Data centers don’t operate that way. Data center operators don’t operate that way.

However, if there's a way to link a new idea or technology incrementally with what's already underway or where investments have already been made, you're going to get a much warmer reception.

OK, so I'll apply the above truisms to cloud computing (and internal/private clouds especially). If organizations think that in order to get the benefits of an internal cloud they have to start buying their infrastructure anew, they are going to be interested only where they have plans (and budgets) that can support that kind of spending and that kind of change. That's especially difficult in this economy. In fact, there's a non-zero chance that they might be openly hostile and, despite the promised benefits of internal clouds, ignore the concept completely.

Instead, my belief is this: a really useful internal cloud is one that leverages what they already have. And, man, do people have a lot of messy, complex stuff -- VMs, physical servers, applications, networking, the works.

So, how do you build an internal cloud out of what you already have in your data center?

Based on our experiences with customers, there are a couple things you'll need to know and do. Some of these are organizational and technological challenges (Craig Vosburgh has talked about some of the issues in his posts here before), like a willingness to change current procedures and roles, or even having your once-relatively-static CMDB shift into a more dynamic mode.

Once you know you're going to be able to address some of those high-level concerns, you then have to get a real, live project going around an actual set of applications. Rubber, meet road.

To rattle off some of the things we've learned from our customers in getting internal cloud projects off the ground, we corralled a couple of our technical experts for a webcast on the topic next week. Steve Oberlin, Cassatt's chief scientist and newly minted blogger (check out Cloudology), and the aforementioned Craig Vosburgh, our chief engineer, are going to walk through what they've seen that works, what doesn't work, and -- probably most interestingly -- talk about how you can use the data center components you've already invested in as the basis for a cloud-style architecture on your own premises.

This webcast is a follow-on to one we did back in November with James Staten of Forrester, which covered what internal clouds were in the first place (you can view the playback of that one here -- simple registration required) and how they might help data center efficiency. This one is about the next step: where do you start?

One thing that might be of special interest: Steve and Craig will point out some internal cloud starter projects and the characteristics that make those projects good pilots. And, yes, they will talk a bit about how something like Cassatt's software can help, but mostly the webcast will be a synthesis of what we've learned in customer engagements in the hope that you can benefit from some of our war stories. And, the two of them are planning to talk for about 20 minutes each, leaving a bunch of time for live questions from the attendees. You have my word on that. (It helps that I'm the emcee.)

Thanks, by the way, to Eric Lundquist of Computerworld for pointing his readers to the webcast. "I don't usually plug vendor webcasts," says Eric in his blog, "but this upcoming one from Cassatt looks interesting. If you go to the webcast, let me know if you like it or not." We'll work hard not to disappoint, and we'll let you be the judge of that.

There's one caveat (of course): you'll have to show up to the webcast.

You can register for the Cassatt April 2 webcast "How to Create an Internal Cloud from Data Center Resources You Already Have" here.

Monday, March 23, 2009

Internal clouds and a better way to do recovery to a back-up datacenter

Last post, we talked about a variety of failures within a datacenter and how an internal cloud infrastructure would help you provide a better level of service to your customers at a lower cost. In this post, we're on to the final use case for our discussion of recovery capabilities enabled by using an internal cloud infrastructure -- and I think we've left the best for last.

In the wake of 9/11 and to respond to SOX compliance issues, many companies have been working on catastrophic disaster recovery solutions in the event a datacenter becomes unavailable. This kind of massive failure is where a cloud computing infrastructure really shines as it enables capabilities that to date were unattainable due to the cost and complexity of the available solutions. That said, we'll build on the previous example (multiple applications of varying importance to the organization) but this time the failure is going to be one in which the entire datacenter that is hosting the applications becomes unavailable (you can use your imagination on what kind of items cause these types of failures…).

Let's lay the groundwork for this example and describe a few more moving parts required to effect the solution, once again using a Cassatt implementation as the reference point. First, the datacenters must have a data replication mechanism in place, as the solution relies on the data/images being replicated from the primary site to the backup site. The ideal approach is two-phase commit: because writes to the primary datacenter are committed to the backup datacenter at the same time, there is no data loss on failure (other than transactions in flight, which roll back). While this is the preferred approach, if you can relax your data coherency requirements (such that the backup site's data is within 30-60 minutes of the primary site), then the required technology and cost can be simplified and reduced substantially by using one of the myriad non-realtime replication technologies offered by the storage vendors.
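As a small illustration of the relaxed-coherency option, here's a sketch of the kind of check you might run against an asynchronous replication setup. The lag-measurement function is a placeholder for whatever your storage vendor's tooling actually reports, and the numbers are invented.

```python
RPO_MINUTES = 60  # maximum acceptable data-loss window agreed with the business

def replication_lag_minutes():
    """Placeholder: return how far (in minutes) the backup site's data trails the
    primary, as reported by your storage vendor's replication tooling."""
    return 42  # illustrative value only

lag = replication_lag_minutes()
if lag > RPO_MINUTES:
    print(f"ALERT: replication lag of {lag} min exceeds the {RPO_MINUTES}-min RPO")
else:
    print(f"OK: backup site is within {lag} min of the primary (RPO {RPO_MINUTES} min)")
```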

The second requirement of the solution is that the IP addresses must stay the same across the recovery sites (meaning that when the primary site is recovered to the secondary site, it will come up in the same IP address space it had when running in the primary datacenter). The reason for this requirement is that many applications write the IP address of the node into local configuration files, making those addresses very difficult to find and prohibitively complex to update during a failure recovery. (Think of updating thousands of these while in the throes of performing a datacenter recovery, and how likely it is that at least a few mistakes will be made. Then add to that how difficult it would be to actually find those mistakes.) We've learned that we end up with a much more stable and easier-to-understand/debug recovery solution if we keep the addresses constant.
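To give a sense of why hunting down those hardcoded addresses is so painful, here's a small sketch that scans a tree of configuration files for IP-address-looking strings. The path and file extension are assumptions; real applications scatter addresses across many more formats than this would catch.

```python
import re
from pathlib import Path

IP_PATTERN = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def find_hardcoded_ips(config_root):
    """Walk a directory of config files and report any literal IP addresses found."""
    hits = []
    for path in Path(config_root).rglob("*.conf"):  # file extension is an assumption
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for ip in IP_PATTERN.findall(line):
                hits.append((str(path), lineno, ip))
    return hits

# Example usage against a hypothetical config directory:
for path, lineno, ip in find_hardcoded_ips("/etc/myapp"):
    print(f"{path}:{lineno}: {ip}")
```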

On the interesting-topics front, there are two items that are perhaps unexpectedly not required for the solution to work. First, the backup datacenter is not required to have hardware identical to the primary site's (both the type of hardware and the quantities can differ). Second, the backup datacenter can be used to host other lower-priority applications for the company when not being used for the recovery (so your investment is not just sitting idle waiting for the rainy day to happen, but is instead contributing to generating revenue).

With those requirements and background out of the way, let's walk through the failure and see how the recovery works. Again, we'll start with the assumption that everything is up and running in a steady state in the primary datacenter when the failure occurs. For this failure, the failover process is manually initiated: while we could automate the failover, recovering one datacenter into another just seems like too big a business issue to leave to automation, so we require the user to initiate the process. Once the decision is made to recover the datacenter into the backup site, the operator simply runs a program to start the recovery process. This program performs the following steps (a rough sketch of such a program follows the list):
  • Gracefully shut down any applications still running in the primary datacenter (depending on the failure, not all services may have failed, so we must start by quiescing the systems).

  • Gracefully shut down the low-priority applications running in the backup datacenter in preparation for recovering the primary datacenter applications.

  • Set aside the backup datacenter's data so that we can come back to it later when the primary datacenter is recovered. When we want to migrate the payload back to the primary site, we'll want to recover the applications that were originally running in that backup datacenter. There isn't anything special being done in this step in terms of setting aside the data; in practice, it just means unmounting the secondary datacenter's storage from the control node.

  • Update the backup datacenter's network switch and routing information so that the switches know about the production site's network configuration. The backbone routers and other upstream devices also need to be updated so that they know about the change in location.

  • Mount the replicated data store(s) into place. This gives the control node in our Cassatt-based example access to the application topology and requirements needed to recover the applications into the new datacenter.

  • Remove all existing hardware definitions from the replicated database. We keep all of the user-defined policies that describe the server, storage, and networking requirements of the applications. However, because the database we are recovering includes the hardware definitions from the primary datacenter and none of that hardware exists in the secondary datacenter, we must remove those definitions prior to starting the recovery so that the system is forced to go through its hardware allocation steps. These steps are important because they map the application priorities and requirements to the hardware available in the backup datacenter.
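Here's the rough sketch promised above of what such an operator-run recovery program might look like. It's a dry-run skeleton only: each step simply logs what a real, site-specific script or vendor tool would do, and none of this is the actual Cassatt implementation.

```python
def step(description):
    """Placeholder for a real, site-specific script or vendor tool."""
    print(f"[failover] {description}")

def recover_primary_into_backup():
    """Manually initiated orchestration of a primary-to-backup datacenter failover."""
    step("Gracefully shut down any applications still running in the primary datacenter")
    step("Gracefully shut down low-priority applications in the backup datacenter")
    step("Unmount the backup site's own storage from the control node and set it aside")
    step("Update backup-site switches and backbone routers with the primary site's network configuration")
    step("Mount the replicated data store(s) so the control node sees the application topology and policies")
    step("Remove the primary site's hardware definitions from the replicated database, keeping the policies")
    step("Start the cloud controller's recovery cycle (inventory, allocation, activation)")

if __name__ == "__main__":
    recover_primary_into_backup()
```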

Once these steps are completed, the recovery logic in the cloud infrastructure is started and the recovery begins. The first thing the cloud infrastructure controller must do is inventory the hardware in the secondary datacenter to determine the types and quantities available. Once the hardware is inventoried, the infrastructure takes the user-entered policy in the database and determines which applications have the highest priorities, then begins the allocation cycle to re-establish the SLAs on those applications. As hardware allocations complete, the infrastructure again consults the stored policy to keep the dependencies between the various applications intact, starting them in the required order to recover the applications successfully. This cycle of inventory, allocation, and activation continues until either all of the applications have been recovered (in priority/dependency order) or the environment becomes hardware constrained (meaning there is insufficient hardware of the correct type to meet the needs of the applications being recovered).
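A minimal sketch of that inventory/allocation/activation cycle might look like the following. The data structures and helper logic are invented for illustration (a real controller would match hardware types and retry as dependencies come up, not just count boxes), and this is not the actual controller code.

```python
def recover_applications(apps, available_servers):
    """Recover applications in priority order until done or hardware runs out.

    apps: list of dicts with 'name', 'priority' (lower = more important),
          'depends_on' (names that must already be running), and 'servers_needed'.
    available_servers: count of interchangeable servers inventoried in the backup site.
    """
    running = set()
    for app in sorted(apps, key=lambda a: a["priority"]):
        if not all(dep in running for dep in app["depends_on"]):
            print(f"skipping {app['name']}: dependencies not yet running")
            continue
        if app["servers_needed"] > available_servers:
            print(f"stopping: not enough hardware left for {app['name']}")
            break
        available_servers -= app["servers_needed"]
        running.add(app["name"])
        print(f"recovered {app['name']} on {app['servers_needed']} servers")
    return running

# Example usage with made-up applications and a 12-server backup site:
apps = [
    {"name": "orders-db",  "priority": 1, "depends_on": [],            "servers_needed": 4},
    {"name": "orders-app", "priority": 2, "depends_on": ["orders-db"], "servers_needed": 6},
    {"name": "reporting",  "priority": 3, "depends_on": ["orders-db"], "servers_needed": 8},
]
recover_applications(apps, available_servers=12)
```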

The same approach outlined above is done in reverse when the primary datacenter is recovered and the applications need to be migrated back to their original locations. Once the applications are recovered back to the primary datacenter, the applications that were originally running in the backup datacenter can be recovered by simply putting the storage mounts back in place and restarting the control node. In this case no extra scrubbing steps are required, as the hardware has not changed this time. After restarting the control node, the applications are recovered just as if a power outage had happened. Once restarted, the applications will pick up exactly where they had left off prior to the primary datacenter failure.

Thanks for taking the time to read and I hope this post has you thinking about some of the major transformational benefits that your organization can receive from adopting an internal cloud infrastructure for running your IT environment. My next installment will be a discussion of how an internal cloud infrastructure's auditing and tracking capabilities can provide your organization an unparalleled view into how your resources are being used. We'll then explore how this type of information can enable you to provide your business units with billing reports that show exactly what resources their applications used and when for any given month.

Wednesday, March 18, 2009

A snapshot: actual customer questions about internal cloud computing

After my recent post covering Nicholas Carr's presentation at the IDC conference, I received a comment from Jon Collins voicing extreme skepticism that the transformation Carr talked about (and I agreed with) was, in fact, underway.

One of the fun parts of being on the front lines of what's going on in the data centers these days is having a pretty interesting set of data points that a lot of other folks don't have. When faced with questions like Jon's or even the more basic, "What's really going on out there with cloud computing?" I always go back to what our customers and prospects are saying.

So, while sitting in on customer meetings with the Cassatt sales team over the past few weeks, I began writing down a lot of what I was hearing and putting it into general buckets of responses. I'll share some of the more interesting ones here (leaving out confidential stuff, of course) with the hope of providing some direct, unvarnished views into what's happening.

I'll let you be the judge, but from the list of questions I gathered, I think people may be farther along than many think.

One caveat before I start: Cassatt talks to end users about ways to make their data centers more efficient. The way we can help them do that is to create a cloud-like environment inside their data center’s four walls, using their existing physical and virtual servers, networks, and other infrastructure resources – all managed by policies they set. We call that an internal or private cloud. A lot of other people do, too (see Gartner’s Tom Bittman or James Urquhart, to name two). As a result, the things people ask us about are generally in the context of internal cloud computing, and not much about external cloud services. So, don't expect me to make any sweeping generalizations about the external cloud part of the market; we're not getting good data about that. Instead, this should provide some real and specific insights into customers' thinking about internal clouds.

As you can imagine, we get all types of questions. I grouped the ones below into a couple different categories, based upon where people are in their adoption or thinking process:

Tire-kickers: Can you tell us what a cloud is? Aren't we already doing that with virtualization?

These folks are really just trying to get their heads around what cloud computing is. They aren't close to getting anything started, but have probably heard they should investigate the topic. So, these questions are pretty basic:

"What is the difference between an internal cloud and virtualization?"

"Do I need to virtualize everything in order to create an internal cloud?"

"What are the business use cases that indicate I should look at a cloud environment rather than focus on other IT spending such as virtualization?"

"I have a data center consolidation project on the table right now. How does this internal cloud environment intersect with that?"


You'll notice some pretty basic terminology confusion, and a big tendency to compare the new cloud concepts with things they know, like virtualization, even if that comparison is not always the best one. These reminded me of the 80% of conversations that we tend to have at trade shows.

Getting their arms around it: can you explain how and when an internal cloud can be useful?

The IT folks who asked these next questions were beyond the definition stage and were generally trying to think about specific use cases, in most cases as a way to begin to put the boundaries around a particular project. That makes these questions a lot more practical than the last, because they are thinking about the internal cloud concepts in terms of their actual business needs. This group seemed to be in the brainstorming stage about what the benefits and drawbacks could be, usually focusing on the benefits. (After all, it's the benefits that make these concepts appealing, right?)

"How do I determine if an application is ideal for an internal cloud?"

"If I have a widely distributed organization where there is not a lot of centralized IT for data centers, but instead I have lots of very small data centers, server rooms, closets, etc., should I still consider an internal cloud?"

"I want to manage our internal hardware as if it were Amazon EC2 instances."

"In our development environment, we want the ability to move workloads to and from Linux and Windows environments, including across physical and virtual servers."

"We have a zero-server-growth mandate. We're trying to figure out ways to repurpose servers to get around this, and to find other ways to cut back on expenses. We're watching usage patterns on our servers, but really have no formal methodology for doing so."

Reality bites: Thinking about trade-offs and impact on current operations

Another bunch of IT folks I heard from were beyond the brainstorming and blue-sky thinking regarding the potential internal cloud benefits. Instead, they wanted to go directly where most data center and IT operations folks take any conversation about new technology: what potential problems is this going to cause? And, once they're feeling a little comfortable on that front, they really want to figure out how they get there from here. They were trying to come up with some of the actual steps that would need to happen to make something like this work.

"How do I create a cloud when I have a lot of static infrastructure right now. How do I get started, and how do I get these servers to do this?"

"How do I assure everyone that this open usage of an internal cloud of servers improves SLA performance rather than hinders it? Why should I believe that this 'all-in-one' kind of solution will do any better than my specialized, hardened solution for making sure that my service levels are kept?"


"How do I agree on security concerns over networks in a private cloud. How does that work, and what do I have to give up network-wise to have a private cloud?"

"Does the internal cloud exist outside of my standard IT operations center or is it part of it? I have IBM Tivoli, so what happens to the application provisioning process? What happens to my current monitoring process?"


"Can I use an internal cloud and still have disaster recovery, and how does that work with my storage?"


"In the case of a major data center outage or failure, what happens to the internal cloud controllers themselves, and how is restoration handled?"


I should also point out that this stage of the conversation is where any fluffy hype about the wonders of cloud computing gets bowled over by the day-to-day realities of running a real data center that supports real applications and real users. If you are going to make a case that things are going to work much better with an internal cloud approach rather than how you're running IT today, you have to be able to answer these kinds of queries.

I'm actually ready to get started: any tips?

I also found a set of customers and prospects who really sounded ready to get going; they are just looking for some of the final important pieces to fall into place. They want to know what's worked and what hasn't worked with other implementations. They want the insider's view of how they should approach things. And probably most importantly, how not to.

"Should I focus on development to create a cloud or on production? If I choose production, what are the 'gotchas' when creating an internal cloud, like how do I know how many servers I will end up needing?"

"I have to actually prove I can create a cloud in the next 30 days. Management wants to know that it can service the jobs -- automation will come right after that. I need to be able to use this capability to add $2 million to the top line of our SaaS business."


"How do we figure out when to have our internal cloud provision more servers or turn something off because it has become idle?"


"What are the best practices for implementation of an internal cloud?"


Other more complicated issues are getting asked about, too

In addition, there are some more complex questions that usually reflect a lot of hands-on experience even before they're asked, and that then take a team of implementation professionals working with the customer to answer adequately. We heard some of those, too.

"How do I do chargeback for billing for internal clouds if I am not billing for a computer? What am I billing for: CPU, cycles, memory, etc.?"

"If I have a complex enterprise application, using some combination of STRUTS, SQL Server, Tomcat, and Apache, that accesses a mainframe, where some of the application will run in the cloud and some will be on traditional infrastructure, or controlled through another business unit, what are the considerations for considering a hybrid cloud model?" [Whew! Is that all?]

These and others like them frequently have answers that start with, "Well, it depends upon what you're trying to do...."

...

After posting all these questions, I realize I probably should provide links back to the Cassatt site or our product documentation to give some of the actual answers. But the point was not to pitch Cassatt software; it was to provide a snapshot of where some of the end-user thinking is about internal clouds and their data centers at this moment in time.

My summary on all this? We talk mainly to very large corporate and government organizations with complex, messy data centers. They are conservative about change, but some are actively seeking a better way to run IT operations. And, within that group, I'd say we're seeing customers move from asking more questions like those at the beginning of this post to asking the ones in the latter part of it -- the ones more firmly based in reality and the nitty-gritty details of making something like an internal cloud work.

Time will tell, and this is obviously just one set of data points. But as much as I want to argue for or against the internal cloud concept (or Carr's "big switch"), what end users are doing will be the true measure of progress. I'll keep you posted.

Friday, March 13, 2009

Like the Big Dig, ex-IDC analyst John Humphreys believes cloud computing will 'take time'

In the last post, I interviewed John Humphreys, formerly the resident virtualization guru at IDC, now with the virtualization and management group within Citrix. John characterized Citrix as moving beyond criticism that they aren't doing enough with their XenSource acquisition and, in fact, taking the bull by the horns -- offering XenServer for free and focusing on aspects of heterogeneous management.

That first crack at being able to manage diverse types of virtualization in the data center is certainly needed. It's one of the first steps down a path that hasn't been well-trodden so far (and it has been especially ignored by the virtualization vendors themselves). OK, but how might that all fit into the industry conversation around cloud computing? Glad you asked...

Jay Fry, Data Center Dialog: John, I asked your old buddy Al Gillen at IDC how he thought virtualization connected to (and was distinct from) cloud computing. I'd love to get your thoughts on that, too.

John Humphreys, Citrix: I see virtualization as the foundation to any "cloudy" infrastructure -- public or private. In order for any cloud to live up to the promises, it must be able to deliver services that are isolated and abstracted from the underlying infrastructure. Clearly virtualization delivers on both of those requirements in spades!

In my opinion, the opportunity is to build workflow and policies on top of the virtualized infrastructure. Today those workflows are built at the siloed application or VM level, but I believe it will require policies and workflows that exist at the IT service level. To me, execution on this sort of vision will take a long term, multi-year commitment.

DCD: The big virtualization vendors -- Citrix, VMware, and Microsoft -- have all also talked about their cloud computing visions. While I like that VMware talks about both internal and external clouds, they seem to think that everyone will be virtualizing 100% of their servers no questions asked, and that no other virtualization technologies will exist in a data center. That, to me, puts them (and their vision) out of synch with the reality of a customer's actual data center. What's your take on this?

John Humphreys: First and foremost, I agree -- I simply don't see customers going 100% virtual any time soon. There are too many "cultural barriers" or concerns in place to do that.

The second point I'd make is that to me, cloud is still years away from becoming a mainstream reality. Just as a point of comparison, x86 virtualization, measured by VMware, is over 10 years old and now approximately 20% of servers are being virtualized each year. These things take time.

Finally, I'd point out that in the near- to mid-term, the ability to federate between internal and external clouds is a huge hurdle for the industry.

Concepts like "cloudbursting" are appealing but today are architecturally dependent. In addition to the technical ability to move services between internal and external data centers, security and regulatory impacts to cloud are "hazy" at best.

DCD: You've now had a chance to view the virtualization market from two different angles -- as an analyst at IDC and now as an executive working for a vendor in the space. What about the space do you see now that you didn't catch before in your analyst role?

John Humphreys:
The move for me has been really eye-opening and educational from a lot of different perspectives. I think the one that most drew me to the role is the level of complexity that vendors must deal with in making any decision or product changes.

In the analyst realm, the focus is exclusively on strategy. When you jump over the fence, the strategy is still critical, but it is the details in the execution of that strategy that ultimately define the success or failure of any move. That means not only being able to define a successful strategy, but also being able to communicate it to the organization, get the sales teams to support the moves, coordinate the infrastructure changes that must occur internally, address supply chain issues, work with partners, etc.

As an analyst, I knew I was only seeing the first piece of the cycle, so I made the move so I could experience "the rest of story."

Being from Boston, I see a metaphor in the Big Dig and the Chunnel projects. Being an analyst is like planning the Chunnel project, while being part of a technology vendor is like planning and executing the Big Dig. The Big Dig planners had to worry about 300 years of infrastructure and needed to put all sorts of contingency plans in place to ensure successful execution. That "be prepared" requirement for flexibility appeals to me.

DCD: What's the most interesting thing that you see going on in this space right now?

John Humphreys: I see some very interesting business models being developed that leverage the cloud computing concept. I believe the industry is on the cusp of seeing a host of new ideas being introduced. And, perhaps contrary to others, I believe the down economy is a perfect incubator as expectations over the near term are lowered, giving these start-ups the opportunity to more fully develop these new and great ideas and business models. I expect we'll start to see the impact of all this innovation in the next 3-5 years.

...

Data center change: A Big Dig?

Thanks to John for the interview. I know the analogy to the Big Dig was something John meant in the context of how you get an organization building a product to do something big -- and how you have to make sure you're taking into account all the existing structures. However, I'm thinking it's a good one for data centers in general making transitions to a new model. Here's what I mean:

Your goal is to change how your data center runs to drastically cut your costs and improve how stuff gets done. There's a lot of infrastructure technology from Citrix, VMware, Cassatt, and a lot of others that can help you: virtualization, policy-based automation, management tools, the works. But there's all your critical infrastructure that's already in place and (mostly) working that you have to be careful not to disrupt. It's a big job to work around it and still make progress. Kinda like they had to do with the Big Dig.

But, hey, I'm not from Boston, so maybe the analogy breaks down for IT projects. I kind of hope so, actually, since cost overruns and big delays certainly aren't what we're all aiming for. In IT, you certainly have a greater ability to do smaller, bounded projects that show real return -- and still make notable, tangible improvements to running your business. Those of you who lived through the Big Dig probably know better than I do how close a match this is.

On the hybrid public/private cloud capabilities, I think John's on target. The industry conversations about this capability (moving computing from inside your data center, out to the cloud, and then back again) are reaching a fever pitch, but there are a few things that have to get solved before this is going to work. (Here's my recent post on hybrid clouds if you want to dive deeper on the topic.) But it's certainly one of the models that IT ops is going to want to have at its disposal in the future.

The approach we're taking at Cassatt is to help people think about how they might do "cloudbursting" by starting the work on creating a cloud internally first. At the same time, customers often begin experimenting with external cloud services. That early experience on both sides of the internal/external cloud divide will be a big help. (And, we've built our software with an eye toward eventually making cloud federation a reality.)

There is one thing I might quibble with John on that I didn't during the interview -- his supposition that virtualization is going to be at the core of any "cloudy" infrastructure. My take is that while the concept of separating the software and applications from the underlying hardware infrastructure is a key concept of creating a more dynamic infrastructure, you can still make headway here without having to virtualize everything.

In fact, we've heard a great deal of interest around the internal cloud computing approach, especially when it leverages what someone already has running in their data center -- physical or virtual. Virtualization can be a useful component, but being 100% virtualized is not a requirement. I was pretty critical of VMware's assumptions around this topic in a previous post. The Citrix approach that John walks through above is definitely describing a more realistic, heterogeneous world, but still has some assumptions you'll want to be careful of if you're looking into it.

So, if you are starting on (or are in the messy middle of) your own data center Big Dig -- exploring virtualization, the cloud, and all the infrastructure impact that those might have -- feel free to leave a comment here on how it's going.

If you can get past the Jersey barriers, of course.

Wednesday, March 11, 2009

John Humphreys, now at Citrix, sees virtualization competition shifting to management

I'm sure when John Humphreys left IDC and joined Citrix last year he had to endure lots of barbs about joining "the dark side" from his analyst compatriots. Of course, his new vendor friends were probably saying the exact opposite, yet much the same thing: no more ivory tower, John; it's time to actually apply some of your insights to a real business -- after all, he now had a bunch of real, live customers who wanted help solving data center software infrastructure and IT operations issues.

Regardless of comments from either peanut gallery, John did indeed vacate Speen Street and trade his role running IDC's Enterprise Virtualization Service for a spot at Citrix. He's now focused on the overall strategy and messaging for their Virtualization and Management Division, the group built from the XenSource acquisition and that is actively working on virtualization alternatives to VMware -- and more.

I thought John's dual perspectives on the market (given his current and previous jobs) would make for a worthy Data Center Dialog interview, especially given Citrix's recent news that they will offer XenServer for free. And, there's a dual connection with Cassatt, too: long before they were acquired by Citrix, Cassatt and XenSource had worked together on making automated, dynamic data center infrastructure that much closer to reality (in fact, if you poke around our Resource Center or Partner page, you'll find some old webcasts and other content to that effect). And, we worked with John quite a lot in his IDC days.

In the first part of the interview that I'm posting today, I asked John about the virtualization market, criticism that Citrix has been getting, and what they have in store down the road.

Jay Fry, Data Center Dialog: John, from your "new" vantage point at Citrix, how do you see the virtualization market changing in 2009?

John Humphreys, Citrix: I see the basis for competition in virtualization changing significantly in 2009. In recent years, the innovation around virtualization has been squarely focused on the virtualization infrastructure itself with things like motion, HA, workload balancing, etc. The pace of innovation and rate of customer absorption of recent innovations has slowed. This was inevitable as the demand curve for new innovations around any technology eventually flattens.

Rather than competing to offer new features or functions on top of the base virtualization platform, I see the companies adding management capabilities that extend across multiple platforms. It's only through this ability to manage holistically that customers will truly be able to change the operational structure of today's complex IT environments.

DCD: How do you think Citrix will play into those changes?

John Humphreys: In a sentence, we are working to drive these changes in the virtualization marketplace.

Specifically, the company recently announced that the full XenServer platform would be freely available for download to anyone. This is a truly enterprise-class product -- not just a gimmicky download trial. It has live migration and full XenCenter management for free, with no limits on the number of servers or VMs a customer can host.

At the same time, we introduced a line of management products (branded Citrix Essentials) that are designed to provide advanced virtualization management across both XenServer and Hyper-V environments. Initially, we have focused on providing tools for integration and automation of mixed virtualization platform environments with capabilities like lab management, dynamic provisioning, and storage integration. Over time you will see more automation capabilities with links back to business policies and infrastructure thresholds.

DCD: What are your customers saying are the most important things that they are worrying about right now? How influenced by the economy are those priorities?

John Humphreys: Cost cutting. Pure and simple. We see the rapid economic decline setting the agenda for at least 2009, and it was a major factor in the decision to offer XenServer for free. In tough economic times, well-documented cost-saving measures like server consolidation are relied upon even more, and being the low-cost provider of an enterprise-class virtualization solution gives Citrix an opportunity to get XenServer into the hands of millions of customers.

DCD: I've seen some press coverage about disappointment regarding the XenSource acquisition by Citrix. The main complaint seems to be that Citrix isn’t being as aggressive as it should be in the space and VMware seems to be adding fuel to the fire by implying that Microsoft will eventually block out Citrix. What are your thoughts on how Citrix is handling XenSource and the competitive environment?

John Humphreys: I've heard those criticisms as well. I think what you have seen already in 2009 is that Citrix has become a lot more aggressive with XenServer. We think we have a distinct position in the marketplace and a unique opportunity. What you saw Citrix announce on February 23rd was the first opportunity to tell the full virtualization story. The free platform download (with motion and management) and the ability to add cross-platform advanced management capabilities is truly unique. We like where we are positioned going forward...and from the early indications, the market likes our position as well.

DCD: Why did you make the jump to the "dark side" of working for a vendor? Is the grass actually greener?

John Humphreys: I always knew I wanted to combine strategy and execution, as to me that is where the magic happens.

...

Up Next: In the second part of this interview, I ask John about his thoughts on (you guessed it) cloud computing, the pros and cons of what Citrix, VMware, and Microsoft have planned in that space, and how much different things look now that he's not an industry analyst with IDC.

Friday, March 6, 2009

Nicholas Carr: IT and economy are cloudy, but the Big Switch is on

For those at IDC's Directions conference this week in San Jose (my highlights are posted here) who hadn't yet read his book, The Big Switch, Nicholas Carr used his keynote to walk through his "IT-is-going-to-be-like-the-electrical-utility" metaphor, and his reasoning behind it. Many, I'm sure, had heard it before (some even complained about it a bit on Twitter). But that doesn't make it any less likely to come true.

IT, he argued, is the next resource to go through a very similar transformation to what happened with electricity, moving from private generation of what was needed to a public grid. Grove's law about bandwidth's growth has "been repealed," said Carr. It is "the beginning of the next great sea change in information technology" -- the move to utility or cloud computing. (For more detail on this, Andy Patrizio of Internetnews.com also posted a good run-down.) So far, so good, but here's a question: has anything shifted since Carr's book came out? How far have we come?

Big changes in attitudes on cloud/utility computing since last year

In the 14 months since The Big Switch was published, people's attitudes have changed dramatically. What was greeted (even in Carr's estimation during his keynote) with skepticism early last year is pretty much a done deal this year. We're moving to "simply a better model for computing without hardly even noticing it," he said. "Cloud computing has become the center of investment and innovation."

Now, you can argue about the speed and impact of the changes, but it's hard to argue that the change isn't happening. Gordon Haff of Illuminata published a good thought-piece on whether or not what's happening is really as profound as Carr suggests, and whether data centers are even seeing those changes yet. Healthy skepticism, for sure, especially when talking about the data centers of big organizations.

However, Carr used his time onstage this week at the IDC conference to extend the vision his book laid out and talk a bit about the intermediary steps we're going through in the "big switch." In his view, cloud computing can be many things for organizations. It can be a new do-it-yourself model for IT. A supplement to whatever IT exists that's "shovel-ready." A replacement. A democratizer. Even a complete revolution in which IT and business finally get on the same page, shocking as that may be.

No matter which view (or, more likely, combination of views) ends up being true, Carr was clear to IT folks and businesses alike, if a little understated: "It behooves you to look into this. Figuring out how to harness the power" of a public cloud "may be the great enterprise of this century."

I think on this point Carr, IDC and their analyst brethren, and vendors like my company, Cassatt, and others are all pretty unanimous in what they're saying. Despite some false starts at the beginning of the new century, the big change for IT -- being able to actually use utility-style computing -- is really here. It's called cloud computing. Of course, it's still early days, but let the fun begin. Actually, if you hadn't noticed, it already has.

Now, comes the messy part: making this new model work

Carr was pretty clear that this is going to take a few steps. No arguments here. And while he did joke about the term "private clouds" being an oxymoron if there ever was one (my Appirio Twitter friends loved that one; see also the many previous posts here and elsewhere arguing that topic pro and con), he also talked about private clouds as one of the many steps along this path. In looking at applying the cloud model to the data center infrastructures organizations already have, we are asking "what can we learn about how the cloud operators do IT -- and revamp our own data centers on the cloud model." To Carr, internal cloud computing sounded like a "big opportunity while a public cloud is being built out."

This is also where things get messy for today's big IT suppliers, thanks to a little thing called the innovator's dilemma (from Clayton Christensen). "Big traditional vendors are in quite a fix today," said Carr. "You have big vendors making investments in cloud computing, but it's almost on faith because they haven't figured out how to make money on it yet. The problem is not in the technology, it's in the business model."

Add to this a tidbit from a survey IDC's Frank Gens talked up earlier in the day: the least important buying criteria for cloud services were that the provider be a large, established company, or that the buyer had done business with that provider before. Well, now...that certainly opens up the playing field quite a bit.

Carr pointed to a number of hurdles that still exist, like having to hit a level of reliability beyond what's required of IT today just to prove all this cloud stuff is safe. However, based on what I hear from all sides (especially customers), I think the engine of change has really started up on cloud computing. Everyone is talking about it and testing the hypotheses from all angles, but that's the easy part. More importantly, people are trying it. Some are moving fast; some are moving cautiously. Some are trying external clouds; some are applying the cloud concept to the resources inside their own data centers. And the "cloudy" (er, grim) economy, as Carr called it (and as Frank Gens noted earlier in the day), is probably helping in its own way, too.

As unlikely as it may have seemed when his book came out last year, Carr's "big switch" is on.

Thursday, March 5, 2009

IDC: Downward Directions for IT in 2009 leave room for cloud computing uptick

IDC's 44th annual Directions conference in San Jose this week may be the longest-running IT conference in the world, but it didn't pull any punches on the economy. From John Gantz's opening keynote through every track session I attended, the analysts recounted what anyone running a data center knows all too well: IT spending is pulling way back. IDC wisely course-corrected its 2009 spending forecast at the end of last year, and it used some of those revisions at the conference to show how far -- and how fast -- things have headed down. As I sat in the audience, I started to wonder whether even those revisions were deep enough. Only cloud computing escaped the dour forecast (more on that in a minute).

Here's a quick summary of the key points I took away from the conference, focusing on IDC's take on the macro-level IT environment, the impact of the economy on running a data center, and -- the lone bright spot -- how cloud computing figures into all this. On that last point, let's just say Frank Gens, the day's cloud presenter, was positively giddy to be the one guy who got to deliver good news. The highlights:

The economy has us in a dark, dark place -- but IT is needed now more than ever

John Gantz, IDC's chief research officer, summed up the effect of the economy at the start of the day: "I don't think we've been here before. We're in new territory. We're in the dark" because we don't have a very good handle on what the economy's going to do next. Gantz noted that IDC has ratcheted down IT spending predictions for this year to nearly flat over 2008 (up only 0.5%). That doesn't take into account any effect from the Obama stimulus package (or those from other governments elsewhere in the world). IDC told their analysts not to try to quantify stimulus package impact, said Gantz, but to assume they "won't make things worse." Let's hope. One positive note: 2010's growth rate looks positively robust, but of course that's because it's building on the catastrophe that 2009 is working out to be.

However, says Gantz, the bad economy is not slowing the growth in mobile Internet users or the adoption of non-traditional computing devices, nor is it putting the brakes on the amount of data being gathered or the number of user interactions per day (predicted to grow to 8.4 times today's level within 4 years). And that's all something for IT to deal with.
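
Put another way -- assuming that 8.4x figure compounds evenly over the four years -- the prediction implies growth of roughly 70% per year:

```latex
% Back-of-the-envelope arithmetic on the "8.4 times in 4 years" prediction
\[
  8.4^{1/4} \approx 1.70
  \quad\Longrightarrow\quad
  \text{about } 70\% \text{ compound growth per year}
\]
```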

So, said Gantz, amid this "extinction event," there are incredible new demands for management. "The economic crisis changes everything and it changes nothing. We have a new, new normal." The current situation merely forces the issue on seeing and doing things differently. "If everything is crashing down around you," said Gantz, "now is a good time to take a risk. Now is a period of opportunity." He noted companies like Hyatt, GE, RIM, FedEx, HP, and IBM had all been started in recessions (I've also written about great innovations during previous downturns).

What opportunities did he see right now? Gantz pointed to enterprise social media, IT outsourcing, virtualization software, and Internet advertising (really). Of particular note: virtualization management software, which has a big impact on IDC's view of what's happening in the data center...

The move to more modular, pay-as-you-go data centers -- with warnings about virtualization management

Michelle Bailey, presenting her content from IDC's Data Center Trends program, seemed very concerned about how hard and complex managing a data center had become, and believed that we're going to see customers making moves to simplify things out of necessity.

The recession, said Bailey, "changes the decision on where to hold the [data center] assets." Its main impact is to push data center managers to move "from a fixed price model to a variable pricing model," to move costs from cap ex to op ex.

Virtualization has had a huge impact so far, and will continue to do so, according to Matt Eastwood, IDC group vice president for enterprise platforms. In fact, there will be more VMs than physical servers deployed in 2009. "It will be the cross-over year," said Eastwood.

However, that drives big, big concerns on how data center managers are going to cope, said Bailey. "The thing I worry about the most with virtualization is the management consequences. There's no way to manage this with the processes and tools in place today." In fact, Bailey is so worried that she thinks this "virtualization management gap" might stall the virtualization market itself as users search for management solutions. "I'm worried that customers may have gone too far and may have to dial it back," she said. "The challenge in the server virtualization world is that people aren't used to spending a lot of money on systems management tools."

When we at Cassatt talk to customers about this, we've found that they know they have a virtualization management problem and are actively trying to address it. The approach we discuss with them is a single, coherent strategy for managing all of your data center components -- driven by the application service levels you need -- regardless of whether the compute resources are physical or virtual. Having a separate management stack for each virtualization vendor and yet another one for your physical systems is not appealing, to say the least.
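
To make that idea concrete, here's a minimal, purely illustrative sketch -- not Cassatt's product or API; every name and number below is invented -- of a single policy layer that allocates capacity against an application's service-level target and treats physical and virtual machines as interchangeable members of one pool:

```python
# Illustrative only: SLA-driven allocation over a mixed physical/virtual pool.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    kind: str        # "physical" or "virtual" -- irrelevant to the policy below
    capacity: float  # available headroom, in arbitrary capacity units

@dataclass
class AppDemand:
    app: str
    sla_capacity: float  # capacity needed to meet the app's service-level target

def allocate(pool: list[Resource], demands: list[AppDemand]) -> dict[str, list[str]]:
    """Greedily assign resources to apps until each SLA target is covered."""
    assignments: dict[str, list[str]] = {}
    free = sorted(pool, key=lambda r: r.capacity, reverse=True)
    for demand in sorted(demands, key=lambda d: d.sla_capacity, reverse=True):
        needed = demand.sla_capacity
        assignments[demand.app] = []
        while needed > 0 and free:
            r = free.pop(0)  # take the largest remaining resource, physical or virtual
            assignments[demand.app].append(r.name)
            needed -= r.capacity
    return assignments

pool = [
    Resource("blade-01", "physical", 8.0),
    Resource("vm-17", "virtual", 2.0),
    Resource("vm-18", "virtual", 2.0),
]
print(allocate(pool, [AppDemand("billing", 9.0), AppDemand("intranet", 2.0)]))
```

The point isn't the (deliberately naive) greedy policy; it's that the service-level target drives the decision and the physical/virtual distinction never enters into it -- the opposite of running one management stack per hypervisor plus one more for bare metal.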

Another of Bailey's key points was that there isn't just one type of data center -- there are actually three:

1. Enterprise-style data centers focus on SLAs and cost containment, and are wrestling with space constraints.
2. Hosting/outsourcer data centers focus on doing what's necessary to meet customer demand.
3. Web 2.0/telco-style data centers are all about cost efficiency and growth.

Trying to compare how you run your data center with one that has a different set of goals is not productive and will get you focused on the wrong things -- and result in more of a mess.

She did say, however, that no matter what type of data center you are running, you should look at doing things in a much more modular way as a path to simplification. Bailey called "massively modular" the blueprint for the future data center. It breaks big problems down into smaller, more manageable ones, and means you don't have to be absolutely correct in your 20-year vision for your data center. She sees things like containerized data centers becoming more standardized and less proprietary, making this modular approach more complementary than disruptive to what data centers are already doing. And, with power and cooling still a huge problem for data centers, IT ops and facilities need help from both a more modular approach and the "pretty sophisticated" power management tools that already exist. (I like to think that she was thinking of us at this point in her presentation.)

Cloud computing is on track to move to the mainstream -- and show actual growth despite the economy

Bailey had a healthy dose of cloud computing skepticism in her break-out presentation: "Anything that has money attached to it can't be [in the cloud] for another 10 years," she said, clearly paving the way for big organizations with security, compliance, and lock-in concerns to give this cloud model a try, but to do so within their own data centers as an internal cloud.

In his keynote on cloud computing, Frank Gens acknowledged a lot of the concerns companies have been expressing about going to an external cloud, but he was very upbeat nonetheless. "The idea of cloud is of very, very high interest to CIOs in the market right now," he said. Last year IDC predicted that 2009 would be "the year of moving from the sandbox to the mainstream," said Gens. "We are certainly on that path right now."

Why? Maybe not for the reasons you might think (cost). Gens corroborated comments from Gartner's Tom Bittman at their Data Center Conference back in December: the No. 1 reason that people want to move to the cloud is that "it's fast" to do so.

This new cloud model hasn't yet bulldozed the old model for IT, according to IDC, for reasons we've heard before (and that Michelle Bailey mentioned above): deficiencies in security, performance, and availability, plus problems integrating with in-house IT. Gens sees cloud computing beginning its move across Geoffrey Moore's chasm toward mainstream adoption as a result of a couple of things: performance-level assurances and the ability to connect back to on-premises systems.

"Service level assurances are going to be critical for us to move this market [for cloud computing] to the mainstream," said Gens. And, customers want the ability to do hybrid public/private cloud computing: "They want a bridge and they want it to be a two-way bridge" between their public and private clouds.

And, despite all the economic negativity, IDC painted a pretty rosy picture for cloud computing, noting that it's where new IT spending growth will happen. Gens described it as the beginning of a move toward more dynamic deployment of IT infrastructure, and part of an expanding portfolio of options for the CIO.

"We’re right where we were when the PC came along or when the Internet first came out," said Gens. As far as directions go, that's pretty much "up."

Up next: comments on Nicholas Carr's closing keynote at IDC Directions San Jose. Slides from the IDC presentations noted above are available for IDC customers in PDF format in their event archives at www.IDC.com.

Monday, March 2, 2009

Will cloud computing be the innovation from this downturn?

Entrepreneur Magazine recently ran a list of significant innovations that appeared during previous times of economic duress.

From the Great Depression came Scotch tape. Miracle Whip. Campbell's Chicken Noodle soup. The fluorescent light bulb. From the stagflation/oil crisis/Vietnam era, the scannable supermarket bar code. From the dot-com bust and 9/11, the iPod and the BlackBerry.

It's a cool list, and it provides exactly the warm, fuzzy glimmer of hope that watching the stock ticker and the news certainly does not offer right now. Hey, it says, even if things look pretty grim at the moment, someone somewhere has just had his or her aha! moment. Don't worry: he or she is frantically working on something we won't be able to live without in a decade or so. Hopefully, they can get financing. (Note to financing types: say yes, it'll be worth it.)

So what will be the great contribution to society during this particularly nasty downturn? More specifically, will cloud computing be one of the great innovations that comes out of the Great Recession?

I discussed that possibility in a post a few weeks back, just prior to a Churchill Club event about how Silicon Valley itself can survive this recession. In addition to the reasons I posted about why cloud computing may (or may not) be what comes out of this mess, I thought I'd add a few more points on the topic.

Jason Clarke and James Walker of Bitemarks quoted Cassatt CEO Bill Coleman at the Churchill Club event with some advice for start-ups (cloud or otherwise). "Startups need to learn to monetize their offerings quickly or risk going out of business. ...Coleman states we are at the next major inflection point in the evolution of the technology market." The invent-boom-bust-build-out cycle that Bill has seen over his career in high tech certainly points the way to a lot of opportunity atop the cloud computing wave. (If, of course, clouds have waves. Well, you understand.) But, your business has to deliver value to your customers while building a coherent business model for yourself.

Tom Foremski of Silicon Valley Watcher, formerly with the Financial Times, covered several intriguing, related topics with Bill Coleman recently. The first part of that interview is available here.

One of the other tidbits the Bitemarks guys pulled out of Bill's Churchill Club comments was this: "As for those of us who are working through our first recession, it's worth remembering -- as Coleman stated -- you learn far more during a downturn than you do at any other time."

Entrepreneur Magazine columnist Brad Sugars agrees. Not only is there a lot to be learned right now, but he gives 10 good reasons to start a business now, precisely because of the downturn: everything is cheaper, good people are easier to find, and media coverage is easier to come by -- the media loves things that buck the trend, you see.

In any case, whether or not you subscribe to any of these beliefs, it'll be interesting to watch this cycle and even more interesting to participate in it. That's certainly why I and my industry colleagues are here putting the effort into cloud computing and the big changes in IT operations it enables.

And let's hope that, in terms of innovations that come out of down economic cycles, cloud computing is more like the transistor radio. Or the pacemaker. (Both from the Eisenhower-era recessions.)

But, please, let it be nothing like the hula hoop.