Wednesday, June 1, 2011

Looking forward or backward? Cloud makes you decide what IT wants to be known for

Cloud computing is all about choice. I’ve heard that a lot. What most people mean when they say this is that there are suddenly a whole bunch of places to run your IT workloads. At Amazon using EC2 or at Rackspace? At ScaleMatrix or Layered Tech? Or inside your own data center on a private cloud you’ve created yourself?

But there are some more fundamental choices that cloud seems to present as well. These choices are about what IT is going to be when it grows up. Or at least what it’s going to morph into next.

Here are 3 big decisions that I see that cloud computing forces IT to make, all of which add up to one, big, fundamental question: will IT define itself as an organization that looks toward the future or back into the past? Before you scoff, read on: the answer, even for those eagerly embracing the cloud, may not be as clear as you think.

The business folks' litmus test for IT: Cloud v. No Cloud

First off, the business people in big organizations are using the rise of cloud computing, even after setbacks like the recent Amazon outage, to test whether IT departments are about looking forward or backward. When the business folks come to IT and describe what they are looking for, they now expect cloud-type reaction times, flexibility, infinite options, and pay-as-you-go approaches. At that point, IT is forced to pick sides. Will they acknowledge that cloud is an option? Will IT help make that option possible, if that’s the right choice for the business? Or will they desperately hold onto the past?

Embracing cloud options in some way, shape, or form puts IT on the path to being known as the forward-looking masters of the latest and greatest way of delivering on what the business needs. Rejecting consideration of the cloud paints IT as a cabal of stodgy naysayers who are trying their darnedest to keep from having to do anything differently.

John Treadway tweeted a great quote from Cloud Connect guru Alistair Croll on this same topic: "The cloud genie is out of the bottle. Stop looking for the cork and start thinking [about] what to wish for."

The business folks know this. They will use IT's initial reaction to these options as a guide for future interactions. Pick incorrectly, and the business isn't likely to ask again. They'll do their own thing. That path leads to less and less IT work being run by IT.

OK. Say IT decides to embrace the cloud as an option. The hard choices don’t stop there.

A decision about the IT role: Factory Manager v. Supply Chain Orchestrator

Starting to make use of cloud computing in real, live situations puts IT on a path to evaluate what the role of IT actually evolves into. Existing IT is about running the “IT factory,” making the technology work, doing what one CIO I heard recently called “making sure the lights don’t flicker.” This is IT’s current comfort zone.

However, as you start using software, platforms, and infrastructure as-a-service, IT finds itself doing less of the day-to-day techie work. IT becomes more of an overseer and less the team on the ground wiring things together.

I’ve talked about this role before as a supply chain orchestrator, directing and composing how the business receives its IT service, and not necessarily providing all that service from a company’s own data centers. You can make a good case that this evolution of IT will give it a more strategic seat at the table with the business users.

But, even if you decide you want to consider cloud-based options and you’re all in favor of changing the role of IT itself, there’s still another question that will have a big effect on the perception – and eventual responsibilities – of IT.

The problem with sending the new stuff to the cloud: Building IT expertise in Legacy v. Cutting Edge

Everyone who has made the choice to use cloud computing is next faced with the logical follow-on question: so, what do I move to the cloud? And, then, what do I keep in-house to run myself?

And that's where I think things get tricky. In many cases, the easiest thing to do is to consider using the cloud for new applications – the latest and greatest. This lets you keep the legacy systems that are already working as they are – running undisturbed as the Golden Rules of IT and a certain 110-year-old light bulb suggest ("if it's working, don't touch it!").

But that choice might have the unintended effect of pigeonholing your IT staff as the caretakers of creaky technology that is not at the forefront of innovation. You push the new, more interesting apps off elsewhere – into the cloud. In trying to make a smart move and leverage the cloud, IT misses its chance to show itself as a team that is at (and can handle) the leading edge.

Maybe I’m painting this too black and white, especially in IT shops where they are working to build up a private cloud internally. And maybe I’m glossing over situations where IT actually does choose to embrace change in its own role. In those situations, there will be a “factory” role, alongside an “orchestrator” role. But that “factory” manager role will be trimmed back to crucial, core applications – and though they are important, they are also the ones least in need of modernization.

Either way, isn't the result still the same? IT's innovation skills erode over time if IT doesn't take a more fundamental look at how it runs all of its systems and environments – and at how it views its own role.

The problem I see is that big enterprises aren't going to suddenly reassess everything they have on the first day they begin to venture into the cloud. However, maybe they should. For the good of the skills, capabilities, and success of their IT teams, a broader view should be on the table.

Short-term and long-term answers

So, as you approach each of these questions, be sure to look not only at the immediate answer, but also at the message you’re sending to those doing the asking. Your answers today will have a big impact on all future questions.

All of this, I think, points out what a serious, fundamental shift cloud computing brings. The cloud is going to affect who IT is and how it's viewed from now on. Take the opportunity to be the one proactively making that decision in your organization. And if you send things outside your four walls, or put them in a private cloud internally, make sure you know why – and the impact these decisions will have on how your users perceive IT.

Since cloud computing is all about choice, it’s probably a smart idea to make sure you’re the one doing the choosing.

Wednesday, May 25, 2011

Hurford of DNS Europe: service providers and SaaS developers are showing enterprises how cloud is done

The antidote to cloud computing hype is, well, reality. For example: talking to people who are in the middle of working on cloud computing right now. We've started a whole new section of the ca.com website that provides profiles, videos, and other details of people doing exactly that.


One of the people highlighted on this list of Cloud Luminaries and Cloud Accelerators is Stephen Hurford of DNS Europe. DNS Europe is a London-based cloud hosting business with over 500 customers across the region. They have been taking the cloud-based business opportunities very seriously for several years now, and provide cloud application hosting and development, hybrid cloud integration services, plus consulting to help customers make the move to cloud.


I convinced Hurford, who serves as the cloud services director for DNS Europe, to share a few thoughts about what customers and service providers are – and should be – doing right now and some smart strategies he’d suggest.


Jay Fry, Data Center Dialog: From your experiences, Stephen, how should companies think about and prepare for cloud computing? Is there something enterprises can learn from service providers like yourself?


Stephen Hurford, DNS Europe: Picking the right project for cloud is very important. Launching into this saying, "We're going to convert everything to the cloud in a short period of time," is almost doomed. There is a relatively steep learning curve with it all.


But this is an area of opportunity for all of the service providers that have been using CA 3Tera AppLogic [like DNS Europe] for the last three years. We’re in a unique position to be able to help enterprises bypass the pitfalls that we had to climb out of.


Because the hosting model has become a much more accepted approach in general, enterprises are starting to look much more to service providers. They’re not necessarily looking to service providers to host their stuff for them, but to teach them about hosting, because that’s what their internal IT departments are becoming – hosting companies.


DCD: What is the physical infrastructure you use to deliver your services?

Stephen Hurford: We don’t have any data centers at all. We are a CSP that doesn’t believe in owning physical infrastructure apart from the rack inwards. We host with reputable Tier 3 partners like Level3, Telenor, and Interxion, but it means that we don’t care where the facility is. We can deploy a private cloud for a customer of ours within a data center anywhere in the world and with any provider.


DCD: When people talk about cloud, the public cloud is usually their first thought. There are some big providers in this market. How does the public cloud market look from your perspective as a smaller service provider? Is there a way for you to differentiate what you can do for a customer?


Stephen Hurford: The public cloud space is highly competitive – if you look at Amazon.com, GoGrid, Rackspace, the question is how can you compete in that market space as a small service provider? It’s almost impossible to compete on price, so don’t even try.


But one thing that we have that Amazon, Rackspace, and GoGrid do not have is an up-sell product – they cannot take their customers from a public cloud to a private cloud product. So when their customers reach a point where they say, "Well, hang on, I want control of the infrastructure," that's not what you get from Amazon, Rackspace, and GoGrid. From those guys you get an infrastructure that's controlled by the provider. Because we use CA 3Tera AppLogic, the customer gets control, whether hosted by a service provider or by themselves internally.


DCD: My CA Technologies colleague Matt Richards has been blogging a bit about smart ways for MSPs to compete and win with so much disruption going on. Where do you recommend a service provider start if they want to get into the cloud services business today?


Stephen Hurford: My advice to service providers who are starting up is to begin with niche market targeting. Pick a specific service or an application or a target market and become very good at offering and supporting that.


We recommend starting at the top, by providing SaaS. SaaS is relatively straightforward to get up and running on AppLogic if you choose the right software. The templates already exist – they are in the catalog, they are available from other service providers, and soon will be available in a marketplace of applications. Delivering SaaS offerings carries the lowest technical and learning overhead, and that's why we recommend it.


DCD: Looking at your customer base, who is taking the best advantage of cloud platform capabilities right now? Is the area you just mentioned – SaaS – where you are seeing a lot of movement?

Stephen Hurford: Yes. The people who have found us and who “get it” are the SaaS developers. In fact, 90% of our customers are small- to medium-sized customers who are providing SaaS to enterprises and government sectors. It’s starting to be an interesting twist in the tale: these SaaS providers are starting to show enterprises how it’s done. They are the ones figuring out how to offer services. The enterprises are starting to think, “Well, if these SaaS providers can offer services based on AppLogic, why can’t I build my own AppLogic cloud?” That’s become our lead channel into the enterprise market.


DCD: How do service providers deal with disruptive technologies like cloud computing?

Stephen Hurford: From a service provider perspective, it’s simple: first, understand the future before it gets here. Next, push to the front if you can. Then work like crazy to drive it forward.


Cloud is a hugely disruptive technology, but without our low-cost resources we could not be as far forward as we are.


One of the fundamentally revolutionary sides of AppLogic is that I only need thousand-dollar boxes to run this stuff on. And if I need more, I don’t need to buy a $10,000 box. I only need to buy 4 $1,000 boxes. I have a grid and it’s on commodity servers. That is where AppLogic stood out from everything else.


DCD: Can you explain a bit more about the economics of this and your approach to keeping your costs very low? It sounds like a big competitive weapon for you.

Stephen Hurford: One of the big advantages of AppLogic is that it has reduced our hardware stock levels by 75%. That’s because all cloud server nodes are more or less the same, so we can easily reuse a server from one customer to another, simply by reprovisioning it in a new cloud.


One of the key advantages we’ve found is that once hardware has reached the end of its workable life in terms of an enterprise’s standard private cloud, it can very easily be repurposed into our public cloud where there is less question of “well, exactly what hardware is it?” So we’ve found we can extend the workable lifespan of our hardware by 40-50%.


DCD: What types of applications do you see customers bringing to the cloud now? Are they starting with greenfield apps, since this would give them a clean slate? The decision enterprises make will certainly have an impact for service providers.

Stephen Hurford: Some enterprises are taking the approach of asking, "OK, how can I do my old stuff on a cloud? How do I do old stuff in a new way?" That's one option for service providers: can I use the benefits of simplifying – reducing my stock levels and my management overhead across my entire system – by moving to AppLogic, so that I no longer have 15 different types of servers and 15 different management teams? I can centralize it and get immediate benefits from that.


The other approach is to understand that this is a new platform with new capabilities, so I should look at what new stuff can I do with this platform. For service providers, it’s about finding a niche – being able to do something that works for a large proportion of your existing customers. Start there because those are the folks that know you and they know your brand. Think about what they are currently using in the cloud and whether they would rather do that with you.

DCD: Some believe that there is (or will soon be) a mad rush to switch all of IT to cloud computing; others (and I’m one of those) see a much more targeted shift. How do you see the adoption of cloud computing happening?


Stephen Hurford: There will always be customers who need dedicated servers. But those are customers who don't have unpredictable growth and who need maximum performance. Those customers will get a much lower cost benefit from moving to the cloud.


For example, we were dealing with a company in Texas that wanted to move a gaming platform to the cloud. These are multi-player, shoot-'em-up games. You host a game server that tracks the coordinates of every object in the gamespace in real time for 20 to 30 players and sends that data to the Xbox or PlayStation that renders it, so they can all play in the same gamespace. If you tried to do that with standard commodity hardware, you're not getting the needed performance on the disk I/O.


The question to that customer was, “Do you have a fixed requirement? If you need 10 servers for a year and you’re not going to need to grow or shrink, don’t move to the cloud.” Dedicated hardware, however, is expensive. They said, “We don’t know what our requirements are and we need to be able to deploy new customers within 10 minutes.” I told them you don’t want to use dedicated servers for that, so you’re back to the cloud, but perhaps with a more tailored hardware solution such as SSD drives to optimize I/O performance.


DCD: So what do you think is the big change in approach here? What’s the change that a cloud platform like what you’re using for your customers is driving?

Stephen Hurford: Customers are saying it’s very easy with our platform to open up 2 browsers and move an entire application and infrastructure stack from New York to Tokyo. But that’s not enough.


Applications need to be nomadic. The concept of nomadic applications is still way off in the future, but for me, what we're able to offer today is a clear signpost toward it. Applications, together with their infrastructure, will eventually become completely separated from the hardware level and the hypervisor. My application can go anywhere in the world with its data and with the OS it needs to run on. All it needs to do is plug into juice (hardware, CPU, RAM) – and I've got what I need.


Workloads like this will be able to know where they are. By the minute, they'll be able to find the least-cost service provisioning. If you've got an application and it doesn't really matter where it is and it's easy to move it around, then you can take advantage of least-cost service provisioning within a wider territorial region.
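
To make that "least-cost" idea a little more concrete, here's a purely hypothetical sketch in Python – not an AppLogic capability, and every provider name, price, and latency figure below is invented – of the kind of decision a nomadic workload could one day make for itself: check the going rate at each candidate site, throw out the sites that would break its latency budget, and move to the cheapest of what's left.

```python
# Purely hypothetical sketch of "least-cost service provisioning" for a
# nomadic workload. Not an AppLogic feature; providers, prices, and
# latencies are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Site:
    provider: str
    region: str
    price_per_hour: float  # made-up hourly price for the capacity the app needs
    latency_ms: float      # measured latency from this site to the app's users

def cheapest_eligible_site(sites, max_latency_ms):
    """Pick the lowest-cost site that still meets the workload's latency budget."""
    eligible = [s for s in sites if s.latency_ms <= max_latency_ms]
    return min(eligible, key=lambda s: s.price_per_hour) if eligible else None

sites = [
    Site("provider-a", "new-york", price_per_hour=0.42, latency_ms=20),
    Site("provider-b", "frankfurt", price_per_hour=0.35, latency_ms=95),
    Site("provider-c", "tokyo", price_per_hour=0.28, latency_ms=180),
]

best = cheapest_eligible_site(sites, max_latency_ms=120)
if best:
    print(f"Move workload to {best.provider}/{best.region} "
          f"at ${best.price_per_hour:.2f}/hour")
```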




Thanks, Stephen, for the time and for sharing your perspectives. You can watch Stephen’s video interview and read more about DNS Europe here.

Thursday, December 16, 2010

Survey points to the rise of 'cloud thinking'

In any developing market, doing a survey is always a bit of a roll of the dice. Sometimes the results can be pretty different from what you expected to find.

I know a surprise like that sounds unlikely in the realm of cloud computing, a topic that, if anything, feels over-scrutinized. However, when the results came back from the Management Insight survey (that CA Technologies sponsored and announced today), there were a few things that took me and others looking at the data by surprise.

Opinions of IT executives and IT staffs on cloud don’t differ by too much. We surveyed both decision makers and implementers, thinking that we’d find some interesting discrepancies. We didn’t. They all pretty much thought cloud could help them on costs, for example. And regardless of both groups’ first impressions, I’m betting cost isn’t their eventual biggest benefit. Instead, I’d bet that it’s agility – the reduced time to having IT make a real difference in your business – that will probably win out in the end.

IT staff are of two minds about cloud. One noticeable contradiction in the survey was that the IT staff was very leery about cloud because they see its potential to take away their jobs. At the same time, one of the most popular reasons to support a cloud initiative was because it familiarized them with the latest and greatest in technology and IT approaches. It seems to me that how each IT person deals with these simultaneous pros and cons will decide a lot about the type of role they will have going forward. Finding ways to learn about and embrace change can’t be a bad thing for your resume.

Virtualization certainly has had an impact on freeing people to think positively about cloud computing. I wrote about this in one of my early blogs about internal clouds back at the beginning of 2009 – hypervisors helped IT folks break the connection between a particular piece of hardware and an application. Once you do that, you’re free to consider a lot of “what ifs.”

This new survey points out a definite connection between how far people have gotten with their virtualization work and their support for cloud computing. The findings say that virtualization helps lead to what we’re calling “cloud thinking.” In fact, the people most involved in virtualization are also the ones most likely to be supportive of cloud initiatives. That all makes sense to me. (Just don’t think that just because you’ve virtualized some servers, you’ve done everything you need to in order to get the benefits of cloud computing.)

The survey shows people expect a gradual move from physical infrastructure to virtual systems, private cloud, and public cloud – not a mad rush. Respondents did admit to quite a bit of cloud usage – more than many other surveys I’ve seen. That leads you to think that cloud is starting to come of age in large enterprises (to steal a phrase from today’s press release). But it’s not happening all at once, and there’s a combination of simple virtualization and a use of more sophisticated cloud-based architectures going on. That’s going to lead to mixed environments for quite some time to come, and a need to manage and secure those diverse environments, I’m betting.

There are open questions about the ultimate cost impact of both public and private clouds. One set of results listed cost as a driver and an inhibitor for public clouds, and as a driver and an inhibitor for private ones, too. Obviously, there’s quite a bit of theory that has yet to be put into practice. I bet that’s what a lot of the action in 2011 will be all about: figuring it out.

And who can ignore politics? Finally, in looking at the internal organizational landscape of allies and stonewallers, the survey reported what I’ve been hearing anecdotally from customers and our folks who work with them: there are a lot of political hurdles to get over to deliver a cloud computing project (let alone a success). The survey really didn’t provide a clear, step-by-step path to success (not that I expected it would). I think the plan of starting small, focusing on a specific outcome, and being able to measure results is never a bad approach. And maybe those rogue cloud projects we hear about aren’t such a bad way to start after all. (You didn’t hear that from me, mind you.)

Take a look for yourself

Those were some of the angles I thought were especially interesting, and, yes, even a bit surprising in the survey. In addition to perusing the actual paper that Management Insight wrote (registration required) about the findings, I’d also suggest taking a look at the slide show highlighting a few of the more interesting results graphically. You can take a look at those slides here.

I’m thinking we’ll run the survey again in the middle of next year (at least, that seems like about the right timing to me). Two things will be interesting to see. First, what will the “cloud thinking” that we’re talking about here have enabled? The business models that cloud computing makes possible are new, pretty dynamic, and disruptive. Companies that didn’t exist yesterday could be challenging big incumbents tomorrow with some smart application of just enough technology. And maybe with no internal IT whatsoever.

Second, it will be intriguing to see what assumptions that seem so logical now will turn out to be – surprisingly – wrong. But, hey, that’s why we ask these questions, right?

This blog is cross-posted on The CA Cloud Storm Chasers site.

Tuesday, August 10, 2010

Despite the promise of cloud, are we treating virtual servers like physical ones?

RightScale had some great data about usage of Amazon EC2 recently that described how cloud computing is evolving, or at least how their portion of that business is progressing. At first glance, it certainly sounds as if things are maturing nicely.

However, a couple things they reported caused me to question whether this trend is as rosy as it seems initially, or if IT is actually falling into a bit of a trap in the way it's starting to use the public cloud. I’ll explain:

Cloud servers are increasing in quantity, getting bigger, and living longer, but…

The RightScale data showed that, comparing June 2009 with June 2010, there are now more customers using their service, and each of those customers is launching more and more EC2 servers. (I did see a contradictory comment about this from Antonio Piraino of Tier1 Research, but I'll take the RightScale info at face value for the moment.)

Not only has the number of cloud customers increased, but customers are also using bigger servers (12% used "extra large" server sizes last June, jumping up to 56% this June) and using those servers longer (3.3% of servers were still running after 30 days in June 2009; 6.3% did so this June).

CTO Thorsten von Eicken acknowledged in his post that “of course this is not an exact science because some production server arrays grow and shrink on a daily basis and some test servers are left running all the time.” However, he concluded that there is a “clear trend that shows a continued move of business critical computing to the cloud.”

These data points, and the commentary around them, were interesting enough to catch the attention of folks like analyst James Staten from Forrester and CNET blogger James Urquhart on Twitter, and Ben Kepes from GigaOM picked it up as well. IDC analyst Matt Eastwood, "knowing a thing or two about the server market" (as he put it), was intrigued by the thread about the growing and aging of cloud servers, too, noting that average sales prices (ASPs) are rising.

Matt's comments especially got me thinking about what parallels the usage of cloud servers might have with the way the on-premise, physical server market progressed. If people are starting to use cloud servers longer, perhaps IT is doing what they do on physical boxes inside their four walls -- moving more constant, permanent workloads to those servers.

Sounds like proof that cloud computing is gaining traction, right? Sure, but it caused me to ask this question:

As cloud computing matures, will "rented" server usage in the cloud start to follow the usage pattern of "owned," on-premise server usage?

And, more specifically:

Despite all the promises of cloud computing, are we actually just treating virtual servers in the cloud like physical ones? Are we simply using cloud computing as another type of static outsourcing?

One potential explanation for the RightScale numbers is that we are simply in the early stages of this market and we in IT operations are doing what we know best in this new environment. In other words, now that some companies have tried using the public cloud (in this particular case, Amazon EC2) for short-term testing and development projects, they’ve moved some more “production”-style workloads to the cloud. They’re transplanting what they know into a new environment that on the surface seems to be cheaper.

These production apps have very steady demand – the opposite of the highly variable usage patterns that folks such as Joe Weinman from AT&T described in his Cloudonomics posts as being ideal for the cloud. This, after all, matches the increase in longer-running servers that von Eicken wrote about.

And that seems like a bad thing to me.

Why?

Because moving applications that have relatively steady, consistent workloads to the cloud means that customers are missing one of the most important benefits of cloud computing: elasticity.

Elasticity is the component that makes a cloud architecture fundamentally different from just an outsourced application. It is also the component of the cloud computing concept that can have the most profound economic effect on an IT budget and, in the end, a company’s business. If you only pay for what you use and can handle big swings in demand by having additional compute resources automatically provisioned when required and decommissioned when not, you don’t need those resources sitting around doing nothing the rest of the time. Regardless of whether they are on-premise or in the cloud.
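
To put some (entirely made-up) numbers on that, here's a minimal back-of-the-envelope sketch in Python of the kind of comparison Weinman's Cloudonomics math walks through: dedicated gear has to be sized for peak demand whether or not it's busy, while pay-per-use only charges for the server-hours actually consumed.

```python
# Back-of-the-envelope sketch: dedicated capacity vs. pay-per-use cloud.
# Every price and demand figure here is invented purely for illustration.

HOURS_PER_MONTH = 730

def dedicated_cost(peak_servers, monthly_cost_per_server=250):
    """Dedicated gear must be sized for peak demand, busy or not."""
    return peak_servers * monthly_cost_per_server

def cloud_cost(hourly_demand, price_per_server_hour=0.50):
    """Pay-per-use: you pay only for the server-hours you actually consume."""
    return sum(hourly_demand) * price_per_server_hour

# A spiky workload: 2 servers most of the month, 40 during a 20-hour peak.
spiky = [40 if hour < 20 else 2 for hour in range(HOURS_PER_MONTH)]

# A steady workload: 10 servers around the clock.
steady = [10] * HOURS_PER_MONTH

for name, demand in (("spiky", spiky), ("steady", steady)):
    print(f"{name:>6}: dedicated ${dedicated_cost(max(demand)):>8,.2f}   "
          f"cloud ${cloud_cost(demand):>8,.2f}")
```

With these invented numbers, the spiky workload is far cheaper in the cloud, while the steady workload actually pays a premium for elasticity it never uses – which is exactly the point: a steady-state production app may be giving up the one benefit that matters most.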

In fact, this ability to automatically add and subtract the computing resources that an application needs has been a bit of a Holy Grail for a while. It’s at the heart of Gartner’s real-time infrastructure concept and other descriptions of how infrastructure is evolving to more closely match your business.

Except that maybe the data say that it isn’t what’s actually happening.

Falling into a V=P trap?

My advice for companies trying out cloud-based services of any sort is to think about what they want out of this. Don’t fall into a V=P trap: that is, don’t think of virtual servers and physical servers the same way.

Separating servers from hardware by making them virtual, and then relocating them anywhere and everywhere into the cloud gives you new possibilities. The time, effort, and knowledge it’s going to take to simply outsource an application may seem worth it in the short term, but many of the public cloud’s benefits are simply not going to materialize if you stop there. Lower cost is probably one of those. Over the long haul a steady-state app may not actually benefit from using a public cloud. The math is the math: be sure you’ve figured out your reasoning and end game before agreeing to pay month after month after month.

Instead, I'd suggest looking for applications with requirements you can't satisfy with the parts of your infrastructure that are siloed and static today – even if that infrastructure is being run by someone else. Definitely take a peek at the math that Joe Weinman did on the industry's behalf, or other sources, as you are deciding.

Of course, who am I to argue with what customers are actually doing?

It may turn out that people aren’t actually moving production or constant-workload apps at all. There may be an as-yet-undescribed reason for what RightScale’s data show, or a still-to-be-explained use case that we’re missing.

And if there is, I'm eager to hear it. We should all be flexible and, well, elastic enough to accept that explanation, too.

Tuesday, April 13, 2010

Forrester's Staten: Realities of private and hybrid clouds aren't what you're expecting

James Staten does not pull punches. And for an IT industry analyst, that’s a good thing.

I first met James a few years back when he joined Forrester during my time at Cassatt. I heard him do a couple presentations at that year's Forrester IT Forum, had some briefing sessions with him, and realized that with James, friendly conversations quickly turn into very specific advice and commentary. Even better, it was advice and commentary that was very much on target.

For those of you who don’t know James, he is a principal analyst in Forrester Research’s Infrastructure & Operations practice, helping IT ops folks make sense of topics like virtualization, cloud computing, IT consolidation best practices, and other data center- and server-focused issues. From my perspective, James has been a great addition to Forrester, especially in his role as one of the earliest voices helping describe the impact and meaning of cloud computing.

So, I thought I’d turn him loose on a couple current cloud computing topics and see if we couldn’t find a few things to argue over. In a good way, of course. Here’s that interview:

Jay Fry, Data Center Dialog: There's lots of debate about how cloud computing takes hold in an organization. Some see people starting with public clouds. There is also lots of conversation about private clouds. In your discussions with customers, what path do you see end user organizations taking now – and is it even possible to make any generalizations?

James Staten, Forrester: Most of the time enterprises start by experimenting with public clouds. They are easy to consume and deliver fast results. And typically the folks in the enterprises who do so are application developers; they are simply trying to get their work done as quickly and easily as possible, and they see IT ops as slow, expensive, and a headache to deal with.

In response to this, IT ops likes the idea of a private cloud – their word, not mine. This usually translates into an internal cloud and the desire is to transform parts (if not all) of their virtual server environment into a cloud. Usually this transformation happens only in name, not in operation -- and that’s where a disconnect arises. IT ops pros tend to rebut public cloud by saying, “Hey, I can provision a VM in minutes, too.” But that’s not the full value of cloud computing. Fast deployment is just the beginning.

DCD: How do you see hybrid cloud environments coming into the picture? How soon will that be a reality?

James Staten: It’s a reality for some firms today, but not in the way that many people think. A hybrid between your internal data center and a public cloud isn’t very realistic due to latency, cost, and security. A hybrid between your internal cloud and a public cloud isn’t realistic either. (See the above answers for why.) What is realistic is a hybrid between dedicated hosting and cloud hosting within the same hosting provider. USA.gov is doing just such a thing in Terremark’s Alexandria, VA data center (Forrester customers can read a case study on this here). This kind of a hybrid allows the two separate environments to share the same backbone network and security parameters. And it allows the business service being supported to match the right type of resources to the right parts of the service.

For example, you may not want or need elastic scalability for your database tier, so it's more stable and economical to host it on a dedicated resource with a predictable 12-month contract. But you have a lot of resource consumption volatility in the app and web tiers and so they are best served being hosted in a cloud environment.
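
To illustrate the kind of tier-by-tier split James is describing, here's my own hypothetical sketch in Python – the demand numbers and the volatility threshold are invented, and this isn't tied to the USA.gov/Terremark example – of a crude placement rule: steady tiers stay on dedicated hosting, volatile tiers go to elastic cloud hosting.

```python
# Hypothetical sketch of a crude per-tier placement rule. All demand numbers
# and the volatility threshold are invented for illustration only.

def coefficient_of_variation(hourly_demand):
    """Standard deviation divided by the mean -- a simple volatility measure."""
    mean = sum(hourly_demand) / len(hourly_demand)
    if mean == 0:
        return 0.0
    variance = sum((d - mean) ** 2 for d in hourly_demand) / len(hourly_demand)
    return (variance ** 0.5) / mean

def placement(tier, hourly_demand, volatility_threshold=0.25):
    """Volatile tiers go to cloud hosting; steady tiers go to dedicated gear."""
    cv = coefficient_of_variation(hourly_demand)
    target = "cloud hosting" if cv > volatility_threshold else "dedicated hosting"
    return f"{tier}: volatility {cv:.2f} -> {target}"

# Made-up 24-hour demand profiles (units of capacity needed per hour).
web_tier = [3, 3, 2, 2, 2, 3, 5, 9, 14, 16, 15, 14,
            13, 14, 15, 16, 14, 10, 7, 5, 4, 3, 3, 3]
db_tier = [4] * 24

print(placement("web", web_tier))  # volatile -> cloud hosting
print(placement("db", db_tier))    # steady   -> dedicated hosting
```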

DCD: I’ve written here before about rogue deployments using cloud computing from business users without the explicit buy-in of the IT department. Do you see that as likely or commonplace? How should IT deal with these?

James Staten: They are extremely commonplace, as evidenced by our role-based research: when you ask application developers if their company is using cloud, 24% say yes; ask the same question of IT ops pros and only 5% say yes. Clearly app dev is using the cloud and circumventing IT ops in doing so. It's no surprise. They see IT ops as slow and rigid.

How should IT ops respond? First, you can’t be draconian and say, “Don’t use cloud.” Those who are parents know how well that works. Instead, IT ops needs to add value to this activity so they are invited into this process. They need to embrace the use of public clouds by asking how they can help.

DCD: What are some of the big changes that you see underway with regard to cloud computing this year so far? Has anything really surprised you?

James Staten: This year will be all about understanding where to start for most companies, how to move beyond testing the waters for the percentage who have already gotten that far, and how to optimize your cloud deployment for those who have already moved into production on public clouds.

For IT ops, this year is about roadmapping your transformation from running a virtual server environment to running an internal cloud. They won't get there in one year, but they can build the plan and start moving now.

Nothing has really surprised me in the past 12 months but I look forward to seeing the innovative ways companies will devise to take advantage of clouds in the future. We’ve seen some very promising starts.

DCD: Acquisitions are shaping up to be a major storyline this year from my point of view. And I’m not just saying that because of CA’s recent moves with Cassatt, Oblicore, 3Tera, and Nimsoft; there have been many. You were recently commenting on Twitter about confusion coming from Oracle about the management products related to their Sun acquisition. What role do you think acquisitions are (and should) play in shaping what customers have to choose from in the cloud computing space?

James Staten: For the most part, acquisitions in a new emerging space are about speed to market. The leading software companies could adapt their existing enterprise products to incorporate and integrate cloud computing, but that’s hard, as there are already 50-100 other priorities on the roadmap that keep the installed base happy. Acquisitions also inject new blood (new thinking, new technologies, perhaps even new culture) into an existing player and that is often sorely needed to drive a sense of urgency around addressing a new market opportunity because the immediate revenue opportunity in the new market is much smaller. This is at the heart of Clayton Christensen’s Innovator’s Dilemma. Cloud computing looks very much like a disruptive innovation and according to Christensen’s theory, the disruption can be a king-maker for those that lead the disruption (and make a pauper out of those being disrupted). So that fuels the belief that acquisitions are necessary.

DCD: You wrote one of the earliest analyst reports on cloud computing in March 2008 (“Is Cloud Computing Ready for the Enterprise?”). You’ve also been on top of “how to” topics for end users with research to help clarify some of the fuzziness around cloud computing (“Which Cloud Computing Platform is Right for You?” [April 2009], “Best Practices: Infrastructure-as-a-Service” [Sept. 2009], etc.). Given how the cloud computing category – and the industry debate – has evolved, would you approach some of your earlier research differently now? Any conclusions you would change?

James Staten: No, I feel proud of the research we have done as it was backed by real-world interviews with those who are leading this evolution, rather than theories and leaps of faith about what might occur. We were also very clear in our objective of clarifying what was truly new and different about cloud computing to help guide customers through what we knew would be a hype-filled world. It's unfortunate the industry has latched on so tightly to the term "private cloud" because in our opinion it is a very nebulous and thus meaningless concept. Heck, anything can be made private. But then again, cloud computing isn't the most precise term, either.

DCD: If many years from now we're thinking back on how cloud computing got started versus how it ended up, there will be an interesting storyline about the degree of difference between the two. How much of the hype around cloud computing should be ignored and how fundamental do you think it will end up being? What's going to be the thing that really makes cloud more mainstream in your opinion?

James Staten: The one part about the hype that I think should be ignored is the belief that everything will be cloud in the future. While this is a nice, disruptive statement that draws a ton of discussion and "what if" thinking, it simply isn’t realistic. If it were, there would be no mainframes today. Everything old is still running in enterprise data centers; we simply contain these technologies as we move to adopt new ones that arise.

And cloud computing is definitely not an answer to every question. Some applications have no business in the cloud and frankly will be less efficient if put there.

It’s through better understanding of what the cloud is good for and what it isn’t that IT will move forward. At the heart of cloud computing is the idea of shared empowerment – that by agreeing to a standardized way of doing something, we can share that implementation and thus garner greater efficiencies from it. That concept will manifest itself in many ways over the coming years. SaaS is a classic example that is delivering new and greater functionality to customers faster. IaaS is a great example because when multiple customers share the same pool of resources the overall utilization of the pool can be higher and thus the cost of the infrastructure can be spread more effectively across a bigger customer set, lowering the cost of deployment for all. Take this further and you can imagine many more scenarios:
· Why buy any software when it is more efficient to rent it only when you need it? We haven’t seen that model in SaaS yet.
· Why write code when you can more easily construct a business service by stringing service elements together? This is the core of SOA and builds upon the Java construct of reusable libraries. How far can we take that?
· Why store anything yourself when storing things on the Internet allows that data to be anywhere you are, whenever you need it to be – and to be self-correcting? I’ve been using Plaxo for over 5 years to keep track of all my business contacts so I don’t really care if I lose my mobile phone, laptop, or other device that stores this information. Now that Plaxo links with Facebook and LinkedIn, all the people in my address book can update their own records and I get this info as soon as it is synced. That’s distributed data management in the cloud. Can this same model be applied to other data and storage problems?

DCD: So, what area do you think is the next battleground?

James Staten: There are many areas that will be affected by cloud computing. One market that greatly interests me right now is HPC/grid computing. Loosely-coupled applications are a great fit for the cloud today and flip the economics of grid on their head. There are some incredible examples of companies using this combination to transform how they do business in healthcare, financial services, government, and many other fields. Business intelligence is fertile ground for change due to this and the influx of MapReduce. I'm really looking forward to seeing what changes in this arena.

DCD: What do you believe will be some of the more interesting starting points for customers’ cloud-based activity?

James Staten: You gotta start with test and development because this is the one area where any company can get immediate benefit. Every enterprise has a queue of test projects waiting to get lab resources. Use the cloud as an escape valve for these projects. Then take a look at your web applications. If they have traffic volatility, they are a natural fit for cloud.

DCD: Do you see any dead-ends that some customers are heading down that others should be careful to avoid?

James Staten: It’s a total dead end to think that you can simply buy a “cloud in a box” and suddenly you have an internal cloud. Internal cloud isn’t a “what,” it’s a “how.” How you operate the environment is far more the determiner of [the benefit of] cloud computing than the infrastructure and software technologies it is based on. This is true for service providers, too. Just because an ISP has expertise in managing physical or virtual servers doesn’t mean they can effectively run a cloud. Sure, some of the cloud building block technologies can help you get there, but this is totally an operational efficiency play.

DCD: Forrester recently changed its blogging policy for its analysts to require that any content in research-related areas be posted only to official Forrester blogs. Did this move make it harder or easier to blog, and do you think it’s going to help customers in the long run?

James Staten: It made it significantly easier for me as I strongly believe in separation between work and personal life, and equally believe in freedom of expression – and thankfully, so does Forrester. While I can’t speak for every analyst (nor can I speak for Forrester on this topic), I can say personally that this change in policy was a good one. Prior to this we had team blogs, rather than blogs for each analyst, and if someone wanted their own stage, they had to go outside to do it. Now our blogging platform gives every analyst their own outlet while preserving the aggregation of blog content by client role. We also moved to a blogging system that makes it much, much easier for me to author and publish blog entries myself. Anything that makes me more productive and helps clients consume our value is a good thing.

Thanks to James for spending the time for this extended interview. Of course, given that James is a pretty intense endurance runner during his off hours (he’s shooting for 8 marathons and 6 half-marathons this year – and 50 marathons by his 50th birthday), a marathon interview seemed somewhat appropriate.

If you have thoughts or feel an urge to disagree with James about any of the topics we touched upon here, feel free to add your comments.

Wednesday, March 10, 2010

CA and Nimsoft: Because smaller companies have different IT management requirements, but seem eager for cloud

If you’ve been watching CA, you’ve noticed some recently announced acquisitions that, when they close, will help us enable customers to make a transition to a cloud-connected enterprise, notably 3Tera, Oblicore, and Cassatt. And while the 3Tera and Oblicore deals in particular have a strong focus on managed service providers (MSPs) as customers, much of the end user interest in these solutions comes pretty exclusively from the largest enterprises.

Which raises the question: what about the slightly smaller companies? How are they looking to navigate the clouds and approach IT management in general?

Funny you should ask…

Today's announcement that CA has signed a definitive agreement to acquire Nimsoft is an acknowledgement by CA that the group of companies a bit below the world's biggest ones has a very particular set of requirements that, up until now, we have not really addressed. Nimsoft, however, as a provider of IT performance and availability software, has done a very good job of tailoring both its product (you can see a 13-minute demo here) and its way of working with customers to their specific needs.

With Nimsoft, CA can bring a tailored IT management solution to a new set of customers -- emerging enterprises, emerging national markets, and the MSPs and cloud service providers that serve those markets.

Who are the “emerging enterprises”?

To be clear, we're still talking about some pretty large companies. Our internal categorization calls the group the "emerging enterprises" (of course, everyone has a different label). To CA, these emerging enterprises are the companies with annual revenues of less than $2 billion, but more than about $300 million. While the world's mega-corporations may be a bit sluggish about adopting cloud computing, many signs point to the fact that these emerging enterprises are the ones experimenting with cloud computing in a much more significant way.

In addition, the MSPs and cloud service providers that serve these organizations are also some of the entities making the biggest strides when it comes to cloud computing.

But to serve these emerging enterprises, you have to do things differently

As Nimsoft knows firsthand, the emerging enterprises and these MSPs and cloud service providers require you to do things differently. How?

First, the software needs to be straightforward (and even easy) to install, use, and maintain. It needs to have a very broad but focused set of functional capabilities, stretching across a significant amount of the environments, components, and devices that an organization will want to manage, inside and outside its four walls.

Nimsoft fits that bill nicely. In fact, Nimsoft has been doing that so well that many have called it (rightly or wrongly) an alternative to solutions from the Big Four management vendors. More about that in a moment.

The second requirement that's pretty drastically different for smaller enterprises is that they are actually much more likely to buy their IT management capabilities from MSPs. Today, 300 of Nimsoft's 800 customers are MSPs, often providing IT management services via the cloud to their end users.

When you start thinking about markets that are served via MSPs, there’s another whole set of customers that can be addressed – emerging national geographies. In the same way that some developing countries skipped the land-line phone infrastructure and went straight to wireless, many end user organizations outside of the more IT-mature geographies are thinking about avoiding data center build-outs altogether, and going to cloud computing. Or, if they are doing their own data center, then they are considering managing their environment by an MSP’s remote monitoring capabilities. Either way, IT management provided as a service is front and center.

So why is this customer segment (and therefore Nimsoft itself) interesting to CA?

CA has long been in the IT management software space. However, what I've seen in the past 6 months since I came aboard is a very serious focus on making changes to move CA from the slow, steady, "value" company it has been to one focused on growth. You can see the big bets that CA is placing to make this transition: we're making significant investments (through acquisition and internal development) in the software that large customers will need to be successful transitioning to and managing cloud computing. With Nimsoft, we're also expanding the set of customers that we serve.

Why? That is where the new growth is. While the largest of the large companies do account for a big chunk of the IT management spending now and over the next few years, you may be surprised at how significant the emerging enterprises will be. Our numbers find them making up approximately 25% of the software spending in the market CA addresses (parts of the IT management software market, plus a few other areas) by 2013. That’s big, and something that CA hasn’t really touched before now.

If the product is that cool, what’s kept Nimsoft from having luck selling to larger companies?

Nimsoft has had great success. They fill the gap in the market above open source and low-end solutions, but below things like what CA's Service Assurance products (Spectrum, eHealth, NetQoS, and Wily) provide. The truth is that Nimsoft has also had luck with some very large companies that, at first blush, you'd expect to buy a more global-style solution. However, what we mostly heard when we talked to their customers was that where large customers had bought Nimsoft, they use it for specific, departmental needs, while making use of another tool from CA or others for the enterprise-wide or more in-depth tasks or analysis. This is the reason CA is saying it plans to invest in both its current Service Assurance product line and Nimsoft – these products address different markets with different customer requirements. There will always be some gray area, but we'll work through that as the companies come together.

Nimsoft fits into the CA Cloud Products & Solutions Business Line

To bring Nimsoft onboard, CA seems to have learned a lot from its Wily acquisition a number of years back. Like Wily, Nimsoft will be kept intact as a stand-alone business unit. It will become a major new piece of Chris O'Malley's new Cloud Products and Solutions Business Line. CA plans to build on what Nimsoft has done and the success they've had with MSPs and emerging enterprises. This should also enable us to tap into those emerging geographic markets I mentioned earlier.

Nimsoft should fit nicely into the Cloud organization at CA – they’ve been thinking about their tool as something that helps monitor both internal and cloud environments. Check out their Unified Monitoring site for a peek at some of the cool things they can do. It’s a bit of a high-level weather report for the cloud. It monitors up/down status for popular public cloud offerings from the likes of Amazon Web Services, Google App Engine, salesforce.com, Rackspace, and others.

This acquisition is not about technology, it’s about the market

The bottom line for this announcement is that it’s very different from the others recently announced by CA. The Nimsoft deal is less about the technology (though their technology is pretty interesting on its own) and more about the market.

By acquiring Nimsoft, CA gets a solution for customer segments that it has not typically had offerings for. These markets become one of the key places CA is going to focus for growth going forward.

And, Nimsoft will (after the close) provide CA with a really good example of what happens when you listen to your customers’ needs very precisely. Here’s to more of the same.

Tuesday, March 2, 2010

Going rogue, cloud computing-style: what you can learn by going around IT

I have to say, the San Francisco Cloud Club (#sfcloudclub on Twitter) is a great place to hear good ideas get batted around.

For the uninitiated, the group is a bunch of cloud computing experts, thinkers, and doers from around the Bay Area who occasionally give up an evening for a good discussion/argument or two about what’s happening in this market. (I wrote up one of the previous lively, cloud-filled conversations this group hosted in an earlier post: “Two cloud computing Rorschach tests: ‘legacy clouds’ and the lock-in lesson” for those who want to get a taste for the group’s content.)

As the group is getting ready for an expanded meeting of the minds connected with the upcoming Cloud Connect conference in Santa Clara (March 15-18), I realized that I'm still mulling over an idea or two about the psychology of cloud computing adoption brought up during the group's MLK Day get-together in January.

Big companies will adopt cloud computing very conservatively, right?

Back at that meeting, we spent a bit of time on how companies are actually adopting cloud computing. The difficulty that big companies currently have with the public cloud has been covered extensively. It’s worrisome for them, thanks to things we’ve all heard about, like security, performance and reliability, and lock-in worries. At Cloud Club, we discussed how many orgs, as a result, are (officially, anyway) saying that they want to work on private clouds instead.

But most interestingly, we also discussed the truly amazing tendency that people even in the most conservative IT organizations have to, well, "go rogue."

In some of the most locked-down IT environments, folks try innovative stuff on their own, even when it doesn’t meet all of their strict requirements. Why? To help them get their job done in a much better or faster way.

Now that’s interesting.

Several of the Cloud Club attendees talked about "innovation on the edges" of organizations, but even when a company's party line is to take this cloud thing one step at a time, there are people in the middle of it all trying some pretty aggressive stuff. I've heard the story many times of how the developers at Fill in Big Company Name Here had each been independently racking up huge bills (totaling in the millions of dollars, if you believe some stories) for services like Amazon EC2 without an official mandate to do so. In fact, in some cases they were ignoring an official policy not to do such things.

Ah, cloud people do the darndest things.

Despite the rules, cloud computing just might seep in to organizations

After hearing this same story from a bunch of sources, I’m coming to the conclusion that this is the way the adoption of cloud computing is really going to go. It’s going to seep into organizations, like many other compelling technologies and approaches have before it.

A now-old-school example (and the one that I had the most direct connection with) of this organic, under-the-radar type of adoption was the way the BEA WebLogic app server found its way into organizations in the late ‘90s. It was picked up by people (developers) who found it the most useful way to get their concept from idea to working app. BEA and Java rode that wave quite some distance.

And, of course, that approach isn't unique; open source is all about this. The "freemium" approach requires this same groundswell of up-front interest to create a set of targets to upsell.

Is “going rogue” in IT good or bad?

Are “mavericky” installations something to be encouraged – or rooted out? With these projects, you certainly have a chance to apply a very specific solution to a very specific problem. I’ve generally found that to be the best way to see if something works or not.

So, then, the real question will be how organizations deal with these “rogue elements” after the fact.

Some of the rogues will be business owners who wanted to get something that they perceive they cannot currently get from central IT. Others will actually be IT folks themselves who, knowing what the restrictions would be if they did things the "right" way, were either attempting an end-run in hopes of getting results that would make people take notice, or actually had at least some sort of tacit approval.

As CIOs and central IT work to get ahead of the issues that cloud computing is bringing, they will have a choice. They can bring the hammer down and punish those who went around the letter of the law to use cloud computing in unsanctioned ways. This has the very real possibility of crushing not only the entrepreneurial spirit of those who tried the new approach, but also discrediting the results they achieved (even if they are impressive in terms of either cost or agility).

Andi Mann’s EMA Responsible Cloud survey talked about this (I can still talk about the survey he ran while at EMA, even though he’s now a CA employee, right? For the record, I’m very happy to have his expertise onboard here and still have great respect for the EMA team). The EMA survey said that these unauthorized “skunkworks” cloud computing implementations showed “all the positive aspects of the pioneering spirit of the wild west cowboy” and can highlight “a valid use case, and help to advance the organization as a whole.”

These rogue implementations “can provide exceptional early stage experience and value. Practitioners attain skills, mistakes provide lessons, and experiences provide a basis for building an enterprise approach.”

How to rein in the cloud computing cowboys

To deal with them, then, EMA noted, “the key…is to recognize when such use cases happen, understand why they happen, be prepared to both take advantage of these opportunities, and be able to pull them back into a Responsible Cloud model when required.”

In my interview with Andi a couple of months ago, he suggested an approach I liked: these cowboys can become important contributors in a new cloud group within IT, with a mandate to experiment -- to go outside the box. They can start with non-disruptive systems and experimental applications, and over time apply their approach to more important systems, too, if and when that makes sense.

“Certainly stomping them out is not the answer,” said Andi. “Instead, by finding out what they are doing and why, you will learn what is broken and how to fix it.”

It’s a delicate balance, but one that can have some interesting pay-offs as organizations experiment – secretly or out in the open – with cloud computing.

If you’re interested in a literal snapshot of the San Francisco Cloud Club discussions, Gary Orenstein posted pics from the January gathering here. You’ll notice I keep my usual very low profile.

Wednesday, February 24, 2010

3Tera brings powerful, simple way for customers to move apps to clouds -- and help reshape CA?

Today’s announcement of CA’s definitive agreement to acquire 3Tera has a couple of interesting wrinkles. It’s definitely of interest for those of us who have followed 3Tera and the many companies in the cloud computing space for a while. But it also has the possibility of shaking up the admittedly stodgy image of CA – and what CA can deliver for customers thinking about cloud computing.

But first, for those who haven’t been living and breathing this stuff, here’s the quick take:

3Tera is one of the early pioneers in cloud computing. Even better, they are one of the early pioneers with customers, which tells you that they are onto something.

The 3Tera product, AppLogic, has a pretty slick graphical user interface that customers can use to configure and deploy composite applications to either public or private clouds. It takes the many, very manual tasks required to put an application into a cloud environment and lets you take care of them with an elegant series of clicks, drags, and drops. (For a high-speed tour of how this works, they have a 4-minute demo you can watch here.)

Yes, as you’d expect, there’s “some assembly required” up front to make this all possible, but one 3Tera customer noted that their IT folks would not allow them to go back to doing app configuration and deployment the old, manual way now that they were using AppLogic. We considered that a good sign.

And, because 3Tera creates a layer of abstraction for the applications that you’re encapsulating, you have a bunch of deployment options, both now and at any point in the future. That could come in very handy as you think about both public and private cloud possibilities.
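To make that abstraction idea a bit more concrete, here’s a purely hypothetical sketch – the class and function names below are mine, not AppLogic’s or any real product’s – of what it means to describe a composite app once, as a blueprint, and then pick the deployment target separately:

```python
# Hypothetical illustration only (not 3Tera's actual API): describe a composite
# application once, as an abstract blueprint, then choose where to deploy it.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Component:
    name: str                    # e.g. "load-balancer", "app-server", "database"
    image: str                   # the appliance/template the component is built from
    connects_to: List[str] = field(default_factory=list)


@dataclass
class Blueprint:
    """An abstract description of a composite app, independent of where it runs."""
    name: str
    components: List[Component]

    def deploy(self, target: str) -> None:
        # In a real product, the target-specific work (provisioning VMs, wiring
        # networks, attaching storage) would happen behind a call like this one.
        print(f"Deploying '{self.name}' to {target}:")
        for c in self.components:
            links = ", ".join(c.connects_to) or "no downstream links"
            print(f"  - {c.name} ({c.image}) -> {links}")


if __name__ == "__main__":
    web_app = Blueprint(
        name="three-tier-web-app",
        components=[
            Component("load-balancer", "lb-appliance", ["app-server"]),
            Component("app-server", "java-appliance", ["database"]),
            Component("database", "mysql-appliance"),
        ],
    )
    # The same blueprint can be pointed at different environments.
    web_app.deploy("private-cloud")
    web_app.deploy("public-cloud-provider")
```

The point is the separation of concerns: the blueprint holds the application’s structure, and the choice of target is made at deploy time – which is what keeps your options open later.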

Both service providers & enterprises can use 3Tera for cloud enablement

3Tera’s most aggressive customers have been the managed service providers – companies that are scrambling to find compelling, differentiated ways to offer cloud-based services to their customers. Enterprises have also started to see 3Tera’s possibilities for deploying composite apps into a private cloud environment. However, enterprises have been a bit more cautious, wanting to make sure they know what they are getting into before making the leap to cloud.

This is probably one of the areas that CA can help improve by backing the 3Tera innovations with significant resources: enterprises need to feel comfortable moving applications to the cloud. A 3Tera/CA combination gives enterprises a major partner providing the technology to make those steps possible – and much more doable.

And luckily, we’re expecting the whole 3Tera team to come over to CA: there’s still a bit of evangelizing left to do.

Even so, 3Tera’s customer successes have been noted by an analyst or two. In his report on private clouds last year (“Deliver Cloud Benefits Inside Your Walls,” April 13, 2009), Forrester analyst James Staten called 3Tera’s AppLogic “the leading cloud infrastructure software offering available today” and noted that it was “the foundation of many public clouds.” That’s a great foundation to build from.

One of the reasons 3Tera has probably had good success with service providers is that it uses Xen to help abstract the applications from the infrastructure. Open-source Xen is usually good news given the margin pressures that service providers are under. In the enterprise, however, VMware is a much more common and acceptable choice. VMware support was already on 3Tera’s roadmap, but here’s another bit of good news coming with this deal: CA plans to extend AppLogic to also be able to use both ESX and Microsoft’s Hyper-V.

Note to self: This is a big shift for CA

The 3Tera deal is certainly a very public acknowledgement by CA that cloud computing is front and center to what’s changing in IT. And, the deal drops another piece into place in a rapidly filling-out strategy by CA to address those changes.

Customers are likely going to have a number of important needs as cloud computing becomes more central. Organizations want to know how their systems are performing using business metrics (Oblicore can help with this). They need to decide what components to take to the cloud – and what components not to. 3Tera can help them cloud-enable and deploy the apps they want to move now. And, as customers look to optimize this process and their environment, that’s where the Cassatt expertise can be useful.

Alongside these moves, the existing CA portfolio continues to have a strong, immediate role to play. The 3Tera deal may seem like quite a shift from CA’s traditional assurance, automation, and security businesses, but it is, in fact, a complementary piece. Customers need the ability to manage their environments end-to-end, even when the cloud is involved, and they can do that (and are doing it) using CA’s existing solutions today. There are lots of opportunities for more linkage going forward as well.

Which leads me back to comments like those in Derrick Harris’s GigaOM article from a week or so ago (complete with some prognosticating about CA’s next moves): “Let’s be honest, systems management vendor CA doesn’t exactly inspire visions of innovation.”

Hopefully (as Derrick mentions), we’re in the process of changing that. But we’ll leave the verdict on that up to customers.

3Tera: Since the “good old days” of cloud computing…before it was cloud computing

On a more personal note, I’ve been watching 3Tera since my early days at Cassatt, as we and they were all wrestling with how to describe our respective offerings, how to get market traction, and what moves would pay off. Like Cassatt, these guys were cloud before cloud was cool. A lot of the way they describe themselves predates the cloud computing phraseology. And that’s OK. As I noted earlier, they focused on the problem of how to encapsulate apps and make it possible to deploy them in lots of ways and locations, including the cloud.

An interesting connection point: after Cassatt landed at CA, our former CEO Bill Coleman took on a consulting role with 3Tera that I wrote about back in August. Some of his comments then look interesting now, in light of the CA deal with 3Tera.

More details will follow here as I have a chance to dig into some of the interesting aspects of this deal, its implications, and other questions that will come up. Until then, the press release about the acquisition is posted here.

Thursday, February 11, 2010

From private clouds to solar panels: more control and uniqueness, but are they worth it?

Andi Mann of EMA wrote recently that failures are endemic to public clouds. And, by the way, that’s OK. In fact, says Andi, failures are a normal part of what your IT infrastructure needs to be able to deal with these days.

Even if you take it as a given that we’ll hear about cloud service failure after failure in the news from now on (a daunting prospect in and of itself), public clouds surprisingly still set a pretty high bar for internal IT. Andi’s figures put some public cloud uptime numbers at 3 to 3.5 “nines,” as in 99.9 or 99.95% uptime – that's 5-10 minutes of downtime each week.
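If you want to sanity-check those figures, here’s a quick back-of-the-envelope calculation – plain arithmetic, nothing vendor-specific – that converts an availability percentage into allowable downtime:

```python
# Back-of-the-envelope: convert an availability percentage into allowable downtime.

MINUTES_PER_WEEK = 7 * 24 * 60      # 10,080 minutes
MINUTES_PER_YEAR = 365 * 24 * 60    # 525,600 minutes

for availability in (0.999, 0.9995):    # "three nines" and "three and a half nines"
    weekly_minutes = (1 - availability) * MINUTES_PER_WEEK
    yearly_hours = (1 - availability) * MINUTES_PER_YEAR / 60
    print(f"{availability:.2%} uptime -> ~{weekly_minutes:.0f} min/week, "
          f"~{yearly_hours:.1f} hours/year of downtime")

# Prints:
# 99.90% uptime -> ~10 min/week, ~8.8 hours/year of downtime
# 99.95% uptime -> ~5 min/week, ~4.4 hours/year of downtime
```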

Now, if you’re hoping to get a lot of the public cloud computing benefits but to do so on-premise by creating a private cloud infrastructure, there’s a serious amount of work and investment required to match public cloud availability for all of your applications. Andi pins “normal” public cloud outages at 5-10 times less likely than those in internal data centers.

“CIOs who are planning to build their own private cloud have a surprisingly high bar to reach,” blogs Andi.

Sounds like it may not be worth the effort for private clouds, then, eh?

Actually, think again. It just might be, but for reasons other than you might think.

Private clouds hold what’s most unique about your organization

Mike Manos had some interesting observations recently, drawing on his time at Microsoft and his just-ended stint at Digital Realty Trust (it sounds like he’s heading to greener pastures at Nokia). In response to James Hamilton at Amazon (thanks to Dave O’Hara for pointing out the discussion), Mike postulated that what makes private clouds quite interesting, despite the high bar, is the way they encapsulate the unique components of an organization.

In other words, the most tailored and specific things about your IT environment are the best argument for a private cloud.

That rings true for me. Cloud computing is a way to pay for only what you need, and a way to have compute, storage, and other resources appear and disappear to support your demands, as those demands appear and disappear. There are components of what IT does for your company that are not unique. Those sound perfect to move to external, public cloud computing services at some point. They are commodities, and should be handled as such. Maybe not now, but eventually.

The more specific, complex, special pieces of IT seem logical to be the ones you keep inside as you get started down the cloud path. Those take the kid gloves and your special expertise, at least for now.

The push to get the most from those important, unique pieces of IT is giving enterprises a strong incentive to pursue a cloud-style architecture in-house. To again quote Andi Mann’s EMA research, private clouds are the preferred approach to cloud computing for 75% of the organizations they polled, far ahead of the interest in public clouds.

Are private clouds a temporary fix or a permanent fixture?

With all of that as background, how permanent are private clouds? Here’s a quick detour to help answer that:

Chris Hoff of Cisco collected commentary at his Rational Survivability blog on the topic of private clouds recently by weighing in on the appropriateness of the IT-as-electricity analogy that Nick Carr brought mainstream with The Big Switch. His quick take was that private clouds might be like batteries (though he didn’t go too far in explaining his concept, beyond labeling it an “incomplete thought” to get conversation going). However, a couple of his commenters had an analogy I liked better: that of a solar power generator.

So, is a solar power generator a good analogy for a private cloud? You’re generating “IT power” for your own use, using your own resources. Unlike what the “battery” analogy implies, a private cloud implementation is not what you’d call temporary. In fact, as Manos was thinking about private clouds in the blog noted above, one of his comments was that “there’s no such thing as a temporary data center.” Or a temporary private cloud infrastructure, I’d add. Like it or not, most IT projects, even if they are done for an ostensibly short period of time, end up living long beyond their intended sunset.

Private clouds will be no different. Many (like Gartner) see private clouds as a stepping stone or a temporary requirement until the public cloud addresses all of the roadblocks people keep complaining about. But once the infrastructure to add and subtract, build up and collapse things in your IT environment is in place, you should be able to get a lot of use out of it. And it will live on. This is similar, in fact, to the situation you’d be in if you had taken the time and effort to get that solar installation up and running.

As things progress, I predict this “either/or” kind of language (as in “public clouds” or “private clouds”) that we’ve been seeing will fall by the wayside. I think Manos is right: “and” will be the way of the future. We’ll aim for use of the public cloud where it makes sense. And we’ll keep using private clouds – leveraging their reflection of organizational uniqueness, coupled with an unintentional permanence because, well, they work. We’ll find a way to take advantage of both.

Paving the way for hybrid clouds

This scenario, by the way, makes hybrid clouds the end state, a situation that Hoff sees as “the way these things always go.” Scott Hammond, CEO of newScale, uses an example that reiterates this: “The data center looks like my dad’s basement.” In other words, IT continues to be a strange mishmash of the new, combined with all that’s come before. That’s reality.

So a conversation that started by questioning whether public cloud computing service outages are endemic (or even a problem), and then shifted to how private clouds can hold unique value for organizations, ends up connecting the two.

Of course, hybrid clouds will require an additional level of thinking, management, and control. That’s a topic that will have to get a unique post of its own one of these days.

In the meantime, I’ll leave you to ponder what other cloud computing metaphors we might be able to unearth in Scott’s dad’s basement. It just might be worth it. Especially if we find something to help those solar panels pay off.