Monday, August 30, 2010

CA snags Arcot: Another step for cloud identity & security

Not everything in the news this week is about VMworld. CA Technologies pulled in an interesting new security-related acquisition today, one that brings in solutions focused on advanced authentication and online fraud detection. The idea is to leverage these capabilities to help solve one of cloud computing’s key unsolved problems: managing identity.

As those who watch this space know, security has been on the minds of folks in cloud computing since the term appeared, with the topic topping the list of cloud worries in survey after survey.

The company being acquired is Arcot Systems, Inc., a 165-person firm based in Sunnyvale, Calif., which has a pretty healthy business in this space.

Arcot will team with CA Technologies’ security group and its existing focus on Identity and Access Management (something that the folks in the know on this stuff call, well, IAM). In fact, the CA security folks tell me they think this move does quite a bit to accelerate CA Technologies’ IAM cloud service offering.

Arcot has quite a bit of street cred, it seems. Currently, their solutions (which can be on-premise or cloud-based) are used to screen about 1 million credit card transactions a day for fraud.

“Identity is a critical area for security whether you’re talking about in-house or the cloud,” noted Arcot’s president & CEO Ram Varadarajan in the press release. They boast 120 million identities verified by their solutions today. The company has been around since 1997, has 35 patents awarded or pending, and co-invented the 3-D Secure protocol for online payment security with a little company you may have heard of called Visa.

Not too shabby. Especially on the heels of other security-related M&A activity, including Intel gobbling up McAfee, which also mentioned the cloud angle.

For more granular details, though, I’ll direct you to folks in the know on this topic. The CA Technologies cloud security strategy (articulated here) has three pieces (I’ve put a toy sketch of the first one right after the list):

· Enable organizations to extend existing on-premises IAM systems to support cloud applications and services;
· Provide IAM technology to cloud providers to secure their services – whether public, private or hybrid; and
· Enable IAM services from the cloud.
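
To make the first bullet a bit more concrete, here’s a minimal, purely illustrative sketch of what “extending on-premises IAM to cloud applications” tends to boil down to: the in-house identity system issues a short-lived, signed assertion about a user, and the cloud application verifies it before granting access. To be clear, the names, the shared-secret signing, and the claim format below are my own assumptions for illustration, not CA’s or Arcot’s actual design; real deployments would lean on standards such as SAML and proper key management.

# Hypothetical sketch only: an on-premises identity provider signs a short-lived
# assertion, and a cloud application verifies it before trusting the user's claims.
import base64, hashlib, hmac, json, time

SHARED_SECRET = b"exchanged-out-of-band"   # assumed pre-shared key between IdP and cloud app

def issue_assertion(user_id, roles, ttl_seconds=300):
    # On-premises side: build and sign a short-lived identity assertion.
    claims = {"sub": user_id, "roles": roles, "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    signature = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + signature

def verify_assertion(token):
    # Cloud application side: check the signature and the expiry before trusting the claims.
    body, signature = token.rsplit(".", 1)
    expected = hmac.new(SHARED_SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None                        # bad signature: reject
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        return None                        # expired assertion: reject
    return claims

token = issue_assertion("jsmith", ["expense-app-user"])
print(verify_assertion(token))             # prints the claims if the token checks out

The point isn’t the crypto; it’s that the authentication decision stays anchored to the identity system you already run, which is the kind of bridge the strategy above is describing.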

And, for a bit more detail about this deal in particular, check out today’s press release, Arcot’s website, or Matthew Gardiner’s blog post.

The article is cross-posted at the CA Cloud Storm Chasers blog.

Thursday, August 26, 2010

Back to school -- for the cloud? Try not to forget the multiple paths for virtualization & cloud

Summer vacation is really a bad idea.

At least, that’s what TIME Magazine reported a few weeks back. Despite our glorified, nostalgic memories of endless hours on the tire swing above the old water hole (or, more likely, trying to find towel space on a lounge chair by the gym’s overcrowded pool), apparently kids forget stuff when they aren’t in school.

So, now that everyone’s headed back to the classroom and hitting the books again, they’ve got to jog their memories on how this learning stuff worked.

Luckily, as working adults who think about esoteric IT topics like virtualizing servers and actually planning cloud computing roll-outs, we can say this is never an issue. Right? Anyone? Bueller? Bueller?

However, with VMworld imminent and people returning from vacations, it’s a good time to reiterate what I’ve been hearing from customers and others in the industry about how this journey around virtualization and cloud computing goes.

Some highlights (take notes if you’d like; there might be a short quiz next period):

Rely on the scientific method. You’re going to hear lots of announcements at VMworld next week. (In fact, many folks jumped the gun and lobbed their news into the market this week.) In any case, be a good student and diligently take notes. But then you should probably rely a bit on the scientific method. And question authority. Know what you need or at least what you think you need to accomplish your business goal. Look at any/all of our vendor announcements through that lens. You’ll probably be able to eliminate about two-thirds of what you hear next week from VMware and all its partners (and, of course, I realize that probably includes us at CA Technologies, too). But that last third is worth a closer look. And some serious questions and investigation.

The answers aren’t simply listed in the back of your textbook. Meaning what? Well, here's one thing for starters: just because you’re knee-deep in virtualization doesn’t mean you’re automagically perfectly set up for cloud computing. Virtualization is certainly a key technology that can be really useful in cloud deployments, but as I've noted here before, it’s not sufficient all by itself. The NIST definition of cloud computing (and the one I use, frankly) doesn’t explicitly mention virtualization. Of course, you do need some smart way to pool your computing resources, and 15,000 VMworld attendees can’t be wrong…right? (Go here for my write-up on last year’s VMworld event.) But just keep that in mind. There’s more to the story.

In fact, there may be more than one right answer. There isn’t one and only one path to cloud computing. My old BEA cohort Vittorio Viarengo had a piece in Forbes this week talking about virtualization as the pragmatic path to cloud. It can be. I guess it all depends on what that path is and where it goes. It just may not be ideally suited for your situation.

On the “path to cloud computing,” to borrow Vittorio’s term, there are two approaches we’ve heard from folks:

Evolution: No, Charles Darwin isn’t really a big cloud computing guru (despite the beard). But many companies are working through a step-by-step evolution to a more dynamic data center infrastructure. They work through consolidation & standardization using virtualization. They then build upon those efforts to optimize compute resources. As they progress, they automate more, and begin to rely on orchestration capabilities. The goal: a cloud-style environment inside their data center, or even one that is a hybrid of public and private. It’s a methodical evolution. This method maps to infrastructure maturity models that folks like Gartner talk about quite a bit.

Revolution: This is not something you studied in history class involving midnight rides and red coats. If organizations have the freedom (or, more likely, the pressure to deliver), they can look at a more holistic cloud platform approach that is more turn-key. It’s faster, and skips or obviates a lot of the steps mentioned in the other approach by addressing the issues in completely different ways. The benefit? You (a service provider or an end user organization) can get a cloud environment up and running in a matter of weeks. The downside? Many of the processes you’re used to will be, well, old school. You have to be OK with that.

Forrester’s James Staten explained ways to deliver internal clouds using either approach in his report about why orgs aren’t ready for internal clouds in the first place. Both the evolutionary and the revolutionary approaches are worthy of more detail in an additional post or two in the near future, I think. But the next logical question – how do you decide what approach to take? – leads to the next bit of useful advice I’ve heard:

When in doubt, pick ‘C’. Even customers picking a more evolutionary approach won’t have the luxury of a paint-by-numbers scenario. Bill Claybrook’s recent in-depth Computerworld article about the bumpy ride that awaits many trying to deliver private clouds underscores this. “Few, if any, companies go through all of the above steps/stages in parallel,” he writes. “In fact, there is no single ‘correct’ way to transition to a private cloud environment from a traditional data center.”

So, the answer may not only be a gradual evolution to cloud by way of increasing steps of virtualization, automation, and orchestration. And it may not only be a full-fledged revolution. Instead, you want to do what’s right for each situation. That means the co-existence of both approaches.

How do you decide? It’s probably a matter of time. Time-to-market, that is. In situations where you have the luxury of a longer, more methodical approach, the evolutionary steps of extending virtualization, automation, and standardization strategies are probably the right way to go. In situations where there is a willingness, eagerness, or, frankly, a need to break some glass to get things done, viva la revolution! (As you probably can guess, the CA 3Tera product falls into this latter category.)

Learn from the past. Where people have gotten stuck with things like virtualization, you’ll need to find ways around those sticking points. Sometimes that help will come from tools from folks like VMware themselves, or from broader management tools from players like, oh, say, CA Technologies or a number of others. Sometimes that help will need to be in the form of experts. As I previously posted, we’ve just brought a few of these experts onboard with the 4Base Technologies acquisition, and I bet there will be a few consulting organizations in the crowd at VMworld. Just a hunch.

Back to Claybrook’s Computerworld article for a final thought: “[O]ne thing is very clear: If your IT organization is not willing to make the full investment for whatever part of its data center is transitioned to a private cloud, it will not have a cloud that exhibits agile provisioning, elasticity and lower costs per application.”

And that’s enough to ruin anyone’s summer vacation. See you at Moscone.

If you are attending VMworld 2010 and are interested in joining the San Francisco Cloud Club members for drinks and an informal get-together on Wednesday evening before INXS, go here to sign up.

Friday, August 13, 2010

CA, 4Base, and why consulting is a good idea, even in the era of self-service

Sure, self-service is one of the key attributes expected from cloud services. But contrary to what you may hear from vendors, it’s not always possible to do everything you need to do using only something that comes in a box (or is provisioned as a service, as is increasingly the case). Getting virtualization broadly adopted in your organization, or a cloud-style infrastructure running well in your shop, is more complicated than that.

David Linthicum noted this in his InfoWorld column this week. “While private clouds seem like mounds of virtualized servers to many in IT,” he writes, “true private clouds are architecturally and operationally complex, and they require that the people behind the design and cloud creation know what they are doing. Unfortunately, few do these days.”

As strongly as folks want to believe that everything can be solved with a mouse click, the rise of boutique consulting firms focused on cloud and virtualization tells you that there’s a need here. And it’s something that CA Technologies decided to address head-on.

CA Technologies acquired 4Base Technology to fill customers’ real-world virtualization and cloud experience gap

As you might have seen from yesterday’s news, CA Technologies is pulling in a new type of expertise to offer customers help with the real-world issues that both virtualization and cloud computing create. We’ve acquired 4Base Technology, a small, focused consulting firm with people on the ground who know how these technologies and related operation models can and should work. They have seen the intricacies that IT departments are faced with daily when trying to go from a fluffy, conceptual future to a working implementation.

The folks at 4Base know the relevant technology from Cisco, Citrix, EMC, Microsoft, NetApp, and, especially, VMware. In fact, their partnership with VMware will be a great way for CA Technologies to expand our existing relationship with the market share leader in virtualization. It doesn’t hurt that 4Base is headquartered in Sunnyvale, not so very far from VMware’s sunny Palo Alto HQ (well, normally sunny. This summer, not so much).

Side-by-side collaboration with customers on virtualization & cloud

In rolling out virtualization and cloud computing for large enterprises, you really have to work side-by-side with your customers. I’ve seen this during my time here at CA Technologies and in my previous years at Cassatt. Leaving customers to figure everything out on their own is not a path to success.

In fact, I’ve heard many stories about customers struggling with what Andi Mann (from the CA Technologies virtualization management group) calls “virtual stall” as they proceed with their virtualization roll-outs. Customers start down the path, but for a variety of reasons that Andi describes, get stuck. One large customer we talked to in the Cassatt days knew they wanted to virtualize more of their thousands and thousands of servers, but really didn’t have the staff or understanding about what process to follow to get the benefits they were looking for. Or even identify what servers to virtualize next. So nothing moved. That’s not a good return on anyone’s investment.

I think this acquisition is a nice start toward helping customers address these issues (so does Andi Mann, by the way, according to his blog about 4Base). It shows that CA Technologies understands that the customer’s success isn’t something that should be left up to chance or just a best effort. It should be tackled methodically, with an approach that’s steeped in experience.

The 4Base Technology acquisition also means CA Technologies can start working with customers much earlier in their planning process – not just at the point in time when they need help installing and deploying software. That’s a shift for CA Technologies.

In fact, 4Base’s practices, service offerings, and skills will be a solid foundation for a team being formed in the services organization called the CA Global Virtualization and Cloud Consulting Team. The 4Base team has offerings ranging from virtualization operational readiness assessments, to virtualization capability assessment & strategy, to cloud-based advisory services. Watch for more interesting details on this group as it matures.

These types of offerings give CA Technologies the opportunity to bring the benefit of our experience to planning out a customer’s cloud approach, and to help see that plan through to its roll-out. Same with virtualization. Customers can make use of as little or as much of these capabilities as they need. But just having these offerings will help us be more proactive, rather than reactive – which I’m betting should please our customers.

But isn’t consulting doomed by the self-service aspects of cloud computing?

Beefing up on consulting capabilities, however, raises the question I alluded to at the start of this post: if all this cloud and virtualization stuff is supposed to be completely self-service (an attribute that appears in most standard definitions of cloud computing these days), why is this capability even needed?

Along those same lines, I saw a recent article at CIO.com by Thomas Wailgum (@twailgum on Twitter) that discussed The Coming Upheaval in Tech Services, a report by Forrester analysts John McCarthy and Pascal Matzke that was skeptical about the big consulting firms’ ability to pull down successful services business in the cloud space in the short term.

My conversation on Twitter about that article with Laurie McLaughlin at CIO Magazine and others centered on the complexity issue: “Here's the quandary,” she tweeted, “who wants to [market] cloud as so complex that you should pay consultants to help?”

While it’s true that no one’s looking for complexity, we know that complexity is with us in current, more traditional IT environments. As we get early cloud computing implementations off the ground I don’t think we have much choice: complexity will follow IT to the cloud (and back) as well. Especially if you want to connect them in any way to existing environments. This article in IT World says, in fact, that the best way to build a career in cloud computing is to help people actually implement it.

Experience matters

It may be true that the larger consulting firms will have trouble building a business on cloud implementation consulting in the short term, but this is likely because they (so far) lack the best practices and actual experience that lead you to trust someone with a strategic project like a virtualization roll-out or cloud computing project.

“At the core of this problem,” said Linthicum in InfoWorld, “is the fact that we're hype-rich and architect-poor. IT pros who understand the core concepts behind SOA, private cloud architecture, governance, and security -- and the enabling technology they require -- are few and far between, and they clearly are not walking the halls of rank-and-file enterprises and government agencies.”

So, instead, I’m betting IT will want to hire the ones who have done it before. Says Linthicum: “What can you do to get ready? The most common advice is to hire people who know what they're doing and have the experience required to get it right the first time.”

And those folks are mostly – at this point, anyway – with small firms like 4Base. Keep tabs on what CA Technologies is planning to do from here. I’m hoping for a lot of real-world success as the company builds on what 4Base has been able to learn so far, expands their reach, and accelerates from there.

This article is cross-posted at the CA Cloud Storm Chasers blog.

Tuesday, August 10, 2010

Despite the promise of cloud, are we treating virtual servers like physical ones?

RightScale had some great data about usage of Amazon EC2 recently that described how cloud computing is evolving, or at least how their portion of that business is progressing. At first glance, it certainly sounds as if things are maturing nicely.

However, a couple of things they reported caused me to question whether this trend is as rosy as it first seems, or whether IT is actually falling into a bit of a trap in the way it’s starting to use the public cloud. I’ll explain:

Cloud servers are increasing in quantity, getting bigger, and living longer, but…

The RightScale data showed that, comparing June 2009 with June 2010, there are now more customers using their service, and each of those customers is launching more and more EC2 servers. (I did see a contradictory comment about this from Antonio Piraino of Tier1 Research, but I’ll take the RightScale info at face value for the moment.)

Not only has the number of cloud customers increased, but customers are also using bigger servers (12% used “extra large” server sizes last June, jumping to 56% this June) and keeping those servers around longer (3.3% of servers were still running after 30 days in June 2009, versus 6.3% this June).

CTO Thorsten von Eicken acknowledged in his post that “of course this is not an exact science because some production server arrays grow and shrink on a daily basis and some test servers are left running all the time.” However, he concluded that there is a “clear trend that shows a continued move of business critical computing to the cloud.”

These data points, and the commentary around them, were interesting enough to catch the attention of folks like analyst James Staten from Forrester and CNET blogger James Urquhart on Twitter, and Ben Kepes from GigaOM picked it up as well. IDC analyst Matt Eastwood, who (as he said) knows “a thing or two about the server market,” was intrigued by the thread about the growing and aging of cloud servers, too, noting that average sales prices (ASPs) are rising.

Matt's comments especially got me thinking about what parallels the usage of cloud servers might have with the way the on-premise, physical server market progressed. If people are starting to use cloud servers longer, perhaps IT is doing what it does with physical boxes inside its four walls -- moving more constant, permanent workloads to those servers.

Sounds like proof that cloud computing is gaining traction, right? Sure, but it caused me to ask this question:

As cloud computing matures, will "rented" server usage in the cloud start to follow the usage pattern of "owned," on-premise server usage?

And, more specifically:

Despite all the promises of cloud computing, are we actually just treating virtual servers in the cloud like physical ones? Are we simply using cloud computing as another type of static outsourcing?

One potential explanation for the RightScale numbers is that we are simply in the early stages of this market and we in IT operations are doing what we know best in this new environment. In other words, now that some companies have tried using the public cloud (in this particular case, Amazon EC2) for short-term testing and development projects, they’ve moved some more “production”-style workloads to the cloud. They’re transplanting what they know into a new environment that on the surface seems to be cheaper.

These production apps have very steady demand – rather than the highly variable usage patterns that folks such as Joe Weinman from AT&T described in his Cloudonomics posts as ideal for the cloud. This, after all, matches the increase in longer-running servers that von Eicken wrote about.

And that seems like a bad thing to me.

Why?

Because moving applications that have relatively steady, consistent workloads to the cloud means that customers are missing one of the most important benefits of cloud computing: elasticity.

Elasticity is the component that makes a cloud architecture fundamentally different from just an outsourced application. It is also the component of the cloud computing concept that can have the most profound economic effect on an IT budget and, in the end, a company’s business. If you only pay for what you use and can handle big swings in demand by having additional compute resources automatically provisioned when required and decommissioned when not, you don’t need those resources sitting around doing nothing the rest of the time. Regardless of whether they are on-premise or in the cloud.
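
To put a toy number behind that argument: the sketch below (with made-up demand, capacity, and pricing figures, purely for illustration) compares serving a spiky workload by scaling the server count up and down each hour versus provisioning for the peak the way we tend to do with physical gear.

# Toy illustration only: the demand profile, price, and per-server capacity are invented.
HOURLY_RATE = 0.50          # assumed cost of one cloud server for one hour
REQS_PER_SERVER = 1000      # assumed requests one server can handle in an hour

def servers_needed(demand):
    # Ceiling division: provision just enough servers for this hour's demand, minimum one.
    return max(1, -(-demand // REQS_PER_SERVER))

hourly_demand = [800, 900, 5200, 9800, 4100, 900, 700, 600]    # one spiky stretch of a day

elastic_cost = sum(servers_needed(d) * HOURLY_RATE for d in hourly_demand)
peak_servers = servers_needed(max(hourly_demand))
static_cost = peak_servers * HOURLY_RATE * len(hourly_demand)  # pay for peak capacity all day

print("scale up and down with demand: $%.2f" % elastic_cost)   # $13.00 with these numbers
print("provision for the peak instead: $%.2f" % static_cost)   # $40.00 with these numbers

Run a flat, steady workload through the same loop and the two totals converge, which is exactly why the pattern in the RightScale data gives me pause.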

In fact, this ability to automatically add and subtract the computing resources that an application needs has been a bit of a Holy Grail for a while. It’s at the heart of Gartner’s real-time infrastructure concept and other descriptions of how infrastructure is evolving to more closely match your business.

Except that maybe the data say that it isn’t what’s actually happening.

Falling into a V=P trap?

My advice for companies trying out cloud-based services of any sort is to think about what they want out of this. Don’t fall into a V=P trap: that is, don’t think of virtual servers and physical servers the same way.

Separating servers from their hardware by making them virtual, and then relocating them anywhere and everywhere, including into the cloud, gives you new possibilities. The time, effort, and knowledge it’s going to take to simply outsource an application may seem worth it in the short term, but many of the public cloud’s benefits are simply not going to materialize if you stop there. Lower cost is probably one of them. Over the long haul, a steady-state app may not actually benefit from using a public cloud. The math is the math: be sure you’ve figured out your reasoning and end game before agreeing to pay month after month after month.
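
And since I just said the math is the math, here’s a hedged, back-of-the-envelope version of it, in the spirit of Weinman’s Cloudonomics posts. Every rate below is invented for illustration; the point is the shape of the comparison, so plug in your own numbers.

# Illustrative numbers only: invented rates, not any provider's actual pricing.
CLOUD_PER_HOUR  = 0.34      # assumed on-demand rental rate for an equivalent instance
OWNED_PER_HOUR  = 0.25      # assumed amortized cost of an in-house server (hardware, power, admin)
HOURS_PER_MONTH = 730

def monthly_costs(utilization):
    cloud = CLOUD_PER_HOUR * HOURS_PER_MONTH * utilization    # pay only for the hours it runs
    owned = OWNED_PER_HOUR * HOURS_PER_MONTH                  # pay for every hour, busy or idle
    return cloud, owned

for utilization in (1.0, 0.5, 0.2):                           # steady-state, mixed, bursty
    cloud, owned = monthly_costs(utilization)
    print("busy %3.0f%% of the month: cloud $%.0f vs. owned $%.0f" % (utilization * 100, cloud, owned))

With these made-up rates, the always-on app is cheaper to keep on gear you own, while the bursty one is cheaper to rent by the hour; where the crossover sits depends entirely on your own numbers, which is the homework to do before signing up to pay every month.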

Instead, I’d suggest looking for applications with different requirements: things you could not get from the part of your infrastructure that’s siloed and static today, even if it is being run by someone else. Definitely take a peek at the math that Joe Weinman did on the industry’s behalf, or at other sources, as you are deciding.

Of course, who am I to argue with what customers are actually doing?

It may turn out that people aren’t actually moving production or constant-workload apps at all. There may be an as-yet-undescribed reason for what RightScale’s data show, or a still-to-be-explained use case that we’re missing.

And if there is, I'm eager to hear it. We should all be flexible and, well, elastic enough to accept that explanation, too.

Thursday, August 5, 2010

Video: Time machines and other good uses for cloud computing

The folks working on our 3Tera AppLogic product revved up a short video that I thought was a good illustration of a couple of ways customers are using the product.

Plus, honestly, I thought the team came up with some amusing names for the not-so-amusing quandaries that customers are in – the things they are using cloud computing to solve. Add a groovy beat behind it all, and it’s certainly not the worst way to spend 3 minutes and 36 seconds on YouTube.

See if any of these sound familiar for big enterprises:

Time machine. The business needs their applications released now. Sure, they didn’t ask IT to start working on this until, well, now. What they need is a time machine. Or at least a way to help dramatically accelerate their speed to market. “Delay is not an option.” Oh, gee, thanks.

New markets/old problems. You need your applications rolled out in new places around the world. Really, this kind of replication sounds like it should be simple. I mean, they are the same applications, after all. And it is simple -- unless you’re the guy trying to help Bangalore do all this remotely from Chicago.

Full plate. Those geniuses in marketing (hey!) are throwing requirements at IT that are going to stretch the infrastructure as it is. Then they add more. It’s a big problem that needs on-demand scalability. A lot of it.

(OK, so don’t expect it to be as amusing as the conference call spoof Dave Grady did that’s going around. But that’s pretty hard to live up to.)

Here’s the video:

[video embedded in the original post]

Hint: I don’t think I’d be giving anything away if I told you that each of these scenarios has a happy ending. That’s why we brought the 3Tera guys onboard to be part of a cloud solution for customers, after all.

Any good ones they missed? Comments welcome.

Monday, August 2, 2010

Internal clouds: with the right approach, you might be more ready than you think

If you saw the headline “You’re Not Ready for Internal Cloud” making the rounds last week, you might have thought Forrester analyst James Staten was putting the nail in the coffin of the private cloud market. Or maybe you just thought he was channeling Jack Nicholson’s Colonel Jessep from “A Few Good Men” (yelling "You can’t handle the truth!”…about cloud computing?).
It turns out neither is the case as you read that Forrester report and look around at what customers are doing: internal clouds aren’t to be dismissed. And customers can handle the truth about what’s possible here and now.
James deserves congratulations for once again crystallizing some key issues that customers are having around cloud computing – and for having some good advice on ways to deal with them. (It’s not the first time James has been ahead of the curve with practical advice, I might add. For some other examples, check out the links in my interview with him from a few months back.)

It turns out James hides one of the most important messages about “being ready for internal clouds” in the subhead of the report: “…You Can Be: Here’s How.”
Here are a few things that struck me in reading both James’ report and commentary on it last week (Forrester clients in good standing can read James’ write-up here):
The industry expects internal (or private) clouds to either be dead simple or impossibly complex. The truth is in the middle somewhere. If you’re trying to take what you already have and get those IT systems to begin behaving in a cloud-like manner, there’s certainly some work to do. Rodrigo Flores of newScale paraphrased Staten’s reasoning in his blog here as this: “Cloud is a HOW, not a what. It’s a way to operate and offer IT service underpinned by self-service and automation.”

Creating an internal cloud involves both technological changes and alterations to the way IT departments manage and distribute their infrastructure and resources. Because of that, it really isn’t possible to buy an internal cloud out of a single box. Plus, I’ve heard from customers that are very interested in building on what they’ve already invested in. That requires an IT department willing to invest as well, in things like time and effort, but most of all, change. But there are some comparatively easier options. More on that in a moment.
You can’t just turn to your vendors for a simple definition of cloud computing that’s going to automatically match what you want. You have to get involved yourself. James has been one of the industry watchers who isn’t afraid to call vendors out when they are using a cloud label on their offerings without good reason. He notes that the term is thrown around “sloppily, with many providers adding the moniker to garner new attention for their products.” He warns that you can’t just “turn to your vendor partners to decode the DNA of cloud computing.” His comments match what I say in my Cloud Computing 101 overview. I believe the definition of cloud computing (and in this case internal or private cloud) is less important than actually figuring out what you can and should be doing with it. And that’s something that IT departments need to work on in conjunction with their vendors and partners. You can’t just take their word for it.

You do need to be pretty self-critical and evaluate your level of readiness for an internal cloud. In his article about James’ report, Larry Dignan from ZDNet notes that “there’s a lot of not-so-sexy grunt work to do before cloud computing is an option. IT services aren’t standardized and you need that [in order] to automate.” The Forrester report has a checklist of things you should think about to decide if you’re ready to take some of these steps.
One of the highlights from that list is the importance of sharing IT infrastructure among multiple groups. As Dignan says, “various business units inside a company are rarely on the same infrastructure.” However, writes James, “the economic benefits of cloud simply don’t work without this” kind of sharing.
A while back (over a year ago, in fact), my blog co-contributor Craig Vosburgh posted some of his thoughts about what you must be ready to do from a technical standpoint to really operate a private cloud. It turns out many of these are pretty complementary to James’ list. Craig mentioned the willingness to be comfortable with change of a style and scope that IT shops haven’t been used to before, given that workloads will possibly be shifting around from place to place at a moment’s notice, along with a number of other, more technologically focused considerations. Whatever list you use, the point is that you need to have one. And you need to make sure you’ve thought through where you are now and where you’re going.

One measure that James spends a good portion of his paper talking about is the level of an organization’s virtualization maturity. His belief is that this will have a strong effect on their ability to make an internal cloud a reality.
Is this a useful thing to measure? From what I’ve seen, yes…and no. Virtualization maturity is important, but greenfield opportunities give you the chance to start from scratch with something simple and show big gains. James mentions several ways to actually deliver internal clouds. Only one of those involves progressing through all stages of a virtualization & automation project involving your existing, complex infrastructure for your key applications. He notes (and customers have, too) that you can actually start with something much simpler.
After all, as Rodrigo Flores noted in his blog, “why get stuck solving the hardest, most complex applications” when, in fact, those tend to be pretty stable, don’t really move the needle, and are often under the most intense governance and compliance scrutiny?
This is where a turnkey cloud solution can be a big help (and, yes, it should be noted that CA Technologies’ 3Tera solution falls into that category). Instead of having to move through a slower set of evolutionary steps around the systems and applications that you already have running and don’t want to disrupt, the other option is to start something new. Pick one of those new applications you’ve been asked for that’s not a bet-the-business proposition and try something that gives you a big leap over what already exists or the standard approach. If it fails, you learn something at minimal risk. But if it’s successful, you have a powerful new model to think about and try to replicate in other areas.

And, enough with the excuses. The bottom line is that trying one of these approaches is going to be critical for the IT department’s future. There are certainly very good reasons that internal or private clouds are going to be difficult. But I believe IT is going to be a lot like James Urquhart envisions in his recent post: it's going to be “hybrid IT.” For some things, you will leverage applications and IT infrastructure from inside your firewall; for other things, you'll get help from outside. There will be a need to manage each piece, and to manage the whole thing as a coherent IT supply chain. The more experience that you and your IT shop have with even a piece of this early in the process, the better choices you’ll be able to make from here on out.

In either case, take a look at James Staten’s report if you can. It’s helpful in thinking through these and a bunch of other very relevant considerations around internal clouds. But maybe its most useful role is to serve as a bit of an internal rallying cry, helping you get your act together on internal clouds.

Who knows? With the right approach, you might be more ready for internal clouds than you think.