Friday, December 31, 2010

A cloudy look back at 2010

Today seemed like a good day to take stock of the year in cloud computing, at least according to the view from this Data Center Dialog blog – and from what you as readers thought was interesting over the past 12 months.

Setting the tone for the year: cloud computing M&A

It probably isn’t any big surprise that 3 of the 4 most popular articles here in 2010 had to do with one of the big trends of the year in cloud computing – acquisitions. (Especially since my employer, CA Technologies, had a big role in driving that trend.) CA Technologies made quite an impact with our successive acquisitions of Oblicore, 3Tera, and Nimsoft at the beginning of the year. We followed up by bringing onboard others like 4Base, Arcot, and Hyperformix.

But those first three set the tone for the year: the cloud was the next IT battleground and the big players (like CA) were taking it very seriously. CRN noted our moves as one of the 10 Biggest Cloud Stories of 2010. Derrick Harris of GigaOm called us out as one of the 9 companies that drove cloud in 2010. And Krishnan Subramanian included CA's pick-up of 3Tera and Nimsoft in his list of key cloud acquisitions for the year at CloudAve.

As you’d expect, folks came to Data Center Dialog to get more details on these deals. We had subsequent announcements around each company (like the release of CA 3Tera AppLogic 2.9), but the Nimsoft one got far and away the most interest. I thought one of the more interesting moments was how Gary Read reacted to a bunch of accusations of being a “sell-out” and going to the dark side by joining one of the Big 4 management vendors they had been aggressively selling against. Sure, some of the respondents were competitors trying to spread FUD, but he handled it all clearly and directly – Gary’s signature style, I’ve come to learn.

What mattered a lot? How cloud is changing IT roles

Aside from those acquisitions, one topic was by far the most popular: how cloud computing was going to change the role of IT as a whole – and individual IT jobs as well. I turned my November Cloud Expo presentation into a couple of posts on the topic. Judging by readership and comments, my “endangered species” list for IT jobs was the most popular. It included some speculation that jobs like capacity planning, network and server administration, and even CIO were going the way of the dodo. Or were at least in need of some evolution.

Part 2 conjured up some new titles that might be appearing on IT business cards very soon, thanks to the cloud. But that wasn’t nearly as interesting for some reason. Maybe fear really is the great motivator. Concern about the changes that cloud computing is causing to people’s jobs certainly figured as a strong negative in the survey we published just a few weeks back. Despite a move toward “cloud thinking” in IT, fear of job loss drove a lot of the negative vibes about the topic. Of course, at the same time, IT folks are seeing cloud as a great thing to have on their resumes.

All in all, this is one of the major issues for cloud computing, not just for 2010 but in general. The important issue around cloud computing is not so much figuring out the technology; it’s figuring out how to run and organize IT in a way that makes the best use of technology, creates processes that are most useful for the business, and that people can learn to live and work with on a daily basis. I don’t think I’m going out on a limb here to say that this topic will be key in 2011, too.

Learning from past discussions on internal clouds

James Urquhart noted in his “cloud computing gifts of 2010” post at CNET that the internal/private cloud debate wound its way down during the year, ending in a truce. “The argument died down…when both sides realized nobody was listening, and various elements of the IT community were pursuing one or the other – or both – options whether or not it was ‘right.’” I tend to agree.

These discussions (arguments?), however, made one of my oldest posts, “Are internal clouds bogus?” from January 2009, the 5th most popular one – *this* year. I stand by my conclusion (and it seems to match where the market has ended up): regardless of what name you give the move to deliver a more dynamic IT infrastructure inside your four walls, it’s compelling. And customers are pursuing it.

Cloud computing 101 remained important

2010 was a year in which the basics remained important. The definitions really came into focus, and a big chunk of the IT world joined the conversation about cloud computing. That meant that things like my Cloud Computing 101 post, expanding on my presentation on the same topic at CA World in May, garnered a lot of attention.

Folks were making sure they had the basics down, especially since a lot of the previously mentioned arguments were settling down a bit. My post outlined a bunch of the things I learned from giving my Cloud 101 talk, chief among them: don’t get too far ahead of your headlights. If you start being too theoretical, customers will quickly snap you right back to reality. And that’s how it should be.

Beginning to think about the bigger implications of cloud computing

However, several forward-looking topics ended up at the top of the list at Data Center Dialog this year as well. Readers showed interest in some of the things that cloud computing was enabling, and what it might mean in the long run. Consider these posts as starting points for lots more conversations going forward:

Despite new capabilities, are we just treating cloud servers like physical ones? Some data I saw from RightScale about how people are actually using cloud servers got me thinking that, despite the promise of virtualization and cloud, people perhaps aren’t making the most of these new-fangled options. In fact, it sounded like we were just doing the same thing with these cloud servers as we’ve always done with physical ones. To me, that missed the whole point.

Can we start thinking of IT differently – maybe as a supply chain? As we started to talk about the CA Technologies view of where we think IT is headed, we talked a lot about a shift away from “IT as a factory,” in which everything was created internally, to one where IT is the orchestrator of services coming from many internal and external sources. It implies a lot of changes, including expanded management requirements. And it caught a lot of analyst, press, customer – and reader – attention, including this post from May.

Is cloud a bad thing for IT vendors? Specifically, is cloud going to cut deeply into the revenues that existing hardware and software vendors are getting today from IT infrastructure? This certainly hasn’t been completely resolved yet. But 2010 was definitely a year in which vendors made their intentions known: they aren’t going to stand still. Oracle, HP, IBM, BMC, VMware, CA, and a cast of thousands (OK, dozens at least) of start-ups all made significant moves, often at their own user conferences, or at events like Cloud Expo or Cloud Connect.

What new measurement capabilities will we need in a cloud-connected world? If we are going to be living in a world that enables you to source IT services from a huge variety of providers, there is definitely a need to help make those choices. And even to just have a common, simple, business-level measuring stick for IT services in the first place. CA Technologies took a market-leading stab at that by contributing to the Service Measurement Index that Carnegie Mellon is developing, and by launching the Cloud Commons community. This post explained both.
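
To make the idea concrete, here’s a hypothetical sketch of what a common, business-level measuring stick might look like in practice. The categories, weights, and ratings below are invented for the example; they are not the actual Service Measurement Index definitions.

```python
# Hypothetical illustration of a weighted, business-level "measuring stick"
# for comparing IT service providers. Categories and weights are invented
# for this sketch; they are not the real SMI specification.

WEIGHTS = {
    "quality": 0.25,
    "agility": 0.20,
    "cost": 0.20,
    "security": 0.20,
    "capability": 0.15,
}

def score(ratings: dict[str, float]) -> float:
    """Collapse per-category ratings (0-10) into one weighted score."""
    return sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS)

# Assumed example ratings for two fictional providers.
providers = {
    "Provider A": {"quality": 8, "agility": 6, "cost": 9, "security": 7, "capability": 6},
    "Provider B": {"quality": 7, "agility": 9, "cost": 6, "security": 8, "capability": 8},
}

for name, ratings in sorted(providers.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings):.2f}")
```

The point is less the specific math than having one comparable, business-level number per provider to start the sourcing conversation with.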

So what’s ahead for 2011 in cloud computing?

That sounds like a good topic for a blog post in the new year. Until then, best wishes as you say farewell to 2010. And rest up. If 2011 is anything like 2010, we’ll need it.

Wednesday, December 29, 2010

Making 'good enough' the new normal

In looking back on some of the more insightful observations that I’ve heard concerning cloud computing in 2010, one kept coming up over and over again. In fact, it was reiterated by several analysts onstage at the Gartner Data Center Conference in Las Vegas earlier this month.

The thought went something like this:

IT is being weighed down by more and more complexity as time goes on. The systems are complex, the management of those systems is complex, and the underlying processes are, well, also complex.

The cloud seems to offer two ways out of this problem. First, going with a cloud-based solution allows you to start over, often leaving a lot of the complexity behind. But that’s been the same solution offered by any greenfield effort – it always seems deceptively easier to start over than to evolve what you already have. Note that I said “seems easier.” The real-world issues that got you into the complexity problem in the first place quickly return to haunt any such project. Especially in a large organization.

Cloud and the 80-20 rule

But I’m more interested in highlighting the second way that cloud can help. That way is more about the approach to architecture embodied in a lot of cloud computing efforts. Instead of building the most thorough, full-featured systems, cloud-based systems often use “good enough” as their design point.

This is the IT operations equivalent of the 80-20 rule. It’s the idea that not every system has to have full redundancy, failover, and the like. It doesn’t need to be perfect or have every possible feature. You don’t need to know every gory detail from a management standpoint. In most cases, going to those extremes means what you’re delivering will be over-engineered and not worth the extra time, effort, and money. That kind of bad ROI is a problem.
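
To see how the over-engineering math can play out, here’s a back-of-the-envelope sketch. Every number in it is an assumption for illustration, not data from any of the sources mentioned here.

```python
# Back-of-the-envelope "good enough" math. All figures are assumptions
# for illustration only.

hours_per_year = 24 * 365

def downtime_cost(availability: float, cost_per_hour: float) -> float:
    """Expected annual cost of downtime at a given availability level."""
    return (1 - availability) * hours_per_year * cost_per_hour

cost_per_hour = 1_000          # assumed business cost of one outage hour
basic = downtime_cost(0.995, cost_per_hour)    # "good enough" tier
gold = downtime_cost(0.9999, cost_per_hour)    # fully redundant tier
extra_engineering = 75_000     # assumed annual cost of the extra redundancy

print(f"Downtime cost at 99.5%:  ${basic:,.0f}/yr")
print(f"Downtime cost at 99.99%: ${gold:,.0f}/yr")
print(f"Savings: ${basic - gold:,.0f}/yr vs. ${extra_engineering:,} to get them")
```

In this made-up case, the extra availability tier costs more than the downtime it prevents – exactly the bad ROI the 80-20 rule warns about.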

“IT has gotten away from ‘good enough’ computing,” said Gartner’s Donna Scott in one of her sessions at the Data Center Conference. “There is a lot an IT dept can learn from cloud, and that’s one of them.”

The experiences of eBay

In talking about his experiences at eBay during the same conference, Mazen Rawashdeh, vice president of eBay’s technology operations, talked about his company’s need to understand what made the most impact on cost and efficiency and to optimize for those. That meant a lot of “good enough” decisions in other areas.

eBay IT developed metrics that helped drive the right decisions, and then focused, according to Rawashdeh, on innovation, innovation, innovation. They avoided the things that would weigh them down because “we needed to break the linear relationship between capacity growth and infrastructure cost,” said Rawashdeh. At the conference, he laid out a blueprint for a pretty dynamic IT operations environment, stress-tested by one of the bigger user bases on the Web.

Rawashdeh couched all of this IT operations advice in one of his favorite quotes from Charles Darwin: “It’s not the strongest of species that survive, nor the most intelligent, but the ones most responsive to change.” In the IT context, it means being resilient to lots of little changes – and little failures – so that the whole can still keep going. “The data center itself is our ‘failure domain,’” he said. Architecting lots of little pieces to be “good enough” lets the whole be stronger, and more resilient.

Everything I needed to know about IT operations I learned from my cloud provider

So who seems to be the best at “good enough” IT these days? Most would point to the cloud service providers, of course.

Many end-user organizations are starting to get this kind of experience, but aren’t very far along yet. Forrester’s James Staten says in his 2011 predictions blog that he believes end-user organizations will build private clouds in 2011, “and you will fail. And that’s a good thing. Because through this failure you will learn what it really takes to operate a cloud environment.” He recommends that you “fail fast and fail quietly. Start small, learn, iterate, and then expand.”

Most enterprises, Staten writes, “aren’t ready to pass the baton” – to deliver this sort of dynamic infrastructure – yet. “But service providers will be ready in 2011.” Our own Matt Richards agrees. He created a holiday-inspired list of some interesting things that service providers are using CA Technologies software to make possible.

In fact, Gartner’s Cameron Haight had a whole session at the Vegas event to highlight things that IT ops can learn from the big cloud providers.

Some highlights:

· Make processes experience-based, rather than set by experts. Just because it was done one way before doesn’t mean that’s the right way now. “Cloud providers get good at just enough process,” said Haight, especially in the areas of deployment and incident management.

· Failure happens. In fact, the big guys are moving toward a “recovery-oriented” computing philosophy. “Don’t focus on avoiding failure, but on recovery,” said Haight. The important stat with this approach is not mean-time-between-failures (MTBF), but mean-time-to-repair (MTTR). Reliability, in this case, comes from software, not the underlying hardware. (A quick bit of arithmetic after this list shows why.)

· Manageability follows from both software and management design. Management should lessen complexity, not add to it. Haight pointed toward tools trying to facilitate “infrastructure as code,” to enable flexibility.
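
On the MTTR point, a quick bit of arithmetic shows why the recovery-oriented crowd focuses there. The standard steady-state formula is availability = MTBF / (MTBF + MTTR); the specific hours below are assumptions for illustration.

```python
# Why recovery-oriented computing favors MTTR: steady-state availability
# is MTBF / (MTBF + MTTR). The hours below are assumed for illustration.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

baseline = availability(mtbf_hours=1_000, mttr_hours=4)
better_hw = availability(mtbf_hours=2_000, mttr_hours=4)      # double MTBF
faster_fix = availability(mtbf_hours=1_000, mttr_hours=0.5)   # automate recovery

print(f"baseline:       {baseline:.5f}")    # ~0.99602
print(f"2x MTBF:        {better_hw:.5f}")   # ~0.99800
print(f"8x faster MTTR: {faster_fix:.5f}")  # ~0.99950
```

In this sketch, cutting repair time from four hours to thirty minutes beats doubling the hardware’s MTBF – which is why, in this view, reliability comes from software rather than the underlying hardware.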

Know when you need what

So, obviously, “good enough” is not always going to be, well, good enough for every part of your IT infrastructure. But it’s an idea that’s getting traction because of successes with cloud computing. Those successes are causing IT people to ask a few fundamental questions about how they can apply this approach to their specific IT needs. And that’s a useful thing.

In thinking about where and how “good enough” computing is appropriate, you need to ask yourself a couple of questions. First, how vital is the system I’m working on? What are its uses, its tolerance for failure, and its overall importance? The more critical it is, the more careful you have to be with your threshold of “good enough.”

Second, is speed of utmost importance? Cost? Security? Or a set of other things? Like Rawashdeh at eBay, know what metrics are important, and optimize to those.

Be honest with yourself and your organization about places you can try this approach. It’s one of the ideas that got quite a bit of attention in 2010 that’s worth considering.

Thursday, December 16, 2010

Survey points to the rise of 'cloud thinking'

In any developing market, doing a survey is always a bit of a roll of the dice. Sometimes the results can be pretty different from what you expected to find.

I know a surprise like that sounds unlikely in the realm of cloud computing, a topic that, if anything, feels over-scrutinized. However, when the results came back from the Management Insight survey (that CA Technologies sponsored and announced today), there were a few things that took me and others looking at the data by surprise.

Opinions of IT executives and IT staffs on cloud don’t differ by too much. We surveyed both decision makers and implementers, thinking that we’d find some interesting discrepancies. We didn’t. They all pretty much thought cloud could help them on costs, for example. And regardless of both groups’ first impressions, I’m betting cost isn’t their eventual biggest benefit. Instead, I’d bet that it’s agility – the reduced time to having IT make a real difference in your business – that will probably win out in the end.

IT staff are of two minds about cloud. One noticeable contradiction in the survey was that the IT staff was very leery about cloud because they see its potential to take away their jobs. At the same time, one of the most popular reasons to support a cloud initiative was because it familiarized them with the latest and greatest in technology and IT approaches. It seems to me that how each IT person deals with these simultaneous pros and cons will decide a lot about the type of role they will have going forward. Finding ways to learn about and embrace change can’t be a bad thing for your resume.

Virtualization certainly has had an impact on freeing people to think positively about cloud computing. I wrote about this in one of my early blogs about internal clouds back at the beginning of 2009 – hypervisors helped IT folks break the connection between a particular piece of hardware and an application. Once you do that, you’re free to consider a lot of “what ifs.”

This new survey points out a definite connection between how far people have gotten with their virtualization work and their support for cloud computing. The findings say that virtualization helps lead to what we’re calling “cloud thinking.” In fact, the people most involved in virtualization are also the ones most likely to be supportive of cloud initiatives. That all makes sense to me. (Just don’t assume that because you’ve virtualized some servers, you’ve done everything you need to do to get the benefits of cloud computing.)

The survey shows people expect a gradual move from physical infrastructure to virtual systems, private cloud, and public cloud – not a mad rush. Respondents did admit to quite a bit of cloud usage – more than many other surveys I’ve seen. That leads you to think that cloud is starting to come of age in large enterprises (to steal a phrase from today’s press release). But it’s not happening all at once, and there’s a combination of simple virtualization and a use of more sophisticated cloud-based architectures going on. That’s going to lead to mixed environments for quite some time to come, and a need to manage and secure those diverse environments, I’m betting.

There are open questions about the ultimate cost impact of both public and private clouds. One set of results listed cost as a driver and an inhibitor for public clouds, and as a driver and an inhibitor for private ones, too. Obviously, there’s quite a bit of theory that has yet to be put into practice. I bet that’s what a lot of the action in 2011 will be all about: figuring it out.

And who can ignore politics? Finally, in looking at the internal organizational landscape of allies and stonewallers, the survey reported what I’ve been hearing anecdotally from customers and our folks who work with them: there are a lot of political hurdles to get over to deliver a cloud computing project (let alone a success). The survey really didn’t provide a clear, step-by-step path to success (not that I expected it would). I think the plan of starting small, focusing on a specific outcome, and being able to measure results is never a bad approach. And maybe those rogue cloud projects we hear about aren’t such a bad way to start after all. (You didn’t hear that from me, mind you.)

Take a look for yourself

Those were some of the angles I thought were especially interesting, and, yes, even a bit surprising in the survey. In addition to perusing the actual paper that Management Insight wrote (registration required) about the findings, I’d also suggest taking a look at the slide show highlighting a few of the more interesting results graphically. You can take a look at those slides here.

I’m thinking we’ll run the survey again in the middle of next year (at least, that seems like about the right timing to me). Two things will be interesting to see. First, what will the “cloud thinking” that we’re talking about here have enabled? The business models that cloud computing makes possible are new, pretty dynamic, and disruptive. Companies that didn’t exist yesterday could be challenging big incumbents tomorrow with some smart application of just enough technology. And maybe with no internal IT whatsoever.

Second, it will be intriguing to see what assumptions that seem so logical now will turn out to be – surprisingly – wrong. But, hey, that’s why we ask these questions, right?

This blog is cross-posted on The CA Cloud Storm Chasers site.

Tuesday, December 14, 2010

Beyond Jimmy Buffett, sumos, and virtualization? Cloud computing hits #1 at Gartner Data Center Conference

I spent last week along with thousands of other data center enthusiasts at Gartner’s 2010 Data Center Conference and was genuinely surprised by the level of interest in cloud computing on both sides of the podium. As keynoter and newly minted cloud computing expert Dave Barry would say, I’m not making this up.

This was my sixth time at the show (really), and I’ve come to use it as a benchmark for the types of conversations that are going on at very large enterprises around their infrastructure and operations issues. And as slow to move as you might think that part of the market is, there are some interesting insights to be gained comparing changes over the years – everything from the advice and positions of the Gartner analysts, to the hallway conversations among the end users, to really important things, like the themes of the hospitality suites.

So, first and foremost, the answer is yes, APC did invite the Jimmy Buffett cover band back again, in case you were wondering. And someone decided sumo wrestling in big, overstuffed suits was a good idea.

Now, if you were actually looking for something a little more related to IT operations, read on:

Cooling was hot this year…and cloud hits #1

It wasn’t any big surprise what was at the top of people’s lists this year. The in-room polling at the opening keynotes placed data center space, power, and cooling at the top of the list of biggest data center challenges (23%). The interesting news was that developing a private/public cloud strategy came in second (16%).

This interest in cloud computing was repeated in the Gartner survey of CIOs’ top technology priorities. Cloud computing was #1. It made the biggest jump of all topics since their ’09 survey, bypassing virtualization on its way to the top of the list. But don’t think virtualization wasn’t important: it followed right behind at #2. Gartner’s Dave Cappuccio made sure the audience was thinking big on the virtualization topic, saying that it wasn’t just about virtualizing servers or storage now. It’s about “the virtualization of everything. Virtualization is a continuing process, not a one-time project.”

Bernard Golden, CEO of Hyperstratus and CIO.com blogger (check out his 2011 predictions here), wondered on Twitter if cloud leapfrogging virtualization didn’t actually put the cart before the horse. I’m not sure if CIOs know whether that’s true or not. But they do know that they need to deal with both of these topics, and they need to deal with them now.

Putting the concerns of 2008 & 2009 in the rear-view mirror

This immediacy for cloud computing is a shift from the previous years, I think. A lot of 2008 was about the recession’s impact, and even 2009 featured sessions on how the recession was driving short-term thinking in IT. If you want to do a little comparison yourself, take a look at a few of my entries about this same show from years past (spanning the entire history of my Data Center Dialog blog to date, in fact). Some highlights: Tom Bittman’s 2008 keynote (he said the future looks a lot like a private cloud), 2008’s 10 disruptive data center technologies, 2008’s guide to building a real-time infrastructure, and the impact of metrics on making choices in the cloud from last year.

The Stack Wars are here

Back to today (or at least, last week), though. Gartner’s Joe Baylock told the crowd in Vegas that this was the year that the Stack Wars ignited. With announcements from Oracle, VCE, IBM, and others, it’s hard to argue.

The key issue in his mind was whether these stack wars will help or inhibit innovation over the next 5 years. Maybe it moves the innovation to another layer. On the other hand, it’s hard for me to see how customers will allow stacks to rule the day. At CA Technologies, we continue to hear that customers expect to have diverse environments (that, of course, need managing and securing, cloud-related and otherwise). Baylock’s advice: “Avoid inadvertently backing into any vendor’s integrated stack.” Go in with your eyes open.

Learning about – and from – cloud management

Cloud management was front and center. Enterprises need to know, said Cameron Haight, that management is the biggest challenge for private cloud efforts. Haight called out the Big Four IT management software vendors (BMC, CA Technologies, HP software, and IBM Tivoli) as being slow to respond to virtualization, but he said they are embracing the needs around cloud management much faster. 2010 has been filled with evidence of that from my employer – and the others on this list, too.

There’s an additional twist to that story, however. In-room polling at several sessions pointed to interest from enterprises in turning to public cloud vendors themselves as their primary cloud management provider. Part of this is likely to be an interest in finding “one throat to choke.” Haight and Donna Scott also noted several times that there’s a lot to be learned from the big cloud service providers and their IT operations expertise (something worthy of a separate blog, I think). Keep in mind, however, that most enterprise operations look very different (and much more diverse) than the big cloud providers’ operations.

In a similar result, most session attendees also said they’d choose their virtualization vendors to manage their private cloud. Tom Bittman, in reviewing the poll in his keynote, noted that “the traditional management and automation vendors that we have relied on for decades are not close to the top of the list.” But, Bittman said, “they have a lot to offer. I think VMware’s capabilities [in private cloud management] are overrated, especially where heterogeneity is involved.”

To be fair: Bittman made these remarks because VMware topped the audience polling on this question. So, it’s a matter of what’s important in a customer’s eyes, I think. In a couple of sessions, this homogeneous vs. heterogeneous environment discussion became an important way for customers to evaluate what they need for management. Will they feel comfortable with only what each stack vendor will provide?

9 private cloud vendors to watch

Bittman also highlighted 9 vendors that he thought were worthy of note for customers looking to build out private clouds. The list included BMC, Citrix, IBM, CA Technologies, Eucalyptus, Microsoft, Cisco, HP, and VMware.

He predicted very healthy competition in the public cloud space (and even public cloud failures, as Rich Miller noted at Data Center Knowledge) and similarly aggressive competition for the delivery of private clouds. He believed there would even be fierce competition in organizations where VMware owns the hypervisor layer.

As for tidbits about CA Technologies, you won’t be surprised to learn that I scribbled down a comment or two: “They’ve made a pretty significant and new effort to acquire companies to provide strategic solutions such as 3Tera and Cassatt,” said Bittman. “Not a vendor to be ignored.” Based on the in-room polling, though, we still have some convincing to do with customers.

Maybe we should figure out what it’ll take to get the Jimmy Buffett guy to play in our hospitality suite next year? I suppose that and another year of helping customers with their cloud computing efforts would certainly help.

In the meantime, it’s worth noting that TechTarget’s SearchDataCenter folks also did a good job with their run-down on the conference, if you’re interested in additional color. A few of them might even be able to tell you a bit about the sumo wrestling, if you ask nicely.

And I’m not making that up either.

Tuesday, December 7, 2010

Cloud conjures up new IT roles; apps & business issues are front & center

So you’ve managed to avoid going the way of the dodo, and dodged the IT job “endangered species list” I talked about in my last post (and at Cloud Expo). Great. Now the question is, what are some of the roles within IT that cloud computing is putting front & center?

I listed a few of my ideas during my Cloud Expo presentation a few weeks back. My thoughts are based on what I’ve heard and discussed with IT folks, analysts, and other vendors recently. Many of those thoughts even resonated well with what I’ve heard this week here at the Gartner Data Center Conference in Las Vegas.

IT organizations will shift how they are, well, organized

James Urquhart from Cisco put forth an interesting hypothesis a while back on his “Wisdom of Clouds” blog at CNET that identified 3 areas he thought would be (and should be) key for dividing up the jobs when IT operations enters “a cloudy world,” as he put it.

First, there’s a group that James called InfraOps. That’s the team focused on the server, network, and storage hardware (and often virtual versions of those). Those selling equipment (like Cisco) posit that this area will become more homogeneous, but I’m not sold on that. James followed that with ServiceOps, the folks managing a service catalog and a service portal, and finally AppOps. AppOps is the group that manages the applications themselves: it executes and operates them, makes sure they are deployed correctly, watches the SLAs, and the like.

I thought these were pretty useful starting points. I agree whole-heartedly with the application-centric view he highlights. In fact, in describing the world in which IT is the manager of an IT service supply chain, that application-first approach seems paramount. The role of the person we talk to about CA 3Tera AppLogic, for example, is best described as “applications operations.”

Just as important as an application-centric approach are the skills to translate business needs into technical delivery, even if you don’t handle the details yourself. More on that below.

Some interesting new business cards

I can already see some interesting new titles on IT peoples’ business cards in the very near future thanks to the cloud. Here are some of those that I listed in my presentation, several of which were inspired by the research that Cameron Haight and Milind Govekar have been publishing at Gartner. (If you’re a Gartner client, their write-up on “I&O Roles for Private Cloud-Computing Environments” is a good one to start with.):

· “The Weaver” (aka Cloud Service Architect) – piecing together the plan for delivering business services
· “The Conductor” (aka Cloud Orchestration Specialist) – directing how things actually end up happening inside and outside your IT environment
· “The Decider” (aka Cloud Service Manager) – more of a vendor evaluator and the person who sets up and eliminates relationships for pieces of your business service
· “The Operator…with New Tools” (aka Cloud Infrastructure Administrator) – this may sound like a glorified version of an old network or system administrator, but there’s no way this guy’s going to use the same tools to figure all this out that he has in the past.

In her presentation at the Gartner Data Center Conference called “Bringing Cloud to Earth for Infrastructure & Operations: Practical Advice and Implementations,” Donna Scott hit upon these same themes. Some of the new roles she listed included the solution architect, the automation specialist, the service owner, the cloud capacity manager, and the IT financial/costing analyst. Note the focus on business-level issues – both the service and the cost.

Blending

Or maybe the truth is that these roles blend together a bit more. I could see the IT organization evolving to perform three core functions in support of application delivery:

— Source & Operate Resource Pools. This function would maintain efficient data center resources, including the management of automation and hypervisors. At its core is the ability to manage resources effectively: to determine how much memory and CPU are available at any given time, and to scale capacity up and down in response to demand (paying only for what you use). These resources might eventually be sourced internally or externally, and will most often be a combination of the two. This function will be responsible for making sure the right resources are available at the right time and that the cost of those resources is optimized. (A minimal sketch of this sourcing decision follows this list.)

— Application Delivery. The second function focuses on building application infrastructure proactively, so that when the next idea comes down the pike, you’re ready. You can provide the business with a series of different, ready-made configurations from which to choose, and have the ability to get these pre-configured solutions up and running quickly when they’re needed.

— Assemble Service. The last, higher-level function is the process of engaging closely with the business. You’ll be able to say to the business ‘bring me your best ideas’ and turn those concepts into real, working systems quickly, without having to dive into the lower-level technical bits & bytes that some of the previously mentioned folks would.
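
As promised above, here’s a minimal sketch of the kind of placement decision the resource-pool function would make. The pool names, prices, and capacities are all assumptions for illustration, not anything from CA Technologies or Gartner.

```python
# A minimal sketch of the "source & operate resource pools" idea: given
# current demand, decide how much capacity to run and where to source it.
# All names, prices, and thresholds are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    capacity: int         # units of capacity available
    cost_per_unit: float  # assumed hourly cost per unit

def place(demand: int, pools: list[Pool]) -> dict[str, int]:
    """Fill demand from the cheapest pools first (internal or external)."""
    placement: dict[str, int] = {}
    for pool in sorted(pools, key=lambda p: p.cost_per_unit):
        take = min(demand, pool.capacity)
        if take:
            placement[pool.name] = take
            demand -= take
        if demand == 0:
            break
    if demand:
        raise RuntimeError(f"unmet demand: {demand} units")
    return placement

pools = [
    Pool("internal-dc", capacity=80, cost_per_unit=0.05),
    Pool("public-cloud", capacity=500, cost_per_unit=0.12),
]
print(place(120, pools))  # {'internal-dc': 80, 'public-cloud': 40}
```

A real version would fold in constraints like data location, security, and SLAs, but the shape of the decision – meet demand from the best-fit mix of internal and external pools – stays the same.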

In all cases, I’m talking about ways to deliver your application or business service, and the technical underpinnings are fading into the background a bit. “Instead of being builders of systems,” said Mike Vizard in a recent IT Business Edge article, “most IT organizations will soon find themselves orchestrating IT services.”

James Urquhart talks about this as an important tenet of the DevOps approach: the drive to make the application the unit of administration, not the infrastructure. James had another post more recently that underscored his disappointment that shows like Cloud Expo are still focusing more on the infrastructure folks, not the ones thinking about the applications. I’m all in favor of the shift. I heard Gartner’s Cameron Haight suggest why this might be true in most IT shops: while development has spent a lot of time working toward things like agile development, IT ops has not. There’s a mismatch, and work is starting on the infrastructure side of things to address it.

Still, how do you get there from here?

The premise here is that cloud will change the role of IT as a whole, as well as particular IT jobs. So what should you do? How do you know how to ease people into these roles, how to build these roles, or even how to make these role transitions yourself?

I’ll repeat the advice I’ve been giving pretty consistently: try it.

— Figure out what risks you and your organization want to take. Understand if a given project or application is more suited to a greenfield, “clean break” kind of approach or to a more gradual (though drawn-out) shift.
— Set up a “clean room” for experimentation. Create the “desired state” (Forrester’s James Staten advocated that on a recent webcast we did with him). Then use this new approach to learn from.
— Arm yourself with the tools and the people to do it right. Experience counts.
— Work on the connection between IT & business aggressively.
— Measure & compare (with yourself and others trying similar things).
— Fail fast, then try again. It’s all about using your experience to get to a better and better approach.

There was one thing that I heard this week at the Gartner show from Tom Bittman that sums up the situation nicely. IT, Tom said, has the opportunity to “become the trusted arbiter and broker [of cloud computing] in your organization.” Now that definitely puts IT in a good place, even a place of strength. However, there’s no denying that many, many folks in IT are going to have to get comfortable with roles that are very different from the roles they have today.

(If you would like a copy of the presentation that I gave at Cloud Expo, email me at jay.fry@ca.com.)