Friday, December 31, 2010

A cloudy look back at 2010

Today seemed like a good day to take stock of the year in cloud computing, at least according to the view from this Data Center Dialog blog – and from what you as readers thought was interesting over the past 12 months.

Setting the tone for the year: cloud computing M&A

It probably isn’t any big surprise that 3 of the 4 most popular articles here in 2010 had to do with one of the big trends of the year in cloud computing – acquisitions. (Especially since my employer, CA Technologies, had a big role in driving that trend.) CA Technologies made quite a bit of impact with our successive acquisitions of Oblicore, 3Tera, and Nimsoft at the beginning of the year. We followed up by bringing onboard others like 4Base, Arcot, and Hyperformix.

But those first three set the tone for the year: the cloud was the next IT battleground and the big players (like CA) were taking it very seriously. CRN noted our moves as one of the 10 Biggest Cloud Stories of 2010. Derrick Harris of GigaOm called us out as one of the 9 companies that drove cloud in 2010. And Krishnan Subramanian included CA's pick-up of 3Tera and Nimsoft in his list of key cloud acquisitions for the year at CloudAve.

As you’d expect, folks came to Data Center Dialog to get more details on these deals. We had subsequent announcements around each company (like the release of CA 3Tera AppLogic 2.9), but the Nimsoft one got far and away the most interest. I thought one of the more interesting moments was how Gary Read reacted to a bunch of accusations of being a “sell-out” and going to the dark side by joining one of the Big 4 management vendors they had been aggressively selling against. Sure, some of the respondents were competitors trying to spread FUD, but he handled it all clearly and directly -- Gary's signature style, I’ve come to learn.

What mattered a lot? How cloud is changing IT roles

Aside from those acquisitions, one topic was by far the most popular: how cloud computing was going to change the role of IT as a whole – and individual IT jobs as well. I turned my November Cloud Expo presentation into a couple of posts on the topic. Judging by readership and comments, my “endangered species” list for IT jobs was the most popular. It included some speculation that jobs like capacity planning, network and server administration, and even CIO were going the way of the dodo. Or were at least in need of some evolution.

Part 2 conjured up some new titles that might be appearing on IT business cards very soon, thanks to the cloud. But that wasn’t nearly as interesting for some reason. Maybe fear really is the great motivator. Concern about the changes that cloud computing is causing to people’s jobs certainly figured as a strong negative in the survey we published just a few weeks back. Despite a move toward “cloud thinking” in IT, fear of job loss drove a lot of the negative vibes about the topic. Of course, at the same time, IT folks are seeing cloud as a great thing to have on their resumes.
All in all, this is one of the major issues for cloud computing, not just for 2010, but in general. The important issue around cloud computing is not so much figuring out the technology; it’s figuring out how to run and organize IT in a way that makes the best use of that technology, creates processes that are genuinely useful for the business, and that people can learn to live and work with on a daily basis. I don’t think I’m going out on a limb here to say that this topic will be key in 2011, too.

Learning from past discussions on internal clouds

James Urquhart noted in his “cloud computing gifts of 2010” post at CNET that the internal/private cloud debate wound its way down during the year, ending in a truce. “The argument died down…when both sides realized nobody was listening, and various elements of the IT community were pursuing one or the other – or both – options whether or not it was ‘right.’” I tend to agree.

These discussions (arguments?), however, made one of my oldest posts, “Are internal clouds bogus?” from January 2009, the 5th most popular one – *this* year. I stand by my conclusion (and it seems to match where the market has ended up): regardless of what name you give the move to deliver a more dynamic IT infrastructure inside your 4 walls, it’s compelling. And customers are pursuing it.

Cloud computing 101 remained important

2010 was a year in which the basics remained important. The definitions really came into focus, and a big chunk of the IT world joined the conversation about cloud computing. That meant that things like my Cloud Computing 101 post, expanding on my presentation on the same topic at CA World in May, garnered a lot of attention.

Folks were making sure they had the basics down, especially since a lot of the previously mentioned arguments were settling down a bit. My post outlined a bunch of the things I learned from giving my Cloud 101 talk, namely don’t get too far ahead of your headlights. If you start being too theoretical, customers will quickly snap you right back to reality. And that’s how it should be.

Beginning to think about the bigger implications of cloud computing

However, several forward-looking topics ended up at the top of the list at Data Center Dialog this year as well. Readers showed interest in some of the things that cloud computing was enabling, and what it might mean in the long run. Consider these posts as starting points for lots more conversations going forward:

Despite new capabilities, are we just treating cloud servers like physical ones? Some data I saw from RightScale about how people are actually using cloud servers got me thinking that despite the promise of virtualization and cloud, people perhaps aren’t making the most of these new-fangled options. In fact, it sounded like we were just doing the same thing with these cloud servers as we’ve always done with physical ones. It seemed to me that this missed the whole point.

Can we start thinking of IT differently – maybe as a supply chain? As we started to talk about the CA Technologies view of where we think IT is headed, we talked a lot about a shift away from “IT as a factory,” in which everything is created internally, to one where IT is the orchestrator of services coming from many internal and external sources. It implies a lot of changes, including expanded management requirements. And it caught a lot of analyst, press, customer – and reader – attention, including this post from May.

Is cloud a bad thing for IT vendors? Specifically, is cloud going to cut deeply into the revenues that existing hardware and software vendors are getting today from IT infrastructure? This certainly hasn’t been completely resolved yet. 2010 was definitely a year in which vendors made their intentions known, however: they aren’t going to be standing still. Oracle, HP, IBM, BMC, VMware, CA, and a cast of thousands (OK, dozens at least) of start-ups all made significant moves, often at their own user conferences, or at events like Cloud Expo or Cloud Connect.

What new measurement capabilities will we need in a cloud-connected world? If we are going to be living in a world that enables you to source IT services from a huge variety of providers, there is definitely a need to help make those choices. And even to just have a common, simple, business-level measuring stick for IT services in the first place. CA Technologies took a market-leading stab at that by contributing to the Service Measurement Index that Carnegie Mellon is developing, and by launching the Cloud Commons community. This post explained both.

So what’s ahead for 2011 in cloud computing?

That sounds like a good topic for a blog post in the new year. Until then, best wishes as you say farewell to 2010. And rest up. If 2011 is anything like 2010, we’ll need it.

Wednesday, December 29, 2010

Making 'good enough' the new normal

In looking back on some of the more insightful observations that I’ve heard concerning cloud computing in 2010, one kept coming up over and over again. In fact, it was re-iterated by several analysts onstage at the Gartner Data Center Conference in Las Vegas earlier this month.

The thought went something like this:

IT is being weighed down by more and more complexity as time goes on. The systems are complex, the management of those systems is complex, and the underlying processes are, well, also complex.

The cloud seems to offer two ways out of this problem. First, going with a cloud-based solution allows you to start over, often leaving a lot of the complexity behind. But that’s been the same solution offered by any greenfield effort – it always seems deceptively easier to start over than to evolve what you already have. Note that I said “seems easier.” The real-world issues that got you into the complexity problem in the first place quickly return to haunt any such project. Especially in a large organization.

Cloud and the 80-20 rule

But I’m more interested in highlighting the second way that cloud can help. That way is more about the approach to architecture that is embodied in a lot of the cloud computing efforts. Instead of building the most thorough, full-featured systems, cloud-based systems are often using “good enough” as their design point.

This is the IT operations equivalent of the 80-20 rule. It’s the idea that not every system has to have full redundancy, fail-over, or other requirements. It doesn't need to be perfect or have every possible feature. You don't need to know every gory detail from a management standpoint. In most cases, going to those extremes means what you're delivering will be over-engineered and not worth the extra time, effort, and money. That kind of bad ROI is a problem.

“IT has gotten away from ‘good enough’ computing,” said Gartner’s Donna Scott in one of her sessions at the Data Center Conference. “There is a lot an IT dept can learn from cloud, and that’s one of them.”

The experiences of eBay

In talking about his experiences working at eBay during the same conference, Mazen Rawashdeh, vice president of eBay's technology operations, talked about his company’s need to be able to understand what made the most impact on cost and efficiency and optimize for those. That meant a lot of “good enough” decisions in other areas.

eBay IT developed metrics that helped drive the right decisions, and then focused, according to Rawashdeh, on innovation, innovation, innovation. They avoided the things that would weigh them down because “we needed to break the linear relationship between capacity growth and infrastructure cost,” said Rawashdeh. At the conference, he laid out a blueprint for a pretty dynamic IT operations environment, stress-tested by one of the bigger user bases on the Web.

Rawashdeh couched all of this IT operations advice in one of his favorite quotes from Charles Darwin: “It’s not the strongest of species that survive, nor the most intelligent, but the ones most responsive to change.” In the IT context, it means being resilient to lots of little changes – and little failures – so that the whole can still keep going. “The data center itself is our ‘failure domain,’” he said. Architecting lots of little pieces to be “good enough” lets the whole be stronger, and more resilient.

Everything I needed to know about IT operations I learned from my cloud provider

So who seems to be the best at “good enough” IT these days? Most would point to the cloud service providers, of course.

Many end-user organizations are starting to get this kind of experience, but aren’t very far yet. Forrester’s James Staten says in his 2011 predictions blog that he believes end-user organizations will build private clouds in 2011, “and you will fail. And that’s a good thing. Because through this failure you will learn what it really takes to operate a cloud environment.” He recommends that you “fail fast and fail quietly. Start small, learn, iterate, and then expand.”

Most enterprises, Staten writes, “aren’t ready to pass the baton” – to deliver this sort of dynamic infrastructure – yet. “But service providers will be ready in 2011.” Our own Matt Richards agrees. He created a holiday-inspired list of some interesting things that service providers are using CA Technologies software to make possible.

In fact, Gartner’s Cameron Haight had a whole session at the Vegas event to highlight things that IT ops can learn from the big cloud providers.

Some highlights:

· Make processes experience-based, rather than set by experts. Just because it was done one way before doesn’t mean that’s the right way now. “Cloud providers get good at just enough process,” said Haight, especially in the areas of deployment and incident management.

· Failure happens. In fact, the big guys are moving toward a “recovery-oriented” computing philosophy. “Don’t focus on avoiding failure, but on recovery,” said Haight. The important stat with this approach is not mean-time-between-failures (MTBF), but mean-time-to-repair (MTTR). Reliability, in this case, comes from software, not the underlying hardware.

· Manageability follows from both software and management design. Management should lessen complexity, not add to it. Haight pointed toward tools trying to facilitate “infrastructure as code,” to enable flexibility.
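The recovery-oriented point is easy to see in the arithmetic of steady-state availability, which is MTBF / (MTBF + MTTR). Here’s a quick sketch; the failure and repair numbers are hypothetical, purely for illustration:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time a system is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical numbers: a gold-plated system that rarely fails but needs
# slow manual repair, vs. a recovery-oriented one that fails ten times
# as often but recovers automatically in about three minutes.
gold_plated = availability(mtbf_hours=10_000, mttr_hours=8)
recovery_oriented = availability(mtbf_hours=1_000, mttr_hours=0.05)

print(f"gold-plated:       {gold_plated:.5f}")    # ~0.99920
print(f"recovery-oriented: {recovery_oriented:.5f}")  # ~0.99995
```

Driving MTTR toward zero does more for uptime in this sketch than making failures rarer does, which is exactly the shift in focus being described.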

Know when you need what

So, obviously, “good enough” is not always going to be, well, good enough for every part of your IT infrastructure. But it’s an idea that’s getting traction because of successes with cloud computing. Those successes are causing IT people to ask a few fundamental questions about how they can apply this approach to their specific IT needs. And that’s a useful thing.

In thinking about where and how “good enough” computing is appropriate, you need to ask yourself a couple questions. First, how vital is the system I’m working on? What’s its use, tolerance for failure, importance, and the like. The more critical it is, the more careful you have to be with your threshold of “good enough.”

Second, is speed of utmost importance? Cost? Security? Or a set of other things? Like Rawashdeh at eBay, know what metrics are important, and optimize to those.
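To make that triage concrete, here’s a toy sketch that maps a criticality score to an engineering tier. The scores, thresholds, and tier names are all invented for illustration; the point is simply that “good enough” is a per-system decision, not a blanket policy:

```python
def pick_tier(criticality: int) -> str:
    """Map a 1-10 criticality score to a 'how engineered' tier."""
    if criticality >= 8:
        return "full redundancy and fail-over"
    if criticality >= 4:
        return "single instance with automated recovery"
    return "good enough: restore from last backup"

# Hypothetical systems and scores.
systems = {"payment processing": 9, "internal wiki": 2, "nightly reporting": 5}
for name, score in systems.items():
    print(f"{name}: {pick_tier(score)}")
```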

Be honest with yourself and your organization about places you can try this approach. It’s one of the ideas that got quite a bit of attention in 2010 that’s worth considering.

Thursday, December 16, 2010

Survey points to the rise of 'cloud thinking'

In any developing market, doing a survey is always a bit of a roll of the dice. Sometimes the results can be pretty different from what you expected to find.

I know a surprise like that sounds unlikely in the realm of cloud computing, a topic that, if anything, feels over-scrutinized. However, when the results came back from the Management Insight survey (that CA Technologies sponsored and announced today), there were a few things that took me and others looking at the data by surprise.

Opinions of IT executives and IT staffs on cloud don’t differ by too much. We surveyed both decision makers and implementers, thinking that we’d find some interesting discrepancies. We didn’t. They all pretty much thought cloud could help them on costs, for example. And regardless of both groups’ first impressions, I’m betting cost isn’t their eventual biggest benefit. Instead, I’d bet that it’s agility – the reduced time to having IT make a real difference in your business – that will probably win out in the end.

IT staff are of two minds about cloud. One noticeable contradiction in the survey was that the IT staff was very leery about cloud because they see its potential to take away their jobs. At the same time, one of the most popular reasons to support a cloud initiative was because it familiarized them with the latest and greatest in technology and IT approaches. It seems to me that how each IT person deals with these simultaneous pros and cons will decide a lot about the type of role they will have going forward. Finding ways to learn about and embrace change can’t be a bad thing for your resume.

Virtualization certainly has had an impact on freeing people to think positively about cloud computing. I wrote about this in one of my early blogs about internal clouds back at the beginning of 2009 – hypervisors helped IT folks break the connection between a particular piece of hardware and an application. Once you do that, you’re free to consider a lot of “what ifs.”

This new survey points out a definite connection between how far people have gotten with their virtualization work and their support for cloud computing. The findings say that virtualization helps lead to what we’re calling “cloud thinking.” In fact, the people most involved in virtualization are also the ones most likely to be supportive of cloud initiatives. That all makes sense to me. (Just don’t assume that because you’ve virtualized some servers, you’ve done everything you need to in order to get the benefits of cloud computing.)

The survey shows people expect a gradual move from physical infrastructure to virtual systems, private cloud, and public cloud – not a mad rush. Respondents did admit to quite a bit of cloud usage – more than many other surveys I’ve seen. That leads you to think that cloud is starting to come of age in large enterprises (to steal a phrase from today’s press release). But it’s not happening all at once, and there’s a combination of simple virtualization and a use of more sophisticated cloud-based architectures going on. That’s going to lead to mixed environments for quite some time to come, and a need to manage and secure those diverse environments, I’m betting.

There are open questions about the ultimate cost impact of both public and private clouds. One set of results listed cost as a driver and an inhibitor for public clouds, and as a driver and an inhibitor for private ones, too. Obviously, there’s quite a bit of theory that has yet to be put into practice. I bet that’s what a lot of the action in 2011 will be all about: figuring it out.

And who can ignore politics? Finally, in looking at the internal organizational landscape of allies and stonewallers, the survey reported what I’ve been hearing anecdotally from customers and our folks who work with them: there are a lot of political hurdles to get over to deliver a cloud computing project (let alone a success). The survey really didn’t provide a clear, step-by-step path to success (not that I expected it would). I think the plan of starting small, focusing on a specific outcome, and being able to measure results is never a bad approach. And maybe those rogue cloud projects we hear about aren’t such a bad way to start after all. (You didn’t hear that from me, mind you.)

Take a look for yourself

Those were some of the angles I thought were especially interesting, and, yes, even a bit surprising in the survey. In addition to perusing the actual paper that Management Insight wrote (registration required) about the findings, I’d also suggest taking a look at the slide show highlighting a few of the more interesting results graphically. You can take a look at those slides here.

I’m thinking we’ll run the survey again in the middle of next year (at least, that seems like about the right timing to me). Two things will be interesting to see. First, what will the “cloud thinking” that we’re talking about here have enabled? The business models that cloud computing makes possible are new, pretty dynamic, and disruptive. Companies that didn’t exist yesterday could be challenging big incumbents tomorrow with some smart application of just enough technology. And maybe with no internal IT whatsoever.

Second, it will be intriguing to see what assumptions that seem so logical now will turn out to be – surprisingly – wrong. But, hey, that’s why we ask these questions, right?

This blog is cross-posted on The CA Cloud Storm Chasers site.

Tuesday, December 14, 2010

Beyond Jimmy Buffett, sumos, and virtualization? Cloud computing hits #1 at Gartner Data Center Conference

I spent last week along with thousands of other data center enthusiasts at Gartner’s 2010 Data Center Conference and was genuinely surprised by the level of interest in cloud computing on both sides of the podium. As keynoter and newly minted cloud computing expert Dave Barry would say, I’m not making this up.

This was my sixth time at the show (really), and I’ve come to use it as a benchmark for the types of conversations going on at very large enterprises around their infrastructure and operations issues. And as slow-moving as you might think that part of the market is, there are some interesting insights to be gained by comparing changes over the years – everything from the advice and positions of the Gartner analysts, to the hallway conversations among the end users, to really important things, like the themes of the hospitality suites.

So, first and foremost, the answer is yes, APC did invite the Jimmy Buffett cover band back again, in case you were wondering. And someone decided sumo wrestling in big, overstuffed suits was a good idea.

Now, if you were actually looking for something a little more related to IT operations, read on:

Cooling was hot this year…and cloud hits #1

It wasn’t any big surprise what was at the top of people’s lists this year. The in-room polling at the opening keynotes placed data center space, power, and cooling at the top of the list of biggest data center challenges (23%). The interesting news was that developing a private/public cloud strategy came in second (16%).

This interest in cloud computing was repeated in the Gartner survey of CIOs’ top technology priorities. Cloud computing was #1. It made the biggest jump of all topics since their ’09 survey, bypassing virtualization on its way to the head of the list. But don’t think virtualization wasn’t important: it followed right behind at #2. Gartner’s Dave Cappuccio made sure the audience was thinking big on the virtualization topic, saying that it wasn’t just about virtualizing servers or storage now. It’s about “the virtualization of everything. Virtualization is a continuing process, not a one-time project.”

Bernard Golden, CEO of Hyperstratus and blogger (check out his 2011 predictions here), wondered on Twitter if cloud leapfrogging virtualization didn’t actually put the cart before the horse. I’m not sure if CIOs know whether that’s true or not. But they do know that they need to deal with both of these topics, and they need to deal with them now.

Putting the concerns of 2008 & 2009 in the rear-view mirror

This immediacy for cloud computing is a shift from the previous years, I think. A lot of 2008 was about the recession’s impact, and even 2009 featured sessions on how the recession was driving short-term thinking in IT. If you want to do a little comparison yourself, take a look at a few of my entries about this same show from years past (spanning the entire history of my Data Center Dialog blog to date, in fact). Some highlights: Tom Bittman’s 2008 keynote (he said the future looks a lot like a private cloud), 2008’s 10 disruptive data center technologies, 2008’s guide to building a real-time infrastructure, and the impact of metrics on making choices in the cloud from last year.

The Stack Wars are here

Back to today (or at least, last week), though. Gartner’s Joe Baylock told the crowd in Vegas that this was the year that the Stack Wars ignited. With announcements from Oracle, VCE, IBM, and others, it’s hard to argue.

The key issue in his mind was whether these stack wars will help or inhibit innovation over the next 5 years. Maybe it moves the innovation to another layer. On the other hand, it’s hard for me to see how customers will allow stacks to rule the day. At CA Technologies, we continue to hear that customers expect to have diverse environments (that, of course, need managing and securing, cloud-related and otherwise). Baylock’s advice: “Avoid inadvertently backing into any vendor’s integrated stack.” Go in with your eyes open.

Learning about – and from – cloud management

Cloud management was front and center. Enterprises need to know, said Cameron Haight, that management is the biggest challenge for private cloud efforts. Haight called out the Big Four IT management software vendors (BMC, CA Technologies, HP software, and IBM Tivoli) as being slow to respond to virtualization, but he said they are embracing the needs around cloud management much faster. 2010 has been filled with evidence of that from my employer – and the others on this list, too.

There’s an additional twist to that story, however. In-room polling at several sessions pointed to interest from enterprises in turning to public cloud vendors themselves as their primary cloud management provider. Part of this is likely to be an interest in finding “one throat to choke.” Haight and Donna Scott also noted several times that there’s a lot to be learned from the big cloud service providers and their IT operations expertise (something worthy of a separate blog, I think). Keep in mind, however, that most enterprise operations look very different from (and much more diverse than) the big cloud providers’ operations.

In a similar result, most session attendees also said they’d choose their virtualization vendors to manage their private cloud. Tom Bittman, in reviewing the poll in his keynote, noted that “the traditional management and automation vendors that we have relied on for decades are not close to the top of the list.” But, Bittman said, “they have a lot to offer. I think VMware’s capabilities [in private cloud management] are overrated, especially where heterogeneity is involved.”

To be fair: Bittman made these remarks because VMware topped the audience polling on this question. So, it’s a matter of what’s important in a customer’s eyes, I think. In a couple of sessions, this homogeneous v. heterogeneous environment discussion became an important way for customers to evaluate what they need for management. Will they feel comfortable with only what each stack vendor will provide?

9 private cloud vendors to watch

Bittman also highlighted 9 vendors that he thought were worthy of note for customers looking to build out private clouds. The list included BMC, Citrix, IBM, CA Technologies, Eucalyptus, Microsoft, Cisco, HP, and VMware.

He predicted very healthy competition in the public cloud space (and even public cloud failures, as Rich Miller noted at Data Center Knowledge) and similarly aggressive competition for the delivery of private clouds. He believed there would even be fierce competition in organizations where VMware owns the hypervisor layer.

As for tidbits about CA Technologies, you won’t be surprised to learn that I scribbled down a comment or two: “They’ve made a pretty significant and new effort to acquire companies to provide strategic solutions such as 3Tera and Cassatt,” said Bittman. “Not a vendor to be ignored.” Based on the in-room polling, though, we still have some convincing to do with customers.

Maybe we should figure out what it’ll take to get the Jimmy Buffett guy to play in our hospitality suite next year? I suppose that and another year of helping customers with their cloud computing efforts would certainly help.

In the meantime, it’s worth noting that TechTarget’s SearchDataCenter folks also did a good job with their run-down on the conference, if you’re interested in additional color. A few of them might even be able to tell you a bit about the sumo wrestling, if you ask nicely.

And I’m not making that up either.

Tuesday, December 7, 2010

Cloud conjures up new IT roles; apps & business issues are front & center

So you’ve managed to avoid going the way of the dodo, and dodged the IT job “endangered species list” I talked about in my last post (and at Cloud Expo). Great. Now the question is, what are some of the roles within IT that cloud computing is putting front & center?

I listed a few of my ideas during my Cloud Expo presentation a few weeks back. My thoughts are based on what I’ve heard and discussed with IT folks, analysts, and other vendors recently. Many of those thoughts even resonated well with what I’ve heard this week here at the Gartner Data Center Conference in Las Vegas.

IT organizations will shift how they are, well, organized

James Urquhart from Cisco put forth an interesting hypothesis a while back on his “Wisdom of Clouds” blog at CNET, identifying 3 areas he thought would be (and should be) key for dividing up the jobs when IT operations enters “a cloudy world,” as he put it.

First, there’s a group that James called InfraOps. That’s the team focused on the server, network, and storage hardware (and often virtual versions of those). Those selling equipment (like Cisco) posit that this area will become more homogenous, but I’m not sold on that. James followed that with ServiceOps, the folks managing a service catalog and a service portal, and finally AppOps. AppOps is the group that manages the applications themselves: it executes and operates them, makes sure they are deployed correctly, watches the SLAs, and the like.

I thought these were pretty useful starting points. I agree whole-heartedly with the application-centric view he highlights. In fact, in describing the world in which IT is the manager of an IT service supply chain, that application-first approach seems paramount. The role of the person we talk to about CA 3Tera AppLogic, for example, is best described as “applications operations.”

Equally important as an application-centric approach are the skills to translate business needs into technical delivery, even if you don’t handle the details yourself. More on that below.

Some interesting new business cards

I can already see some interesting new titles on IT peoples’ business cards in the very near future thanks to the cloud. Here are some of those that I listed in my presentation, several of which were inspired by the research that Cameron Haight and Milind Govekar have been publishing at Gartner. (If you’re a Gartner client, their write-up on “I&O Roles for Private Cloud-Computing Environments” is a good one to start with.):

· “The Weaver” (aka Cloud Service Architect) – piecing together the plan for delivering business services
· “The Conductor” (aka Cloud Orchestration Specialist) – directing how things actually end up happening inside and outside your IT environment
· “The Decider” (aka Cloud Service Manager) – more of a vendor evaluator and the person who sets up and eliminates relationships for pieces of your business service
· “The Operator…with New Tools” (aka Cloud Infrastructure Administrator) – this may sound like a glorified version of the old network or system administrator, but there’s no way this person is going to use the same tools they’ve used in the past to figure all this out.

In Donna Scott’s presentation at the Gartner Data Center Conference, “Bringing Cloud to Earth for Infrastructure & Operations: Practical Advice and Implementations,” she hit upon these same themes. Some of the new roles she listed included the solution architect, the automation specialist, the service owner, the cloud capacity manager, and the IT financial/costing analyst. Note the focus on business-level issues – both the service and the cost.


Or maybe the truth is that these roles blend together a bit more. I could see the IT organization evolving to perform three core functions in support of application delivery:

— The first function is Source & Operate Resource Pools, which maintains efficient data center resources, including the management of automation and hypervisors. At its core is the ability to manage resources effectively: to determine how much memory and CPU is available at any given time, and to scale capacity up and down in response to demand (paying only for what you use). These resources might eventually be sourced internally or externally, and will most often be a combination of the two. This function is responsible for making sure the right resources are available at the right time and that the cost of those resources is optimized.

— The second function is Application Delivery: building application infrastructure proactively, so that when the next idea comes down the pike, you’re ready. You can provide the business with a series of different, ready-made configurations from which to choose, and have the ability to get these pre-configured solutions up and running quickly when they’re needed.

— The last, higher-level function is engaging closely with the business. Your job is to Assemble Service. You’ll be able to say to the business ‘bring me your best ideas’ and you’ll be able to turn those concepts into real, working systems quickly, without having to dive into the lower-level technical bits & bytes the way some of the previously mentioned folks would.

In all cases, I’m talking about ways to deliver your application or business service, and the technical underpinnings are fading into the background a bit. “Instead of being builders of systems,” said Mike Vizard in a recent IT Business Edge article, “most IT organizations will soon find themselves orchestrating IT services.”

James Urquhart talks about this as an important tenet of the DevOps approach: the drive to make the application the unit of administration, not the infrastructure. James had another post more recently that underscored his disappointment that shows like Cloud Expo are still focusing more on the infrastructure folks, not the ones thinking about the applications. I’m all in favor of that shift. I heard Gartner’s Cameron Haight suggest why this might be true in most IT shops: while development has spent a lot of time working toward things like agile development, IT ops has not. There’s a mismatch, and work is starting on the infrastructure side of things to address it.

Still, how do you get there from here?

The premise here is that cloud will change the role of IT as a whole, as well as particular IT jobs. So what should you do? How do you ease people into these roles, how do you build them, or even how do you make one of these role transitions yourself?

I’ll repeat the advice I’ve been giving pretty consistently: try it.

— Figure out what risks you and your organization want to take. Understand if a given project or application is more suited to a greenfield, “clean break” kind of approach or to a more gradual (though drawn-out) shift.
— Set up a “clean room” for experimentation. Create the “desired state” (Forrester’s James Staten advocated that on a recent webcast we did with him). Then use this new approach to learn from.
— Arm yourself with the tools and the people to do it right. Experience counts.
— Work on the connection between IT & business aggressively.
— Measure & compare (with yourself and others trying similar things).
— Fail fast, then try again. It’s all about using your experience to get to a better and better approach.

There was one thing that I heard this week at the Gartner show from Tom Bittman that sums up the situation nicely. IT, Tom said, has the opportunity to “become the trusted arbiter and broker [of cloud computing] in your organization.” Now that definitely puts IT in a good place, even a place of strength. However, there’s no denying that many, many folks in IT are going to have to get comfortable with roles that are very different from the roles they have today.

(If you would like a copy of the presentation that I gave at Cloud Expo, email me at

Wednesday, November 24, 2010

As cloud computing changes IT roles, which IT jobs are on the ‘endangered species list’?

A funny thing happened during my Cloud Expo presentation in Santa Clara recently that I wasn’t expecting. I was trying to come up with a few points in the session where I could get a read on whether the audience was following me or not. And since my topic was “How Cloud Computing is Changing the Role of IT…and What You Can Do About it,” I figured at least someone would have an opinion. So I asked a few questions of the audience.

And (you guessed it), there was no shortage of opinions. In fact, I found myself in the middle of a full-fledged discussion. Especially when we started talking about which IT job titles were or were not going to exist any longer as a result of cloud computing. Interesting stuff. Here’s a summary of the presentation I gave, plus a few comments from the audience thrown in for color:

Cloud computing is causing two big IT-related role shifts. Cloud computing has given business users choices they didn’t have before; it’s breaking the monopoly IT has had on the sourcing of IT services. It means the business can go around IT, can “go rogue” and find the services it thinks suit it best. The doom & gloom scenario for IT is that it’s time to pack up and go do something else.

Will IT go away because of cloud computing? Half of my session attendees (an audience that was two-thirds end users) thought that cloud computing will at least cause IT to lose jobs. However, I’m not nearly that negative; I tend to agree with our own Bert Armijo, who likes to say that IT jobs don’t go away when faced with the Next Big Thing. They just reshuffle. So the real question is: what does the deck look like after all the shuffling is done? More on that in a moment.

First: IT is becoming an orchestrator of a new IT service supply chain

What I told folks at Cloud Expo is that from what I see, cloud computing and its focus on business services, rather than the underlying technology, are altering the broad responsibilities and the approach of IT. Cloud is changing IT’s role inside an organization. It opens up the possibility of getting IT service from many different sources and the corresponding need to orchestrate those many sources. Mike Vizard talked about this in a recent IT Business Edge article: “Senior IT leaders,” he said, “will soon find themselves managing a supply chain of IT services.”

This role as orchestrator of things you don’t own is a new one in some ways, though many folks have been managing outsourcing (similar in some ways) for a while now. Shifting all of IT to have this mindset is very significant; it means adjusting to a role as a service provider, selecting the best of both internal and external services as appropriate.

And that’s where things get interesting for the actual jobs inside the IT department. Some of those are going to have to change.

Second: how IT jobs themselves are changing

One of my points was that this IT supply chain concept creates a whole new set of questions, opportunities, and requirements to make things work. (Here’s an earlier post of mine describing some of those new requirements.) IT will certainly be more business-oriented and less focused on the underlying infrastructure components, as layers of abstraction make it both smarter -- and possible -- to think beyond the nitty-gritty.

So how is cloud changing the individual roles and jobs within the IT department? I identified 3 important influencers:

· Automation – fewer manual tasks means less demand for the people who do them today
· Abstraction – technical minutiae are less important to focus on, while the business service itself takes on a much more primary role
· Sourcing – vendor management and comparison (and some way to make choices) move to the forefront

Products (like several of ours) can help take these new approaches in what are often very fast, effective steps. But the IT folks themselves need to realize that they really need a new way of looking at their jobs, and, in fact, some of their jobs may end up very different from where they are today. That’s not a quick problem to solve. I remember this issue being a weighty problem back in my Cassatt days, too.

So what IT jobs are on the endangered species list?

At Cloud Expo, I listed off a bunch of titles to see what the audience thought were the soon-to-be-extinct IT jobs. I let people categorize the jobs as either Cockroaches (as in, “will always have a job,” you can’t get rid of them), Chimps (“need to adapt to survive”), or Dodos (“buh-bye”). OK, so the categories weren’t necessarily flattering, but it added a touch of levity to what can be a somber topic for IT.

The in-room responses included some dissenting opinions, obviously, but they were, in fact, surprisingly uniform. Here’s roughly how people categorized a few different IT jobs:

Cockroaches: service provider vendor manager, SLA manager
Chimps: software/apps deployment person, network administrator, server monitoring/ “Mr. Fix-it”
Dodos: capacity planning, CIO

Yes, you just read that my audience thought CIOs were going to go extinct.

I sensed a bit of dramatic license was being taken for impact in the room, but the discussion was definitely interesting. There is certainly a case to be made that the tools, approaches, and processes that the CIO relies on today, for example, need to get a major revamp, or things will look pretty bleak for the folks in those jobs.

There was agreement that the business-level capabilities of the IT people who look at service levels and manage service providers match up well with the shift that seems to be going on toward business-level conversations.

And there was a realization that even the most core administrator roles (network, server) and the break-fix mentality currently in place in a huge majority of IT shops just don’t seem as relevant in a world seriously involved with cloud computing. Or, at least, there will need to be an important set of adaptations. In the world of cloud computing, you don’t fix the malfunctioning commodity server. Instead, you ensure that your systems route around it (through internal or external sources) and continue to deliver the service.
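That “route around it” behavior can be sketched in a few lines – a minimal illustration with made-up endpoint names and a stubbed health check, not any vendor’s actual mechanism:

```python
# Toy failover routing: send traffic to the first healthy backend
# instead of repairing a failed one. Endpoint names are hypothetical.

def route_request(backends, is_healthy):
    """Pick the first backend that passes its health check.

    backends: ordered list of candidate servers (internal or external).
    is_healthy: callable that probes one backend and returns True/False.
    """
    for backend in backends:
        if is_healthy(backend):
            return backend
    raise RuntimeError("no healthy backends available")

# Simulate one failed commodity server: traffic simply flows around it.
down = {"web-01"}  # pretend this box just died
pick = route_request(["web-01", "web-02", "cloud-burst-01"],
                     lambda b: b not in down)
print(pick)  # -> web-02
```

Nobody pages an administrator to swap a disk here; the dead box gets skipped, and someone (or something) reclaims it later. That is the mindset shift the discussion kept circling back to.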

It’s time for quite a lot of staff evolution, from the sounds of it. At least, that’s what my Cloud Expo audience thought. My next post will cover exactly that: I’ll build on these comments and look at the approaches, roles, and solutions that I highlighted to help IT leverage the cloud computing trend.
And for the squeamish, I'll try not to mention cockroaches again.

(If you'd like a copy of my Cloud Expo presentation, e-mail me at

Monday, November 8, 2010

Cloud Expo recap: Acceleration and pragmatism

Last week at Cloud Expo in Santa Clara was a pleasant surprise. Previous events had me a bit cautious, holding my expectations firmly in check. Why?

SYS-CON’s Santa Clara show in 2009 was disappointing in my book, filled with too much repetitive pabulum from vendors about the definition and abstract wonders of cloud computing, but none of the specifics. Of course, maybe there wasn’t much harm done, since there didn’t seem to be too many end users at the event. Yet.

The 2010 New York Cloud Expo show back in April was a leap forward from there. The location played a big role in its improvement over the fall event – placing it in Manhattan put it within easy striking distance of folks on Wall Street and a bunch of other nearby industries. The timing was right, too, to give early end users of cloud computing a look at what the vendor community had been gradually building into a more and more coherent and useful set of offerings. A certain Icelandic volcano tried to put a kink in a few people’s travel plans, but in general, the April show was better.

And what about last week? A marked improvement, I say.

And that doesn’t even count the fun of watching the 3-run homer and final outs of the San Francisco Giants’ first-ever World Series win at the conference hotel bar with fellow attendees who were (also) skipping the Monday late sessions. Or the CA Technologies Led Zepplica party (‘70s hairdos and facial hair are in this year, it seems).

At Cloud Expo, however, I noticed two themes: the acceleration of end user involvement in the cloud computing discussion and a strong undercurrent of pragmatism from those end users.

Acceleration of end user involvement in cloud computing
Judging from the keynotes and session rooms, Cloud Expo attendance seemed to be quite a bit ahead of last year’s Santa Clara show. Jeremy Geelan and the SYS-CON folks can probably describe where those additional folks came from this time around officially, but I think I know. They were end users (or at least potential end users).

I say this for a couple of reasons. At the CA Technologies booth, we had discussions with a significant number of interested end users over the 4 days on a variety of cloud computing topics, ranging from automation, to the role of virtualization management, to turnkey private cloud platforms.

Also, the presenters in several of the sessions I attended asked about the make-up of the audience. I did the same during my “How the Cloud is Changing the Role of IT and What to Do About It” session. A full two-thirds of the audience members were from end user organizations. The remaining third identified themselves as service providers and vendors. For those keeping score, that’s a complete turn-about from last year’s event.

End users aren’t just along for the ride: showing a pragmatic streak regarding the cloud

Not only did there seem to be more customers, but the end users who were there seemed to be really interested in getting specific advice they could take back home the week after the show to use in their day jobs. The questions I heard were often pretty straightforward, basic questions about getting started with cloud. They generally began with “My organization is looking at cloud. We’re interested in how we should…” and then launched into very particular topics. They were looking for particular answers. Not generalities, but starting points.

Some were digging into specifics on technologies or operating models. Others (like the ones I talked to after my “changing role of IT” session) were trying to get a handle on best practices for organizations and the IT responsibilities that need to be in place for an IT organization to really make forward progress with cloud computing. Those are really good questions, especially given how organizationally disruptive cloud can be. (I’ll post a summary of my talk on this topic shortly.)
My initial (snarky) thought was that since end users were only now showing up, many speakers could have reused their presentations from either of the past two Cloud Expo conferences for this iteration of the event without too much trouble. In fairness, though, I think many of the vendor presentations have been improving as a result of working with actual customers over the past 12 months. Still, there’s plenty of work left to do on that front, in my opinion.

Infrastructure v. developers in the cloud?
James Urquhart of Cisco and CNET “Wisdom of Clouds” fame made an interesting point about the audience and the conversations he heard at Cloud Expo last week. “What struck me,” tweeted James, “was the ‘how to replicate what you are doing [currently in IT] in cloud’ mentality.”

Just trying to repeat the processes you have followed with your pre-cloud IT environment leaves out a lot of possible upside when you start really trying out cloud computing (see my recent post on this topic). However, in this case, James was more interested in why a lot of the discussion he heard at Cloud Expo targeted infrastructure and operations, not developers or “devops” concepts.

I did hear some commentary about how developers are becoming more integrated into the operations side of things (matching what the CTO of CA 3Tera AppLogic customer PGi said a few weeks back). However, I agree with James: it does seem like folks are focusing on selling to operations today, leaving the development impact to be addressed sometime in the future. James, by the way, recently did a great analysis piece on the way he thought IT operations should run in a cloudy world on his CNET blog.

Interesting offerings from cloud service providers
One other interesting note: I was able to spend time with several of the CA 3Tera AppLogic service provider partners at the show. I met the CEO of ENKI, talked with the president and other key execs from Birdhosting, and got to see Berlin-based ScaleUp’s team face-to-face again. All are doing immediately useful things (several of which we profiled in the CA Technologies booth) to help their customers take advantage of cloud services now.
ScaleUp, for example, has put together a self-service portal that makes it easier for less technical users to get the components they need to get going on a cloud-based infrastructure. They call it “infrastructure as a platform.” Details are available in their press release and this YouTube walk-through.

So, all in all, it was a useful week from my point of view. If you have similar or contradictory comments or anecdotes, I’d love to hear them. In the meantime, I’ll finish up turning my Cloud Expo presentation into a blog post here. And I just might peek at my calendar to see if I can make it to the New York Cloud Expo event next June.

Tuesday, October 26, 2010

Using cloud to test deployment scenarios you didn't think you could

Often, IT considers cloud computing for things you were doing anyway, with the hope of doing them much cheaper. Or, more likely from what I’ve heard, much faster. But last week a customer reminded me about one of the more important implications of cloud: you can do things you really wouldn’t have done otherwise.

And what a big, positive benefit for your IT operations that can be.

A customer’s real, live cloud computing experience at Gartner Symposium

The occasion was last week’s Gartner Symposium/ITxpo conference in Orlando. I sat in on the CA Technologies session that featured Adam Famularo, the general manager of our cloud business (and my boss), and David Guthrie, CTO of Premiere Global Services, Inc. (PGi). It probably isn’t a surprise that Guthrie’s real-world experiences with cloud computing provided some really interesting tidbits of advice.

Guthrie talked about how PGi, a $600 million real-time collaboration business that provides audio & web conferencing, has embraced cloud computing (you’ve used their service if you’ve ever heard “Welcome to Ready Conference” and other similar conferencing and collaboration system greetings).

What kicked off cloud computing for PGi? Frustration. Guthrie told the PGi story like this: a developer was frustrated with the way PGi previously went about deploying business services. From his point of view, it was way too time-consuming: each new service required a procurement process, the set-up of new servers at a co-location facility, and the like. That developer tracked down 3Tera AppLogic (end of sales pitch) and PGi began to put it to use as a deployment platform for cloud services.

What does PGi use cloud computing for? Well, everything that’s customer facing. “All the services that we deliver for our customers are in the cloud,” said Guthrie. That means they use a cloud infrastructure for their audio and web collaboration meeting services. They use it for their web site, sales gateways, and customer-specific portals as well.

Guthrie stressed the benefits of speed that come from cloud computing. “It’s all about getting technology to our customers faster,” said Guthrie. “Ramp time is one of the biggest benefits. It’s about delivering our services in a more effective way – not just from the cost perspective, but also from the time perspective.”

Cloud computing changes application development and deployment. Cloud, Guthrie noted in his presentation, “really changed our entire app dev cycle. We now have developers closer to how things are being deployed.” He said it helped fix consistency problems between applications across dev, QA, and production.

During his talk, Guthrie pointed to some more fundamental differences they've been seeing, too. He described how the cloud-based approach has been helping his team be more effective. “I’ve definitely saved money by getting things to market quicker,” Guthrie said.

But, more importantly, said Guthrie, “it makes you make better applications.” “I was also able to add redundancy where I previously wouldn’t have considered it” thanks to both cost efficiencies and the ease with which cloud infrastructure (at least, in his CA 3Tera AppLogic-based environment) can be built up and torn down.

“You can test scenarios that in the past were really impractical,” said Guthrie. He cited doing side-by-side testing of ideas and configurations that never would have been possible before to see what makes the most sense, rolling out the one that worked best, and discarding those that didn't. Except in this case, "discarding" doesn't mean tossing out dedicated silos of hardware (and hard-wired) infrastructure that you had to manually set up just for this purpose.

Practical advice, please: what apps are cloud-ready?

At the Gartner session, audience members were very curious about the practicalities of using cloud computing, asking Guthrie to make some generalizations from his experience on everything from what kind of applications work best in a cloud environment, to how he won over his IT operations staff. (I don’t think last week’s audience is unique in wanting to learn some of these things.)

“Some apps aren’t ready for this kind of infrastructure,” said Guthrie, while “some are perfectly set for this.” And, unlike the assumption many in the room expressed, it’s not generally true that packaged apps are more appropriate to bring to a cloud environment than home-grown applications.

Guthrie’s take? You should decide which apps to take to the cloud not by whether they are packaged or custom, but with a different set of criteria. For example, stateless applications that live on the web are perfect for this kind of approach, he believes.
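The “stateless” criterion is worth a quick sketch. The idea is that when a web app keeps no session state in the process itself – it lives in a shared external store instead – any instance can serve any request, so instances can be added, removed, or replaced freely. This is a generic illustration of the principle, not PGi’s architecture; the dict below stands in for, say, a database or cache:

```python
# Stateless handlers: all state lives in a shared external store,
# so any instance can pick up any user's next request.
shared_store = {}  # stand-in for an external database/cache

def handle_request(instance_id, user, action, value=None):
    """A stateless handler: reads/writes only the external store."""
    if action == "set":
        shared_store[user] = value
        return f"{instance_id}: saved"
    return f"{instance_id}: {shared_store.get(user)}"

# Two interchangeable instances; the second finishes what the first started.
print(handle_request("instance-a", "alice", "set", "cart=3 items"))
print(handle_request("instance-b", "alice", "get"))
```

An app that instead cached the user’s cart in instance-a’s memory would break the moment the load balancer sent the next request elsewhere – which is why stateful apps need more rework before they’re cloud-ready.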

Dealing with skepticism about cloud computing from IT operations

One of the more interesting questions brought out what Guthrie learned about getting the buy-in and support of the operations team at PGi. “I don’t want you to think I’ve deployed these without a lot of resistance. I don’t want to act like this was just easy. There were people that really resisted it,” he said.

Earning the buy-in of his IT operations people required training for the team, time, and even quite a bit of evangelism, according to Guthrie. He also laid down an edict at one point: for everything new, he required that “you have to tell me why it shouldn’t go on the cloud infrastructure.” What seemed draconian at first actually turned out to be the right strategic choice.

“As the ops team became familiar [with cloud computing], they began to embrace it,” Guthrie said.

Have you had similar real-world experiences with the cloud? Or even contradictory ones? I’m eager to hear your thoughts on PGi’s approach and advice (as you might guess, I much prefer real-world discussions to theoretical ones any day of the week). Leave comments here or contact me directly; I’ll definitely continue this discussion in future posts.

Wednesday, October 13, 2010

The first 200 servers are the easy part: private cloud advice and why IT won’t lose jobs to the cloud

The recent webcast that featured Bert Armijo of CA Technologies and James Staten of Forrester Research offered some glimpses into the state of private clouds in large enterprises at the moment. I heard both pragmatism and some good, old-fashioned optimism -- even when the topic turned to the impact of cloud computing on IT jobs.

Here are some highlights worth passing on, including a few juicy quotes (always fun):

Cloud has executive fans, and cloud decisions are being made at a relatively high level. In the live polling we did during the webcast, we asked who was likely to be the biggest proponent of cloud computing in attendees’ organizations. 53% said it was their CIO or senior IT leadership. 23% said it was the business executives. Forrester’s James Staten interpreted this to mean that business folks are demanding answers, often leaning toward the cloud, and the senior IT team is working quickly to bring solutions to the table, often including the cloud as a key piece. I suppose you could add: “whether they wanted to or not.”

Forrester’s Staten gave a run-down of why many organizations aren’t ready for an internal cloud – but gave lots of tips for changing that. If you’ve read James’ paper on the topic of private cloud readiness (reg required), you’ve heard a lot of these suggestions. There were quite a few new tidbits, however:

· On creating a private cloud: “It’s not as easy as setting up a VMware environment and thinking you’re done.” Even if this had been anyone’s belief at one point, I think the industry has matured enough (as have cloud computing definitions) for it not to be controversial any more. Virtualization is a good step on the way, but isn’t the whole enchilada.

· “Sharing is not something that organizations are good at.” James is right on here. I think we all learned this on the playground early in life, but it’s still true in IT. IT’s silos aren’t conducive to sharing things. James went farther, actually, and said, “you’re not ready for private cloud if you have separate virtual resource pools for marketing…and HR…and development.” Bottom line: the silos have got to go.

· So what advice did James give for IT organizations to help speed their move to private clouds? One thing they can do is “create a new desired state with separate resources, that way you can start learning from that [cloud environment].” Find a way to deliver a private cloud quickly (I can think of at least one).

· James also noted that “a private cloud doesn’t have to be something you build.” You can use a hosted “virtual private cloud” from a service provider like Layered Tech. Bert Armijo, the CA Technologies expert on the webcast, agreed. “Even large customers start with resources in hosting provider data centers.” Enterprises with CA 3Tera AppLogic running at their service provider and internally can then move applications to whichever location makes the most sense at a given point in time, said Armijo.

· What about “cloud-in-a-box” solutions? James was asked for his take. “Cloud-in-a-box is something you should learn from, not take apart,” he said. “The degree of completeness varies dramatically. And the way in which it suits your needs will vary dramatically as well.”

The biggest cloud skeptics were cited as – no surprise – the security and compliance groups within IT, according to the polling. This continues to be a common theme, but shouldn’t be taken as a reason to toss the whole idea of cloud computing out, emphasized Staten. “Everyone loves to hold up the security flag and stop things from happening in the organization.” But don’t let them. It’s too easy to use it as an excuse for not doing something that could be very useful to your organization.

Armijo also listed several tips for finding successful starting points in the move to creating a private cloud. It was all about pragmatic first steps, in Bert’s view. “The first 200 servers are the easy part,” said Armijo. “Because you can get a 50-server cloud up doesn’t mean you have conquered cloud.” His suggestions:

- Start where value outweighs the perceived risk of cloud computing for your organization (and it will indeed be different for each organization)
- Find places where you will have quick, repeated application or stack usage
- If you’re more on the bleeding edge, set up IT as an internal service provider to the various parts of the business. It’s more challenging, for sure, but there are (large) companies doing this today, and it will make profound improvements to IT’s service delivery.

Will cloud computing eliminate jobs? A bit of Armijo’s optimism was in evidence here: he said, in a word, no. “Every time we hit an efficiency wall, we never lose jobs,” he said. “We may reshuffle them. That will be true for clouds as well.” He believed more strategic roles will grow out of any changes that come as a result of the impact of cloud on IT.

“IT people are the most creative people on the face of the planet,” said Armijo. “Most of us got into IT because we like solving problems. That’s what cloud’s going to do – it’s going to let our creative juices flow.”

If you’re interested in listening to the whole webcast, which was moderated by Jim Malone, editorial director at IDG, you can sign up here for an on-demand, encore performance.

Tuesday, October 12, 2010

Delivering a cloud by Tuesday

Much of what I heard at VMworld in San Francisco (and what is likely being repeated for the lucky folks enjoying themselves this week in Copenhagen) was about the long, gradual path to cloud computing. Lauren Horwitz captured this sentiment well in this VMworld Europe preview at SearchServerVirtualization.

And I totally agree: cloud computing is a long-term shift that will require time to absorb and come to terms with, while taking the form of public and private clouds – and even becoming a hybrid mix of the two. How the IT department in the largest organizations will assess and incorporate these new, more dynamic ways of operating IT is still getting sorted out.

Mike Laverick, the virtualization expert and the RTFM blogger quoted in the SearchServerVirtualization story, said that “users shouldn’t be scared thinking that they have to deliver a cloud by Tuesday of next week. It’s a long-term operational model over a 5- or 10-year period.”

But what if you can’t wait that long to deliver?

What if you really do have to deliver a cloud (or at least an application) next Tuesday? And then what if that app is expected to handle drastic changes the following Thursday?

Despite the often-repeated “slow and steady” messages about IT infrastructure evolution, I think it’s worth making sure we all don’t forget that there’s another choice, too. After all, cloud computing is all about choices, right? And cloud computing is about using IT to make an immediate impact on your business. While there is a time and place for the slow, steady, incremental change, there’s also a very real need now and again to make a big leap forward.

There are times to throw incrementalism out the window. In those situations, you actually do have another choice: getting a private cloud set up in record time with a more turnkey-type approach so you can immediately deliver the application or service you’re being asked for.

Turnkey cloud platform trade-offs

A turnkey approach for a cloud platform (which is, in the spirit of full disclosure, the approach that our CA 3Tera AppLogic takes) can get you the speedy delivery times you’re looking for. Of course, the key to speed is being able to have all of the complicating factors under your control. The 3Tera product does this, for example, by creating a virtual appliance out of the entire stack: from infrastructure on up to the application, simplifying things immensely by turning normally complicated components into software-only versions that can be easily controlled.

A turnkey cloud is probably best suited for situations where the quick delivery of the application or service is more critical than following existing infrastructure and operations procedures. After all, those procedures are the things that are keeping you from delivering in the first place. So there’s your trade-off: you can give yourself the ability to solve some of the messier infrastructure problems by changing some of the rules. And processes. The good news (for CA 3Tera AppLogic customers, anyway) is that even when you do break a few rules there’s a nice way to link everything back to your existing environment as that becomes important (and, eventually, required).

Create a real-world example of what your data center is evolving into

For these types of (often greenfield) situations, a turnkey private cloud platform gives you the chance to set up a small private cloud environment that delivers exactly what you’re looking for at the moment, which is the critical thing. It also sets you up for dramatic changes in requirements as conditions vary.

But beyond solving the immediate crisis (and preparing you for the next one), there’s a hidden benefit that shouldn’t be overlooked: you get experience working with your apps and infrastructure in this future, more cloud-like state. You get firsthand knowledge of the type of environment you’d like your entire data center to evolve into over the next 5-10 years. You’ll find out the things that work well. You’ll find out what the pitfalls are. Most of all, you’ll get comfortable with things working differently and learn the implications of that. That’s useful for the technology, but also for the previously mentioned processes, and, most of all, for the IT people involved.

So, when is that “big leap” appropriate?

The Forrester research by James Staten that I’ve referred to in previous posts talks about the different paths for moving toward a cloud-based environment – one more incremental and the other more turnkey. I’m betting you’ll know when you’re in the scenarios appropriate for each.

Service providers (including some of those working with us that I mentioned in my last post) have been living in this world for a few years now. They are very obviously eager to try new approaches. They are looking for real, game-changing cloud platform solutions upon which they can build their next decade of business. And the competitive pressures on them are enormous.

If you’re in an end user IT organization that’s facing an “uh oh” moment -- where you know what you want to get done, but you don’t really see how you can get there from here – it’s probably worth exploring this leap. At least in a small corner of your environment.

After all, Tuesday’s just around the corner.

Monday, October 4, 2010

New CA 3Tera release: Extending cloud innovations while answering enterprise & MSP requirements

Today CA Technologies announced the first GA release of CA 3Tera AppLogic: version 2.9.

Now, obviously, it’s not the first release of the 3Tera product. That software, which builds and runs public and private clouds, is well known in its space and has been in the market for several years now. In fact, CA 3Tera AppLogic has 80+ customers spread across 4 continents and has somewhere in the neighborhood of 400 clouds currently in use, some of which have been running continuously for years. I mention a few of the service provider customers in particular at the end of this post.

However, this will be the first GA release of the product since being acquired back in the spring – the first release under the CA Technologies banner. A lot of acquisitions in this industry never make it this far. From my perspective, this one seems to be working for a few reasons:

Customers need different approaches to cloud computing.

We’re finding that customer approaches to adopting some form of cloud computing – especially for private clouds – usually take one of two paths. I talked about this in a previous post: one path involves incrementally building on your existing virtualization and automation efforts in an attempt to make your data center infrastructure much more dynamic. That one is more evolutionary. The second path is much more revolutionary. That path is one where competitive pressure is high or the financial squeeze is on. Customers in that situation are looking for a more turnkey cloud approach, one that abstracts away a lot of the technical and process complexity. They are looking for a solution that gets them to cloud fast.

That last point is the key. A more turnkey approach like CA 3Tera AppLogic is going to be appealing when the delivery time or cost structure of the more slow & steady “evolutionary” approach is going to jeopardize success. That faster path is especially appropriate for managed service providers these days, given the rate of change and turbulence in the market (and the product’s partner & customer list proves that).

Now, having this more “revolutionary” approach in our portfolio alongside things like CA Spectrum Automation Manager lets us talk with customers about both paths. We get to ask what customers want, and then work with them on whichever approach makes sense for their current project. Truth is, they may take one path for one project, and another path for the next. And, as Forrester’s James Staten mentioned in a webcast about private clouds (one that CA sponsored) last week, quickly getting at least some part of your environment to its new (cloudy) desired state lets you learn quite a bit about your IT environment as a whole.

The 3Tera cloud platform innovations remain, well…innovative.

The other reason (I think) that you’re seeing progress from the combination of CA Technologies and 3Tera is actually pretty basic: even after all this time, no one else is doing what this platform can do. The CA 3Tera AppLogic cloud platform takes an application-centric approach. This means that IT is able to shift its time, effort, and focus away from the detailed technology components, and instead focus on what’s important – the business service that they are trying to deliver.

This application focus is different. Most of what you see in the market doesn’t look at things this way. It is either an incremental addition to the way IT has been managing and improving virtualization, or it focuses at a level much lower than the application. Or both.

Plus, if you’ve ever seen a demo of CA 3Tera AppLogic, you’ll also be struck by the simplicity that the platform’s very visual interface brings you. You draw up the application and infrastructure you want to deploy and it packages this all up as a virtual appliance. That virtual package can then be deployed or destroyed as appropriate. When you need something, you just draw it in. Need more server resources or load balancers? Drag them in from your palette and drop them onto the canvas. (Here’s a light-hearted video that explains all this pretty simply in a couple of minutes.)

Keeping this sort of innovative technology, which was ahead of its time when it came out, out in front years later is really important, and a big focus for CA Technologies. The market is changing rapidly; we have to continue to lead the charge for customers.

Finding a way to meld innovation with extended enterprise-level & MSP requirements

I think it’s really good news that many of the new capabilities that CA Technologies is bringing to market with CA 3Tera AppLogic 2.9 are about making the product even more suitable as the backbone of an enterprise’s private cloud environment, or the underpinnings of a service provider’s cloud offerings. Things like high availability, improved networking, and new web services APIs should make a lot of the bigger organizations very happy.

Already, we’ve seen large enterprises grow noticeably more eager to hear about what CA 3Tera AppLogic is and what it can do since it became part of CA Technologies. Maybe it’s the continued rough economic waters, but the big name standing behind 3Tera now goes a long way, from what I’ve heard and seen of customer reaction. Let’s just say, the guys out talking about this stuff are busy.

There are customers doing this today. Lots of them.

The true test of whether the timing is right, the technology is right, and the value proposition makes sense is, well, whether people are buying and then really using your solution. 3Tera started out seeing success with smaller MSPs. That success continues. The enterprises have moved more slowly, but leading-edge folks are making moves now. The enterprise use cases are very interesting, actually. Anything that involves spiky workloads, where standardized stacks need to be set up and torn down quickly, is ideal. I’m going to try to get some interviews for the blog with a couple of these folks when they are ready to talk.

There are some excellent MSPs and other service providers, on the other hand, that are out there offering CA 3Tera AppLogic-based services today and shaking things up.

“Gone are the days of toiling in the depths of the data center,” said David Corriveau, CTO of Radix Cloud, “where we’d have to waste the efforts of smart IT pros to manually configure every piece of hardware for each new customer.” Now Radix Cloud centralizes their technical expertise in a single location with the CA 3Tera AppLogic product, which they say improves security and reliability.

I talked to Mike Michalik, CEO of Cirrhus9, who told me they use the product to quickly stand up new applications and websites for their clients, with robustness and product maturity being key ingredients for success.

Berlin-based ScaleUp Technologies is also knee-deep in delivering cloud services to customers. CEO Kevin Dykes told me (my summer vacation plans actually enabled me to make an in-person visit to their offices) that a turnkey cloud platform means they, as a service provider, have a strong platform to start from, but can still offer differentiated services. “Our customers directly benefit, too,” said Dykes. “They are able to focus on rapid business innovation, quickly and easily moving applications from dev/test to production, or adding capacity during spikes in demand.”

Jonathan Davis, CTO of DNS Europe, will be co-presenting with my colleague Matt Richards at VMworld Europe in Copenhagen, talking about how he’s been able to change the way they do business. “Very quickly,” said Davis, “we have been able to ramp up new revenue streams and win new customers with even higher expectations and more complex needs.”

I find the user stories to be some of the most interesting pieces of this – and one of the things that is so hard to find in the current industry dialog about cloud computing. So, now that we have CA 3Tera AppLogic announced, I’ll be interviewing a few of these folks to hear firsthand what’s working – and what’s not – in their view of cloud computing. Check back here for more details.

Thursday, September 23, 2010

Cloudwashing, Oracle, and the "logic" of IT product naming

After hearing cries of “cloudwashing” following nearly every product announcement from established vendors these days, I started pondering a bit about IT product naming: is it helping or hurting? From IBM in the ‘90s…to VMware’s recent moves…to Oracle’s newest product this week, some amusing patterns emerge that just have to give IT buyers a chuckle now and again.

Should you hang your hat on a single letter?
Coming out of VMworld 2010, I was convinced that the ‘v’ was the ‘e’ of our decade. You remember how the poor ‘e’ was abused in the ‘90s, don’t you? Everything was either e-commerce, e-business, or e-something-else. At BEA we had eWorld and the e-generation. The list went on. Never mind that “e-mail” had already been ubiquitous for quite a while. The reality was that the IT industry latched on to that ‘e’, sunk its teeth in, and held on like a pit bull. Marketers know a good thing when they see it.

The lesson of the ‘e’? Find a good letter – and stick with it.

Fast-forward 15 years. VMware has built a successful franchise around bringing the simple idea of virtualization to a new hardware platform. From that they inspired a lot of talk about VMs (virtual machines) and companies starting their names with a ‘v’ (including VMLogix, which was snapped up by Citrix at VMworld this year). Don’t forget all the talk of migrating work from P to V (P2V, if you’re really cool). I even got into the act myself in a previous blog reminding folks about the things IT misses out on if they assume V = P in a cloud environment.

And, as we all probably should have expected, the ‘v’ product naming began in earnest. First, vCenter and vSphere. Now vCloud, vShield, and vFabric. Kudos to VMware CTO Steve Herrod for playing along with this and poking fun at his marketing guys onstage at VMworld for the last-minute value-add of adding the ‘v’ to vCloud Director.

And so the ‘v’ brand was born and nurtured. Pretty effective. Add a ‘v’ in front of something and it’s gotta be something virtual. True? Not necessarily. Cool? Certainly. At least at the moment. Until the next big thing comes along, which just might be…well, cloud computing.

It’s ‘v’ -- but is it cloud?
A big focus of VMworld 2010 (and even as far back as 2008 – see this “golden oldie” post from former co-worker Ken Oestreich, now at Egenera) was underscoring VMware’s plans for taking customers to the cloud. So the big question is, will ‘v’ be able to get you there (brandwise, anyway…technology is a separate discussion)? One vendor voted with its feet: VMOps changed its name to several months ago. The cloud, to them, was beyond just a ‘v’.

Time for the next letter to weigh in: the Big O

As Oracle Open World was underway this week, it’s only fitting that they put their stamp on things. The Big O, of course, went from cloud-bashing rants one year to having a full slate of cloud computing sessions at their conference the next. As they do. Larry's no dummy. And he did buy those Sun guys (it's fun to watch the head-to-head video clips courtesy of Matt Stansberry at SearchDataCenter here).

But Oracle took a different naming tack altogether. They boldly went where many had gone before and made the, um, logical choice of “Oracle Exalogic Elastic Cloud” for their new cloud-in-a-box offering. No ‘v’, no ‘e’, not even an ‘x’ (a favorite of some of the Sun folks). Nope. Instead: “logic.”
Funny. I was just musing how IT needed more creativity in product naming because so many IT products seem to end with “-logic” these days. It’s actually quite impressive. Here’s my (partial) list: WebLogic, Pano Logic, LogLogic, MarkLogic, OpenLogic, Alert Logic, and even our own CA 3Tera AppLogic. Originally I even left off Science Logic, (the former) BEA AquaLogic, and the close permutations of the aforementioned VMLogix and Passlogix.

Logical or not, is Oracle cloudwashing with Exalogic?

Larry Dignan at ZDNet ranted about Larry Ellison going from cloud skeptic to cloudwasher. There are a couple good articles on the Exalogic announcement (Loraine Lawson lists quite a few), many very aggressive in their slam on Ellison trying to pitch a physical box as a cloud.

“Cloud computing is supposed to turn capital expenses into operating expenses,” writes Dignan. “Exalogic looks like more capital spending. …But is Exalogic really elastic? Is it really cloud?” Probably not, he concludes. “Simply put, Oracle’s Exalogic box gives you capacity on demand because you’re still buying more capacity than you need.”

Others marveled (or, with all the Iron Man references, Marveled) at the high-end nature of the machinery involved – it’s certainly a far cry from stringing together a set of disposable commodity boxes – and the corresponding high-end price. My CA Technologies colleague Jeff Abbott posted a few questions about the cloud-in-a-box concept in the first place. He wondered if these kinds of systems aren't actually against the self-interest of the customer -- and the vendor -- in the long run.

Still others called Exalogic a “credible offering” and thought it would have a big impact. It’s certainly a big vote in favor of internal clouds. As Ellison said in no uncertain terms, “we believe customers will build their own private clouds behind their firewalls.” And the product certainly backs up that premise.

Time to bring on the cloud

I think it is interesting how similar Exalogic is in name alone to brands Oracle already owns – products that really had nothing to do with the cloud. At least, not originally.

The name gives Oracle the ability to fit this product line in with the software they bought with BEA, the hardware they bought with Sun, and attach it to the cause of the moment – cloud computing. And that seems to be exactly what they want to do, no matter what they said a few years ago.

The good news: they can do all of that without having to put an ‘o’ in front of all their product names. (Don’t laugh: others like MorphLabs are trying it. But I guess it’s not so wacky – sticking an ‘i’ in front of things certainly hasn’t hurt Mr. Jobs’ revenues over the past decade.) However, don’t expect anything to stop them from injecting “cloud” into every one of their customer conversations.

In the end, before proclaiming this the decade of ‘v’, or getting too upset about Oracle’s choices one way or the other, it’s worth remembering that all of these products will be judged by whether they do what they’re supposed to – and at an acceptable price – for two other important letters: IT.