
Sunday, August 21, 2011

Why cloud computing hype isn't bad for IT after all

A week or two back, ReadWriteWeb published the results of a reader poll on the “most over-hyped cloud technologies.” Amusingly, the results (aside from a NoSQL mention) read like the basic NIST definition of the key components of cloud computing: Software as a Service, private clouds, Infrastructure as a Service, and Platform as a Service all made the top 5.

Wow, I thought. That barely scratches the surface. Plenty more cloud computing terms were enjoying their moment of irrational exuberance, but were being left out in the cold by this particular survey. A few Twitter conversations unearthed some very deserving nominations. Not to be forgotten:


Hybrid clouds. Apparently hybrid clouds didn’t have quite enough hype-y-ness to make the list. Weird, considering that term tends to be the punch line to nearly every cloud strategy and direction conversation that I hear. Better luck at next year’s awards ceremony, I guess.


Cloud bursting (nominated by @reillyusa and @AaronMotsinger). Some folks have been arguing back and forth about whether it is really a legitimate (or even possible) use case. @pdowning1077 noted he much preferred the term “capacity on demand,” but that doesn’t help settle the argument.


Cloud brokers. Forrester has been posting some interesting research for its subscribers on this new role (also defined by NIST in the July 5 version of its standards roadmap, if you want a standards org to weigh in for legitimacy). I’d say this conversation is still very early. The hype wagon train for this term has just set off down the road.


But probably the most impactful comment was another by @pdowning1077. “How about just the term ‘cloud’ [in general]?” he asked. How could they forget to include the mother of all hype-worthy terms in their polling?


So much hype that “cloud computing” becomes meaningless?


The same week of all this discussion, David Linthicum reported that cloud computing (the term) had now become essentially “meaningless.” That comment came on the heels of Gartner’s annual publication of their hype cycles. A quick scan notes that cloud computing is still close to where it was last year, just nosing over the (hype-laden) peak of inflated expectations. Private cloud computing is rapidly moving to join it, perched perilously over the trough of disillusionment, ready to take the leap.


OK, no one would argue with the extreme levels of marketing attention from everyone from start-ups to 30-year-old software companies (who, us?) to service providers. But just because a bunch of marketing people are in a frenzy doesn’t mean we should write off the trend they are talking about as a bunch of meaningless fluff.


The hype has caused IT to pay attention to cloud computing


In fact, if I’m reading the market right, I’d say that some really good things have actually come out of the hype around cloud (and continue to).


We suddenly had something to call this good idea. There were a bunch of technologies and entrepreneurs out there struggling for several years to put a palatable name to what they were working on. Some started off calling this grid computing, some utility computing, and others more obscure terms than those. But, the early hype around cloud computing a few years back gave a name to the idea. We pulled several of these companies into CA (Cassatt and 3Tera, to name two), but many others were struggling with this same issue. One of my early posts on this blog was about how the term private cloud may not have been precise or perfect, but it enabled us to have the right conversation. I think the same thing goes for the overall cloud computing concept.

It created a way to catch the attention and imagination of enterprise IT. By talking about a Big Vision of IT infrastructure that matched compute supply with compute demands at any given time (and matched costs accordingly), ears perked up. It was the next logical topic to discuss with the IT guys who were fresh from thinking about how virtualization could free them up from particular pieces of hardware. In a world in which IT is fighting for every budget dollar, mostly just to keep treading water, an idea about how to get off this downhill hamster wheel is at least appealing to consider. That’s step one. (Ken Oestreich, by the way, has a great blog from a few months back on the brief history of the vision of cloud computing.)

The hype extended the discussion past the technologists to the business people. All the hubbub over cloud computing got the business users excited at a time when the economy was giving them little to be excited about. “So, you mean I might have a way to turn some of these business ideas into reality, despite the drubbing that the sour economy has given us and the measly budget that my IT partners say we have at our disposal?” This has been important – the business guys are the ones, in the end, pushing when IT starts to get nervous and pulls back from the visionary edge that cloud puts them on.


The hype has pressured big vendors into some self-reflection that will be beneficial for their customers. Many of the larger vendors jumped on the cloud bandwagon through new offerings, blatant rebranding of old offerings (shame on you), acquisitions, and the like. To make any of these moves, vendors have had to take stock and rethink what they can and should be providing given what their customers want. In some cases (like here at CA with Nimsoft), it has caused vendors to broaden the set of customers they actually serve.


The intense amount of discussion has sparked an intense amount of scrutiny, revealing how useful cloud can actually be. One thing that happens when the hype levels reach fever pitch is that people start pushing back. The recent demand for real-world examples and the exasperation over cloud outages have been the natural backlash from being force-fed lots and lots of best-case scenarios, rainbows, and unicorns. Journalists and analysts have often helped push for these kinds of reality checks, though they also tend to pile on as technologies or ideas drop into the “trough of disillusionment” that Gartner is so fond of describing. Enterprise IT, business users, and the vendors themselves all eventually do a fair bit of policing, sometimes too late for their own good, but we seem to be headed in this (positive) direction right now.


So while a lot of the hype can seem like so much wasted energy from all parties, when the trend or shift being hyped actually has merit, something useful comes out the other end. Now, would most of us (advertising agencies and ad reps excluded) prefer some way to skip the aggravation of this process and jump right to the end? I’d bet so. However, consider this all a bit of a trial by fire. The only way for something to be proven strong enough to pass through the fire is, well, to actually do it.


So, hold your nose and smile. Hype is good – with a few important caveats. Be critical. Be well-armed with the right questions to ask in order to discern the valuable from the merely fancifully over-marketed. Be ready to see the value in approaching something a new way, even if it’s something you’ve done the same way for decades. Be pragmatic enough to know it won’t happen overnight or with the wave of a magic wand.

If it makes you feel any better, cloud computing isn’t the only term getting the Gartner Hype Cycle treatment this year. Added to the list, according to this ReadWriteWeb article, were big data, gamification, Internet of Things, and consumerization. Misery loves company, I guess.


And, in the meantime, it may be time to come up with your own term to start campaigning for next year’s Cloud Hype Awards. I think the hype is here to stay for a while longer.

Friday, July 29, 2011

A service provider ecosystem gaining steam in the cloud is good news for enterprises

If you’re in enterprise IT, you (or that guy who sits next to you) are very likely looking to figure out how to start using cloud computing. You’ve probably done a fair bit of sleuthing around the industry to see what’s out there. Stats from a number of different analyst firms point overwhelmingly to the fact that enterprises are first and foremost trying to explore private cloud, an approach that gives them a lot of control and (hopefully) the ability to use their existing security and compliance implementations and policies.

And, all that sleuthing will most likely lead you to one pretty obvious fact: there are lots of approaches to building and delivering a private cloud.

So, here’s an additional thing to think about when picking how you’re going to deliver that private cloud: an ecosystem is even more valuable than a good product.

While your current plans may call for a completely and utterly in-house cloud implementation, you just might want to expand your search criteria to include this “what if” scenario: what if at some point you’d like to send a workload or two out to an external provider, even if only on a limited basis?

That “what if” should get you thinking about the service provider partners that you will have to consider when that time comes. Or even a network of them.

The CA Technologies cloud product announcements on July 27 talked a lot about the offerings from CA Technologies (including those targeted for service providers specifically), but I wanted to make sure that the cloud service provider ecosystem that’s building steam around CA AppLogic got the attention it deserved as well.

Kick-starting a cloud ecosystem

In the early days of cloud computing (and prior to being acquired by CA), 3Tera made a lot of headway with service providers. Over the 16 months since 3Tera came onboard, CA has been working to build on that, and we’ve learned a lot about why and how to work closely with our growing set of partners. You can’t focus on technology alone; you also have to figure out how to enable your partners’ business. For service providers, that’s all about growing revenues and building margin.

We’re now seeing results from those efforts. In the first half of this year alone, we’ve announced new and extended partnerships with ScaleMatrix, Bird Hosting, StratITsphere, Digicor, and others around the world.

A benefit to enterprises: service providers as a safety valve

Interestingly enough, enterprises view this ecosystem as a real benefit as they consider adopting a private cloud platform like CA AppLogic. Not only can they quickly get a private cloud up and running, but they have a worldwide network of service providers that they can rely on as a safety valve for new projects, cloud bursting for existing applications, plus the real-world expertise that comes with having done this many times before.

A number of the partners and service providers using CA AppLogic to deliver cloud services to their customers joined in on the announcement of CA AppLogic 3.0 on July 27. Many posted blogs that talked about how they believed the new capabilities would prove useful for their own business – and why it would be intriguing for enterprise customers as well.

Here are a few highlights from their blog comments:

Kevin Van Mondfrans of Layered Tech pointed to CA AppLogic’s continued innovation for application deployment “with its intuitive drag and drop application deployment interface.” The visual interface, he said, is the “hallmark of AppLogic,” and it “continues to differentiate this platform from the others. AppLogic enables complete deployment of entire application environments including virtual load balancers, firewalls, web servers, application server and databases in a single motion.”

The new AppLogic 3.0 capabilities “are interesting enhancements,” said Kevin in his post, “and they enable a broader set of use cases for our customers with privacy requirements and who want to migrate VMware and Xen environments to the cloud.”

Mike Vignato of ScaleMatrix notes that he believes “having the ability to use VMware inside the AppLogic environment will turbo charge the adoption rate” of CA AppLogic as a cloud platform. Why? “It will allow enterprise CIOs to leverage their VMware investment while simplifying cloud computing and lowering overall costs.”

DNS Europe thought the ability to import workloads from VMware or Xen using the Open Virtualization Format (OVF) import feature was a “further testament to CA's longstanding commitment to open standards. OVF import simplifies operations, increases agility, and liberates VMware- or Xen-based workloads for operation within AppLogic applications.” They also called out Role-Based Access Control, as well as the new Global Fabric Controller: “Automatic detection and inventory functions further boost AppLogic as the number one ITSM friendly cloud platform.” (Go here for my interview with Stephen Hurford, their cloud services director.)

Christoph Streit of ScaleUp posted this summary of what’s important to their business in AppLogic 3.0: “One of the most important new features from our perspective is the new VLAN tagging support. This has been a feature that most of our customers, who are mainly service providers, have been asking for. This new feature enables a service provider to offer their customers cloud-based compute resources (such as VMs or complete application stacks) in the same network segment as their co-located or dedicated servers. Also, this makes it possible for a service provider to segregate customer traffic more easily.”

Mike Michalik, whose team at Cirrhus9 spent over 200 documented hours working with the beta code, agreed. The new VLAN capabilities, he writes, “will allow Cirrhus9 to basically build true multi-tenant grids for our MSP clients. This will give them the flexibility to have a single grid that has truly segregated clients on it, as opposed to having multiple grids for each one. It will also allow for easier system administration per client and geographically dispersed data centers for the same overall grid.”

ScaleMatrix’s Mark Ortenzi was on the same page. “The ability for CA AppLogic 3.0 to allow VLAN tagging support…is an amazing new feature that will really change the way MSPs can go to market.” (You can read a Q&A I did with Mark a few months back about their business here.)
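To make that multi-tenant point a bit more concrete, here's a minimal sketch of the bookkeeping behind VLAN-based segregation. This is a purely illustrative Python example of my own (the class, the VLAN pool range, and the interface names are all assumptions), not how CA AppLogic actually implements the feature. The idea is simply that each customer gets a dedicated 802.1Q VLAN ID, so their traffic stays isolated even though everyone shares the same physical grid:

    # Illustrative only: per-tenant VLAN allocation for a multi-tenant grid.
    # Each customer gets a dedicated 802.1Q VLAN ID, so their traffic stays
    # segregated even though everyone shares the same physical hosts.

    VLAN_POOL = range(100, 200)  # IDs reserved for customer traffic (assumed)


    class VlanAllocator:
        def __init__(self, pool=VLAN_POOL):
            self.available = list(pool)
            self.assigned = {}  # tenant name -> VLAN ID

        def vlan_for(self, tenant):
            """Return the tenant's VLAN ID, allocating one if needed."""
            if tenant not in self.assigned:
                if not self.available:
                    raise RuntimeError("VLAN pool exhausted")
                self.assigned[tenant] = self.available.pop(0)
            return self.assigned[tenant]

        def tagged_interface(self, tenant, parent="eth0"):
            """Name of the 802.1Q sub-interface carrying this tenant's traffic."""
            return f"{parent}.{self.vlan_for(tenant)}"


    allocator = VlanAllocator()
    print(allocator.tagged_interface("acme-corp"))   # eth0.100
    print(allocator.tagged_interface("globex-llc"))  # eth0.101

Because the same VLAN ID can stretch across a customer's cloud VMs and their co-located or dedicated servers, this is also what puts both on the same network segment, as Streit describes above.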

“The really cool thing I’m excited about,” writes Mike Michalik at Cirrhus9, “is the bandwidth metering. I like the flexibility that this option gives us now because we can have several billing models, if we choose to. Tiered billing can be in place for high consumption users while fixed billing options can be provided to clients that have set requirements that don’t vary much.”
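Here's a rough sketch of why metering opens up multiple billing models. The tiers, rates, and allowances below are made-up numbers of my own, not Cirrhus9's or CA's pricing; the point is simply that the same metered usage can feed either a tiered schedule or a flat plan with overage:

    # Hypothetical billing sketch: the same metered usage can feed either a
    # fixed monthly plan or a tiered, consumption-based schedule.

    # (tier ceiling in GB, price per GB) -- made-up numbers for illustration
    TIERS = [(500, 0.15), (2000, 0.10), (float("inf"), 0.07)]
    FIXED_MONTHLY_FEE = 120.00        # flat plan (assumed)
    FIXED_PLAN_INCLUDED_GB = 1000
    FIXED_OVERAGE_PER_GB = 0.20


    def tiered_bill(usage_gb):
        """Charge each slice of usage at its tier's rate."""
        total, floor = 0.0, 0
        for ceiling, rate in TIERS:
            if usage_gb <= floor:
                break
            billable = min(usage_gb, ceiling) - floor
            total += billable * rate
            floor = ceiling
        return round(total, 2)


    def fixed_bill(usage_gb):
        """Flat fee plus overage beyond the included allowance."""
        overage = max(0, usage_gb - FIXED_PLAN_INCLUDED_GB)
        return round(FIXED_MONTHLY_FEE + overage * FIXED_OVERAGE_PER_GB, 2)


    for gb in (300, 1500, 5000):
        print(gb, "GB -> tiered:", tiered_bill(gb), "fixed:", fixed_bill(gb))

High-consumption customers can be put on the tiered schedule, while clients with steady, predictable usage stay on the fixed plan, which is exactly the flexibility Michalik is describing.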

The value of an ecosystem

ScaleUp’s Christoph Streit underscored the importance of these kinds of partnerships in the cloud space. He used ScaleUp and CA AppLogic as an example: “We can effectively show the value of cloud computing to everyone – IT departments, business users, developers, the CIO and, in some cases, even the CEO.”

We’ll be working with these partners and many others even more actively as part of the CA Cloud Market Accelerator Program for Service Providers that we announced this week as well.

Watch this space for more updates on the ecosystem. For profiles on many of them, you can check out the Cloud Accelerator profiles we’ve created as well.

Wednesday, May 25, 2011

Hurford of DNS Europe: service providers and SaaS developers are showing enterprises how cloud is done

The antidote to cloud computing hype is, well, reality. For example: talking to people who are in the middle of working on cloud computing right now. We’ve started a whole new section of the ca.com website that provides profiles, videos, and other details of people doing exactly that.


One of the people highlighted on this list of Cloud Luminaries and Cloud Accelerators is Stephen Hurford of DNS Europe. DNS Europe is a London-based cloud hosting business with over 500 customers across the region. They have been taking the cloud-based business opportunities very seriously for several years now, and provide cloud application hosting and development, hybrid cloud integration services, plus consulting to help customers make the move to cloud.


I convinced Hurford, who serves as the cloud services director for DNS Europe, to share a few thoughts about what customers and service providers are – and should be – doing right now and some smart strategies he’d suggest.


Jay Fry, Data Center Dialog: From your experiences, Stephen, how should companies think about and prepare for cloud computing? Is there something enterprises can learn from service providers like yourself?


Stephen Hurford, DNS Europe: Picking the right project for cloud is very important. Launching into this saying “we’re going to convert everything to the cloud” in some short period of time is almost doomed. There is a relatively steep learning curve with it all.


But this is an area of opportunity for all of the service providers that have been using CA 3Tera AppLogic [like DNS Europe] for the last three years. We’re in a unique position to be able to help enterprises bypass the pitfalls that we had to climb out of.


Because the hosting model has become a much more accepted approach in general, enterprises are starting to look much more to service providers. They’re not necessarily looking to service providers to host their stuff for them, but to teach them about hosting, because that’s what their internal IT departments are becoming – hosting companies.


DCD: What is the physical infrastructure you use to deliver your services?

Stephen Hurford: We don’t have any data centers at all. We are a CSP that doesn’t believe in owning physical infrastructure apart from the rack inwards. We host with reputable Tier 3 partners like Level3, Telenor, and Interxion, but it means that we don’t care where the facility is. We can deploy a private cloud for a customer of ours within a data center anywhere in the world and with any provider.


DCD: When people talk about cloud, the public cloud is usually their first thought. There are some big providers in this market. How does the public cloud market look from your perspective as a smaller service provider? Is there a way for you to differentiate what you can do for a customer?


Stephen Hurford: The public cloud space is highly competitive – if you look at Amazon.com, GoGrid, Rackspace, the question is how can you compete in that market space as a small service provider? It’s almost impossible to compete on price, so don’t even try.


But one thing that we have that Amazon, Rackspace, and GoGrid do not have is an up-sell product – they cannot take their customers from a public cloud to a private cloud product. So when their customers reach a point where they say, “Well, hang on, I want control of the infrastructure,” that’s not what you get from Amazon, Rackspace, and GoGrid. From those guys you get an infrastructure that’s controlled by the provider. Because we use CA 3Tera AppLogic, the customer gets control, whether the cloud is hosted by a service provider or by the customer internally.


DCD: My CA Technologies colleague Matt Richards has been blogging a bit about smart ways for MSPs to compete and win with so much disruption going on. Where do you recommend a service provider start if they want to get into the cloud services business today?


Stephen Hurford: My advice to service providers who are starting up is to begin with niche market targeting. Pick a specific service or an application or a target market and become very good at offering and supporting that.


We recommend starting at the top, by providing SaaS. SaaS is relatively straightforward to get up and running on AppLogic if you choose the right software. The templates already exist – they are in the catalog, they are available from other service providers, and soon will be available in a marketplace of applications. Delivering SaaS offerings carries the lowest technical and learning overhead, and that’s why we recommend it.


DCD: Looking at your customer base, who is taking the best advantage of cloud platform capabilities right now? Is the area you just mentioned – SaaS – where you are seeing a lot of movement?

Stephen Hurford: Yes. The people who have found us and who “get it” are the SaaS developers. In fact, 90% of our customers are small- to medium-sized customers who are providing SaaS to enterprises and government sectors. It’s starting to be an interesting twist in the tale: these SaaS providers are starting to show enterprises how it’s done. They are the ones figuring out how to offer services. The enterprises are starting to think, “Well, if these SaaS providers can offer services based on AppLogic, why can’t I build my own AppLogic cloud?” That’s become our lead channel into the enterprise market.


DCD: How do service providers deal with disruptive technologies like cloud computing?

Stephen Hurford: From a service provider perspective, it’s simple: first, understand the future before it gets here. Next, push to the front if you can. Then work like crazy to drive it forward.


Cloud is a hugely disruptive technology, but without our low-cost resources we could not be as far forward as we are.


One of the fundamentally revolutionary sides of AppLogic is that I only need thousand-dollar boxes to run this stuff on. And if I need more, I don’t need to buy a $10,000 box. I only need to buy 4 $1,000 boxes. I have a grid and it’s on commodity servers. That is where AppLogic stood out from everything else.


DCD: Can you explain a bit more about the economics of this and your approach to keeping your costs very low? It sounds like a big competitive weapon for you.

Stephen Hurford: One of the big advantages of AppLogic is that it has reduced our hardware stock levels by 75%. That’s because all cloud server nodes are more or less the same, so we can easily reuse a server from one customer to another, simply by reprovisioning it in a new cloud.


One of the key advantages we’ve found is that once hardware has reached the end of its workable life in terms of an enterprise’s standard private cloud, it can very easily be repurposed into our public cloud where there is less question of “well, exactly what hardware is it?” So we’ve found we can extend the workable lifespan of our hardware by 40-50%.


DCD: What types of applications do you see customers bringing to the cloud now? Are they starting with greenfield apps, since this would give them a clean slate? The decision enterprises make will certainly have an impact for service providers.

Stephen Hurford: Some enterprises are taking the approach of asking, “OK, how can I do my old stuff on a cloud? How do I do old stuff in a new way?” That’s one option for service providers, too: can I get the benefits of simplifying, reducing my stock levels, and cutting the management overhead across my entire system by moving to AppLogic? Then I won’t have 15 different types of servers and 15 different management teams. I can centralize it and get immediate benefits from that.


The other approach is to understand that this is a new platform with new capabilities, so I should look at what new stuff I can do with this platform. For service providers, it’s about finding a niche – being able to do something that works for a large proportion of your existing customers. Start there because those are the folks that know you and they know your brand. Think about what they are currently using in the cloud and whether they would rather do that with you.

DCD: Some believe that there is (or will soon be) a mad rush to switch all of IT to cloud computing; others (and I’m one of those) see a much more targeted shift. How do you see the adoption of cloud computing happening?


Stephen Hurford: There will always be customers who need dedicated servers. But those are customers who don’t have unpredictable growth and who need maximum performance. Those customers will get a much lower cost benefit from moving to the cloud.


For example, we were dealing with a company in Texas that wanted to move a gaming platform to the cloud. These are multi-player, shoot-‘em-up games. You host a game server that tracks the coordinates of every object in the game space in real time for 20 to 30 players and sends that data to the Xbox or PlayStation, which renders it so everyone can play in the same gamespace. If you tried to do that with standard commodity hardware, you wouldn’t get the needed performance on the disk I/O.


The question to that customer was, “Do you have a fixed requirement? If you need 10 servers for a year and you’re not going to need to grow or shrink, don’t move to the cloud.” Dedicated hardware, however, is expensive. They said, “We don’t know what our requirements are and we need to be able to deploy new customers within 10 minutes.” I told them you don’t want to use dedicated servers for that, so you’re back to the cloud, but perhaps with a more tailored hardware solution such as SSD drives to optimize I/O performance.


DCD: So what do you think is the big change in approach here? What’s the change that a cloud platform like what you’re using for your customers is driving?

Stephen Hurford: Customers are saying it’s very easy with our platform to open up 2 browsers and move an entire application and infrastructure stack from New York to Tokyo. But that’s not enough.


Applications need to be nomadic. The concept of nomadic applications is way distant in the future, but for me, what we’re able to offer today is a clear sign-post for the future. Applications with their infrastructure will eventually become completely separated from the hardware level and the hypervisor. My application can go anywhere in the world with its data and with its OS that it needs to run on. All it needs to do is to plug into juice (hardware, CPU, RAM) – and I’ve got what I need.


Workloads like this will be able to know where they are. By the minute, they’ll be able to find the least-cost service provisioning. If you’ve got an application and it doesn’t really matter where it is and it’s easy to move it around, then you can take advantage of least-cost service provisioning within a wider territorial region.
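To put a little structure behind “least-cost service provisioning”: if an application really can run anywhere, placement reduces to a small optimization over whatever price and constraint data you can gather from candidate providers. The sketch below is entirely my own simplification, with hypothetical providers and prices:

    # Hypothetical least-cost placement for a "nomadic" workload: pick the
    # cheapest provider/region that satisfies the workload's constraints.

    PROVIDERS = [
        # name, region, $ per hour for the needed capacity (made-up figures)
        {"name": "provider-a", "region": "eu-west", "price_per_hour": 0.42},
        {"name": "provider-b", "region": "us-east", "price_per_hour": 0.35},
        {"name": "provider-c", "region": "ap-east", "price_per_hour": 0.31},
    ]


    def cheapest_placement(providers, allowed_regions=None):
        """Return the lowest-cost provider, optionally restricted by region
        (e.g. to keep data inside a particular territory)."""
        candidates = [
            p for p in providers
            if allowed_regions is None or p["region"] in allowed_regions
        ]
        if not candidates:
            raise ValueError("no provider satisfies the placement constraints")
        return min(candidates, key=lambda p: p["price_per_hour"])


    # Unconstrained, the workload drifts to the cheapest spot...
    print(cheapest_placement(PROVIDERS)["name"])                          # provider-c
    # ...while a territorial constraint keeps it within an allowed region.
    print(cheapest_placement(PROVIDERS, {"eu-west", "us-east"})["name"])  # provider-b

The constraint in the second call is the “wider territorial region” idea: the workload chases the lowest price, but only among the locations it is allowed to use.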




Thanks, Stephen, for the time and for sharing your perspectives. You can watch Stephen’s video interview and read more about DNS Europe here.

Friday, December 31, 2010

A cloudy look back at 2010

Today seemed like a good day to take stock of the year in cloud computing, at least according to the view from this Data Center Dialog blog – and from what you as readers thought was interesting over the past 12 months.

Setting the tone for the year: cloud computing M&A

It probably isn’t any big surprise that 3 of the 4 most popular articles here in 2010 had to do with one of the big trends of the year in cloud computing – acquisitions. (Especially since my employer, CA Technologies, had a big role in driving that trend.) CA Technologies made quite a bit of impact with our successive acquisitions of Oblicore, 3Tera, and Nimsoft at the beginning of the year. We followed up by bringing onboard others like 4Base, Arcot, and Hyperformix.

But those first three set the tone for the year: the cloud was the next IT battleground and the big players (like CA) were taking it very seriously. CRN noted our moves as one of the 10 Biggest Cloud Stories of 2010. Derrick Harris of GigaOm called us out as one of the 9 companies that drove cloud in 2010. And Krishnan Subramanian included CA's pick-up of 3Tera and Nimsoft in his list of key cloud acquisitions for the year at CloudAve.

As you’d expect, folks came to Data Center Dialog to get more details on these deals. We had subsequent announcements around each company (like the release of CA 3Tera AppLogic 2.9), but the Nimsoft one got far and away the most interest. I thought one of the more interesting moments was how Gary Read reacted to a bunch of accusations of being a “sell-out” and going to the dark side by joining one of the Big 4 management vendors they had been aggressively selling against. Sure, some of the respondents were competitors trying to spread FUD, but he handled it all clearly and directly – Gary's signature style, I’ve come to learn.

What mattered a lot? How cloud is changing IT roles

Aside from those acquisitions, one topic was by far the most popular: how cloud computing was going to change the role of IT as a whole – and for individual IT jobs as well. I turned my November Cloud Expo presentation into a couple posts on the topic. Judging by readership and comments, my “endangered species” list for IT jobs was the most popular. It included some speculation that jobs like capacity planning, network and server administration, and even CIO were going the way of the dodo. Or were at least in need of some evolution.

Part 2 conjured up some new titles that might be appearing on IT business cards very soon, thanks to the cloud. But that wasn’t nearly as interesting for some reason. Maybe fear really is the great motivator. Concern about the changes that cloud computing is causing to people’s jobs certainly figured as a strong negative in the survey we published just a few weeks back. Despite a move toward “cloud thinking” in IT, fear of job loss drove a lot of the negative vibes about the topic. Of course, at the same time, IT folks are seeing cloud as a great thing to have on their resumes.

All in all, this is one of the major issues for cloud computing, not just for 2010, but in general. The important issue isn’t so much figuring out the technology; it’s figuring out how to run and organize IT in a way that makes the best use of that technology, creates processes that are most useful for the business, and that people can learn to live and work with on a daily basis. I don’t think I’m going out on a limb here to say that this topic will be key in 2011, too.

Learning from past discussions on internal clouds

James Urquhart noted in his “cloud computing gifts of 2010” post at CNET that the internal/private cloud debate wound its way down during the year, ending in a truce. “The argument died down…when both sides realized nobody was listening, and various elements of the IT community were pursuing one or the other – or both – options whether or not it was ‘right.’” I tend to agree.

These discussions (arguments?), however, made one of my oldest posts, “Are internal clouds bogus?” from January 2009, the 5th most popular one – *this* year. I stand by my conclusion (and it seems to match where the market has ended up): regardless of what name you give the move to deliver a more dynamic IT infrastructure inside your four walls, it’s compelling. And customers are pursuing it.

Cloud computing 101 remained important

2010 was a year in which the basics remained important. The definitions really came into focus, and a big chunk of the IT world joined the conversation about cloud computing. That meant that things like my Cloud Computing 101 post, expanding on my presentation on the same topic at CA World in May, garnered a lot of attention.

Folks were making sure they had the basics down, especially since a lot of the previously mentioned arguments were settling down a bit. My post outlined a bunch of the things I learned from giving my Cloud 101 talk, namely: don’t get too far ahead of your headlights. If you start being too theoretical, customers will quickly snap you right back to reality. And that’s how it should be.

Beginning to think about the bigger implications of cloud computing

However, several forward-looking topics ended up at the top of the list at Data Center Dialog this year as well. Readers showed interest in some of the things that cloud computing was enabling, and what it might mean in the long run. Consider these posts as starting points for lots more conversations going forward:
Despite new capabilities, are we just treating cloud servers like physical ones? Some data I saw from RightScale about how people are actually using cloud servers got me thinking that despite the promise of virtualization and cloud, people perhaps aren’t making the most of these new-fangled options. In fact, it sounded like we were just doing the same thing with these cloud servers as we’ve always done with physical ones. It seemed to me that missed the whole point.

Can we start thinking of IT differently – maybe as a supply chain? As we started to talk about the CA Technologies view of where we think IT is headed, we talked a lot about a shift away from “IT as a factory” in which everything was created internally, to one where IT is the orchestrator of services coming from many internal and external sources. It implies a lot of changes, including expanded management requirements. And it caught a lot of analyst, press, customer – and reader – attention, including this post from May.

Is cloud a bad thing for IT vendors? Specifically, is cloud going to cut deeply into the revenues that existing hardware and software vendors are getting today from IT infrastructure? This certainly hasn’t been completely resolved yet. 2010 was definitely a year where vendors made their intentions known, however, that they aren’t going to be standing still. Oracle, HP, IBM, BMC, VMware, CA, and a cast of thousands (OK, dozens at least) of start-ups all made significant moves, often at their own user conferences, or events like Cloud Expo or Cloud Connect.

What new measurement capabilities will we need in a cloud-connected world? If we are going to be living in a world that enables you to source IT services from a huge variety of providers, there is definitely a need to help make those choices. And even to just have a common, simple, business-level measuring stick for IT services in the first place. CA Technologies took a market-leading stab at that by contributing to the Service Measurement Index that Carnegie Mellon is developing, and by launching the Cloud Commons community. This post explained both.

So what’s ahead for 2011 in cloud computing?

That sounds like a good topic for a blog post in the new year. Until then, best wishes as you say farewell to 2010. And rest up. If 2011 is anything like 2010, we’ll need it.

Wednesday, December 29, 2010

Making 'good enough' the new normal

In looking back on some of the more insightful observations that I’ve heard concerning cloud computing in 2010, one kept coming up over and over again. In fact, it was re-iterated by several analysts onstage at the Gartner Data Center Conference in Las Vegas earlier this month.

The thought went something like this:

IT is being weighed down by more and more complexity as time goes on. The systems are complex, the management of those systems is complex, and the underlying processes are, well, also complex.

The cloud seems to offer two ways out of this problem. First, going with a cloud-based solution allows you to start over, often leaving a lot of the complexity behind. But that’s been the same solution offered by any greenfield effort – it always seems deceptively easier to start over than to evolve what you already have. Note that I said “seems easier.” The real-world issues that got you into the complexity problem in the first place quickly return to haunt any such project. Especially in a large organization.

Cloud and the 80-20 rule

But I’m more interested in highlighting the second way that cloud can help. That way is more about the approach to architecture that is embodied in a lot of the cloud computing efforts. Instead of building the most thorough, full-featured systems, cloud-based systems are often using “good enough” as their design point.

This is the IT operations equivalent of the 80-20 rule. It’s the idea that not every system has to have full redundancy, failover, and the like. It doesn't need to be perfect or have every possible feature. You don't need to know every gory detail from a management standpoint. In most cases, going to those extremes means what you're delivering will be over-engineered and not worth the extra time, effort, and money. That kind of bad ROI is a problem.

“IT has gotten away from ‘good enough’ computing,” said Gartner’s Donna Scott in one of her sessions at the Data Center Conference. “There is a lot an IT dept can learn from cloud, and that’s one of them.”

The experiences of eBay

In talking about his experiences working at eBay during the same conference, Mazen Rawashdeh, vice president of eBay's technology operations, talked about his company’s need to be able to understand what made the most impact on cost and efficiency and to optimize for those. That meant a lot of “good enough” decisions in other areas.

eBay IT developed metrics that helped drive the right decisions, and then focused, according to Rawashdeh, on innovation, innovation, innovation. They avoided the things that would weigh them down because “we needed to break the linear relationship between capacity growth and infrastructure cost,” said Rawashdeh. At the conference, he laid out a blueprint for a pretty dynamic IT operations environment, stress-tested by one of the bigger user bases on the Web.

Rawashdeh couched all of this IT operations advice in one of his favorite quotes from Charles Darwin: “It’s not the strongest of species that survive, nor the most intelligent, but the ones most responsive to change.” In the IT context, it means being resilient to lots of little changes – and little failures – so that the whole can still keep going. “The data center itself is our ‘failure domain,’” he said. Architecting lots of little pieces to be “good enough” lets the whole be stronger, and more resilient.

Everything I needed to know about IT operations I learned from my cloud provider

So who seems to be the best at “good enough” IT these days? Most would point to the cloud service providers, of course.

Many end-user organizations are starting to get this kind of experience, but aren’t very far along yet. Forrester’s James Staten says in his 2011 predictions blog that he believes end-user organizations will build private clouds in 2011, “and you will fail. And that’s a good thing. Because through this failure you will learn what it really takes to operate a cloud environment.” He recommends that you “fail fast and fail quietly. Start small, learn, iterate, and then expand.”

Most enterprises, Staten writes, “aren’t ready to pass the baton” – to deliver this sort of dynamic infrastructure – yet. “But service providers will be ready in 2011.” Our own Matt Richards agrees. He created a holiday-inspired list of some interesting things that service providers are using CA Technologies software to make possible.

In fact, Gartner’s Cameron Haight had a whole session at the Vegas event to highlight things that IT ops can learn from the big cloud providers.

Some highlights:

· Make processes experience-based, rather than set by experts. Just because it was done one way before doesn’t mean that’s the right way now. “Cloud providers get good at just enough process,” said Haight, especially in the areas of deployment and incident management.

· Failure happens. In fact, the big guys are moving toward a “recovery-oriented” computing philosophy. “Don’t focus on avoiding failure, but on recovery,” said Haight. The important stat with this approach is not mean-time-between-failures (MTBF), but mean-time-to-repair (MTTR). Reliability, in this case, comes from software, not the underlying hardware. (A quick worked example of the math follows this list.)

· Manageability follows from both software and management design. Management should lessen complexity, not add to it. Haight pointed toward tools trying to facilitate “infrastructure as code,” to enable flexibility.
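Here's that quick worked example of Haight's recovery point, using illustrative numbers of my own: steady-state availability is roughly MTBF / (MTBF + MTTR), so shrinking repair time can buy as much availability as heroic efforts to prevent failures in the first place.

    # Illustrative availability arithmetic: availability ~= MTBF / (MTBF + MTTR).

    def availability(mtbf_hours, mttr_hours):
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # A component that fails every ~500 hours but takes 8 hours to repair...
    print(f"{availability(500, 8):.4%}")      # ~98.43%

    # ...versus the same failure rate with automated recovery in 10 minutes.
    print(f"{availability(500, 10/60):.4%}")  # ~99.97%

The failure rate is identical in both cases; only the recovery time changes, and that alone moves the result from roughly two nines to better than three.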

Know when you need what

So, obviously, “good enough” is not always going to be, well, good enough for every part of your IT infrastructure. But it’s an idea that’s getting traction because of successes with cloud computing. Those successes are causing IT people to ask a few fundamental questions about how they can apply this approach to their specific IT needs. And that’s a useful thing.

In thinking about where and how “good enough” computing is appropriate, you need to ask yourself a couple of questions. First, how vital is the system I’m working on? What’s its use, its tolerance for failure, its importance, and the like? The more critical it is, the more careful you have to be with your threshold of “good enough.”

Second, is speed of utmost importance? Cost? Security? Or a set of other things? Like Rawashdeh at eBay, know what metrics are important, and optimize to those.

Be honest with yourself and your organization about places you can try this approach. It’s one of the ideas that got quite a bit of attention in 2010 that’s worth considering.

Thursday, December 16, 2010

Survey points to the rise of 'cloud thinking'

In any developing market, doing a survey is always a bit of a roll of the dice. Sometimes the results can be pretty different from what you expected to find.

I know a surprise like that sounds unlikely in the realm of cloud computing, a topic that, if anything, feels over-scrutinized. However, when the results came back from the Management Insight survey (that CA Technologies sponsored and announced today), there were a few things that took me and others looking at the data by surprise.

Opinions of IT executives and IT staffs on cloud don’t differ by too much. We surveyed both decision makers and implementers, thinking that we’d find some interesting discrepancies. We didn’t. They all pretty much thought cloud could help them on costs, for example. And regardless of both groups’ first impressions, I’m betting cost isn’t their eventual biggest benefit. Instead, I’d bet that it’s agility – the reduced time to having IT make a real difference in your business – that will probably win out in the end.

IT staff are of two minds about cloud. One noticeable contradiction in the survey was that the IT staff was very leery about cloud because they see its potential to take away their jobs. At the same time, one of the most popular reasons to support a cloud initiative was because it familiarized them with the latest and greatest in technology and IT approaches. It seems to me that how each IT person deals with these simultaneous pros and cons will decide a lot about the type of role they will have going forward. Finding ways to learn about and embrace change can’t be a bad thing for your resume.

Virtualization certainly has had an impact on freeing people to think positively about cloud computing. I wrote about this in one of my early blogs about internal clouds back at the beginning of 2009 – hypervisors helped IT folks break the connection between a particular piece of hardware and an application. Once you do that, you’re free to consider a lot of “what ifs.”

This new survey points out a definite connection between how far people have gotten with their virtualization work and their support for cloud computing. The findings say that virtualization helps lead to what we’re calling “cloud thinking.” In fact, the people most involved in virtualization are also the ones most likely to be supportive of cloud initiatives. That all makes sense to me. (Just don’t assume that because you’ve virtualized some servers, you’ve done everything you need to do in order to get the benefits of cloud computing.)

The survey shows people expect a gradual move from physical infrastructure to virtual systems, private cloud, and public cloud – not a mad rush. Respondents did admit to quite a bit of cloud usage – more than many other surveys I’ve seen. That leads you to think that cloud is starting to come of age in large enterprises (to steal a phrase from today’s press release). But it’s not happening all at once, and there’s a combination of simple virtualization and a use of more sophisticated cloud-based architectures going on. That’s going to lead to mixed environments for quite some time to come, and a need to manage and secure those diverse environments, I’m betting.

There are open questions about the ultimate cost impact of both public and private clouds. One set of results listed cost as a driver and an inhibitor for public clouds, and as a driver and an inhibitor for private ones, too. Obviously, there’s quite a bit of theory that has yet to be put into practice. I bet that’s what a lot of the action in 2011 will be all about: figuring it out.

And who can ignore politics? Finally, in looking at the internal organizational landscape of allies and stonewallers, the survey reported what I’ve been hearing anecdotally from customers and our folks who work with them: there are a lot of political hurdles to get over to deliver a cloud computing project (let alone a success). The survey really didn’t provide a clear, step-by-step path to success (not that I expected it would). I think the plan of starting small, focusing on a specific outcome, and being able to measure results is never a bad approach. And maybe those rogue cloud projects we hear about aren’t such a bad way to start after all. (You didn’t hear that from me, mind you.)

Take a look for yourself

Those were some of the angles I thought were especially interesting, and, yes, even a bit surprising in the survey. In addition to perusing the actual paper that Management Insight wrote (registration required) about the findings, I’d also suggest taking a look at the slide show highlighting a few of the more interesting results graphically. You can take a look at those slides here.

I’m thinking we’ll run the survey again in the middle of next year (at least, that seems like about the right timing to me). Two things will be interesting to see. First, what will the “cloud thinking” that we’re talking about here have enabled? The business models that cloud computing makes possible are new, pretty dynamic, and disruptive. Companies that didn’t exist yesterday could be challenging big incumbents tomorrow with some smart application of just enough technology. And maybe with no internal IT whatsoever.

Second, it will be intriguing to see what assumptions that seem so logical now will turn out to be – surprisingly – wrong. But, hey, that’s why we ask these questions, right?

This blog is cross-posted on The CA Cloud Storm Chasers site.

Tuesday, December 14, 2010

Beyond Jimmy Buffett, sumos, and virtualization? Cloud computing hits #1 at Gartner Data Center Conference

I spent last week along with thousands of other data center enthusiasts at Gartner’s 2010 Data Center Conference and was genuinely surprised by the level of interest in cloud computing on both sides of the podium. As keynoter and newly minted cloud computing expert Dave Barry would say, I’m not making this up.

This was my sixth time at the show (really), and I’ve come to use the show as a benchmark for the types of conversations that are going on at very large enterprises around their infrastructure and operations issues. And as slow-to-move as you might think that part of the market might be, there are some interesting insights to be gained comparing changes over the years – everything from the advice and positions of the Gartner analysts, to the hallway conversations among the end users, to really important things, like the themes of the hospitality suites.

So, first and foremost, the answer is yes, APC did invite the Jimmy Buffett cover band back again, in case you were wondering. And someone decided sumo wrestling in big, overstuffed suits was a good idea.

Now, if you were actually looking for something a little more related to IT operations, read on:

Cooling was hot this year…and cloud hits #1

It wasn’t any big surprise what was at the top of people’s lists this year. The in-room polling at the opening keynotes placed data center space, power, and cooling at the top of the list of biggest data center challenges (23%). The interesting news was that developing a private/public cloud strategy came in second (16%).

This interest in cloud computing was repeated in the Gartner survey of CIOs’ top technology priorities. Cloud computing was #1. It made the biggest jump of all topics since Gartner’s ’09 survey, bypassing virtualization on its way to the top of the list. But don’t think virtualization wasn’t important: it followed right behind at #2. Gartner’s Dave Cappuccio made sure the audience was thinking big on the virtualization topic, saying that it wasn’t just about virtualizing servers or storage now. It’s about “the virtualization of everything. Virtualization is a continuing process, not a one-time project.”

Bernard Golden, CEO of Hyperstratus and CIO.com blogger (check out his 2011 predictions here), wondered on Twitter if cloud leapfrogging virtualization didn’t actually put the cart before the horse. I’m not sure if CIOs know whether that’s true or not. But they do know that they need to deal with both of these topics, and they need to deal with them now.

Putting the concerns of 2008 & 2009 in the rear-view mirror

This immediacy for cloud computing is a shift from the previous years, I think. A lot of 2008 was about the recession’s impact, and even 2009 featured sessions on how the recession was driving short-term thinking in IT. If you want to do a little comparison yourself, take a look at a few of my entries about this same show from years past (spanning the entire history of my Data Center Dialog blog to date, in fact). Some highlights: Tom Bittman’s 2008 keynote (he said the future looks a lot like a private cloud), 2008’s 10 disruptive data center technologies, 2008’s guide to building a real-time infrastructure, and the impact of metrics on making choices in the cloud from last year.

The Stack Wars are here

Back to today (or at least, last week), though. Gartner’s Joe Baylock told the crowd in Vegas that this was the year that the Stack Wars ignited. With announcements from Oracle, VCE, IBM, and others, it’s hard to argue.

The key issue in his mind was whether these stack wars will help or inhibit innovation over the next 5 years. Maybe it moves the innovation to another layer. On the other hand, it’s hard for me to see how customers will allow stacks to rule the day. At CA Technologies, we continue to hear that customers expect to have diverse environments (that, of course, need managing and securing, cloud-related and otherwise). Baylock’s advice: “Avoid inadvertently backing into any vendor’s integrated stack.” Go in with your eyes open.

Learning about – and from – cloud management

Cloud management was front and center. Enterprises need to know, said Cameron Haight, that management is the biggest challenge for private cloud efforts. Haight called out the Big Four IT management software vendors (BMC, CA Technologies, HP software, and IBM Tivoli) as being slow to respond to virtualization, but he said they are embracing the needs around cloud management much faster. 2010 has been filled with evidence of that from my employer – and the others on this list, too.

There’s an additional twist to that story, however. In-room polling at several sessions pointed to interest from enterprises in turning to public cloud vendors themselves as their primary cloud management provider. Part of this is likely to be an interest in finding “one throat to choke.” Haight and Donna Scott also noted several times that there’s a lot to be learned from the big cloud service providers and their IT operations expertise (something worthy of a separate blog, I think). Keep in mind, however, that most enterprise operations look very different (and much more diverse) than the big cloud providers’ operations.

In a similar result, most session attendees also said they’d choose their virtualization vendors to manage their private cloud. Tom Bittman, in reviewing the poll in his keynote, noted that “the traditional management and automation vendors that we have relied on for decades are not close to the top of the list.” But, Bittman said, “they have a lot to offer. I think VMware’s capabilities [in private cloud management] are overrated, especially where heterogeneity is involved.”

To be fair: Bittman made these remarks because VMware topped the audience polling on this question. So, it’s a matter of what’s important in a customer’s eyes, I think. In a couple of sessions, this homogeneous v. heterogeneous environment discussion became an important way for customers to evaluate what they need for management. Will they feel comfortable with only what each stack vendor will provide?

9 private cloud vendors to watch

Bittman also highlighted 9 vendors that he thought were worthy of note for customers looking to build out private clouds. The list included BMC, Citrix, IBM, CA Technologies, Eucalyptus, Microsoft, Cisco, HP, and VMware.

He predicted very healthy competition in the public cloud space (and even public cloud failures, as Rich Miller noted at Data Center Knowledge) and similarly aggressive competition for the delivery of private clouds. He believed there would even be fierce competition in organizations where VMware owns the hypervisor layer.

As for tidbits about CA Technologies, you won’t be surprised to learn that I scribbled down a comment or two: “They’ve made a pretty significant and new effort to acquire companies to provide strategic solutions such as 3Tera and Cassatt,” said Bittman. “Not a vendor to be ignored.” Based on the in-room polling, though, we still have some convincing to do with customers.

Maybe we should figure out what it’ll take to get the Jimmy Buffett guy to play in our hospitality suite next year? I suppose that and another year of helping customers with their cloud computing efforts would certainly help.

In the meantime, it’s worth noting that TechTarget’s SearchDataCenter folks also did a good job with their run-down on the conference, if you’re interested in additional color. A few of them might even be able to tell you a bit about the sumo wrestling, if you ask nicely.

And I’m not making that up either.

Wednesday, October 13, 2010

The first 200 servers are the easy part: private cloud advice and why IT won’t lose jobs to the cloud

The recent CIO.com webcast that featured Bert Armijo of CA Technologies and James Staten of Forrester Research offered some glimpses into the state of private clouds in large enterprises at the moment. I heard both pragmatism and some good, old-fashioned optimism -- even when the topic turned to the impact of cloud computing on IT jobs.

Here are some highlights worth passing on, including a few juicy quotes (always fun):

Cloud has executive fans, and cloud decisions are being made at a relatively high level. In the live polling we did during the webcast, we asked who was likely to be the biggest proponent of cloud computing in attendees’ organizations. 53% said it was their CIO or senior IT leadership. 23% said it was the business executives. Forrester’s James Staten interpreted this to mean that business folks are demanding answers, often leaning toward the cloud, and the senior IT team is working quickly to bring solutions to the table, often including the cloud as a key piece. I suppose you could add: “whether they wanted to or not.”

Forrester’s Staten gave a run-down of why many organizations aren’t ready for an internal cloud – but gave lots of tips for changing that. If you’ve read James’ paper on the topic of private cloud readiness (reg required), you’ve heard a lot of these suggestions. There were quite a few new tidbits, however:

· On creating a private cloud: “It’s not as easy as setting up a VMware environment and thinking you’re done.” Even if this had been anyone’s belief at one point, I think the industry has matured enough (as have cloud computing definitions) for it not to be controversial any more. Virtualization is a good step on the way, but isn’t the whole enchilada.

· “Sharing is not something that organizations are good at.” James is right on here. I think we all learned this on the playground early in life, but it’s still true in IT. IT’s silos aren’t conducive to sharing things. James went farther, actually, and said, “you’re not ready for private cloud if you have separate virtual resource pools for marketing…and HR…and development.” Bottom line: the silos have got to go.

· So what advice did James give for IT organizations to help speed their move to private clouds? One thing they can do is “create a new desired state with separate resources, that way you can start learning from that [cloud environment].” Find a way to deliver a private cloud quickly (I can think of at least one).

· James also noted that “a private cloud doesn’t have to be something you build.” You can use a hosted “virtual private cloud” from a service provider like Layered Tech. Bert Armijo, the CA Technologies expert on the webcast, agreed. “Even large customers start with resources in hosting provider data centers.” Enterprises with CA 3Tera AppLogic running at their service provider and internally can then move applications to whichever location makes the most sense at a given point in time, said Armijo.

· What about “cloud-in-a-box” solutions? James was asked for his take. “Cloud-in-a-box is something you should learn from, not take apart,” he said. “The degree of completeness varies dramatically. And the way in which it suits your needs will vary dramatically as well.”

The biggest cloud skeptics were cited as – no surprise – the security and compliance groups within IT, according to the polling. This continues to be a common theme, but shouldn’t be taken as a reason to toss the whole idea of cloud computing out, emphasized Staten. “Everyone loves to hold up the security flag and stop things from happening in the organization.” But don’t let them. It’s too easy to use security as an excuse for not doing something that could be very useful to your organization.

Armijo also listed several tips for finding successful starting points in the move to creating a private cloud. It was all about pragmatic first steps, in Bert’s view. “The first 200 servers are the easy part,” said Armijo. “Because you can get a 50-server cloud up doesn’t mean you have conquered cloud.” His suggestions:

- Start where value outweighs the perceived risk of cloud computing for your organization (and it will indeed be different for each organization)
- Find places where you will have quick, repeated application or stack usage
- If you’re more on the bleeding edge, set up IT as an internal service provider to the various parts of the business. It’s more challenging, for sure, but there are (large) companies doing this today, and it will make profound improvements to IT’s service delivery.

Will cloud computing eliminate jobs? A bit of Armijo’s optimism was in evidence here: he said, in a word, no. “Every time we hit an efficiency wall, we never lose jobs,” he said. “We may reshuffle them. That will be true for clouds as well.” He believed more strategic roles will grow out of whatever changes cloud brings to IT.

“IT people are the most creative people on the face of the planet,” said Armijo. “Most of us got into IT because we like solving problems. That’s what cloud’s going to do – it’s going to let our creative juices flow.”

If you’re interested in listening to the whole webcast, which was moderated by Jim Malone, editorial director at IDG, you can sign up here for an on-demand, encore performance.

Tuesday, October 12, 2010

Delivering a cloud by Tuesday

Much of what I heard at VMworld in San Francisco (and what is likely being repeated for the lucky folks enjoying themselves this week in Copenhagen) was about the long, gradual path to cloud computing. Lauren Horwitz captured this sentiment well in this VMworld Europe preview at SearchServerVirtualization.

And I totally agree: cloud computing is a long-term shift that will require time to absorb and come to terms with, while taking the form of public and private clouds – and even a hybrid mix of the two. How IT departments in the largest organizations will assess and incorporate these new, more dynamic ways of operating is still getting sorted out.

Mike Laverick, the virtualization expert and the RTFM blogger quoted in the SearchServerVirtualization story, said that “users shouldn’t be scared thinking that they have to deliver a cloud by Tuesday of next week. It’s a long-term operational model over a 5- or 10-year period.”

But what if you can’t wait that long to deliver?

What if you really do have to deliver a cloud (or at least an application) next Tuesday? And then what if that app is expected to handle drastic changes the following Thursday?

Despite the often-repeated “slow and steady” messages about IT infrastructure evolution, it’s worth remembering that there’s another choice, too. After all, cloud computing is all about choices, right? And cloud computing is about using IT to make an immediate impact on your business. While there is a time and place for slow, steady, incremental change, there’s also a very real need now and again to make a big leap forward.

There are times to throw incrementalism out the window. In those situations, you actually do have another choice: getting a private cloud set up in record time with a more turnkey-type approach so you can immediately deliver the application or service you’re being asked for.

Turnkey cloud platform trade-offs

A turnkey approach to a cloud platform (which is, in the spirit of full disclosure, the approach that our CA 3Tera AppLogic takes) can get you the speedy delivery times you’re looking for. Of course, the key to speed is having all of the complicating factors under your control. The 3Tera product does this by creating a virtual appliance out of the entire stack – from the infrastructure on up to the application – simplifying things immensely by turning normally complicated components into software-only versions that can be easily controlled.
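To make that “entire stack as one package” idea more concrete, here’s a minimal, purely illustrative sketch in Python. It is not the AppLogic API – every class, method, and template name below is invented for this example – but it shows the basic concept: describe all of the software-only components of a stack in one definition, then deploy or tear down that definition as a single unit.

# Hypothetical sketch of the "whole stack as one deployable package" idea.
# None of these names come from CA 3Tera AppLogic; they are invented here
# purely to illustrate the concept.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Component:
    """One software-only piece of the stack (load balancer, web tier, database, ...)."""
    name: str
    image: str          # a template/appliance image identifier (made up for the example)
    cpu: int = 1
    memory_gb: int = 2


@dataclass
class ApplianceTemplate:
    """The entire application stack, described and managed as one unit."""
    name: str
    components: List[Component] = field(default_factory=list)

    def deploy(self) -> None:
        # A real platform would provision every component together; here we just print.
        print(f"Deploying appliance '{self.name}' with {len(self.components)} components:")
        for c in self.components:
            print(f"  - {c.name}: {c.image} ({c.cpu} vCPU, {c.memory_gb} GB)")

    def destroy(self) -> None:
        print(f"Tearing down appliance '{self.name}' as a single unit.")


if __name__ == "__main__":
    portal = ApplianceTemplate(
        name="customer-portal",
        components=[
            Component("load-balancer", "lb-template"),
            Component("web-server", "apache-template", cpu=2, memory_gb=4),
            Component("database", "mysql-template", cpu=4, memory_gb=8),
        ],
    )
    portal.deploy()    # the whole stack comes up together...
    portal.destroy()   # ...and goes away together

The point of the sketch is simply that the unit you reason about is the whole stack, not the individual servers underneath it.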

A turnkey cloud is probably best suited for situations where the quick delivery of the application or service is more critical than following existing infrastructure and operations procedures. After all, those procedures are the things that are keeping you from delivering in the first place. So there’s your trade-off: you can give yourself the ability to solve some of the messier infrastructure problems by changing some of the rules. And processes. The good news (for CA 3Tera AppLogic customers, anyway) is that even when you do break a few rules there’s a nice way to link everything back to your existing environment as that becomes important (and, eventually, required).

Create a real-world example of what your data center is evolving into

For these types of (often greenfield) situations, a turnkey private cloud platform gives you the chance to set up a small private cloud environment that delivers exactly what you’re looking for at the moment, which is the critical thing. It also prepares you to handle dramatic changes in requirements as conditions shift.

But beyond solving the immediate crisis (and preparing you for the next one), there’s a hidden benefit that shouldn’t be overlooked: you get experience working with your apps and infrastructure in this future, more cloud-like state. You get firsthand knowledge of the type of environment you’d like your entire data center to evolve into over the next 5-10 years. You’ll find out the things that work well. You’ll find out what the pitfalls are. Most of all, you’ll get comfortable with things working differently and learn the implications of that. That’s useful for the technology, but also for the previously mentioned processes, and, most of all, for the IT people involved.

So, when is that “big leap” appropriate?

The Forrester research by James Staten that I’ve referred to in previous posts talks about the different paths for moving toward a cloud-based environment – one more incremental and the other more turnkey. I’m betting you’ll know when you’re in the scenarios appropriate for each.

Service providers (including some of those working with us that I mentioned in my last post) have been living in this world for a few years now. They are very obviously eager to try new approaches. They are looking for real, game-changing cloud platform solutions upon which they can build their next decade of business. And the competitive pressures on them are enormous.

If you’re in an end-user IT organization that’s facing an “uh oh” moment – where you know what you want to get done, but you don’t really see how you can get there from here – it’s probably worth exploring this leap. At least in a small corner of your environment.

After all, Tuesday’s just around the corner.

Monday, October 4, 2010

New CA 3Tera release: Extending cloud innovations while answering enterprise & MSP requirements

Today CA Technologies announced the first GA release of CA 3Tera AppLogic.

Now, obviously, it’s not the first release of the 3Tera product. That software, which builds and runs public and private clouds, is well known in its space and has been in the market for several years now. In fact, CA 3Tera AppLogic has 80+ customers spread across 4 continents and has somewhere in the neighborhood of 400 clouds currently in use, some of which have been running continuously for years. I mention a few of the service provider customers in particular at the end of this post.

However, this is the first GA release of the product since 3Tera was acquired back in the spring – the first release under the CA Technologies banner. A lot of acquisitions in this industry never make it this far. From my perspective, this one seems to be working for a few reasons:

Customers need different approaches to cloud computing.

We’re finding that customer approaches to adopting some form of cloud computing – especially for private clouds – usually take one of two paths. I talked about this in a previous post: one path involves incrementally building on your existing virtualization and automation efforts in an attempt to make your data center infrastructure much more dynamic. That one is more evolutionary. The second path is much more revolutionary. Customers take that path when competitive pressure is high or the financial squeeze is on. They are looking for a more turnkey cloud approach, one that abstracts away a lot of the technical and process complexity – a solution that gets them to cloud fast.

That last point is the key. A more turnkey approach like CA 3Tera AppLogic is appealing when the delivery time or cost structure of the slower, steadier “evolutionary” approach would jeopardize success. That faster path is especially appropriate for managed service providers these days, given the rate of change and turbulence in the market (and the product’s partner & customer list proves that).

Now, having this more “revolutionary” approach in our portfolio alongside things like CA Spectrum Automation Manager lets us talk with customers about both paths. We get to ask what customers want, and then work with them on whichever approach makes sense for their current project. Truth is, they may take one path for one project, and another path for the next. And, as Forrester’s James Staten mentioned in a webcast on CIO.com about private clouds (that CA sponsored) last week, quickly getting at least some part of your environment to its new (cloudy) desired state lets you learn quite a bit that applies to your IT environment as a whole.

The 3Tera cloud platform innovations remain, well…innovative.

The other reason (I think) that you’re seeing progress from the combination of CA Technologies and 3Tera is actually pretty basic: even after all this time, no one else is doing what this platform can do. The CA 3Tera AppLogic cloud platform takes an application-centric approach. This means IT can shift its time, effort, and attention away from the detailed technology components and instead focus on what’s important – the business service it is trying to deliver.

This application focus is different. Most of what you see in the market doesn’t look at things this way. It either builds incrementally on the way IT has been managing and improving virtualization, or it focuses at a level much lower than the application. Or both.

Plus, if you’ve ever seen a demo of CA 3Tera AppLogic, you’ll also be struck by the simplicity of the platform’s highly visual interface. You draw up the application and infrastructure you want to deploy, and it packages it all up as a virtual appliance. That virtual package can then be deployed or destroyed as appropriate. When you need something, you just draw it in. Need more server resources or load balancers? Drag them in from your palette and drop them onto the canvas. (Here’s a light-hearted video that explains all this pretty simply in a couple of minutes.)
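For readers who prefer code to canvases, here is a second purely hypothetical sketch (again, not the actual AppLogic interface – the PALETTE dictionary and the function names are made up for illustration) of what that palette-and-canvas workflow amounts to: keep a catalog of reusable component definitions, compose an application from them, and redeploy the whole thing when you need more capacity.

# Hypothetical palette-and-canvas sketch; all names are invented for illustration.

# Palette: reusable building blocks you can "drag" from.
PALETTE = {
    "web-server":    {"image": "apache-template", "cpu": 2},
    "load-balancer": {"image": "lb-template",     "cpu": 1},
    "database":      {"image": "mysql-template",  "cpu": 4},
}


def add_to_canvas(canvas: dict, component: str, instance_name: str) -> None:
    """Copy a component definition from the palette onto the application canvas."""
    canvas["components"][instance_name] = dict(PALETTE[component])


def deploy(canvas: dict) -> None:
    """Stand up (or refresh) the composed application as one package."""
    print(f"Deploying '{canvas['name']}':")
    for name, spec in canvas["components"].items():
        print(f"  {name}: {spec['image']} ({spec['cpu']} vCPU)")


if __name__ == "__main__":
    portal = {"name": "customer-portal", "components": {}}
    add_to_canvas(portal, "load-balancer", "lb-1")
    add_to_canvas(portal, "web-server", "web-1")
    add_to_canvas(portal, "database", "db-1")
    deploy(portal)

    # Need more capacity? "Drag in" another web server and redeploy.
    add_to_canvas(portal, "web-server", "web-2")
    deploy(portal)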

This technology was ahead of its time when it first came out; keeping it out in front years later is really important, and a big focus for CA Technologies. The market is changing rapidly; we have to continue to lead the charge for customers.

Finding a way to meld innovation with extended enterprise-level & MSP requirements

I think it’s really good news that many of the new capabilities that CA Technologies is bringing to market with CA 3Tera AppLogic 2.9 are about making the product even more suitable as the backbone of an enterprise’s private cloud environment, or the underpinnings of a service provider’s cloud offerings. Things like high availability, improved networking, and new web services APIs should make a lot of the bigger organizations very happy.

Since the product became part of CA Technologies, we’ve already seen increased eagerness among large enterprises to hear about what CA 3Tera AppLogic is and what it can do. Maybe it’s the continued rough economic waters, but the big name standing behind 3Tera now goes a long way, from what I’ve heard and seen of customer reaction. Let’s just say, the guys out talking about this stuff are busy.

There are customers doing this today. Lots of them.

The true test of whether the timing is right, the technology is right, and the value proposition makes sense is, well, whether people are buying and then really using your solution. 3Tera started out seeing success with smaller MSPs. That success continues. Enterprises have moved more slowly, but leading-edge folks are making moves now. The enterprise use cases are very interesting, actually. Anything that involves spiky workloads, where standardized stacks need to be set up and torn down quickly, is ideal. I’m going to try to get some interviews for the blog with a couple of these folks when they are ready to talk.

There are some excellent MSPs and other service providers, on the other hand, that are out there offering CA 3Tera AppLogic-based services today and shaking things up.

“Gone are the days of toiling in the depths of the data center,” said David Corriveau, CTO of Radix Cloud, “where we’d have to waste the efforts of smart IT pros to manually configure every piece of hardware for each new customer.” Now Radix Cloud centralizes their technical expertise in a single location with the CA 3Tera AppLogic product, which they say improves security and reliability.

I talked to Mike Michalik, CEO of Cirrhus9, whose team uses the product to quickly stand up new applications and websites for clients; robustness and product maturity have been key ingredients in their success.

Berlin-based ScaleUp Technologies is also knee-deep in delivering cloud services to customers. CEO Kevin Dykes (whose offices my summer vacation plans actually let me visit in person) said that a turnkey cloud platform means they, as a service provider, have a strong platform to start from, but can still offer differentiated services. “Our customers directly benefit, too,” said Dykes. “They are able to focus on rapid business innovation, quickly and easily moving applications from dev/test to production, or adding capacity during spikes in demand.”

Jonathan Davis, CTO of DNS Europe, will be co-presenting with my colleague Matt Richards at VMworld Europe in Copenhagen, talking about how he’s been able to change the way they do business. “Very quickly,” said Davis, “we have been able to ramp up new revenue streams and win new customers with even higher expectations and more complex needs.”

I find the user stories to be some of the most interesting pieces of this – and one of the things that is so hard to find in the current industry dialog about cloud computing. So, now that we have CA 3Tera AppLogic announced, I’ll be interviewing a few of these folks to hear firsthand what’s working – and what’s not – in their view of cloud computing. Check back here for more details.

Thursday, August 26, 2010

Back to school -- for the cloud? Try not to forget the multiple paths for virtualization & cloud

Summer vacation is really a bad idea.

At least, that’s what TIME Magazine reported a few weeks back. Despite our glorified, nostalgic memories of endless hours on the tire swing above the old swimming hole (or, more likely, trying to find towel space on a lounge chair by the gym’s overcrowded pool), apparently kids forget stuff when they aren’t in school.

So, now that everyone’s headed back to the classroom and hitting the books again, they’ve got to jog their memories on how this learning stuff worked.

Luckily, as working adults who think about esoteric IT topics like virtualizing servers and actually planning cloud computing roll-outs, we can say this is never an issue. Right? Anyone? Bueller? Bueller?

However, with VMworld imminent and people returning from vacations, it’s a good time to reiterate what I’ve been hearing from customers and others in the industry about how this journey around virtualization and cloud computing goes.

Some highlights (take notes if you’d like; there might be a short quiz next period):

Rely on the scientific method. You’re going to hear lots of announcements at VMworld next week. (In fact, many folks jumped the gun and lobbed their news into the market this week.) In any case, be a good student and diligently take notes. But then you should probably rely a bit on the scientific method. And question authority. Know what you need – or at least what you think you need – to accomplish your business goal. Look at any and all vendor announcements through that lens. You’ll probably be able to eliminate about two-thirds of what you hear next week from VMware and all its partners (and, of course, I realize that probably includes us at CA Technologies, too). But that last third is worth a closer look. And some serious questions and investigation.

The answers aren’t simply listed in the back of your textbook. Meaning what? Well, here's one thing for starters: just because you’re knee-deep in virtualization doesn’t mean you’re automagically perfectly set up for cloud computing. Virtualization is certainly a key technology that can be really useful in cloud deployments, but as I've noted here before, it’s not sufficient all by itself. The NIST definition of cloud computing (and the one I use, frankly) doesn’t explicitly mention virtualization. Of course, you do need some smart way to pool your computing resources, and 15,000 VMworld attendees can’t be wrong…right? (Go here for my write-up on last year’s VMworld event.) But, just keep that in mind. There’s more to the story.

In fact, there may be more than one right answer. There isn’t one and only one path to cloud computing. My old BEA cohort Vittorio Viarengo had a piece in Forbes this week talking about virtualization as the pragmatic path to cloud. It can be. I guess it all depends on what that path is and where it goes. It just may not be ideally suited for your situation.

On the “path to cloud computing,” to borrow Vittorio’s term, there are two approaches we’ve heard from folks:

Evolution: No, Charles Darwin isn’t really a big cloud computing guru (despite the beard). But many companies are working through a step-by-step evolution to a more dynamic data center infrastructure. They work through consolidation & standardization using virtualization. They then build upon those efforts to optimize compute resources. As they progress, they automate more, and begin to rely on orchestration capabilities. The goal: a cloud-style environment inside their data center, or even one that is a hybrid of public and private. It’s a methodical evolution. This method maps to infrastructure maturity models that folks like Gartner talk about quite a bit.

Revolution: This is not something you studied in history class involving midnight rides and red coats. If organizations have the freedom (or, more likely, the pressure to deliver), they can look at a more holistic cloud platform approach that is more turnkey. It’s faster, and it skips or obviates a lot of the steps in the other approach by addressing the issues in completely different ways. The benefit? You (a service provider or an end-user organization) can get a cloud environment up and running in a matter of weeks. The downside? Many of the processes you’re used to will be, well, old school. You have to be OK with that.

Forrester’s James Staten explained ways to deliver internal clouds using either approach in his report about why orgs aren’t ready for internal clouds in the first place. Both the evolutionary and the revolutionary approaches are worthy of more detail in an additional post or two in the near future, I think. But the next logical question – how do you decide what approach to take? – leads to the next bit of useful advice I’ve heard:

When in doubt, pick ‘C’. Even customers picking a more evolutionary approach won’t have the luxury of a paint-by-numbers scenario. Bill Claybrook’s recent in-depth Computerworld article about the bumpy ride that awaits many trying to deliver private clouds underscores this. “Few, if any, companies go through all of the above steps/stages in parallel,” he writes. “In fact, there is no single ‘correct’ way to transition to a private cloud environment from a traditional data center.”

So, the answer may not only be a gradual evolution to cloud by way of increasing steps of virtualization, automation, and orchestration. And it may not only be a full-fledged revolution. Instead, you want to do what’s right for each situation. That means the co-existence of both approaches.

How do you decide? It’s probably a matter of time. Time-to-market, that is. In situations where you have the luxury of a longer, more methodical approach, the evolutionary path of extending your virtualization, automation, and standardization strategies is probably the right way to go. In situations where there is a willingness, eagerness, or, frankly, a need to break some glass to get things done, viva la revolution! (As you probably can guess, the CA 3Tera product falls into this latter category.)

Learn from the past. Where people have gotten stuck with things like virtualization, you’ll need to find ways around those sticking points. Sometimes tools will help – from folks like VMware themselves, or broader management tools from players like, oh, say, CA Technologies and a number of others. Sometimes that help will need to come in the form of experts. As I previously posted, we’ve just brought a few of these experts onboard with the 4Base Technologies acquisition, and I bet there will be a few consulting organizations in the crowd at VMworld. Just a hunch.

Back to Claybrook’s Computerworld article for a final thought: “[O]ne thing is very clear: If your IT organization is not willing to make the full investment for whatever part of its data center is transitioned to a private cloud, it will not have a cloud that exhibits agile provisioning, elasticity and lower costs per application.”

And that’s enough to ruin anyone’s summer vacation. See you at Moscone.

If you are attending VMworld 2010 and are interested in joining the San Francisco Cloud Club members for drinks and an informal get-together on Wednesday evening before INXS, go here to sign up.