Wednesday, May 25, 2011

Hurford of DNS Europe: service providers and SaaS developers are showing enterprises how cloud is done

The antidote to cloud computing hype is, well, reality. For example: talking to people who are in the middle of working on cloud computing right now. We’ve started a whole new section of the ca.com website that provides profiles, videos, and other details of people doing exactly that.


One of the people highlighted on this list of Cloud Luminaries and Cloud Accelerators is Stephen Hurford of DNS Europe. DNS Europe is a London-based cloud hosting business with over 500 customers across the region. They have been taking cloud-based business opportunities very seriously for several years now, and they provide cloud application hosting and development, hybrid cloud integration services, plus consulting to help customers make the move to cloud.


I convinced Hurford, who serves as the cloud services director for DNS Europe, to share a few thoughts about what customers and service providers are – and should be – doing right now, along with some smart strategies he’d suggest.


Jay Fry, Data Center Dialog: From your experiences, Stephen, how should companies think about and prepare for cloud computing? Is there something enterprises can learn from service providers like yourself?


Stephen Hurford, DNS Europe: Picking the right project for cloud is very important. Launching into this saying “we’re going to convert everything to the cloud” in a short period of time is almost doomed. There is a relatively steep learning curve with it all.


But this is an area of opportunity for all of the service providers that have been using CA 3Tera AppLogic [like DNS Europe] for the last three years. We’re in a unique position to be able to help enterprises bypass the pitfalls that we had to climb out of.


Because the hosting model has become a much more accepted approach in general, enterprises are starting to look much more to service providers. They’re not necessarily looking to service providers to host their stuff for them, but to teach them about hosting, because that’s what their internal IT departments are becoming – hosting companies.


DCD: What is the physical infrastructure you use to deliver your services?

Stephen Hurford: We don’t have any data centers at all. We are a CSP that doesn’t believe in owning physical infrastructure apart from the rack inwards. We host with reputable Tier 3 partners like Level3, Telenor, and Interxion, but it means that we don’t care where the facility is. We can deploy a private cloud for a customer of ours within a data center anywhere in the world and with any provider.


DCD: When people talk about cloud, the public cloud is usually their first thought. There are some big providers in this market. How does the public cloud market look from your perspective as a smaller service provider? Is there a way for you to differentiate what you can do for a customer?


Stephen Hurford: The public cloud space is highly competitive – if you look at Amazon, GoGrid, and Rackspace, the question is how you can compete in that market space as a small service provider. It’s almost impossible to compete on price, so don’t even try.


But, one thing that we have that Amazon, Rackspace, and GoGrid do not have is an up-sell product – they cannot take their customers from a public cloud to a private cloud product. So when their customers reach a point where they say, “Well, hang on, I want control of the infrastructure,” that’s not what you get from Amazon, Rackspace, and GoGrid. From those guys you get an infrastructure that’s controlled by the provider. Because we use CA 3Tera AppLogic, the customer gets control, whether the cloud is hosted by a service provider or run internally.


DCD: My CA Technologies colleague Matt Richards has been blogging a bit about smart ways for MSPs to compete and win with so much disruption going on. Where do you recommend a service provider start if they want to get into the cloud services business today?


Stephen Hurford: My advice to service providers who are starting up is to begin with niche market targeting. Pick a specific service or an application or a target market and become very good at offering and supporting that.


We recommend starting at the top, by providing SaaS. SaaS is relatively straightforward to get up and running on AppLogic if you choose the right software. The templates already exist – they are in the catalog, they are available from other service providers, and they will soon be available in a marketplace of applications. Delivering SaaS offerings carries the lowest technical and learning overhead, and that’s why we recommend it.
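
To make the template idea concrete, here is a minimal sketch of what catalog-driven SaaS provisioning could look like. The catalog contents and the deploy_from_template function are invented for illustration – this is not the actual CA 3Tera AppLogic API.

```python
# Hypothetical sketch of catalog-driven SaaS provisioning. The catalog
# entries and this API are invented for illustration; they are not the
# actual CA 3Tera AppLogic interface.

CATALOG = {
    # template name -> the tiers packaged inside that template
    "lamp-crm": ["load_balancer", "web_server", "mysql"],
    "team-wiki": ["web_server", "postgres"],
}

def deploy_from_template(template_name: str, customer: str) -> dict:
    """Instantiate a packaged application template for one customer."""
    if template_name not in CATALOG:
        raise KeyError(f"no template named {template_name!r} in the catalog")
    return {
        "customer": customer,
        "template": template_name,
        "tiers": list(CATALOG[template_name]),
        "status": "provisioned",
    }

print(deploy_from_template("lamp-crm", "acme-ltd"))
```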


DCD: Looking at your customer base, who is taking the best advantage of cloud platform capabilities right now? Is the area you just mentioned – SaaS – where you are seeing a lot of movement?

Stephen Hurford: Yes. The people who have found us and who “get it” are the SaaS developers. In fact, 90% of our customers are small- to medium-sized companies that provide SaaS to enterprises and government sectors. It’s starting to be an interesting twist in the tale: these SaaS providers are starting to show enterprises how it’s done. They are the ones figuring out how to offer services. The enterprises are starting to think, “Well, if these SaaS providers can offer services based on AppLogic, why can’t I build my own AppLogic cloud?” That’s become our lead channel into the enterprise market.


DCD: How do service providers deal with disruptive technologies like cloud computing?

Stephen Hurford: From a service provider perspective, it’s simple: first, understand the future before it gets here. Next, push to the front if you can. Then work like crazy to drive it forward.


Cloud is a hugely disruptive technology, but without our low-cost resources we could not be as far forward as we are.


One of the fundamentally revolutionary aspects of AppLogic is that I only need thousand-dollar boxes to run this stuff on. And if I need more capacity, I don’t need to buy a $10,000 box. I only need to buy four $1,000 boxes. I have a grid and it’s running on commodity servers. That is where AppLogic stood out from everything else.
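
The arithmetic behind that point is easy to check. Here is a back-of-the-envelope sketch; the prices are placeholders for illustration, not quoted figures.

```python
# Back-of-the-envelope comparison of scaling up vs. scaling out.
# All prices are placeholders for illustration.

big_box_cost = 10_000    # one large server, dollars
commodity_cost = 1_000   # one commodity node, dollars

# Growing a grid in $1,000 increments instead of $10,000 jumps means
# capacity can track demand far more closely.
nodes_needed = 4
scale_out_cost = nodes_needed * commodity_cost  # $4,000
print(f"scale-out: ${scale_out_cost:,} vs. scale-up: ${big_box_cost:,}")
```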


DCD: Can you explain a bit more about the economics of this and your approach to keeping your costs very low? It sounds like a big competitive weapon for you.

Stephen Hurford: One of the big advantages of AppLogic is that it has reduced our hardware stock levels by 75%. That’s because all cloud server nodes are more or less the same, so we can easily reuse a server from one customer to another, simply by reprovisioning it in a new cloud.


One of the key advantages we’ve found is that once hardware has reached the end of its workable life in an enterprise’s standard private cloud, it can very easily be repurposed into our public cloud, where there is less question of “well, exactly what hardware is it?” So we’ve found we can extend the workable lifespan of our hardware by 40-50%.
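
Here is a rough sense of what a 40-50% lifespan extension does to annualized hardware cost. The figures are invented for illustration, not DNS Europe’s actual numbers.

```python
# Rough amortization sketch: repurposing private-cloud nodes into the
# public cloud stretches their working life. All figures are invented.

server_cost = 1_000        # commodity node, dollars
private_life_years = 3.0   # assumed life in an enterprise private cloud
extension = 0.45           # the ~40-50% extra life quoted above

base_annual = server_cost / private_life_years
extended_annual = server_cost / (private_life_years * (1 + extension))
print(f"annualized cost per node: ${base_annual:.0f} -> ${extended_annual:.0f}")
```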


DCD: What types of applications do you see customers bringing to the cloud now? Are they starting with greenfield apps, since this would give them a clean slate? The decision enterprises make will certainly have an impact for service providers.

Stephen Hurford: Some enterprises are taking the approach of asking, “OK, how can I do my old stuff on a cloud? How do I do old stuff in a new way?” That’s one option for service providers – can I use the benefits of simplifying, reducing my stock levels, and reducing the management overhead of my entire system by moving to AppLogic? Then I won’t have 15 different types of servers and 15 different management teams. I can centralize it and get immediate benefits from that.


The other approach is to understand that this is a new platform with new capabilities, so I should look at what new stuff can I do with this platform. For service providers, it’s about finding a niche – being able to do something that works for a large proportion of your existing customers. Start there because those are the folks that know you and they know your brand. Think about what they are currently using in the cloud and whether they would rather do that with you.

DCD: Some believe that there is (or will soon be) a mad rush to switch all of IT to cloud computing; others (and I’m one of those) see a much more targeted shift. How do you see the adoption of cloud computing happening?


Stephen Hurford: There will always be customers who need dedicated servers. But those are customers who don’t have unpredictable growth and who need maximum performance. Those customers will get a much lower cost benefit from moving to the cloud.


For example, we were dealing with a company in Texas that wanted to move a gaming platform to the cloud. These are multi-player, shoot-‘em-up games. You host a game server that tracks the coordinates of every object in the game space in real time for 20 to 30 players and sends that data to the Xbox or PlayStation, which renders it so they can play in the same gamespace. If you tried to do that with standard commodity hardware, you wouldn’t get the needed performance on the disk I/O.
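
To see why that workload is demanding, here is a toy model of what such a game server does on every simulation tick. It is purely illustrative – invented names, nothing from the customer’s actual platform – but it shows the constant fan-out of world state that has to happen dozens of times per second.

```python
# Toy model of a real-time game server tick: track every player's
# position and queue one world snapshot per connected console.
# Purely illustrative; not the gaming customer's actual code.

players = {f"player{i}": {"x": 0.0, "y": 0.0, "z": 0.0} for i in range(30)}

def tick(world: dict) -> list:
    """One simulation step: snapshot the world, fan it out to clients."""
    snapshot = {name: dict(pos) for name, pos in world.items()}
    # Queue one outbound snapshot per connected console, every tick.
    return [snapshot for _ in world]

updates = tick(players)
# 30 consoles each receiving 30 object positions, dozens of times per
# second -- plus any state persisted to disk -- is where commodity
# disk I/O becomes the bottleneck.
print(len(updates), "snapshots queued this tick")
```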


The question to that customer was, “Do you have a fixed requirement? If you need 10 servers for a year and you’re not going to need to grow or shrink, don’t move to the cloud.” Dedicated hardware, however, is expensive. They said, “We don’t know what our requirements are and we need to be able to deploy new customers within 10 minutes.” I told them you don’t want to use dedicated servers for that, so you’re back to the cloud, but perhaps with a more tailored hardware solution such as SSD drives to optimize I/O performance.


DCD: So what do you think is the big change in approach here? What’s the change that a cloud platform like what you’re using for your customers is driving?

Stephen Hurford: Customers are saying it’s very easy with our platform to open up two browsers and move an entire application and infrastructure stack from New York to Tokyo. But that’s not enough.


Applications need to be nomadic. The full concept of nomadic applications is still way out in the future, but for me, what we’re able to offer today is a clear signpost for that future. Applications, together with their infrastructure, will eventually become completely separated from the hardware level and the hypervisor. My application can go anywhere in the world with its data and the OS it needs to run on. All it needs to do is plug into juice (hardware, CPU, RAM) – and I’ve got what I need.


Workloads like this will be able to know where they are. Minute by minute, they’ll be able to find the least-cost service provisioning. If you’ve got an application and it doesn’t really matter where it is and it’s easy to move it around, then you can take advantage of least-cost service provisioning within a wider territorial region.
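
A portable workload turns placement into a simple optimization problem. Here is a minimal sketch of least-cost provisioning under that assumption; the providers, regions, and prices are invented.

```python
# Minimal least-cost placement: if the application stack can run
# anywhere, just pick the cheapest offer inside a permitted territory.
# Providers, regions, and prices are invented for illustration.

offers = [
    {"provider": "provider-a", "region": "eu-west",    "price_per_hour": 0.12},
    {"provider": "provider-b", "region": "eu-central", "price_per_hour": 0.09},
    {"provider": "provider-c", "region": "us-east",    "price_per_hour": 0.07},
]

def cheapest(offers: list, allowed_regions: set) -> dict:
    """Pick the lowest-priced offer within the allowed territory."""
    candidates = [o for o in offers if o["region"] in allowed_regions]
    return min(candidates, key=lambda o: o["price_per_hour"])

# Keep the workload inside Europe, but let price decide exactly where.
print(cheapest(offers, {"eu-west", "eu-central"}))
```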


Thanks, Stephen, for the time and for sharing your perspectives. You can watch Stephen’s video interview and read more about DNS Europe here.

Tuesday, May 17, 2011

Uptime: Eeyore forever, or can cloud computing help close the facilities-IT gap?

I dropped by the Uptime Institute Symposium in Santa Clara last week. It was a chance to step outside of my IT-focused world for a moment and hear the discussions from the other side of the fence: the facilities folks. And, I have to say, the view was a bit different.

In general, I think it's safe to say that facilities is not yet comfortable with where the cloud computing conversation is taking them.

Cloud computing is a part of the conversation on both sides of the fence, but people are looking at the cloud from very different angles. IT, despite having its own running battle with business users about cloud, can at least see cloud as an opportunity. Facilities in many cases views it as a direct, job-endangering threat. And, while there were hints at alignment, there are definitely some ruffled feathers.

A history of disconnects: IT & facilities

But this shouldn't be a surprise. A few years back, before Cassatt Corp. was acquired by CA Technologies, those of us at Cassatt spent quite a bit of time and effort understanding the facilities world and working to connect what the IT guys were doing with what was going on in the facilities realm. At that point, cloud computing was barely even called that. But the beginnings were there. The prototype ideas that would become cloud computing were starting to find their way into the IT and data center efficiency conversations in some of the more forward-looking companies. (If you want some historical snapshots, check out the work of the Silicon Valley Leadership Group in this area, plus the early entries from this blog and those by Ken Oestreich, now cloud marketing guy at EMC).

One of the biggest issues we ran into again and again was a disconnect between IT and facilities. And, after a few days at the most recent Uptime Institute event, I think it’s safe to say that the rift is still there.

IT (at the urging of the business) is leading the cloud charge

To show how the two groups are still pretty far apart, I’ll highlight a couple of the presentations I heard at the event. Several 451 Group analysts had presentations throughout the week, providing the IT perspective. One was William Fellows’ rapid-fire survey of where things are with cloud today. His premise was that cloud is moving from the playground to production in both public and private cloud incarnations.

Fellows pointed out that providing cloud-enabling technologies for service providers was one of the hottest spaces at the moment – he’s tracking a list of some 80 vendors at this point. Demand is moving cloud from an “ad hoc developer activity to become a first-class citizen.” Production business applications are “creeping up in public cloud” because of the ability to flexibly scale.

Enterprise IT, said Fellows, “wants to unlock their inner service-provider selves. They want to use cloud as just another node in the system.” In other words, the IT guys are starting to make leaps forward in how they are sourcing IT service, and even in how they are thinking about the IT role.

But from the other Uptime sessions and discussions, these forward-looking glimpses seemed to be the exception, rather than the rule.

Facilities is grappling with cloud’s implications and feeling uneasy

Contrast this with the keynote from AOL’s Mike Manos. Mike spent a chunk of his stage time on a self-described rant about how facilities people were feeling left out – “a bit gloomy” even – when it comes to the cloud computing discussion.

Manos compared facilities folks to Eeyore, the mopey character from the Winnie the Pooh children’s books. That prompted a few knowing chuckles in the crowd. But despite a predisposition to getting bummed out when the topic comes up, “you can’t duck your head,” said Manos, when the discussion turns to cloud computing.

He pointed out that the things that a Google keynoter from earlier in the conference had mentioned were not revolutionary, despite all they have accomplished. In fact, “Google is asking us to do the things we’ve been talking about [at Uptime conferences] for the past 10 years.”

The advice from Manos was good – and assertive. Facilities should step aggressively into the conversation about cloud computing. Don’t worry that cloud might mean data centers are suddenly going to disappear and you might lose your job. It won’t mean that, especially not if you play your cards right. Instead of dreading cloud, figure out how to be part of (or even lead) the business decisions.

“No matter what, you’re going to have a hybrid model” in which data centers from external cloud providers will provide some of your IT service, and your own data centers will provide some as well. And, once you’re in that situation, “you’re going to have to manage it,” Manos said.

Now, there is a big list of things the facilities guys will need to get going on before they can take this head-on. Manos listed things as basic as “knowing what you have” in your data center and what it’s doing, as well as things that aren’t normally taken into account, including “soft costs you hardly ever capture.”

The cloud computing challenge for facilities

The ironic thing in all this is that the big cloud providers are given lots of kudos for their IT operations and their ability to enable IT service to aggressively support their business. One of the reasons that Google, Amazon, and others have gotten good at IT service delivery is, in fact, that they are good at the facilities side of things, too. Their facilities teams are integral to their success. So, folks, it’s possible.

Manos left his audience with a challenge – a challenge to jump into the cloud computing conversation with both feet. It means an investment to get applications ready for what happens when infrastructure fails (which it does) and to understand the operational impact of moving to the cloud (which is too often overlooked). It means acknowledging that a move to the cloud requires a clearer understanding of the connection between how applications are architected and how data center facilities are run. Or at least an understanding of what you need to know when computing begins to happen both inside and outside your physical premises.

So, maybe cloud can actually help bridge the IT world and the facilities world. To some of us who have watched these two worlds dance around each other for a while, it’s been a long time coming. And, for sure, it’s not here yet. But Manos and others, in conjunction with the pressures facilities people are feeling from their business discussions about cloud computing, might just be providing the nudge they need.

Or, at the very least, a great nickname.

Sunday, May 15, 2011

Is a revolutionary, greenfield approach to cloud The Ultimate Answer? (Or is it still 42?)

If you were looking for the Ultimate Answer to Life, the Universe, and Everything, chances are cloud computing is not high on your list of things to worry about. You’re probably more interested in downing your Pan-Galactic Gargleblaster and getting on with things.

However, my CA Technologies colleague Andi Mann (@AndiMann on Twitter) and I used our recent Cloud Slam presentation to try to provide some straightforward advice on different approaches enterprises and service providers can take to move to a cloud-based infrastructure – while paying tribute to Douglas Adams and his Hitchhiker’s Guide to the Galaxy series in the process. The result? "The Hitchhiker's Guide to Cloud Computing: Tips for Navigating the Evolutionary and Revolutionary Paths to Cloud."


Folks willing to wade through the egregious puns and overstretched sci-fi references got a view of two different cloud computing approaches – one more evolutionary and one quite revolutionary – that customers are taking. (I touched on this topic in a previous blog post myself a few months back.) For those that missed Cloud Slam, Andi posted his portion of the presentation – the pros and cons of the more evolutionary approach – at his blog. I’m doing the same here, using this post to highlight what the revolutionary approach should get you thinking about.

Sometimes Marvin’s right and it’s easier to just start over

In looking at the pile of technology and processes that most IT shops are dealing with on a daily basis, the idea of building on top of what exists to slowly evolve your way to a more cloud-like environment sounds like a lot of work and a lot of complexity. Why?

• Your existing IT investments bog down new things
• New technologies don’t always fit easily in existing orgs
• Those existing IT processes can restrict your range of innovation
• IT organization politics & culture can put up some impressive resistance

Yes, the roadblocks seem big enough to depress even Marvin the Paranoid Android.

A ‘probable’ option: a turnkey cloud platform


One way of getting to the cloud through the logjam is to kick your Infinite Improbability Drive into high gear. A more probable way? Use a turnkey cloud platform that picks up where server virtualization leaves off.


Virtualization breaks the chains between the hardware resources and the applications, but it is still weighed down by networking and storage concerns. A more revolutionary approach is to set up a pool of computing resources and then create a virtual business service to run on top of it. A virtual business service is a multi-tier application and its infrastructure packaged together as a single object that can be moved, scaled, and replicated as needed. This approach allows you to skip right past many of the drawbacks both of virtualization and of evolving your existing systems toward a cloud architecture. For example, load balancers, SANs, and switches become virtual, programmable items.
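
One way to picture a virtual business service is as a single, serializable object that carries its whole multi-tier topology with it, so that moving it is just rebinding it to different hardware. Here is a minimal sketch under that assumption – the structure and field names are mine, not AppLogic’s actual format.

```python
# Sketch of a "virtual business service": a multi-tier application and
# its infrastructure described as one movable object. The schema is
# invented for illustration; it is not AppLogic's actual format.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VirtualBusinessService:
    name: str
    tiers: tuple      # e.g. virtual load balancer -> web farm -> database
    datacenter: str   # where the object is currently running

    def moved_to(self, new_datacenter: str) -> "VirtualBusinessService":
        """Relocating the service is just rebinding it to new hardware."""
        return replace(self, datacenter=new_datacenter)

crm = VirtualBusinessService(
    name="crm",
    tiers=("virtual_lb", "web_farm", "mysql", "virtual_switch"),
    datacenter="london-1",
)
print(crm.moved_to("tokyo-2"))
```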

Why use a revolutionary approach?


This revolutionary approach to cloud is an economic and agility game-changer for IT, letting you go far beyond what simple server virtualization delivers. It sets up huge potential improvements in the speed and operational expense of managing your applications and infrastructure – I can point you to at least one product in this space where you can literally draw what you need. This approach lets IT enter into conversations about your org’s business needs – conversations it wouldn’t have been in before. Service providers can use this approach to deliver cloud offerings faster, while building margin at the same time. Enterprises can make themselves ready to move apps to and from multiple different service providers.

In a world in which IT has been a bit timid about taking a stand on cloud, this approach gives IT a position of strength to deliver what the business is asking for. It’s not a bad way to come out looking like a hyper-intelligent, pan-dimensional being.

What the revolutionary approach looks like in the real world

At Cloud Slam, I talked about two customers taking this approach today that I know about from our work here at CA Technologies.

PGi: PGi turned to the cloud to help it roll out new advanced meeting, conferencing, and collaboration services worldwide at a much faster pace and, as it turned out, a far lower price point than it could otherwise. As PGi enters new markets, it needs to quickly secure scalable data center resources nearby to ensure the best service for its new customers. Building a new data center is extremely expensive and time-consuming – typically about 18 months. Instead, PGi subscribes to data center capacity from a set of different service providers.

PGi uses CA 3Tera AppLogic to create a movable “infrastructure stack” supporting the services it creates. This infrastructure stack consists of all of the servers, firewalls, storage, and other components uniquely configured to support a given service, and it can be moved with the click of a button from one service provider’s network to another. This way, PGi can choose the provider with the right price points and geographic coverage for each new market it enters. It can even easily move workloads between its own data centers and service providers’ networks.

PGi’s Chief Technology Officer David Guthrie (who has a video clip up on CA.com talking about all this) says that “using 3Tera, we’ve been able to spin up new virtual data centers to support 5 new locations for what it would have cost us to build one physical data center.” PGi’s time-to-market with new services has improved significantly. Previously, it would take up to 6 months to purchase new hardware resources and 4-6 weeks to deploy a new software application. Today, it takes between 2 and 5 days to complete these processes from start to finish.

ScaleMatrix: My second example was a new service provider customer of ours that I profiled in a recent interview. ScaleMatrix is a brand-new business that started from scratch last year. And, they’re already selling cloud services to their clients. By definition, that means no legacy systems. Instead they have taken this revolutionary approach to heart. They used the CA 3Tera AppLogic software and custom-designed hardware to get their offerings off the ground. Fast. If you want more details about what ScaleMatrix has been able to do, check out their profile here.

What are the challenges to taking this revolutionary path?

All is not simple with this revolutionary path to cloud, though.

You’ll break a lot of glass. You’re going to have to be ready to do a lot of learning about both the approach and a new use of technology. “Antibodies” from inside and outside IT will appear and resist this, often because it is such a different way to look at things. And if you do take advantage of being able to move virtual business services among different providers, vendor management will take more of your time than it has before.

And, of course, fewer people have taken this path. The risks are certainly higher, but so are the rewards. Picture yourself as Arthur Dent, complete with his hitchhiking towel, his guidebook, and hopefully a whole lot more luck.

The good news…getting your organization to cloud fast

This revolutionary approach means you won’t have to wait around for generations for the Ultimate Answer, or even to see the initial benefits of the move to cloud. You jump right to the end state. You get application portability, mobility, and replication as part of the deal, plus some other very useful things for free, like standard application images, DR, and security.

A word of warning, however: to hear some people tell it, you’re not going to want to go back to doing things the old way.

Which path should I choose?

How do you translate all of this commentary (and what Andi gave) into real action? We provided a bit of a virtual Babel Fish at the end of our Cloud Slam presentation to help you figure out whether to take Andi’s more evolutionary approach or the revolutionary one I described here.

Here are a couple of situations where you’d rather take the more revolutionary approach:

You could be a service provider that needs:
• To deliver a cloud offering now
• To get to market with new offerings
• To deliver multiple offerings (like IaaS and SaaS) while maintaining margin

Or, you could be an enterprise that:

• Wants an Amazon EC2-like infrastructure internally and can build up a greenfield infrastructure. To get the benefits, you need to control the components, have standardized (x86) hardware, have a spiky usage profile, and be ready for a new development model
• Is out of time. Maybe a competitor has made a move you must counter, or maybe you’re just trying to deliver something to respond quickly to a new business initiative. Either way, you’ve figured out that the old way won’t work.

Either way – evolutionary or revolutionary – the answer that Andi and I gave in our 42-slide deck (and no, Douglas Adams fans, that wasn’t a coincidence) is “Don’t Panic.” Both are appropriate approaches in particular situations. In fact, most organizations will probably end up pursuing both. You’ll notice that the use cases that both of us gave are not mutually exclusive. Spend the time to think it through.

Hopefully this recap, alongside Andi’s, gives you some useful advice for having those Deep Thoughts. And, you’ll be happy to hear that gratuitous Hitchhiker’s Guide references are, well, “mostly harmless.”

Unlike Vogon poetry.