
Wednesday, March 14, 2012

The new iPad -- with an enterprise twist

Last week’s “new iPad” announcement elicited the breathless attention of the industry, as expected.

One angle that didn’t get much play from Apple itself, however, was the impact of this new device on the IT departments of large enterprises. But fear not. Others noticed the oversight. For example, salesforce.com CEO Marc Benioff complained on Twitter that Apple missed a trick in not recounting the iPad’s inroads in the enterprise. Last year’s iPad 2 launch, he tweeted, “was enterprise friendly,” implying this one was, well, not.

In fact, many industry commentators took a shot at what Apple’s new announcement will mean for the enterprise. The sheer quantity of commentary tells me that the enterprise impact of this device, uncertain even 18 months ago, is pretty much guaranteed. The consumerization of IT is alive and well. Or at least it’s what people want to talk about.

Different ways that enterprises and iPads mix

The enterprise-related comments about the new iPad announcement fell into a few categories. First, commentators recounted where they thought enterprises were on the adoption curve for iPads -- and tablets in general. Second, they reviewed what the popularity of the iPad means for IT and the systems those folks must tirelessly maintain every day. Third, there were a few comments on how the enterprise might respond to all this. Here are some highlights I thought were worth repeating:

CIOs want to move to the tablet…now

Ted Schadler of Forrester believes things are moving aggressively, based on the 100 or so inquiries from CIOs he has taken in the past 6 months. So what has been the most common question they’ve asked? “How do I get business applications onto the tablet?” Our field folks here at Framehawk are hearing the same kinds of questions.

Ted also recounted all the business software that’s making a move to the iPad, though most of his examples are productivity-style apps that you’d find on a PC. The customers we are talking to are certainly interested in those, but they are also looking for ways to use the more business-critical applications with tablets as well.

“Post-PC”

Schadler also imagined a new question that the newest iPad will bring to the forefront: which employees get tablets instead of PCs? Eric Lai of SAP/Sybase called out the “post-PC-ness” of the new device in his CNET write-up. In fact, Wired called the “post-PC revolution” that Tim Cook talked about “fighting words.” What does “post-PC” mean, specifically? Tim Carmody of Wired said it this way: it means that “computing time and attention [by consumers and the IT department, frankly] shifts to phones and tablets and television screens, among others. And the traditional PC becomes a more specialized device for particular tasks.”

The new iPad’s improvements will impact existing IT systems

But even before that shift happens, there's a chance that everyone in the enterprise is going to feel the impact of the new iPad -- and not in a good way. A Computerworld story by Matt Hamblen noted that people bringing the new iPad to the office (sanctioned or otherwise) might actually end up causing a huge network crunch.

Hamblen reported that employees trying to avoid high personal mobile data charges for downloading HD movies and the like might simply do those downloads at the office instead, straining corporate Wi-Fi networks. He also pointed to what might happen if many people try to grab iOS or app updates at the same time. “The Wi-Fi download burden on corporate networks could be severe,” said the experts Hamblen interviewed.
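To make “severe” a little more concrete, here’s a rough back-of-the-envelope calculation -- a minimal Python sketch with numbers I made up for illustration (the device count, update size, and link speed are my assumptions, not figures from Hamblen’s reporting):

```python
# Rough illustration: how long could a wave of simultaneous tablet updates
# tie up a shared office connection? All numbers are illustrative assumptions.

devices = 200               # assumed number of devices updating at once
update_size_gb = 0.8        # assumed size of one OS update, in gigabytes
office_link_mbps = 100      # assumed shared office Internet link, in megabits/sec

total_gigabits = devices * update_size_gb * 8                     # data to move
hours_busy = (total_gigabits * 1000) / office_link_mbps / 3600    # time at full link speed

print(f"{devices} devices x {update_size_gb} GB each = {total_gigabits:.0f} Gb to download")
print(f"At {office_link_mbps} Mbps shared, that occupies the link for ~{hours_busy:.1f} hours")
```

Even with forgiving assumptions, a single popular update hitting a couple hundred devices at once can monopolize an office link for hours -- which is exactly the crunch the experts were warning about.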

Will the enhanced processing power change what’s ported to an iPad?

One interesting thread in the commentary concerned the new iPad’s specs: the enhanced processing power could be a boon for running big applications on the iPad. One of the users interviewed by John Cox in a separate Network World article said that "4G, the new processor speed, and improved screen resolutions will allow IT to port more backend applications like Oracle and Siebel to iPad."

In reality, it’s not only the processor that’s holding this back. It’s the development effort it takes to rewrite things. So while the new iPad’s souped-up specs are welcome improvements, don’t expect them to mean SAP will suddenly be ported to your iPad. At least, not with this traditional approach. (Now, if you’re interested in some alternative approaches, I know some people to talk to.)

Cheap and cheerful for enterprises: iPad 2 ROI

Finally, Cox of Network World, Geoff Simon of Technorati, and several other folks commented that the lower price of the iPad 2 might be just the ticket for enterprise IT departments. “Starting at just $399, the iPad 2 with 16GB is perfect as an enterprise-level business tool," says Simon. "For enterprise, the promise of skyrocketing ROI is what makes the iPad so irresistible.” Simon believes enterprises will start with productivity and process management tools, eventually moving toward business intelligence capabilities.

Leaving competitors in the dust?

Is this demand the same for all tablets? Schadler of Forrester and a number of data points say no: the other devices aren’t getting the same uptake or interest. Carmody of Wired reported that in his interview Schadler wasn’t “quite so bullish on other tablets” (including forthcoming Windows 8-related efforts), given Apple’s head start and the consumer-driven preferences of people selecting their own devices (read: Apple).

The projects that we’re working on at Framehawk seem to match this thinking: iPad projects are under consideration first, everything else after that. (I posted some relative adoption stats in a previous blog if you’re interested.) A round-up of analyst reports from Apple’s announcement continues the lovefest – with general agreement that Apple has lapped its competitors.

Either way, the era of tablets (which Apple ushered in in the first place) is certainly being accelerated by the newly announced iPad. And despite little commentary from Apple, that impact is definitely stretching far into the enterprise.

Monday, February 13, 2012

Catching up with some crazy mobile and tablet adoption stats for the enterprise

As I’ve been settling in to my gig at Framehawk, I’ve continued to watch the cloud computing market closely (hello to everyone at Cloud Connect in Santa Clara this week). However, I’ve also been expanding that focus to mobility and how it relates to the enterprise market.

I’m finding that a barrage of new adoption stats comes out almost daily. They are worth noting because this market is so new. And, even for those immersed in it day-to-day, they are pretty crazy.

I made note of a bunch of data points about mobile devices and the enterprise from my trip to CES a few weeks back and from articles in various trade journals as they have appeared recently. We’ve tweeted a few of them from @Framehawk; I thought I’d share a few of them here as well. Any conversation about the consumerization of IT or an enterprise’s “bring your own device” (BYOD) policy is going to start from (or certainly be colored by) these stats.

Frankly, lots of tablets

For something that didn’t exist only a few years ago, the tablet market has been downright explosive. It even pulled me in last year. According to research from the DisplaySearch division of NPD, I bought one of the 72.7 million tablets sold in 2011. That was a 252% increase over 2010 -- the year the iPad made its debut.

And the growth isn’t showing any signs of stopping. Tablet sales are expected to grow at roughly 38.8% annually, reaching approximately 248.6 million units by 2015, according to predictions from Transparency Market Research.

In the enterprise (the market we’re tracking closely at Framehawk), there is currently 1 tablet request for every 3 smartphones, says a Cisco survey.

It’s (currently) all about the iPad in the enterprise

From what the numbers say, and what we at Framehawk are hearing from customers, the enterprise market is currently all about the iPad. There are other devices that are being talked about, but only as a distant second choice. At least, for now.

However, just before CES, Sarah Rotman Epps of Forrester quoted numbers showing that the percentage of U.S. shoppers who preferred Android jumped from 9% to 18% in the first nine months of 2011. During that same period, the percentage who would actually prefer Windows dropped from 46% to 25% (still placing it higher than the numbers for Android).

Of course, as new form factors appear, and the initial “cool factor” of the iPad wears off, its dominance could fade. For example, Framehawk’s own CTO, Stephen Vilke, took a particular liking to the Samsung Galaxy Note he saw at CES, stylus and all ("This may sound crazy, but the stylus is more natural for a guy like me who spent his school years with a pen or pencil in my hand," said Stephen in a recent CNET article on the topic).

As new devices launch and gain adoption at the expense of others, enterprises must be ready to react.

After initial caution, enterprises are being more aggressive about adopting tablets

Despite a reputation for moving slowly, enterprise IT seems to be jumping into the adoption of tablets faster than you might expect.

After “testing the waters” in 2011, according to a Forrester report, companies are expected to buy $10 billion worth of iPads this year and $16 billion in 2013. This lends credence to some stats from Apple from October 2011 purporting that 93% of Fortune 500 companies have deployed or are testing iPads.

I’d definitely believe that projects to test how best to make use of these devices have sprung up nearly everywhere in the enterprise, despite a few hold-outs prohibiting their use for official purposes. On the other hand, production deployments, from what I can tell, are less likely at the moment, though there’s intense pressure to get there – and to do so quickly.

What will the impact of all this be in the enterprise?

This is a lot of change to absorb, especially for big organizations. People are getting used to how and when to use tablets versus their other computing devices. In fact, an IDG Connect study reported that 16% of their respondents claim their iPad has replaced their PC.

I find it unlikely (as did a few of my Twitter followers) that this 16% actually handed back their PCs, but the point is an interesting one. What devices does the IT infrastructure team need to optimize for now? What devices should be considered important by the mobile application development team? And is there a way to take all of these changes and uncertainties in stride?

Stay tuned…I’ll be tackling some of those questions (and, yes, highlighting some of the ways Framehawk might help) in the near future.

In the meantime, let me know of any other jaw-dropping stats worth adding to this list. And, if you’re interested in any others that we dig up, follow @Framehawk – we tweet some of the more intriguing ones as we find them.

Monday, January 31, 2011

More about the ‘people side’ of cloud: good-bye wizards, hello wild ducks and T-shaped skills

One of the problems that has dragged down the transformational ideas around cloud computing is the impact of this new operating model on organizations and the actual people inside them. I noticed this back in my days at Cassatt before the CA acquisition. The Management Insight survey I wrote about a few weeks back pointed this out. And, if you’ve been following this blog, I’ve mentioned it a couple other times over the past few months as the hard part of this major transition in IT.

While the technical challenges of cloud computing aren’t simple, the path to solve them is pretty straightforward. With the people side of things…not so much.

However, I’m optimistic about a new trend I've noticed throughout the first month of 2011: much of the recent industry discussion is actually talking through the people issues and starting to think more directly about how cloud computing will affect the IT staff.
Now, maybe that’s just beginning-of-the-year clarity of thought. Or people sticking to some sort of resolutions.

(I have a friend, for example, who swears off alcohol for 31 days every January 1st. He does it to make up for the overly festive holiday season that has been in high-gear the previous month. But I think he also uses it as an attempt to start the year out focusing himself on the things he thinks he should be doing, rather than having an extra G&T. And it usually works. For a while, anyway.)

Whatever the reason, I thought it was worth highlighting interesting commentary that’s recently appeared in the hopes of extending (and building on) the discussion about the human impact of cloud computing.

It’s got to be you

Back in December, the folks on-stage at the Gartner Data Center Conference had a bunch of relevant commentary on IT and its role in this cloudy transition.

Gartner analyst Tom Bittman noted in his keynote that for cloud computing, IT needs to focus. The IT folks are the ones that “should become the trusted arbiter and broker of cloud in your organization.” He saw evangelism in the future of every IT person in his audience from the sounds of it. “Who is going to be the organization to tell the business that there is a new opportunity based on cloud computing? That’s got to be you.” Are people ready for that level of commitment? We’ll find out. With great power comes great responsibility, right?

Automation & staffing levels

With cloud also comes an increasing reliance on automation. That hands IT people a big reason to push back on cloud computing right there. As Gartner analyst Ronni Colville said in one of her sessions, “the same people who write the automation are the ones whose jobs are going to change.”

In another session, Disney Interactive Media Group’s CTO Bud Albers talked about how the company’s cloud-based approach to internal IT has impacted staffing levels. “No lay-offs,” he said, “but you get no more people.” That means each person you do have (and keep) is going to have to be sharpening their skills toward this transition.

T-shaped business skills

In the era of cloud computing, then, what do you want that staff to be able to do?
Gartner’s Dave Cappuccio talked about creating a set of skills that are “T-shaped” – deep in a few areas, but having broad capabilities across how the business actually works. He believes that “technology depth is important, but not as key as business breadth.” The biggest value to the business, he said, is this breadth.

So that means even more new IT titles from cloud computing

To get to what Cappuccio is proposing, organizations are going to have to create some new roles in IT, especially ones focused on the business angle. A few posts ago, I rattled off a whole list of new titles that will be needed as organizations move to cloud computing. Last week, Bernard Golden’s CIO.com blog post, “Cloud CIO: How Cloud Computing Changes IT Staffs,” made some insightful suggestions (and some, I’m happy to say, matched mine). He noted the rising importance of the enterprise architect, as well as an emphasis on operations personnel who can deal with being more “hands-off.” He saw legal and regulatory roles joining cloud-focused IT teams and security designers needing to handle the “deperimeterization” of the data center. And he saw IT financial analysts becoming more important in decision-making.

Gartner’s Donna Scott agreed with that last one in her session on the same topic at the Gartner Data Center Conference. She believed the new, evolved roles that will be needed include IT financial/costing analysts. She also called out solution architects, automation specialists, service owners, and cloud capacity managers.

Wild ducks & the correct number of pizzas

So what personalities do you need to look for to fill those titles?

At the same conference, Cameron Haight discussed how to organize teams and whom to assign to them. “If you have that ‘wild duck’ in your IT shop, they’re dangerous. But they are the ones who innovate,” he said.

Haight noted that the hierarchical and inflexible setup of a traditional IT org just won’t work for cloud. What’s needed? Flatter and smaller. “Encourage the ‘wild duck’ mentality,” said Haight. “Encourage critical thinking skills and challenge conventional wisdom” in individuals.

As for organizing, “use 2-pizza teams,” he suggested, meaning groups should be no larger than 2 pizzas would feed. (He left the choice of toppings up to us, thankfully.) Groups then should support a service in its entirety by themselves. Haight believes this drives autonomy, cohesiveness, ownership, and will help infrastructure and operations become more like developers, lessening the “velocity mismatch” between agile development and slow and methodical operations teams.

To take this even further, take a look at the Forrester write-up called “BT 2020: IT’s Future in the Empowered Era.” Analysts Alex Cullen and James Staten talk about a completely new mindset that’s needed for IT (or, as they call it, BT – business technology) by 2020. Why? Your most important customers today won’t be so important then, those new customers will want things IT doesn’t yet provide, and the cost of energy is going to destroy current operating models.

Other than that, how was the play, Mrs. Lincoln? But, hey, 2020 is still a ways off.

Moving past wizardry: getting real people onboard with real changes today

Getting the actual human beings in IT to absorb and actually be a part of all these changes is hard and has to be thought through.

At a recent cloud event we held, I interviewed Peter Green, CTO and founder of Agathon Group, a cloud service provider that uses CA 3Tera AppLogic (this clip has the interview; he talks about cloud and IT roles starting at 3 min., 55 sec.). These changes are the hardest, Green said, “where IT sees its role as protector rather than innovator. They tend to view their job as wizardry.” That’s not a good situation. Time to pull back the curtain.

Mazen Rawashdeh, eBay’s vice president of technology operations, noted onstage at the Gartner conference that he has found a really effective way to get everyone pointed the same direction through big changes like this. “The moment your team understands the ‘why’ and you keep the line of communications open, a lot of the challenges will go away.” So, communicate about what you're up against, what you're working on, and how you're attacking it. A lot.

Christian Reilly posted a blog last month that I thought was a perfect example of that eyes-wide-open attitude, despite the uncertainty that all of these shifts bring. Reilly (@reillyusa on Twitter) is in IT at a very large end-user organization dealing with cloud and automation directly.

“I am under no illusion,” Reilly posted, “that in the coming months (or years)…automation, in the guise of the much heralded public and private cloud services, will render large parts of my current role and responsibility defunct. I am under no illusion that futile attempts to keep hold of areas of scope, sets of repeatable tasks or, for that matter, the knowledge I’ve collected over the years will render me irreplaceable.

“Will I shed tears? Yes. But they will be tears of joy.”

A little over the top, sure, but he gets a gold star for attitude. Green of Agathon Group thinks the cloud is actually the opportunity to bring together things that have been too separate.

“Where I see a potential, at least,” said Green, “[is] for cloud computing to act as a common area where tech and management can start to converse a little bit better.”

So, as I said, January has given me a little hope that the industry is on a good path. Let’s hope this kind of discussion continues for the rest of the year.

Wednesday, December 29, 2010

Making 'good enough' the new normal

In looking back on some of the more insightful observations that I’ve heard concerning cloud computing in 2010, one kept coming up over and over again. In fact, it was re-iterated by several analysts onstage at the Gartner Data Center Conference in Las Vegas earlier this month.

The thought went something like this:

IT is being weighed down by more and more complexity as time goes on. The systems are complex, the management of those systems is complex, and the underlying processes are, well, also complex.

The cloud seems to offer two ways out of this problem. First, going with a cloud-based solution allows you to start over, often leaving a lot of the complexity behind. But that’s been the same solution offered by any greenfield effort – it always seems deceptively easier to start over than to evolve what you already have. Note that I said “seems easier.” The real-world issues that got you into the complexity problem in the first place quickly return to haunt any such project. Especially in a large organization.

Cloud and the 80-20 rule

But I’m more interested in highlighting the second way that cloud can help. That way is more about the approach to architecture that is embodied in a lot of the cloud computing efforts. Instead of building the most thorough, full-featured systems, cloud-based systems are often using “good enough” as their design point.

This is the IT operations equivalent of the 80-20 rule. It’s the idea that not every system has to have full redundancy, fail-over, or other requirements. It doesn't need to be perfect or have every possible feature. You don't need to know every gory detail from a management standpoint. In most cases, going to those extremes means what you're delivering will be over-engineered and not worth the extra time, effort, and money. That kind of bad ROI is a problem.

“IT has gotten away from ‘good enough’ computing,” said Gartner’s Donna Scott in one of her sessions at the Data Center Conference. “There is a lot an IT dept can learn from cloud, and that’s one of them.”

The experiences of eBay

In talking about his experiences at eBay during the same conference, Mazen Rawashdeh, vice president of eBay's technology operations, talked about his company’s need to understand what made the most impact on cost and efficiency and to optimize for those things. That meant a lot of “good enough” decisions in other areas.

eBay IT developed metrics that helped drive the right decisions, and then focused, according to Rawashdeh, on innovation, innovation, innovation. They avoided the things that would weigh them down because “we needed to break the linear relationship between capacity growth and infrastructure cost,” said Rawashdeh. At the conference, he laid out a blueprint for a pretty dynamic IT operations environment, stress-tested by one of the bigger user bases on the Web.

Rawashdeh couched all of this IT operations advice in one of his favorite quotes from Charles Darwin: “It’s not the strongest of species that survive, nor the most intelligent, but the ones most responsive to change.” In the IT context, it means being resilient to lots of little changes – and little failures – so that the whole can still keep going. “The data center itself is our ‘failure domain,’” he said. Architecting lots of little pieces to be “good enough” lets the whole be stronger, and more resilient.

Everything I needed to know about IT operations I learned from my cloud provider

So who seems to be the best at “good enough” IT these days? Most would point to the cloud service providers, of course.

Many end-user organizations are starting to get this kind of experience, but aren’t very far yet. Forrester’s James Staten says in his 2011 predictions blog that he believes end-user organizations will build private clouds in 2011, “and you will fail. And that’s a good thing. Because through this failure you will learn what it really takes to operate a cloud environment.” He recommends that you “fail fast and fail quietly. Start small, learn, iterate, and then expand.”

Most enterprises, Staten writes, “aren’t ready to pass the baton” – to deliver this sort of dynamic infrastructure – yet. “But service providers will be ready in 2011.” Our own Matt Richards agrees. He created a holiday-inspired list of some interesting things that service providers are using CA Technologies software to make possible.

In fact, Gartner’s Cameron Haight had a whole session at the Vegas event to highlight things that IT ops can learn from the big cloud providers.

Some highlights:

· Make processes experience-based, rather than set by experts. Just because it was done one way before doesn’t mean that’s the right way now. “Cloud providers get good at just enough process,” said Haight, especially in the areas of deployment and incident management.

· Failure happens. In fact, the big guys are moving toward a “recovery-oriented” computing philosophy. “Don’t focus on avoiding failure, but on recovery,” said Haight. The important stat with this approach is not mean-time-between-failures (MTBF), but mean-time-to-repair (MTTR). Reliability, in this case, comes from software, not the underlying hardware. (A quick back-of-the-envelope illustration of why that trade-off works follows this list.)

· Manageability follows from both software and management design. Management should lessen complexity, not add to it. Haight pointed toward tools trying to facilitate “infrastructure as code,” to enable flexibility.
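As promised in the “failure happens” bullet above, here’s a minimal sketch in Python of why MTTR can matter more than MTBF. It uses the standard steady-state availability formula, availability = MTBF / (MTBF + MTTR); the specific failure and repair times are made-up assumptions for illustration, not figures from Haight’s session.

```python
# Steady-state availability = MTBF / (MTBF + MTTR).
# Compare an "avoid failure at all costs" design with a "recover fast" design.

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the service is up, given mean time between
    failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Design A: failures are rare, but recovery is slow (manual fail-over).
design_a = availability(mtbf_hours=2000, mttr_hours=4)

# Design B: failures are 10x more frequent, but software recovers in ~3 minutes.
design_b = availability(mtbf_hours=200, mttr_hours=0.05)

print(f"Design A (rare failures, slow repair):     {design_a:.5f}")  # ~0.99800
print(f"Design B (frequent failures, fast repair): {design_b:.5f}")  # ~0.99975
```

The point of the comparison: a system that fails ten times as often but recovers automatically in minutes still ends up more available than one that fails rarely but takes hours to repair by hand. That’s the logic behind letting reliability come from the software rather than the hardware.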

Know when you need what

So, obviously, “good enough” is not always going to be, well, good enough for every part of your IT infrastructure. But it’s an idea that’s getting traction because of successes with cloud computing. Those successes are causing IT people to ask a few fundamental questions about how they can apply this approach to their specific IT needs. And that’s a useful thing.

In thinking about where and how “good enough” computing is appropriate, you need to ask yourself a couple of questions. First, how vital is the system I’m working on? Consider its use, its tolerance for failure, its importance, and the like. The more critical it is, the more careful you have to be with your threshold of “good enough.”

Second, is speed of utmost importance? Cost? Security? Or a set of other things? Like Rawashdeh at eBay, know what metrics are important, and optimize to those.

Be honest with yourself and your organization about places you can try this approach. It’s one of the ideas that got quite a bit of attention in 2010 that’s worth considering.

Wednesday, October 13, 2010

The first 200 servers are the easy part: private cloud advice and why IT won’t lose jobs to the cloud

The recent CIO.com webcast that featured Bert Armijo of CA Technologies and James Staten of Forrester Research offered some glimpses into the state of private clouds in large enterprises at the moment. I heard both pragmatism and some good, old-fashioned optimism -- even when the topic turned to the impact of cloud computing on IT jobs.

Here are some highlights worth passing on, including a few juicy quotes (always fun):

Cloud has executive fans, and cloud decisions are being made at a relatively high level. In the live polling we did during the webcast, we asked who was likely to be the biggest proponent of cloud computing in attendees’ organizations. 53% said it was their CIO or senior IT leadership. 23% said it was the business executives. Forrester’s James Staten interpreted this to mean that business folks are demanding answers, often leaning toward the cloud, and the senior IT team is working quickly to bring solutions to the table, often including the cloud as a key piece. I suppose you could add: “whether they wanted to or not.”

Forrester’s Staten gave a run-down of why many organizations aren’t ready for an internal cloud – but gave lots of tips for changing that. If you’ve read James’ paper on the topic of private cloud readiness (reg required), you’ve heard a lot of these suggestions. There were quite a few new tidbits, however:

· On creating a private cloud: “It’s not as easy as setting up a VMware environment and thinking you’re done.” Even if this had been anyone’s belief at one point, I think the industry has matured enough (as have cloud computing definitions) for it not to be controversial any more. Virtualization is a good step on the way, but isn’t the whole enchilada.

· “Sharing is not something that organizations are good at.” James is right on here. I think we all learned this on the playground early in life, but it’s still true in IT. IT’s silos aren’t conducive to sharing things. James went farther, actually, and said, “you’re not ready for private cloud if you have separate virtual resource pools for marketing…and HR…and development.” Bottom line: the silos have got to go.

· So what advice did James give for IT organizations to help speed their move to private clouds? One thing they can do is “create a new desired state with separate resources, that way you can start learning from that [cloud environment].” Find a way to deliver a private cloud quickly (I can think of at least one).

· James also noted that “a private cloud doesn’t have to be something you build.” You can use a hosted “virtual private cloud” from a service provider like Layered Tech. Bert Armijo, the CA Technologies expert on the webcast, agreed. “Even large customers start with resources in hosting provider data centers.” Enterprises with CA 3Tera AppLogic running at their service provider and internally can then move applications to whichever location makes the most sense at a given point in time, said Armijo.

· What about “cloud-in-a-box” solutions? James was asked for his take. “Cloud-in-a-box is something you should learn from, not take apart,” he said. “The degree of completeness varies dramatically. And the way in which it suits your needs will vary dramatically as well.”

The biggest cloud skeptics were cited as – no surprise – the security and compliance groups within IT, according to the polling. This continues to be a common theme, but shouldn’t be taken as a reason to toss the whole idea of cloud computing out, emphasized Staten. “Everyone loves to hold up the security flag and stop things from happening in the organization.” But don’t let them. It’s too easy to use it as an excuse for not doing something that could be very useful to your organization.

Armijo also listed several tips for finding successful starting points in the move to creating a private cloud. It was all about pragmatic first steps, in Bert’s view. “The first 200 servers are the easy part,” said Armijo. “Because you can get a 50-server cloud up doesn’t mean you have conquered cloud.” His suggestions:

- Start where value outweighs the perceived risk of cloud computing for your organization (and it will indeed be different for each organization)
- Find places where you will have quick, repeated application or stack usage
- If you’re more on the bleeding edge, set up IT as an internal service provider to the various parts of the business. It’s more challenging, for sure, but there are (large) companies doing this today, and it will make profound improvements to IT’s service delivery.

Will cloud computing eliminate jobs? A bit of Armijo’s optimism was in evidence here: he said, in a word, no. “Every time we hit an efficiency wall, we never lose jobs,” he said. “We may reshuffle them. That will be true for clouds as well.” He believed more strategic roles will grow out of any changes that come as a result of the impact of cloud on IT.

“IT people are the most creative people on the face of the planet,” said Armijo. “Most of us got into IT because we like solving problems. That’s what cloud’s going to do – it’s going to let our creative juices flow.”

If you’re interested in listening to the whole webcast, which was moderated by Jim Malone, editorial director at IDG, you can sign up here for an on-demand, encore performance.

Monday, August 2, 2010

Internal clouds: with the right approach, you might be more ready than you think

If you saw the headline “You’re Not Ready for Internal Cloud” making the rounds last week, you might have thought Forrester analyst James Staten was putting the nail in the coffin of the private cloud market. Or maybe you just thought he was channeling Jack Nicholson’s Colonel Jessep from “A Few Good Men” (yelling "You can’t handle the truth!”…about cloud computing?).
It turns out neither is the case as you read that Forrester report and look around at what customers are doing: internal clouds aren’t to be dismissed. And customers can handle the truth about what’s possible here and now.
James deserves congratulations for once again crystallizing some key issues that customers are having around cloud computing – and for having some good advice on ways to deal with them. (It’s not the first time James has been ahead of the curve with practical advice, I might add. For some other examples, check out the links in my interview with him from a few months back.)

It turns out James hides one of the most important messages about “being ready for internal clouds” in the subhead of the report: “…You Can Be: Here’s How.”
Here are a few things that struck me in reading both James’ report and commentary on it last week (Forrester clients in good standing can read James’ write-up here):
The industry expects internal (or private) clouds to either be dead simple or impossibly complex. The truth is in the middle somewhere. If you’re trying to take what you already have and get those IT systems to begin behaving in a cloud-like manner, there’s certainly some work to do. Rodrigo Flores of newScale paraphrased Staten’s reasoning in his blog here as this: “Cloud is a HOW, not a what. It’s a way to operate and offer IT service underpinned by self-service and automation.”

Creating an internal cloud includes both technological changes and alterations to the approach that IT departments are taking to managing and distributing their infrastructure and resources. Because of that, it really isn’t possible to buy an internal cloud out of a single box. Plus, I’ve heard customers that are very interested in building on what they’ve already invested in. That requires an IT department willing to invest as well, in things like time, effort, but most of all, change. But there are some comparatively easier options. More on that in a moment.
You can’t just turn to your vendors for a simple definition of cloud computing that’s going to automatically match what you want. You have to get involved yourself. James has been one of the industry watchers who isn’t afraid to call vendors out when they are using a cloud label on their offerings without good reason. He notes that the term is thrown around “sloppily, with many providers adding the moniker to garner new attention for their products.” He warns that you can’t just “turn to your vendor partners to decode the DNA of cloud computing.” His comments match what I say in my Cloud Computing 101 overview. I believe the definition of cloud computing (and in this case internal or private cloud) is less important than actually figuring out what you can and should be doing with it. And that’s something that IT departments need to work on in conjunction with their vendors and partners. You can’t just take their word for it.

You do need to be pretty self-critical and evaluate your level of readiness for an internal cloud. In his article about James’ report, Larry Dignan from ZDNet notes that “there’s a lot of not-so-sexy grunt work to do before cloud computing is an option. IT services aren’t standardized and you need that [in order] to automate.” The Forrester report has a check list of things you should think about to decide if you’re ready to take some of these steps.
One of the highlights from that list is the importance of sharing IT infrastructure among multiple groups. As Dignan says, “various business units inside a company are rarely on the same infrastructure.” However, writes James, “the economic benefits of cloud simply don’t work without this” kind of sharing.
A while back (over a year ago, in fact), my blog co-contributor Craig Vosburgh posted some of his thoughts about what you must be ready to do from a technical standpoint to really operate a private cloud. It turns out many of these are pretty complementary to James’ list. Craig mentioned the willingness to be comfortable with change of a style and scope that IT shops haven’t been used to before, given that workloads will possibly be shifting around from place to place at a moment’s notice, along with a number of other, more technologically focused considerations. Whatever list you use, the point is that you need to have one. And you need to make sure you’ve thought through where you are now and where you’re going.

One measure that James spends a good portion of his paper talking about is the level of an organization’s virtualization maturity. His belief is that this will have a strong effect on their ability to make an internal cloud a reality.
Is this a useful thing to measure? From what I’ve seen, yes…and no. Virtualization maturity is important, but greenfield opportunities give you the chance to start from scratch with something simple and show big gains. James mentions several ways to actually deliver internal clouds. Only one of those involves progressing through all stages of a virtualization and automation project involving your existing, complex infrastructure for your key applications. He notes (and customers have, too) that you can actually start with something much simpler.
After all, as Rodrigo Flores noted in his blog, “why get stuck solving the hardest, most complex applications” when, in fact, those tend to be pretty stable, don’t really move the needle, and are often under the most intense governance and compliance scrutiny.
This is where a turnkey cloud solution can be a big help (and, yes, it should be noted that CA Technologies’ 3Tera solution falls into that category). Instead of having to move through a slower set of evolutionary steps around the systems and applications that you already have running and don’t want to disrupt, the other option is to start something new. Pick one of those new applications you’ve been asked for that’s not a bet-the-business proposition and try something that gives you a big leap over what already exists or the standard approach. If it fails, you learn something at minimal risk. But if it’s successful, you have a powerful new model to think about and try to replicate in other areas.

And, enough with the excuses. The bottom line is that trying one of these approaches is going to be critical for the IT department’s future. There are certainly very good reasons that internal or private clouds are going to be difficult. But I believe IT is going to be a lot like James Urquhart envisions in his recent post: it's going to be “hybrid IT.” For some things, you will leverage applications and IT infrastructure from inside your firewall; for other things, you'll get help from outside. There will be a need for managing each piece, and to manage the whole thing as a coherent IT supply chain. The more experience that you and your IT shop have with even a piece of this early in the process, the better choices you’ll be able to make from here on out.

In either case, take a look at James Staten’s report if you can. It’s helpful in thinking through these and a bunch of other very relevant considerations around internal clouds. But maybe its most useful role is to serve as a bit of an internal rallying cry, helping you get your act together on internal clouds.

Who knows? With the right approach, you might be more ready for internal clouds than you think.

Tuesday, April 13, 2010

Forrester's Staten: Realities of private and hybrid clouds aren't what you're expecting

James Staten does not pull punches. And for an IT industry analyst, that’s a good thing.
I first met James a few years back when he joined Forrester during my time at Cassatt. I heard him do a couple presentations at that year’s Forrester IT Forum, had some briefing sessions with him, and realized that with James, friendly conversations quickly turn into very specific advice and commentary. Even better, it was advice and commentary that was very much on target.

For those of you who don’t know James, he is a principal analyst in Forrester Research’s Infrastructure & Operations practice, helping IT ops folks make sense of topics like virtualization, cloud computing, IT consolidation best practices, and other data center- and server-focused issues. From my perspective, James has been a great addition to Forrester, especially in his role as one of the earliest voices helping describe the impact and meaning of cloud computing.

So, I thought I’d turn him loose on a couple current cloud computing topics and see if we couldn’t find a few things to argue over. In a good way, of course. Here’s that interview:
Jay Fry, Data Center Dialog: There’s lots of debate about how cloud computing takes hold in an organization. Some see people starting with public clouds. There is also lots of conversation about private clouds. In your discussions with customers, what path do you see end user organizations taking now – and is it even possible to make any generalizations?

James Staten, Forrester: Most of the time enterprises start by experimenting with public clouds. They are easy to consume and deliver fast results. And typically the folks in the enterprises who do so are application developers and they are simply trying to get their work done as fast and easy as possible and see IT ops as slow, expensive, and a headache to deal with.

In response to this, IT ops likes the idea of a private cloud – their word, not mine. This usually translates into an internal cloud and the desire is to transform parts (if not all) of their virtual server environment into a cloud. Usually this transformation happens only in name, not in operation -- and that’s where a disconnect arises. IT ops pros tend to rebut public cloud by saying, “Hey, I can provision a VM in minutes, too.” But that’s not the full value of cloud computing. Fast deployment is just the beginning.

DCD: How do you see hybrid cloud environments coming into the picture? How soon will that be a reality?

James Staten: It’s a reality for some firms today, but not in the way that many people think. A hybrid between your internal data center and a public cloud isn’t very realistic due to latency, cost, and security. A hybrid between your internal cloud and a public cloud isn’t realistic either. (See the above answers for why.) What is realistic is a hybrid between dedicated hosting and cloud hosting within the same hosting provider. USA.gov is doing just such a thing in Terremark’s Alexandria, VA data center (Forrester customers can read a case study on this here). This kind of a hybrid allows the two separate environments to share the same backbone network and security parameters. And it allows the business service being supported to match the right type of resources to the right parts of the service.
For example, you may not want or need elastic scalability for your database tier, so it’s more stable and economical to host it on a dedicated resource with a predictable 12-month contract. But you have a lot of resource consumption volatility in the app and web tiers and so they are best served being hosted in a cloud environment.

DCD: I’ve written here before about rogue deployments using cloud computing from business users without the explicit buy-in of the IT department. Do you see that as likely or commonplace? How should IT deal with these?
James Staten: They are extremely commonplace, as evidenced by our role-based research: when you ask application developers if their company is using cloud, 24% say yes; ask the same question of IT ops pros, and 5% say yes. Clearly app dev is using the cloud and circumventing IT ops in doing so. It’s no surprise. They see IT ops as slow and rigid.

How should IT ops respond? First, you can’t be draconian and say, “Don’t use cloud.” Those who are parents know how well that works. Instead, IT ops needs to add value to this activity so they are invited into this process. They need to embrace the use of public clouds by asking how they can help.

DCD: What are some of the big changes that you see underway with regard to cloud computing this year so far? Has anything really surprised you?
James Staten: This year will be all about understanding where to start for most companies, how to move from testing the waters for the percentage who have already gotten this far, and how to optimize your cloud deployment for those who have already moved into production on public clouds.

For IT ops, this year is about roadmapping your transformation from running a virtual server environment to running an internal cloud. They won’t get there in one year, but they can build the plan and start moving now.

Nothing has really surprised me in the past 12 months but I look forward to seeing the innovative ways companies will devise to take advantage of clouds in the future. We’ve seen some very promising starts.

DCD: Acquisitions are shaping up to be a major storyline this year from my point of view. And I’m not just saying that because of CA’s recent moves with Cassatt, Oblicore, 3Tera, and Nimsoft; there have been many. You were recently commenting on Twitter about confusion coming from Oracle about the management products related to their Sun acquisition. What role do you think acquisitions are (and should) play in shaping what customers have to choose from in the cloud computing space?

James Staten: For the most part, acquisitions in a new emerging space are about speed to market. The leading software companies could adapt their existing enterprise products to incorporate and integrate cloud computing, but that’s hard, as there are already 50-100 other priorities on the roadmap that keep the installed base happy. Acquisitions also inject new blood (new thinking, new technologies, perhaps even new culture) into an existing player and that is often sorely needed to drive a sense of urgency around addressing a new market opportunity because the immediate revenue opportunity in the new market is much smaller. This is at the heart of Clayton Christensen’s Innovator’s Dilemma. Cloud computing looks very much like a disruptive innovation and according to Christensen’s theory, the disruption can be a king-maker for those that lead the disruption (and make a pauper out of those being disrupted). So that fuels the belief that acquisitions are necessary.

DCD: You wrote one of the earliest analyst reports on cloud computing in March 2008 (“Is Cloud Computing Ready for the Enterprise?”). You’ve also been on top of “how to” topics for end users with research to help clarify some of the fuzziness around cloud computing (“Which Cloud Computing Platform is Right for You?” [April 2009], “Best Practices: Infrastructure-as-a-Service” [Sept. 2009], etc.). Given how the cloud computing category – and the industry debate – has evolved, would you approach some of your earlier research differently now? Any conclusions you would change?

James Staten: No, I feel proud of the research we have done, as it was backed by real-world interviews with those who are leading this evolution, rather than theories and leaps of faith about what might occur. We were also very clear in our objective of clarifying what was truly new and different about cloud computing to help guide customers through what we knew would be a hype-filled world. It’s unfortunate the industry has latched on so tightly to the term “private cloud” because in our opinion it is a very nebulous and thus meaningless concept. Heck, anything can be made private. But then again, cloud computing isn’t the most precise term, either.
DCD: If many years from now we’re thinking back on how cloud computing got started versus how it ended up, there will be an interesting storyline about the degree of difference between the two. How much of the hype around cloud computing should be ignored and how fundamental do you think it will end up being? What’s going to be the thing that really makes cloud more mainstream in your opinion?
James Staten: The one part about the hype that I think should be ignored is the belief that everything will be cloud in the future. While this is a nice, disruptive statement that draws a ton of discussion and "what if" thinking, it simply isn’t realistic. If it were, there would be no mainframes today. Everything old is still running in enterprise data centers; we simply contain these technologies as we move to adopt new ones that arise.

And cloud computing is definitely not an answer to every question. Some applications have no business in the cloud and frankly will be less efficient if put there.

It’s through better understanding of what the cloud is good for and what it isn’t that IT will move forward. At the heart of cloud computing is the idea of shared empowerment – that by agreeing to a standardized way of doing something, we can share that implementation and thus garner greater efficiencies from it. That concept will manifest itself in many ways over the coming years. SaaS is a classic example that is delivering new and greater functionality to customers faster. IaaS is a great example because when multiple customers share the same pool of resources the overall utilization of the pool can be higher and thus the cost of the infrastructure can be spread more effectively across a bigger customer set, lowering the cost of deployment for all. Take this further and you can imagine many more scenarios:
· Why buy any software when it is more efficient to rent it only when you need it? We haven’t seen that model in SaaS yet.
· Why write code when you can more easily construct a business service by stringing service elements together? This is the core of SOA and builds upon the Java construct of reusable libraries. How far can we take that?
· Why store anything yourself when storing things on the Internet allows that data to be anywhere you are, whenever you need it to be – and to be self-correcting? I’ve been using Plaxo for over 5 years to keep track of all my business contacts so I don’t really care if I lose my mobile phone, laptop, or other device that stores this information. Now that Plaxo links with Facebook and LinkedIn, all the people in my address book can update their own records and I get this info as soon as it is synced. That’s distributed data management in the cloud. Can this same model be applied to other data and storage problems?
DCD: So, what area do you think is the next battleground?
James Staten: There are many areas that will be affected by cloud computing. One market that greatly interests me right now is HPC/grid computing. Loosely coupled applications are a great fit for the cloud today and flip the economics of grid on their head. There are some incredible examples of companies using this combination to transform how they do business in healthcare, financial services, government, and many other fields. Business intelligence is fertile ground for change due to this and the influx of MapReduce. I’m really looking forward to seeing what changes in this arena.

DCD: What do you believe will be some of the more interesting starting points for customers’ cloud-based activity?

James Staten: You gotta start with test and development because this is the one area where any company can get immediate benefit. Every enterprise has a queue of test projects waiting to get lab resources. Use the cloud as an escape valve for these projects. Then take a look at your web applications. If they have traffic volatility, they are a natural fit for cloud.

DCD: Do you see any dead-ends that some customers are heading down that others should be careful to avoid?

James Staten: It’s a total dead end to think that you can simply buy a “cloud in a box” and suddenly you have an internal cloud. Internal cloud isn’t a “what,” it’s a “how.” How you operate the environment is far more the determiner of [the benefit of] cloud computing than the infrastructure and software technologies it is based on. This is true for service providers, too. Just because an ISP has expertise in managing physical or virtual servers doesn’t mean they can effectively run a cloud. Sure, some of the cloud building block technologies can help you get there, but this is totally an operational efficiency play.

DCD: Forrester recently changed its blogging policy for its analysts to require that any content in research-related areas be posted only to official Forrester blogs. Did this move make it harder or easier to blog, and do you think it’s going to help customers in the long run?

James Staten: It made it significantly easier for me as I strongly believe in separation between work and personal life, and equally believe in freedom of expression – and thankfully, so does Forrester. While I can’t speak for every analyst (nor can I speak for Forrester on this topic), I can say personally that this change in policy was a good one. Prior to this we had team blogs, rather than blogs for each analyst, and if someone wanted their own stage, they had to go outside to do it. Now our blogging platform gives every analyst their own outlet while preserving the aggregation of blog content by client role. We also moved to a blogging system that makes it much, much easier for me to author and publish blog entries myself. Anything that makes me more productive and helps clients consume our value is a good thing.

Thanks to James for spending the time for this extended interview. Of course, given that James is a pretty intense endurance runner during his off hours (he’s shooting for 8 marathons and 6 half-marathons this year – and 50 marathons by his 50th birthday), a marathon interview seemed somewhat appropriate.

If you have thoughts or feel an urge to disagree with James about any of the topics we touched on here, feel free to add your comments.

Tuesday, August 18, 2009

Isn't IT automation inherently evil? (I mean, you saw 'The Terminator,' right?)

While the general take on CloudWorld in San Francisco last week may have been that it was merely a shadow of what industry attendees were expecting, at least one presentation seems to have registered on the "worthy-of-discussion" meter. Lew Tucker from Sun was written up by Larry Dignan of ZDNet, Reuven Cohen, The Register, and others, for Lew's commentary on self-provisioning applications and "future cloud apps that won't need humans."

A couple things strike me about this "humanless computing" (as Reuven put it): first, whether people really think it through or not, this kind of automation is absolutely required for cloud computing. The types of dynamic infrastructures that businesses are hoping to get from the cloud just can't have a human in the minute-by-minute IT operations loop. (See also: human telephone switch operators.)

OK, fine. But that brings me to point #2: people hate automation. It's assumed to be faceless, out of control, and most likely up to no good. There's a whole Terminator movie & TV franchise built on the premise that the "rise of the machines" is to be avoided at all costs.

But while the evil of IT automation is great fodder for summer blockbusters, it's bad IT policy. Some specifics to think about:

Automation is a big part of cloud computing
Automation similar to what Lew was explaining is at the core of what cloud computing really is (or more honestly, what it will be when it's fully realized). Contrary to what many folks assume, the core requirement for a cloud is not virtualization. Virtualization is a technology that can be helpful in creating a cloud-style model, but it's not an absolute necessity. On the other hand, automation is one of those core cloud requirements -- the ability for systems to scale up and down without manual efforts, adding and releasing resources, and in many respects managing themselves.

Lori MacVittie of F5 explained this well in her post about "Putting the Cloud Before the Horse." "It isn't really a cloud unless it's automated," said Lori. "Without that automation what do you have? A bunch of servers running applications. That those applications are virtualized is really irrelevant to the architecture because you haven't done anything but changed which physical server they're being deployed on. Without the ability for the infrastructure to make decisions based on actionable data that's shared between the components, you really don't have anything all that much different than you did before."
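To make that point a little more tangible, here’s a deliberately simplified sketch (in Python) of the kind of control loop that sits at the heart of “the infrastructure making decisions”: measure demand, then add or release resources without a person in the loop. The function names and thresholds are hypothetical placeholders I invented for illustration, not any particular product’s API.

```python
import time

# A deliberately simplified autoscaling loop: measure load, compare it to
# thresholds, and add or release capacity with no human in the loop.
# get_average_cpu(), add_server(), and remove_server() are placeholders for
# whatever monitoring and provisioning interfaces a real environment exposes.

SCALE_UP_THRESHOLD = 0.75    # add capacity above 75% average CPU
SCALE_DOWN_THRESHOLD = 0.25  # release capacity below 25% average CPU
MIN_SERVERS, MAX_SERVERS = 2, 20

def control_loop(get_average_cpu, add_server, remove_server, server_count):
    while True:
        cpu = get_average_cpu()
        if cpu > SCALE_UP_THRESHOLD and server_count < MAX_SERVERS:
            add_server()
            server_count += 1
        elif cpu < SCALE_DOWN_THRESHOLD and server_count > MIN_SERVERS:
            remove_server()
            server_count -= 1
        time.sleep(60)  # re-evaluate once a minute
```

Real cloud platforms wrap this basic loop in far more safeguards (health checks, cooldown periods, quotas), but the principle is the same: the infrastructure itself reacts to actionable data, which is exactly what separates a cloud from “a bunch of servers running applications.”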

Automation remains scary to IT shops
Earlier this year, before CA's acquisition of the Cassatt assets and expertise, I wrote a bit about some of the reasons that IT remains leery of automation and boiled it down to three reasons: the IT operations folks are a conservative bunch (and rightly so), automation requires a great deal of trust, and vendors have made it hard on themselves by frequently overpromising and under-delivering.

The complexity of IT environments is part of the problem here, something that Forrester’s Glenn O’Donnell underscored in "IT Operations 2009: An Automation Odyssey" (Forrester client link) last month (Denise Dubie of Network World did a nice analysis of Glenn's paper here.) "A combination of forces, including skyrocketing complexity and severe economic pressure, are radically and irreversibly altering the IT landscape," said Glenn. "Evidence indicates an automation 'tipping point' is already under way this year."

2009 as the tipping point for automation?
So, if you agree that automation is an important piece for enterprises to begin to use public and private clouds, and you agree that IT avoids automation like the plague, are we at an impasse? Why would this year (of all years) see huge shifts?

As I speculated earlier and as Glenn notes, "the one-two punch of virtualization and economic pressure represents a tectonic shift for IT. For many IT services, complexity has surpassed human ability." (Once again, see also: human telephone switch operators.)
So does that mean it's time to jump in whole hog? As with anything in the enterprise, don't bet on it. Glenn has two good bits of advice here. First, find your threshold for automation. That threshold will be different for every IT organization.

To steal (and probably mangle) an analogy from Cassatt's former CTO Rob Gingell, some IT folks are like new Prius owners who go through their daily commute entranced by the little dashboard graphics showing when and how different parts of their hybrid engine are charging up or firing away. Others are less interested in the moment-by-moment changes and trust that the engine will do what it's supposed to do. (Often these are the same people, just a few weeks later.) The second group is more willing to try more automation (after they see it perform admirably on its test runs, of course). They are also the ones for whom the costs of tools, people, and process improvements (what Glenn calls "automation pain") are going to be offset by their org's "automation gain," an important balance to strike.
Another point Rob Gingell often makes is that the history of computing has been all about the drive toward increasing automation. Cloud computing is just another step. Previously you'd have to tell your operating system to do things in order for computing to proceed -- a step that we've thankfully evolved beyond. It will be the same with applications self-provisioning to maintain their service levels. Eventually.
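
Back to Glenn's advice about finding your threshold: one low-risk way to do that (a hypothetical sketch, not a description of any particular product) is to run the automation in a recommend-only mode first -- let it log the actions it would have taken, and only switch it to enforcement once its recommendations consistently match what your operators would have done anyway.

    # Hypothetical "recommend-only" switch for an automation action -- a sketch.
    ENFORCE = False  # start in dry-run mode; flip to True once trust is earned

    def decide_action(load, servers, policy):
        if load > policy["scale_up_above"] and servers < policy["max_servers"]:
            return "add_server"
        if load < policy["scale_down_below"] and servers > policy["min_servers"]:
            return "release_server"
        return None

    def act(action):
        if action is None:
            return
        if ENFORCE:
            print("executing: " + action)  # actually change the environment
        else:
            print("would have executed: " + action + " (dry run, logged for review)")

    act(decide_action(load=82, servers=3,
                      policy={"scale_up_above": 75, "scale_down_below": 25,
                              "min_servers": 2, "max_servers": 20}))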

OK, but shouldn't we still be a little worried about the Terminator scenario?
Lew Tucker's CloudWorld talk referenced Skynet, the self-aware (and quite nasty) computing system from the Terminator movies. He gave a nod to the underlying worry we humans have (at least, IT operations humans have) that the moral equivalent of a T-1000 is going to show up and take out some critical part of your data center and bring down important applications. And, sure, there are some twisted ways that automation can go wrong. (Stuart Charlton of Elastra pointed me to some amusing ones, which also note where humans make things worse.)

But it was with no small amount of irony that Cassatt used the Skynet name as the internal codename for our next-generation product architecture, in which more and more of the things your system needs to do were handled autonomously -- not for purposes of world domination or hunting down the last remaining humans, but instead to maintain application service levels.

Be on the right side of history
So, scary as things can seem around cloud computing and automating IT operations, it's logical that they will follow the pattern enterprise IT has followed since it began: a gradual move toward making the impossible or hard-to-imagine possible, even if the rate of that movement is somewhat unpredictable. Not long ago, e-commerce and Internet banking were seen as science fiction. Little by little, the risks became manageable and/or understood. IT incorporated what was needed and moved on.
I expect much the same is in store for the automation that's needed to enable cloud computing. Why? Lori MacVittie put it well: "If you want to take it to the next level, you're going to have to automate processes because that's where real operational benefits will be realized."

One other suggestion for those whose jobs are potentially impacted by automation comes from Glenn's work at Forrester: "Be the automator not the automated." Especially as the economy is forcing drastic cuts or at least hard choices, being on the side of providing the solution is more advisable. So, think a bit about how automation could actually make a difference, and help advocate that change.
So, in the end, IT automation is both important and possible. It's the key to some of the new cloud-driven capabilities that are within reach for organizations. Some are jumping in with both feet now, and if Glenn's right, a lot more will be experimenting this calendar year.

Which by some estimates is even a little late. I mean, wasn't Skynet supposed to take over starting in 1997?

Thursday, April 16, 2009

The Great Internal Cloud Debate: Where are we now?

In case you haven't been spending 24x7 keeping track of the industry chatter on the internal cloud and/or private cloud issue, I thought I'd point you to some recent relevant discussions. And maybe highlight what sounds something like a consensus that seems to be building about how this concept will affect (and even benefit) IT, shocking though that may be to you.

One of the most methodically thought-through and extensively discussed sets of definitions for clouds-that-aren't-really-what-everyone-meant-by-cloud-computing-in-the-first-place that I've seen recently was proposed by Chris Hoff (@Beaker on Twitter), which came complete with visual aids (thank you for that, Hoff, actually). Hoff's original point was to try to add some clarity to the "vagaries of cloudcabulary" as he described it -- and to show why using the HIPPIE (Hybrid, Public, Private, Internal, and External) terms for clouds interchangeably (as, ahem, I've kinda been doing myself around here) really doesn't help matters.

In cloud computing, there are lots of hairs to split on where the physical (er, or virtual) compute resource is located, who owns it, who manages it, who can access it -- and how. And, it turns out that after much debate, the private cloud term is the one that seems to be the squishiest. Hoff ended up with something that a lot of people liked (read his post, updates, and the comments to get the full picture), but I'm betting that the precision with which his definitions have been sculpted will be lost on many. He acknowledges that, too, in saying "I don't expect people to stop using [the] dumbed down definitions" and points specifically to comparing "private" clouds to "internal" ones as a prime offender.

So when is an internal cloud private? Or vice versa?

Since this internal v. private cloud distinction isn't one that we've really been making on this blog up to this point, I think it's worth explaining what we mean by each in light of the issues Hoff raised.

When we talk "internal clouds" here, we are mainly talking about using what you already have in your data center to create a dynamic resource pool, managed through policy-based automation software like what Cassatt provides. That means we are, for the most part, ignoring the status of a lot of the other key issues that Hoff discusses in our initial conversations. It's not that management and access (to name a few) aren't important, but they are topics that we add to the discussion along the way with customers. They are just not necessarily the first words out of our mouths.

Why?

Because we're trying to highlight what we think is the most important value to Cassatt customers: being able to leverage the concept of cloud computing by using what you already own inside your data center. In beginning this discussion about improving the efficiency of the data center resources an organization already has, the "internal cloud" moniker seems a fair, if somewhat imprecise, starting point. But you have to start somewhere.
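
For illustration only (the names and format below are invented, not Cassatt's actual configuration), the starting point for that kind of internal cloud is usually little more than an inventory of the machines you already own treated as a shared pool, plus a policy describing how each service may draw on it:

    # Invented example of an internal-cloud pool and policy -- not a real product's format.
    internal_cloud = {
        "resource_pool": [  # hardware you already own, now treated as shared capacity
            "rack12-blade01", "rack12-blade02", "rack12-blade03",
            "rack14-blade07", "rack14-blade08",
        ],
        "services": {
            "order-entry":   {"priority": 1, "min_servers": 2, "max_servers": 4},
            "nightly-batch": {"priority": 2, "min_servers": 0, "max_servers": 5,
                              "window": "22:00-06:00"},  # borrows idle capacity overnight
        },
    }

The point isn't the syntax; it's that the management, access, and ownership questions Hoff raises get layered on top of a pool like this along the way, rather than being settled before you start.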

Of course, after heading down that path a bit with a customer, the "private cloud" term may be the one that actually makes the most sense to describe what they are doing or working toward. The customer's ideal set-up may include both internal and external resources (I'm talking location here), accessed by people and systems inside and outside the company, yet those resources still need to be trusted and integrated sufficiently to be considered part of that company's internal compute infrastructure. Hybrid cloud situations could definitely fall into this category as they begin to move from the realm of PowerPoint to that of reality. And in all those cases, we should absolutely use the private cloud term.

So, we'll endeavor to be more precise in what we mean. Thanks for the pointers in the right direction, Hoff.

And, by the way, private cloud computing is suddenly everywhere

Having just said that there is a distinction between how someone uses private and internal clouds as a label, I am forced to note that the IT press and analyst communities seem to have latched onto the "private cloud" term much more aggressively, regardless of any distinctions. Or maybe those publishing recently have been following the debate (some certainly have on Twitter). I'll let you decide. In any case, here are a couple write-ups on private (and internal) clouds worth noting of late:

· InformationWeek’s Charlie Babcock covered “Why ‘Private Cloud’ Computing Is Real – And Worth Considering” pretty thoroughly. He argues that even though no single piece of an internal cloud architecture may look like a breakthrough, "private clouds represent a convergence of trends holding great promise for enterprise computing," enabling users to tap computing power without a lot of know-how. If your IT guys can master virtualization, he says, you'll master the private cloud (despite virtualization not being a requirement, it seems to be a good measuring stick). And, notes Charlie, "internal clouds aren't just a more efficient way of maintaining old data center practices." Instead, you have to rethink how you do things. Craig Vosburgh did a whole Data Center Dialog post about that topic if you're interested.

· Forrester's James Staten explained his view on how to "Deliver Cloud Benefits Inside Your Walls." While James does use both the internal and private cloud nomenclature, his first report in their "private cloud" series published April 13, 2009, puts Forrester's stake in the ground on the topic. While their definition is a little too virtualization- and developer-based for my tastes, I can't disagree with James that "the end result looks a lot like organic IT" -- the term Forrester has been using for a dynamic, utility-style data center since 2002.

· "Private Cloud Computing Is Real -- Get Over It," said Gartner's Tom Bittman in one in a series of blog posts on the topic. Tom has been pretty clear and pragmatic in his posts on this topic. Whether the name is precisely accurate is not the important point, he says. Instead, it's the idea. And he's in the middle of writing a bunch of research that, if his blog posts are any indication, will put Gartner's full weight behind the concept.

· Also from InformationWeek: what GE is doing with their private cloud, and what private cloud tools are hitting the market. (Yep, Cassatt got a quick mention.)

· 451 Group/Tier 1 Research explained that “The Sky’s the Limit: How Cloud Computing Is Changing the Rules” in their recent webcast. William Fellows recounted BT, Betfair, and Bechtel examples of how real customers are using private (and even hybrid) clouds in this webcast, created from a new report of theirs. Customer examples like this (and the ones in the InformationWeek articles) are great to see.

So where are we now?

To borrow a phrase, we've come a long way, baby. Frankly, even since Cassatt started using the "internal cloud" term aggressively in customer-facing conversations in mid-2008, or since I first blogged on the topic here ("Are internal clouds bogus?"), there's been a notable change in both the quality and quantity of the discussion.

On the quality side of things: the conversation is no longer about whether this concept makes sense, but instead about who is doing it and the distinctions necessary for other companies to really get there. (Our own example: our recent webcast was explicitly about the steps toward creating an internal cloud.) This qualitative step forward is a good sign that the hype is starting to get outpaced by a little bit of real-world execution.

As for quantity, let's just say that my Google alerts for "private clouds" and "internal clouds" are going crazy. For fun (more or less), I set up "dueling Google alerts" on these two specific phrases a few months back. Some days they are more weighted toward one term, some days toward the other ("private clouds" won today, 8 mentions to 6). But the reality is that if I didn't have Google limiting their appearance in my inbox to only once a day, I wouldn't be able to keep my head above the, well, clouds.

Tuesday, April 7, 2009

Webcast polls: Fast progress on internal clouds, but org barriers remain

Today's free advice: you should never miss out on the opportunity to ask questions of end users. Surprise, surprise, they tell you interesting things. And, yes, even surprise you now and again. We had a great opportunity to ask some cloud computing questions last week, and found what looks like an interesting acceleration in the adoption -- or at least consideration -- of internal clouds.

As you've probably seen, Cassatt does an occasional webcast on relevant data center efficiency topics and we like to use those as opportunities to take the market's temperature (even if we are taking the temperature of only a small, unscientific sample). Back in November, we asked attendees of the webcast Cassatt did with James Staten of Forrester (covering the basics of internal clouds) some very, well, basic questions about what they were doing with cloud computing. The results: enterprises weren't "cloudy" -- yet. 70% said they had not yet started using cloud computing (internal or external).

Last Thursday we had another webcast, and again we used it as an opportunity to ask IT people what they actually are doing with internal clouds today. As expected, end users have only just started down this path and they are being conservative about the types of applications they say they will put into an internal cloud at this point. But you'd be surprised how much tire-kicking is actually going on.

This is a bit of a change from what we heard from folks back in the November Forrester webcast. In last week's webcast we were a little more specific, focusing our questions on internal clouds, but the answers definitely suggested that people are farther along.

Some highlights:

The webcast itself: tips on how to create an internal cloud from data center resources you already have. If you didn't read about it in my post prior to the event, we had Craig Vosburgh, our chief engineer and frequent contributor to this blog, review what an internal cloud is and the prerequisites your IT operations team must have in place (or at least agree upon) before you even start down the path of creating a private cloud. He previewed some of what he said in the webcast in a posting here a few months back ("Is your organization ready for an internal cloud?"). The second part of the webcast featured Steve Oberlin, Cassatt chief scientist and blogger in his own right, covering 7 steps he suggests following (based on direct Cassatt customer experiences) to actually get an internal cloud implementation going.

On to the webcast polling question results:

IT is just beginning to investigate internal cloud computing, but there's significant progress. The biggest chunk of respondents by far (37%) were those who were just starting to figure out what this internal cloud thing might actually be. Interestingly, 17% had some basic plans in place for a private cloud architecture and were beginning to look into business cases. 7% had started to create an internal cloud and 10% said they were already using an internal cloud. Those latter two numbers surprised me, actually. That's a good number of people doing serious due diligence or moving forward fairly aggressively.

One word about the attendee demographics before I continue: people paying attention to or attending a Cassatt webcast are going to be more likely than your average bear to be early adopters. Our customers and best prospects are generally large organizations with very complex IT environments -- and IT is critical to the survival of their business. And, I'm sure that we got a biased sampling because of the title of our webcast ("How to Create an Internal Cloud from Data Center Resources You Already Have"), but it's still hard to refute the forward progress. Another interesting thing to note: we had more registrations and more attendees for this webcast than the one featuring Forrester back in November. I think that's another indication of the burgeoning interest level in the topic (and certainly not a ding at Forrester or their market standing -- James Staten did a bang-up job on the November one).

Now, if it makes the cloud computing naysayers feel any better, we did get 10% of the respondents to the first polling question saying they had no plans to create an internal cloud. And, there was another 20% who didn't know what an internal cloud was. We were actually glad to have that last group at the event; hopefully they had a good feel for some basic terminology by the end of the hour.

IT organizational barriers are the most daunting roadblocks for internal clouds. At the end of Craig's section of the webcast, he recapped all the prerequisites that he mentioned and then turned around and asked the audience what they thought their organization's biggest hurdles were from the list he provided. Only one of the technical issues he mentioned even got votes. Instead, 45% of the people said their organization's "willingness to make changes" was the biggest problem. A few (17%) also mentioned problems with the willingness to decouple applications and services from their underlying compute infrastructure -- an issue that people moving to virtualization would be having as well. 5% weren't comfortable with the shifts in IT roles that internal clouds would cause.

So, despite the 17% that said they had the prerequisites that Craig mentioned well in hand, this seems to be The Big Problem: how we've always done things. Getting a whole bunch of very valuable benefits still has to overcome some pretty strong organizational and political inertia.

IT isn't sure what its servers are doing. One of the 7 steps Steve mentioned was to figure out what you already have before trying to create an internal cloud out of it. Sounds logical. However, by the look of things from our recent survey work and in this webcast poll, this is a gaping hole. Only 9% said they had a minute-by-minute profile of what their servers were doing. Instead, they either only had a general idea (41%), they knew what their servers were originally set up to do but weren't sure that was still the case (24%), or they didn't have a really good handle on what their servers were doing at all (18%). Pretty disturbing, and as Steve mentioned on the webcast, it's important to get this information in hand before you can set up a compute cloud to handle your needs. (We found this problem so prevalent with our customers that Cassatt actually created a service offering to help.)
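
As a minimal sketch of that first step -- assuming nothing fancier than per-server utilization samples, which most monitoring tools can already export -- the profiling can start as simply as this (hypothetical code, not our service offering):

    # Sketch: build a rough utilization profile per server from periodic samples.
    # Assumes you can already collect (server, hour_of_day, cpu_percent) samples
    # from whatever monitoring you have; the sampling itself is out of scope here.
    from collections import defaultdict
    from statistics import mean

    def profile(samples):
        """samples: iterable of (server, hour_of_day, cpu_percent) tuples."""
        by_server = defaultdict(lambda: defaultdict(list))
        for server, hour, cpu in samples:
            by_server[server][hour].append(cpu)
        report = {}
        for server, hours in by_server.items():
            hourly = {h: round(mean(vals), 1) for h, vals in sorted(hours.items())}
            report[server] = {
                "hourly_avg_cpu": hourly,
                "peak_hour": max(hourly, key=hourly.get),
                "mostly_idle": all(v < 10 for v in hourly.values()),  # pool candidate
            }
        return report

    # Example with made-up numbers:
    print(profile([("web01", 9, 62), ("web01", 9, 70), ("web01", 3, 4),
                   ("db02", 9, 15), ("db02", 3, 2)]))

Crude, but even a report like that tells you which machines are candidates for a shared pool and which hours matter, which is exactly the information the 91% without a minute-by-minute profile are missing.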

Test and development apps are the first to be considered for an internal cloud. In the final polling question (suggested from a question I posed on Twitter), we asked "What application (or type of application) was being considered to move to an internal cloud first?" And, despite the data nightmare that would ensue, we decided to let the answer be free-form. After sifting through and trying to categorize the answers, we came up with roughly 3 buckets of responses. People were interested in trying out an internal cloud approach first with:

· Development/test or non-mission-critical apps
· Web servers, often with elastic demand
· New, or even just simple, apps

While a few people also said back-up or DR applications (you can read about one approach to DR using an internal cloud in a previous post) and some pointed to specific apps like electronic health records, most were looking to try something with minimal risk. A very sensible approach, actually. It matches the advice Steve gave everyone, to be honest (step #3: "Start small").

For those who missed the webcast, we've posted the slides on our site (a quick registration is required). Helpful hint when you get the deck: check out the very last slide. It's a great summary of the whole thing, subtly entitled "The 1 Slide You Need to Remember about Creating Internal Clouds." The event recording will also be posted shortly (to make sure you are notified when it is, drop us an e-mail at info@cassatt.com).

In the meantime, let us know both what you thought was useful (or not) about the webcast content, and also what topic we should cover in the follow-on webcast that we've already started planning. Hybrid clouds, anyone?