The myth of the open cloud


Cloud computing has grown up fast. This is good news by and large: a creative maelstrom of rapidly prototyped technologies has been working to find a natural, symbiotic fit with one another.

While this storm has fuelled both innovation and competitiveness, it has equally thrown up challenges related to interoperability and interconnectivity.

This would explain the call for open standards. While they were initially billed as a panacea for cloud connectivity, open standards are not without their pitfalls.

Notwithstanding the ‘open cloud wars’ brought on by the arrival of OpenStack, CloudStack, Eucalyptus and others, surely the mere existence of more than one open standard must, in itself, limit openness in any general sense?

Specific open cloud limitations

In reality the limitations are quite specific. Cloud computing evangelist David Linthicum has pointed to spiralling industry concern over the use of features (such as APIs) tied to one particular cloud or another, as this in itself represents some level of proprietary alignment – even if it is to an open standards-based cloud technology.

“The reality is that using any technology, except the most primitive, causes some degree of dependency on that technology or its service provider,” writes Linthicum. The self-styled cloud ‘visionary’ and CTO/founder of Blue Mountain Labs speaks a lot of sense. Indeed, while open standards are fine in principle, no collective group of vendors is ever going to “fly in close enough formation” (as he puts it) to allow data to flow between them with complete freedom. It simply doesn’t make good economic sense.
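To make that dependency concrete, here is a minimal sketch assuming Apache Libcloud, a cross-provider compute abstraction for Python (the credentials are placeholders, and exact parameter names vary between Libcloud versions and drivers). Even a deliberately portable library exposes provider-specific extras through its ex_-prefixed arguments, and the moment we lean on one of them we are aligned to that provider again.

```python
# A minimal sketch of Linthicum's point, assuming Apache Libcloud's
# compute abstraction. Credentials and the extension argument below
# are illustrative placeholders.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

cls = get_driver(Provider.EC2)            # swap for Provider.OPENSTACK, etc.
driver = cls('ACCESS_KEY', 'SECRET_KEY')  # placeholder credentials

sizes = driver.list_sizes()               # portable calls: every driver offers these
images = driver.list_images()

# The useful knobs, however, tend to be provider extensions ('ex_' arguments),
# and using them re-creates exactly the dependency described above.
node = driver.create_node(
    name='web-01',
    size=sizes[0],
    image=images[0],
    ex_keyname='my-ec2-keypair',          # EC2-specific extension argument
)
```

Swap the driver and the portable calls should still work; the ex_ arguments, and anything built on top of them, have to be rewritten.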

VMware’s chief technologist for Europe, the Middle East and Africa (EMEA), Joe Baguley, speaks from a similar corner. Insisting that total openness is indeed a myth, Baguley points out that lock-in is inevitable, but that we do have a choice over the level of infrastructure or solution at which we get locked in. So do we accept lock-in at the IaaS level, at the PaaS level, with regard to a particular virtualisation channel, or only at the SaaS application level?

“All across our industry there are some generic standards that enable interoperability (TCP/IP for example), but at some point you look to use more advanced features provided by a vendor or solution that will then lock you in to that stack,” said Baguley.

Like Unix, like cloud

So, to use his example, do we consider ourselves to be locked in to TCP/IP? Or are we locked into our switch or router vendor? Baguley suggests that we’ve seen this all before with the old Unix open systems of the early 90s, which were all meant to work together. But then came HP-UX, which was a little different from AIX, which in turn differed from Solaris, and so on.

The problem stems, of course, from the legacy lock-in concerns that have always cast anxiety over a CIO’s decision to adopt a certain flavour of IT stack. When we choose a particular database, language, application server and web component, we logically close a few doors. We can then be seen (and largely have been) to throw as much integration technology as possible at these application silos, but ultimately we will always be restricted by some degree of lock-in.

It would seem there is a hunt for ‘ultimate portability’ in the world of cloud, in the hope that perhaps someday we will be able to move our data stacks and workloads from one cloud provider to another. But is it the nature of data itself that poses a more fundamental barrier to interoperability?

“Though ultimate portability is feasible in the purest sense in some cases, it is very difficult to achieve,” argues Baguley. “At the IaaS layer at least, you have the 'standard container' of a virtual machine and the OVF standard helps with this. But what actually locks you in is your data i.e. it is big, it gets bigger all the time and becomes hard to move at speed.”
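Baguley’s point about data is easy to put rough numbers on. The back-of-the-envelope sketch below (the dataset sizes, link speed and efficiency factor are illustrative assumptions, not measurements) shows why even a modest data estate resists being moved ‘at speed’ between providers.

```python
# Back-of-the-envelope estimate of how long it takes to shift a dataset
# between clouds over a sustained network link. All figures are
# illustrative assumptions.

def transfer_days(dataset_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days needed to move `dataset_tb` terabytes over a `link_gbps` link,
    assuming only `efficiency` of the nominal bandwidth is usable."""
    bits = dataset_tb * 1e12 * 8               # terabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # usable bits per second
    return bits / usable_bps / 86_400          # seconds -> days

for size_tb in (10, 100, 500):
    print(f"{size_tb:>4} TB over a 1 Gbps link: ~{transfer_days(size_tb, 1.0):.0f} days")
# Roughly: 10 TB -> ~1 day, 100 TB -> ~13 days, 500 TB -> ~66 days
```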

If not open standards, are de facto standards good enough?

If we accept that open cloud standards are no cure-all, what choices do we have? If not open standards, will de facto standards suffice? Is the evolving pile of Apache 2-licensed code that is OpenStack ‘enough openness’ to get us by? Should we accept VMware’s vision of vCloud, backed by some 150 public providers, alongside its concept of the ‘software-defined datacentre’ and its claims of hardware independence?

For many theorists (or indeed practically tasked CIOs), the next step here may be to look at the option of software-defined networks. In this scenario we are not locked to a particular piece of hardware (or indeed a particular architectural standard) to support our choice of application and/or service.

The danger here is that (if we don’t buy VMware vCloud) we then look to build custom-tailored cloud management software to oversee our re-architecting needs as they arise. Can you guess what happens next? We get locked into a prison of our own making and have to exist within confines that we ourselves have defined and established.

Does it get more open further up the stack?

We’ve looked mainly at infrastructure-layer technologies here and focused on ‘open IaaS lock-in’, to coin a new and slightly uncomfortable phrase. Whether PaaS at the platform level should be any less of a lock-in is questionable. Even if we subscribe to CloudFoundry and its open source support for a large number of languages and services, we’re still tied to one corner, albeit an extremely portable, open standards-focused corner.

Applications and SaaS do not necessarily get any easier either. If we choose an online SaaS vendor for ERP or CRM (or some other central computing function), exactly how easy is it to export our data from one application and import it into another if we decide we want to change?
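By way of illustration only, the sketch below shows the sort of glue code such a switch tends to demand: one vendor’s export rarely matches another’s import schema, so every migration means a bespoke field-mapping layer. All of the field and file names here are hypothetical.

```python
# A hypothetical sketch of the glue code a SaaS-to-SaaS migration tends to
# require: mapping one CRM's CSV export onto another's import schema.
# All field names and file names are invented for illustration.
import csv

OLD_TO_NEW = {
    "Account Name": "company",
    "Primary Contact": "contact_name",
    "Contact E-mail": "email",
    "Annual Value (GBP)": "annual_value",
}

def migrate(export_path: str, import_path: str) -> None:
    """Re-map rows from the old vendor's export into the new vendor's schema."""
    with open(export_path, newline="") as src, open(import_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=list(OLD_TO_NEW.values()))
        writer.writeheader()
        for row in reader:
            writer.writerow({new: row.get(old, "") for old, new in OLD_TO_NEW.items()})

# migrate("old_crm_export.csv", "new_crm_import.csv")
```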

The bottom line is that while we may centralise upon open cloud technologies that do indeed boast open source code at their heart, we’re still locking ourselves into one ecosystem or another. Perhaps we shouldn’t talk about open cloud standards; perhaps we need to talk about ‘more open than proprietary’ cloud standards.

Open cloud science fiction

Does open cloud guarantee portability and interoperability? Linthicum’s answer here is a firm “no”. The notion of data and its supporting applications porting from one localised version of an open cloud to another is, he says, essentially a work of science fiction.

"The myth of the 'open cloud' is being adopted with almost religious fervour. I believe that we all want emerging cloud computing solutions to support open standards and thus provide portability and avoid provider lock-in. However, instead of committing to a single cloud and technology provider we're committing to ecosystems that we may find are just as closed as proprietary cloud solutions," he said.

Linthicum continued: "We seem to be good at following standards for more ‘permeative’ standards, such as TCP/IP or 802.11. However, cloud computing brings us much further up the stack, and that is where we can't seem to agree on just a limited number of standard approaches and mechanisms. This is somewhat an old problem that we've yet to solve."

With our still-fragmented open cloud standards, we are left with what is essentially a choice between non-portable ecosystems. So then, OpenStack is from Venus, CloudStack is from Mars, and we currently live on planet Earth anyway.

Only science fiction can save us now.