IT’s hybrid future

Surveys tell us that the majority of corporations are planning some level of hybrid cloud for their mainstream computing. This is a radical departure from business as usual, so it’s worth taking a step back and examining what a hybrid cloud is, how it can be implemented, what benefits it brings, and the potential pitfalls on the way to a successful installation.

A hybrid cloud is a mashup of an in-house cloud capability with one or more public cloud services, allowing flexibility in where workloads run. We all kind of know what a public cloud is, but we should be careful to keep current with the rapidly evolving feature sets of AWS, Google and Azure. That’s because a good hybrid design should take advantage of the best of the public cloud and, frankly, should track the public cloud’s capabilities quite closely.

With that in mind, the choices we can make for cloud software in the private cloud segment are quite limited. OpenStack is the leading choice for a cloud suite, since it is both open-source and comprehensive in services and features. It’s backed by many IT vendors and has a strong ecosystem of tools built around it. OpenStack is likely the preferred approach for Linux shops.

The Windows shop should be looking at Azure’s private cloud offering (Azure Stack), which is essentially a repackaging of the public cloud for in-house use. This is more turnkey than OpenStack, but less flexible and less feature-rich. Of course, either cloud stack will run Linux or Windows instances. More importantly, both will support Docker containers. (It is possible AWS or Google might enter the private cloud space with turnkey cloudware, but AWS would rather users go all-in on the public cloud.)

Hardware for the cloud platform is a no-brainer in one sense. COTS x64 servers and related gear using open interfaces are the clear winner in the iron stakes. Frankly, it really doesn’t matter which software stack is used. The hardware platform is the same.

We’ve addressed the “how” of building a hybrid. Next, let’s look at the “why” of a hybrid approach. The answer has two parts. One is workload flattening. The private portion of the cloud can be sized for the average workload, rather than buying a lot of extra gear to handle peak loads. This can mean substantial savings in hardware and software licenses. Peak loads are serviced by adding public cloud instances, a process called cloud-bursting. Such instances are paid for by the minute or the hour and go away once the peak abates, so they are much more cost-effective than buying extra gear.
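
To make the bursting mechanics concrete, here is a minimal Python sketch, assuming AWS as the public side via the boto3 library. The monitoring hook, AMI ID, instance type and thresholds are placeholder assumptions for illustration, not a prescription.

    # Minimal cloud-bursting sketch: rent public-cloud instances when the
    # private cloud saturates, release them once the peak abates.
    # The utilization feed is assumed to come from the site's own monitoring;
    # the AMI ID and instance type below are placeholders.
    import boto3

    BURST_THRESHOLD = 0.85   # burst when private capacity passes 85% utilization
    RELAX_THRESHOLD = 0.60   # release burst instances when load falls back

    ec2 = boto3.client("ec2", region_name="us-east-1")
    burst_instance_ids = []

    def check_and_burst(utilization):
        global burst_instance_ids
        if utilization > BURST_THRESHOLD and not burst_instance_ids:
            # Peak load: pay by the hour instead of owning extra gear.
            resp = ec2.run_instances(
                ImageId="ami-0123456789abcdef0",  # placeholder AMI
                InstanceType="m5.large",
                MinCount=2, MaxCount=2,
            )
            burst_instance_ids = [i["InstanceId"] for i in resp["Instances"]]
        elif utilization < RELAX_THRESHOLD and burst_instance_ids:
            # Peak over: stop paying.
            ec2.terminate_instances(InstanceIds=burst_instance_ids)
            burst_instance_ids = []

A real deployment would drive this from the monitoring system and add a cool-down period to avoid thrashing; the point is simply that burst capacity is rented only while the peak lasts.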

The second reason for a hybrid cloud is to allow complete and efficient off-loading of non-critical workloads to the public cloud. Web page front-ends and ad personalization come to mind. Such work can be set up in the public cloud very easily, since it is already designed for multi-server deployment. Part of such processing typically touches data that will be kept in-house, at least in the near term, which is why this is a hybrid solution. Updating inventories and sales records, for example, might be done on an in-house database in the private portion of the cloud.

This split of function is often driven by security concerns, but the industry’s assessment today is that public clouds are at least as good at security as private IT operations, and may even be better. The leading causes of security breaches are insider mischief and careless app-level coding or configuration, neither of which really depends on the underlying cloud or where it sits. Even so, committing mission-critical data and apps to the cloud is a learning experience spread over some years, making the hybrid approach essential for a good while yet.

Hybrid clouds clearly increase IT’s agility and lower costs, but there are downsides to installing and maintaining a private cloud. Obviously, there’s quite a lot of new code to learn, though we are moving quickly towards a “software-defined infrastructure” that will automate much of the orchestration and management in a cloud. This automation will create an environment that uses policies to control each user’s “virtual datacenter”, moving system administration from central IT to the departmental level while dramatically shrinking the administrative workload itself.
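
As a toy illustration of what policy-driven control of a “virtual datacenter” might look like, here is a hedged Python sketch; the department names, quotas and service tiers are invented for the example, not drawn from any particular product.

    # Sketch of policy-driven self-service: departmental policies are plain
    # data, and the orchestration layer checks requests against them instead
    # of routing every change through central IT. All names are illustrative.
    POLICIES = {
        "marketing": {"max_vcpus": 64, "max_ram_gb": 256,
                      "allowed_tiers": {"web", "analytics"}},
        "finance":   {"max_vcpus": 32, "max_ram_gb": 128,
                      "allowed_tiers": {"db"}},
    }

    def authorize(department, tier, vcpus, ram_gb, in_use):
        """Return True if the request fits the department's policy envelope."""
        p = POLICIES[department]
        return (tier in p["allowed_tiers"]
                and in_use["vcpus"] + vcpus <= p["max_vcpus"]
                and in_use["ram_gb"] + ram_gb <= p["max_ram_gb"])

    # A department self-services within its envelope; central IT only edits policy.
    print(authorize("marketing", "web", vcpus=8, ram_gb=32,
                    in_use={"vcpus": 40, "ram_gb": 100}))  # True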

Even so, today, OpenStack is not turnkey and has many options and features to select and integrate. To offset this, vendors provide integration services, but these can be expensive and require lock-in to specific, pricey hardware platforms. However, choosing a software-only vendor such as StrataCloud for orchestration and management tools can make both installation and operation a good deal easier.

There are other pitfalls in the hybrid approach. An important issue is data locality. Hybrid clouds have asymmetrical pathways to data, and the need to cross a WAN to reach it is a serious bottleneck. This can be mitigated by careful data management. Solutions include sharding the processing so that some portions always reside in the public cloud, which allows asynchronous reconciliation of data rather than latency-heavy real-time updates across the link.
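
One possible shape for that reconciliation is sketched below in Python, with an in-process queue standing in for a durable cloud queue such as Amazon SQS; record_sale and the batch size are illustrative assumptions.

    # Sketch of asynchronous reconciliation: the public-cloud shard appends
    # updates to a queue instead of writing across the WAN per request;
    # the in-house side drains the queue in batches on its own schedule.
    import json, queue

    wan_queue = queue.Queue()   # stand-in for a durable cloud queue such as SQS

    def record_sale(order):
        """Runs in the public cloud: enqueue the update, no WAN round-trip."""
        wan_queue.put(json.dumps(order))

    def reconcile(apply_update, batch_size=100):
        """Runs in-house on a schedule: drain queued updates in batches."""
        while not wan_queue.empty():
            batch = []
            while not wan_queue.empty() and len(batch) < batch_size:
                batch.append(json.loads(wan_queue.get()))
            for update in batch:
                apply_update(update)   # e.g. update the private inventory DB

    record_sale({"sku": "sku-1", "qty": 2})
    reconcile(print)   # a real system would write to the in-house database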

As an alternative, since data is the crucial issue in hybrid mode, it should be possible to use the cloud backup process to snapshot core data to the cloud regularly, then ship only the differences when cloud-bursting is needed. This approach minimizes the time needed to start up bursting.
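
A minimal sketch of the snapshot-plus-delta idea follows, using a plain Python dict as a stand-in for the core data store; a real system would diff database rows or storage blocks and upload to cloud object storage.

    # Sketch of snapshot-plus-delta seeding for cloud-bursting: a full copy
    # goes to the cloud on the backup schedule, and when a burst starts only
    # the records changed since that snapshot cross the WAN.
    import copy

    snapshot = {}   # what the cloud copy currently holds

    def take_snapshot(records):
        """Run on the backup schedule: ship the full data set to the cloud."""
        global snapshot
        snapshot = copy.deepcopy(records)
        # ...upload the full copy to cloud object storage here...

    def burst_delta(records):
        """Run at burst time: return only inserts/updates since the snapshot."""
        return {key: val for key, val in records.items()
                if key not in snapshot or snapshot[key] != val}

    inventory = {"sku-1": 40, "sku-2": 12}
    take_snapshot(inventory)
    inventory["sku-2"] = 9   # changes after the snapshot
    inventory["sku-3"] = 5
    print(burst_delta(inventory))   # {'sku-2': 9, 'sku-3': 5}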

The hybrid approach offers a good balance between traditional methods and an all-in public cloud. The long-term decision on whether to remain hybrid or go fully public is an economic one. With the vendor base moving towards the same low-cost servers and gear supplied by the public cloud’s ODMs, and with the expectation of much more open-source code in our future, it isn’t clear that the public cloud will keep the economic advantage it currently enjoys, so the pressure to go public may abate somewhat over time.

At the same time, Software-as-a-Service is gaining momentum and will probably provide a much-needed life-raft to users of legacy code such as COBOL. SaaS runs in the public cloud today, so operating in hybrid mode could be more complex than if all the apps were owned outright. There are security and workflow issues tied into this, again centered on dataflows, that we are only beginning to understand; on the other hand, shops that already use SaaS understand how to work with offsite apps.
