What is meant by hybrid cloud?
A hybrid cloud is an infrastructure solution that combines any of: on-premise physical machines, an on-premise private cloud, and public cloud elements. For example, you might use Red Hat virtual machines provided by an on-premise VMware vSphere installation alongside virtual machine images from Amazon Web Services. There are permutations that allow a user to define setup criteria in order to short-cut the setup process, but essentially these are all just ways of creating machines to form part of your infrastructure. As has been said before, “there is no cloud, it’s just someone else’s computer”.
The glass is half-full…
One of the main drivers for looking to public cloud solutions is ease of management. Public cloud solutions are typically driven by an administration portal, which makes it very simple to call into existence infrastructure, platforms and services that your organisation can use almost instantly, avoiding any lengthy setup process. VMware vSphere allows this too, to a degree; however, managing something like a VMware vSphere server does not come for free. This kind of server is large and complex and generally requires at least one full-time administrator (often several) to maintain it. It needs patching, and perhaps redundancy across several sites that must be set up to fail over correctly and tested regularly, and it comes with all the management headaches of running your own server infrastructure.
With a public cloud solution, however, all of this is provided for you: you don’t need to worry about redundancy (disaster recovery is built into the solution), and the cloud provider takes care of patching and testing its own infrastructure.
That said, there are certain situations (in financial institutions, for example) where data is of such importance that it must be ring-fenced and never transferred, even transiently, to another country or region. Some cloud providers can isolate specific regions, but this may not be enough reassurance, so having your data held on-site (particularly in a production environment) may be a ‘must-have’.
Cost, particularly for low-powered, short-lived environments such as you would find in development or test, is significantly reduced with public cloud solutions: you remove the maintenance overhead of the setup, the initial provisioning of the machines and the resources consumed on your virtualised infrastructure, and instead pay a flat fee per GB, CPU, etc. These savings are less apparent for larger-scale, permanent environments, where the cost becomes a mounting monthly fee rather than a one-off purchase you have already paid for.
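To make that trade-off concrete, here is a minimal sketch of the arithmetic, with entirely hypothetical prices (real pricing varies by provider): a short-lived, pay-per-use environment beats an up-front purchase only while total usage stays low.

```python
# Hypothetical figures for illustration only; real pricing varies by provider.

def cloud_cost(hours_used, rate_per_hour):
    """Public cloud: pay only for the hours the environment actually runs."""
    return hours_used * rate_per_hour

def on_prem_cost(capex, admin_per_month, months):
    """On-premise: one-off hardware purchase plus ongoing administration."""
    return capex + admin_per_month * months

# A test environment used 40 hours/month for 6 months at an assumed $0.50/hour:
print(cloud_cost(40 * 6, 0.50))   # cheap while usage is intermittent
# The same capacity bought outright (assumed $5,000) plus $200/month of admin:
print(on_prem_cost(5000, 200, 6)) # the one-off cost dominates in the short term
```

Run the numbers the other way, with a machine in constant use for years, and the monthly cloud fee eventually overtakes the one-off purchase, which is exactly the point made above.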
So, a hybrid cloud solution allows you to pull the benefits from both sides of this equation, reducing management overhead, datacentre rack space and ultimately, cost.
Hybrid cloud, however, does not come without its challenges. Because you may be using solutions from multiple vendors, there are more things to know (i.e. you need more experts, or experts in different technologies, to run your business). Take Amazon Web Services: a fantastic suite of technology that provides amazing service, but can you simply sign up and have your existing infrastructure team use it? Probably not; it has grown so large and complex that there are courses several weeks long that cover only its basics.
Additionally, the cost savings achieved by the ability to call short-lived test environments into existence depend upon the time taken to get from a machine being provisioned to the machine being usable for its purpose (QA, demos, etc.). If the machine sits provisioned but unused for any amount of time while installation, configuration, deployment and so on take place, that cost saving gets smaller and smaller.
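That erosion is easy to quantify. As a rough sketch (the hourly rate below is assumed, not taken from any real provider), a provisioned machine bills from the moment it exists, so every hour of manual install and configuration is pure waste:

```python
HOURLY_RATE = 0.40  # assumed public-cloud rate in $/hour, for illustration only

def wasted_spend(setup_hours, rate=HOURLY_RATE):
    """Cost incurred between a machine being provisioned and being usable."""
    return setup_hours * rate

# A working day of manual install/config/deploy versus an automated pipeline:
manual = wasted_spend(8)        # 8 hours by hand
automated = wasted_spend(0.25)  # roughly 15 minutes when fully automated
print(manual, automated)
```

Multiply that gap by every environment you spin up in a year and the case for automating the provision-to-usable step makes itself.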
Are your processes the same? When you start with two environments, one private cloud and one public cloud, do you follow exactly the same steps to apply your special sauce? Perhaps different teams perform the steps, or there are different things to do because the machines differ in some way. Differences like these are poison to DevOps, where consistency is key.
Why not just fill up the glass?
The best way of plugging these gaps in the hybrid cloud is to use a tool: something that will speed up delivery of your environments, ensure consistency and reduce overhead.
One such tool is Talos. Talos provides a common way of applying configurations and applications to your middleware; all environments are stored and viewable within the tool, and appear the same regardless of their location or provider.
You create configuration templates for WebSphere Application Server, Portal, Commerce, Tomcat, Liberty and many other middleware products, and apply them directly from the tool. The templates have associated variables which store the environmental differences between environments. All of this means you have a single point of access to all your environments (public, private and bare metal), and you can be sure that every environment is built and treated in exactly the same way. Because the cloud details are held in Talos itself, your templates are applied to new environments exactly as they are to your local ones, and you can even assign different nodes of the same cluster to different provider types.
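As an illustration of the template-plus-variables idea, the pattern looks something like this in Python. The template text, variable names and values below are invented for this sketch; they are not Talos's actual template syntax.

```python
from string import Template

# Hypothetical template, purely to illustrate the pattern described above;
# Talos's real templates and their syntax will differ.
server_template = Template(
    "jvm_heap=${heap}\ndatasource_url=${db_url}\ncluster=${cluster}"
)

# One set of variables per environment captures the only allowed differences.
environments = {
    "dev":  {"heap": "512m", "db_url": "jdbc:db2://dev-db:50000/APP",  "cluster": "dev01"},
    "prod": {"heap": "4g",   "db_url": "jdbc:db2://prod-db:50000/APP", "cluster": "prod01"},
}

def render(env_name):
    """Every environment goes through the same template, so builds stay consistent."""
    return server_template.substitute(environments[env_name])

print(render("dev"))
```

The point of the design is that the template is the single source of truth: dev and prod can only differ in the values fed into it, never in the steps used to build them.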
When using Talos with a public cloud, you specify the details of the environment to be created, so provisioning happens at build time and the environment is usable straight away, with no time wasted. Talos plugs into several de facto industry-standard orchestration engines (Ansible, Jenkins and UrbanCode Deploy, to name three), so you get from base machine to fully set up and ready to use in a single step.
If this blog post has piqued your interest, I strongly recommend exploring Talos yourself by downloading the 30-day free trial here.