DevOps – current state of play

The most important thing in development projects is to get going quickly, both in terms of getting to market (before someone else does with a competing idea) and getting fingers on the keyboard.  All too often a good business idea or opportunity is missed because the infrastructure is not ready to support the product – or even the development of the product.  The agile method (the development approach of choice for most modern projects) tells us to “release early, release often”, and while the methodology and the project management governing it go some way toward enforcing this approach, it can’t be achieved without the right tools and a certain type of infrastructure.  Developers have long had automated build and deploy, unit testing, performance testing and feedback into the next iteration of development (did your last commit break anything? Did it slow the system down? Does it provide the functionality it is supposed to?).  Yet for a long time – and even now in many cases – this was supported by a team of ‘Ops Guys’ whose job it was to make sure the infrastructure could keep up, with no such diligent approach applied to managing the infrastructure changes themselves.

DevOps is a blurring of the lines between development and operations – it is not, as some have reported, “developers doing their own operations” (or ‘NoOps’, as it is sometimes known).  While it is entirely possible to call environments into existence that let you run code and get to integration testing very quickly, the path from Dev to Prod is almost always going to involve an operations team, however small.  That progression from conception to delivery should be handled with the same rigorous practices whether the artefact in question is a library, a web page, or a piece of configuration supporting the infrastructure or platform.  And yes, this applies to the cloud too – having your infrastructure or platform hosted as a service does not absolve you of the responsibility for ensuring your application-supporting configuration is right and managed correctly.

Automation, Automation, Automation…

Automation is almost always used to manage infrastructure configuration, but it is surprising how often that automation is a set of monolithic scripts managed by a few expensive experts in a closed fashion, needing constant attention to update and mould.  With a large scripting approach, environments can generally be created very quickly with minimal fuss, and new infrastructure changes are quickly retrofitted into the scripts.  Infrastructure changes are introduced and progressed through the SDLC just as code is – but how often in these cases are the scripts version controlled?  How often are they tied to a particular change?  How often are they fully tested (yes, performance testing your infrastructure changes is a thing)?  Often with large scripted automation there is only one version of the infrastructure: the latest.  Why would you want to go back to a previous version of the infrastructure?  Because configuration is inextricably linked to code – the system configuration is what ties the code to the platform.
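
To make that concrete, here is a minimal sketch in Python, assuming the infrastructure definitions live in the same git repository as the application code and that releases are marked with tags – the repository layout and tag naming here are illustrative assumptions, not a prescribed approach:

    import subprocess

    def checkout_release(repo_path: str, release_tag: str) -> None:
        """Check out code *and* infrastructure config at the same tag."""
        # One checkout yields a consistent code/config pair -- there is
        # no separate, untracked 'latest' version of the infrastructure.
        subprocess.run(["git", "-C", repo_path, "checkout", release_tag],
                       check=True)

    def infrastructure_version(repo_path: str) -> str:
        """Report which tagged version of the infrastructure is applied."""
        result = subprocess.run(
            ["git", "-C", repo_path, "describe", "--tags"],
            check=True, capture_output=True, text=True)
        return result.stdout.strip()

Because every change to the scripts now lands as a commit, an infrastructure change can be reviewed, tied to a ticket and rolled back exactly like a code change.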

Every time there is a problem in a post-integration environment, it is for one reason: differences.  Differences in the deployment, the code or the infrastructure – and nine times out of ten it is the latter.  So it makes complete sense that if you want to roll back to a previous version of a codebase, the configuration base should follow suit (along with the database, LDAP configuration and any other changes that might affect functionality).  To eliminate the differences, code and infrastructure need to be connected: both stored in version control, both applied at the same time, and both integration tested, performance tested and signed off together.
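
A combined rollback might look something like the sketch below – again assuming code and configuration share a repository and tags, and with apply_configuration and deploy_code as hypothetical hooks standing in for whatever configuration manager and deployment tooling you run:

    import subprocess
    from typing import Callable

    def rollback(repo_path: str, tag: str,
                 apply_configuration: Callable[[str], None],
                 deploy_code: Callable[[str], None]) -> None:
        """Roll code and configuration back to the same tagged version."""
        # One checkout moves code, config and anything else versioned
        # alongside them (database migration scripts, LDAP settings)
        # to a single known-good state.
        subprocess.run(["git", "-C", repo_path, "checkout", tag], check=True)
        apply_configuration(repo_path)  # caller-supplied configuration step
        deploy_code(repo_path)          # caller-supplied deployment step

The point is not the particular functions but the shape: code and configuration move as one unit, so there is no window in which they can disagree.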

Tools of the trade

Using a tool, rather than hand-cranking scripts, can help you with this – but which tool?  It’s a big marketplace with a lot of options.  Most of what we see in this space are Orchestration Engines, whose primary job is to manage the flow of activity – for example Jenkins, Ansible, Puppet and IBM UrbanCode Deploy – and they are fantastic at this.  Some even offer plugins or pre-written scripts to integrate with well-known middleware, but more often than not only the top-level configuration types are supported, with the ‘gaps’ filled by scripts that you write and maintain yourself.  Other tools are classed as Configuration Managers (Chef, CFEngine, Talos) – their primary purpose is to manage the configuration of runtimes (and so, predictably, they do this job rather better than the Orchestration Engines).  However, these tools have to be ‘called’ by something, and so typically run hand-in-hand with an Orchestration Engine.
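
The division of labour is easy to see in a sketch.  Below, a hypothetical orchestration step drives a configuration manager across a set of hosts; confmgr is a made-up CLI standing in for Chef, CFEngine or Talos, and its apply subcommand is illustrative only:

    import subprocess

    def converge_hosts(hosts: list[str], config_dir: str) -> None:
        """Orchestration step: invoke the configuration manager per host."""
        for host in hosts:
            # The Orchestration Engine owns sequencing and flow of activity...
            print(f"Converging configuration on {host}")
            # ...while the Configuration Manager owns the runtime's state.
            subprocess.run(["confmgr", "apply", "--host", host, config_dir],
                           check=True)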

A Perfect World

So the answer to the question “which tool?” is probably “multiple”.  A well-managed environment will need at least an Orchestration Engine and a Configuration Manager, with the Orchestration Engine responsible both for deploying code and for setting the configuration level.  All your configuration will sit in your versioning repository alongside the code base (and be tagged as such), your automated deploy will tie in with the automated infrastructure/configuration management, and in a deployment everything will be applied together.  That is how you minimise the risk of environmental differences causing issues in the development process.
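
Pulling the pieces together, a deployment in this perfect world might have the following shape – a sketch only, with every hook below (apply_configuration, deploy_code and the test runs) a placeholder for your own tooling rather than any particular product’s API:

    import subprocess

    def apply_configuration(repo_path: str) -> None:
        """Placeholder: hand the checked-out config to your Configuration Manager."""

    def deploy_code(repo_path: str) -> None:
        """Placeholder: have your Orchestration Engine roll out the build."""

    def run_tests(suite: str, tag: str) -> None:
        """Placeholder: run integration or performance tests against the release."""

    def deploy_release(repo_path: str, tag: str) -> None:
        # Code and configuration are checked out at one tag...
        subprocess.run(["git", "-C", repo_path, "checkout", tag], check=True)
        # ...applied in one deployment...
        apply_configuration(repo_path)
        deploy_code(repo_path)
        # ...and tested and signed off as a single unit.
        run_tests("integration", tag)
        run_tests("performance", tag)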

If you want to assess your own DevOps, check out our DevOps Health Check, which will help you create a culture of collaboration between development and business operations.

About the Author: Philip Leaper

Philip Leaper is a senior technical consultant who has specialised in IBM solutions for the last 16 years. For the last six years his main focus has been DevOps and automation, using best-of-breed technologies to bring organisations real value from their DevOps investments. Phil currently works for Avnet on the team that developed Talos, working directly with customers on Talos projects and contributing new functionality to the code where possible. Phil lives in Aylesbury (UK) and is engaged with a variety of large enterprise customers across Europe and the US.
