Cloud Journey Part 1: An introduction and a brief history

The journey towards a private cloud data centre, which may encompass leveraging new technologies, adopting new processes and service delivery models, and potentially changing roles and responsibilities, can seem a daunting, mammoth-sized undertaking.

I wanted to document some of the challenges and experiences that have shaped my cloud adoption project, in the hope that they strike a chord or offer some insight for those embarking on the same journey.

As a bit of background, I work (at the time of writing) in the financial services sector in the UK. Not a large enterprise, but maybe the third or fourth largest player in our particular market segment, with around 5,000 employees in the Group. The potted history of our IT is that we moved away from mainframe technologies circa 2001 and completely standardised on x86 server platforms and Windows Server operating systems. Along with many other companies, the move from a consolidated point of management, i.e. the mainframe, to an x86 architecture that inevitably started to sprawl led to us adopting VMware ESX as a consolidation platform around 2008. As with a lot of other companies that adopted virtualisation, this was mainly to reduce the cost of deploying under-utilised physical tin. We fell into the same trap as many others before us and continued to manage the estate as if it were made up of physical servers, with build processes, backup methodologies and so on never really catching up with what the technology offered us.

The ability to spin up virtual machines quickly gave us a level of agility not previously available; however, the standardisation that had been possible with the slower approach required for deploying physical servers didn’t keep pace, and we started to suffer from VM sprawl. The managed server estate grew exponentially, the management overhead grew in line with it, standardisation went in the opposite direction, and the people resource available to look after the estate actually shrank as the banking crisis and credit crunch started to bite. This also pushed us towards ‘sweating’ assets for longer than we had done previously, which led to capacity issues as we took a series of tactical steps to juggle server resources around.

This left the IT function in a difficult position: the quality of the product was suffering, support was suffering, and the business was suffering because we couldn’t respond quickly enough to requests due to the capacity constraints.

I first started to look at private cloud in anger in 2011. Cloud as a buzzword had probably been around for two or three years, but it always seemed to be in the context of third-party hosting. To be honest, in my (at the time) cynical mind it was just hosting services rebadged. In my defence, that is probably because that is what a lot of the services being marketed at the time were!

Having a very Microsoft-centric background, I’d started to play with Hyper-V a bit (2008 R2 SP1 specifically), and I had attended a UK event, a roll-up of the previous Microsoft Management Summit in Las Vegas, showcasing what to expect from System Center 2012. Orchestrator and Virtual Machine Manager demonstrated the automated build of servers from a self-service portal, giving rise to the promise of a private cloud. The penny dropped and I saw the solution to the problems we had been having: we needed to adopt a service-provider mentality to delivering IT solutions.

I subsequently deployed Hyper-V and SCVMM as a proof of concept for one of our larger subsidiaries, as they had adopted agile development and the speed at which we were able to deliver VMs to them was holding them up. They had been installing type-2 hypervisors on their local PCs, bridging the networks and then adding these VMs to the AD domain (every user domain account is limited in the number of computers it can join to the domain). Our AD admins and Security bods had kittens, so the idea of creating an entirely segregated development environment seemed like a great use case to prove.
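
As an aside, the join limit mentioned above is controlled by the Active Directory ms-DS-MachineAccountQuota attribute, which by default allows a standard user to join ten computers to the domain. For anyone curious, a minimal sketch of reading that value with the Python ldap3 library is below; the domain controller name, base DN and credentials are purely illustrative placeholders, not anything from our environment.

    # Minimal sketch: read ms-DS-MachineAccountQuota, the per-user computer-join
    # limit mentioned above (defaults to 10). Server, DN and credentials are placeholders.
    from ldap3 import Server, Connection, BASE, ALL

    server = Server("dc01.example.local", get_info=ALL)        # hypothetical domain controller
    conn = Connection(server, user="EXAMPLE\\reader",          # hypothetical read-only account
                      password="change-me", auto_bind=True)

    # The quota is stored on the domain naming context itself
    conn.search(search_base="DC=example,DC=local",
                search_filter="(objectClass=*)",
                search_scope=BASE,
                attributes=["ms-DS-MachineAccountQuota"])

    print(conn.entries[0]["ms-DS-MachineAccountQuota"])        # 10 unless an admin has changed it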

I didn’t get my POC back, which was a success and a failure in equal measure. A success because I had proven the concepts and demonstrated a clear need (an internal market?!) for private cloud functionality; a failure because I had been too focussed on proving the technology and not enough on thinking about how it might transition from POC to a production service. In my naivety I expected a follow-on project to design and deploy it properly, and the POC really wasn’t suitable to be left running as it was. However, office politics being what they are, once the teams had the functionality they didn’t want to spend any more money on it, and they continued to use it as it stood for another 18 months until a physical hardware failure finally caught up with it. There was none of what you would associate with a production deployment: no HA, no DR, no backups and so on.

I realised I was also guilty of having facilitated the deployment of shadow IT, lesson 101 on Gartner’s ‘what not to do when adopting cloud’ checklist! There were a couple of lessons learned here. The first was to have a plan that stipulates what happens once the POC is proven, and to get a mandate approved before starting; the second was that I needed to sell a bigger vision than simply automating server builds for developers.

After the initial POC work was completed, my focus moved to the parent organisation, as the same issues around agility, standardisation and so on were evident there. The System Center 2012 POC proved to be a valuable tool in terms of arming myself with positive feedback. Developers seem to have a natural inclination to bypass the operational IT areas and look to AWS or Azure to fulfil their agility requirements. Given the industry we are in, simply opening the gate to public cloud consumption was difficult, and the ability to demonstrate that we could offer a similar service internally proved to be a boon.
