Cloud Journey Part 5: Getting the Requirements Defined

The workshops described in the last post helped us flesh out requirements that we could take to our business sponsors to get buy-in for the project. The business drivers will differ from company to company, I guess, but working in a highly regulated sector results in a high degree of nervousness around the adoption of public cloud. The business had come out of the 2008 financial crisis and was now in a period of growth and investment. Projects were coming thick and fast, so the key driver was to deploy systems that are both cost effective and that can be scaled or delivered quickly enough to keep pace with business change, all whilst not introducing risk that might incur the wrath of the regulators. Public Cloud seemed to offer everything the business wanted, but with risk attached; Private Cloud offered the same benefits without the risk, although it would take a bit longer to get there. We decided on Private Cloud.

The requirements below are the key ones that drove our project.

Automation
Most of the key benefits of adopting a cloud delivery model come from automation; however, my view is that it's really the exercise of mapping the processes that will end up in the automation tool that starts to deliver value.

Automation allows you to deliver solutions more quickly, leading to improved agility. Even more importantly, though, we found that the exercise of mapping our processes resulted in us collating a lot of information that lived in people's heads or was scattered across different teams, and this allowed us to document and review those processes to ensure they were as Lean and complete as possible. Collating this information so that it could be mapped and represented in an automation tool mitigated the risk associated with people hoarding information. It effectively democratised our processes so that anyone could look at them and deliver them. Of course this could have happened without Cloud, but the Cloud project was the catalyst for making it happen in our organisation.

Automation also has the added benefit of ensuring that services are delivered consistently, as it removes much of the scope for human error. During our requirements phase we were given examples of test environment builds that took 12 months to complete. This was partly due to the number of disparate teams involved, each with their own mini processes, and because those processes were manual there were invariably issues when the environments were delivered to our development and test teams. The environments then had to be sent back around all the teams involved in the build to establish where the issue had occurred, ad infinitum (or at least it seemed so at the time). So, in short, automation also ensures the quality of the end product.
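To give a feel for what this looks like once a process lands in an automation tool, here's a minimal Python sketch. It is purely illustrative: the step functions, names and environment spec are hypothetical stand-ins, not our actual tooling.

```python
# A purely illustrative sketch of a mapped build process expressed as an
# ordered set of automated steps. All step functions, spec fields and
# names are hypothetical placeholders.

def provision_vm(spec):
    """Carve a VM out of the resource pool (placeholder)."""
    print(f"Provisioning {spec['name']} with {spec['cpus']} vCPUs")

def configure_os(spec):
    """Apply the standard hardened OS build (placeholder)."""
    print(f"Applying standard OS image to {spec['name']}")

def deploy_app(spec):
    """Hand the box over to the application layer (placeholder)."""
    print(f"Deploying {spec['app']} onto {spec['name']}")

# The mapped process: every step that used to live in someone's head,
# now explicit, reviewable and executed the same way every time.
BUILD_STEPS = [provision_vm, configure_os, deploy_app]

def build_environment(spec):
    for step in BUILD_STEPS:
        step(spec)

build_environment({"name": "test-web-01", "cpus": 4, "app": "web-tier"})
```

The point isn't the code itself; it's that once the process is captured this way, it runs the same every time and anyone can read it.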

Service Catalogue
Understanding which processes to investigate, map and automate was guided by the services we defined as the first items we were going to deliver from our service catalogue. This was a collaborative effort with our Systems Development areas, as they were going to be the first and probably biggest users of the Cloud platform. In effect the catalogue ranged from Infrastructure Services (IaaS) through to Platform Services (PaaS), with a degree of automation around delivering code onto those services. This would support any efforts to adopt a more automated application lifecycle management model for delivering code, and any future move towards a DevOps-style culture.
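As a rough illustration of the shape this took, here's a hypothetical sketch of a catalogue slice ranging from IaaS to PaaS; the tiers, item names and sizings are all invented for the example, not our real catalogue.

```python
# A hypothetical slice of a first-cut service catalogue, ranging from
# IaaS through to PaaS. Tiers, item names and sizings are invented
# for illustration only.
SERVICE_CATALOGUE = {
    "iaas": {
        "small-linux-vm": {"cpus": 2, "ram_gb": 4, "disk_gb": 50},
        "large-linux-vm": {"cpus": 8, "ram_gb": 32, "disk_gb": 200},
    },
    "paas": {
        "java-app-server": {"base": "large-linux-vm", "stack": ["jdk", "tomcat"]},
        "sql-database": {"base": "large-linux-vm", "stack": ["postgres"]},
    },
}

def describe(tier, item):
    """Show what a given catalogue request would actually deliver."""
    print(tier.upper(), item, "->", SERVICE_CATALOGUE[tier][item])

describe("iaas", "small-linux-vm")
describe("paas", "java-app-server")
```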

Showback
Understanding the cost of each of the services we delivered was also key for us. Not necessarily so that we could charge the business back, but rather so that we could demonstrate where costs were being incurred, and as a mechanism for helping to reclaim resources that were no longer being used. In theory this would help mitigate the potential for server hugging and show what new projects were going to cost from an IT infrastructure perspective.
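The mechanics behind showback are simple enough to sketch: attach a unit cost to each catalogue item and roll usage up per consumer. The rates, projects and counts below are invented purely to show the idea.

```python
# A hedged sketch of showback: attach a unit cost to each catalogue item
# and roll usage up per project. Rates, projects and counts are invented
# purely to show the mechanics.
UNIT_COST_PER_MONTH = {"small-linux-vm": 40.0, "large-linux-vm": 150.0}

usage = [  # (project, catalogue item, count)
    ("project-a", "small-linux-vm", 10),
    ("project-a", "large-linux-vm", 2),
    ("project-b", "large-linux-vm", 6),
]

def showback(usage_records):
    """Total the monthly cost attributable to each project."""
    totals = {}
    for project, item, count in usage_records:
        totals[project] = totals.get(project, 0.0) + UNIT_COST_PER_MONTH[item] * count
    return totals

for project, cost in sorted(showback(usage).items()):
    print(f"{project}: {cost:.2f} per month")
```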

Standardised Infrastructure
Another consideration around agility and cost is the move towards commodity infrastructure and consuming it from software. Whether you want to call this 'infrastructure as code' or 'software defined data center' probably comes down to your technology biases more than anything else, but ultimately it has the same aim: consumption of virtualised hardware resources from a resource pool, in an automated fashion. Abstract, pool, automate, if you will!

Now, your idea of commodity hardware might depend on where you are coming from. If it's Google or Facebook then it usually means a white box from China; however, for a financial services organisation that has traditionally used mainframes and big iron servers, commodity can mean your normal x86 rack or blade servers from the likes of Dell, HP, IBM or Cisco, and that's certainly where we were coming from. Building our Cloud infrastructure on a standardised platform was one of our key requirements, as was ensuring that whatever we went for was well proven and had fully supported, well defined architecture blueprints that we could take to any vendor or partner to deliver. This was primarily to de-risk the delivery of a new platform and remove the need for the custom plumbing and extensive testing required when every component in your platform is 'best of breed'. Good enough is, well, good enough!
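To illustrate the abstract, pool, automate idea, here's a small hypothetical sketch of capacity being consumed in software from a pooled block of standardised hardware. The ResourcePool class and its numbers are invented for the example; in reality this role is played by your virtualisation and cloud management layers.

```python
# An illustrative 'abstract, pool, automate' sketch: capacity is pooled
# and carved out in software rather than by raising a ticket. The
# ResourcePool class and its capacities are invented for this example.
class ResourcePool:
    def __init__(self, cpus, ram_gb):
        self.cpus = cpus
        self.ram_gb = ram_gb

    def allocate(self, name, cpus, ram_gb):
        """Reserve capacity for a VM, or refuse if the pool is exhausted."""
        if cpus > self.cpus or ram_gb > self.ram_gb:
            raise RuntimeError(f"Pool exhausted, cannot place {name}")
        self.cpus -= cpus
        self.ram_gb -= ram_gb
        print(f"Placed {name}: {cpus} vCPU / {ram_gb} GB "
              f"({self.cpus} vCPU / {self.ram_gb} GB left in pool)")

pool = ResourcePool(cpus=128, ram_gb=512)  # one standardised x86 building block
pool.allocate("test-web-01", cpus=4, ram_gb=16)
pool.allocate("test-db-01", cpus=8, ram_gb=64)
```

Nothing here is rocket science, and that's rather the point: standard building blocks, consumed the same way every time.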
