Cloud Journey Part 8 – The Solution

So, on to the techie bits!

Once the dust had settled on the tender process, the organisation opted for a solution that gave us a solid base to build a Private Cloud on, with the ability to bridge to Public Cloud as future requirements became clearer. Hybrid Cloud Ready, I guess you could call it!

Technology-wise, this was based on an EMC VSPEX solution to complement our existing investment in EMC storage, Cisco MDS storage switches and a Cisco Nexus core network. The missing hardware component needed to give us a reference architecture was Cisco UCS, a solution we had been researching for a while. The advantage of a reference architecture is that it removes a lot of the complexity around technology plumbing, and the model is typically supported by a wide range of technology providers. Cisco also has reference architectures with many different storage providers, meaning we were not necessarily tied to EMC for the long term.

I discussed commoditisation of hardware in an earlier post, and some people may disagree with me calling UCS or VSPEX a commodity solution. However, my rationale is that it is commodity because it no longer requires the organisation I work for to build up ‘expert’ level support people to maintain it, as we can get that support from a multitude of partners. Also, x86 two-socket blades are a heck of a lot cheaper than four-plus-socket big iron servers to provision and support. It’s not white box, so I concede the point if that is your definition; however, what we were building with the software abstraction layer below would facilitate a move to white boxes for server, network and storage if the desire was there. The reality is that in the Financial Services space you need the comfort of a big vendor to fall back on when the proverbial hits the fan!

From a software perspective we decided not to go down the Cisco UCS Director (formerly Cloupia) route to deliver IaaS. At the time this product was very much focused on provisioning physical tin, for example UCS servers and presenting block storage LUNs via the MDS. We were pushing hard to increase our virtualisation figures into the high 90%s and wanted to focus our Cloud and automation efforts at the virtualisation layer. To deliver this we chose the VMware vCloud Suite.
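
To make the “automation at the virtualisation layer” idea a bit more concrete, here is a minimal sketch using the open-source pyVmomi SDK against vCenter. The hostname, service account and the simple powered-on count are purely illustrative assumptions, not a description of what we actually deployed.

```python
# Minimal sketch: drive the virtualisation layer (vCenter) rather than the
# physical tin. Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="automation@vsphere.local",
                  pwd="********",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    # Build a view of every VM in the inventory - the sort of visibility any
    # provisioning portal or capacity report builds on.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    powered_on = [vm.name for vm in view.view
                  if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
    print(f"{len(powered_on)} VMs powered on")
    view.Destroy()
finally:
    Disconnect(si)
```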

As an existing VMware customer using vSphere Enterprise Plus and Site Recovery Manager, we had a significant investment in the products and an established knowledge base. One of the things VMware offers customers taking an Enterprise Licence Agreement is the ability to exercise a ‘fair trade value’ for your existing licence entitlement against the new agreement. We did this, and when taking into account what we would have had to pay in support and maintenance on our existing investment over the term, the difference to upgrade from 42x Enterprise Plus with SRM to 48x vCloud Suite Enterprise was only about 20% in price. Some complain about VMware being expensive, but you pay for a great product with great support, and I think this was really reasonable when you consider what we were getting in addition:

* vRealize Operations Manager Suite (vROps)
    * Infrastructure Navigator (vIN)
    * vCenter Configuration Manager (vCM)
    * Hyperic OS and application monitoring
* vRealize Automation Center (vRAC)
* IT Business Management (ITBM)
* vRealize Application Services (vRAS)
* vCloud Networking and Security (vCNS) – subsequently NSX
* vCloud Director (vCD)

This is a great suite for standing up a Private Cloud or software-defined data centre (SDDC), and the tools also facilitated the ability to consume and measure use of AWS or vCloud Air for hybridity.
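
As a flavour of that hybrid side, here is a small, purely illustrative sketch using boto3 (the AWS SDK for Python) to count running EC2 instances per region. The regions chosen, and the idea of feeding this into capacity or cost reporting, are my assumptions rather than a description of our actual tooling.

```python
# Count running EC2 instances per region as a simple "measure what we consume"
# example. Assumes AWS credentials are configured in the usual ways
# (environment variables, shared credentials file, or an instance role).
import boto3

for region in ("eu-west-1", "eu-west-2"):  # illustrative regions only
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    running = 0
    for page in paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]):
        for reservation in page["Reservations"]:
            running += len(reservation["Instances"])
    print(f"{region}: {running} running instances")
```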

One challenge we had to address was that VMware were transitioning away from vCD, so we had to take a short-term punt that its functionality would be moved into the hypervisor and into vRAC. We also learnt, just weeks before the deployment was due to start, that vCNS was being EoL’d in 18 months – i.e. well before the end of our contract term. We had planned to use vCNS heavily to create secure network zones to isolate networks. However, the VMware account team, in partnership with our SI, managed to negotiate an uplift to NSX, VMware’s new network and security solution, for a very good price (confidential, sorry!).
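
For a sense of how those secure network zones translate to NSX, below is a hedged sketch that reads the distributed firewall configuration over the NSX Manager’s REST API with the requests library. The manager hostname and credentials are placeholders, and the /api/4.0 firewall path is the NSX for vSphere (NSX-v) one from memory, so check it against the API guide for your NSX version.

```python
# Read the NSX distributed firewall configuration; each "section" groups the
# rules that fence one network zone off from another.
import requests

NSX_MGR = "https://nsxmanager.example.local"   # placeholder hostname
AUTH = ("admin", "********")                   # placeholder credentials

resp = requests.get(f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config",
                    auth=AUTH, verify=False)   # verify=False only for a lab with self-signed certs
resp.raise_for_status()
print(resp.text[:500])  # XML document describing layer-3 sections and their rules
```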

Two of the key pillars of VMware’s SDDC, vSphere (compute) and NSX (network), would now be in place to deliver a fully software-defined data centre. vSphere 6, building on VASA and storage profiles with Virtual Volumes, provided a clear roadmap for our next storage refresh. Support for consuming storage services via Virtual Volumes would be a key requirement when selecting the next storage platform, to ensure the third pillar of the SDDC was executed against.

The conceptual diagram below highlights the investment areas addressed and the non-technical areas that were also in scope of the project:

[Conceptual diagram – © www.abstractpoolautomate.com]