The IT leader today wears a heavy crown. While supporting a vocal end user community with consumer aspirations for instant-on IT, he or she is also tasked with managing the miracle – digital transformation that leverages IT’s full potential to wrest new sources of value from the organization’s competitive landscape. The key to success, so the story goes, is a simple imperative: the enterprise must align technology decisions with business requirements, and by inviting IT into boardroom discussions, create the killer app unleashed by advanced cloud infrastructure technologies. But in a world of multiple cloud options, how simple is simple, and what is the measure of effective IT/business collaboration? According to Shawn Rosemarin, Chief of Staff, Americas Systems Engineering, VMware, for enterprises looking to thrive in the multi-platform future, the answer is a dependable set of financial metrics that will empower businesses to make appropriate decisions that ensure the right IT environment for each workload at the right price.
This practical approach, which assesses tech decisions in terms of hard cost and new value, takes aim at a moving target. While cloud services available in the marketplace are in rapid growth mode, the individual company’s business and technology needs constantly evolve, as does cloud adoption practice across the industry as a whole. InsightaaS research has found, for example, that while Canadian enterprises had deployed an average of 3.1 IT delivery platforms in 2015, two years later the average large business will have deployed 4.8 of a possible 6 options (see figure below). Additionally, the enterprise may have multiple instances of “public cloud,” with a mix of SaaS applications, or SaaS, IaaS and PaaS from different providers deployed across the organization, pushing the number of platforms in use further still. This plethora of platforms presents a new management challenge, as financial complexity is associated with each platform choice; however, as Rosemarin has explained in a compelling overview of tech decision making, the cost transparency that has emerged with public cloud has served to render these decisions less opaque.
Cost factors through an historical lens
When cloud first appeared back in 2008 as a mainstream option, it was a technology in search of definition, and “initial doubt” characterized discussions around its deployment. People were not sure, Rosemarin argued, if cloud was new, or if it was something they should be focused on. The first glimmers of understanding with respect to cloud cost benefits surfaced with SaaS: users concluded “yes, there probably is a better way to be buying my software, and moving it from CAPEX to OPEX makes a lot of sense,” he explained, as services such as Salesforce and Workday led new models for procuring and delivering software.
Cloud did not touch the core of the data centre for many years, though, due to lingering doubt around whether cloud could drive tangible value, or was simply a fad. But with the SaaS “feasibility study” proven, many companies began to build cloud strategies. Around 2013-14, Gartner introduced the term “bimodal” to describe the simultaneous presence within enterprises of separate traditional and emerging agile/cloud IT delivery modes, and new models for migrating from an existing data centre to the cloud began to emerge. In this period, iApps – customizable user frameworks for deploying applications with template functionality – also began to take shape.
In Rosemarin’s schema, a key factor in migration decisioning was a new level of transparency around the true cost of infrastructure, which emerged with public cloud offerings. “A CFO could now say, if it costs ‘X’ in Amazon, what are we paying internally?” Traditional businesses had never had to report on what an IT service cost, and even if IT managers knew the price of servers and software, they didn’t have visibility into total costs, which include OPEX, facilities, power and space. According to Rosemarin, this new knowledge drove businesses to one of two conclusions: “if internal IT is more expensive than the public cloud, and if I can buy exactly what I have on-premise for less cost, then it’s time to move. Or, if we are going to keep it in-house, we will either have to prove significant differentiation – justify paying more because we’re doing something that the public cloud cannot do, such as compliance, level of service, application support – or we will have to do something to bring down our internal costs.” But most important of all, he added, was the notion that “at the very least we need to find a way to measure that cost internally on a repeatable basis, to find a way that we can constantly benchmark ourselves against what’s available, and, as the public cloud continues to emerge and new offerings become available, to constantly revisit whether we should be running this internally or someone else should be.”
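The repeatable benchmarking Rosemarin describes can be illustrated with a minimal sketch. The fully loaded cost model and all figures below are hypothetical, not VMware’s methodology:

```python
def internal_cost_per_vm_month(capex_total, amort_months, monthly_opex, vm_count):
    """Fully loaded monthly cost per VM: amortized CAPEX plus monthly OPEX
    (facilities, power, space, staff), spread across the VM fleet."""
    return (capex_total / amort_months + monthly_opex) / vm_count

def benchmark(internal_rate, cloud_rate):
    """A simple verdict comparing internal cost to a public cloud rate."""
    if internal_rate > cloud_rate:
        return "public cloud is cheaper: migrate, differentiate, or cut internal costs"
    return "internal IT is competitive: re-benchmark as cloud prices change"

# Hypothetical numbers: $1.2M of hardware amortized over 36 months,
# $45,000/month in OPEX, a 400-VM fleet, vs. a $150/month cloud instance.
internal = internal_cost_per_vm_month(1_200_000, 36, 45_000, 400)
print(round(internal, 2))          # 195.83
print(benchmark(internal, 150.0))
```

The point of the sketch is the last step: the comparison is only useful if it is re-run as fleet size, OPEX and public cloud rates change.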
Think of your exit before you enter – revising the path to cloud
Based on this cost logic, the initial thought was that companies would move all workloads to cloud, as this was a cheaper way to buy infrastructure, Rosemarin explained. Indeed, Gartner’s bimodal analysis called for a gradual shift towards investment in mode two delivery, as agile development methodologies and DevOps, combined with cloud infrastructure, were viewed as the surest path to new value, and as more worthy (than traditional mode one infrastructure and processes) of ongoing investment. And when the enterprise is building out a net new application in the cloud, this reasoning would appear to hold: the organization has multiple platforms to choose from – including IBM’s Bluemix, Amazon’s integrated services, Azure and Google – that automate provisioning of cloud native apps to ease and speed deployment. However, when companies had to consider migration of legacy applications requiring rewrites, sophisticated integrations or professional services support, “that’s when all the trouble started,” Rosemarin noted, and the cost equation became less clear.
This challenge has led to a revision of accepted thinking on the cloud migration path. At Gartner’s recent Summit for CIOs, for example, Rosemarin found the analyst firm prepared to revisit the bimodal concept, based on a re-evaluation of the contribution made by mode one IT to value creation. Gartner’s original position advised businesses to focus on the development of new cloud-based applications rather than on improving efficiencies in existing infrastructure, and many companies built new style web-based apps designed to capture the imagination of customers. However, with scale, the cost equation of these applications began to change. “What began as a $2,000 a month application became a $20,000 a month application, became a $200,000 a month application, and became a $500,000 a month application. We’re talking about $6 million a year to run an application,” Rosemarin observed, and many companies began to look at bringing applications back in-house. An oft-cited example is Dropbox, which initially built inside Amazon, but moved to internal IT service delivery via on-premise infrastructure once it reached critical mass. Armed with an understanding of the dynamics of the application, Dropbox shifted to DIY to drive operating costs lower than could be achieved through use of Amazon’s cloud. According to chief operating officer Dennis Woodside, the company now gets “substantial economic value by running its own operation.”
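The Dropbox-style crossover comes down to cost curves: pay-as-you-go cloud spend grows roughly linearly with usage, while DIY infrastructure carries a large fixed cost but a lower marginal cost. A minimal sketch, with entirely hypothetical rates, shows where the curves cross:

```python
def cloud_cost(units):
    # Pay-as-you-go: roughly linear in usage (hypothetical $0.50 per unit-month).
    return 0.50 * units

def diy_cost(units):
    # DIY: large fixed monthly cost (staff, facilities, amortized hardware)
    # plus a lower marginal cost per unit (hypothetical figures).
    return 150_000 + 0.20 * units

# Break-even scale: 0.50u = 150,000 + 0.20u  =>  u = 500,000 units/month.
breakeven = 150_000 / (0.50 - 0.20)
print(int(breakeven))  # 500000
```

Below the break-even point the cloud is cheaper; past it, DIY wins – but only for organizations with the scale and resources to carry the fixed cost, which is why the example generalizes poorly.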
As a web-scale company that derives all value from its web application and has the resources needed to effect a major transition, Dropbox occupies a unique position. Other kinds of enterprise may have less flexibility: in cases where mode one IT has been neglected and capability has depreciated, migration of systems from the cloud back to company infrastructure is no longer possible – or is so complex that it is not cost effective.
To avoid this kind of issue, while positioning to capture new value, Rosemarin advises the build-out of several key mode one pillars that are additional to the operation of traditional IT. In support of businesses looking to find cost savings through optimization of existing assets and to build the flexibility and insight needed to act on financially-based cloud decisions, Rosemarin has provided the following cost/value guidance:
Get one’s own house in order. Businesses are advised not to neglect the traditional data centre, but rather to “use virtualization to sweat every asset possible,” and to drive down OPEX further by automating day-to-day operations. With “the easy button for IT,” for example, which automates the provisioning of servers, storage and networking/security, the spin-up of new workloads in the data centre can be accelerated from “months to minutes” to deliver both cost savings and new business agility.
Prepare for ‘lift and shift’. Better economics may be achieved for variable workloads such as test/dev or disaster recovery through access to public resources that enable a “shift” from CAPEX to OPEX. For applications that are identified as good candidates for cloud hosting, enterprises should look to platforms offering hybrid compatibility. For example, businesses with existing workloads that have been built and perform extremely well on vSphere should consider VMware’s Hybrid Cloud Management suite as a means of easing the “lift” to public platforms, such as vCloud Air or other cloud offerings. To provide more choice in public infrastructure resources, VMware recently partnered with IBM, enabling VMware customers to burst to IBM’s SoftLayer Cloud, while relying on its Hybrid Cloud solution to manage multi-cloud assets. VMware quickly followed with a similar partnership with OVH, the large French data centre provider, which now offers “Software Defined Data Centre-as-a-Service.” These relationships mean that VMware users can now consume ESX, NSX, vRealize, etc. resources on partners’ public clouds on an OPEX basis.
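The CAPEX-to-OPEX case for variable workloads is easy to see in numbers: owned capacity must be sized for peak demand and paid for around the clock, while rented capacity is paid for only when used. A minimal sketch, with hypothetical rates and utilisation:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def owned_cost(peak_servers, monthly_cost_per_server):
    # CAPEX model: capacity is provisioned for peak demand and
    # paid for whether or not it is used.
    return peak_servers * monthly_cost_per_server

def rented_cost(server_hours_used, hourly_rate):
    # OPEX model: pay only for the server-hours actually consumed.
    return server_hours_used * hourly_rate

# Hypothetical test/dev workload: peaks at 40 servers but averages
# 10% utilisation, i.e. 40 * 730 * 0.10 = 2920 server-hours/month.
print(owned_cost(40, 300))      # 12000
print(rented_cost(2920, 0.80))  # 2336.0
```

The gap narrows as utilisation rises, which is why steady, always-on workloads are weaker candidates for this kind of shift.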
Make cost a key determinant in multi-platform architectures. Many modern systems will combine capabilities from multiple platforms: an application service with a high powered mobile front end, for example, may be built on Amazon infrastructure, with collaboration capabilities from Microsoft designed to interface with an app built in Azure, and with core analytics that fit within Google. It may also need to integrate with the enterprise’s customer record system running on-premise on SAP. The cloud architect will have a clear sense of where many components of the application should run. But for other, more generic components, cost will determine the appropriate environment. In these situations, costing systems can be helpful. VMware’s vRealize Business Enterprise, for example, is an IT financial management tool that can provide cloud cost comparisons – reporting on the total cost of infrastructure platforms, including CAPEX and OPEX, across multiple vendors, based on real time rates for what is currently available in the public cloud – and factoring in the cost of VMware private cloud deployments. vRealize can also provide ongoing monitoring of individual departmental cloud usage, an additional cost input that can help the business unit better understand and control infrastructure costs.
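The placement logic described above – pin the components whose platform is dictated by architecture, and let cost decide the generic ones – can be sketched in a few lines. The rates, component names and `place` helper below are all hypothetical; a real tool such as vRealize Business would pull live public cloud rates rather than a hard-coded table:

```python
# Hypothetical monthly rates per component; a costing tool would
# supply live public cloud rates and private cloud TCO instead.
RATES = {"aws": 120.0, "azure": 110.0, "gcp": 105.0, "private": 95.0}

def place(components):
    """Assign each component to its pinned platform if one is given,
    otherwise to the cheapest platform by monthly rate."""
    cheapest = min(RATES, key=RATES.get)
    return {name: pinned or cheapest for name, pinned in components.items()}

plan = place({
    "mobile-frontend": "aws",      # built on Amazon infrastructure
    "collaboration":   "azure",    # interfaces with an Azure-built app
    "analytics":       "gcp",      # core analytics fit within Google
    "erp-integration": "private",  # on-premise SAP customer records
    "batch-worker":    None,       # generic: place by cost
    "cache":           None,       # generic: place by cost
})
print(plan["batch-worker"], plan["cache"])  # private private
```

Re-running the placement as rates move is the point: the cheapest home for a generic component this quarter may not be the cheapest next quarter.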
Work towards a “business technology platform.” Gartner’s new phrase for mode one IT is based on optimizing the operation of traditional IT, and on extending enterprise data centre capability beyond core employee systems to interface with customer and partner systems, with the “Things” in IoT, and with analytics from within and without the organization to inform business and technology decisions. The goal, Rosemarin explained, is “making sure your data centre platform is extensible enough to reach outside the walls of your own data centre.… We are evolving towards a world where applications and the components therein will sit on the platform that they are best suited to. The business case for dragging applications between clouds for the heck of it is limited,” but companies that are able to build and architect a system with the flexibility to extend components to the most appropriate platform – cloud or on-premise – will benefit from both cost control and new value derived from digital transformation.
Summarizing the importance of platform interplay, and paraphrasing Gartner’s repositioning on bimodal IT, Rosemarin concluded: “It’s actually in the best interest of the enterprise to have a mode one environment that is compatible with mode two. As mode two applications grow from systems of intimacy back to systems of record, as they grow from a cool app that is generating interest to an application that is driving a significant amount of the business, it potentially needs to move from ‘fail fast’, where it’s all about iteration, back to mode one, where it’s all about security and control. It’s not just about innovation and agility anymore, it’s about driving the core value of the business.”