Cloud business value has become an axiom in the discourse around new computing models. Beyond the potential cost savings from more efficient use of shared compute resources in highly virtualized environments, automated provisioning and orchestration promise on-demand scaling of compute resources to deliver faster time to innovation and new levels of business agility — in theory, at least. At this year’s Interop conference and expo, CenturyLink’s Jared Wray offered an alternative view of cloud, grounded in a realistic assessment of current deployments and a vision of the art of the possible. In a presentation entitled “The Human Cost of Cloud,” Wray argued that the cloud value proposition is there for the taking, but only with proper management that takes total cost of ownership into account.
Wray is well qualified to speak on this subject. An early cloud pioneer, he was founder and CTO of Tier 3, a Seattle-based IaaS/PaaS (Cloud Foundry) provider and recognized innovator in cloud automation and self-service. Architect of the Tier 3 cloud platform, Wray joined CenturyLink with its acquisition of Tier 3 in 2013 and now serves as CTO of CenturyLink Cloud, a division founded to roll out the new technology platform across the Tier 3 and CenturyLink Technology Services (formerly Savvis) cloud zones. Wray also has a number of open source projects under his belt, with the extension of Cloud Foundry to the Microsoft .NET ecosystem among his best known initiatives.
Wray’s early expectations for enterprise cloud adoption have largely come to pass. “Enterprises today,” he explained, “are willing to bypass a lot of processes and controls because they need the agility to write software, get it live and in front of customers — and they need to do things at a much faster pace than they were originally used to. Agility is not only about price, but rather about time to market and doing more with less — the whole dev/ops idea. It all really comes down to this idea: how fast can I take my team and how can I manage them better?” This is a key consideration in Wray’s view because, while cloud offers benefits like elasticity and agility, it also delivers resource sprawl: “before you might have 20 servers that were really expensive and you had to maintain those for three years; now you have 200 instances and anybody can spin them up at any point. And that sprawl creates more need for manual tasks. So while everybody is talking about the pricing of infrastructure, which is just driving to zero, the real cost that everybody is going to have to start paying attention to is the cost of a human who has to maintain and operate the software and the infrastructure and all the associated resources, which is extremely expensive.”
Ironically, cloud is creating additional staff demands at a time of resource constraint. “Good ideas,” Wray noted, “rarely get approval fast enough to be executed on,” due to the need to maintain legacy infrastructure and applications — licensing and process management around these have become the primary activity in most enterprises. As a result, Wray observed, “a lot of IT guys are not levelling up in terms of understanding cloud or even business applications.” In addition, many organizations carry IT process overhead: relying on virtualization in a private cloud will not solve business agility issues if the VMs have to pass through the same (slow) standard processes that were set up for physical servers, he explained. Another issue in many enterprises is “critical data loss,” or the inability to maintain systems at scale: “it’s extremely hard for enterprises to employ best practices in infrastructure maintenance as they don’t have the teams or the skills to do this in the way that the big clouds are doing it today.” Finally, a plethora of shadow applications running on different clouds creates a management and governance headache for today’s IT department.
If companies are limping along today with these additional costs and maintenance issues, Wray believes the strain will only grow: as cloud becomes more pervasive within organizations and as business units and developers look to exploit cloud agility, IT will not be able to keep up with standard processes, because those processes are maintained by humans. “This will require a huge change in the way enterprises look at this, but today most are not prepared to do things at scale. With the customers that we see today, when they move to public cloud, they typically add 50% more resources in the first year that they have to manage.” In “erector set” cases, where IT needs to stitch together a number of loosely coupled services in commodity clouds, Wray noted that companies are actually adding expensive IT staff. “Are you realizing the savings you need from cloud,” he asked, “if you have to double your staff to maintain it?”
In his presentation, Wray outlined three waves of “cloud maturity”: cloud experimentation, cloud leverage and cloud optimization. Most organizations are currently somewhere between the first two phases, and in the experimental stage (with any technology, including cloud) users typically do not take TCO into account. Going forward, Wray sees this as a huge issue as many organizations are poised to enter the second stage, where cloud is leveraged for everything — Wray cited a 2012 Forrester survey claiming that 86% of workloads are still “not cloud” to illustrate the magnitude of the looming problem. And while public cloud providers continue to reduce the cost of their offerings, associated TCO expenses will put cloud out of reach of the enterprise: “we have heard the first inklings from companies that started with cloud that it is actually more expensive to be steady state in a cloud than to build it [the infrastructure] out themselves. Zynga, for example, has come out with that.” “Price does not equal TCO,” Wray argued, as operational and maintenance costs are a key factor in the equation. Going forward, he urged, companies need to consider the relative price/performance of different providers, the cost of maintaining software on the cloud, and the tools the provider offers to manage and maintain their resources, in addition to the average monthly infrastructure cost. This is a complex analysis for the many line of business (LoB) managers who have increasing budgetary discretion over cloud sourcing but are ill-equipped to understand the basic computing principles behind metrics such as price/performance.
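The arithmetic behind “price does not equal TCO” can be sketched in a few lines. The figures below are purely hypothetical, chosen only to illustrate how a provider with a lower sticker price can still cost more once the human cost of operating the infrastructure is counted; none of the numbers come from Wray or CenturyLink.

```python
def annual_tco(instance_cost_mo, instances, ops_hours_mo, ops_rate_hr):
    """Yearly TCO = infrastructure spend + the human cost of operating it."""
    infra = instance_cost_mo * instances * 12   # annual infrastructure bill
    ops = ops_hours_mo * ops_rate_hr * 12       # annual staff/maintenance cost
    return infra + ops

# Provider A: cheaper instances, little automation, so more manual ops hours.
provider_a = annual_tco(instance_cost_mo=50, instances=200,
                        ops_hours_mo=320, ops_rate_hr=75)

# Provider B: pricier instances, but automation cuts manual maintenance.
provider_b = annual_tco(instance_cost_mo=65, instances=200,
                        ops_hours_mo=80, ops_rate_hr=75)

print(provider_a)  # 408000: 120000 infra + 288000 ops
print(provider_b)  # 228000: 156000 infra + 72000 ops
```

Even with a 30% higher per-instance price, the better-automated provider comes out well ahead once operational hours are priced in, which is precisely the comparison Wray urges LoB managers to make.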
Ultimately, the goal is to shift to the “cloud optimization” stage, when decisions around cloud expand beyond initial cost savings on infrastructure to take into account all three variables outlined above. In this scenario, the role of the service provider is to deliver competitive performance, as well as the automation needed to allow the business to scale its infrastructure without adding a lot of staff. According to Wray, the CenturyLink cloud, built on the Tier 3 platform, is able to support a range of customers at different maturity stages. While platform automation and tools may support the LoB manager launching an application without sophisticated computing skills, developers may take advantage of the AppFog PaaS, which offers additional levels of automation, and managed service providers may use the wholesale and retail capabilities of the CenturyLink/Tier 3 platform to build out their own offerings. CenturyLink’s broader goal, as he described it, is to help enterprises and other customers manage their resources more efficiently and optimize them at a faster pace. Through Tier 3 technology, the company can design, deploy, manage and scale very complex applications in an automated fashion; offer subaccounts so that customers can give each business division its own subaccount while still providing governance; and apply policies to groups of resources — capabilities that help eliminate mundane tasks for IT operations teams and developers, and thereby reduce the human cost of cloud.