Creating continuous cloud

Like the avaricious and the prodigal in Dante’s fourth circle of Hell, who push giant boulders in opposite directions for eternity, shouting at each other ‘Why do you hold?’ and ‘Why do you throw away?’, advocates of public and private cloud are engaged in a seemingly endless debate over the advantages of each path to IT service delivery. While proponents of private cloud argue for the relative security and privacy benefits of the walled garden approach, public cloud enthusiasts point to the efficiency and cost economies of multi-tenant environments. But the boulders meet in the middle on one issue: the need to remove the networking bottleneck to cloud with automated networks that seamlessly carry data traffic at cloud speed. Within the data centre, technologies such as network virtualization and SDN promise considerable progress on this agenda; in the end-to-end continuum that IT managers must contend with, however, linking up with the access and service provider networks needed to connect users to cloud data seamlessly has proved more challenging.

[Image: CE 2.0 cloud. Courtesy of the MEF]

The CloudEthernet Forum (CEF) is looking to change this with the OpenCloud Project, a live test environment aimed at validating end-to-end interoperability for cloud, data centre and network services. With nine founding members, the CEF was spun out of a much larger group last spring: the Metro Ethernet Forum (MEF), an industry alliance of more than 220 telco service providers, cable MSOs and network equipment manufacturers focused on building consensus around standards, service specifications and certifications for the deployment and delivery of Carrier Ethernet (CE) services.

So far, the MEF’s Technical Committee has developed 40 specifications covering a range of service definitions, attributes, architectures, test suites and management models for CE and, now, CE 2.0, a class of network services designed to accommodate the advanced traffic management that cloud increasingly requires. At its launch in 2013, the CEF’s stated goals were to adapt the Ethernet protocol for VLAN scaling and Layer 2 performance across large domains, to consolidate network storage technologies onto Ethernet, and to reduce the number of network management layers that cloud operators have to deal with.
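
To put the VLAN scaling problem in concrete terms: the 802.1Q VLAN tag carries a 12-bit identifier, capping a single Ethernet domain at roughly 4,000 isolated segments, whereas overlay encapsulations such as VXLAN carry a 24-bit segment identifier. The short sketch below simply works through that arithmetic; it is illustrative only and is not drawn from any CEF or MEF specification.

```python
# Illustrative arithmetic behind the "VLAN scaling" problem (not from any
# CEF/MEF specification): 802.1Q VLAN IDs are 12 bits, while overlay
# encapsulations such as VXLAN carry a 24-bit segment identifier (VNI).

VLAN_ID_BITS = 12    # 802.1Q tag field
VXLAN_VNI_BITS = 24  # VXLAN Network Identifier

usable_vlans = 2 ** VLAN_ID_BITS - 2   # IDs 0 and 4095 are reserved
vxlan_segments = 2 ** VXLAN_VNI_BITS

print(f"Usable 802.1Q VLANs per domain: {usable_vlans:,}")      # 4,094
print(f"VXLAN segments (VNIs):          {vxlan_segments:,}")    # 16,777,216
print(f"Scaling factor:                ~{vxlan_segments // usable_vlans:,}x")
```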

At the end of July, this group took a step towards realizing its goals with the OpenCloud Project. According to CEF president James Walker, a key challenge to the burgeoning cloud services marketplace (estimated at $200 billion by the CEF) is the fact that “network service providers, cloud service providers, datacenter operators and enterprises all use different APIs and interfaces to communicate.” From the user perspective, the result is a “patchwork quilt of SLAs that may not cover the whole solution”; a mismatch between virtual resources that can be turned up in minutes and network services that may take months to provision; a lack of the unified risk management, auditable processes and properly enforced security policy needed to address compliance, regulatory and privacy requirements; solution performance myopia, since different suppliers report on different attributes for their services; and the inability of an enterprise’s multiple cloud services to “talk directly to another in a secure and standardized way.”
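
As a purely hypothetical illustration of the fragmentation Walker describes, the Python sketch below shows the sort of per-provider glue code an enterprise ends up writing when a cloud provider and a network provider expose different APIs and report different SLA attributes. Every name, endpoint and field in it is invented for illustration; none of it comes from the CEF or any real provider.

```python
# Hypothetical sketch of today's fragmented provisioning landscape: two
# providers, two different APIs, two different SLA vocabularies. All names,
# endpoints and fields below are invented for illustration.
import requests


def order_cloud_compute(api_base: str, token: str) -> dict:
    """Cloud provider A: REST call that turns up a VM in minutes."""
    resp = requests.post(
        f"{api_base}/v2/instances",
        headers={"Authorization": f"Bearer {token}"},
        json={"flavor": "m.large", "image": "ubuntu-lts", "count": 1},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # SLA reported as, e.g., {"availability_pct": 99.95}


def order_ethernet_circuit(api_base: str, token: str) -> dict:
    """Network provider B: different resource model; fulfilment may take weeks."""
    resp = requests.post(
        f"{api_base}/orders/evc",
        headers={"X-Auth-Token": token},
        json={"a_end": "DC-LON-1", "z_end": "CSP-FRA-2", "cir_mbps": 500},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # SLA reported as, e.g., {"frame_loss_ratio": 0.001}

# The enterprise is left to reconcile two unrelated SLA vocabularies itself,
# which is exactly the gap a common, standardized interface would close.
```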

Vinay Saxena, distinguished technologist and chief architect, NFV business unit, HP

So how specifically is the OpenCloud Project looking to address these issues? Vinay Saxena, distinguished technologist and chief architect of HP’s Network Functions Virtualization (NFV) business unit, who is also a CEF board member and the spearhead of HP’s participation in the project, offered some observations. HP was a founding member of the CEF, which Saxena explained was intended as “an ecosystem on which the whole fabric around which cloud services can be orchestrated and delivered as a seamless service” could be based. Questions for the CEF, he added, include: “What standards do we need, and what interoperabilities need to exist between various players, particularly in hybrid scenarios where enterprises are looking to have services running in private cloud, and public cloud, shared environments? How do we bring all of these things together with a common Ethernet fabric where we can run these services with solid QoS and SLAs, and with deterministic traffic patterns so we can know exactly how the traffic traverses data borders and country boundaries to meet regulatory needs?” HP’s involvement was born out of the recognition that progress on these issues would help address HP customer needs. As Saxena noted, “we look at this as a very challenging process, and we see multiple benefits — learning from it, and taking advantage of those learnings to bring together better solutions for our customers.”

The OpenCloud Project’s aim is not a magic bullet (a new, single standard that would work across the various enterprise, service provider and telco networks and the cloud and carrier Ethernet exchanges), but rather to establish what Saxena called “a working environment to showcase how this [interoperability] can be done, and then to work with associated entities such as the MEF, the ONF, OpenStack or OpenDaylight, providing recommendations on how things can be done better, or guidance on how standards should be modified to ensure interoperability for the cloud services that rely on this foundational infrastructure — the network.” On a practical level, this involves the creation of reference architectures that may serve as a model for various stakeholders to build end-to-end interoperability. “One of the reasons we feel this is important,” Saxena explained, “is because if you look at classical standards and application delivery cycles, it takes a few years to develop standards, and it takes a couple of years for vendors to launch products based on those standards, and another nine months to build applications on top. The CEF is hoping that recommendations back to the standards body will create better standards coverage so we can shorten this cycle. That is the basic goal of the forum.”

According to Saxena, consensus on the need for open source and open standards has gathered sufficient momentum to overcome the vendor instinct towards proprietary systems, which in turn has enabled the kind of work that the CEF is doing in the OpenCloud Project. “We are involved in everything from the ONF to OpenDaylight to OpenStack to the new open NFV platform, and we see the CEF as an entity that tries to bring all of these things together and grow the ecosystem. If we can show working environments, working reference architectures, we believe this will encourage adoption of these technologies in the marketplace, and grow the market for everybody.”

HP is currently working with other CEF board members to define exactly how it will participate. One obvious model would be the creation of a lab that hosts test infrastructure and contributes engineering resources and software technologies as part of a larger deployment based on reference architectures created by the CEF. Other members of the project would contribute other portions of the architecture: “we might set up some equipment in a data centre where we run some test applications; Verizon or Comcast [also founding project members] might provide network connectivity; Tata Communications [founding member] might be providing the Carrier Ethernet exchange capabilities,” Saxena explained. Together, these would run sample applications to showcase how workloads can run and migrate seamlessly from the data centre to the cloud service provider, carrying the same IP addresses and networking capabilities. Through this pilot, the OpenCloud group would define the application ecosystem and the APIs that need to be exposed, alongside standards recommendations, for the architecture to work properly. A final goal would be to demonstrate the solution through HP’s public website or in web presentations.
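
The requirement Saxena mentions, that workloads keep the same IP addresses and networking capabilities as they move, is commonly met today by stretching a Layer 2 segment between sites with an overlay such as VXLAN. The sketch below is a rough illustration of that idea under invented assumptions (the interface names, addresses and VNI are hypothetical), not a CEF-published mechanism; it drives Linux iproute2 from Python to join a local bridge to a VXLAN tunnel that terminates at a remote cloud-provider endpoint.

```python
# Rough illustration only: stretch a Layer 2 segment between two sites with a
# VXLAN tunnel so a migrated workload can keep its IP address. Interface
# names, addresses and the VNI below are invented; this is not a CEF-published
# reference architecture. Requires root and Linux iproute2.
import subprocess

VNI = 100                     # hypothetical VXLAN Network Identifier
LOCAL_UNDERLAY_DEV = "eth0"   # data-centre host's WAN-facing interface
REMOTE_VTEP = "203.0.113.10"  # cloud provider's tunnel endpoint (documentation address)
BRIDGE = "br-tenant"          # bridge the tenant workload attaches to


def run(cmd: str) -> None:
    """Run an iproute2 command, failing loudly if it errors."""
    print(f"+ {cmd}")
    subprocess.run(cmd.split(), check=True)


def stretch_segment() -> None:
    # 1. Create the VXLAN tunnel interface over the WAN underlay.
    run(f"ip link add vxlan{VNI} type vxlan id {VNI} dev {LOCAL_UNDERLAY_DEV} "
        f"remote {REMOTE_VTEP} dstport 4789")
    # 2. Attach the tunnel to a bridge so the same broadcast domain (and thus
    #    the same IP subnet) spans both sites.
    run(f"ip link add {BRIDGE} type bridge")
    run(f"ip link set vxlan{VNI} master {BRIDGE}")
    run(f"ip link set {BRIDGE} up")
    run(f"ip link set vxlan{VNI} up")


if __name__ == "__main__":
    stretch_segment()
```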

So far, 200 companies have expressed interest in the OpenCloud Project, and the founding group is already seeing dialogue between enterprises, cloud providers and communications service providers that Saxena expects will help build consensus around reference architectures capable of meeting a significant portion of the requirements that customers share. Though the OpenCloud Project is in its early days, Saxena noted that the technical committee is finalizing one reference architecture blueprint, and that clarity of objectives (“what are the dependencies as we look at large scale deployments?”, for example) will help drive the project. And because OpenCloud is focused on formalizing work that is already being done elsewhere, albeit in an ad hoc fashion, he believes project outcomes will come faster. The group will take a phased approach to demonstration, but Saxena expects to see some technical specs in operation by the middle of next year, a pace he considers fitting for cloud technologies: “in terms of time horizon, in the cloud day and age, anything above a six month cycle is extremely large. So we want to be aggressive, while staying aware that this is a community-led approach.”

Focus areas on this journey, and the technologies Saxena believes will be crucial to delivering network services at cloud scale, are NFV in carrier environments, SDN programmability at the enterprise level, and virtualized network overlays and network virtualization: the “core plumbing technologies” that will enable providers to offer services such as WAN optimization while also delivering the SLAs and service guarantees needed for cloud interoperation and network automation.
