There has been lots of talk about disruptive innovation and the IT transformation that occurs when “software is eating the world.” Significant progress has in fact been made over the past decade in many areas of IT including digital technologies, processing and storage infrastructures, software platforms, and digital business models.
But are we on the verge of disrupting the data centre entirely, or are we really just facing another IT transformation?
As used by Clayton M. Christensen in his HBR articles, the term “disruptive innovation” represents a major positive change in a business model or market sector (such as Uber in transport and Airbnb in the hotel industry). Geoffrey Moore’s recent book, Zone to Win, offers a methodology for managing disruptive and sustaining innovation in large companies. IT transformation, on the other hand, could be defined as an important, but less radical, change in systems that derives from new technologies or innovative designs.
Cloud computing is now about a decade old and is maturing quickly. Options for cloud-based solutions are forcing a reconsideration and reevaluation of the role and value of the on-premise enterprise data centre – the “EDC.”
Some basic questions come to mind. Can an EDC be fully replaced by cloud services and, if so, why is this desirable? Can new technologies rejuvenate the EDC? And, finally, does a mixed (i.e., hybrid) IT environment really provide the best of both worlds?
Answers can be found by first looking at EDC/cloud synergies at the macro level and then by examining the technologies themselves.
Enterprise hybrid IT
The corporate data processing centre was the “king of the IT castle” from the 1970s to the early 2000s. IT departments developed and delivered Systems of Record using mainframes and/or server farms. These back-office applications were, and still are, critical to success for most organizations.
Today, however, the IT ecosystem is more complex. The EDC is no longer the only focus of IT operations. Various new options are now available including public SaaS services, public or private PaaS services, on-premise cloud appliances, or even multi-cloud, hybrid infrastructures. Soon, distributed architectures and new services being developed for IoT systems (fog computing and network function virtualization, for example) will also increase the range of solution options.
Today, “Systems of Engagement” and “Systems of Things” are connecting the enterprise to the external world – to international customers (e.g., shopping portals), to external open data sets (Toronto City data is one example), and to smart systems for health and safety, transportation, smart city services, and homes. The cumulative scale of these systems is much, much larger than what traditional data processing manages – there are forecasts of as many as 50 billion connected “things” by 2020. These systems are better suited to cloud solutions that offer multi-tenancy, open access, and elastic provisioning (i.e., scalability).
If we could simply “lift and shift” traditional back-office applications to the cloud, then the EDC could more easily be retired, resulting in real disruptive change. The reality, however, is that today any such move would entail critical technical, financial, and cultural challenges. Conversely, a do-it-yourself, EDC-only strategy that ignores cloud opportunities would be an expensive and complicated choice, especially when multi-vendor, multi-national IoT systems are deployed.
As a result, many CIOs are choosing to transform IT using a “cloud preferred” strategy combined with a best-of-breed hybrid IT architecture – thereby avoiding a complete disruption of their IT operation.
Software-defined data centres
All data centres – either cloud-based or on-premise – must organize, operate, and secure the hardware, software and network resources that comprise the IT infrastructure. The goal is to provide user access to these resources at low cost while also maximizing security, performance, quality and manageability.
Several generations of EDC infrastructure have come and mostly gone – mainframes, minicomputers, packaged servers, and racked blade servers. Similarly, EDC operating software has evolved from bare metal firmware to dedicated operating systems to the virtual machines that are now widely used. However, most data centres are left with a mix of technologies from different vendors and different vintages. This results in the need for customization, operating inefficiencies, bespoke designs, and high costs.
Even though the innovation focus has been placed on cloud, a transformation is also occurring in the EDC. The following are three examples of emerging data centre technologies:
The software-defined data centre (SDDC) is an example of data centre modernization. EDC resources are virtualized and configured through the automation of operations management. This includes software-defined networking (SDN), software-defined storage (SDS), and processor virtualization. Software container capabilities will soon be included in this mix as well. SDDC assumes that resources can be standardized and made accessible through standard APIs. This facilitates self-service provisioning, agile configurations, and rapid scalability. A good example of the SDDC approach is the product suite available from VMware.
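To make the SDDC idea concrete, here is a minimal sketch in Python of self-service provisioning against abstracted resource pools. It is a toy model, not any vendor's API: the class names, pool sizes, and `provision_vm` call are all hypothetical, but they illustrate the core pattern of one standard API call configuring several virtualized resource types at once.

```python
# Toy model of SDDC-style self-service provisioning.
# All names and capacities here are hypothetical illustrations.

class ResourcePool:
    """Tracks one virtualized resource (e.g., vCPUs or storage GB)."""
    def __init__(self, kind, capacity):
        self.kind = kind
        self.capacity = capacity
        self.allocated = 0

    def provision(self, amount):
        # Automation enforces capacity limits instead of a manual change ticket.
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"{self.kind} pool exhausted")
        self.allocated += amount
        return amount

class SDDC:
    """A single entry point that automates provisioning across pools."""
    def __init__(self):
        self.pools = {
            "vcpu": ResourcePool("vcpu", 256),
            "storage_gb": ResourcePool("storage_gb", 10240),
        }

    def provision_vm(self, vcpus, storage_gb):
        # Self-service: one call configures compute and storage together.
        self.pools["vcpu"].provision(vcpus)
        self.pools["storage_gb"].provision(storage_gb)
        return {"vcpus": vcpus, "storage_gb": storage_gb}

sddc = SDDC()
vm = sddc.provision_vm(vcpus=4, storage_gb=100)
```

In a real SDDC product the same pattern extends to networks, containers, and policy, and the API is exposed to developers for self-service rather than routed through an operations queue.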
A hyperconverged infrastructure, or HCI, is a data centre component design that organizes resources into “appliances” (or software platforms) that can be scaled out. Other EDC functions – such as deduplication, switches, storage caches, backup software, replication functions, and gateways – can all be incorporated into an HCI node. Automation software interconnects multiple nodes into a complete infrastructure and provides orchestration and automation services. Various suppliers, including Cisco, Dell/EMC, Nutanix, and SimpliVity, are competing to offer the best HCI systems.
Finally, serverless computing is an emerging cloud model that hides the details of the server platform and environment from the developer. Developers do not have to be concerned about infrastructure details or functions such as scalability, resilience, security, and so on. While real servers exist “under the covers,” the code execution service provided does not necessarily map to a physical server. Examples of cloud-based services for serverless computing include Azure Functions, AWS Lambda, and the Apache OpenWhisk project (implemented in IBM’s Bluemix).
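A short example shows how little infrastructure surfaces in serverless code. The `(event, context)` signature below follows the AWS Lambda convention for Python handlers; the greeting logic itself is just a placeholder for illustration.

```python
# A minimal AWS Lambda-style handler in Python. The (event, context)
# signature is the AWS Lambda convention; the body is illustrative.
# Note that nothing here mentions servers, scaling, or capacity --
# those concerns belong to the provider.

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, the handler is an ordinary function and can be invoked
# directly, which also makes unit testing straightforward.
result = lambda_handler({"name": "EDC"}, None)
```

In production the platform invokes the handler in response to events (an HTTP request, a queue message, a file upload) and scales instances up and down automatically, billing per invocation rather than per provisioned server.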
It is worth noting that software is the major driving force for most data centre innovations.
Requirements for the future EDC
Why is an EDC transformation needed at all?
First, to avoid obsolescence, the next generation EDC will need to offer services and qualities of service that compete with those available from cloud service providers. If a buyer has a choice, the preference will always be for the richer, more cost-effective solution!
Second, the EDC cannot be frozen in time. There is increasing demand for self-service acquisition of services, for automation of most routine management functions, for standardization of service access and abstraction of resources, and for agile architectures to accommodate rapid business change. Other technical challenges include consolidated identity management; security and privacy by design across all platforms, providers and locations; and coordinated management of the embedded software in all the systems.
Third, the EDC cannot be the weak link in the IT ecosystem. Hybrid IT solutions that incorporate public clouds, private clouds, and legacy systems will need to interoperate as close to “out of the box” as possible.
This is what I’m thinking – your comments and ideas are welcome.