The Great Canadian DC Symposium: disruption in the data centre

The recent Great Canadian Data Centre Symposium brought data centre operators, suppliers, academics and key user communities together in Hamilton for a lively discussion about the future of the data centre. Over the course of a day in June, various panels and workshops developed insights highly relevant to the leading edge of data centre innovation.

InsightaaS and the Computing Infrastructure Research Centre (CIRC), co-producers of the Great Canadian Data Centre Symposium held at the McMaster Innovation Park on June 14, 2018, created an excellent forum for data centre experts to exchange ideas and for the CIRC to showcase its research. The first in an annual series of events focused on data centre innovation, this session served as the launch pad for a new best practices community of DC industry professionals, and for a new collaboration between InsightaaS, long-time supporters of industry education, and the CIRC, which is led by tech innovator Suvojit Ghosh and bills itself as the “only centre in Canada, and among very few others worldwide, that can provide hands-on learning opportunities in the design, operations, and maintenance of data centres.” Through a variety of expert panels, technology breakout sessions and workshop exercises, the Symposium provided an inspired platform to test the producers’ claims, and to identify research issues for future dialogue.


The Symposium opened with remarks from Hamilton Mayor Fred Eisenberger, who added his support for the initiative as one that can help position the city as a centre for innovation in its own right. Eisenberger shared several proofs of Hamilton’s current momentum: the city was selected as one of the top seven cities on the ICF’s Intelligent Communities list for 2018, and Hamilton is also involved in the recently announced Advanced Manufacturing Supercluster in Southern Ontario. The Mayor noted that high-quality broadband communications have been key to his goal of making Hamilton a desirable and intelligent community.

Technology and economic issues

The opening plenary panel of CTOs, CEOs and academics, led by InsightaaS principal analyst Michael O’Neil, explored the technology and economic issues facing data centre operators. Marc Brouillard, CTO of the Government of Canada, for example, noted that the government’s legacy is “one of everything” deployed in up to 700 data centres. According to Cogeco Peer1 VP and GM of Canada and AsiaPac Jaime Leverton, many enterprises have a similar experience, which has led them to adopt a hybrid cloud approach. She added, however, that some early adopters have retreated from public clouds in favour of managed private services. For Dr. Rong Zheng, principal investigator at the CIRC, the current focus of data centre research is on sensors and artificial intelligence rather than legacy applications. This focus is a response to newer applications, which create workloads that are computationally intensive, often real-time and sensitive to latency.

In a phrase, Peter Gross, VP of mission critical systems at Bloom Energy, summed up the plenary panel: “these are extraordinary times.” Data centres that were considered big just a few years ago used 5 MW of power, he noted; now 150 MW is considered normal, and even edge data centres use 1-1.5 MW. Amazon, Google and Microsoft are now responsible for over 60 percent of the spending on data centres, a dramatic change from the data centre landscape of just a few years ago. A major shift of workloads from on-premises to external data centres has occurred and is continuing.

Data centre power

Power for data centre equipment and cooling has always been one of the primary considerations in data centre projects. Today’s data centres aren’t a lot different from the ‘old days’ – power generators and distribution systems are much the same as they were decades ago. The difference now is that there are many kinds of data centre, each with different goals, architectures and workload requirements.

The traditional rule of thumb for data centres was ‘be cautious and evolutionary’ – don’t rock the boat at the expense of quality and availability. But it has become very difficult to correlate power demand with processing requirements, and the situation gets even more complicated when there is a divide between the IT department and facilities operators. The question the power panel wrestled with was how well enterprises know what infrastructure they have, what the equipment does, what power it needs and how to deliver on these requirements.

Peter Panfil, VP of global power at Vertiv, said we are entering the third generation of the data centre, comparing the transition to the change from horse-drawn carriages to the automobile. In his view, operators must achieve low first cost, scalability, high availability and advanced infrastructure management in order to succeed. Bloom Energy’s Peter Gross pointed to several technologies that he expects will impact future data centres, including software-defined power, approaches developed under the Open Compute Project umbrella, on-site and distributed generation, and silicon carbide devices. National sales manager for Eaton Joe Oreskovic added “power-limited computing” to the list of disruptors and suggested that management of data centre facilities would become an IoT application.

Ana Badila, electrical design and pre-construction director at EllisDon, summed up the discussion nicely by identifying the three pillars of power: smart power, a single pane of glass for power management, and proactive management.

The hybrid data centre imperative

Panelists in this session all agreed that it’s a hybrid world and will remain so for the foreseeable future, although the balance will be different for everyone. Marc Brouillard indicated that 30 percent of the government’s compute capacity still comes from mainframe applications, and these are not easily migrated to the cloud. The Canadian government started with a “right cloud” strategy and has now adopted a “cloud first” direction; even with this goal, many new applications will continue to exist in a hybrid world.

The panel agreed that requirements should be specified and data centre designs tailored to the organization’s needs. As Richard Lichtenstein, president of Civatree, noted, the risk is that everyone is running to the cloud even when it’s not the right answer. CIOs and CEOs should look instead to hybrid environments as a way to avoid losing control over their data centre operations and performance.

Rob Adley, VP solutions and technology, HPE

According to HPE’s VP of solutions and technology Rob Adley, while HPE has recognized that IT is both hybrid and edge-driven, it’s not just about the technology – focusing on outcomes is critical. Beyond cost, many factors contribute to system design; for example, the selection of a public versus private cloud is not simply an architectural choice – business outcomes drive decision-making. Some organizations experience ‘sticker shock’ when they discover the actual costs of the cloud.

Adley also raised a philosophical question: just what is a data centre anyway? It is no longer that big windowless building down the street; rather, data centres are everywhere. The data centre has become an ecosystem, not just a building. He also advised that big changes are coming in the next few years – innovation is building like never before. To illustrate, Adley pointed to HPE’s work on “The Machine” as a good example of how innovation can be game-changing for the data centre. The Machine shifts traditional data processing into a new memory-driven computing paradigm through extensive use of photonics, which generates less heat.

Greg Walker, hybrid IT business development for Dimension Data, asked how we know we’re on the right path, and concluded that the answer is based on knowing the requirements: “Do I want to own the infrastructure? Where do I want my workloads to run? Do I want to have control myself? And then, can I save money, and can it really be done?”

Designing for the future

Data centres are not replaced easily or quickly, but they do need to evolve and grow. Facilities being built today should last for 20-30 years, according to Shaunak Pandit, VP of mission critical facilities at Morrison Hershfield. He suggested designing data centres for a range of needs, aiming for payback within three to four years, and striking a good balance between CAPEX and OPEX.

According to Pandit, users are beginning to open their minds to not needing an icebox to run their computers. Today’s technologies allow data centres to run hotter – but with temperatures up to 45 degrees C, there are new safety issues for staff working on the equipment. Cooling with warm water and using different temperatures for different parts of the data centre are options that can help.

Better predictive models for workloads will also be important, although Scott Northrup, the SciNet hardware and operations analyst who maintains the University of Toronto’s Niagara supercomputer, recommended building for what you need today while keeping an eye on the future. Modular data centres can help by allowing a better match of demand to supply, especially in hyper-scale installations.

Erwin Van Hout, CTO, Sick Kids Hospital

Erwin Van Hout, CTO at the Hospital for Sick Kids, is currently facing the need to replace an aging data centre (one of two the hospital operates internally today), while anticipating the need to support artificial intelligence, deep learning/machine learning and quantum computing as these technologies develop. His organization has chosen converged infrastructure to simplify design challenges. Van Hout argued that virtually all devices will eventually have artificial intelligence built in, including data centre components. The Hospital for Sick Kids is looking to use technology to improve process flows – for example, reducing the paper used in day-to-day activities so that staff face fewer physical steps. Van Hout also noted that hospitals cannot really bet on start-ups and fail-fast experiments for safety reasons; however, they can collaborate with each other, even though each hospital is typically at a unique stage of development.

IBM’s data centre consultant Bernie Oegema believes that a wide range of data centre designs will be needed – everything from edge computing to hyper-scale cloud data centres will remain in demand. Artificial intelligence, Big Data and IoT will help create a new generation of automated data centres that will feature a “single pane of glass” for managers. Oegema added that, going forward, data centres will need to support quantum computing, IoT and Blockchain-based applications.

Bernie Oegema, data centre consultant, IBM

On the topic of data centre management, automation and the potential for use of AI in operations, Oegema noted that DCIM is both hard to implement and hard for people to use; hence, augmented intelligence is likely to be the next step in data centre management (see IBM’s comments to a White House RFI here). Another hot spot, according to Oegema, is Blockchain for the data centre, especially in areas such as supply chain management. Finally, he advised that data centre operators need to move to fact-based thinking and away from fear, which requires managers to know what processing capability they have, where it is located, and what it is supporting.

Edge data processing

Edge computing is one of the newer IT buzzwords, but it is already having an impact. In a panel devoted to distributed computing, InsightaaS CCO and panel moderator Mary Allen noted that some market estimates claim that in five years’ time, more money will be spent on edge computing than on cloud technologies. Edge computing will be important for Internet of Things (IoT) applications, which can be sensitive to both latency and cost. Another example is the basic PC – why not provide PC functions using a local edge data centre?

Wilfredo Sotolongo, VP and GM of IoT for Lenovo’s data centre group, noted that while edge architectures are not conceptually very different from more traditional IT deployments, in practice edge computing is very different and has different use cases. While most edge systems today rely on current technology, new designs are coming quickly, and advanced functions such as machine learning will soon be available at the edge.

The panelists agreed on the importance of edge management. Hussan Haroum, CEO of Cinnos Mission Critical, believes that the key challenge is edge software – implementing DCIM at the edge will be required. He argued that DCIM must be in place both to listen to the sensors, since the primary inhibitor to IoT is broken devices, and to enable remote maintenance.
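To make that point concrete, below is a minimal sketch of the kind of monitoring loop an edge DCIM tool might run. It is illustrative only – the device names, the polling model and the five-minute threshold are assumptions, not a description of any vendor’s product – but it shows how listening to sensor reports makes it possible to flag broken devices for remote maintenance.

# Minimal illustrative sketch (hypothetical names and thresholds): an edge
# DCIM agent records the latest report from each sensor and flags devices
# that have gone silent, so an operator can schedule remote maintenance.
import time
from dataclasses import dataclass, field

@dataclass
class EdgeMonitor:
    stale_after_s: float = 300.0                   # assume 5 minutes of silence means "broken"
    last_seen: dict = field(default_factory=dict)  # device_id -> timestamp of last report

    def ingest(self, device_id: str, timestamp: float) -> None:
        """Record the most recent report from a sensor."""
        self.last_seen[device_id] = timestamp

    def broken_devices(self, now: float = None) -> list:
        """Return the IDs of devices that have stopped reporting."""
        now = time.time() if now is None else now
        return [dev for dev, ts in self.last_seen.items()
                if now - ts > self.stale_after_s]

# Usage: feed readings as they arrive, then ask which devices need attention.
monitor = EdgeMonitor()
monitor.ingest("rack3-temp1", time.time())         # healthy temperature sensor
monitor.ingest("rack3-fan2", time.time() - 900)    # fan controller silent for 15 minutes
print(monitor.broken_devices())                    # -> ['rack3-fan2']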

There are many edge-related questions still to be answered. Herb Villa, senior IT systems consultant at Rittal, was asked the seemingly simple question, “where is the edge?” and answered with a broad-based market point of view. While edge computing can reside at multiple levels in an IT solution (at the device or gateway levels), from a maturity perspective, edge computing today is at about the same state as cloud computing was five years ago – so there is a lot of development yet to come. In Rittal’s view, edge data centres are where Information Technology (IT) and Operational Technology (OT) will come together.

Schneider Electric VP Frank Panza and Jason Houck offered more technical points of view, with Panza outlining the benefits of modularity, containers and metrics, and Houck detailing the benefits of platforms that can offer single pane of glass insight into diverse infrastructure deployments, as well as unanticipated edge outcomes such as better security. Sotolongo wrapped up the panel with notes on issues – such as interoperability – that must be overcome for edge to deliver on its promise.

In summary….

One important theme of the Symposium was disruption. Data centres are evolving and getting smarter, much like cities, cars and phones. IT data centres have been undergoing a renaissance, with new strategies and technologies leading to change at all levels of data centre design, deployment and operation. Everything from the choice of location to the intelligent automation of facilities, from modularity to “hybridity”, and from cloud to computing at the edge has been touched by innovation. Panelists were united in their caution, however, that it is still critical for operators to begin by identifying the business problems that need to be solved before adopting new technologies.
