Great Canadian Data Centre Symposium 2019 – moving to the edge

Don Sheppard provides a wrap-up of the Great Canadian Data Centre Symposium, noting themes of scale and adaptability.

Fred Eisenberger, Mayor of Hamilton

Data centre and digital transformation experts and practitioners gathered at McMaster Innovation Park in Hamilton on June 20th for a day of thought-provoking panels, discussions and networking. The second annual Great Canadian Data Centre Symposium (GCDCS19), co-produced by InsightaaS and the Computing Infrastructure Research Centre (CIRC), put the spotlight on data centre innovation and emerging technologies. At GCDCS19’s first annual Academic Congress, held the previous day, attendees also heard from leaders in data centre research and development.

The conference opened with a welcome address from Hamilton Mayor Fred Eisenberger, who said that one of Hamilton’s goals is to encourage more data centres to locate in the area. He pointed to smart city initiatives, the Intelligent Community Sub-Committee, emerging broadband infrastructure and autonomous vehicle testing as drivers that will support data centre infrastructure build-out and innovation.

The GCDCS goal for 2019 was to provide insights on a wide scope of data centre topics, ranging from intelligent facilities and edge computing strategies to data centre innovation platforms. Speakers throughout the day reinforced the importance of both the basics – power, cooling and security – and innovation. The all-star plenary panel set the tone for the day by offering their personal words of wisdom. Scalability and adaptability were common themes this year, covering the massive increase in data volumes and transaction rates; the growing number and capacity of shared data centres, including hyperscale cloud facilities; the rapid proliferation of edge nodes; and the growing number and variety of devices for IoT and mobile personal systems.

The explosion of data and devices

End user devices are no longer wired to a single data centre, assigned to one application and owned by the enterprise. Traditional office workstations have morphed into a variety of intelligent sensors, actuators, smartphones and mobile systems, creating a corresponding explosion in the amount of data being generated, transferred and stored. These devices may supply data to multiple service providers and have become a major driving force for shared data centres (such as public clouds).

Several experts at the event observed that more than volume is required: data must be efficiently collected, transported, processed and protected for it to have any real value. Stephen Abraitis, VP of data centre infrastructure and strategy at the Royal Bank of Canada, stated that data has become “the name of the game” due to its inherent value for business analytics. The amount of data and the location of processing functions have become issues, however, especially for non-stop realtime operations. To address the issue of data centre growth, Francois Sterin, chief industrial officer for the OVH Group, noted in the plenary session that the distinction between central and edge data centres is not yet well defined, and he therefore advised the adoption of standardized commodity hardware. The Open Source Infrastructure, the Open Compute Project and Open19 were cited as examples of open standards initiatives that could provide guidance. Rami Rathi, senior software application engineer at Intel, noted that an autonomous vehicle can generate about four terabytes of data per day; this alone could drive significant data centre expansion if data were sent to a central processing location. But the underlying question is: why collect and store all the data unless it can be used to add business value in some way?
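
To put the vehicle figure in perspective, the back-of-the-envelope sketch below shows why shipping raw vehicle data to a central site quickly becomes untenable. It is illustrative only: the fleet size and upload fraction are assumptions, not numbers cited at the event.

    # Back-of-the-envelope arithmetic for Rami Rathi's figure of roughly
    # 4 TB of data per autonomous vehicle per day. The fleet size and the
    # fraction of data uploaded are illustrative assumptions only.
    TB_PER_VEHICLE_PER_DAY = 4       # figure cited at GCDCS19
    FLEET_SIZE = 100_000             # hypothetical fleet size, not from the talk
    UPLOAD_FRACTION = 0.01           # assume only 1% of raw data is sent upstream

    raw_tb_per_day = TB_PER_VEHICLE_PER_DAY * FLEET_SIZE
    uploaded_tb_per_day = raw_tb_per_day * UPLOAD_FRACTION

    print(f"Raw data generated:   {raw_tb_per_day:,} TB/day ({raw_tb_per_day / 1000:,.0f} PB/day)")
    print(f"Shipped to core site: {uploaded_tb_per_day:,.0f} TB/day")

Even a modest fleet with aggressive filtering at the edge implies hundreds of petabytes of raw data generated per day, which is the kind of expansion pressure Rathi described.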

Dr. Ghada Badawy, principal research engineer, CIRC, McMaster University

Network capacity to support billions of communicating devices with the high performance they require is another design consideration. The consequences of network overload were recently demonstrated at the Raptors victory parade in downtown Toronto – millions of fans in a concentrated area taking photos and videos swamped local cellular services. Dr. Ghada Badawy, principal research engineer at CIRC, pointed out that even a single vehicle contains many data sources: traffic data, weather data, personal health monitoring and even onboard entertainment such as Netflix in the car are some simple examples.

Software-defined data centres

Eugene Roman, IT leader at Canadian Tire, OpenText and Bell

Canadian tech icon Eugene Roman noted that the term data centre is a misnomer – the data centre is really the core engine for online, realtime digital business, not just an office expense. Joe Belinsky, VP of technology services at Moneris, referred to the software-defined future as managing ‘Infrastructure-as-Software’, and believes that implementing a software-defined data centre also requires that people change how they think about data centres. Peter Gross, VP at Bloom Energy, pointed out that the data centre industry is maturing and becoming more commoditized, and is increasingly focused on software-defined services as a means of managing costs and increasing efficiency.

Software-defined data centres have emerged as a best practice for IT modernization. The GCDCS19 panel on the “software-defined future” described this transformation as the shift from an asset-based model to a services-based infrastructure. The basic idea is to configure and optimize data centre resources and facilities (including power and cooling) using intelligent management software. Ronnie Scott, data centre and cloud architect for Cisco, argued that software-defined networking has been an active conversation in the industry for at least the last 18 months. Charlie Ancheta, a senior IT manager in the Ontario Government, indicated that the government has about 2,400 applications to support and is adopting a software-defined hybrid approach for its next-generation data centres, rather than simply re-building traditional data centres.

Software-defined data centres, when combined with artificial intelligence and data analytics, allow for granular control of facilities down to the rack level. A software-defined environment also permits more dynamic matching of supply to demand – for example, workloads can be re-allocated to optimize energy use. Software-defined systems also support rapid re-distribution of workloads to cloud data centres or to the edge nodes. According to Dr. Zheng, associate professor of computing and software at McMaster, software-defined data centres are on the critical path for future innovation. Joe Belinsky, however, reminded the audience that it is all about preparing people to accept change.
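
As a rough illustration of what matching supply to demand can look like in software, the sketch below consolidates workloads onto as few racks as possible so that idle racks can be powered down. This is a minimal Python sketch under stated assumptions: the rack capacities, workload sizes and first-fit-decreasing heuristic are hypothetical and do not describe any panelist’s system.

    # Minimal sketch of software-defined workload consolidation: pack workloads
    # onto the fewest racks so idle racks can be powered down. All capacities,
    # workload sizes and the placement heuristic are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Rack:
        name: str
        capacity_kw: float
        load_kw: float = 0.0
        workloads: list = field(default_factory=list)

        def fits(self, demand_kw: float) -> bool:
            return self.load_kw + demand_kw <= self.capacity_kw

    def consolidate(demands: dict, racks: list) -> list:
        """First-fit-decreasing placement: pack the largest workloads first."""
        for name, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
            target = next((r for r in racks if r.fits(demand)), None)
            if target is None:
                raise RuntimeError(f"no capacity for workload {name}")
            target.load_kw += demand
            target.workloads.append(name)
        # Racks left with no workloads are candidates for power-down.
        return [r for r in racks if not r.workloads]

    racks = [Rack("rack-1", 10.0), Rack("rack-2", 10.0), Rack("rack-3", 10.0)]
    demands = {"web": 4.0, "db": 6.0, "batch": 3.0, "cache": 2.0}
    idle = consolidate(demands, racks)
    print("Racks that can be powered down:", [r.name for r in idle])

In a real software-defined facility the same decision would be driven by telemetry and policy engines rather than a hard-coded heuristic, but the principle of dynamically matching supply to demand is the same.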

Data centres at the edge

In the plenary panel, Peter Gross indicated that many data centre experts are gearing up for a major expansion of edge computing. For example, edge nodes on 5G cellular towers could create 70 to 100 million (yes, million!) edge data centres.

The GCDCS19 panel on building an integrated cloud/edge strategy took the view that data centres have become an amalgam of widely distributed facilities that may include everything from the office closet to a cloud instance in a content delivery network. Asked if there is really any difference between edge computing and traditional distributed processing, the short answer from the panel was no, but panelists were quick to point to significant differences in the scale of processing at the edge today, and in the complexity of specialized edge capabilities and functionality. As with most simplistic questions, reality is often too complicated to be captured in a one-word answer.

While the aims of edge computing vary, reducing network latency and improving reliability apply to most use cases. Edge processing may be essential for industrial IoT-based robots or autonomous vehicles, while in other applications the trade-off between local processing capability and network costs is front and centre. One example of edge computing, provided by panelist Erwin van Hout, CTO of Toronto’s Sick Kids Hospital, is bed monitoring that uses on-site, AI-based analysis to detect and respond to events such as a patient falling out of bed. In this scenario, the speed of the response is the critical factor, indicating the need for local processing. For continuous glucose monitoring, on the other hand, readings may be collected, stored and sent in batches for further analysis at a later time. The cost of multiple data transfers can be minimized through storage and preliminary data analysis at the edge.
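
The two patterns described on the panel reduce to a simple dispatch rule at the edge node: act locally and immediately on critical events, and batch routine readings for cheaper bulk transfer. The Python sketch below is illustrative only; the threshold, batch size and alert/upload hooks are hypothetical stand-ins, not any hospital’s actual system.

    # Illustrative edge-node dispatch: respond locally to critical events
    # (e.g. a bed-exit alarm) and batch routine readings (e.g. glucose values)
    # for later upload. Threshold, batch size and hooks are assumptions.
    BATCH_SIZE = 100          # assumed number of readings per bulk upload
    CRITICAL_THRESHOLD = 0.9  # assumed score above which the node acts locally

    pending = []              # routine readings buffered at the edge

    def raise_local_alarm(reading):
        print("ALARM handled at the edge:", reading)

    def upload_batch(batch):
        print(f"Uploading {len(batch)} readings to the central data centre")

    def handle_reading(score, payload):
        """Act locally on critical events; batch everything else."""
        if score >= CRITICAL_THRESHOLD:
            raise_local_alarm(payload)    # latency matters: no network round trip
        pending.append(payload)
        if len(pending) >= BATCH_SIZE:
            upload_batch(pending.copy())  # cost matters: fewer, larger transfers
            pending.clear()

    # Example: a stream of routine readings with one critical event at position 42.
    for i in range(205):
        handle_reading(0.95 if i == 42 else 0.1, {"seq": i, "value": 0.1})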

Marie Josée Drouin, CIO, National Film Board of Canada

Edge computing also offers the ability to physically distribute data centre resources while maintaining the appearance of an integrated cloud delivery system. For the National Film Board of Canada, an early adopter of edge computing, edge nodes are most easily delivered by a large cloud provider with multiple locations from which to distribute local capabilities – as panelist and NFB CIO Marie Josée Drouin described it, the NFB acts as a consumer of specialized edge services. According to Jeff Cowan, chief technology officer for HCE Telecom, the processes for managing edge computing in his area will be complex and will need to be highly automated. Security and privacy could also become difficult problems if edge nodes sit in open, easily accessible locations. And according to Scott Killian, VP of efficient IT programs at the Uptime Institute, when edge computing becomes business as usual, we will stop talking about it.

Continuous innovation and adaptation

The plenary panel was asked what technology or practices would be needed to address projected data centre demand. Peter Gross noted that server disaggregation, hyperconvergence, fibre and photonics, and distributed resiliency are all important technologies. Stephen Abraitis pointed to containerization, homogeneous interfaces and more general application of as-a-service concepts, cautioning, however, that these are hard to deliver if legacy systems dominate the landscape. An interesting observation from Dr. Badawy was that the hundreds of thousands of edge data centres built from small racks could be located just about anywhere – in a self-driving car, for example, Netflix served from an edge data centre in the trunk is a definite possibility. In such a scenario, maintenance would be challenging, so edge data centres must be designed either for very high reliability and resilience or for remote servicing.

In his closing remarks, Eugene Roman argued that data centres will be a platform for Digital 2.0 innovation and transformation, which he called the most powerful disruption Canada has seen in the last 50 years. Unfortunately, the impacts and outcomes are still poorly understood. Though more than a trillion sensors will be networked by 2025, he explained, enterprises still have trouble dealing with newer technologies such as AI, Big Data and analytics. This is because, despite all the advances in technology, human expertise is still critical. So for data centre professionals who are able to stay current with technology advances, opportunities will continue to grow along with the data centre industry.

The bottom line

When asked to name one thing in the industry they would change if they held a magic wand, the plenary panelists’ responses were divided among education for everyone who resists change, the removal of network bottlenecks and the provision of risk-free migration capabilities. Francois Sterin even joked about improving the speed of light to address network issues around data transfer.

GCDCS19 lived up to its name again this year – it was indeed great!
