Data centres are hot stuff

Lynn Greiner reviews the power and cooling conversation at the Great Canadian Data Centre Symposium 2019.

Data centres are hot. Really hot.

Literally. And that’s a problem.

So said speakers at the 2019 Great Canadian Data Centre Symposium. Power and cooling came up time and time again during discussions about issues surrounding data centre operations, and for good reason. Forbes reports that global data centres accounted for about three percent of total worldwide electricity consumption in 2016, and at the end of 2017, Data | Economy predicted that by 2025, data centres will consume one fifth of the world’s power.

A big chunk of that usage will be – wait for it – running the cooling systems that compensate for the heat generated by all the electronics in the data centre. Yup – using power, which generates heat, to get rid of heat.
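To put rough numbers on that loop, the industry’s usual yardstick is power usage effectiveness (PUE) – total facility power divided by IT power. The speakers didn’t cite facility figures, so the values in this back-of-the-envelope sketch are purely illustrative:

```python
# Illustrative PUE arithmetic; both figures below are invented.
# PUE = total facility power / IT equipment power.

it_load_kw = 1_000            # draw of the servers, storage and network
pue = 1.5                     # a middling PUE; efficient sites approach 1.1

total_facility_kw = it_load_kw * pue
overhead_kw = total_facility_kw - it_load_kw    # cooling, lighting, losses

print(f"Total facility draw: {total_facility_kw:.0f} kW")
print(f"Overhead (largely cooling): {overhead_kw:.0f} kW "
      f"({overhead_kw / total_facility_kw:.0%} of the total)")
```

At a PUE of 1.5, a full third of the facility’s draw never touches a workload – it goes to moving heat.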

The big issue, said François Sterin, chief industrial officer at cloud provider OVH, is power density in the data centre. The more equipment there is in a given space, the hotter it gets. And that’s not good for the equipment. Added Peter Gross, VP, Bloom Energy, “With cloud, with mirrored operations, with availability zones, all these things are changing the data centre. It’s not a binary thing. It’s not just on or off anymore. We have the ability to operate in stages.” And that allows better control over power consumption in operations and cooling.

Dr. Ghada Badawy, principal research engineer at CIRC, thinks that autonomous monitoring and control must be in place, since the proliferation of computing devices will far outstrip people’s ability to manage them. “The data centre has to be able to control itself, cooling has to be able to control itself,” she said.
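Badawy didn’t describe an implementation, but the kind of self-regulating loop she’s pointing at can be sketched in a few lines. This is a minimal illustration with simulated sensor and actuator stand-ins, not CIRC’s actual system:

```python
import random
import time

TARGET_INLET_C = 24.0    # desired rack inlet temperature
DEADBAND_C = 1.0         # tolerance band before the loop reacts

def read_inlet_temp() -> float:
    """Stand-in for a real sensor query (IPMI, SNMP, etc.)."""
    return random.uniform(22.0, 27.0)    # simulated reading

def adjust_fan_speed(delta_pct: int) -> None:
    """Stand-in for a real command to the cooling units."""
    print(f"fan speed {delta_pct:+d}%")

for _ in range(10):                      # a real loop would run continuously
    temp = read_inlet_temp()
    if temp > TARGET_INLET_C + DEADBAND_C:
        adjust_fan_speed(+5)             # too hot: push more cold air
    elif temp < TARGET_INLET_C - DEADBAND_C:
        adjust_fan_speed(-5)             # overcooled: save fan energy
    time.sleep(1)                        # production might poll every 30 s
```

A production controller would be far more sophisticated – predictive models, many sensors, safety interlocks – but the principle is the same: the room regulates itself without a human in the loop.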

Traditional cooling (hot aisle/cold aisle) can’t always keep up anymore either. OVH has used liquid cooling in its data centres for the past 15 years and now uses it to cool 300,000 commoditized servers. But, Sterin explained, that means bringing water into the IT room – typically a bad idea.

“Ninety percent of people don’t talk to me after I tell them that. They think I’m crazy,” he laughed. “But we’ve developed operational knowledge of how to deal with that. That’s actually what Open19 (an open standard for data centre design) is doing.”

Sean Maskell, president and general manager, Cologix Canada

Sean Maskell, president and general manager of Cologix Canada, which runs multi-tenant data centres, acknowledged that even designing the facilities has changed. It’s no longer possible to design only for day one, as was done in the past. Especially in colo environments, where operators may not know what the next tenant will require, everything has to be flexible and scalable. And it has to be designed for the long term.

“We have to be very thoughtful about how we build our facilities,” he said. “Do we have enough power on day one that we can scale for the customer? Do we have enough cooling capacity to allow our business to grow? We can build some flexible and scalable solutions that enable us to manage quick upgrades to power and cooling, which allow us to add high-density racks in areas that typically weren’t built for that. We used to see on average 4 kW racks, we’re now seeing on average 8 to 12 kW, with some that are greater than 20 kW.”
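Those density numbers translate directly into a planning exercise. A simple capacity check – with entirely invented room budgets and rack counts – shows why a mix of 4 kW and 20 kW racks forces operators to track power and cooling headroom together:

```python
# Hypothetical capacity check for a colo room; every figure is invented.
ROOM_POWER_BUDGET_KW = 600     # usable critical power for the room
ROOM_COOLING_BUDGET_KW = 600   # heat the cooling plant can reject

planned_racks = {              # rack density (kW) -> number of racks
    4: 40,                     # legacy low-density racks
    10: 25,                    # today's average 8-12 kW racks
    20: 8,                     # high-density rows
}

demand_kw = sum(kw * count for kw, count in planned_racks.items())
print(f"Planned IT load:  {demand_kw} kW")
print(f"Power headroom:   {ROOM_POWER_BUDGET_KW - demand_kw} kW")
# Nearly every watt delivered becomes heat, so cooling must keep pace:
print(f"Cooling headroom: {ROOM_COOLING_BUDGET_KW - demand_kw} kW")
```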

The solution? “We’re looking at software-defined everything,” Maskell explained.

Gross pointed out that while we’ve had software-defined compute, network and storage for some time, software-defined power is, for him, the ‘next big thing’.

“It’s probably the most fascinating new development in this industry,” he said. “Business has been trying to accomplish something like this for a very long time. For the first time we do indeed see the emergence of software-defined power.”

Maskell is already controlling power with software to some extent. By provisioning more power than a customer contracts for and imposing software-controlled thresholds, he eliminates several problems. If the customer’s usage approaches 80 percent of what they’re paying for, Cologix gets an alarm and can contact the customer to ask if they want to buy more capacity. And if the customer happens to spike over their allocated capacity, there isn’t a problem – no tripped breakers – because additional power was already available to them.
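The mechanism Maskell describes is straightforward to sketch. The function names and figures below are hypothetical – Cologix hasn’t published its implementation – but the logic follows his description: alarm at 80 percent of contract, and absorb spikes with pre-provisioned headroom:

```python
# Sketch of contract-threshold power monitoring as Maskell describes it.
# The 80% alarm comes from the article; everything else is invented.
ALARM_FRACTION = 0.80

def check_customer_power(contracted_kw: float, provisioned_kw: float,
                         measured_kw: float) -> str:
    """Classify a customer's draw against contractual and physical limits."""
    if measured_kw > provisioned_kw:
        return "FAULT: beyond provisioned capacity (breaker risk)"
    if measured_kw > contracted_kw:
        # A spike above contract but below the physical provision:
        # nothing trips, because the extra power was already there.
        return "OVERAGE: absorbed by headroom; follow up with customer"
    if measured_kw >= ALARM_FRACTION * contracted_kw:
        return "ALARM: at 80% of contract; offer more capacity"
    return "OK"

# A customer contracted for 10 kW on a circuit provisioned for 14 kW:
for draw_kw in (6.0, 8.5, 11.0, 15.0):
    print(f"{draw_kw:>4.1f} kW -> {check_customer_power(10.0, 14.0, draw_kw)}")
```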

The idea of software-defined power isn’t new. A 2013 article in Data Center Knowledge by Clemens Pfeiffer, CTO of the now-defunct Power Assure, described the concepts the company had been developing since 2007, which analysts of the time described as forward-looking. Power Assure, which concentrated on the energy management component of Data Center Infrastructure Management (DCIM), ran out of money in 2014; analysts speculated that the market just wasn’t ready for the new mindset and new processes required.

Peter Gross, VP, Bloom Energy

It is now. Gross said that the biggest problem he’s seen in data centres is overprovisioning, an expensive proposition. And for co-location data centres in particular, it’s the inability to dynamically modulate the level of reliability to accommodate the needs (and wallet) of the next client or workload.

“The answer,” he said, “is software-defined power. One of the things that I believe is that when it comes to facilities in data centres, the emphasis is going to move from centralized cooling, centralized power and centralized distribution to more granular control at the rack level. You’re going to be able to control the level of reliability, the density and the power consumption, and eliminate overprovisioning, by controlling the behavior of the rack itself instead of having massive centralized cooling systems and power systems and so on.”

Software-defined power today, he explained, is a platform that aggregates, pools and manages power resources, and dynamically matches supply and demand. It has components at the rack level to supplement the power delivered to address peaks. This will lead to what he thinks is the next step: being able to dynamically move workloads from place to place in the data centre (or around the world) depending on their reliability requirements, the cost of energy, or other factors.
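That next step is essentially a placement problem: given a workload’s reliability floor, pick the cheapest site with capacity to spare. A toy scheduler – the site data and the selection rule are invented for illustration – might look like this:

```python
# Toy workload placement in the spirit of Gross's "next step".
# The sites, their figures, and the costing rule are all invented.
SITES = [
    {"name": "Site A", "reliability": 0.9999,  "cost_kwh": 0.11, "free_kw": 120},
    {"name": "Site B", "reliability": 0.999,   "cost_kwh": 0.07, "free_kw": 300},
    {"name": "Site C", "reliability": 0.99999, "cost_kwh": 0.18, "free_kw": 80},
]

def place(workload_kw: float, min_reliability: float) -> str:
    """Pick the cheapest site that meets the reliability floor and has room."""
    eligible = [s for s in SITES
                if s["reliability"] >= min_reliability
                and s["free_kw"] >= workload_kw]
    if not eligible:
        raise RuntimeError("no site satisfies the workload's requirements")
    return min(eligible, key=lambda s: s["cost_kwh"])["name"]

print(place(50, min_reliability=0.999))    # batch job: cheapest power wins
print(place(50, min_reliability=0.9999))   # critical service: pays for uptime
```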

But customers shouldn’t just look at tomorrow when considering requirements. Maskell advised companies to look as far ahead as possible when planning their business and infrastructure – at least 24 to 36 months – and to make decisions as future-proof as possible. “Don’t look at what your need is today,” he said. “But look at where that’s going to be 36 months from now, and ensure flexibility is built into that solution.”

“Because,” said Gross, “power is becoming the key to the industry’s survival.”
