Google using DeepMind AI to reduce energy consumption by 30%

Google is using the Internet of Things (IoT) and artificial intelligence from its DeepMind acquisition to reduce energy consumption in its data centres by as much as 30 percent.

The programme could be highly significant: a 2017 report from Climate Home News suggested that data centres and the wider communications sector could consume 20 percent of the world’s electricity by 2025.

In 2016, Google and DeepMind jointly developed an AI-powered recommendation system to improve the energy efficiency of the data centres, which run popular applications such as search, Gmail, and YouTube. Now, however, DeepMind’s AI directly controls cooling in those facilities.

Every five minutes, the cloud-based AI takes a snapshot of the cooling system from thousands of sensors and feeds it into DeepMind’s deep neural networks, which predict how different combinations of potential actions will affect future energy consumption.

The AI system then identifies which actions will minimise energy usage while also satisfying a robust set of safety constraints. Those actions are sent back to the data centre, where they are verified by the local control system before being implemented.
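The loop described above – snapshot, predict, filter by safety constraints, pick the lowest-energy action – can be sketched roughly as follows. All names and data structures here are illustrative assumptions, not details of Google’s actual system:

```python
# Illustrative sketch of the five-minute control loop. The function names,
# constraint format, and action representation are all hypothetical.

def satisfies_constraints(action, constraints):
    """An action is acceptable only if every safety check passes."""
    return all(check(action) for check in constraints)

def choose_action(snapshot, candidate_actions, predict_energy, constraints):
    """Return the safe candidate predicted to use the least energy.

    `predict_energy` stands in for the deep neural networks that map a
    sensor snapshot plus a candidate action to predicted consumption.
    """
    safe = [a for a in candidate_actions
            if satisfies_constraints(a, constraints)]
    return min(safe, key=lambda a: predict_energy(snapshot, a))
```

In practice the chosen action would then be sent back to the data centre for a further local verification step before being implemented, as the article describes.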

“The idea evolved out of feedback from our data centre operators, who had been using our AI recommendation system,” said a statement from the company.

“They told us that although the system had taught them some new best practices – such as spreading the cooling load across more, rather than less, equipment – implementing the recommendations required too much operator effort and supervision. Naturally, they wanted to know whether we could achieve similar energy savings without manual implementation.”

Operators on the line

Google data centre operator Dan Fuenffinger explained, “We wanted to achieve energy savings with less operator overhead. Automating the system enabled us to implement more granular actions at greater frequency, while making fewer mistakes.”

The AI control system has found novel ways to manage cooling that have surprised even experienced staff, such as Fuenffinger. He said, “It was amazing to see the AI learn to take advantage of winter conditions and produce colder than normal water, which reduces the energy required for cooling within the data centre. Rules don’t get better over time, but AI does.”

Ensuring that the many thousands of servers in each of Google’s data centres run reliably and efficiently is mission-critical to its day-to-day operations, so the company is keen to take advantage of these new efficiencies, while keeping a constant eye on safety.

“We’ve designed our AI agents and the underlying control infrastructure from the ground up with safety and reliability in mind,” said DeepMind, “and use eight different mechanisms to ensure the system will behave as intended at all times.”

Estimating the unknown

One method implemented by the deep learning specialists is to estimate uncertainty in the system. For each potential action – and there are billions – the AI agent calculates its confidence that it will be a good action. Actions with low confidence scores are eliminated from consideration on safety grounds.
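The confidence-based pruning described above amounts to a simple filter over candidate actions. A minimal sketch, assuming a hypothetical `confidence` function and threshold (neither is specified by DeepMind):

```python
def filter_by_confidence(actions, confidence, threshold=0.9):
    """Discard actions the agent is not confident are good.

    `confidence` maps an action to a score in [0, 1]; the threshold
    value here is an assumption for illustration only.
    """
    return [a for a in actions if confidence(a) >= threshold]
```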

Another method is two-layer verification: optimal actions computed by the AI are vetted against an internal list of safety constraints defined by a data centre’s operators. Once the instructions are sent from the cloud to the data centre, the local control system verifies the instructions against its own set of operating and safety constraints.
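The two-layer check can be pictured as two independent constraint lists – one vetted in the cloud against operator-defined rules, one re-checked by the local control system. A hedged sketch, with all names invented for illustration:

```python
def two_layer_verify(action, cloud_constraints, local_constraints):
    """Return True only if an action passes both verification layers.

    Layer 1: cloud-side vetting against operator-defined safety constraints.
    Layer 2: the on-site control system re-checks against its own
    operating and safety limits before anything is implemented.
    """
    if not all(ok(action) for ok in cloud_constraints):
        return False
    return all(ok(action) for ok in local_constraints)
```

The point of the redundancy is that either layer alone can veto an action, so a fault in one layer cannot push the plant outside safe operating bounds.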

This redundant check ensures that the system remains within local guidelines for safe operation and Google’s human operators retain full control of the operating boundaries.

“Our data centre operators are always in control and can choose to exit AI control mode at any time,” explained DeepMind. “In these scenarios, the control system will transfer seamlessly from AI control to the on-site rules and heuristics that define the automation industry today.”

In other words, although the AI now controls the cooling system – albeit with the capacity for human intervention – DeepMind has deliberately constrained its optimisation boundaries to prioritise safety and reliability, which means accepting a risk/reward trade-off in terms of energy reductions, according to the company.

If the AI does violate safety protocols, the system automatically fails over smoothly to a neutral state, supported by continuous monitoring and constant communication.
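The failover behaviour can be sketched as a guard around the apply step: a verified action is implemented under AI control, while anything else drops the plant back to a neutral, rules-based state. The function and callback names below are assumptions for illustration:

```python
def apply_with_failover(action, verify, apply, fall_back_to_neutral):
    """Apply an AI-chosen action, failing over to a neutral state.

    `verify` is the final local safety check; `fall_back_to_neutral`
    stands in for the on-site rules and heuristics the article mentions.
    Returns which control mode ended up in effect.
    """
    if verify(action):
        apply(action)
        return "ai_control"
    fall_back_to_neutral()
    return "neutral_state"
```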

Escalating savings

Despite being in place for only nine months, the AI is currently delivering average energy savings of around 30 percent, up from 12 percent at the start of the programme. The system performs better as more data is gathered, so the 30 percent figure could continue to climb.

“Our optimisation boundaries will also be expanded as the technology matures, for even greater reductions,” said DeepMind. “We’re excited that our direct AI control system is operating safely and dependably, while consistently delivering energy savings.”

Internet of Business says

The technology’s successful application in a single type of business suggests that data centres could be just the beginning for AI and deep learning in the field of energy efficiency.

In the long term, there is obvious potential for the technology to be applied in other industrial settings, including within the energy sector itself – or smart city deployments – just as scientists are warning of an approaching tipping point in climate change.

Moreover, DeepMind’s adherence to safety guidelines and its deliberate limiting of the system to minimise risk are instructive, as is building in human agency as a failsafe: a sensible model for future implementations.

Chris Middleton
Chris Middleton is former editor of Internet of Business, and now a key contributor to the title. He specialises in robotics, AI, the IoT, blockchain, and technology strategy. He is also former editor of Computing, Computer Business Review, and Professional Outsourcing, among others, and is a contributing editor to Diginomica, Computing, and Hack & Craft News. Over the years, he has also written for Computer Weekly, The Guardian, The Times, PC World, I-CIO, V3, The Inquirer, and Blockchain News, among many others. He is an acknowledged robotics expert who has appeared on BBC TV and radio, ITN, and Talk Radio, and is probably the only tech journalist in the UK to own a number of humanoid robots, which he hires out to events, exhibitions, universities, and schools. Chris has also chaired conferences on robotics, AI, IoT investment, digital marketing, blockchain, and space technologies, and has spoken at numerous other events.