Climate change is one of the most pressing existential threats humanity faces, and AI can attack problems in unprecedented ways. It only makes sense to apply the latter to the former. That was a significant sub-theme of NeurIPS 2019, highlighted in particular by the Tackling Climate Change workshop, and a talk from some prominent leaders in machine learning suggests that the ML field can and should focus on it.
There are two thrusts: one urges machine learning practitioners to aim their research at climate change problems; the other seeks to ensure that performing the research itself doesn’t ironically contribute to climate change.
On the first thrust, the aforementioned talk, which featured Yoshua Bengio, Jeff Dean, Andrew Ng, and Carla Gomes, covered a lot of ground. The speakers argued that ML researchers should focus more on solving huge challenges like climate change and less on racking up publications. They pointed to techniques that do the work of machine learning more energy-efficiently, such as achieving accurate results with smaller data sets and using transfer learning, self-supervised learning, and multitask learning. And they advocated working with people outside the field of machine learning to attack climate change.
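To make one of those suggestions concrete, here is a minimal sketch of transfer learning with a frozen pretrained backbone, assuming PyTorch and torchvision are installed; the ResNet-18 choice and the 10-class head are illustrative, not anything the speakers prescribed:

```python
# Transfer learning as an energy saver: reuse a pretrained backbone,
# freeze it, and train only a small task head. (Illustrative sketch;
# model and task sizes are assumptions, not from the talk.)
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # pretrained weights

# Freeze the backbone so its gradients (and their compute cost) are skipped.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classifier with a small head for the new task.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the head's parameters are optimized -- far fewer operations per
# training step than learning the full network from scratch.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```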
The second thrust presents a different sort of challenge to ML researchers and practitioners. Essentially, the idea is that many have focused on achieving results however they can, and because training and running models is computationally intensive, doing so can burn a great deal of energy. “[…] Some of these large models are computationally intensive, and they’re reasonably expensive in terms of energy usage,” Dean told VentureBeat in an interview at NeurIPS. “And so I think it’s important for the community to look at what are more efficient algorithmic techniques we can have that make a particular model or outcome that we want,” he added.
As a means of accountability, and a way for conscientious ML practitioners to measure their carbon emissions, a group of researchers created ML CO2 Impact, a machine learning emissions calculator. Born of their paper, Quantifying the Carbon Emissions of Machine Learning, the tool is how Alexandre Lacoste et al. wanted to call attention to the carbon impact of ML training.
The calculator has a simple web-based interface. You select your hardware type (Tesla V100, Titan V, etc.), the number of hours you’re going to use it, the provider (Google Cloud, Amazon Web Services, etc.), and even the region. Then the Compute button sits there, almost daring you to click it. When you work up the courage to do so, the calculator spits out an estimate.
The result tells you how much carbon your project will emit and how much (if any) of that is offset by the provider. You can see the formula the calculator uses, and it offers a suggestion for a more efficient configuration; in the example we ran, simply training in a different geographic region would have cut emissions significantly.
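For a sense of the arithmetic behind such a tool, here is a toy sketch in Python. The power-draw and carbon-intensity numbers, the region names, and the offset handling are all rough assumptions for illustration, not values or logic taken from ML CO2 Impact itself:

```python
# Toy version of an emissions estimate:
# emissions = power draw x runtime x regional carbon intensity,
# optionally reduced by the provider's offsets.

GPU_POWER_KW = {"Tesla V100": 0.300, "Titan V": 0.250}       # approx. TDP (assumed)
CARBON_INTENSITY = {"us-east-1": 0.37, "europe-west6": 0.02}  # kg CO2eq/kWh (assumed)

def estimate_emissions(gpu: str, hours: float, region: str,
                       offset_fraction: float = 0.0) -> float:
    """Return estimated kg CO2eq for a training run."""
    energy_kwh = GPU_POWER_KW[gpu] * hours
    gross = energy_kwh * CARBON_INTENSITY[region]
    return gross * (1.0 - offset_fraction)

# 100 hours on a V100: the region choice alone changes the result ~18x.
print(estimate_emissions("Tesla V100", 100, "us-east-1"))     # ~11.1 kg
print(estimate_emissions("Tesla V100", 100, "europe-west6"))  # ~0.6 kg
```

Even this crude version makes the calculator’s central point: where you run a job can matter more than how long you run it.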
Such a calculator can be both a useful tool for those working in ML and a philosophical reminder of the responsibility they bear for their work. Other researchers are pushing for ways to perform this work more efficiently without sacrificing accuracy.
A 2017 paper, Efficient Processing of Deep Neural Networks: A Tutorial and Survey, laid out ways to reduce the computational demands of deep neural networks (DNNs), including changes to hardware design, joint co-design of hardware and algorithms, and changes to the algorithms themselves. It also explains the benchmarks and comparison metrics available for evaluating these techniques.
The paper’s first author, Vivienne Sze, a professor of electrical engineering and computer science at MIT, gave a slide presentation on the group’s work at NeurIPS 2019 that is publicly accessible in its entirety; she said the material will soon be published as a book. That team also built a calculator of sorts, called Accelergy, which helps researchers and practitioners estimate the energy consumption of their accelerator designs.
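At a high level, this style of estimation amounts to counting how often each hardware component performs an action and multiplying by that action’s energy cost. The sketch below illustrates the bookkeeping with made-up component names and per-action energies; it is not Accelergy’s actual interface or data:

```python
# Simplified illustration of component-level energy accounting:
# total energy = sum over components of (action count x energy per action).
# All figures below are placeholder assumptions.

ENERGY_PER_ACTION_PJ = {   # picojoules per action (assumed)
    "mac_unit": 2.2,
    "sram_read": 10.0,
    "dram_read": 200.0,
}

action_counts = {          # counts from a hypothetical workload profile
    "mac_unit": 5_000_000,
    "sram_read": 1_200_000,
    "dram_read": 40_000,
}

total_pj = sum(ENERGY_PER_ACTION_PJ[c] * n for c, n in action_counts.items())
print(f"Estimated energy: {total_pj / 1e6:.2f} microjoules")
```

Note how DRAM reads, though far fewer in number, contribute energy on par with millions of arithmetic operations; that imbalance is why data movement dominates so many accelerator designs.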
There are many ways to push toward more efficient machine learning. In an article anchored by Facebook AI chief scientist Yann LeCun’s assertion that AR glasses present an ideal challenge for machine learning, VentureBeat senior AI writer Khari Johnson pulled together a veritable laundry list of resources and ideas from NeurIPS. They include, but are not limited to, tools for fast deployment on edge devices, quantization for deep neural networks, the idea of creating a compute-per-watt standard for machine learning projects (from Intel AI GM Naveen Rao), adapting industry-level energy analysis for computer scientists, and the suggestion that the carbon footprint of a project should be included in academic paper submissions.
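Of those ideas, quantization is perhaps the easiest to show in a few lines. Here is a generic sketch of symmetric post-training weight quantization in NumPy; production toolchains also handle activations, calibration, and per-channel scales, so treat this as illustration only:

```python
# Map float32 weights to int8 to cut storage (and, on supporting
# hardware, compute) roughly 4x, at the cost of a small rounding error.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).mean()
print(f"Storage: {w.nbytes} -> {q.nbytes} bytes, mean abs error {error:.5f}")
```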
As is so often the case, in addition to the moral impetus for working on climate change and making machine learning research and training more efficient, there are also practical financial reasons for doing so.
As part of an earlier interview with VentureBeat, Sanjeev Katariya, eBay’s VP and chief architect of AI and platforms, talked about why it makes business sense to be responsible with AI. The company’s Krylov AI platform demands a great deal of resources, and he said there’s no advantage in being inefficient. If Krylov’s design were lacking, he said, “We will be copying data all over the place, producing duplicates all over the place, and we will not be efficient in how we run our models, how we build our models, how we collaborate across our models. So black box systems, isolated systems, chew up resources like there’s no tomorrow.”
As if he were writing his own tagline in real time, Katariya summarized it: “Be efficient. Be smart. Use less. Do more.”
It’s an easy sell: Use your AI platform and capabilities to make your business processes more efficient, and spend less money while providing a better product or service. That’s the definition of a win-win.
When you factor in the environmental benefits, it’s actually a win-win-win. And that’s the great strength of this sort of challenge, because we’re all in this together. When it comes to climate change, either we all win, or we all lose.
Some of machine learning’s leaders have thrown down the gauntlet for the field: Address climate change in your work, and don’t contribute to the problem while doing so.
Author: Seth Colaner
Source: VentureBeat