
How much energy does AI use compared to humans? Surprising study ignites controversy

AI’s carbon footprint is no open-and-shut case, according to scientists from the University of California, Irvine and MIT. Their paper, published earlier this year on the open-access site arXiv.org, challenges assumptions about the energy use of generative AI models and set off a debate among leading AI researchers and experts this past week.

The paper found that when producing a page of text, an AI system such as ChatGPT emits 130 to 1,500 times less carbon dioxide equivalent (CO2e) than a human performing the same task.

Similarly, when creating an image, an AI system such as Midjourney or OpenAI’s DALL-E 2 emits 310 to 2,900 times less CO2e.

The paper concludes that AI has the potential to carry out several major activities with far lower emissions than humans can.

However, an ongoing dialogue among AI researchers reacting to the paper this week also highlights how accounting for interactions between climate, society, and technology poses immense challenges warranting continual reexamination.

From blockchain to AI models, environmental effects need to be measured

In an interview with VentureBeat, the authors of the paper, University of California, Irvine professors Bill Tomlinson and Don Patterson, and MIT Sloan School of Management visiting scientist Andrew Torrance, offered some insight into what they were hoping to measure.

Tomlinson said the paper, originally published in March, has been submitted to the research journal Scientific Reports, where it is currently under peer review.

The study authors analyzed existing data on the environmental impact of AI systems, human activities, and the production of text and images, drawn from prior studies and databases on how AI and humans affect the environment.

For example, they used an informal online estimate that ChatGPT’s traffic of 10 million queries generates roughly 3.82 metric tons of CO2e per day, while also amortizing the model’s training footprint of 552 metric tons of CO2e. For further comparison, they included data from a low-impact LLM called BLOOM.

On the human side, they used the annual carbon footprints of average residents of the US (15 metric tons) and India (1.9 metric tons), prorating those per-capita emissions over the estimated time it would take a person to write a page of text or create an image.
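To make the paper’s back-of-the-envelope method concrete, here is a minimal sketch of the comparison. The query volume, daily operational emissions, training footprint, and annual human footprints come from the figures above; the amortization window and the assumed writing time are placeholder assumptions made purely for illustration, so the resulting ratios will shift with those choices rather than exactly reproducing the paper’s 130-to-1,500x range.

```python
# Back-of-envelope sketch of the paper's style of estimate. Figures marked
# "(article)" are reported above; the amortization window and writing time
# are placeholder assumptions made here for illustration only.

QUERIES_PER_DAY = 10_000_000   # informal ChatGPT traffic estimate (article)
DAILY_OPERATION_T = 3.82       # metric tons CO2e per day of inference (article)
TRAINING_T = 552.0             # one-time training footprint, tons CO2e (article)
AMORTIZATION_DAYS = 365        # assumed window for spreading the training cost
PAGE_HOURS = 0.8               # assumed time for a person to write one page

GRAMS_PER_TON = 1_000_000
HOURS_PER_YEAR = 365 * 24

def ai_grams_per_page() -> float:
    """Per-query operational emissions plus an amortized share of training."""
    operation = DAILY_OPERATION_T * GRAMS_PER_TON / QUERIES_PER_DAY
    training = TRAINING_T * GRAMS_PER_TON / (AMORTIZATION_DAYS * QUERIES_PER_DAY)
    return operation + training

def human_grams_per_page(annual_tons: float) -> float:
    """Prorate a person's annual footprint over the hours spent on one page."""
    return annual_tons * GRAMS_PER_TON / HOURS_PER_YEAR * PAGE_HOURS

ai = ai_grams_per_page()
for label, annual in [("US", 15.0), ("India", 1.9)]:
    human = human_grams_per_page(annual)
    print(f"{label}: human {human:,.0f} g vs AI {ai:.2f} g "
          f"-> ~{human / ai:,.0f}x more CO2e")
```

The sensitivity of the output to the assumed amortization window and writing speed is itself instructive: small changes in either assumption move the ratio by an order of magnitude, which is part of what the critics below object to.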

The researchers emphasized the importance of measuring the carbon emissions of activities like AI use in order to inform policymaking on sustainability.

“Without an analysis like this, we can’t make any reasonable kinds of policy decisions about how to guide or govern the future of AI,” Patterson told VentureBeat in an exclusive phone interview. “We need some sort of grounded information, some data from which we can take the next step.”

Tomlinson also highlighted the personal questions that inspire the work. “I would like to be able to live within the scope of what the environment of the Earth can support,” he said. “Maybe use [AI] as a creative medium without doing a terrible amount of harm… but if it’s doing a lot of harm, I will stop doing AI work.”

Patterson added some context around their previous analysis of blockchain technology. “The environmental impact of proof-of-work algorithms has been in the news quite a bit. And so I think it’s sort of a natural progression to think about environmental impacts, and these other really enormous, society-wide tools like large language models.”

When asked about variables that might flip the surprising outcome found in the paper, Tomlinson acknowledged the possibility of “rebound effects,” where greater efficiency leads to increased usage.

He envisioned “a world in which every piece of media that we ever watch or ever consume is dynamically adapted to your exact preferences so that all the characters look slightly like you and the music is slightly attuned to your tastes, and all of the themes slightly reaffirm your preferences in various different ways.” 

Torrance noted that “we live in a world of complex systems. An unavoidable reality of complex systems is the unpredictability of the results of these systems.” 

He framed their work as considering “not one, not two, but three different complex systems”: climate, society, and AI. Their finding that AI may lower emissions “may seem surprising to many people,” he said, but in the context of three colliding complex systems it is entirely reasonable that people would have guessed the answer incorrectly.

The ongoing debate

The paper attracted more attention among the AI community this week when Meta’s chief AI scientist Yann LeCun posted a chart from it on X (formerly Twitter) and used it to assert that “using generative AI to produce text or images emits 3 to 4 orders of magnitude *less* CO2 than doing it manually or with the help of a computer.”

LeCun’s post drew pushback from critics of the study’s methodology of comparing the carbon emissions of humans to those of AI models.

Among them was Sasha Luccioni, AI researcher and climate lead at Hugging Face, who posted on X: “Man, this preprint is really the gift that keeps on giving. In case people missed my previous PSA: you can’t compare the carbon emissions of people and objects. Humans are more than just the work that they do. (Also, that paper makes a lot of false assumptions in general.)” https://t.co/bZA414J9YI

“You can’t just take an individual’s total carbon footprint estimate for their whole life and then attribute that to their profession,” Luccioni said in a call with VentureBeat. “That’s the first fundamental thing that doesn’t make sense. And the second thing is, comparing human footprints to life cycle assessments or energy footprints doesn’t make sense, because, I mean, you can’t compare humans to objects.”

Life cycle analysis is still early, real-world data remains scarce

On quantifying human emissions, Patterson acknowledged that “doing any sort of total energy expenditure kind of analysis is tough, because everything’s interconnected.” Tomlinson agreed that boundaries must be set, but argued that “there is an entire field called life cycle assessment, which we engage more with in the paper under peer review.”

Luccioni agrees that this work has to be done, but argues that the approach the study authors took was flawed. Beyond the bluntness of directly comparing humans to AI models, she pointed out that the data needed to accurately quantify these environmental effects remains hidden and proprietary. She also noted, perhaps somewhat ironically, that the researchers used her own work to gauge the carbon emissions of the BLOOM language model.

Without access to key details about hardware usage, energy consumption, and energy sources, reliable carbon footprint estimates are impossible. “If you’re missing any of those three numbers, it’s not a carbon footprint estimate,” said Luccioni.
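To make that point concrete, the basic arithmetic of a footprint estimate requires exactly the inputs Luccioni lists. Below is a minimal sketch with hypothetical numbers; none of the figures describe any real model.

```python
def carbon_footprint_kg(device_hours: float,
                        avg_power_kw: float,
                        grid_kg_co2e_per_kwh: float) -> float:
    """kg CO2e = energy consumed (kWh) x carbon intensity of the grid (kg/kWh).

    device_hours: how long the hardware ran (e.g., GPU-hours)
    avg_power_kw: average power draw of that hardware, in kilowatts
    grid_kg_co2e_per_kwh: emissions per kWh where the hardware runs
    """
    energy_kwh = device_hours * avg_power_kw
    return energy_kwh * grid_kg_co2e_per_kwh

# Hypothetical numbers: 1,000 GPU-hours at 0.4 kW on a grid emitting
# 0.4 kg CO2e per kWh -> 160 kg CO2e. Remove any one of the three inputs
# and the estimate cannot be computed at all, which is Luccioni's point.
print(carbon_footprint_kg(1000, 0.4, 0.4))
```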

The greatest issue is a lack of transparency from tech companies, Luccioni explained: “We don’t have any of this information for GPT. We don’t know how big it is. We don’t know where it’s running. We don’t know how much energy it’s using. We don’t know any of that.” Without open data sharing, the carbon impact of AI will remain uncertain.

The researchers emphasized taking a transparent, science-based approach to these complex questions rather than making unsubstantiated claims. According to Torrance, “science is an agreed on approach to asking and answering questions that comes with a transparent set of rules…we welcome others to test our results with science or with any other approach they prefer.”




Author: Bryson Masse
Source: Venturebeat
Reviewed By: Editorial Team
