Can you choose an AI model that harms the planet less?

The Star Online - Tech·2025-06-19 14:00

From uninvited results at the top of your search engine queries to offering to write your emails and helping students do homework, generative artificial intelligence is quickly becoming part of daily life as tech giants race to develop the most advanced models and attract users.

All those prompts come with an environmental cost: A report last year from the Energy Department found AI could help increase the portion of the nation’s electricity supply consumed by data centres from 4.4% to 12% by 2028. To meet this demand, some power plants are expected to burn more coal and natural gas.

And some chatbots are linked to more greenhouse gas emissions than others. A study published Thursday in the journal Frontiers in Communication analysed different generative AI chatbots’ capabilities and the planet-warming emissions generated from running them. Researchers found that chatbots with bigger “brains” used exponentially more energy and answered questions more accurately – but only up to a point.

“We don’t always need the biggest, most heavily trained model to answer simple questions. Smaller models are also capable of doing specific things well,” said Maximilian Dauner, a doctoral student at the Munich University of Applied Sciences and lead author of the paper. “The goal should be to pick the right model for the right task.”

The study evaluated 14 large language models, a common form of generative AI often referred to by the acronym LLMs, by asking each a set of 500 multiple choice and 500 free response questions across five different subjects. Dauner then measured the energy used to run each model and converted the results into carbon dioxide equivalents based on global averages.

In most of the models tested, questions in logic-based subjects, like abstract algebra, produced the longest answers – which likely means they used more energy to generate compared with fact-based subjects, such as history, Dauner said.
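The conversion step described above – measured electricity use multiplied by an average grid carbon intensity – can be sketched as follows. This is a minimal illustration of the general method, not the paper’s actual code; the intensity figure and the example energy value are placeholder assumptions, not numbers from the study.

```python
# Illustrative sketch of converting measured energy to CO2 equivalents.
# The carbon intensity below is an assumed placeholder for a global
# average, not a value taken from the study.

GLOBAL_AVG_CARBON_INTENSITY = 0.48  # kg CO2e per kWh (assumed)

def energy_to_co2e(energy_kwh: float,
                   intensity: float = GLOBAL_AVG_CARBON_INTENSITY) -> float:
    """Convert electricity consumption (kWh) into kg of CO2 equivalents."""
    return energy_kwh * intensity

# Example: a hypothetical batch of prompts that consumed 2.5 kWh in total
print(round(energy_to_co2e(2.5), 3))  # 1.2 kg CO2e
```

As the article notes, this kind of global-average conversion yields an estimate only; the same energy use produces very different real-world emissions depending on the local grid.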

AI chatbots that show their step-by-step reasoning while responding tend to use far more energy per question than chatbots that don’t. The five reasoning models tested in the study did not answer questions much more accurately than the nine other studied models. The model that emitted the most, DeepSeek-R1, offered answers of comparable accuracy to models that produced a fourth of the emissions.

There is key information not captured by the study, which only included open-source LLMs: Some of the most popular AI programs made by large tech corporations, such as OpenAI’s ChatGPT and Google’s Gemini, were not included in the results.

And because the paper converted the measured energy to emissions based on a global CO2 average, it only offered an estimate; it did not indicate the actual emissions generated by using these models, which can vary hugely depending on the country where the data centre running them is located.

“Some regions are going to be powered by electricity from renewable sources, and some are going to be primarily running on fossil fuels,” said Jesse Dodge, a senior research scientist at the Allen Institute for AI who was not affiliated with the new research.

In 2022, Dodge led a study comparing the difference in greenhouse gas emissions generated by training an LLM in 16 different regions of the world. Depending on the time of year, some of the most emitting areas, like the central United States, had roughly three times the carbon intensity of the least emitting ones, such as Norway.
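The regional effect Dodge describes can be illustrated with a small calculation: the same training run, placed on two different grids, produces emissions in proportion to each grid’s carbon intensity. The intensity values and energy figure below are rough assumptions chosen to mirror the roughly threefold gap mentioned above, not figures from his study.

```python
# Illustrative only: the same hypothetical training run in two grid
# regions. Intensities are assumed placeholders, not study data.
INTENSITY = {"central_us": 0.6, "norway": 0.2}  # kg CO2e per kWh (assumed)

def training_emissions(energy_mwh: float, region: str) -> float:
    """Emissions in kg CO2e for a run consuming energy_mwh in a region."""
    return energy_mwh * 1000 * INTENSITY[region]  # MWh -> kWh, then x intensity

ratio = training_emissions(10, "central_us") / training_emissions(10, "norway")
print(round(ratio, 2))  # 3.0
```

The point is that emissions scale linearly with grid intensity, so siting the same workload on a cleaner grid cuts its footprint by the same factor.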

But even with this limitation, the new study fills a gap in research on the trade-off between energy cost and model accuracy, Dodge said. “Everyone knows that as you increase model size, typically models become more capable, use more electricity and have more emissions,” he said.

Reasoning models, which have been increasingly trendy, are likely further bumping up energy costs, because of their longer answers.

“For specific subjects an LLM needs to use more words to get to a more accurate response,” Dauner said. “Longer answers and those that use a reasoning process generate more emissions.”

Sasha Luccioni, the AI and climate lead at Hugging Face, an AI company, said that subject matter is less important than output length, which is determined by how the model was trained. She also emphasised that the study’s sample size is too small to create a complete picture of emissions from AI.

“What’s relevant here is not the fact that it’s math and philosophy, it’s the length of the input and the output,” she said.

Last year, Luccioni published a study that compared 88 LLMs and also found that larger models generally had higher emissions. Her results also indicated that AI text generation – which is what chatbots do – used 10 times as much energy as simple classification tasks, like sorting emails into folders.

Luccioni said that these kinds of “old school” AI tools, including classic search engine functions, have been overlooked as generative models have become more widespread. Most of the time, she said, the average person doesn’t need to use an LLM at all.

Dodge added that people looking for facts are better off just using a search engine, since generative AI can “hallucinate” false information.

“We’re reinventing the wheel,” Luccioni said. People don’t need to use generative AI as a calculator, she said. “Use a calculator as a calculator.” – ©2025 The New York Times Company

This article originally appeared in The New York Times.
