Why some AI models emit 50 times more greenhouse gases to answer the same question

Like it or not, large language models have quickly become part of our lives. And because of their intense energy and water demands, they could also be plunging us into climate chaos that much faster. However, some LLMs may pollute the planet far more than others, a new study has found.

Queries to some models generate up to 50 times more carbon emissions than queries to others, according to a new study published in Frontiers in Communication. Unfortunately, and perhaps unsurprisingly, the models that are more accurate tend to have the biggest energy costs.

It is difficult to judge just how bad LLMs are for the environment, but some studies suggest that training ChatGPT used up to 30 times more energy than the average American consumes in a year. What is not known is whether some models have steeper energy costs than their peers when answering questions.

Researchers at the University of Applied Sciences Hochschule München in Germany evaluated 14 LLMs, ranging from 7 to 72 billion parameters (the settings that refine a model's understanding and generation of language), on 1,000 benchmark questions about different topics.

LLMs convert each word or part of a word in a prompt into a string of numbers called a token. Some LLMs, particularly reasoning LLMs, also insert special "thinking tokens" into the input sequence to allow for additional internal computation and reasoning before generating output. This conversion, and the subsequent computations the LLM performs on the tokens, use energy and release CO2.
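To make the idea concrete, here is a toy sketch of that first conversion step. Real LLMs use learned subword tokenizers (such as byte-pair encoding) rather than a whitespace split, so the vocabulary and function below are purely illustrative assumptions, not the study's or any vendor's actual tokenizer.

```python
# Toy sketch of tokenization: text becomes a sequence of integer IDs
# that the model then computes over. Real tokenizers split words into
# learned subword pieces; this whitespace version is for illustration.

def toy_tokenize(text, vocab):
    """Split text on whitespace and map each piece to an integer ID.

    Unknown words map to a reserved <unk> ID (0). This drastically
    simplifies real subword tokenization.
    """
    return [vocab.get(word, 0) for word in text.lower().split()]

# A tiny hypothetical vocabulary.
vocab = {"<unk>": 0, "how": 1, "do": 2, "llms": 3, "use": 4, "energy": 5}

tokens = toy_tokenize("How do LLMs use energy", vocab)
print(tokens)  # [1, 2, 3, 4, 5]
```

Every token in the output sequence, including any hidden "thinking tokens," corresponds to computation, and therefore to energy spent.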

The scientists compared the number of tokens generated by each of the models they tested. Reasoning models created 543.5 thinking tokens per question on average, while concise models required just 37.7 tokens per question, the study found. In the ChatGPT universe, for example, GPT-3.5 is a concise model, while GPT-4o is a reasoning model.
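The gap between those two averages can be expressed as a simple ratio, using the figures reported in the study:

```python
# Average tokens per question reported in the study.
reasoning_tokens = 543.5  # thinking tokens for reasoning models
concise_tokens = 37.7     # tokens for concise models

ratio = reasoning_tokens / concise_tokens
print(f"Reasoning models generate roughly {ratio:.1f}x more tokens per question")
```

In other words, reasoning models produce on the order of 14 times more tokens per question, before any answer text is even counted.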

This reasoning process drives up energy needs, the authors found. "The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach," said study author Maximilian Dauner, a researcher at the University of Applied Sciences Hochschule München. "We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models."

The more accurate the models were, the more carbon emissions they produced, the study found. The reasoning-enabled Cogito model, which has 70 billion parameters, reached 84.9% accuracy, but it also produced three times more CO2 emissions than similarly sized models that generated more concise answers.

"Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," Dauner said. "None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly." CO2 equivalent is the unit used to measure the climate impact of different greenhouse gases.

Subject matter was another factor. Questions that required lengthy or complex reasoning, such as abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, according to the study.

There are some caveats, however. Emissions depend heavily on how local energy grids are structured and which models you examine, so it is unclear how generalizable these findings are. Still, the study's authors said they hope the work will encourage people to be "selective and mindful" about their LLM use.

"Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power," Dauner said in a statement.
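A back-of-envelope sketch shows why shorter answers help: if a query's emissions scale roughly with the number of tokens generated, cutting token count cuts emissions proportionally. The per-token emission factor below is a made-up placeholder for illustration only; it is not a figure from the study.

```python
# Hypothetical emission factor, purely for illustration -- the real
# value varies by model, hardware, and local energy grid.
GRAMS_CO2_PER_TOKEN = 0.002

def answer_emissions(n_tokens, grams_per_token=GRAMS_CO2_PER_TOKEN):
    """Estimated grams of CO2 equivalent for generating n_tokens,
    assuming emissions scale linearly with tokens generated."""
    return n_tokens * grams_per_token

long_answer = answer_emissions(543.5)  # avg reasoning-model token count
short_answer = answer_emissions(37.7)  # avg concise-model token count
print(f"Estimated savings per question: {long_answer - short_answer:.2f} g CO2e")
```

Whatever the true per-token figure, the linear scaling means the roughly 14-fold token gap between reasoning and concise models translates directly into the emissions gap the study measured.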
