Is Generative AI Generating Climate Change?
How can the trend be reversed?
Introduction
Future of work, culture, innovation and security: on February 10th and 11th, 2025, Paris hosted the AI Action Summit, a strategic meeting where the biggest players in artificial intelligence discussed the future of this fast-growing technology.
But behind the debates on sovereignty and ethics, the AI–climate question took a back seat. Even though the International Energy Agency and the United Nations briefly mentioned the growing trajectories of electricity consumption and the need to reduce data centers’ energy consumption, it seems that no in-depth discussion took place on the environmental footprint of generative AI models – water consumption, resource extraction, greenhouse gas emissions. Yet these are a potential limiting factor for the sector’s growth. Can the expansion of generative AI be compatible with countries’ decarbonization targets, or with resource scarcity? Would generative AI really be an asset, or would it just monopolize investments and resources critical to the ecological transition?
Hazy and underestimated direct impacts
Although accounting for the environmental impact of generative AI has recently gained in maturity (mainly through the academic world and open-source community initiatives [1] [2]), it is still hampered by a lack of transparency of the main players in the value chain.
Upstream, the market for AI chips is heavily dominated by NVIDIA, with a market share of almost 80% [3], but the designer does not publish figures on the environmental impact of its GPUs [4]. TSMC, the leading semiconductor manufacturer (65% share [5]), is just as opaque on the matter. The environmental impact of hardware manufacturing can therefore only be estimated from technical specifications, as proposed by the Boavizta working group [6].
AI model developers, at the heart of the value chain, generally disclose little information about the environmental impact of their models. The giants OpenAI and Anthropic, creators of the GPT and Claude models, do not provide any figures. Microsoft and Google do not disclose the footprint of their models either, while their group-level emissions have increased by almost 50% in three years [7] [8] [9].
Figure 1 - Carbon footprint of a large model such as Llama 3-405B (Meta), with and without taking into account other items than GPU electricity consumption during training (see calculation assumptions[10]).
This footprint is representative of a 405-billion-parameter model running for one year, processing 1 million queries a day and generating an average of 200 tokens per response.
Note that higher model usage would significantly increase the share of inference in this calculation.
Source: Carbone 4 based on Meta and Carbone 4 studies
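The inference side of the scenario above can be checked with a rough back-of-envelope calculation. The per-query energy and grid carbon intensity below are illustrative round-number assumptions, not Meta's or Carbone 4's figures; a minimal sketch:

```python
# Back-of-envelope inference footprint for the fig. 1 scenario
# (1 million queries/day, ~200 tokens per response, over 1 year).
# ASSUMPTIONS (hypothetical, for illustration only): energy per
# query and grid carbon intensity are not taken from the study.

QUERIES_PER_DAY = 1_000_000
DAYS = 365
WH_PER_QUERY = 3.0      # assumed energy per ~200-token query, in Wh
G_CO2_PER_KWH = 400.0   # assumed grid carbon intensity, gCO2eq/kWh

annual_kwh = QUERIES_PER_DAY * DAYS * WH_PER_QUERY / 1000  # Wh -> kWh
annual_tco2 = annual_kwh * G_CO2_PER_KWH / 1_000_000       # g -> tonnes

print(f"Annual inference energy: {annual_kwh / 1e6:.2f} GWh")
print(f"Annual inference emissions: {annual_tco2:,.0f} tCO2eq")
```

Under these assumptions, inference adds on the order of a few hundred tonnes of CO2eq per year; since this term scales linearly with traffic, higher usage would quickly increase inference's share of the total, as the caption notes.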
Meta communicates on the carbon footprint of its Llama models, but on a restricted perimeter. Impacts associated with GPU manufacturing are excluded, and the inclusion of non-GPU electricity consumption (e.g. for cooling equipment, or CPUs used to decode requests, encode responses, coordinate task execution, logging...) is unclear [11], potentially concealing almost half of the training emissions (see fig. 1). Emissions from the end-users’ usage phase (known as “inference”), despite being particularly prone to rebound effects, are not addressed either.
Finally, there is no mention of any training emissions during the research & development phase that could be allocated to the model.
Though transparency might expose these players to some risks in the technological race, it is crucial for properly measuring the ecological footprint of generative AI and steering it in the right direction.
The AI Action Summit, bringing together regulators and companies, embodied the hope of international cooperation to promote this transparency between players. A "Coalition for Sustainable AI" was announced at the summit, which will, among other things, promote the disclosure of environmental impacts, but it does not include OpenAI or any of the MAGMAs [12] among its members. The declaration for an "open, inclusive and ethical" AI signed by 61 countries, including China, has not been ratified by the USA, despite its market leadership, which muddies the call for transparency.
Thus, as of today, it remains difficult to obtain precise figures on the carbon footprint of generative AI models. One thing remains certain, however: the direct impact is substantial, on the order of an extra 100 kgCO2eq in the annual personal carbon footprint of a user making 10 text queries a day [13].
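That order of magnitude translates into a simple per-query figure, using only the numbers quoted above:

```python
# Per-query footprint implied by the figures above:
# ~100 kgCO2eq per year for a user making 10 text queries a day.

ANNUAL_KG = 100.0
QUERIES_PER_DAY = 10

queries_per_year = QUERIES_PER_DAY * 365           # 3,650 queries
per_query_g = ANNUAL_KG * 1000 / queries_per_year  # grams per query

print(f"~{per_query_g:.0f} gCO2eq per text query")  # ~27 g
```

Roughly 27 gCO2eq per query: small in isolation, but multiplied by billions of daily queries worldwide, it is the aggregate that matters.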
To limit the inflation of generative AI’s emissions, companies will need to focus on efficiency gains (via architecture choices and smaller models), on low-carbon electricity to power data centers, and above all on sufficiency (or frugality [14]), without which rebound effects will cancel out any progress: rebound effects on usage following the efficiency gains achieved by the DeepSeek model are already being observed [15].
Alongside carbon, the massive acceleration in the use of AI also raises the issue of resource hoarding, potentially detrimental to the ecological transition:
- Low-carbon electricity: coveted to power data centers, it will also be prized by all the other sectors that need it for their own decarbonization (industry, mobility, buildings). In a scenario of strong AI adoption, an increase of around 100 TWh in European data centers’ electricity consumption can be expected by 2030 [16], i.e. almost 10% of the additional renewable electricity production forecast in the IEA’s STEPS scenario,
- Land for data centers, water for cooling them and mineral resources for the machines they host,
- Investments, with nearly $1,500 billion of private investment by 2030 announced in the last month alone [17], or around $300 billion a year. For comparison, this represents almost 8% of the additional global investment needed for the transition according to the Climate Policy Initiative [18]. In France, the announced €109 billion represents 33% of the “cost of the transition” quantified by the Pisani-Ferry & Mahfouz report for the 2025-2030 period [19],
- Finally, media coverage of the subject, as well as the human resources and skills of the people working on it.
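The orders of magnitude quoted in the list above can be cross-checked from the raw figures; a sketch, where every input comes from the cited sources:

```python
# Cross-checking the shares quoted above.

# Electricity: ~100 TWh of extra data-center consumption in Europe
# by 2030, said to be ~10% of the additional renewable production
# forecast by the IEA's STEPS scenario.
extra_dc_twh = 100
implied_extra_renewables_twh = extra_dc_twh / 0.10  # ~1,000 TWh

# Investments: ~$1,500bn of private AI investment announced by 2030,
# i.e. ~$300bn/year, said to be ~8% of the additional global
# investment needed for the transition (Climate Policy Initiative).
ai_invest_per_year_bn = 1500 / 5                           # ~$300bn/year
implied_transition_need_bn = ai_invest_per_year_bn / 0.08  # ~$3,750bn/year

print(f"Implied additional renewables: ~{implied_extra_renewables_twh:.0f} TWh")
print(f"Implied transition investment need: ~${implied_transition_need_bn:,.0f}bn/year")
```

In other words, AI alone would absorb a tenth of Europe's new renewable electricity and close to a tenth of the world's additional transition financing, which is what makes the hoarding question concrete.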
On an individual basis, users have several levers to control their AI-related environmental footprint: question the need first (especially for the most energy-intensive tasks such as image and video generation), favor lighter models [20], ask for short answers and group questions in a single query.
Generative AI must not catalyze a +4°C economy
In addition to its direct impact on humanity's carbon budget, every digital company ought to analyze its higher-order indirect impacts, i.e. what it “enables” or accelerates. The recent advances in generative AI seem promising for many sectors, but the technological prowess and the few virtuous use cases may mask an acceleration of the degradation of living conditions on Earth. AI should therefore not be made indiscriminately accessible to all economic sectors: accelerating the fossil fuel industry, air transportation or advertising goes directly against climate action.
To contribute fairly to global carbon neutrality and contain biodiversity extinction, tech companies must do more than just improve their energy efficiency or integrate a sufficiency approach, they must also care about the end use of their technologies: “who are we selling to? what for?”. This ontological reflection is essential for obvious ethical reasons, but also for good business management: understanding that IT shares its customers’ exposure to transition and reputational risks is an essential step in improving the resilience of one’s business. Like banks, digital companies are dependent on and responsible for what they accelerate.
To start thinking about your customer portfolio in concrete terms, Carbone 4 suggests comparing it with Urgewald’s Global Coal Exit List for an initial diagnosis. You can also contact us for a more detailed analysis. And if your company provides technological solutions with a potential positive environmental impact, you can follow the first steps of the NZI4IT method in just ten minutes.
Conclusion
Low-carbon electricity is a limited resource, necessary for the decarbonization of the existing economy and subject to conflicts of use. By claiming 10% of the additional low-carbon electricity to come, digital companies are forging a place for AI in tomorrow's world, far from the societal debate on the allocation of humanity's carbon budget.
The question of the right amount of use of AI is essential, in terms of need (do we really need AI to “save time”? and in a holistic approach, what exactly does "saving time" mean?) and purpose (what for?).
Like banks, digital businesses need to analyze what they are catalyzing if they truly want to be part of the solution rather than part of the problem.
Their corporate customers need to ask themselves the question of sufficiency: distinguishing vital from nice-to-have digital transformation, containing rebound effects and preventing perverse effects through a systemic approach, instead of focusing exclusively on direct productivity gains.
Public authorities could help by creating favorable market conditions for digital businesses to mature on sustainability topics (see our publication on what a Buy European and Sustainable Act would bring) and by opening up the debate.
Thus, final users could forge their own opinions as informed citizens and take part in this debate, outside of their filter bubble. How can this be achieved?
Maybe start by not asking ChatGPT?[21]