AI has an environmental problem

AI has become an indispensable part of our lives, transforming how we work, live, and do business. Broadly defined, AI encompasses technologies that simulate human thinking and decision-making. While basic forms of AI have existed since the 1950s, the field has advanced rapidly in recent years, driven by improvements in computing power and the exponential surge in data availability. With the global AI market valued at $200 billion and projected to contribute up to $15.7 trillion to the global economy by 2030, AI adoption and its recognition as a driver of economic value have reached unprecedented levels. In the U.S., the announcement of the Stargate Project, involving more than $500 billion in AI infrastructure investments over four years, is testament to this. In India, Reliance Industries is planning to build the world’s largest data centre in Jamnagar, in partnership with Nvidia. India has also announced plans to build its own LLM (large language model) to compete with DeepSeek and ChatGPT. However, as governments race to tap AI’s economic potential, it is crucial to acknowledge that its rapid rise brings not only opportunities but also risks, particularly environmental costs.

Impact across stages

The environmental impact of AI arises across several stages of its value chain, including energy consumption from infrastructure, computing hardware production, cloud data centre operations, AI model training, inferencing, validation, and related processes. On the hardware side, data centres, the backbone of AI operations, account for about 1% of global greenhouse gas emissions, according to the International Energy Agency (IEA). This figure is expected to rise significantly, as electricity demand from data centres is projected to double by 2026. Generative AI models like ChatGPT, which rely on sophisticated machine learning (ML) techniques, require 10–100 times more computing power than earlier AI models, further driving demand for graphics processing units (GPUs) and worsening the environmental footprint. Moreover, the rapid expansion of data centres is fuelling a growing e-waste crisis.

Emissions from AI’s software life cycle, spanning data collection, model development, training, validation, maintenance, and retirement, are equally concerning. Training a single advanced model such as GPT-3 can emit an estimated 552 tonnes of carbon dioxide equivalent, comparable to the annual emissions of dozens of cars. To mitigate these risks, governments and the private sector must proactively embed sustainability into the design of the AI ecosystem.
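The arithmetic behind such estimates is straightforward: a training run’s emissions are roughly its energy use multiplied by the carbon intensity of the electricity that powers it. The sketch below illustrates this with the widely cited estimate of about 1,287 MWh for GPT-3’s training; the function name and grid-intensity figures are illustrative assumptions, not measurements of any specific run.

```python
# First-order estimate: training emissions = energy used x grid carbon intensity.
# Figures below are published estimates used for illustration only.

def training_emissions_tonnes(energy_mwh: float, grid_kg_co2e_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions (in tonnes) for a training run."""
    kwh = energy_mwh * 1_000            # MWh -> kWh
    kg = kwh * grid_kg_co2e_per_kwh     # kWh -> kg CO2e
    return kg / 1_000                   # kg -> tonnes

# ~1,287 MWh (a published estimate for GPT-3 training) on a grid emitting
# ~0.429 kg CO2e/kWh yields roughly the 552 tonnes cited above.
print(f"~{training_emissions_tonnes(1_287, 0.429):.0f} t CO2e")   # → ~552 t CO2e

# The same run on a low-carbon grid (~0.05 kg CO2e/kWh) would emit far less,
# which is why data-centre siting matters so much.
print(f"~{training_emissions_tonnes(1_287, 0.05):.0f} t CO2e")    # → ~64 t CO2e
```

The second call makes the siting argument concrete: the model and training recipe are unchanged, yet emissions fall by nearly an order of magnitude purely because the electricity is cleaner.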

Global conversations on this issue have been gaining momentum. At COP29, the International Telecommunication Union emphasised the urgent need for greener AI practices. Such commitments demand that businesses also align their processes with sustainability targets. Over 190 countries have adopted non-binding ethical AI recommendations that address the environment, and jurisdictions such as the European Union and the U.S. have introduced laws to curb AI’s environmental impact. However, binding policies of this kind remain scarce. While governments across the globe are crafting national AI strategies, they often overlook sustainability, particularly the private sector’s role in reducing emissions.

The way forward

To balance innovation and environmental responsibility, action is needed across the AI value chain. Investing in clean energy is a key step towards net-zero emissions; companies can get there by transitioning to renewable energy sources and purchasing carbon credits. Locating data centres in regions with an abundant supply of renewable energy can also ease the strain on local grids and lower the carbon footprint. AI itself can help optimise energy grids, particularly by integrating renewable energy sources. For instance, Google’s DeepMind has leveraged ML to improve wind energy forecasting, enabling more accurate wind pattern predictions and better integration of wind power into the grid.

Using energy-efficient hardware and ensuring regular maintenance can also significantly reduce emissions. Equally important is the development of efficient AI models. Smaller, domain-specific models tailored to their applications can deliver the same outputs with far less processing power, easing demand on infrastructure and resources. A study by Google and the University of California, Berkeley, found that the carbon footprint of LLMs can be reduced by a factor of 100 to 1,000 through optimised algorithms, specialised hardware, and energy-efficient cloud data centres. Further, instead of collecting new data and training models from scratch, businesses can adapt pre-trained models to new tasks through techniques such as fine-tuning.
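The savings from smaller models can be roughed out with a common rule of thumb: training a transformer costs about six floating-point operations per parameter per token. The sketch below is a back-of-envelope illustration of that rule; the parameter and token counts are assumptions chosen for the example, not figures from the study cited above.

```python
# Back-of-envelope training compute using the common ~6 * N * D FLOPs
# rule of thumb (N = parameters, D = training tokens). Illustrative only.

def training_flops(params: float, tokens: float) -> float:
    """Rough total training compute in floating-point operations."""
    return 6 * params * tokens

large = training_flops(175e9, 300e9)  # a GPT-3-scale model: 175B params
small = training_flops(1e9, 300e9)    # a 1B-param domain model, same data

print(f"large: {large:.2e} FLOPs, small: {small:.2e} FLOPs")
print(f"compute ratio: {large / small:.0f}x")  # → 175x
```

Because compute scales linearly with parameter count under this rule, a domain-specific model with 1/175th the parameters needs roughly 1/175th the training compute, with correspondingly lower energy use and emissions.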

Last, and most importantly, transparency is essential to driving sustainability efforts. Measuring and disclosing the environmental impact of AI systems will help organisations understand their life cycle emissions and address the negative externalities of their operations. Standardised frameworks for tracking and comparing emissions across the industry will ensure consistency and accountability.

Sustainability needs to be incorporated into the very design of the AI ecosystem, in order to ensure its long-term growth and viability. By balancing environmental responsibility with innovation, we can harness AI’s transformative potential without compromising the Earth’s future.

Urmi Tat, U.S.-India AI Fellow, Observer Research Foundation
