Should we be worried about AI’s environmental impacts?
October 27, 2025
AI’s environmental burden has been discussed a great deal in the media, but precise and comparable information is still scarce. The impacts can be examined from many perspectives: climate emissions, biodiversity loss, and the sufficiency of materials. The technology sector’s share of global greenhouse gas emissions is currently about 2–3 percent, and AI is part of this whole.
AI’s carbon footprint arises mainly from energy consumption. In Finland, electricity consumption accounts for about eight percent of the country’s total carbon footprint, so AI’s share is still very small. On the other hand, we are only at the beginning of AI adoption, and use is growing rapidly. For example, the U.S. investment bank Morgan Stanley estimates that AI’s energy consumption will grow by 70 percent per year over the next three years.

AI’s environmental impacts arise mainly from three sources: the infrastructure, the energy used to train models, and the energy consumed when using them. For example, the energy consumed in training the GPT-4 model is estimated to correspond to the annual electricity consumption of about 1,500 Finnish households. However, an individual’s ability to influence model creation is very limited. From the perspective of a creative professional, it makes the most sense to focus on energy consumption during use.
OpenAI CEO Sam Altman estimates that an average AI query consumes about 0.34 Wh of energy, which corresponds to running an efficient LED lamp for a few minutes (Altman, The Gentle Singularity, 2025). Google’s own measurements of the Gemini model give a very similar estimate: about 0.24 Wh per text prompt, corresponding to about 0.03 gCO₂e of carbon emissions and about 0.26 ml of water consumption (Google Cloud 2025). An individual user’s impact thus remains very small, but as usage volumes grow, total consumption can become significant.
The energy consumption of image-generating tools is on the order of 1–3 watt-hours per image (Luccioni & Strubell 2024). Experiments conducted in the LuovAIn! project indicate that image generation can also consume considerably less. For example, the locally run image-generating Flux 1.0 schnell model consumed only about 0.3 Wh to generate a 512×512 pixel image. Making a hundred images thus used as much electricity as keeping a computer monitor on for one hour. For comparison, a human’s energy consumption is about 2,000–2,500 kilocalories per day (2.3–2.9 kWh), so about ten thousand AI queries per day are on the same order of magnitude as one person’s daily energy consumption. Nor are the carbon dioxide emissions particularly large: the energy consumption of a thousand queries or images corresponds, in CO₂ emissions, to driving a passenger car a few hundred meters.
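The comparisons above are easy to check with back-of-envelope arithmetic. The sketch below uses the per-image and per-query figures quoted in the text; the monitor power (30 W), grid emission factor (100 gCO₂e/kWh), and car emissions (120 g/km) are assumed round values for illustration, not measurements from the project.

```python
# Back-of-envelope check of the comparisons in the text.
# All inputs are estimates quoted above or assumed round values.

WH_PER_IMAGE = 0.3          # local Flux 1.0 schnell, 512x512 image (Wh)
WH_PER_TEXT_QUERY = 0.34    # Altman's estimate for an average query (Wh)
KCAL_TO_WH = 1.163          # 1 kcal = 1.163 Wh

# A hundred generated images vs. one hour of monitor use (assumed 30 W):
hundred_images_wh = 100 * WH_PER_IMAGE              # 30 Wh
monitor_hour_wh = 30 * 1.0                          # 30 Wh

# Ten thousand queries vs. one person's daily food energy (~2,000 kcal):
ten_k_queries_kwh = 10_000 * WH_PER_TEXT_QUERY / 1000   # ~3.4 kWh
human_day_kwh = 2000 * KCAL_TO_WH / 1000                # ~2.3 kWh

# A thousand images vs. driving (assumed 100 gCO2e/kWh, 120 g/km):
thousand_images_g = 1000 * WH_PER_IMAGE / 1000 * 100    # 30 g CO2e
driving_metres = thousand_images_g / 120 * 1000         # 250 m

print(f"100 images: {hundred_images_wh:.0f} Wh vs monitor hour: {monitor_hour_wh:.0f} Wh")
print(f"10,000 queries: {ten_k_queries_kwh:.1f} kWh vs human day: {human_day_kwh:.1f} kWh")
print(f"1,000 images: {thousand_images_g:.0f} g CO2e, about {driving_metres:.0f} m of driving")
```

With these assumptions the numbers line up with the text: a hundred images match an hour of monitor use, ten thousand queries land in the same range as a person’s daily energy intake, and a thousand images correspond to a few hundred meters of driving.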
Abstaining from AI use is therefore not a particularly effective way to reduce environmental impacts. Some activities, such as generating videos, are clearly more energy-intensive and can multiply the energy consumption of image generation several hundred times. Even then, from the perspective of energy impacts, watching the videos is the most burdensome part of the process.
Watching a two-hour 4K movie, for example, uses as much electricity as heating a sauna. If you make videos, the least energy-intensive choice would be to make ones that nobody wants to watch, which understandably sounds very problematic from a creative professional’s perspective: income and success depend largely on whether the content finds viewers.
Considering environmental issues, including those related to AI, as part of every creative professional’s overall picture is certainly sensible, but there is not necessarily any special reason to emphasize them specifically in the case of AI. As a societal problem, however, AI’s environmental burden should still be examined, even if the burden at the individual level is moderate. If AI is built into more and more products, energy consumption will inevitably rise.
So far we have not seen games or films generated individually for each consumer, but products like these could multiply energy consumption many times over. From the perspective of energy consumption, it would be best if we were satisfied with television broadcasts, and worst if each of us wanted individually produced content.

AI’s and the technology sector’s energy use is tied to questions of inequality related to climate change: the benefits and harms of AI models are not necessarily distributed evenly. Bender and her research group (2021) emphasize that the development of language models should also take “environmental justice” into account, for example whether it is fair that people suffering from floods in Sudan or residents of the Maldives pay the climatic price for large English-language models that are not even developed for their own languages. They also note that the growing size of models leads to a situation where researchers and engineers no longer understand how the models work or what they have learned during training. This makes assessing environmental impacts even more difficult, because it is not known precisely what the energy is used for and why.
As consumers or content creators, we cannot work miracles. We can choose the most responsible AI companies or spend a lot of time finding out the carbon footprint of services, but so far that is very challenging. That is why it would be useful to require greater transparency from AI companies about the carbon footprint of their models and other environmental impacts. For example, an energy label, which is used among other things for household appliances, could be one solution. It is easier to choose an image generator whose information includes a label like A++ or E than to spend dozens of hours searching for information and testing.
So, do we need to be worried about AI’s environmental impacts? Quite evidently, yes. Information from autumn 2025 about investments in AI infrastructure tells a worrying story: OpenAI, for example, announced that it would invest in 10 gigawatts of infrastructure, an amount corresponding to the total output of several nuclear power plants. Other companies have announced similarly massive investments that increase computing power and, with it, energy consumption.
AI companies are driven by a bias: belief in the omnipotence of AI, an artificial general intelligence (AGI) that could solve problems such as climate change. Sam Altman, for example, argues that AI’s enormous energy consumption can be seen as an investment if it leads to advances in areas such as carbon-free energy or carbon capture.
The AGI idea has been criticized (e.g., Blili-Hamelin et al. 2025). Although interviews with the CEOs of AI companies repeatedly feature talk of general-purpose AI or superintelligence, it is possible that the activities of leading AI companies are guided more by the ideals of childhood science fiction fantasies than by a considered, science-based view of reality.
There is no evidence that AI is capable of providing miracle cures to stop climate change. AI can be beneficial, but also harmful, in combating climate change (Vinuesa et al. 2020). It is also worth noting two competing approaches in environmental research. Theories that take a negative view of technological development and emphasize ecological rupture pay attention to the unsustainability of the system and to environmental problems that challenge humanity’s existence. Theories that take a more positive view of technology and emphasize ecological modernization stress a compromise between the environment and development, in which environmental problems are recognized and development corrects its course toward sustainable interaction (Massa 2009). Although climate emissions have been reduced through technological solutions, one of the biggest questions in AI development concerns the Jevons paradox, according to which technological efficiency does not necessarily reduce total resource consumption but may even increase it (e.g., Sorrell 2009).
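The Jevons paradox can be illustrated with a small hypothetical calculation: even a large efficiency gain per query is wiped out if falling costs cause usage to grow faster. The per-query figure below is the estimate quoted earlier in the text; the usage volumes are invented for illustration only.

```python
# Illustrative Jevons-paradox arithmetic (hypothetical usage numbers).

wh_per_query_before = 0.34      # assumed starting point per query (Wh)
queries_before = 1_000_000      # hypothetical daily query volume

wh_per_query_after = wh_per_query_before / 2   # efficiency doubles
queries_after = queries_before * 3             # cheaper use -> usage triples

total_before = wh_per_query_before * queries_before / 1000   # kWh/day
total_after = wh_per_query_after * queries_after / 1000      # kWh/day

print(f"before: {total_before:.0f} kWh/day, after: {total_after:.0f} kWh/day")
# Total consumption rises by 50% despite the per-query improvement.
```

Under these assumed numbers, halving the energy per query while usage triples raises total daily consumption from 340 kWh to 510 kWh, which is the mechanism the Jevons paradox describes.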
On the other hand, in principle there are very simple means to stop climate change, which are societal in nature. If we acted differently as a society—for example, consumed less, were more present locally, and worked less—climate change and other environmental issues could be resolved almost by themselves.
For example, developing the capabilities of language models does not directly produce means for such societal change, which would require above all changes in attitudes and lifestyles. If we build AI models to serve our current system based on continuous growth and consumption, instead of solving climate change we will get, for example, individually generated meme videos and even faster purchasing decisions.
The article was originally published on the LuovAIn! project’s online course Generative AI in creative work.