We are an environmental humanities researcher and an AI scholar interested in the interrelation between rapid technological and ecological changes. We joined forces to investigate the implications of AI systems for social and ecological sustainability.
Chatbots are not designed for veracity, but for guessing what the answer to a prompt would be based on content that has been previously written by others (humans and machines). The answers tend to favor the most popular, not necessarily the most critical, content.
We reviewed a significant body of critical literature on the implications of AI for the environment and the common good. Contrary to prevailing statements generated by chatbot algorithms, we found that AI systems are currently intensifying troublesome global trends, accelerating processes that significantly increase social inequality, energy consumption, political polarization, and ecological breakdown.
How could it be that our societies rapidly embrace “smart” technologies that are exacerbating all our most pressing problems?
One of the reasons may be that AI technologies are designed and implemented within a dominant cultural paradigm that is addicted to constant economic growth and cannot correct its own flaws. Even though it has been known for decades that the global economy cannot grow indefinitely on a finite planet without collapsing the living systems of Earth, the ultimate motivation behind the implementation of most technologies is still to trigger economic growth.
Most aspects of the ecological crisis are getting worse, and global inequality has been increasing over recent decades. Within the dominant growth-oriented economic framework, technological innovations reinforce these unsustainable trajectories.
At the historical moment when our collective wellbeing depends on reversing negative trends, our technologies are automating and accelerating them. Making a destructive and unfair techno-social system faster and smarter may not be a great idea.
It would be wiser to first embed the system within ecologically regenerative and socially just principles and, only then, equip it with AI.
Misleading discourses about AI disseminated by tech corporations exaggerate the social promises of these technologies while ignoring their negative consequences.
A number of critical technology scholars warn that most broad implementations of AI amplify societal prejudices, undermine democracy, and tend to punish the poor and make the rich richer (see here, here and here). These studies confirm that AI systems are in fact automating inequality and amplifying existing power asymmetries.
The social risks of AI are better researched than its ecological costs, but the two are interconnected. Often the poorest populations pay the highest environmental price for technological innovation, while the more privileged disproportionately benefit from it and have the means to circumvent its side effects.
AI technologies, like most high technologies, are material- and energy-intensive and therefore inherently unsustainable. It is unclear where the massive energy and mineral requirements of the rapidly growing global computing infrastructure are going to come from in a context of ecological depletion, climate change, and energy decline.
The application of AI systems in sustainability-focused projects is not making a meaningful difference. Within a growth-oriented economic culture, even improvements in eco-efficiency do not reduce absolute material and energy demand, because efficiency gains tend to be reinvested in further growth (the rebound effect, also known as the Jevons paradox).
AI-related infrastructures are rapidly increasing energy demand to run data centers, train algorithms, and produce and recharge all sorts of smart devices. Techno-optimism is often an energy-blind perspective.
Kate Crawford’s book, Atlas of AI (2021), emphasizes the fact that AI is an extractive industry made out of a massive infrastructure that is drastically transforming both global ecologies and human ways of understanding reality.
There is currently an unprecedented acceleration of planetary extractivism facilitated by recent developments in AI, as well as a proliferation of e-waste accumulating everywhere (including outer space).
Meredith Broussard calls “technochauvinism” the unexamined assumption that a high-tech solution is always the best option, even though in many cases simpler, cheaper, safer, and more ecologically friendly solutions exist. For example, some high-tech carbon sequestration machines pollute more than they clean, while regenerative agriculture sequesters carbon and enriches the soil.
As AI systems make decisions for us in automated and opaque ways, there is less room for public discussions and ethical considerations about which trends we consent to automate and therefore perpetuate and which ones we prefer to change. This makes “smart” societies mindless by default.
The fact that many higher education institutions are currently downsizing or eliminating their humanities programs and requirements to prioritize “purely” technical degrees is telling.
Philosophers of science and technology have long argued that big data and technical skills without historical and cultural reflection will only make our problems more unmanageable and our possible alternatives more unthinkable.
Knowing how to do things without questioning why, what for, or to the benefit of whom will not save the day.
If we want technology to help us deal with ecological overshoot and social inequality, we should first overcome the dominant economic culture and its addiction to constant growth. Then we should rethink our priorities and values so that, as societies, we incentivize technological designs that enhance the common good and the regeneration of the environment, while simultaneously disincentivizing extractivism and power accumulation.
AI systems will never be smart if the humans behind their algorithmic designs are only trained in technical skills and disregard all the critical perspectives coming from the environmental humanities and the social sciences.
Technologies can only be truly smart if the regulations and economic incentives behind their designs and implementations align with the common good and prioritize the participation of local communities.
Machine learning technology has many unintended consequences, not only for humans but also for the ecologies upon which human lives depend. The problem is not the tool, but the cultural logic behind its design and a pace of implementation that leaves no room for reflection.
We need a wiser economic culture so our tools reflect not the worst in us, but the best we can be.