“Green AI” is having a moment. The phrase now appears in research agendas, corporate sustainability reports, and policy discussions, sometimes interchangeably with terms like Ecological AI or AI for the Planet. But spend any time in these conversations and it quickly becomes clear that people are often talking about very different things.
Sometimes Green AI refers to making AI itself less energy-intensive. Sometimes it describes deploying AI for ecological monitoring or climate forecasting. And sometimes it points toward a deeper shift: rethinking how humans, machines, and the living world relate to one another. These are not simply different applications of AI; they reflect different assumptions about what is driving ecological breakdown and what kinds of responses are needed.
One way to navigate this quickly evolving terrain is to distinguish between three broad orientations of Green AI: technical greening, ecological intervention, and relational reorientation.
Orientation 1: Technical Greening
The first orientation focuses on reducing AI’s own ecological footprint: smaller and more specialized models, more efficient data centres, lower-carbon energy use, and less e-waste. Here, the planet’s biophysical boundaries are treated as an orienting constraint on AI’s design and operation.
This focus is increasingly important. The International Energy Agency estimates that data centres currently consume around 415 terawatt-hours annually, roughly 1.5% of global electricity demand, and projects that this figure will double by 2030, driven in large part by AI. The ecological costs are not limited to energy. Like other digital technologies, AI hardware also depends on critical minerals tied to geopolitical conflict, labour exploitation, and ecological destruction.
In response, initiatives such as frugal AI, small language models, carbon-aware computing, and energy reporting requirements are gaining traction. This work matters. But efficiency alone does not transform extractive systems. It addresses the footprint of AI, not the direction of its use, nor the wider political economy in which it operates. Efficiency gains can also increase overall consumption, a dynamic often described as the Jevons paradox. A leaner AI system may be easier to justify and scale, while leaving the wider machinery of extraction intact.
Orientation 2: Ecological Intervention
The second orientation asks what AI can do for the planet. There are dozens of applications: optimizing power grids, monitoring biodiversity, detecting deforestation, forecasting extreme weather, supporting ecosystem restoration, reducing emissions in agriculture and transport.
For many researchers and practitioners, this is where the most urgent and tractable work lies. In many cases, this approach leads to important contributions. Yet it can also frame ecological breakdown primarily as a problem of insufficient data, inadequate prediction, or weak coordination — as though more sensors, better models, and greater computational capacity might allow us to better manage, or even transcend, planetary instability.
That framing can be useful in some contexts, but it can also sidestep the political, economic, and relational drivers of ecological breakdown: colonial land dispossession, extractive development, unequal consumption, and the normalization of human domination over the rest of nature. It can also reproduce top-down and transactional approaches to governance, treating ecosystems as decontextualized data points to be measured and optimized for human purposes, while bypassing the knowledge, authority, and governance of frontline communities.
Indigenous scholars have made this point with particular force: the risk is not just that AI gets things wrong, but that it does so in familiar colonial ways, taking data, interpreting territories, and shaping decisions without accountability, reciprocity, or consent.
Orientation 3: Relational Reorientation
The third orientation begins from a different question: what if the problem is not only how much energy AI uses, or what tasks it performs, but the assumptions about intelligence, nature, and relationality that are built into how AI is imagined and governed in the first place?
This strand draws on Indigenous, meta-relational, and posthumanist perspectives to examine how prevailing paradigms of AI often mirror the extractive and exceptionalist patterns of modern culture: efficiency, optimization, control, and reducing non-human nature to “resources.” From this perspective, ecological breakdown is not only a technical challenge, but also a relational one.
What follows is not simply a new application of AI, but a different set of design questions. How might we develop AI in ways that are accountable to living systems, to Indigenous sovereignty, and to more-than-human beings? How might we design AI to support ecological reciprocity and participate in relational repair, rather than only the more efficient management of ecosystems?
Projects like Abundant Intelligences point in this direction by asking what it would mean to situate AI “within the circle of human and other-than-human relationships.” Other emerging examples include Indigenous-led ecological monitoring grounded in traditional knowledge and stewardship responsibilities, machine learning systems designed to shift how people perceive other living beings, and experimental biomimetic approaches to AI architecture.
These efforts face their own tensions. Without engaging the material questions raised by the first two orientations, relational reorientation can remain limited in practice. It can sound compelling on paper while leaving epistemic asymmetries and extractive infrastructures largely untouched. For instance, Indigenous knowledge may be included in AI systems without meaningful data sovereignty protections, and human exceptionalism may return through the back door in projects that claim to translate non-human communication.
Who Gets to Define Green AI?
At present, Green AI is shaped mainly by the first two orientations, which are more likely to attract funding, publications, and institutional legitimacy. The third orientation remains more marginal not only because it is harder to scale, but because it unsettles the assumptions through which expertise, legitimacy, and innovation are usually organized. It asks difficult questions about who defines ecological problems, whose knowledge counts, what kinds of intelligence are recognized, and what forms of governance might begin to heal a damaged planet.
Ultimately, none of the three orientations is sufficient on its own. Technical greening without ethical accountability risks making harmful systems more efficient: a more energy-efficient military surveillance platform remains ethically compromised, regardless of its carbon footprint. Ecological intervention can reproduce surveillance and colonial logics at a planetary scale when detached from the governance and knowledge of local communities. And without attention to material infrastructure and ecological impacts, relational reorientation risks becoming a compelling vision decoupled from the conditions it seeks to transform.
Some of the most generative work emerges at the interfaces of these orientations, such as locally hosted biodiversity tools developed with Indigenous communities or low-energy AI systems designed for community-led climate adaptation. Yet these efforts are also the hardest to sustain within existing institutions.
Staying With the Tensions
None of these efforts is insulated from the unsustainable systems they seek to transform. We will never fully transcend the colonial foundations and ecological impacts of AI, nor our complicity in its harms. In that sense, how we develop and engage with AI amid deepening ecological breakdown is not a sustainability test we can pass or fail — it is an opportunity to practice ethical discernment and relational accountability without claiming innocence.
So when Green AI appears in a funding call, keynote, or strategy document, it is worth asking: Is this about making AI more efficient? Using AI to monitor ecological change? Rethinking the relationship between human, more-than-human, and machine intelligences? Or is it drawing on the language of ecological responsibility without reckoning with what that requires in practice? These questions do not tell us which version of Green AI is “correct.” But they can help clarify what kind of future is being imagined and what kind of world is being reproduced.
Green AI will likely continue to stretch as different communities bring their own priorities and imaginaries to the term. But if its dominant forms remain tethered to extractive assumptions, it risks becoming little more than an alibi for the systems driving planetary breakdown. The challenge is not only to make AI more sustainable, but to ask how humans and AI might co-evolve as participants in, rather than managers of, the web of life.