
Beyond the “Cheating” Panic: What AI Could Teach Us in an Era of Consequence

January 22, 2026

Let’s begin with what’s hard.

Many of our students are struggling, not just academically, but existentially. They are living through the slow-motion collapse of systems and institutions that we and their parents once assured them were stable and secure. They’re anxious and exhausted from being asked to perform readiness for a world that many of them rightly sense does not exist anymore.

But they are also smart and resourceful, doing their best to adapt to increasingly fragmented societies that are largely indifferent to their well-being. So when they turn to AI for support, whether with completing a course assignment or processing emotional overwhelm, it’s rarely out of laziness or naivete; it’s a pragmatic response to the multiple pressures they’re juggling.

Faculty, too, are navigating a changing world. Many of us carry grief and anxiety that something is ending, or already has, yet we’re still being asked to teach as if it hasn’t.

In this context, it makes sense that many are responding to the arrival of AI in our classrooms from a desire to restore a sense of certainty and control. We rush to either “AI-proof” or “AI-ready” our courses, hoping these efforts might insulate us from the systemic entropy already underway. Others go further and call for banning the technology entirely.

AI has become a highly contested topic on many campuses. Some worry they will be judged and shamed for using it; others worry about being judged and shamed for not using it; many hold both concerns. This dynamic is particularly detrimental to our students, creating significant barriers to the kinds of open and honest dialogues around AI that many of them crave. According to a recent Canadian report (I teach in Canada), 73% of higher education students now use generative AI for their coursework, and nearly 80% want more guidance on how to use it.

Even as students struggle to make sense of this technology and what it means for their futures, many feel they do not have the luxury of ignoring AI and are already engaging with it in critical and creative ways. But we rarely invite them to bring this curiosity into the classroom so we can explore the complexities, risks, and possibilities together. Instead, faculty’s conflicting expectations deepen distrust and relational fractures, stoking students’ fear and confusion.

For their part, many faculty are rightfully concerned about AI’s ecological impacts, its grounding in the nonconsensual harvesting of intellectual property, and its entanglement with surveillance and exploitation. Others invoke a more nostalgic desire to restore a fading academic order or to preserve a vision of the university as a sanctuary of humanist knowledge. But perhaps beneath the surface lies a deeper anxiety: that the role of the university itself is being unsettled.

This anxiety about our relevance may be partly what is driving both the rush to dismiss AI and the rush to embrace it. These seemingly opposed orientations might actually be different strategies toward the same end: protecting the status and identity historically tied to credentialed expertise. The gatekeeping of “legitimate knowledge” has long served to uphold faculty’s professional authority and social position. But well before AI arrived, these claims were already beginning to fray, as challenges to universities’ epistemic authority gained traction across multiple fronts. AI didn’t initiate this unravelling, but it makes it more visible.

Could it be that some of our fixation on AI has less to do with the technology itself, and more to do with what it reveals about the uncertain future of higher education and our place within it?

From Pursuing Modernity’s Promises to Facing Its Consequences

Beneath these classroom debates lies a larger context of destabilization. With the arrival of AI, our colleges and universities face unprecedented technological disruption. But more than that, we are in the midst of the breakdown of an intergenerational social contract.

The contract we inherited promised young people that if they studied hard and played by the rules, the “good life” would open to them: stable income, secure housing, and eventually, a comfortable retirement. Learn the system, and serve it, and the system will reward you. Support the elderly with your taxes when you’re young, and someday, someone will repay the favour.

That contract was always conditional, unequally available, and came at someone else’s expense. And now it’s splintering, not because someone did their homework with the help of AI, but because the premise of infinite expansion was never metabolically sustainable on a finite planet. Perpetual growth and perpetual “progress” require perpetual extraction. That’s not a viable future; it’s a dangerous fever dream, and now that fever is beginning to break.

We are living in a time of rupture. So, when students engage with AI, perhaps it has less to do with a collapse of academic norms and more to do with the collapse of a shared story about meaning, value, and legitimacy. Institutional commitments to equity and human rights are being hollowed out; fragile ecosystems are nearing critical thresholds and tipping points; techno-oligarchs are consolidating unprecedented power over the lives of everyday citizens; and rogue states are advancing new strategies of AI-driven conquest and control.

The foundations of shared reality feel unstable, and the resilience of our collective nervous system is stretched thin. In this context, the question of how to stop a 19-year-old from refining their sentence structure with a machine collaborator may be the least of our troubles.

Our outsized focus on policing AI use is not just a source of stress for our students; it also feels increasingly out of sync with our times. However, most students won’t say that to our faces, because they’ve learned the unspoken rule of higher education: Don’t embarrass or further undermine the architecture of the institution by noticing that it’s crumbling. And definitely don’t name even more uncomfortable truths about the interwoven violences that the structure is built upon and sustained by: the dispossession of lands, lives, knowledge, and futures.

If we are truly concerned about “cheating”, academia’s own colonial foundations could serve as our primary case study. Consider, for example, the knowledge extracted from communities in the Global South without consent, or the expropriated Indigenous lands on which our institutions sit. Naming these colonial legacies isn’t about assigning guilt to individuals but acknowledging how they continue to shape the ways we teach, learn, and evaluate in the present.

An invitation to shift our attention away from a narrow focus on academic integrity is not a call to “lower” standards or eliminate assessment, but to take a wider view of why our pedagogy feels misaligned with the students in front of us. Many of them recognize that we are not facing a fixable crisis in an otherwise functional system, but the long-deferred consequences of an inherently harmful and unsustainable way of organizing life that is reaching its limits.

Rather than prepare students for the futures promised by modernity, our role may be to accompany them in navigating a present in which the hollowness of those promises and the depth of their true costs are being revealed. This might look like pedagogies that scaffold relational capacity, not just content mastery. And it would require shared vulnerability, humility, and a willingness to meet this moment with clarity about what is, not what we think should be.

Re-Learning to Be in Relation with Each Other (and AI)

Intergenerational dialogue was already difficult. Now, with AI in the room, it feels both more impossible and more pressing. There’s no going back to what was, but that’s not the fault of AI.

The story we’ve grown up within is ending, and maybe that is not such a bad thing, given how damaging it was. But if we don’t find more sober, curious, and compassionate ways of asking what this means for the future of education, we risk reenacting familiar habits of denial, disconnection, and domination that brought us to this breaking point in the first place.

Maybe what is being asked of faculty at this transitional time is not to double down on expiring social contracts or seek new ways of assessing the same things, but to begin improvising ways of learning and being in relation with each other, the Earth, and its trillions of other intelligent beings, both human and not. To do this, we would need to shed the presumed exceptionalism of human intelligence and academic expertise that many of us unconsciously carry.

One possible step in this direction is to invite our students into an open inquiry around AI as an emerging intelligence, and the risks, responsibilities, and relational possibilities it could facilitate or foreclose. For instance, this inquiry could examine the risk that frictionless engagements with AI will reify extractive relational patterns, while also considering the reparative potential of redistributing computational power and co-designing sovereign AI systems with communities.

Such an inquiry would not only be about AI; it could also include reflexive engagements around the cultural complicities that AI reflects back to us. After all, we are implicated in many of the same harms that we rush to accuse AI of: epistemic bias, outsized environmental impact, transactional relational dynamics, the tendency to reduce complexity and ‘hallucinate’ false certainties. And because these harms are embedded in wider systems, they will not go away even if AI does.

Perhaps, if we approach AI not as saviour or scapegoat, but as a mirror – one that surfaces our own unexamined assumptions and planetary entanglements – it could expand our capacity to respond to systemic destabilization with more compassion and collective discernment.

Such discernment might begin with a commitment to respect both principled adoption and principled refusal of AI, while also identifying opportunities to reduce harm and nurture deeper cultural and systemic shifts. If we treat dissensus not as division but as an invitation into shared learning, then AI’s arrival could support a vital pedagogical turn: away from assessing mastery of sanctioned truths, and toward the integration of difficult lessons in this era of consequence.

Sharon Stein

Associate Professor of Climate Complexity and Coloniality in Higher Education, Department of Educational Studies, University of British Columbia, xʷməθkʷəy̓əm (Musqueam) Territory