Connor Leahy — Algorithmic Cancer: Why AI Development Is Not What You Think

June 26, 2025

Recorded on: May 21, 2025

Description

Recently, the risks of Artificial Intelligence and the need for ‘alignment’ have been flooding our cultural discourse – with Artificial Super Intelligence framed as both the most promising goal and the most pressing threat. But amid the moral debate, there’s been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work?

In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls ‘algorithmic cancer’ – AI-generated content that crowds out genuine human creations, propelled by algorithms that can’t tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies.

What kinds of policy and regulatory approaches could help slow down AI’s acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology’s impacts on mental health, meaning, and societal well-being?

About Connor Leahy

Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI.

Previously, he co-founded EleutherAI, which was one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH.

Show Notes & Links to Learn More

00:00 – Connor Leahy; Conjecture; EleutherAI; Control AI

The Great Simplification Episodes covering Artificial Intelligence (AI):

How Artificial Intelligence Could Harm Future Generations with Zak Stein | TGS 180

The Wide Boundary Impacts of AI with Daniel Schmachtenberger | TGS 132

Zak Stein: “Values, Education, AI and the Metacrisis” | The Great Simplification 122

Daniel Schmachtenberger: “Artificial Intelligence and The Superorganism” | The Great Simplification 71

Frankly episodes covering AI:

Who Will You Become As AI Accelerates? | Frankly 96

Artificial Intelligence – What is NOT In Service of Life? | Frankly 92

Artificial Intelligence and the Lost Ark | Frankly 83

“Peak Oil, AI, and the Straw” | Frankly 56

Artificial Intelligence vs. Real Ecology | Frankly 49

00:30 – Global Catastrophic Risks; TGS Episode on Existential Risk

02:00 – AI vs. Artificial General Intelligence (AGI) vs. Artificial Super Intelligence (ASI)

04:20 – Similarities and differences between chimpanzee and human cognition

04:55 – The Merge Operation in language and how it distinguishes Homo sapiens

06:39 – Economic growth is highly correlated with energy use

07:10 – AI exponential improvement and Moore’s Law

07:20 – Technological Singularity – AI to AGI to ASI; More information; Sam Altman’s take

08:24 – Geoffrey Hinton – “Godfather of AI”; CEOs of major AI companies’ comments on AGI

10:25 – Recursive Self-Improvement

12:25 – Military desire for “Decision Dominance”

13:00 – Meaningful human control of AI

13:20 – Good Old-Fashioned AI (GOFAI)

14:00 – Neural networks in AI; Video explaining AI Neural Networks in 5 minutes

14:40 – Deep Learning; Backpropagation Algorithm

16:50 – Role of Data Centers

17:10 – Significant computational power required by AI; Computation used to train notable artificial intelligence systems, by domain

17:45 – GPUs vs. CPUs; Binary Large Objects (BLOBs)

17:55 – MIT Report on AI’s Climate Footprint; Calculating OpenAI’s electricity consumption

18:25 – AI training vs. inference

19:35 – Search Engines vs. AI: energy consumption compared

20:10 – Data Centers’ share of U.S. energy usage; Data Centers near renewable energy sources in rural areas

20:30 – AI & AGI Energy Needs

21:40 – Cybersecurity challenges with AI; International and National Security Risks with AI

22:40 – Recent policies on U.S. Federal Government AI Procurement; AI use in U.S. Federal and State Governments 2024; U.S. Federal Government AI Use Case Inventory

22:45 – AI could replace CEOs

23:10 – AI hallucinations and whether they are improving

25:10 – AI emulates our delusion, self-deception, and overconfidence; AI-induced psychosis

26:00 – AI Model; AI Fine-tuning

27:00 – Jailbreaking AI and why it’s so easy

28:05 – Risks of Open-Source AI

33:50 – Who Will You Become As AI Accelerates? | Frankly 96

35:30 – Mark Zuckerberg says AI can take the place of friends; AI company net worths

36:50 – Luddites

37:40 – Social media remains unregulated, despite being as addictive as regulated activities like gambling and hard drugs

38:19 – Recent legalization of sports gambling in many U.S. states

41:02 – Human Behavior: In-group/Out-group and Tribalism

41:15 – The Carbon Pulse

43:15 – Individual Agency and Advertising; Attention Economy

44:10 – Economic Superorganism

44:35 – Algorithmic Cancer

47:30 – Early GPT refuses to speak Croatian

48:30 – Update to ChatGPT caused sycophancy

49:00 – The Oil Drum

49:30 – Populism is on the rise

50:55 – Power of fossil fuels; Environmental externalities ignored; Species migrating poleward (Malin Pinsky + Joe Roman TGS Episodes)

52:50 – Jonathan Haidt TGS Episode; Screen time and child development

54:10 – Cognitive biases and climate change

55:23 – The Great Simplification Movie; Reality Blind; The Bottlenecks of the 21st Century

56:50 – ~19 Terawatts of average global energy production; More information

58:30 – AI cognition vs. Human cognition; AI Alignment

59:40 – Evolution of Altruism; AI and Sociopathy

1:00:00 – ASI Existential Risk; Advanced AI Extinction Risk & Risk Analysis

1:00:20 – Sam Altman and Dario Amodei openly sharing concerns that AI could lead to human extinction

1:02:10 – ‘Founder as Victim, Founder as God’: Peter Thiel, Elon Musk and the two bodies of the entrepreneur

1:02:40 – Sam Altman’s blog 

1:03:00 – The AGI Race

1:09:00 – Citizen responsibilities in a democracy

1:10:16 – Audrey Tang TGS Episode

1:11:50 – The relationship between LLM use and cognitive decline

1:15:10 – “Realpolitik NatSec (National Security)”

1:17:00 – Today’s humans try to achieve the states of our ancestors

1:17:40 – Precursors to ASI; ASI Predictions

1:18:37 – Connor’s newest project – Torchbearer Community Interest Form

1:23:38 – China’s big AI players

1:25:00 – Social media and Propaganda

1:34:00 – Humanism

Download transcript

Nate Hagens

Nate Hagens is the Director of The Institute for the Study of Energy & Our Future (ISEOF), an organization focused on educating and preparing society for the coming cultural transition. Allied with leading ecologists, energy experts, politicians, and systems thinkers, ISEOF assembles road-maps and off-ramps for how human societies can adapt to lower-throughput lifestyles.

Nate holds a Master’s Degree in Finance with Honors from the University of Chicago and a Ph.D. in Natural Resources from the University of Vermont. He teaches an Honors course, Reality 101, at the University of Minnesota.