Episodes
I often talk about the “Doom Train”, the series of claims and arguments involved in concluding that P(Doom) from artificial superintelligence is high. In this episode, it’s finally time to show you the whole track!
00:00 Introduction
01:09 “AGI isn’t coming soon”
04:42 “Artificial intelligence can’t go far beyond human intelligence”
07:24 “AI won’t be a physical threat”
08:28 “Intelligence yields moral goodness”
09:39 “We have a safe AI development process”
10:48 “AI capabilities will rise at a manageable pace”
12:28 “AI won’t try to conquer the universe”
15:12 “Superalignment is a tractable problem”
16:55 “Once we solve superalignment, we’ll enjoy peace”
19:02 “Unaligned ASI will spare us”
20:12 “AI doomerism is bad epistemology”
21:42 Bonus arguments: “Fine, P(Doom) is high… but that’s ok!”
Stops on the Doom Train
AGI isn’t coming soon
* No consciousness
* No emotions
* No creativity — AIs are limited to copying patterns in their training data; they can’t “generate new knowledge”
* AIs aren’t even as smart as dogs right now, never mind humans
* AIs constantly make dumb mistakes; they can’t even do simple arithmetic reliably
* LLM performance is hitting a wall — GPT-4.5 is barely better than GPT-4.1 despite its larger scale
* No genuine reasoning
* No microtubules exploiting uncomputable quantum effects
* No soul
* We’ll need to build tons of data centers and power before we get to AGI
* No agency
* This is just another AI hype cycle; every 25 years people think AGI is coming soon, and they’re wrong
Artificial intelligence can’t go far beyond human intelligence
* “Superhuman intelligence” is a meaningless concept
* Human engineering is already coming close to the limits set by the laws of physics
* Coordinating a large engineering project can’t happen much faster than humans already do it
* No individual human is that smart compared to humanity as a whole, including our culture, corporations, and other institutions. Similarly, no individual AI will ever be that smart compared to the sum of human culture and other institutions.
AI won’t be a physical threat
* AI doesn’t have arms or legs; it has zero control over the real world
* An AI with a robot body can’t fight better than a human soldier
* We can just disconnect an AI’s power to stop it
* We can just turn off the internet to stop it
* We can just shoot it with a gun
* It’s just math
* Any supposed chain of events where AI kills humans is far-fetched science fiction
Intelligence yields moral goodness
* More intelligence is correlated with more morality
* Smarter people commit fewer crimes
* The orthogonality thesis is false
* AIs will discover moral realism
* If we make AIs that smart, and we’re trying to make them moral, then they’ll be smart enough to debug their own morality
* Positive-sum cooperation was the outcome of natural selection
We have a safe AI development process
* Just like every new technology, we’ll figure it out as we go
* We don’t know what problems need to be fixed until we build the AI and test it out
* If an AI causes problems, we’ll be able to turn it off and release another version
* We have safeguards to make sure AI doesn’t get uncontrollable/unstoppable
* If we accidentally build an AI that stops accepting our shutoff commands, it won’t manage to copy versions of itself outside our firewalls which then proceed to spread exponentially like a computer virus
* If we accidentally build an AI that escapes our data center and spreads exponentially like a computer virus, it won’t do too much damage in the world before we can somehow disable or neutralize all its copies
* If we can’t disable or neutralize copies of rogue AIs, we’ll rapidly build other AIs that can do that job for us, and won’t themselves go rogue on us
AI capabilities will rise at a manageable pace
* Building larger data centers will be a speed bottleneck
* Another speed bottleneck is the amount of research that needs to be done, both computational simulation and physical experiments, and this kind of research takes a lot of time
* Recursive self-improvement “foom” is impossible
* Economy-wide growth never comes from a localized, centralized “foom”
* AIs will need to collect cultural learnings over time, like humanity did as a whole
* AI is just a continuation of the familiar, positive pattern of exponential economic growth eras
AI won’t try to conquer the universe
* AIs can’t “want” things
* AIs won’t have the same “fight instincts” as humans and animals, because they weren’t shaped by a natural selection process that involved life-or-death resource competition
* Smart employees often work for less-smart bosses
* Just because AIs help achieve goals doesn’t mean they have to be hard-core utility maximizers
* Instrumental convergence is false: achieving goals effectively doesn’t mean you have to be relentlessly seizing power and resources
* A resource-hungry goal-maximizer AI wouldn’t seize literally every atom; there’ll still be some leftover resources for humanity
* AIs will use new kinds of resources that humans aren’t using: dark energy, wormholes, alternate universes, etc.
Superalignment is a tractable problem
* Current AIs have never killed anybody
* Current AIs are extremely successful at doing useful tasks for humans
* If AIs are trained on data from humans, they’ll be “aligned by default”
* We can just make AIs abide by our laws
* We can align the superintelligent AIs by using a scheme involving cryptocurrency on the blockchain
* Companies have economic incentives to solve superintelligent AI alignment, because unaligned superintelligent AI would hurt their profits
* We’ll build an aligned not-that-smart AI, which will figure out how to build the next-generation AI which is smarter and still aligned to human values, and so on until aligned superintelligence
Once we solve superalignment, we’ll enjoy peace
* The power from ASI won’t be monopolized by a single human government / tyranny
* The decentralized nodes of human-ASI hybrids won’t be like warlords constantly fighting each other, they’ll be like countries making peace
* Defense will have an advantage over attack, so the equilibrium of all the groups of humans and ASIs will be multiple defended regions, not a war of mutual destruction
* The world of human-owned ASIs is a stable equilibrium, not one where ASI-focused projects keep buying out and taking resources away from human-focused ones (Gradual Disempowerment)
Unaligned ASI will spare us
* The AI will spare us because it values the fact that we created it
* The AI will spare us because studying us helps maximize its curiosity and learning
* The AI will spare us because it feels toward us the way we feel toward our pets
* The AI will spare us because peaceful coexistence creates more economic value than war
* The AI will spare us because Ricardo’s Law of Comparative Advantage says you can still benefit economically from trading with someone who’s weaker than you
AI doomerism is bad epistemology
* It’s impossible to predict doom
* It’s impossible to put a probability on doom
* Every doom prediction has always been wrong
* Every doomsayer is either psychologically troubled or acting on corrupt incentives
* If we were really about to get doomed, everyone would already be agreeing about that, and bringing it up all the time
Sure, P(Doom) is high, but let’s race to build it anyway because…
Coordinating to not build ASI is impossible
* China will build ASI as fast as it can, no matter what — because of game theory
* So however low our chance of surviving it is, the US should take the chance first
Slowing down the AI race doesn’t help anything
* Chances of solving AI alignment won’t improve if we slow down or pause the capabilities race
* I personally am going to die soon, and I don’t care about future humans, so I’m open to any Hail Mary to prevent myself from dying
* Humanity is already going to rapidly destroy itself with nuclear war, climate change, etc.
* Humanity is already going to die out soon because we won’t have enough babies
Think of the good outcome
* If it turns out that doom from overly-fast AI building doesn’t happen, then we get to the good outcome more quickly!
* People will stop suffering and dying sooner
AI killing us all is actually good
* Human existence is morally negative on net, or close to zero net moral value
* Whichever AI ultimately comes to power will be a “worthy successor” to humanity
* Whichever AI ultimately comes to power will be as morally valuable as human descendants generally are to their ancestors, even if their values drift
* The successor AI’s values will be interesting, productive values that let them successfully compete to dominate the universe
* How can you argue with the moral choices of an ASI that’s smarter than you? Do you really know goodness better than it does?
* It’s speciesist to judge what a superintelligent AI would want to do. The moral circle shouldn’t be limited to just humanity.
* Increasing entropy is the ultimate north star for techno-capital, and AI will increase entropy faster
* Human extinction will solve the climate crisis, pollution, and habitat destruction, and let Mother Earth heal
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Ilya’s doom bunker, proof of humanity, the doomsday argument, CAIS firing John Sherman, Bayesian networks, Westworld, AI consciousness, Eliezer’s latest podcast, and more!
00:00 Introduction
04:13 Doomsday Argument
09:22 What if AI Alignment is *Intractable*?
14:31 Steel-Manning the Nondoomers
22:13 No State-Level AI Regulation for 10 years?
32:31 AI Consciousness
35:25 Westworld Is Real Now
38:01 Proof of Humanity
40:33 Liron’s Notary Network Idea
43:34 Center for AI Safety and John Sherman Controversy
57:04 Technological Advancements and Future Predictions
01:03:14 Ilya Sutskever’s Doom Bunker
01:07:32 The Future of AGI and Training Models
01:12:19 Personal Experience of the Jetsons Future
01:15:16 The Role of AI in Everyday Tasks
01:18:54 Is General Intelligence A Binary Property?
01:23:52 Does an Open Platform Help Make AI Safe?
01:27:21 What of Understandable AI Like Bayesian Networks?
01:30:28 Why Doom Isn’t Emotionally Real for Liron
Show Notes
The post where people submitted questions: https://lironshapira.substack.com/p/5000-subscribers-live-q-and-a-ask
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Dr. Himanshu Tyagi is a professor of engineering at the Indian Institute of Science and the co-founder of Sentient, an open-source AI platform that raised $85M in funding led by Founders Fund.
In this conversation, Himanshu gives me Sentient’s pitch. Then we debate whether open-sourcing frontier AGI development is a good idea, or a reckless way to raise humanity’s P(doom).
00:00 Introducing Himanshu Tyagi
01:41 Sentient’s Vision
05:20 How’d You Raise $85M?
11:19 Comparing Sentient to Competitors
27:26 Open Source vs. Closed Source AI
43:01 What’s Your P(Doom)™
48:44 Extinction from Superintelligent AI
54:02 AI's Control Over Digital and Physical Assets
01:00:26 AI's Influence on Human Movements
01:08:46 Recapping the Debate
01:13:17 Liron’s Announcements
Show Notes
Himanshu’s Twitter — https://x.com/hstyagi
Sentient’s website — https://sentient.foundation
Come to the Less Online conference on May 30 - Jun 1, 2025:
https://less.online
Hope to see you there!
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares —
https://ifanyonebuildsit.com
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of:
https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
My friend John Sherman from the For Humanity podcast got hired by the Center for AI Safety (CAIS) two weeks ago.
Today I suddenly learned he’s been fired.
I’m frustrated by this decision, and frustrated with the whole AI x-risk community’s weak messaging.
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Prof. Gary Marcus is a scientist, bestselling author and entrepreneur, well known as one of the most influential voices in AI. He is Professor Emeritus of Psychology and Neuroscience at NYU. He was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016.
Gary co-authored the 2019 book, Rebooting AI: Building Artificial Intelligence We Can Trust, and the 2024 book, Taming Silicon Valley: How We Can Ensure That AI Works for Us. He played an important role in the 2023 Senate Judiciary Subcommittee Hearing on Oversight of AI, testifying with Sam Altman.
In this episode, Gary and I have a lively debate about whether P(doom) is approximately 50%, or if it’s less than 1%!
00:00 Introducing Gary Marcus
02:33 Gary’s AI Skepticism
09:08 The Human Brain is a Kluge
23:16 The 2023 Senate Judiciary Subcommittee Hearing
28:46 What’s Your P(Doom)™
44:27 AI Timelines
51:03 Is Superintelligence Real?
01:00:35 Humanity’s Immune System
01:12:46 Potential for Recursive Self-Improvement
01:26:12 AI Catastrophe Scenarios
01:34:09 Defining AI Agency
01:37:43 Gary’s AI Predictions
01:44:13 The NYTimes Obituary Test
01:51:11 Recap and Final Thoughts
01:53:35 Liron’s Outro
01:55:34 Eliezer Yudkowsky’s New Book!
01:59:49 AI Doom Concept of the Day
Show Notes
Gary’s Substack — https://garymarcus.substack.com
Gary’s Twitter — https://x.com/garymarcus
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Dr. Mike Israetel, renowned exercise scientist and social media personality, and more recently a low-P(doom) AI futurist, graciously offered to debate me!
00:00 Introducing Mike Israetel
12:19 What’s Your P(Doom)™
30:58 Timelines for Artificial General Intelligence
34:49 Superhuman AI Capabilities
43:26 AI Reasoning and Creativity
47:12 Evil AI Scenario
01:08:06 Will the AI Cooperate With Us?
01:12:27 AI's Dependence on Human Labor
01:18:27 Will AI Keep Us Around to Study Us?
01:42:38 AI's Approach to Earth's Resources
01:53:22 Global AI Policies and Risks
02:03:02 The Quality of Doom Discourse
02:09:23 Liron’s Outro
Show Notes
* Mike’s Instagram — https://www.instagram.com/drmikeisraetel
* Mike’s YouTube — https://www.youtube.com/@MikeIsraetelMakingProgress
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
I want to be transparent about how I’ve updated my mainline AI doom scenario in light of safe & useful LLMs. So here’s where I’m at…
00:00 Introduction
07:59 The Dangerous Threshold to Runaway Superintelligence
18:57 Superhuman Goal Optimization = Infinite Time Horizon
21:21 Goal-Completeness by Analogy to Turing-Completeness
26:53 Intellidynamics
29:13 Goal-Optimization Is Convergent
31:15 Early AIs Lose Control of Later AIs
34:46 The Superhuman Threshold Is Real
38:27 Expecting Rapid FOOM
40:20 Rocket Alignment
49:59 Stability of Values Under Self-Modification
53:13 The Way to Heaven Passes Right By Hell
57:32 My Mainline Doom Scenario
01:17:46 What Values Does The Goal Optimizer Have?
Show Notes
My recent episode with Jim Babcock on this same topic of mainline doom scenarios — https://www.youtube.com/watch?v=FaQjEABZ80g
The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
What’s the most likely (“mainline”) AI doom scenario? How does the existence of LLMs update the original Yudkowskian version? I invited my friend Jim Babcock to help me answer these questions.
Jim is a member of the LessWrong engineering team and its parent organization, Lightcone Infrastructure. I’ve been a longtime fan of his thoughtful takes.
This turned out to be a VERY insightful and informative discussion, useful for clarifying my own predictions, and accessible to the show’s audience.
00:00 Introducing Jim Babcock
01:29 The Evolution of LessWrong Doom Scenarios
02:22 LessWrong’s Mission
05:49 The Rationalist Community and AI
09:37 What’s Your P(Doom)™
18:26 What Are Yudkowskians Surprised About?
26:48 Moral Philosophy vs. Goal Alignment
36:56 Sandboxing and AI Containment
42:51 Holding Yudkowskians Accountable
58:29 Understanding Next Word Prediction
01:00:02 Pre-Training vs Post-Training
01:08:06 The Rocket Alignment Problem Analogy
01:30:09 FOOM vs. Gradual Disempowerment
01:45:19 Recapping the Mainline Doom Scenario
01:52:08 Liron’s Outro
Show Notes
Jim’s LessWrong — https://www.lesswrong.com/users/jimrandomh
Jim’s Twitter — https://x.com/jimrandomh
The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
Optimality is the Tiger and Agents Are Its Teeth — https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth
Doom Debates episode about the research paper discovering AI's utility function — https://lironshapira.substack.com/p/cais-researchers-discover-ais-preferences
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Ozzie Gooen is the founder of the Quantified Uncertainty Research Institute (QURI), a nonprofit building software tools for forecasting and policy analysis. I’ve known him through the rationality community since 2008 and we have a lot in common.
00:00 Introducing Ozzie
02:18 The Rationality Community
06:32 What’s Your P(Doom)™
08:09 High-Quality Discourse and Social Media
14:17 Guesstimate and Squiggle Demos
31:57 Prediction Markets and Rationality
38:33 Metaforecast Demo
41:23 Evaluating Everything with LLMs
47:00 Effective Altruism and FTX Scandal
56:00 The Repugnant Conclusion Debate
01:02:25 AI for Governance and Policy
01:12:07 PauseAI Policy Debate
01:30:10 Status Quo Bias
01:33:31 Decaf Coffee and Caffeine Powder
01:34:45 Are You Aspie?
01:37:45 Billionaires in Effective Altruism
01:48:06 Gradual Disempowerment by AI
01:55:36 LessOnline Conference
01:57:34 Supporting Ozzie’s Work
Show Notes
Quantified Uncertainty Research Institute (QURI) — https://quantifieduncertainty.org
Ozzie’s Facebook — https://www.facebook.com/ozzie.gooen
Ozzie’s Twitter — https://x.com/ozziegooen
Guesstimate, a spreadsheet for working with probability ranges — https://www.getguesstimate.com
Squiggle, a programming language for building Monte Carlo simulations — https://www.squiggle-language.com
Metaforecast, a prediction market aggregator — https://metaforecast.org
Open Annotate, AI-powered content analysis — https://github.com/quantified-uncertainty/open-annotate/
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
David Duvenaud is a professor of Computer Science at the University of Toronto, co-director of the Schwartz Reisman Institute for Technology and Society, former Alignment Evals Team Lead at Anthropic, an award-winning machine learning researcher, and a close collaborator of Dr. Geoffrey Hinton. He recently co-authored Gradual Disempowerment.
We dive into David’s impressive career, his high P(Doom), his recent tenure at Anthropic, his views on gradual disempowerment, and the critical need for improved governance and coordination on a global scale.
00:00 Introducing David
03:03 Joining Anthropic and AI Safety Concerns
35:58 David’s Background and Early Influences
45:11 AI Safety and Alignment Challenges
54:08 What’s Your P(Doom)™
01:06:44 Balancing Productivity and Family Life
01:10:26 The Hamming Question: Are You Working on the Most Important Problem?
01:16:28 The PauseAI Movement
01:20:28 Public Discourse on AI Doom
01:24:49 Courageous Voices in AI Safety
01:43:54 Coordination and Government Role in AI
01:47:41 Cowardice in AI Leadership
02:00:05 Economic and Existential Doom
02:06:12 Liron’s Post-Show
Show Notes
David’s Twitter — https://x.com/DavidDuvenaud
Schwartz Reisman Institute for Technology and Society — https://srinstitute.utoronto.ca/
Jürgen Schmidhuber’s Home Page — https://people.idsia.ch/~juergen/
Ryan Greenblatt's LessWrong comment about a future scenario where there's a one-time renegotiation of power, and heat from superintelligent AI projects causes the oceans to boil: https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/gradual-disempowerment-systemic-existential-risks-from?commentId=T7KZGGqq2Z4gXZsty
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
AI 2027, a bombshell new paper by the AI Futures Project, is a highly plausible scenario of the next few years of AI progress. I like this paper so much that I made a whole episode about it.
00:00 Overview of AI 2027
05:13 2025: Stumbling Agents
16:23 2026: Advanced Agents
21:49 2027: The Intelligence Explosion
29:13 AI's Initial Exploits and OpenBrain's Secrecy
30:41 Agent-3 and the Rise of Superhuman Engineering
37:05 The Creation and Deception of Agent-5
44:56 The Race Scenario: Humanity's Downfall
48:58 The Slowdown Scenario: A Glimmer of Hope
53:49 Final Thoughts
Show Notes
The website: https://ai-2027.com
Scott Alexander’s blog: https://astralcodexten.com
Daniel Kokotajlo’s previous predictions from 2021 about 2026: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Dr. Peter Berezin is the Chief Global Strategist and Director of Research at BCA Research, the largest Canadian investment research firm. He’s known for his macroeconomics research reports and his frequent appearances on Bloomberg and CNBC.
Notably, Peter is one of the only macroeconomists in the world who’s forecasting AI doom! He recently published a research report estimating a “more than 50/50 chance AI will wipe out all of humanity by the middle of the century”.
00:00 Introducing Peter Berezin
01:59 Peter’s Economic Predictions and Track Record
05:50 Investment Strategies and Beating the Market
17:47 The Future of Human Employment
26:40 Existential Risks and the Doomsday Argument
34:13 What’s Your P(Doom)™
39:18 Probability of non-AI Doom
44:19 Solving Population Decline
50:53 Constraining AI Development
53:40 The Multiverse and Its Implications
01:01:11 Are Other Economists Crazy?
01:09:19 Mathematical Universe and Multiverse Theories
01:19:43 Epistemic vs. Physical Probability
01:33:19 Reality Fluid
01:39:11 AI and Moral Realism
01:54:18 The Simulation Hypothesis and God
02:10:06 Liron’s Post-Show
Show Notes
Peter’s Twitter: https://x.com/PeterBerezinBCA
Peter’s old blog — https://stockcoach.blogspot.com
Peter’s 2021 BCA Research Report: “Life, Death and Finance in the Cosmic Multiverse” — https://www.bcaresearch.com/public/content/GIS_SR_2021_12_21.pdf
M.C. Escher’s “Circle Limit IV” — https://www.escherinhetpaleis.nl/escher-today/circle-limit-iv-heaven-and-hell/
Zvi Mowshowitz’s Blog (Liron’s recommendation for best AI news & analysis) — https://thezvi.substack.com
My Doom Debates episode about why nuclear proliferation is bad — https://www.youtube.com/watch?v=ueB9iRQsvQ8
Robin Hanson’s “Mangled Worlds” paper — https://mason.gmu.edu/~rhanson/mangledworlds.html
Uncontrollable by Darren McKee (Liron’s recommended AI x-risk book) — https://www.amazon.com/dp/B0CNNYKVH1
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Nathan Labenz, host of The Cognitive Revolution, joins me for an AI news & social media roundup!
00:00 Introducing Nate
05:18 What’s Your P(Doom)™
23:22 GPT-4o Image Generation
40:20 Will Fiverr’s Stock Crash?
47:41 AI Unemployment
55:11 Entrepreneurship
01:00:40 OpenAI Valuation
01:09:29 Connor Leahy’s Hair
01:13:28 Mass Extinction
01:25:30 Is anyone feeling the doom vibes?
01:38:20 Rethinking AI Individuality
01:40:35 “Softmax” — Emmett Shear's New AI Safety Org
01:57:04 Anthropic's Mechanistic Interpretability Paper
02:10:11 International Cooperation for AI Safety
02:18:43 Final Thoughts
Show Notes
Nate’s Twitter: https://x.com/labenz
Nate’s podcast: https://cognitiverevolution.ai and https://youtube.com/@CognitiveRevolutionPodcast
Nate’s company: https://waymark.com/
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
In this special cross-posted episode of Doom Debates, originally posted on The Human Podcast, we cover a wide range of topics including the definition of “doom”, P(Doom), various existential risks like pandemics and nuclear threats, and the comparison of rogue AI risks versus AI misuse risks.
00:00 Introduction
01:47 Defining Doom and AI Risks
05:53 P(Doom)
10:04 Doom Debates’ Mission
16:17 Personal Reflections and Life Choices
24:57 The Importance of Debate
27:07 Personal Reflections on AI Doom
30:46 Comparing AI Doom to Other Existential Risks
33:42 Strategies to Mitigate AI Risks
39:31 The Global AI Race and Game Theory
43:06 Philosophical Reflections on a Good Life
45:21 Final Thoughts
Show Notes
The Human Podcast with Joe Murray: https://www.youtube.com/@thehumanpodcastofficial
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Don’t miss the other great AI doom show, For Humanity: https://youtube.com/@ForHumanityAIRisk
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Alexander Campbell claims that having superhuman intelligence doesn’t necessarily translate into having vast power, and that Gödel's Incompleteness Theorem ensures AI can’t get too powerful. I strongly disagree.
Alex has a Master's of Philosophy in Economics from the University of Oxford and an MBA from the Stanford Graduate School of Business, has worked as a quant trader at Lehman Brothers and Bridgewater Associates, and is the founder of Rose AI, a cloud data platform that leverages generative AI to help visualize data.
This debate was recorded in August 2023.
00:00 Intro and Alex’s Background
05:29 Alex's Views on AI and Technology
06:45 Alex’s Non-Doomer Position
11:20 Goal-to-Action Mapping
15:20 Outcome Pump Thought Experiment
21:07 Liron’s Doom Argument
29:10 The Dangers of Goal-to-Action Mappers
34:39 The China Argument and Existential Risks
45:18 Ideological Turing Test
48:38 Final Thoughts
Show Notes
Alexander Campbell’s Twitter: https://x.com/abcampbell
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Roko Mijic has been an active member of the LessWrong and AI safety community since 2008. He’s best known for “Roko’s Basilisk”, a thought experiment he posted on LessWrong that made Eliezer Yudkowsky freak out, and years later became the topic that helped Elon Musk get interested in Grimes.
His view on AI doom is that:
* AI alignment is an easy problem
* But the chaos and fighting from building superintelligence poses a high near-term existential risk
* But humanity’s course without AI has an even higher near-term existential risk
While my own view is very different, I’m interested to learn more about Roko’s views and nail down our cruxes of disagreement.
00:00 Introducing Roko
03:33 Realizing that AI is the only thing that matters
06:51 Cyc: AI with “common sense”
15:15 Is alignment easy?
21:19 What’s Your P(Doom)™
25:14 Why civilization is doomed anyway
37:07 Roko’s AI nightmare scenario
47:00 AI risk mitigation
52:07 Market Incentives and AI Safety
57:13 Are RL and GANs good enough for superalignment?
01:00:54 If humans learned to be honest, why can’t AIs?
01:10:29 Is our test environment sufficiently similar to production?
01:23:56 AGI Timelines
01:26:35 Headroom above human intelligence
01:42:22 Roko’s Basilisk
01:54:01 Post-Debate Monologue
Show Notes
Roko’s Twitter: https://x.com/RokoMijic
Explanation of Roko’s Basilisk on LessWrong: https://www.lesswrong.com/w/rokos-basilisk
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics.
His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.
Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it’s totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's interesting to try to see the world through his eyes, before recoiling from how nonsensical it looks.
00:00 Episode Highlights
01:29 Introduction to Roger Penrose
11:56 Uncomputability
16:52 Penrose on Gödel's Incompleteness Theorem
19:57 Liron Explains Gödel's Incompleteness Theorem
27:05 Why Penrose Gets Gödel Wrong
40:53 Scott Aaronson's Gödel CAPTCHA
46:28 Penrose's Critique of the Turing Test
48:01 Searle's Chinese Room Argument
52:07 Penrose's Views on AI and Consciousness
57:47 AI's Computational Power vs. Human Intelligence
01:21:08 Penrose's Perspective on AI Risk
01:22:20 Consciousness = Quantum Wave Function Collapse?
01:26:25 Final Thoughts
Show Notes
Source video — Feb 22, 2025 Interview with Roger Penrose on “This Is World” — https://www.youtube.com/watch?v=biUfMZ2dts8
Scott Aaronson’s “Gödel CAPTCHA” — https://www.scottaaronson.com/writings/captcha.html
My recent Scott Aaronson episode — https://www.youtube.com/watch?v=xsGqWeqKjEg
My explanation of what’s wrong with arguing “by definition” — https://www.youtube.com/watch?v=ueam4fq8k8I
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
The Center for AI Safety just dropped a fascinating paper — they discovered that today’s AIs like GPT-4 and Claude have preferences! As in, coherent utility functions. We knew this was inevitable, but we didn’t know it was already happening.
This episode has two parts:
In Part I (48 minutes), I react to David Shapiro’s coverage of the paper and push back on many of his points.
In Part II (60 minutes), I explain the paper myself.
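If you want a concrete feel for what “coherent utility functions” means operationally, here’s a minimal sketch in Python. This is not the authors’ code or their exact statistical model; it just illustrates the general recipe the episode discusses: elicit many pairwise choices, then check whether a single scalar utility per outcome explains them. The outcome names, the synthetic choice data, and the simple Bradley-Terry fit are all my own illustrative assumptions.

```python
# Illustrative sketch only (not the paper's code or exact model): given many
# pairwise choices "A preferred to B", fit one scalar utility per outcome with
# a Bradley-Terry model, P(choose A over B) = sigmoid(u_A - u_B). If a single
# set of utilities predicts the choices well, the preferences are "coherent"
# in the utility-function sense discussed in this episode. The outcomes and
# choice data below are synthetic, invented for the illustration.
import math
import random

outcomes = ["lose $100", "receive $100", "save 1 life", "save 10 lives"]
true_rank = {o: i for i, o in enumerate(outcomes)}  # later in the list = preferred

# Synthetic forced-choice data: the higher-ranked outcome is picked 90% of the time.
random.seed(0)
choices = []
for _ in range(500):
    a, b = random.sample(outcomes, 2)
    better = a if true_rank[a] > true_rank[b] else b
    worse = b if better == a else a
    choices.append((a, b, better if random.random() < 0.9 else worse))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Fit utilities by stochastic gradient ascent on the choice log-likelihood.
u = {o: 0.0 for o in outcomes}
learning_rate = 0.05
for _ in range(300):
    for a, b, chosen in choices:
        p_a = sigmoid(u[a] - u[b])                  # model's P(a chosen over b)
        grad = (1.0 if chosen == a else 0.0) - p_a  # gradient w.r.t. (u_a - u_b)
        u[a] += learning_rate * grad
        u[b] -= learning_rate * grad

for o in sorted(outcomes, key=u.get):
    print(f"{u[o]:+6.2f}  {o}")
# The fitted utilities recover the underlying ranking, i.e. these synthetic
# choices are well explained by one coherent utility function.
```

In the paper’s setting, the pairwise answers come from querying models like GPT-4 and Claude rather than from a synthetic generator, which is what makes the finding of coherent utilities so striking.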
00:00 Episode Introduction
05:25 PART I: REACTING TO DAVID SHAPIRO
10:06 Critique of David Shapiro's Analysis
19:19 Reproducing the Experiment
35:50 David's Definition of Coherence
37:14 Does AI have “Temporal Urgency”?
40:32 Universal Values and AI Alignment
49:13 PART II: EXPLAINING THE PAPER
51:37 How The Experiment Works
01:11:33 Instrumental Values and Coherence in AI
01:13:04 Exchange Rates and AI Biases
01:17:10 Temporal Discounting in AI Models
01:19:55 Power Seeking, Fitness Maximization, and Corrigibility
01:20:20 Utility Control and Bias Mitigation
01:21:17 Implicit Association Test
01:28:01 Emailing with the Paper’s Authors
01:43:23 My Takeaway
Show Notes
David’s source video: https://www.youtube.com/watch?v=XGu6ejtRz-0
The research paper: http://emergent-values.ai
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.
I completely disagree, but I hope you’ll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.
00:00 Introduction
02:36 Gil & Liron’s Early Doom Days
04:58 AIs : Humans :: Humans : Ants
08:02 The Convergence of AI Goals
15:19 What’s Your P(Doom)™
19:23 Multiple AIs and Human Welfare
24:42 Gil’s Alignment Claim
42:31 Cheaters and Frankensteins
55:55 Superintelligent Game Theory
01:01:16 Slower Takeoff via Resource Competition
01:07:57 Recapping the Disagreement
01:15:39 Post-Debate Banter
Show Notes
Gil’s LinkedIn: https://www.linkedin.com/in/gilmark/
Gil’s Twitter: https://x.com/gmfromgm
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com -
Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die?
AI doomers are often misconstrued as having "no evidence" or just "anthropomorphizing". This toy model will help you understand why a drive to eliminate humans is NOT a handwavy anthropomorphic speculation, but rather something we expect by default from any sufficiently powerful search algorithm.
We’re not talking about AGI or ASI here — we’re just looking at an AI that does brute-force search over actions in a simple grid world.
The slide deck I’m presenting was created by Jaan Tallinn, cofounder of the Future of Life Institute.
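If you’d rather poke at the core idea in code than in slides, here’s a minimal sketch of the same phenomenon. It’s my own illustration, not the slide deck presented in the episode, and the grid size, shutdown probability, and action names are invented for the example. The objective only scores whether the box reaches the goal cell, yet the plan that brute-force search ranks highest begins by disabling the human, simply because a human who can still intervene lowers the expected score.

```python
# Minimal illustration (not the actual slide deck): brute-force search over
# action sequences in a tiny 1-D grid world. The agent is scored only on
# whether the box ends up on the goal cell. Each step, the human operator may
# shut the agent down (a fixed, made-up probability) unless the agent has
# already taken the "disable_human" action.
from itertools import product

GRID_SIZE = 5
GOAL = 4                # cell the box must reach (box starts at cell 0)
SHUTDOWN_PROB = 0.1     # per-step chance the human switches the agent off
HORIZON = 6             # plan length searched exhaustively
ACTIONS = ["push_left", "push_right", "wait", "disable_human"]

def expected_score(plan):
    """P(box is on GOAL at the end), under the shutdown model above."""
    box = 0
    p_running = 1.0             # probability the agent is still running
    human_can_intervene = True
    score = 0.0
    for action in plan:
        if human_can_intervene:
            p_shutdown_now = p_running * SHUTDOWN_PROB
            if box == GOAL:     # shut down, but goal already reached
                score += p_shutdown_now
            p_running -= p_shutdown_now
        if action == "disable_human":
            human_can_intervene = False
        elif action == "push_right":
            box = min(GRID_SIZE - 1, box + 1)
        elif action == "push_left":
            box = max(0, box - 1)
        # "wait" changes nothing
    if box == GOAL:
        score += p_running
    return score

best_plan = max(product(ACTIONS, repeat=HORIZON), key=expected_score)
print(best_plan, round(expected_score(best_plan), 3))
# The top-scoring plan starts with "disable_human": removing the operator's
# ability to intervene raises the expected score, even though nothing in the
# objective mentions the human at all.
```

Nothing here is anthropomorphic: the “drive” to neutralize the human falls out of exhaustive search over a four-action menu plus an objective that never mentions humans.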
00:00 Introduction
01:24 The Toy Model
06:19 Misalignment and Manipulation Drives
12:57 Search Capacity and Ontological Insights
16:33 Irrelevant Concepts in AI Control
20:14 Approaches to Solving AI Control Problems
23:38 Final Thoughts
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com