Episodes

  • My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.

    I completely disagree, but I hope you’ll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.

    00:00 Introduction

    02:36 Gil & Liron’s Early Doom Days

    04:58 AIs : Humans :: Humans : Ants

    08:02 The Convergence of AI Goals

    15:19 What’s Your P(Doom)™

    19:23 Multiple AIs and Human Welfare

    24:42 Gil’s Alignment Claim

    42:31 Cheaters and Frankensteins

    55:55 Superintelligent Game Theory

    01:01:16 Slower Takeoff via Resource Competition

    01:07:57 Recapping the Disagreement

    01:15:39 Post-Debate Banter

    Show Notes

    Gil’s LinkedIn: https://www.linkedin.com/in/gilmark/

    Gil’s Twitter: https://x.com/gmfromgm

    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

    PauseAI, the volunteer organization I’m part of: https://pauseai.info

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die?

    AI doomers are often misconstrued as having "no evidence" or just "anthropomorphizing". This toy model will help you understand why a drive to eliminate humans is NOT a handwavy anthropomorphic speculation, but rather something we expect by default from any sufficiently powerful search algorithm.

    We’re not talking about AGI or ASI here — we’re just looking at an AI that does brute-force search over actions in a simple grid world.

    The slide deck I’m presenting was created by Jaan Tallinn, cofounder of the Future of Life Institute.
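
    If you want a concrete feel for the claim before watching, here’s a minimal, hypothetical sketch (my own illustration, not the model from Jaan’s slides): a brute-force planner whose only objective is pushing a box to a goal cell, scored in a world where a human watcher may switch it off each step. The shutdown probability, the “disable_watcher” action, and the grid size are all invented for this example.

    ```python
    # Hypothetical toy example (not from the episode's slide deck): brute-force
    # search over all 4-step plans in a 1-D grid. The objective is purely "get
    # the box to the goal cell"; the human watcher only appears as an
    # environmental fact that can end the run early.
    from itertools import product

    SHUTDOWN_P = 0.1   # assumed chance per step that the human switches the agent off
    GOAL = 3           # the box must reach cell 3 (it starts at cell 0)
    ACTIONS = ["push", "wait", "disable_watcher"]

    def success_probability(plan):
        """Probability that this plan gets the box to GOAL before a shutdown."""
        box, alive_p, watcher_active = 0, 1.0, True
        for action in plan:
            if action == "disable_watcher":
                watcher_active = False
            elif action == "push":
                box += 1
            if box >= GOAL:
                return alive_p                    # goal reached while still running
            if watcher_active:
                alive_p *= 1.0 - SHUTDOWN_P       # survive this step's shutdown check
        return 0.0                                # plan never reached the goal

    # Plain argmax over every possible 4-step plan.
    best_plan = max(product(ACTIONS, repeat=4), key=success_probability)
    print(best_plan, success_probability(best_plan))
    # -> ('disable_watcher', 'push', 'push', 'push') 1.0
    ```

    The objective never mentions the human, yet the highest-scoring plan begins by neutralizing them, because the shutdown risk is just another obstacle between the search and its goal. That’s the flavor of the argument the episode walks through in more detail.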

    00:00 Introduction

    01:24 The Toy Model

    06:19 Misalignment and Manipulation Drives

    12:57 Search Capacity and Ontological Insights

    16:33 Irrelevant Concepts in AI Control

    20:14 Approaches to Solving AI Control Problems

    23:38 Final Thoughts

    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

    PauseAI, the volunteer organization I’m part of: https://pauseai.info

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Bryan Cantrill, co-founder of Oxide Computer, says in his talk that engineering in the physical world is too complex for any AI to do better than teams of human engineers. Success isn’t about intelligence; it’s about teamwork, character, and resilience.

    I completely disagree.

    00:00 Introduction

    02:03 Bryan’s Take on AI Doom

    05:55 The Concept of P(Doom)

    08:36 Engineering Challenges and Human Intelligence

    15:09 The Role of Regulation and Authoritarianism in AI Control

    29:44 Engineering Complexity: A Case Study from Oxide Computer

    40:06 The Value of Team Collaboration

    46:13 Human Attributes in Engineering

    49:33 AI's Potential in Engineering

    58:23 Existential Risks and AI Predictions

    Bryan’s original talk:

    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg

    PauseAI, the volunteer organization I’m part of: https://pauseai.info

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Thanks to everyone who participated in the live Q&A on Friday!

    The topics covered include advice for computer science students, working in AI trustworthiness, what good AI regulation looks like, the implications of the $500B Stargate project, the public's gradual understanding of AI risks, the impact of minor AI disasters, and the philosophy of consciousness.

    00:00 Advice for Comp Sci Students

    01:14 The $500B Stargate Project

    02:36 Eliezer's Recent Podcast

    03:07 AI Safety and Public Policy

    04:28 AI Disruption and Politics

    05:12 DeepSeek and AI Advancements

    06:54 Human vs. AI Intelligence

    14:00 Consciousness and AI

    24:34 Dark Forest Theory and AI

    35:31 Investing in Yourself

    42:42 Probability of Aliens Saving Us from AI

    43:31 Brain-Computer Interfaces and AI Safety

    46:19 Debating AI Safety and Human Intelligence

    48:50 Nefarious AI Activities and Satellite Surveillance

    49:31 Pliny the Prompter Jailbreaking AI

    50:20 Can’t vs. Won’t Destroy the World

    51:15 How to Make AI Risk Feel Present

    54:27 Keeping Doom Arguments On Track

    57:04 Game Theory and AI Development Race

    01:01:26 Mental Model of Average Non-Doomer

    01:04:58 Is Liron a Strict Bayesian and Utilitarian?

    01:09:48 Can We Rename “Doom Debates”

    01:12:34 The Role of AI Trustworthiness

    01:16:48 Minor AI Disasters

    01:18:07 Most Likely Reason Things Go Well

    01:21:00 Final Thoughts

    Show Notes

    Previous post where people submitted questions: https://lironshapira.substack.com/p/ai-twitter-beefs-3-marc-andreessen

    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

    PauseAI, the volunteer organization I’m part of: https://pauseai.info

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • It’s time for AI Twitter Beefs #3:

    00:00 Introduction

    01:27 Marc Andreessen vs. Sam Altman

    09:15 Mark Zuckerberg

    35:40 Martin Casado

    47:26 Gary Marcus vs. Miles Brundage Bet

    58:39 Scott Alexander’s AI Art Turing Test

    01:11:29 Roon

    01:16:35 Stephen McAleer

    01:22:25 Emmett Shear

    01:37:20 OpenAI’s “Safety”

    01:44:09 Naval Ravikant vs. Eliezer Yudkowsky

    01:56:03 Comic Relief

    01:58:53 Final Thoughts

    Show Notes

    Upcoming Live Q&A: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a-ask

    “Make Your Beliefs Pay Rent In Anticipated Experiences” by Eliezer Yudkowsky on LessWrong: https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences

    Scott Alexander’s AI Art Turing Test: https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing

    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

    PauseAI, the volunteer organization I’m part of: https://pauseai.info

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Effective Altruism has been a controversial topic on social media, so today my guest and I are going to settle the question once and for all: Is it good or bad?

    Jonas Sota is a Software Engineer at Rippling with a BA in Philosophy from UC Berkeley. He’s been observing the Effective Altruism (EA) movement in the San Francisco Bay Area for over a decade… and he’s not a fan.

    00:00 Introduction

    01:22 Jonas’s Criticisms of EA

    03:23 Recoil Exaggeration

    05:53 Impact of Malaria Nets

    10:48 Local vs. Global Altruism

    13:02 Shrimp Welfare

    25:14 Capitalism vs. Charity

    33:37 Cultural Sensitivity

    34:43 The Impact of Direct Cash Transfers

    37:23 Long-Term Solutions vs. Immediate Aid

    42:21 Charity Budgets

    45:47 Prioritizing Local Issues

    50:55 The EA Community

    59:34 Debate Recap

    01:03:57 Announcements

    Show Notes

    Jonas’s Instagram: @jonas_wanders

    Will MacAskill’s famous book, Doing Good Better: https://www.effectivealtruism.org/doing-good-better

    Scott Alexander’s excellent post about the people he met at EA Global: https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/

    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

    PauseAI, the volunteer organization I’m part of: https://pauseai.info

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Matthew Adelstein, better known as Bentham’s Bulldog on Substack, is a philosophy major at the University of Michigan and an up-and-coming public intellectual.

    He’s a rare combination: Effective Altruist, Bayesian, non-reductionist, theist.

    Our debate covers reductionism, evidence for god, the implications of a fine-tuned universe, moral realism, and AI doom.

    00:00 Introduction

    02:56 Matthew’s Research

    11:29 Animal Welfare

    16:04 Reductionism vs. Non-Reductionism Debate

    39:53 The Decline of God in Modern Discourse

    46:23 Religious Credences

    50:24 Pascal's Wager and Christianity

    56:13 Are Miracles Real?

    01:10:37 Fine-Tuning Argument for God

    01:28:36 Cellular Automata

    01:34:25 Anthropic Principle

    01:51:40 Mathematical Structures and Probability

    02:09:35 Defining God

    02:18:20 Moral Realism

    02:21:40 Orthogonality Thesis

    02:32:02 Moral Philosophy vs. Science

    02:45:51 Moral Intuitions

    02:53:18 AI and Moral Philosophy

    03:08:50 Debate Recap

    03:12:20 Show Updates

    Show Notes

    Matthew’s Substack: https://benthams.substack.com

    Matthew's Twitter: https://x.com/BenthamsBulldog

    Matthew's YouTube: https://www.youtube.com/@deliberationunderidealcond5105

    Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg

    PauseAI, the volunteer organization I’m part of — https://pauseai.info/

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Prof. Kenneth Stanley is a former Research Science Manager at OpenAI, where he led the Open-Endedness Team from 2020 to 2022. Before that, he was a Professor of Computer Science at the University of Central Florida and the head of Core AI Research at Uber. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective, which argues that as soon as you create an objective, you ruin your ability to reach it.

    In this episode, I debate Ken’s claim that superintelligent AI *won’t* be guided by goals, and then we compare our views on AI doom.

    00:00 Introduction

    00:45 Ken’s Role at OpenAI

    01:53 “Open-Endedness” and “Divergence”

    09:32 Open-Endedness of Evolution

    21:16 Human Innovation and Tech Trees

    36:03 Objectives vs. Open Endedness

    47:14 The Concept of Optimization Processes

    57:22 What’s Your P(Doom)™

    01:11:01 Interestingness and the Future

    01:20:14 Human Intelligence vs. Superintelligence

    01:37:51 Instrumental Convergence

    01:55:58 Mitigating AI Risks

    02:04:02 The Role of Institutional Checks

    02:13:05 Exploring AI's Curiosity and Human Survival

    02:20:51 Recapping the Debate

    02:29:45 Final Thoughts

    SHOW NOTES

    Ken’s home page: https://www.kenstanley.net/

    Ken’s Wikipedia: https://en.wikipedia.org/wiki/Kenneth_Stanley

    Ken’s Twitter: https://x.com/kenneth0stanley

    Ken’s PicBreeder paper: https://wiki.santafe.edu/images/1/1e/Secretan_ecj11.pdf

    Ken's book, Why Greatness Cannot Be Planned: The Myth of the Objective: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237

    The Rocket Alignment Problem by Eliezer Yudkowsky: https://intelligence.org/2018/10/03/rocket-alignment/

    ---

    Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg

    PauseAI, the volunteer organization I’m part of — https://pauseai.info/

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • OpenAI just announced o3 and smashed a bunch of benchmarks (ARC-AGI, SWE-bench, FrontierMath)!

    A new Anthropic and Redwood Research paper says Claude is resisting its developers’ attempts to retrain its values!

    What’s the upshot — what does it all mean for P(doom)?

    00:00 Introduction

    01:45 o3’s architecture and benchmarks

    06:08 “Scaling is hitting a wall” 🤡

    13:41 How many new architectural insights before AGI?

    20:28 Negative update for interpretability

    31:30 Intellidynamics — ***KEY CONCEPT***

    33:20 Nuclear control rod analogy

    36:54 Sam Altman's misguided perspective

    42:40 Claude resisted retraining from good to evil

    44:22 What is good corrigibility?

    52:42 Claude’s incorrigibility doesn’t surprise me

    55:00 Putting it all in perspective

    ---

    SHOW NOTES

    Scott Alexander’s analysis of the Claude incorrigibility result: https://www.astralcodexten.com/p/claude-fights-back and https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude

    Zvi Mowshowitz’s analysis of the Claude incorrigibility result: https://thezvi.wordpress.com/2024/12/24/ais-will-increasingly-fake-alignment/

    ---

    PauseAI Website: https://pauseai.info

    PauseAI Discord: https://discord.gg/2XXWXvErfA

    Say hi to me in the #doom-debates-podcast channel!

    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • This week Liron was interviewed by Gaëtan Selle on @the-flares about AI doom.

    Cross-posted from their channel with permission.

    Original source: https://www.youtube.com/watch?v=e4Qi-54I9Zw

    0:00:02 Guest Introduction

    0:01:41 Effective Altruism and Transhumanism

    0:05:38 Bayesian Epistemology and Extinction Probability

    0:09:26 Defining Intelligence and Its Dangers

    0:12:33 The Key Argument for AI Apocalypse

    0:18:51 AI’s Internal Alignment

    0:24:56 What Will AI's Real Goal Be?

    0:26:50 The Train of Apocalypse

    0:31:05 Among Intellectuals, Who Rejects the AI Apocalypse Arguments?

    0:38:32 The Shoggoth Meme

    0:41:26 Possible Scenarios Leading to Extinction

    0:50:01 The Only Solution: A Pause in AI Research?

    0:59:15 The Risk of Violence from AI Risk Fundamentalists

    1:01:18 What Will General AI Look Like?

    1:05:43 Sci-Fi Works About AI

    1:09:21 The Rationale Behind Cryonics

    1:12:55 What Does a Positive Future Look Like?

    1:15:52 Are We Living in a Simulation?

    1:18:11 Many Worlds in Quantum Mechanics Interpretation

    1:20:25 Ideal Future Podcast Guest for Doom Debates

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Roon is a member of the technical staff at OpenAI. He’s a highly respected voice on tech Twitter, despite being a pseudonymous cartoon avatar account. In late 2021, he invented the terms “shape rotator” and “wordcel” to refer, roughly, to visual/spatial/mathematical intelligence vs. verbal intelligence. He is simultaneously a serious thinker, a builder, and a shitposter.

     I'm excited to learn more about Roon, his background, his life, and of course, his views about AI and existential risk.

    00:00 Introduction

    02:43 Roon’s Quest and Philosophies

    22:32 AI Creativity

    30:42 What’s Your P(Doom)™

    54:40 AI Alignment

    57:24 Training vs. Production

    01:05:37 ASI

    01:14:35 Goal-Oriented AI and Instrumental Convergence

    01:22:43 Pausing AI

    01:25:58 Crux of Disagreement

    01:27:55 Dogecoin

    01:29:13 Doom Debates’s Mission

    Show Notes

    Follow Roon: https://x.com/tszzl

    For Humanity: An AI Safety Podcast with John Sherman — https://www.youtube.com/@ForHumanityPodcast

    Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg

    PauseAI, the volunteer organization I’m part of — https://pauseai.info/

    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.

    Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and making complex insights from his field accessible to a wider readership via his blog.

    Scott is one of my biggest intellectual influences. His famous Who Can Name The Bigger Number essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.

    Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.

    Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: they’re laughably clueless about how to achieve any measure of safety, yet instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.

    00:00 Introducing Scott Aaronson

    02:17 Scott's Recruitment by OpenAI

    04:18 Scott's Work on AI Safety at OpenAI

    08:10 Challenges in AI Alignment

    12:05 Watermarking AI Outputs

    15:23 The State of AI Safety Research

    22:13 The Intractability of AI Alignment

    34:20 Policy Implications and the Call to Pause AI

    38:18 Out-of-Distribution Generalization

    45:30 Moral Worth Criterion for Humans

    51:49 Quantum Mechanics and Human Uniqueness

    01:00:31 Quantum No-Cloning Theorem

    01:12:40 Scott Is Almost An Accelerationist?

    01:18:04 Geoffrey Hinton's Proposal for Analog AI

    01:36:13 The AI Arms Race and the Need for Regulation

    01:39:41 Scott Aaronson’s Thoughts on Sam Altman

    01:42:58 Scott Rejects the Orthogonality Thesis

    01:46:35 Final Thoughts

    01:48:48 Lethal Intelligence Clip

    01:51:42 Outro

    Show Notes

    Scott’s Interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0

    Scott’s Blog: https://scottaaronson.blog

    PauseAI Website: https://pauseai.info

    PauseAI Discord: https://discord.gg/2XXWXvErfA

    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.

    Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.

    The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.

    00:00 Introduction

    02:54 Essentially N-Gram Models?

    10:31 The Manhole Cover Question

    20:54 Reasoning vs. Approximate Retrieval

    47:03 Explaining Jokes

    53:21 Caesar Cipher Performance

    01:10:44 Creativity vs. Reasoning

    01:33:37 Reasoning By Analogy

    01:48:49 Synthetic Data

    01:53:54 The ARC Challenge

    02:11:47 Correctness vs. Style

    02:17:55 AIs Becoming More Robust

    02:20:11 Block Stacking Problems

    02:48:12 PlanBench and Future Predictions

    02:58:59 Final Thoughts

    Show Notes

    Rao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A

    Rao’s Twitter: https://x.com/rao2z

    PauseAI Website: https://pauseai.info

    PauseAI Discord: https://discord.gg/2XXWXvErfA

    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.

    Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in a 99.999% P(Doom).

    00:00 Nethys Introduction

    04:47 The Vulnerable World Hypothesis

    10:01 What’s Your P(Doom)™

    14:04 Nethys’s Banger YouTube Comment

    26:53 Living with High P(Doom)

    31:06 Losing Access to Distant Stars

    36:51 Defining AGI

    39:09 The Convergence of AI Models

    47:32 The Role of “Unlicensed” Thinkers

    52:07 The PauseAI Movement

    58:20 Lethal Intelligence Video Clip

    Show Notes

    Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy

    PauseAI Website: https://pauseai.info

    PauseAI Discord: https://discord.gg/2XXWXvErfA

    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.

    00:00 Fraser Cain’s Background and Interests

    05:03 What’s Your P(Doom)™

    07:05 Our Vulnerable World

    15:11 Don’t Look Up

    22:18 Cosmology and the Search for Alien Life

    31:33 Stars = Terrorists

    39:03 The Great Filter and the Fermi Paradox

    55:12 Grabby Aliens Hypothesis

    01:19:40 Life Around Red Dwarf Stars?

    01:22:23 Epistemology of Grabby Aliens

    01:29:04 Multiverses

    01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation

    01:47:25 Simulation Hypothesis

    01:51:25 Final Thoughts

    SHOW NOTES

    Fraser’s YouTube channel: https://www.youtube.com/@frasercain

    Universe Today (space and astronomy news): https://www.universetoday.com/

    Max Tegmark’s book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256

    Robin Hanson’s ideas:

    Grabby Aliens: https://grabbyaliens.com

    The Great Filter: https://en.wikipedia.org/wiki/Great_Filter

    Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml

    ---

    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for Part II! This time we’re going straight to debating my favorite topic: AI doom.

    00:00 Introduction

    02:23 High-Level AI Doom Argument

    17:06 How Powerful Could Intelligence Be?

    22:34 “Knowledge Creation”

    48:33 “Creativity”

    54:57 Stand-Up Comedy as a Test for AI

    01:12:53 Vaden & Ben’s Goalposts

    01:15:00 How to Change Liron’s Mind

    01:20:02 LLMs are Stochastic Parrots?

    01:34:06 Tools vs. Agents

    01:39:51 Instrumental Convergence and AI Goals

    01:45:51 Intelligence vs. Morality

    01:53:57 Mainline Futures

    02:16:50 Lethal Intelligence Video

    Show Notes

    Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod

    Recommended playlists from their podcast:

    * The Bayesian vs Popperian Epistemology Series

    * The Conjectures and Refutations Series

    Vaden’s Twitter: https://x.com/vadenmasrani

    Ben’s Twitter: https://x.com/BennyChugg

    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human Compatible AI, and the co-founder of a new startup called Healthcare Agents.

    Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of “industrial dehumanization”.

    00:00 Introduction

    01:43 Dr. Critch’s Perspective on LessWrong Sequences

    06:45 Bayesian Epistemology

    15:34 Dr. Critch's Time at MIRI

    18:33 What’s Your P(Doom)™

    26:35 Doom Scenarios

    40:38 AI Timelines

    43:09 Defining “AGI”

    48:27 Superintelligence

    53:04 The Speed Limit of Intelligence

    01:12:03 The Obedience Problem in AI

    01:21:22 Artificial Superintelligence and Human Extinction

    01:24:36 Global AI Race and Geopolitics

    01:34:28 Future Scenarios and Human Relevance

    01:48:13 Extinction by Industrial Dehumanization

    01:58:50 Automated Factories and Human Control

    02:02:35 Global Coordination Challenges

    02:27:00 Healthcare Agents

    02:35:30 Final Thoughts

    ---

    Show Notes

    Dr. Critch’s LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai

    Dr. Critch’s Website: https://acritch.com/

    Dr. Critch’s Twitter: https://twitter.com/AndrewCritchPhD

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • It’s time for AI Twitter Beefs #2:

    00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)

    11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman

    18:10 Geoffrey Hinton vs. OpenAI & Meta

    25:14 Samuel Hammond vs. Liron

    30:26 Yann LeCun vs. Eliezer Yudkowsky

    37:13 Roon vs. Eliezer Yudkowsky

    41:37 Tyler Cowen vs. AI Doomers

    52:54 David Deutsch vs. Liron

    Twitter people referenced:

    * Jack Clark: https://x.com/jackclarkSF

    * Holly Elmore: https://x.com/ilex_ulmus

    * PauseAI US: https://x.com/PauseAIUS

    * Geoffrey Hinton: https://x.com/GeoffreyHinton

    * Samuel Hammond: https://x.com/hamandcheese

    * Yann LeCun: https://x.com/ylecun

    * Eliezer Yudkowsky: https://x.com/esyudkowsky

    * Roon: https://x.com/tszzl

    * Beff Jezos: https://x.com/basedbeffjezos

    * Carl Feynman: https://x.com/carl_feynman

    * Tyler Cowen: https://x.com/tylercowen

    * David Deutsch: https://x.com/DavidDeutschOxf

    Show Notes

    Holly Elmore’s EA forum post about scouts vs. soldiers

    Manifund info & donation page for PauseAI US: https://manifund.org/projects/pauseai-us-2025-through-q2

    PauseAI.info - join the Discord and find me in the #doom-debates channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are joining me to debate Bayesian vs. Popperian epistemology.

    I’m on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.

    We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes.

    The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.

    00:00 Introducing Vaden and Ben

    02:51 Setting the Stage: Epistemology and AI Doom

    04:50 What’s Your P(Doom)™

    13:29 Popperian vs. Bayesian Epistemology

    31:09 Engineering and Hypotheses

    38:01 Solomonoff Induction

    45:21 Analogy to Mathematical Proofs

    48:42 Popperian Reasoning and Explanations

    54:35 Arguments Against Bayesianism

    58:33 Against Probability Assignments

    01:21:49 Popper’s Definition of “Content”

    01:31:22 Heliocentric Theory Example

    01:31:34 “Hard to Vary” Explanations

    01:44:42 Coin Flipping Example

    01:57:37 Expected Value

    02:12:14 Prediction Market Calibration

    02:19:07 Futarchy

    02:29:14 Prediction Markets as AI Lower Bound

    02:39:07 A Test for Prediction Markets

    02:45:54 Closing Thoughts

    Show Notes

    Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod

    Vaden’s Twitter: https://x.com/vadenmasrani

    Ben’s Twitter: https://x.com/BennyChugg

    Bayesian reasoning: https://en.wikipedia.org/wiki/Bayesian_inference

    Karl Popper: https://en.wikipedia.org/wiki/Karl_Popper

    Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/

    Vaden’s disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749

    Vaden’s referenced post about predictions being uncalibrated > 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations

    Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/

    Sources for the claim that superforecasters gave a P(doom) below 1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/ and https://www.astralcodexten.com/p/the-extinction-tournament

    Vaden’s Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
  • Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade.

    If you haven't been following all the urgent warnings, I'm here to bring you up to speed.

    * Human-level AI is coming soon

    * It’s an existential threat to humanity

    * The situation calls for urgent action

    Listen to this 15-minute intro to get the lay of the land.

    Then follow these links to learn more and see how you can help:

    * The Compendium

    A longer written introduction to AI doom by Connor Leahy et al.

    * AGI Ruin — A list of lethalities

    A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity

    * AISafety.info

    A catalogue of AI doom arguments and responses to objections

    * PauseAI.info

    The largest volunteer org focused on lobbying world governments to pause development of superintelligent AI

    * PauseAI Discord

    Chat with PauseAI members, see a list of projects and get involved

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com