
Doom Debates!

Liron Shapira

141 episodes

  • Doom Debates!

    How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!

    10/03/2026 | 1h 28 mins.
    Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon.
    Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.
    Timestamps
    00:00:00 — Cold Open
    00:00:48 — Welcoming Back the Returning Champion
    00:02:38 — Research Update: What's New in The Last 6 Months
    00:04:31 — The Rise of AI Agents
    00:07:49 — What's Your P(Doom)?™
    00:13:42 — "Brain-Like AGI": The Next Generation of AI
    00:17:01 — Can LLMs Ever Match the Human Brain?
    00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
    00:36:12 — Country of Geniuses in a Data Center
    00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
    00:54:15 — Post-Training & RLVR — A "Thin Layer" of Real Intelligence
    01:02:32 — Consequentialism and the Path to Superintelligence
    01:17:02 — Airplanes vs. Rockets: An Analogy for AI
    01:24:33 — FOOM and Recursive Self-Improvement
    Links
    Steven Byrnes’ Website & Research — https://sjbyrnes.com/
    Steve’s X — https://x.com/steve47285
    Astera Institute — https://astera.org/
    “Why We Should Expect Ruthless Sociopath ASI” — https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
    Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
    Steve on LessWrong — https://www.lesswrong.com/users/steve2152
    AI 2027 — Scenario Timeline — https://ai-2027.com/
    Part 1: “The Man Who Might SOLVE AI Alignment” — https://www.youtube.com/watch?v=_ZRUq3VEAc0
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe
  • Doom Debates!

    Q&A — Claude Code's Impact, Anthropic vs The Pentagon, Roko('s Basilisk) Returns + Liron Updates His Views!

    05/03/2026 | 2h 19 mins.
    Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession and the Anthropic/Pentagon showdown, then debate the finer details of wireheading.
    I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.
    Timestamps
    00:00:00 — Cold Open
    00:00:56 — Welcome to the Livestream & Taking Questions from Chat
    00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
    00:18:30 — The Good Case Scenario
    00:26:00 — Hugh Chungus Joins the Stream
    00:30:54 — Producer Ori, Liron's Recent Alignment Updates
    00:43:47 — We're In an Era of Centaurs
    00:47:40 — Noah Smith's Updates on AGI and Alignment
    00:48:44 — Co Co Chats Cybersecurity
    00:57:32 — The Attacker's Advantage in Offense/Defense Balance
    01:02:55 — Anthropic vs The Pentagon
    01:06:20 — "We're Getting Frog Boiled"
    01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
    01:25:00 — A Caller Backs the Penrose Argument
    01:34:01 — Greyson Dials In
    01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
    02:05:15 — More Q&A with Chat
    02:14:26 — Closing Thoughts
    Links
    * Liron on X — https://x.com/liron
    * AI 2027 — https://ai-2027.com/
    * “Good Luck, Have Fun, Don’t Die” (film) — https://www.imdb.com/title/tt38301748/
    * “The AI Doc” (film) — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist
  • Doom Debates!

    AI Will Take Our Jobs But SPARE Our Lives — Top AI Professor Moshe Vardi (Rice University)

    03/03/2026 | 1h 7 mins.
    Professor Moshe Vardi thinks AI will kill us with kindness by automating away our jobs. I think it will just kill us for real.
    Who’s right? Tune into this episode and decide where you get off the Doom Train™.
    Some highlights of Professor Vardi’s impressive CV:
    * University Professor at Rice — a rare distinction that lets him teach in any department.
    * 65,000+ citations, an H-index above 100, and nearly 50 years spent mechanizing reasoning, which makes him one of the most decorated computer scientists alive.
    * He ran the ACM’s flagship publication for a decade, and now bridges CS and policy at Rice’s Baker Institute.
    * He has been sounding the alarm on AI-driven job automation for over ten years.
    * He signed the 2023 AI extinction risk statement, and calls himself “part of the resistance.”
    Links
    * Moshe Vardi’s Wikipedia — https://en.wikipedia.org/wiki/Moshe_Vardi
    * Moshe Vardi, Rice University — https://profiles.rice.edu/faculty/moshe-y-vardi
    * Baker Institute for Public Policy — https://www.bakerinstitute.org/
    * Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World — https://www.amazon.com/Deep-Utopia-Life-Meaning-Solved/dp/1646871642
    * Joseph Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence — https://www.amazon.com/Robot-Proof-Higher-Education-Artificial-Intelligence/dp/0262535971
    Timestamps
    00:00:00 — Cold Open
    00:00:54 — Introducing Professor Vardi
    00:02:01 — Professor Vardi’s Academic Focus: CS, AI, & Public Policy
    00:07:18 — What’s Your P(Doom)?™
    00:12:28 — We’re Not Doomed, “We’re Screwed”
    00:16:44 — AI’s Impact on Meaning & Purpose
    00:27:47 — Let’s Ride the Doom Train™
    00:35:43 — The Future of Jobs
    00:39:24 — A Country of Geniuses in a Data Center
    00:41:04 — Corporations as Superintelligence
    00:45:49 — Agency, Consciousness, and the Limits of AI
    00:50:07 — The Mad Scientist Scenario
    00:54:02 — Could a Data Center of Geniuses Destroy Humanity?
    01:03:13 — The WALL-E Meme and Fun Theory
    01:04:01 — Why Professor Vardi Signed the AI Extinction Risk Statement
    01:06:02 — Wrap-Up + 1 Way Ticket to Doom
  • Doom Debates!

    Destiny's Fans Challenged Me to an AI Doom Debate

    26/02/2026 | 38 mins.
    Fresh off my debate with Destiny, his Discord community invited me into their voice chat to talk about AI doom. Just like the man himself, his fans are sharp.
    Let's find out where they get off The Doom Train™.
    My recent debate with Destiny — https://www.youtube.com/watch?v=rNgffLZTeWw
    Timestamps
    00:00:00 — Cold Open
    00:00:54 — Liron Joins Destiny’s Discord
    00:02:21 — The AI Doom Premise
    00:03:27 — Defining Intelligence and Is An LLM Really AI?
    00:07:12 — Will AI Become Uncontrollable?
    00:12:44 — The AI Alignment Problem
    00:24:11 — The Difficulty of Pausing AI
    00:26:01 — AI vs The Human Brain
    00:32:41 — Future AI Capabilities, Steering Toward Goals, & Philosophical Disagreements
  • Doom Debates!

    Doomsday Clock Physicist Warns AI Is Major Threat to Humanity!

    24/02/2026 | 1h 36 mins.
    Renowned scientists just set The Doomsday Clock closer than ever to midnight.
    I agree our collective doom seems imminent, but why do the clocksetters point to climate change—not rogue AI—as an existential threat?
    UChicago Professor Daniel Holz, a top physicist who sets the Doomsday Clock, joins me to debate society’s top existential risks.
    00:00:00 — Cold Open 
    00:00:51 — Introducing Professor Holz
    00:02:08 — The Doomsday Clock is at 85 Seconds to Midnight! 
    00:04:37 — What's Your P(Doom)?™ 
    00:08:09 — Making A Probability of Doomsday, or P(Doom), Equation 
    00:12:07 — How We All Die: Nuclear vs Climate vs AI 
    00:21:08 — Nuclear Close Calls from The Cold War 
    00:28:38 — History of The Doomsday Clock 
    00:30:18 — The Threat of Biological Risks Like Mirror Life 
    00:33:40 — Professor Holz’s Position on AI Misalignment Risk 
    00:44:49 — Does The Doomsday Clock Give Short Shrift to AI Misalignment Risk? 
    00:59:09 — Why Professor Holz Founded UChicago’s Existential Risk Laboratory (XLab) 
    01:06:22 — The State of Academic Research on AI Safety & Existential Risks 
    01:12:32 — The Case for Pausing AI Development 
    01:17:11 — Debate: Is Climate Change an Existential Threat? 
    01:28:48 — Call to Action: How to Reduce Our Collective Threat
    Links
    Professor Daniel Holz’s Wikipedia — https://en.wikipedia.org/wiki/Daniel_Holz
    XLab (Existential Risk Laboratory) — https://xrisk.uchicago.edu/
    2026 Doomsday Clock Statement — https://thebulletin.org/doomsday-clock/2026-statement/
    The Bulletin of the Atomic Scientists (nonprofit that produces the clock) — https://thebulletin.org/
    UChicago Magazine features Prof. Holz’s class “Are We Doomed?” — https://mag.uchicago.edu/science-medicine/are-we-doomed
    The Doomsday Machine by Daniel Ellsberg — https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704
    If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares — https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640
    Learn more about pausing frontier AI development from PauseAI — https://pauseai.info

About Doom Debates!

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com