
Doom Debates

Liron Shapira

Available Episodes

5 of 95
  • Tech CTO Has 99.999% P(Doom) — “This is my bugout house” — Louis Berman, AI X-Risk Activist
    Louis Berman is a polymath who brings unique credibility to AI doom discussions. He's been coding AI for 25 years, served as CTO of major tech companies, recorded the first visual sighting of what became the dwarf planet Eris, and has now pivoted to full-time AI risk activism. He's lobbied over 60 politicians across multiple countries for PauseAI and authored two books on existential risk.

    Louis and I are both baffled by the calm, measured tone that dominates AI safety discourse. As Louis puts it: "No one is dealing with this with emotions. No one is dealing with this as, oh my God, if they're right. Isn't that the scariest thing you've ever heard about?"

    Louis isn't just talking – he's acting on his beliefs. He just bought a "bug out house" in rural Maryland, though he's refreshingly honest that this isn't about long-term survival. He expects AI doom to unfold over months or years rather than Eliezer's instant scenario, and he's trying to buy his family weeks of additional time while avoiding starvation during societal collapse.

    He's spent extensive time in congressional offices and has concrete advice about lobbying techniques. His key insight: politicians' staffers consistently claim "if just five people called about AGI, it would move the needle". We need more people like Louis!

    Timestamps
    * 00:00:00 - Cold Open: The Missing Emotional Response
    * 00:00:31 - Introducing Louis Berman: Polymath Background and Donor Disclosure
    * 00:03:40 - The Anodyne Reaction: Why No One Seems Scared
    * 00:07:37 - P-Doom Calibration: Gary Marcus and the 1% Problem
    * 00:11:57 - The Bug Out House: Prepping for Slow Doom
    * 00:13:44 - Being Amazed by LLMs While Fearing ASI
    * 00:18:41 - What’s Your P(Doom)™
    * 00:25:42 - Bayesian Reasoning vs. Heart of Hearts Beliefs
    * 00:32:10 - Non-Doom Scenarios and International Coordination
    * 00:40:00 - The Missing Mood: Where's the Emotional Response?
    * 00:44:17 - Prepping Philosophy: Buying Weeks, Not Years
    * 00:52:35 - Doom Scenarios: Slow Takeover vs. Instant Death
    * 01:00:43 - Practical Activism: Lobbying Politicians and Concrete Actions
    * 01:16:44 - Where to Find Louis's Books and Final Wrap-up
    * 01:18:17 - Outro: Super Fans and Mission Partners

    Links
    * Louis’s website — https://xriskbooks.com — Buy his books!
    * ControlAI’s form to easily contact your representative and make a difference — https://controlai.com/take-action/usa — Highly recommended!
    * Louis’s interview about activism with John Sherman and Felix De Simone — https://www.youtube.com/watch?v=Djd2n4cufTM
    * If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com

    Become a Mission Partner!
    Want to meaningfully help the show’s mission (raise awareness of AI x-risk & raise the level of debate around this crucial subject) become reality? Donate $1,000+ to the show (no upper limit) and I’ll invite you to the private Discord channel. Email me at [email protected] if you have questions or want to donate crypto.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    1:21:05
  • Rob Miles, Top AI Safety Educator: Humanity Isn’t Ready for Superintelligence!
    Rob Miles is the most popular AI safety educator on YouTube, with millions of views across his videos explaining AI alignment to general audiences. He dropped out of his PhD in 2011 to focus entirely on AI safety communication – a prescient career pivot that positioned him as one of the field's most trusted voices over a decade before ChatGPT made AI risk mainstream.

    Rob sits firmly in the 10-90% P(Doom) range, though he admits his uncertainty is "hugely variable" and depends heavily on how humanity responds to the challenge. What makes Rob particularly compelling is the contrast between his characteristic British calm and his deeply serious assessment of our situation. He's the type of person who can explain existential risk with the measured tone of a nature documentarian while internally believing we're probably headed toward catastrophe.

    Rob has identified several underappreciated problems, particularly around alignment stability under self-modification. He argues that even if we align current AI systems, there's no guarantee their successors will inherit those values – a discontinuity problem that most safety work ignores. He's also highlighted the "missing mood" in AI discourse, where people discuss potential human extinction with the emotional register of an academic conference rather than an emergency.

    We explore Rob's mainline doom scenario involving recursive self-improvement, why he thinks there's enormous headroom above human intelligence, and his views on everything from warning shots to the Malthusian dynamics that might govern a post-AGI world. Rob makes a fascinating case that we may be the "least intelligent species capable of technological civilization" – which has profound implications for what smarter systems might achieve.

    Our key disagreement centers on strategy: Rob thinks some safety-minded people should work inside AI companies to influence them from within, while I argue this enables "tractability washing" that makes the companies look responsible while they race toward potentially catastrophic capabilities. Rob sees it as necessary harm reduction; I see it as providing legitimacy to fundamentally reckless enterprises.

    The conversation also tackles a meta-question about communication strategy. Rob acknowledges that his measured, analytical approach might be missing something crucial – that perhaps someone needs to be "running around screaming" to convey the appropriate emotional urgency. It's a revealing moment from someone who's spent over a decade trying to wake people up to humanity's most important challenge, only to watch the world continue treating it as an interesting intellectual puzzle rather than an existential emergency.

    Timestamps
    * 00:00:00 - Cold Open
    * 00:00:28 - Introducing Rob Miles
    * 00:01:42 - Rob's Background and Childhood
    * 00:02:05 - Being Aspie
    * 00:04:50 - Less Wrong Community and "Normies"
    * 00:06:24 - Chesterton's Fence and Cassava Root
    * 00:09:30 - Transition to AI Safety Research
    * 00:11:52 - Discovering Communication Skills
    * 00:15:36 - YouTube Success and Channel Growth
    * 00:16:46 - Current Focus: Technical vs Political
    * 00:18:50 - Nuclear Near-Misses and Y2K
    * 00:21:55 - What’s Your P(Doom)™
    * 00:27:31 - Uncertainty About Human Response
    * 00:31:04 - Views on Yudkowsky and AI Risk Arguments
    * 00:42:07 - Mainline Catastrophe Scenario
    * 00:47:32 - Headroom Above Human Intelligence
    * 00:54:58 - Detailed Doom Scenario
    * 01:01:07 - Self-Modification and Alignment Stability
    * 01:17:26 - Warning Shots Problem
    * 01:20:28 - Moving the Overton Window
    * 01:25:59 - Protests and Political Action
    * 01:33:02 - The Missing Mood Problem
    * 01:40:28 - Raising Society's Temperature
    * 01:44:25 - "If Anyone Builds It, Everyone Dies"
    * 01:51:05 - Technical Alignment Work
    * 01:52:00 - Working Inside AI Companies
    * 01:57:38 - Tractability Washing at AI Companies
    * 02:05:44 - Closing Thoughts
    * 02:08:21 - How to Support Doom Debates: Become a Mission Partner

    Links
    * Rob’s YouTube channel — https://www.youtube.com/@RobertMilesAI
    * Rob’s Twitter — https://x.com/robertskmiles
    * Rational Animations (another great YouTube channel, narrated by Rob) — https://www.youtube.com/RationalAnimations

    Become a Mission Partner!
    Want to meaningfully help the show’s mission (raise awareness of AI x-risk & raise the level of debate around this crucial subject) become reality? Donate $1,000+ to the show (no upper limit) and I’ll invite you to the private Discord channel. Email me at [email protected] if you have questions or want to donate crypto.

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    2:11:50
  • Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?
    Vitalik Buterin is the founder of Ethereum, the world's second-largest cryptocurrency by market cap, currently valued at around $500 billion. But beyond revolutionizing blockchain technology, Vitalik has become one of the most thoughtful voices on AI safety and existential risk.

    He's donated over $665 million to pandemic prevention and other causes, and has a 12% P(Doom) – putting him squarely in what I consider the "sane zone" for AI risk assessment. What makes Vitalik particularly interesting is that he's both a hardcore techno-optimist who built one of the most successful decentralized systems ever created, and someone willing to seriously consider AI regulation and coordination mechanisms.

    Vitalik coined the term "d/acc" – defensive, decentralized, democratic, differential acceleration – as a middle path between uncritical AI acceleration and total pause scenarios. He argues we need to make the world more like Switzerland (defensible, decentralized) and less like the Eurasian steppes (vulnerable to conquest).

    We dive deep into the tractability of AI alignment, whether current approaches like d/acc can actually work when superintelligence arrives, and why he thinks a pluralistic world of competing AIs might be safer than a single aligned superintelligence. We also explore his vision for human-AI merger through brain-computer interfaces and uploading.

    The crux of our disagreement is that I think we're heading for a "plants vs. animals" scenario where AI will simply operate on timescales we can't match, while Vitalik believes we can maintain agency through the right combination of defensive technologies and institutional design.

    Finally, we tackle the discourse itself – I ask Vitalik to debunk the common ad hominem attacks against AI doomers, from "it's just a fringe position" to "no real builders believe in doom." His responses carry weight given his credibility as both a successful entrepreneur and someone who's maintained intellectual honesty throughout his career.

    Timestamps
    * 00:00:00 - Cold Open
    * 00:00:37 - Introducing Vitalik Buterin
    * 00:02:14 - Vitalik's altruism
    * 00:04:36 - Rationalist community influence
    * 00:06:30 - Opinion of Eliezer Yudkowsky and MIRI
    * 00:09:00 - What’s Your P(Doom)™
    * 00:24:42 - AI timelines
    * 00:31:33 - AI consciousness
    * 00:35:01 - Headroom above human intelligence
    * 00:48:56 - Techno optimism discussion
    * 00:58:38 - e/acc: Vibes-based ideology without deep arguments
    * 01:02:49 - d/acc: Defensive, decentralized, democratic acceleration
    * 01:11:37 - How plausible is d/acc?
    * 01:20:53 - Why libertarian acceleration can paradoxically break decentralization
    * 01:25:49 - Can we merge with AIs?
    * 01:35:10 - Military AI concerns: How war accelerates dangerous development
    * 01:42:26 - The intractability question
    * 01:51:10 - Anthropic and tractability-washing the AI alignment problem
    * 02:00:05 - The state of AI x-risk discourse
    * 02:05:14 - Debunking ad hominem attacks against doomers
    * 02:23:41 - Liron’s outro

    Links
    * Vitalik’s website — https://vitalik.eth.limo
    * Vitalik’s Twitter — https://x.com/vitalikbuterin
    * Eliezer Yudkowsky’s explanation of p-Zombies — https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    2:26:10
  • Why I'm Scared GPT-9 Will Murder Me — Liron on Robert Wright’s Nonzero Podcast
    Today I’m sharing my interview on Robert Wright’s Nonzero Podcast from last May. Rob is an especially sharp interviewer who doesn't just nod along; he had great probing questions for me.

    This interview happened right after Ilya Sutskever and Jan Leike resigned from OpenAI in May 2024, continuing a pattern that goes back to Dario Amodei leaving to start Anthropic. These aren't fringe doomers; these are the people hired specifically to solve the safety problem, and they keep concluding it's not solvable at the current pace.

    Timestamps
    * 00:00:00 - Liron’s preface
    * 00:02:10 - Robert Wright introduces Liron
    * 00:04:02 - PauseAI protests at OpenAI headquarters
    * 00:05:15 - OpenAI resignations (Ilya Sutskever, Jan Leike, Dario Amodei, Paul Christiano, Daniel Kokotajlo)
    * 00:15:30 - P vs NP problem as analogy for AI alignment difficulty
    * 00:22:31 - AI pause movement and protest turnout
    * 00:29:02 - Defining AI doom and sci-fi scenarios
    * 00:32:05 - What’s My P(Doom)™
    * 00:35:18 - Fast vs slow AI takeoff and Sam Altman's position
    * 00:42:33 - Paperclip thought experiment and instrumental convergence explanation
    * 00:54:40 - Concrete examples of AI power-seeking behavior (business assistant scenario)
    * 01:00:58 - GPT-4 TaskRabbit deception example and AI reasoning capabilities
    * 01:09:00 - AI alignment challenges and human values discussion
    * 01:17:33 - Wrap-up and transition to premium subscriber content

    Show Notes
    * This episode on Rob’s Nonzero Newsletter (you can subscribe for premium access to the last hour of our discussion!) — https://www.nonzero.org/p/in-defense-of-ai-doomerism-robert
    * This episode on Rob’s YouTube — https://www.youtube.com/watch?v=VihA_-8kBNg
    * PauseAI — https://pauseai.info
    * PauseAI US — http://pauseai-us.org

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    1:19:23
  • The Man Who Might SOLVE AI Alignment — Dr. Steven Byrnes, AGI Safety Researcher @ Astera Institute
    Dr. Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.

    Steve has a whopping 90% P(Doom), but unlike most AI safety researchers who focus on current LLMs, he argues that LLMs will plateau before becoming truly dangerous, and the real threat will come from next-generation "brain-like AGI" based on actor-critic reinforcement learning.

    For the last five years, he's been diving deep into neuroscience to reverse engineer how human brains actually work, and how to use that knowledge to solve the technical AI alignment problem. He's one of the few people who both understands why alignment is hard and is taking a serious technical shot at solving it.

    We cover his "two subsystems" model of the brain, why current AI safety approaches miss the mark, his disagreements with social evolution approaches, and why understanding human neuroscience matters for building aligned AGI.

    Timestamps
    * 00:00:00 - Cold Open: Solving the technical alignment problem
    * 00:00:26 - Introducing Dr. Steven Byrnes and his impressive background
    * 00:01:59 - Steve's unique mental strengths
    * 00:04:08 - The cold fusion research story demonstrating Steve's approach
    * 00:06:18 - How Steve got interested in neuroscience through Jeff Hawkins
    * 00:08:18 - Jeff Hawkins' cortical uniformity theory and brain vs deep learning
    * 00:11:45 - When Steve first encountered Eliezer's sequences and became AGI-pilled
    * 00:15:11 - Steve's research direction: reverse engineering human social instincts
    * 00:21:47 - Four visions of alignment success and Steve's preferred approach
    * 00:29:00 - The two brain subsystems model: steering brain vs learning brain
    * 00:35:30 - Brain volume breakdown and the learning vs steering distinction
    * 00:38:43 - Cerebellum as the "LLM" of the brain doing predictive learning
    * 00:46:44 - Language acquisition: Chomsky vs learning algorithms debate
    * 00:54:13 - What LLMs fundamentally can't do: complex context limitations
    * 01:07:17 - Hypothalamus and brainstem doing more than just homeostasis
    * 01:13:45 - Why morality might just be another hypothalamus cell group
    * 01:18:00 - Human social instincts as model-based reinforcement learning
    * 01:22:47 - Actor-critic reinforcement learning mapped to brain regions
    * 01:29:33 - Timeline predictions: when brain-like AGI might arrive
    * 01:38:28 - Why humans still beat AI on strategic planning and domain expertise
    * 01:47:27 - Inner vs outer alignment: cocaine example and reward prediction
    * 01:55:13 - Why legible Python code beats learned reward models
    * 02:00:45 - Outcome pumps, instrumental convergence, and the Stalin analogy
    * 02:11:48 - What’s Your P(Doom)™
    * 02:16:45 - Massive headroom above human intelligence
    * 02:20:45 - Can AI take over without physical actuators? (Yes)
    * 02:26:18 - Steve's bold claim: 30 person-years from proto-AGI to superintelligence
    * 02:32:17 - Why overhang makes the transition incredibly dangerous
    * 02:35:00 - Social evolution as alignment solution: why it won't work
    * 02:46:47 - Steve's research program: legible reward functions vs RLHF
    * 02:59:52 - AI policy discussion: why Steven is skeptical of pause AI
    * 03:05:51 - Lightning round: offense vs defense, P(simulation), AI unemployment
    * 03:12:42 - Thanking Steve and wrapping up the conversation
    * 03:13:30 - Liron's outro: Supporting the show and upcoming episodes with Vitalik and Eliezer

    Show Notes
    * Steven Byrnes' Website & Research — https://sjbyrnes.com/
    * Steve’s Twitter — https://x.com/steve47285
    * Astera Institute — https://astera.org/

    Steve’s Sequences
    * Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
    * Foom & Doom 1: “Brain in a box in a basement” — https://www.alignmentforum.org/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement
    * Foom & Doom 2: Technical alignment is hard — https://www.alignmentforum.org/posts/bnnKGSCHJghAvqPjS/foom-and-doom-2-technical-alignment-is-hard

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    Get full access to Doom Debates at lironshapira.substack.com/subscribe
    --------  
    3:15:16

About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com