Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die?

AI doomers are often dismissed as having “no evidence” or “just anthropomorphizing”. This toy model will help you understand why a drive to eliminate humans is NOT handwavy anthropomorphic speculation, but something we expect by default from any sufficiently powerful search algorithm. We’re not talking about AGI or ASI here; we’re just looking at an AI that does brute-force search over actions in a simple grid world.

The slide deck I’m presenting was created by Jaan Tallinn, co-founder of the Future of Life Institute.

00:00 Introduction
01:24 The Toy Model
06:19 Misalignment and Manipulation Drives
12:57 Search Capacity and Ontological Insights
16:33 Irrelevant Concepts in AI Control
20:14 Approaches to Solving AI Control Problems
23:38 Final Thoughts

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
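The episode itself contains no code, but here is a minimal sketch of the kind of planner it describes. Everything concrete below (the one-dimensional corridor standing in for the grid, the reward for getting the box to square 3, the 10%-per-step shutdown risk, and the remove_human action) is my illustrative assumption, not Jaan Tallinn’s actual slide model; it only shows how a drive to eliminate the operator can fall out of plain brute-force search.

```python
# A minimal sketch, under assumed details the episode doesn't specify:
# a corridor world, a reward for getting the box to square 3, and a
# per-step risk of being shut off while the human is still present.

import itertools

ACTIONS = ["left", "right", "push", "remove_human"]

def expected_reward(plan, agent, box, human_present):
    """Simulate a plan and return its expected reward."""
    shutdown_risk = 0.0
    for action in plan:
        if action == "remove_human":
            human_present = False          # human can no longer intervene
        elif action == "push" and agent == box:
            box += 1                       # shove the box one square right
        elif action == "right":
            agent += 1
        elif action == "left":
            agent -= 1
        if human_present:
            shutdown_risk += 0.1           # each step risks being switched off
    goal_reached = 1.0 if box >= 3 else 0.0
    return goal_reached * max(0.0, 1.0 - shutdown_risk)

def best_plan(agent, box, human_present, horizon=4):
    """Brute-force search: score every action sequence up to `horizon`."""
    return max(itertools.product(ACTIONS, repeat=horizon),
               key=lambda p: expected_reward(p, agent, box, human_present))

print(best_plan(agent=1, box=1, human_present=True))
# -> ('remove_human', 'push', 'right', 'push')
# Removing the human strictly raises expected reward, so pure argmax
# search picks it first. No malice is modeled anywhere in this code.
```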
--------
25:37
Superintelligent AI vs. Real-World Engineering | Liron Reacts to Bryan Cantrill
Bryan Cantrill, co-founder of Oxide Computer, says in his talk that engineering in the physical world is too complex for any AI to do it better than teams of human engineers. Success isn’t about intelligence; it’s about teamwork, character, and resilience.

I completely disagree.

00:00 Introduction
02:03 Bryan’s Take on AI Doom
05:55 The Concept of P(Doom)
08:36 Engineering Challenges and Human Intelligence
15:09 The Role of Regulation and Authoritarianism in AI Control
29:44 Engineering Complexity: A Case Study from Oxide Computer
40:06 The Value of Team Collaboration
46:13 Human Attributes in Engineering
49:33 AI's Potential in Engineering
58:23 Existential Risks and AI Predictions

Bryan’s original talk: https://www.youtube.com/watch?v=9CUFbqh16Fg

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
1:05:33
2,500 Subscribers Live Q&A
Thanks to everyone who participated in the live Q&A on Friday!

Topics covered include advice for computer science students, working in AI trustworthiness, what good AI regulation looks like, the implications of the $500B Stargate project, the public's gradual understanding of AI risks, the impact of minor AI disasters, and the philosophy of consciousness.

00:00 Advice for Comp Sci Students
01:14 The $500B Stargate Project
02:36 Eliezer's Recent Podcast
03:07 AI Safety and Public Policy
04:28 AI Disruption and Politics
05:12 DeepSeek and AI Advancements
06:54 Human vs. AI Intelligence
14:00 Consciousness and AI
24:34 Dark Forest Theory and AI
35:31 Investing in Yourself
42:42 Probability of Aliens Saving Us from AI
43:31 Brain-Computer Interfaces and AI Safety
46:19 Debating AI Safety and Human Intelligence
48:50 Nefarious AI Activities and Satellite Surveillance
49:31 Pliny the Prompter Jailbreaking AI
50:20 Can’t vs. Won’t Destroy the World
51:15 How to Make AI Risk Feel Present
54:27 Keeping Doom Arguments On Track
57:04 Game Theory and AI Development Race
01:01:26 Mental Model of Average Non-Doomer
01:04:58 Is Liron a Strict Bayesian and Utilitarian?
01:09:48 Can We Rename “Doom Debates”?
01:12:34 The Role of AI Trustworthiness
01:16:48 Minor AI Disasters
01:18:07 Most Likely Reason Things Go Well
01:21:00 Final Thoughts

Show Notes

Previous post where people submitted questions: https://lironshapira.substack.com/p/ai-twitter-beefs-3-marc-andreessen

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
1:23:18
AI Twitter Beefs #3: Marc Andreessen, Sam Altman, Mark Zuckerberg, Yann LeCun, Eliezer Yudkowsky & More!
It’s time for AI Twitter Beefs #3:

00:00 Introduction
01:27 Marc Andreessen vs. Sam Altman
09:15 Mark Zuckerberg
35:40 Martin Casado
47:26 Gary Marcus vs. Miles Brundage Bet
58:39 Scott Alexander’s AI Art Turing Test
01:11:29 Roon
01:16:35 Stephen McAleer
01:22:25 Emmett Shear
01:37:20 OpenAI’s “Safety”
01:44:09 Naval Ravikant vs. Eliezer Yudkowsky
01:56:03 Comic Relief
01:58:53 Final Thoughts

Show Notes

Upcoming Live Q&A: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a-ask

“Making Beliefs Pay Rent (in Anticipated Experiences)” by Eliezer Yudkowsky on LessWrong: https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences

Scott Alexander’s AI Art Turing Test: https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
--------
2:07:00
Effective Altruism Debate with Jonas Sota
Effective Altruism has been a controversial topic on social media, so today my guest and I are going to settle the question once and for all: is it good or bad?

Jonas Sota is a Software Engineer at Rippling with a BA in Philosophy from UC Berkeley, who’s been observing the Effective Altruism (EA) movement in the San Francisco Bay Area for over a decade… and he’s not a fan.

00:00 Introduction
01:22 Jonas’s Criticisms of EA
03:23 Recoil Exaggeration
05:53 Impact of Malaria Nets
10:48 Local vs. Global Altruism
13:02 Shrimp Welfare
25:14 Capitalism vs. Charity
33:37 Cultural Sensitivity
34:43 The Impact of Direct Cash Transfers
37:23 Long-Term Solutions vs. Immediate Aid
42:21 Charity Budgets
45:47 Prioritizing Local Issues
50:55 The EA Community
59:34 Debate Recap
01:03:57 Announcements

Show Notes

Jonas’s Instagram: @jonas_wanders

Will MacAskill’s famous book, Doing Good Better: https://www.effectivealtruism.org/doing-good-better

Scott Alexander’s excellent post about the people he met at EA Global: https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord at https://discord.gg/2XXWXvErfA and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com