Future of Life Institute Podcast

Future of Life Institute

Available Episodes

Showing 5 of 244
  • AGI Security: How We Defend the Future (with Esben Kran)
    Esben Kran joins the podcast to discuss why securing AGI requires more than traditional cybersecurity, exploring new attack surfaces, adaptive malware, and the societal shifts needed for resilient defenses. We cover protocols for safe agent communication, oversight without surveillance, and distributed safety models across companies and governments. Learn more about Esben's work at: https://blog.kran.ai
    Timestamps:
    00:00 – Intro and preview
    01:13 – AGI security vs traditional cybersecurity
    02:36 – Rebuilding societal infrastructure for embedded security
    03:33 – Sentware: adaptive, self-improving malware
    04:59 – New attack surfaces
    05:38 – Social media as misaligned AI
    06:46 – Personal vs societal defenses
    09:13 – Why private companies underinvest in security
    13:01 – Security as the foundation for any AI deployment
    14:15 – Oversight without a surveillance state
    17:19 – Protocols for safe agent communication
    20:25 – The expensive internet hypothesis
    23:30 – Distributed safety for companies and governments
    28:20 – Cloudflare’s “agent labyrinth” example
    31:08 – Positive vision for distributed security
    33:49 – Human value when labor is automated
    41:19 – Encoding law for machines: contracts and enforcement
    44:36 – DarkBench: detecting manipulative LLM behavior
    55:22 – The AGI endgame: default path vs designed future
    57:37 – Powerful tool AI
    01:09:55 – Fast takeoff risk
    01:16:09 – Realistic optimism
    Duration: 1:18:20
  • Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)
    Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene. Follow Benjamin's work at: https://benjamintodd.substack.com
    Timestamps:
    00:00 What are reasoning models?
    04:04 Reinforcement learning supercharges reasoning
    05:06 Reasoning models vs. agents
    10:04 Economic impact of automated math/code
    12:14 Compute as a bottleneck
    15:20 Shift from giant pre-training to post-training/agents
    17:02 Three feedback loops: algorithms, chips, robots
    20:33 How fast could an algorithmic loop run?
    22:03 Chip design and production acceleration
    23:42 Industrial/robotics loop and growth dynamics
    29:52 Society’s slow reaction; “warning shots”
    33:03 Robotics: software and hardware bottlenecks
    35:05 Scaling robot production
    38:12 Robots at ~$0.20/hour?
    43:13 Regulation and humans-in-the-loop
    49:06 Personal prep: why it still matters
    52:04 Build an information network
    55:01 Save more money
    58:58 Land, real estate, and scarcity in an AI world
    01:02:15 Valuable skills: get close to AI, or far from it
    01:06:49 Fame, relationships, citizenship
    01:10:01 Redistribution, welfare, and politics under AI
    01:12:04 Try to become more resilient
    01:14:36 Information hygiene
    01:22:16 Seven-year horizon and scaling limits by ~2030
    Duration: 1:27:00
  • From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)
    On this episode, Calum Chace joins me to discuss the transformative impact of AI on employment, comparing the current wave of cognitive automation to historical technological revolutions. We talk about "universal generous income", fully-automated luxury capitalism, and redefining education with AI tutors. We end by examining verification of artificial agents and the ethics of attributing consciousness to machines. Learn more about Calum's work here: https://calumchace.com
    Timestamps:
    00:00:00 Preview and intro
    00:03:02 Past tech revolutions and AI-driven unemployment
    00:05:43 Cognitive automation: from secretaries to every job
    00:08:02 The “peak horse” analogy and avoiding human obsolescence
    00:10:55 Infinite demand and lump of labor
    00:18:30 Fully-automated luxury capitalism
    00:23:31 Abundance economy and a potential employment cliff
    00:29:37 Education reimagined with personalized AI tutors
    00:36:22 Real-world uses of LLMs: memory, drafting, emotional insight
    00:42:56 Meaning beyond jobs: aristocrats, retirees, and kids
    00:49:51 Four futures of superintelligence
    00:57:20 Conscious AI and empathy as a safety strategy
    01:10:55 Verifying AI agents
    01:25:20 Over-attributing vs under-attributing machine consciousness
    Duration: 1:37:20
  • How AI Could Help Overthrow Governments (with Tom Davidson)
    On this episode, Tom Davidson joins me to discuss the emerging threat of AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. We explore scenarios including secret loyalties within companies, rapid military automation, and how AI-driven democratic backsliding could differ significantly from historical precedents. Tom also outlines key mitigation strategies, risk indicators, and opportunities for individuals to help prevent these threats. Learn more about Tom's work here: https://www.forethought.org
    Timestamps:
    00:00:00 Preview: why preventing AI-enabled coups matters
    00:01:24 What do we mean by an “AI-enabled coup”?
    00:01:59 Capabilities AIs would need (persuasion, strategy, productivity)
    00:02:36 Cyber-offense and the road to robotized militaries
    00:05:32 Step-by-step example of an AI-enabled military coup
    00:08:35 How AI-enabled coups would differ from historical coups
    00:09:24 Democratic backsliding (Venezuela, Hungary, U.S. parallels)
    00:12:38 Singular loyalties, secret loyalties, exclusive access
    00:14:01 Secret-loyalty scenario: CEO with hidden control
    00:18:10 From sleeper agents to sophisticated covert AIs
    00:22:22 Exclusive-access threat: one project races ahead
    00:29:03 Could one country outgrow the rest of the world?
    00:40:00 Could a single company dominate global GDP?
    00:47:01 Autocracies vs democracies
    00:54:43 Mitigations for singular and secret loyalties
    01:06:25 Guardrails, monitoring, and controlled-use APIs
    01:12:38 Using AI itself to preserve checks-and-balances
    01:24:53 Risk indicators to watch for AI-enabled coups
    01:33:05 Tom’s risk estimates for the next 5 and 30 years
    01:46:50 How you can help – research, policy, and careers
    Duration: 1:53:49
  • What Happens After Superintelligence? (with Anders Sandberg)
    Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization. We conclude with Sandberg explaining the difficulties of designing reliable AI systems amidst rapid change and coordination risks. Learn more about Anders's work here: https://mimircenter.org/anders-sandberg
    Timestamps:
    00:00:00 Preview and intro
    00:04:20 2030 superintelligence scenario
    00:11:55 Status, post-scarcity, and reshaping human psychology
    00:16:00 Physical limits: energy, datacenter, and waste-heat bottlenecks
    00:23:48 Technosphere vs biosphere
    00:28:42 Culture and physics as long-run drivers of civilization
    00:40:38 How superintelligence could upend markets and governments
    00:50:01 State inertia: why governments lag behind companies
    00:59:06 Value lock-in, censorship, and model alignment
    01:08:32 Emergent AI ecosystems and coordination-failure risks
    01:19:34 Predictability vs reliability: designing safe systems
    01:30:32 Crossing the reliability threshold
    01:38:25 Personal reflections on accelerating change
    Duration: 1:44:54

About Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.