
Future of Life Institute Podcast

Future of Life Institute
Latest episodes

494 episodes

  • What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)

    20/03/2026 | 1h 12 mins.
    Emilia Javorsky is a physician-scientist and Director of the Futures Program at the Future of Life Institute. She joins the podcast to discuss her newly published essay on AI and cancer. She challenges tech-industry claims that superintelligence will cure cancer, arguing that biology's complexity, poor data, and misaligned incentives are bigger bottlenecks than raw intelligence. The conversation covers realistic roles for AI in drug discovery and clinical trials, and in cutting unnecessary medical bureaucracy.
    You can read the full essay at: curecancer.ai
    CHAPTERS:
    (00:00) Episode Preview
    (01:10) Introduction and essay motivation
    (06:30) Intelligence vs data bottlenecks
    (19:03) Cancer's complexity and heterogeneity
    (29:05) Measurement, health, and homeostasis
    (41:41) AI in drug development
    (50:13) Regulation, FDA, and innovation
    (01:02:58) Practical paths toward cures
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • AI vs Cancer - How AI Can, and Can't, Cure Cancer (by Emilia Javorsky)

    16/03/2026 | 2h 43 mins.
    Tech executives have promised that AI will cure cancer. The reality is more complicated — and more hopeful. This essay examines where AI genuinely accelerates cancer research, where the promises fall short, and what researchers, policymakers, and funders need to do next.
    You can read the full essay at: curecancer.ai
    CHAPTERS:
    (00:00) Essay Preview
    (00:54) How AI Can, and Can't, Cure Cancer
    (17:05) Reckoning with Past Failures
    (35:23) Misguiding Myths and Errors
    (59:15) AI Solutions Derive from First Principles or Data
    (01:31:31) Systemic Bottlenecks & Misalignments
    (02:08:46) Conclusion
    (02:14:35) The Roadmap Forward
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • How AI Hacks Your Brain's Attachment System (with Zak Stein)

    05/03/2026 | 1h 44 mins.
    Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.

    LINKS:
    AI Psychological Harms Research Coalition
    Zak Stein official website
    CHAPTERS:
    (00:00) Episode Preview
    (00:56) Education to existential risk
    (03:03) Lessons from social media
    (08:41) Attachment systems and AI
    (18:42) AI companions and attachment
    (27:23) Anthropomorphism and user disempowerment
    (36:06) Cognitive atrophy and tools
    (45:54) Children, toys, and attachment
    (57:38) AI psychosis and selfhood
    (01:10:31) Cognitive security and parenting
    (01:26:15) Education, collapse, and speciation
    (01:36:40) Preserving humanity and values
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • The Case for a Global Ban on Superintelligence (with Andrea Miotti)

    20/02/2026 | 1h 7 mins.
    Andrea Miotti is the founder and CEO of Control AI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take.

    LINKS:
    Control AI
    Control AI global action page
    ControlAI's lawmaker contact tools
    Open roles at ControlAI
    ControlAI's theory of change
    CHAPTERS:
    (00:00) Episode Preview
    (00:52) Extinction risk and lobbying
    (08:59) Progress toward superintelligence
    (16:26) Building political awareness
    (24:27) Global regulation strategy
    (33:06) Race dynamics and public
    (42:36) Vision and key safeguards
    (51:18) Recursive self-improvement controls
    (58:13) Power concentration and action
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • Can AI Do Our Alignment Homework? (with Ryan Kidd)

    06/02/2026 | 1h 46 mins.
    Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, model deception risks, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, hiring needs, and advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: https://matsprogram.org

    CHAPTERS:
    (00:00) Episode Preview
    (00:20) Introductions and AGI timelines
    (10:13) Deception, values, and control
    (23:20) Dual use and alignment
    (32:22) Frontier labs and governance
    (44:12) MATS tracks and mentors
    (58:14) Talent archetypes and demand
    (01:12:30) Applicant profiles and selection
    (01:20:04) Applications, breadth, and growth
    (01:29:44) Careers, resources, and ideas
    (01:45:49) Final thanks and wrap
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP


About Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
