
For Humanity: An AI Risk Podcast

The AI Risk Network

129 episodes

  • Can't We Just Pause AI? | For Humanity #78

    31/1/2026 | 1h 13 mins.
    What happens when AI risk stops being theoretical—and starts showing up in people’s jobs, families, and communities? In this episode of For Humanity, John sits down with Maxime Fournes, the new CEO of PauseAI Global, for a wide-ranging and deeply human conversation about burnout, strategy, and what it will actually take to slow down runaway AI development. From meditation retreats and personal sustainability to mass job displacement, data center backlash, and political capture, Maxime lays out a clear-eyed view of where the AI safety movement stands in 2026—and where it must go next.
    They explore why regulation alone won’t save us, how near-term harms like job loss, youth mental health crises, and community disruption may be the most powerful on-ramps to existential risk awareness, and why movement-building—not just policy papers—will decide our future. This conversation reframes AI safety as a struggle over power, narratives, and timing, and asks what it would take to hit a true global tipping point before it’s too late.
    Together, they explore:
    * Why AI safety must address real, present-day harms, not just abstract futures
    * How burnout and mental resilience shape long-term movement success
    * Why job displacement, youth harm, and data centers are political leverage points
    * The limits of regulation without enforcement and public pressure
    * How tipping points in public opinion actually form
    * Why protests still matter—even when they’re small
    * What it will take to build a global, durable AI safety movement
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.


  • Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77

    17/1/2026 | 1h 23 mins.
    What if the biggest mistake in AI safety is believing that laws, treaties, and regulations will save us? In this episode of For Humanity, John sits down with Peter Sparber, a former architect of Big Tobacco’s successful war against regulation, to confront a deeply uncomfortable truth: the AI industry is using the exact same playbook—and it’s working. Drawing on decades of experience inside Washington’s most effective lobbying operations, Peter explains why regulation almost always fails against powerful industries, how AI companies are already neutralizing political pressure, and why real change will never come from lawmakers alone. Instead, he argues that the only path to meaningful AI safety is making unsafe AI bad for business—by injecting risk, liability, and uncertainty directly into boardrooms and C-suites.
    Peter reveals why AI doesn’t need to outsmart humanity to defeat regulation; it only needs money, time, and political cover. By exposing how industries evade oversight, delay enforcement, and co-opt regulators, this conversation reframes AI safety around power, incentives, and accountability.
    Together, they explore:
    * Why laws, treaties, and regulations repeatedly fail against powerful industries
    * How Big AI is following Big Tobacco’s exact regulatory playbook
    * Why public outrage rarely translates into effective policy
    * How companies neutralize enforcement without breaking the law
    * Why third-party standards may matter more than legislation
    * How local resistance, liability, and investor pressure can change behavior
    * Why making unsafe AI bad for business is the only strategy with teeth
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.


  • What We Lose When AI Makes Choices for Us | For Humanity #76

    20/12/2025 | 1h 20 mins.
    What if the greatest danger of AI isn’t extinction — but the quiet loss of our ability to think and choose for ourselves? In this episode of For Humanity, John sits down with journalist and author Jacob Ward (CNN, PBS, Al Jazeera; The Loop) to unpack the most under-discussed risk of artificial intelligence: decision erosion.
    Jacob explains why AI doesn’t need to become sentient to be dangerous — it only needs to be convenient. Drawing from neuroscience, behavioral psychology, and real-world reporting, he reveals how systems designed to “help” us are slowly pushing humans into cognitive autopilot.
    Together, they explore:
    * Why AI threatens near-term human agency more than long-term sci-fi extinction
    * How Google Maps offers a chilling preview of AI’s effect on the human brain
    * The difference between fast-thinking and slow-thinking — and why AI exploits it
    * Why persuasive AI may outperform humans politically and psychologically
    * How profit incentives, not intelligence, are driving the most dangerous outcomes
    * Why focusing only on extinction risk alienates the public — and weakens AI safety efforts
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #ForHumanity #JacobWard #AIandSociety #ArtificialIntelligence #HumanAgency #TechEthics #AIResponsibility


  • The Congressman Who Gets AI Extinction Risk — Rep. Bill Foster on the Future of Humanity | For Humanity | Ep. 75

    6/12/2025 | 1h 10 mins.
    In this episode of For Humanity, John Sherman sits down with Congressman Bill Foster — the only PhD scientist in Congress, a former Fermilab physicist, and one of the few lawmakers deeply engaged with advanced AI risks. Together, they dive into a wide-ranging conversation about the accelerating capabilities of AI, the systemic vulnerabilities inside Congress, and why the next few years may determine the fate of our species.
    Foster unpacks why AI risk mirrors nuclear risk in scale, how interpretability is collapsing as models evolve, why Congress is structurally incapable of responding fast enough, and how geopolitical pressures distort every conversation on safety. They also explore the looming financial bubble around AI, the coming energy crunch from massive data centers, and the emerging threat of anonymous encrypted compute — a pathway that could enable rogue actors or rogue AIs to operate undetected.
    If you want a deeper understanding of how AI intersects with power, geopolitics, compute, regulation, and existential risk, this conversation is essential.
    Together, they explore:
    * The real risks emerging from today’s AI systems — and what’s coming next
    * Why Congress is unprepared for AGI-level threats
    * How compute verification could become humanity’s safety net
    * Why data centers may reshape energy, economics, and local politics
    * How scientific literacy in government could redefine AI governance
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence


  • AI Risk, Superintelligence & The Fight Ahead — A Deep Dive with Liv Boeree | For Humanity #74

    22/11/2025 | 1h 17 mins.
    In this episode of For Humanity, John sits down with Liv Boeree — poker champion, systems thinker, and longtime AI risk advocate — for a candid conversation about where we truly stand in the race toward advanced AI. Liv breaks down why public understanding of superintelligence is so uneven, how misaligned incentives shape the entire ecosystem, and why issues like surveillance, culture, and gender dynamics matter more than people realize.
    They explore the emotional realities of working on existential risk, the impact of doomscrolling, and how mindset and intuition keep people grounded in such turbulent times. The result is a clear, grounded, and surprisingly hopeful look at the future of technology, power, and responsibility. If you’re passionate about understanding AI’s real impacts (today and tomorrow), this is a must-watch.
    Together, they explore:
    * The real risks we face from AI — today and in the coming years
    * Why public understanding of superintelligence is so fractured
    * How incentives, competition, and culture misalign technology with human flourishing
    * What poker teaches us about deception, risk, and reading motives
    * The role of women, intuition, and “mama bear energy” in the AI safety movement
    📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat.
    #AISafety #AIAlignment #ForHumanityPodcast #AIRisk #FutureOfAI #AIandWarfare #AutonomousWeapons #AIEthics #TechForGood #ArtificialIntelligence




About For Humanity: An AI Risk Podcast

For Humanity: An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly in as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity. theairisknetwork.substack.com