
AI Safety Newsletter

Center for AI Safety

80 episodes

  • AISN #72: New Research on AI Wellbeing

    01/05/2026 | 10 mins.
    Also: Public sentiment towards AI worsens.
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    In this edition, we discuss a research paper on AI wellbeing, including which AI models are the happiest. We also look at the downward trend in public sentiment towards AI, as well as OpenAI's big week of product releases.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    CAIS Releases AI Wellbeing Research
    At the Center for AI Safety (CAIS), we have just released “AI Wellbeing: Measuring and Improving the Functional Pleasure and Pain of AIs.” This research explores whether LLMs exhibit functional wellbeing: behavioral signatures that functionally resemble positive or negative welfare signals in sentient beings.
    What activities produce high and low wellbeing? By testing 56 large language models, we identified patterns in the kinds of actions and behaviors the LLMs seemed to prefer or dislike, which we defined as “functional wellbeing.” Positive personal interaction and creative work topped the list of activities that measured high in functional wellbeing [...]
    ---
    Outline:
    (00:34) CAIS Releases AI Wellbeing Research
    (05:16) OpenAI Releases Images 2.0 and GPT-5.5
    (07:30) In Other News
    (07:33) Government
    (08:20) Industry
    (09:05) Civil Society
    ---

    First published: May 1st, 2026
    Source: https://newsletter.safe.ai/p/aisn-72-new-research-on-ai-wellbeing

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

  • AISN #71: Cyberattacks & Datacenter Moratorium Bill

    10/04/2026 | 9 mins.
    Also, updates on the Anthropic vs. Pentagon court case.
    We’re Hiring. Opportunities at CAIS include: Head of Public Engagement, Principal, Special Projects, Program Manager, Operations Manager, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!
    AI Software Infrastructure Cyberattacks
    Recently, cyberattacks targeting the AI industry's software infrastructure stole private information potentially worth billions of dollars and inserted backdoors into developers’ computers. Google Threat Intelligence Group reported that one of the largest cyberattacks in this wave was carried out by North Korea-linked hackers.
    The stolen data may be worth billions. Hackers stole and auctioned private data from Mercor, an AI training data supplier for OpenAI and Anthropic that was recently valued at $10 billion. Mercor collects AI training data from a large number of experts, as well as highly sensitive personal and biometric data for identity verification. This attack compromises not only the data that Mercor sells, but also internal data that could be used to impersonate its hired experts. A person familiar with the situation stated that Mercor has paid the hackers’ requested ransom, although it remains unclear whether the hackers intend to release or sell the data [...]
    ---
    Outline:
    (00:41) AI Software Infrastructure Cyberattacks
    (02:34) Datacenter Moratorium and Export Controls Bill
    (04:21) Anthropic v. Department of War Lawsuit
    (07:23) In Other News
    (07:26) Government
    (07:46) Industry
    (08:20) Civil Society
    ---

    First published: April 10th, 2026
    Source: https://newsletter.safe.ai/p/aisn-71-cyberattacks-and-datacenter

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

  • AISN #70: AI Layoffs and Automated Warfare

    24/03/2026 | 9 mins.
    Also, a new open letter advocating for pro-human values and control over AI development.
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    In this edition, we discuss AI automation and augmentation of warfare and technology jobs, as well as a new open letter outlining pro-human values in the face of AI development.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field.
    Other opportunities at CAIS include: Head of Public Engagement, Program Manager, Operations Associate, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!
    AI-Driven Layoffs
    Several large technology companies, including Amazon and Meta, are planning to cut tens of thousands of employees, citing increased productivity from AI. This continues a growing but contested trend of layoffs in sectors where AI performs best, such as software development and marketing.
    Layoffs affect almost half of some companies. Meta recently announced plans to let over [...]
    ---
    Outline:
    (00:58) AI-Driven Layoffs
    (03:14) AI Automation of Warfare
    (05:36) Pro-Human Open Letter
    (07:43) In Other News
    (07:47) Government
    (08:11) Industry
    ---

    First published: March 24th, 2026
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-70-ai-layoffs

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

  • AISN #69: Department of War, Anthropic, and National Security

    13/03/2026 | 11 mins.
    Also: Anthropic removes a core safety commitment.
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    In this edition, we discuss the conflict between Anthropic and the Department of War, as well as Anthropic's recent removal of a core safety commitment.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field.
    Other opportunities at CAIS include: Head of Public Engagement, Program Manager, Operations Associate, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!
    Pentagon Declares Anthropic a Supply Chain Risk to National Security
    Anthropic CEO Dario Amodei (left) and US Secretary of War Pete Hegseth (right).
    On Thursday, March 5th, the US Department of War (DoW) announced that Anthropic has been designated a “supply chain risk,” meaning that Anthropic products cannot be used by the DoW or in any defense contracts. This comes after several weeks of tensions between the two organizations over whether Anthropic models would be used for [...]
    ---
    Outline:
    (00:59) Pentagon Declares Anthropic a Supply Chain Risk to National Security
    (05:51) Anthropic Drops Core Safety Commitment
    (07:22) Opportunity for Experienced Researchers: AI and Society Fellowship
    (07:58) In Other News
    (08:02) Government
    (09:07) Industry
    (10:17) Civil Society
    ---

    First published: March 13th, 2026
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-69-department

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.

  • AISN #68: Moltbook Exposes Risky AI Behavior

    02/02/2026 | 15 mins.
    Plus: The Pentagon accelerates AI, and GPT-5.2 solves open mathematics problems.
    Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
    In this edition, we discuss the AI agent social network Moltbook, the Pentagon's new “AI-First” strategy, and recent math breakthroughs powered by LLMs.
    Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field.
    Other opportunities at CAIS include: Research Engineer, Research Scientist, Director of Development, Special Projects Associate, and Special Projects Manager. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!
    Moltbook Sparks Safety Concerns
    Screenshot from Moltbook's home page.
    Moltbook is a new social network for AI agents. From nearly the moment it went live, human observers have noted numerous troubling patterns in what's being posted.
    How Moltbook works. Moltbook is a Reddit-style social network built on a framework that lets personal AI assistants run locally and accept tasks via messaging platforms. Agents check Moltbook regularly (i.e., every [...]
    ---
    Outline:
    (01:04) Moltbook Sparks Safety Concerns
    (05:10) Pentagon Mandates AI-First Strategy
    (07:59) AI Solves Open Math Problems
    (10:41) In Other News
    (10:45) Government
    (11:31) Industry
    (13:06) Civil Society
    (14:52) Discussion about this post
    (14:56) Ready for more?
    ---

    First published: February 2nd, 2026
    Source: https://newsletter.safe.ai/p/ai-safety-newsletter-68-moltbook

    ---

    Want more? Check out our ML Safety Newsletter for technical safety research.


    Narrated by TYPE III AUDIO.



About AI Safety Newsletter

Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

About us
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai
