
Astral Codex Ten Podcast

Jeremiah

1142 episodes

  • Mantic Monday: Groundhog Day

    02/04/2026 | 30 mins.
    Having Your Own Government Try To Destroy You Is (At Least Temporarily) Good For Business
    On Friday, the Pentagon declared AI company Anthropic a "supply chain risk", a designation never before given to an American company. This unprecedented move was seen as an attempt to punish, maybe destroy the company. How effective was it?
    Anthropic isn't publicly traded, so we turn to the prediction markets. Ventuals.com has a "perpetual future" on Anthropic stock, a complicated instrument attempting to track the company's valuation, to be resolved at the IPO. Here's what they've got:
    https://www.astralcodexten.com/p/mantic-monday-groundhog-day
  • "All Lawful Use": Much More Than You Wanted To Know

    02/04/2026 | 19 mins.
    Last Friday, Secretary of War Pete Hegseth declared AI company Anthropic a "supply chain risk", the first time this designation has ever been applied to a US company. The trigger for the move was Anthropic's refusal to allow the Department of War to use their AIs for mass surveillance and autonomous weapons.
    A few hours later, Hegseth and Sam Altman declared an agreement-in-principle for OpenAI's models to be used in the niche vacated by Anthropic. Altman stated that he had received guarantees that OpenAI's models wouldn't be used for mass surveillance or autonomous weapons either, but given Hegseth's unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman's contract must be weaker or, in a worst-case scenario, completely toothless.
    The debate centers on the Department of War's demand that AIs be permitted for "all lawful use". Anthropic worried that mass surveillance and autonomous weaponry would de facto fall in this category; Hegseth and Altman have tried to reassure the public that they won't, and the parts of their agreement that have leaked to the public cite the statutes that Altman expects to constrain this category. Altman's initial statement seemed to suggest additional prohibitions, but on a closer read, it provides little tangible evidence of meaningful further restrictions.
    Some alert ACX readers have done a deep dive into national security law to try to untangle the situation. Their conclusion mirrors that of Anthropic and the majority of Twitter commenters: this is not enough. Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice. Further, many of the rules which do exist can be changed by the Department of War at any time. Although OpenAI's national security lead said that "we intended [the phrase 'all lawful use'] to mean [according to the law] at the time the contract is signed", this is not how contract law usually works, and not how the provision is likely to be enforced. Therefore, these guarantees are not helpful.
    To learn more about the details, let's look at the law:
    https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you
  • Next-Token Predictor Is An AI's Job, Not Its Species

    02/04/2026 | 16 mins.
    I.
    In The Argument, Kelsey Piper gives a good description of the ways that AIs are more than just "next-token predictors" or "stochastic parrots" - for example, they also use fine-tuning and RLHF. But commenters, while appreciating the subtleties she introduces, object that these are still just extra layers on top of a machine that basically runs on next-token prediction.
    I want to approach this from a different direction. I think overemphasizing next-token prediction is a confusion of levels. On the levels where AI is a next-token predictor, you are also a next-token (technically: next-sense-datum) predictor. On the levels where you're not a next-token predictor, AI isn't one either.
  • The Pentagon Threatens Anthropic

    14/03/2026 | 23 mins.
    Here's my understanding of the situation:
    Anthropic signed a contract with the Pentagon last summer. It originally said the Pentagon had to follow Anthropic's Usage Policy like everyone else. In January, the Pentagon attempted to renegotiate, asking to ditch the Usage Policy and instead have Anthropic's AIs available for "all lawful purposes". Anthropic demurred, asking for a guarantee that their AIs would not be used for mass surveillance of American citizens or no-human-in-the-loop killbots. The Pentagon refused the guarantees, demanding that Anthropic accept the renegotiation unconditionally and threatening "consequences" if they refused. These consequences are generally understood to be some mix of:
  • canceling the contract
  • using the Defense Production Act, a law which lets the Pentagon force companies to do things, to force Anthropic to agree
  • the nuclear option: designating Anthropic a "supply chain risk". This would ban US companies that use Anthropic products from doing business with the military. Since many companies do some business with the government, this would lock them out of large parts of the corporate world and be potentially fatal to their business. The "supply chain risk" designation has previously only been used for foreign companies like Huawei that we think are using their connections to spy on or implant malware in American infrastructure. Using it as a bargaining chip to threaten a domestic company in contract negotiations is unprecedented.
    https://www.astralcodexten.com/p/the-pentagon-threatens-anthropic
  • Malicious Streetlight Effects Vs. "Directional Correctness" - A Semi-Non-Apology

    14/03/2026 | 5 mins.
    Malicious streetlights are an evil trick from Dark Data Journalism. Some annoying enemy has a valid complaint. So you use FACTS and LOGIC to prove that something similar-sounding-but-slightly-different is definitely false. Then you act like you've debunked the complaint.
    My "favorite" example, spotted during the 2016 election, was a response to some #BuildTheWall types saying that illegal immigration through the southern border was near record highs. Some data journalist got good statistics and proved that the number of Mexicans illegally entering the country was actually quite low. When I looked into it further, I found that this was true - illegal immigration had shifted from Mexicans to Hondurans/Guatemalans/Salvadoreans etc. entering through Mexico. If you counted those, illegal immigration through the southern border was near record highs.
    But the inverse evil trick is saying something "directionally correct", ie slightly stronger than the truth can support. If your enemy committed assault, say he committed murder. If he committed sexual harassment, say he committed rape. If your drug increases cancer survival by 5% in rats, say that it "cures cancer". Then, if someone calls you on it, accuse them of "literally well ackshually-ing" you, because you were "directionally correct" and it's offensive to the victims to try to defend assault-committing sexual harassers. This is the sort of pathetic defense I called out in If It's Worth Your Time To Lie, It's Worth My Time To Correct It.
    But trying to call out one of these failure modes looks like falling into the other. I ran into this on my series of posts on crime last week. I wrote these because I regularly saw people make the arguments I tried to debunk.
    https://www.astralcodexten.com/p/malicious-streetlight-effects-vs

About Astral Codex Ten Podcast

The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.