Anthropic’s Claude was used in the military operation to kidnap President Maduro earlier this year. Why? Unclear. Was this legal? Absolutely not.
More like this: AI In Gaza: Live from Mexico City
Surprise, surprise: the DoD feels that they should be able to use AI models however they want, as long as it’s lawful — but… was this lawful? They are now threatening to designate Anthropic as a supply chain risk. What does this all mean?
For this short, Alix was joined by Amos Toh, senior counsel at the Brennan Center for Justice, to help us understand why the US defence department and an AI company are arguing about how best to use AI models for dehumanising and unjust military purposes.
Further reading & resources:
Pentagon's use of Claude during Maduro raid sparks Anthropic feud — Axios, Feb 13
Anthropic on shaky ground with Pentagon amid feud after Maduro raid — The Hill, Feb 19
US used Anthropic's Claude during the Venezuela raid, WSJ reports — Reuters, Feb 16
Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid — WSJ, Feb 15
Amos’s Bluesky thread sharing more thoughts on the story
Computer Says Maybe Shorts bring in experts to give their ten-minute take on recent news. If there’s ever a news story you think we should bring in expertise on for the show, please email [email protected]

Post Production by Sarah Myles | Pre Production by Georgia Iacovou