Consistently Candid

Sarah Hastings-Woodhouse
AI safety, philosophy and other things.

Available Episodes

Showing 5 of 18 episodes
  • #18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more
    A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistle-stop tour of the most important developments we've seen over the last year! We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more! Follow Nathan on Twitter | Listen to The Cognitive Revolution | My Twitter & Substack
    Duration: 1:46:17
  • #17 Fun Theory with Noah Topper
    The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we having fun yet?' and 'could we be having more fun?'. It tries to answer some of the philosophical quandaries we might encounter when envisioning a post-AGI utopia. In this episode, I discussed Fun Theory with Noah Topper, whom loyal listeners will remember from episode 7, in which we tackled EY's equally interesting but less fun essay, A List of Lethalities. Follow Noah on Twitter and check out his Substack!
    Duration: 1:25:53
  • #16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies
    John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience. In this episode, we compared our experiences of encountering AI safety arguments for the first time, discussed the psychological experience of being aware of x-risk, and considered what messaging strategies the AI safety community should be using to engage more people. Listen & subscribe to the For Humanity Podcast on YouTube and follow John on Twitter!
    Duration: 52:49
  • #15 Should we be engaging in civil disobedience to protest AGI development?
    StopAI are a non-profit aiming to achieve a permanent ban on the development of AGI through peaceful protest. In this episode, I chatted with three of the founders of StopAI – Remmelt Ellen, Sam Kirchner and Guido Reichstadter. We talked about what protest tactics StopAI have been using, and why they want a stop (and not just a pause!) in the development of AGI. Follow Sam, Remmelt and Guido on Twitter | My Twitter
    Duration: 1:18:20
  • #14 Buck Shlegeris on AI control
    Buck Shlegeris is the CEO of Redwood Research, a non-profit working to reduce risks from powerful AI. We discussed Redwood's research into AI control, why we shouldn't feel confident that witnessing an AI escape attempt would persuade labs to undeploy dangerous models, lessons from the vetoing of SB1047, the importance of lab security and more. Posts discussed: 'The case for ensuring that powerful AIs are controlled', 'Would catching your AIs trying to escape convince AI developers to slow down or undeploy?' and 'You can, in fact, bamboozle an unaligned AI into sparing your life'. Follow Buck on Twitter and subscribe to his Substack!
    Duration: 50:52
