#20 Frances Lorenz on the emotional side of AI x-risk, being a woman in a male-dominated online space & more
In this episode, I chatted with Frances Lorenz, events associate at the Centre for Effective Altruism. We covered our respective paths into AI safety, the emotional impact of learning about x-risk, what it's like to be female in a male-dominated community and more! Follow Frances on Twitter. Subscribe to her Substack. Apply for EAG London!
--------
51:42
#19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk
Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI. We discussed why AI poses an existential risk to humanity, what makes this problem so hard to solve, why Gabe believes we need to prevent the development of superintelligence for at least the next two decades, and more. Follow Gabe on Twitter. Read The Compendium and A Narrow Path.
--------
1:36:40
#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more
A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistlestop tour of the most important developments we've seen over the last year! We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more! Follow Nathan on Twitter. Listen to The Cognitive Revolution. My Twitter & Substack.
--------
1:46:17
#17 Fun Theory with Noah Topper
The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we having fun yet?' and 'could we be having more fun?'. It tries to answer some of the philosophical quandaries we might encounter when envisioning a post-AGI utopia. In this episode, I discussed Fun Theory with Noah Topper, whom loyal listeners will remember from episode 7, in which we tackled EY's equally interesting but less fun essay, A List of Lethalities. Follow Noah on Twitter and check out his Substack!
--------
1:25:53
#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies
John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience. In this episode, we compared our experiences of encountering AI safety arguments for the first time, the psychological experience of living with an awareness of x-risk, and the messaging strategies the AI safety community should be using to engage more people. Listen & subscribe to the For Humanity Podcast on YouTube and follow John on Twitter!