80,000 Hours Podcast

Rob, Luisa, and the 80,000 Hours team

Available Episodes

Showing 5 of 284 episodes
  • Serendipity, weird bets, & cold emails that actually work: Career advice from 16 former guests
    How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world’s most pressing problems? Should you specialise deeply or develop a unique combination of skills?

    From embracing failure to finding unlikely allies, we bring you 16 diverse perspectives from past guests who’ve found unconventional paths to impact and helped others do the same.

    Links to learn more and full transcript.

    Chapters:
    Cold open (00:00:00)
    Luisa's intro (00:01:04)
    Holden Karnofsky on just kicking ass at whatever (00:02:53)
    Jeff Sebo on what improv comedy can teach us about doing good in the world (00:12:23)
    Dean Spears on being open to randomness and serendipity (00:19:26)
    Michael Webb on how to think about career planning given the rapid developments in AI (00:21:17)
    Michelle Hutchinson on finding what motivates you and reaching out to people for help (00:41:10)
    Benjamin Todd on figuring out if a career path is a good fit for you (00:46:03)
    Chris Olah on the value of unusual combinations of skills (00:50:23)
    Holden Karnofsky on deciding which weird ideas are worth betting on (00:58:03)
    Karen Levy on travelling to learn about yourself (01:03:10)
    Leah Garcés on finding common ground with unlikely allies (01:06:53)
    Spencer Greenberg on recognising toxic people who could derail your career and life (01:13:34)
    Holden Karnofsky on the many jobs that can help with AI (01:23:13)
    Danny Hernandez on using world events to trigger you to work on something else (01:30:46)
    Sarah Eustis-Guthrie on exploring and pivoting in careers (01:33:07)
    Benjamin Todd on making tough career decisions (01:38:36)
    Hannah Ritchie on being selective when following others’ advice (01:44:22)
    Alex Lawsen on getting good mentorship (01:47:25)
    Chris Olah on cold emailing that actually works (01:54:49)
    Pardis Sabeti on prioritising physical health to do your best work (01:58:34)
    Chris Olah on developing good taste and technique as a researcher (02:04:39)
    Benjamin Todd on why it’s so important to apply to loads of jobs (02:09:52)
    Varsha Venugopal on embracing uncomfortable situations and celebrating failures (02:14:25)
    Luisa's outro (02:17:43)

    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Content editing: Katy Moore and Milo McGuire
    Transcriptions and web: Katy Moore
    Duration: 2:18:41
  • #215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power
    Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could flourish for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.

    Unfortunately, there’s every reason to think artificial general intelligence (AGI) will reverse that trend. Today’s guest — Tom Davidson of the Forethought Centre for AI Strategy — claims in a new paper published today that advanced AI enables power grabs by small groups, by removing the need for widespread human participation.

    Links to learn more, video, highlights, and full transcript: https://80k.info/td

    Also: come work with us on the 80,000 Hours podcast team! https://80k.info/work

    There are a few routes by which small groups might seize power:
    Military coups: Though rare in established democracies due to citizen/soldier resistance, future AI-controlled militaries may lack such constraints.
    Self-built hard power: History suggests maybe only 10,000 obedient military drones could seize power.
    Autocratisation: Leaders using millions of loyal AI workers, while denying others access, could remove democratic checks and balances.

    Tom explains several reasons why AI systems might follow a tyrant’s orders:
    They might be programmed to obey the top of the chain of command, with no checks on that power.
    Systems could contain "secret loyalties" inserted during development.
    Superior cyber capabilities could allow small groups to control AI-operated military infrastructure.

    Host Rob Wiblin and Tom discuss all this plus potential countermeasures.

    Chapters:
    Cold open (00:00:00)
    A major update on the show (00:00:55)
    How AI enables tiny groups to seize power (00:06:24)
    The 3 different threats (00:07:42)
    Is this common sense or far-fetched? (00:08:51)
    “No person rules alone.” Except now they might. (00:11:48)
    Underpinning all 3 threats: Secret AI loyalties (00:17:46)
    Key risk factors (00:25:38)
    Preventing secret loyalties in a nutshell (00:27:12)
    Are human power grabs more plausible than 'rogue AI'? (00:29:32)
    If you took over the US, could you take over the whole world? (00:38:11)
    Will this make it impossible to escape autocracy? (00:42:20)
    Threat 1: AI-enabled military coups (00:46:19)
    Will we sleepwalk into an AI military coup? (00:56:23)
    Could AIs be more coup-resistant than humans? (01:02:28)
    Threat 2: Autocratisation (01:05:22)
    Will AGI be super-persuasive? (01:15:32)
    Threat 3: Self-built hard power (01:17:56)
    Can you stage a coup with 10,000 drones? (01:25:42)
    That sounds a lot like sci-fi... is it credible? (01:27:49)
    Will we foresee and prevent all this? (01:32:08)
    Are people psychologically willing to do coups? (01:33:34)
    Will a balance of power between AIs prevent this? (01:37:39)
    Will whistleblowers or internal mistrust prevent coups? (01:39:55)
    Would other countries step in? (01:46:03)
    Will rogue AI preempt a human power grab? (01:48:30)
    The best reasons not to worry (01:51:05)
    How likely is this in the US? (01:53:23)
    Is a small group seizing power really so bad? (02:00:47)
    Countermeasure 1: Block internal misuse (02:04:19)
    Countermeasure 2: Cybersecurity (02:14:02)
    Countermeasure 3: Model spec transparency (02:16:11)
    Countermeasure 4: Sharing AI access broadly (02:25:23)
    Is it more dangerous to concentrate or share AGI? (02:30:13)
    Is it important to have more than one powerful AI country? (02:32:56)
    In defence of open sourcing AI models (02:35:59)
    2 ways to stop secret AI loyalties (02:43:34)
    Preventing AI-enabled military coups in particular (02:56:20)
    How listeners can help (03:01:59)
    How to help if you work at an AI company (03:05:49)
    The power ML researchers still have, for now (03:09:53)
    How to help if you're an elected leader (03:13:14)
    Rob’s outro (03:19:05)

    This episode was originally recorded on January 20, 2025.

    Video editing: Simon Monsour
    Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
    Camera operator: Jeremy Chevillotte
    Transcriptions and web: Katy Moore
    Duration: 3:22:44
  • Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys
    "We are aiming for a place where we can decouple the scorecard from our worthiness. It’s of course the case that in trying to optimise the good, we will always be falling short. The question is how much, and in what ways are we not there yet? And if we then extrapolate that to how much and in what ways am I not enough, that’s where we run into trouble." —Hannah BoettcherWhat happens when your desire to do good starts to undermine your own wellbeing?Over the years, we’ve heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today’s episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how to best navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way.Check out the full transcript and links to learn more: https://80k.info/mhIf you’re dealing with your own mental health concerns, here are some resources that might help:If you’re feeling at risk, try this for the the UK: How to get help in a crisis, and this for the US: National Suicide Prevention Lifeline.The UK’s National Health Service publishes useful, evidence-based advice on treatments for most conditions.Mental Health Navigator is a service that simplifies finding and accessing mental health information and resources all over the world — built specifically for the effective altruism communityWe recommend this summary of treatments for depression, this summary of treatments for anxiety, and Mind Ease, an app created by Spencer Greenberg.We’d also recommend It’s Not Always Depression by Hilary Hendel.Some on our team have found Overcoming Perfectionism and Overcoming Low Self-Esteem very helpful.And there’s even more resources listed on these episode pages: Having a successful career with depression, anxiety, and imposter syndrome, Hannah Boettcher on the mental health challenges that come with trying to have a big impact, Tim LeBon on how altruistic perfectionism is self-defeating.Chapters:Cold open (00:00:00)Luisa's intro (00:01:32)80,000 Hours’ former CEO Howie on what his anxiety and self-doubt feels like (00:03:47)Evolutionary psychiatrist Randy Nesse on what emotions are for (00:07:35)Therapist Hannah Boettcher on how striving for impact can affect our self-worth (00:13:45)Luisa Rodriguez on grieving the gap between who you are and who you wish you were (00:16:57)Charity director Cameron Meyer Shorb on managing work-related guilt and shame (00:24:01)Therapist Tim LeBon on aiming for excellence rather than perfection (00:29:18)Author Cal Newport on making time to be alone with our thoughts (00:36:03)80,000 Hours career advisors Michelle Hutchinson and Habiba Islam on prioritising mental health over career impact (00:40:28)Charity founder Sarah Eustis-Guthrie on the ups and downs of founding an organisation (00:45:52)Our World in Data researcher Hannah Ritchie on feeling like an imposter as a generalist (00:51:28)Moral philosopher Will MacAskill on being proactive about mental health and preventing burnout (01:00:46)Grantmaker Ajeya Cotra on the psychological toll of big open-ended research questions (01:11:00)Researcher and grantmaker Christian Ruhl on how having a stutter affects him personally and professionally (01:19:30)Mercy For Animals’ CEO Leah Garcés on insisting on self-care when doing difficult work (01:32:39)80,000 Hours’ former CEO Howie on balancing a job and mental illness 
(01:37:12)Therapist Hannah Boettcher on how self-compassion isn’t self-indulgence (01:40:39)Journalist Kelsey Piper on communicating about mental health in ways that resonate (01:43:32)Luisa's outro (01:46:10)Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Katy Moore and Milo McGuireTranscriptions and web: Katy Moore
    Duration: 1:47:10
  • #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway
    Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.

    So some are developing a backup plan to safely deploy models we fear are actively scheming to harm us — so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.

    Today’s guest — Buck Shlegeris, CEO of Redwood Research — has spent the last few years developing control mechanisms, and for human-level systems they’re more plausible than you might think. He argues that given companies’ unwillingness to incur large costs for security, accepting the possibility of misalignment and designing robust safeguards might be one of our best remaining options.

    Links to learn more, highlights, video, and full transcript.

    As Buck puts it: "Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem."

    Of course, even if Buck is right, we still need to do those 40 things — which he points out we’re not on track for. And AI control agendas have their limitations: they aren’t likely to work once AI systems are much more capable than humans, since greatly superhuman AIs can probably work around whatever limitations we impose.

    Still, AI control agendas seem to be gaining traction within AI safety. Buck and host Rob Wiblin discuss all of the above, plus:
    Why he’s more worried about AI hacking its own data centre than escaping
    What to do about “chronic harm,” where AI systems subtly underperform or sabotage important work like alignment research
    Why he might want to use a model he thought could be conspiring against him
    Why he would feel safer if he caught an AI attempting to escape
    Why many control techniques would be relatively inexpensive
    How to use an untrusted model to monitor another untrusted model
    What the minimum viable intervention in a “lazy” AI company might look like
    How even small teams of safety-focused staff within AI labs could matter
    The moral considerations around controlling potentially conscious AI systems, and whether it’s justified

    Chapters:
    Cold open (00:00:00)
    Who’s Buck Shlegeris? (00:01:27)
    What's AI control? (00:01:51)
    Why is AI control hot now? (00:05:39)
    Detecting human vs AI spies (00:10:32)
    Acute vs chronic AI betrayal (00:15:21)
    How to catch AIs trying to escape (00:17:48)
    The cheapest AI control techniques (00:32:48)
    Can we get untrusted models to do trusted work? (00:38:58)
    If we catch a model escaping... will we do anything? (00:50:15)
    Getting AI models to think they've already escaped (00:52:51)
    Will they be able to tell it's a setup? (00:58:11)
    Will AI companies do any of this stuff? (01:00:11)
    Can we just give AIs fewer permissions? (01:06:14)
    Can we stop human spies the same way? (01:09:58)
    The pitch to AI companies to do this (01:15:04)
    Will AIs get superhuman so fast that this is all useless? (01:17:18)
    Risks from AI deliberately doing a bad job (01:18:37)
    Is alignment still useful? (01:24:49)
    Current alignment methods don't detect scheming (01:29:12)
    How to tell if AI control will work (01:31:40)
    How can listeners contribute? (01:35:53)
    Is 'controlling' AIs kind of a dick move? (01:37:13)
    Could 10 safety-focused people in an AGI company do anything useful? (01:42:27)
    Benefits of working outside frontier AI companies (01:47:48)
    Why Redwood Research does what it does (01:51:34)
    What other safety-related research looks best to Buck? (01:58:56)
    If an AI escapes, is it likely to be able to beat humanity from there? (01:59:48)
    Will misaligned models have to go rogue ASAP, before they're ready? (02:07:04)
    Is research on human scheming relevant to AI? (02:08:03)

    This episode was originally recorded on February 21, 2025.

    Video: Simon Monsour and Luke Monsour
    Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
    Transcriptions and web: Katy Moore
    Duration: 2:16:03
  • 15 expert takes on infosec in the age of AI
    "There’s almost no story of the future going well that doesn’t have a part that’s like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You’re training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It’s hard to tell a story where it’s not a factor. It’s easy to tell a story where it is a factor." — Holden KarnofskyWhat happens when a USB cable can secretly control your system? Are we hurtling toward a security nightmare as critical infrastructure connects to the internet? Is it possible to secure AI model weights from sophisticated attackers? And could AI might actually make computer security better rather than worse?With AI security concerns becoming increasingly urgent, we bring you insights from 15 top experts across information security, AI safety, and governance, examining the challenges of protecting our most powerful AI models and digital infrastructure — including a sneak peek from an episode that hasn’t yet been released with Tom Davidson, where he explains how we should be more worried about “secret loyalties” in AI agents. You’ll hear:Holden Karnofsky on why every good future relies on strong infosec, and how hard it’s been to hire security experts (from episode #158)Tantum Collins on why infosec might be the rare issue everyone agrees on (episode #166)Nick Joseph on whether AI companies can develop frontier models safely with the current state of information security (episode #197)Sella Nevo on why AI model weights are so valuable to steal, the weaknesses of air-gapped networks, and the risks of USBs (episode #195)Kevin Esvelt on what cryptographers can teach biosecurity experts (episode #164)Lennart Heim on on Rob’s computer security nightmares (episode #155)Zvi Mowshowitz on the insane lack of security mindset at some AI companies (episode #184)Nova DasSarma on the best current defences against well-funded adversaries, politically motivated cyberattacks, and exciting progress in infosecurity (episode #132)Bruce Schneier on whether AI could eliminate software bugs for good, and why it’s bad to hook everything up to the internet (episode #64)Nita Farahany on the dystopian risks of hacked neurotech (episode #174)Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (episode #194)Nathan Labenz on how even internal teams at AI companies may not know what they’re building (episode #176)Allan Dafoe on backdooring your own AI to prevent theft (episode #212)Tom Davidson on how dangerous “secret loyalties” in AI models could be (episode to be released!)Carl Shulman on the challenge of trusting foreign AI models (episode #191, part 2)Plus lots of concrete advice on how to get into this field and find your fitCheck out the full transcript on the 80,000 Hours website.Chapters:Cold open (00:00:00)Rob's intro (00:00:49)Holden Karnofsky on why infosec could be the issue on which the future of humanity pivots (00:03:21)Tantum Collins on why infosec is a rare AI issue that unifies everyone (00:12:39)Nick Joseph on whether the current state of information security makes it impossible to responsibly train AGI (00:16:23)Nova DasSarma on the best available defences against well-funded adversaries (00:22:10)Sella Nevo on why AI model weights are so valuable to steal (00:28:56)Kevin Esvelt on what cryptographers can teach biosecurity experts (00:32:24)Lennart Heim on the 
possibility of an autonomously replicating AI computer worm (00:34:56)Zvi Mowshowitz on the absurd lack of security mindset at some AI companies (00:48:22)Sella Nevo on the weaknesses of air-gapped networks and the risks of USB devices (00:49:54)Bruce Schneier on why it’s bad to hook everything up to the internet (00:55:54)Nita Farahany on the possibility of hacking neural implants (01:04:47)Vitalik Buterin on how cybersecurity is the key to defence-dominant futures (01:10:48)Nova DasSarma on exciting progress in information security (01:19:28)Nathan Labenz on how even internal teams at AI companies may not know what they’re building (01:30:47)Allan Dafoe on backdooring your own AI to prevent someone else from stealing it (01:33:51)Tom Davidson on how dangerous “secret loyalties” in AI models could get (01:35:57)Carl Shulman on whether we should be worried about backdoors as governments adopt AI technology (01:52:45)Nova DasSarma on politically motivated cyberattacks (02:03:44)Bruce Schneier on the day-to-day benefits of improved security and recognising that there’s never zero risk (02:07:27)Holden Karnofsky on why it’s so hard to hire security people despite the massive need (02:13:59)Nova DasSarma on practical steps to getting into this field (02:16:37)Bruce Schneier on finding your personal fit in a range of security careers (02:24:42)Rob's outro (02:34:46)Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic ArmstrongContent editing: Katy Moore and Milo McGuireTranscriptions and web: Katy Moore
    Duration: 2:35:54


About 80,000 Hours Podcast

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez.