
Enough About AI


Available Episodes (5 of 9)
  • Bursting the Bubble?
    Dónal and Ciarán explore the increasingly pressing questions about the overinflation of the major AI companies' valuations and the anxiety about whether we are in a bubble - and what might happen if it pops. Relevant to their discussion of the need for AI models to keep the hype levels high, they discuss the muted reception to the release of GPT-5 and some of the emerging strategies to make AI chatbots more palatable to certain audiences who are worried about "woke".
    Topics in this episode
    • Are we in an AI bubble? Spoiler alert: yes - by any normal metric of what an investment bubble is - but why is the promise of an almost-there superintelligence keeping things from popping?
    • Despite the bubble, the lasting impacts that AI is already having on jobs and society
    • The recent release of GPT-5 has drawn negative feedback from the tech press and vocal users - this is contrasted with other recent version releases
    • How AI companies are trying to find new ways to add value, leading to a discussion of "third devices" and AI hardware
    • The limitations and diminishing returns of training on synthetic data - and the apparent slowing down in model progress
    • AI & ideology - what does it mean to have a non-woke AI?
    Resources & Links
    • The Economist story mentioned by Dónal: "AI valuations are verging on the unhinged - Unless superintelligence is just around the corner" (25 June 2025)
    • Article in TheJournal.ie on "Brendan", the AI Dublin tour guide
    • ChatGPT's dodgy graph is linked and discussed here: "OpenAI gets caught vibe graphing" (The Verge, 7 August)
    • Sam Altman (OpenAI) tells venture capitalists that he will take billions of their money and build AGI - and then ask it how to make a return on the investment (Twitter video, Warren Terra)
    • Some good discussion on the struggles of agentive AI ("AI Agents have, so far, mostly been a dud", Gary Marcus, Substack)
    • Apple's important recent paper on the limitations of "reasoning" within tested reasoning models is available as a PDF here
    • Coverage of Truth Social's deal with Perplexity to make a non-woke chatbot for the platform
    You can get in touch with us at [email protected] - we'd love to hear your questions, comments or suggestions!
    46:46
  • Alignment Anxieties & Persuasion Problems
    Dónal and Ciarán continue the 2025 season with a second quarterly update that looks at some recent themes in AI development. They're pondering doom again, as we increasingly grapple with the evidence that AI systems are powerfully persuasive and full of flattery, at the same time as our ability to meaningfully supervise them seems to be diminishing.
    Topics in this episode
    • Can we see how reasoning models reason? If an AI is thinking, or sharing information, and it's not in human language, how can we check that it's aligned with our values? This interpretability issue is tied to the concept of neuralese - inscrutable machine thoughts!
    • We discuss the predictions and prophetic doom visions of the AI-2027 document
    • The increasing ubiquity, and sometimes invisibility, of AI as it's inserted into other products. Is this more enshittification?
    • AI is becoming a persuasion machine - we look at the recent issues on Reddit's r/ChangeMyView, where researchers skipped good ethics practice but ended up with worrying results
    • We talk about flattery, manipulation, and Eliezer Yudkowsky's AI-Box thought experiment
    Resources & Links
    • The AI-2027 piece, from Daniel Kokotajlo et al., is a must-read!
    • Dario Amodei's latest essay, "The Urgency of Interpretability"
    • T.O.P.I.C. - a detailed referencing model for indicating the use of GenAI tools in academic assignments
    • Yudkowsky's AI-box experiment, described on his site
    • "The Worst Internet-Research Ethics Violation I Have Ever Seen" - coverage of the University of Zurich / Reddit study, by Tom Bartlett for The Atlantic
    • ChatGPT wants us to buy things via our AI conversations (reported by Reece Rogers for Wired)
    You can get in touch with us at [email protected] - we'd love to hear your questions, comments or suggestions!
    46:48
  • Political Upheaval & Reasoning Models
    Dónal and Ciarán start a new season for 2025, with a slight change in format to bring you roughly quarterly updates on the themes and topics required to help you know enough about AI. This first episode of 2025, recorded in mid-February, gives a summary of what's happened since last November and answers some of your submitted questions. (Thanks for those!)
    Topics in this episode
    • We can't avoid talking about them: Musk & Trump, and some of the effects they've both had on the AI space in the last three months
    • Regulatory developments, and the EU's AI Act starting to come into force
    • US-EU relations, and the continued innovation vs regulation chatter
    • DeepSeek and China's bold entry into the contemporary AI model space
    • What are reasoning models and how do they work?
    • Some history and concepts related to machine logic
    • AI benchmarks - a quick primer as we move closer to "Humanity's Last Exam"
    Resources & Links
    • JD Vance's speech from the Paris AI conference (The American Presidency Project)
    • Details on HLE (Humanity's Last Exam), including some sample questions. Let us know how you did!
    • Details on the ARC-AGI benchmark
    • A graph of OpenAI model performance (and discussion) at 80,000 Hours
    You can get in touch with us at [email protected] - we'd love to hear your questions, comments or suggestions!
    56:04
  • 2024 E6 Doom of Humanity?
    Dónal and Ciarán discuss the ways - both real and imagined in fiction - that AI could bring about civilization-ending doom for us all. What can we learn from how sci-fi has treated this topic? What are the distant and nearer potential dooms, and what can we do now, apart from saying thanks to ChatGPT? Oh, and note that listening to this episode may drastically affect your life and cause a future powerful AI to punish you in a psychic prison!
    Topics in this episode
    • What is p(Doom) and why are we hearing about it from AI researchers and investors?
    • How has AI doom been dealt with in sci-fi, and can this teach us anything useful?
    • What is Dead Internet Theory and why might AI contribute to the enshittification of the internet?
    • Why has the religious concept of Pascal's Wager found a new form in AI discussions that started on internet forums?
    Resources & Links
    • More on the history of p(Doom) on Wikipedia
    • An interesting article on Dead Internet Theory & AI: Walter, Y. Artificial influencers and the dead internet theory. AI & Society (2024)
    • Read about Roko's Basilisk (if you dare)
    • More on Roko's Basilisk on the LessWrong forum, where the thought experiment emerged in 2009
    You can get in touch with us at [email protected] - we'd love to hear your questions, comments or suggestions!
    44:26
  • 2024 E5 Misinformation and Regulation
    Dónal and Ciarán discuss some of the concerns about misinformation and disinformation that have emerged with the rise of impressively capable GenAI models, and provide some detail on what their effects might be. They discuss the calls for regulation and how this has begun to take shape in the EU, Ireland, and elsewhere.
    Topics in this episode
    • What are the implications for misinformation inherent in the current and emerging GenAI models?
    • Why have there been calls to pause development, and why did this not lead anywhere?
    • How have the various language, image, audio, and video models already been used for problematic content?
    • Is social media ready for the onslaught to come?
    • Can we regulate AI to combat this, and how is that beginning?
    • Why should we be critical of offers to self-regulate from the tech companies?
    • What's the EU AI Act? And why is Ireland using the word "doomsayers" in policy documents about AI?
    Resources & Links
    • The EU's AI Act: https://artificialintelligenceact.eu/
    • Some of ISD's work on AI & misinformation: https://www.isdglobal.org/digital_dispatches/disconnected-from-reality-american-voters-grapple-with-ai-and-flawed-osint-strategies/
    • More on the Slovak deepfake case discussed by Ciarán: https://misinforeview.hks.harvard.edu/article/beyond-the-deepfake-hype-ai-democracy-and-the-slovak-case/
    • GenAI & ISIS: https://gnet-research.org/2024/02/05/ai-caliphate-pro-islamic-state-propaganda-and-generative-ai/
    • The Irish Government's "Friend or Foe" report: https://www.gov.ie/en/publication/6538e-artificial-intelligence-friend-or-foe/
    You can get in touch with us at [email protected] - we'd love to hear your questions, comments or suggestions!
    40:00

About Enough About AI

Enough about the key tech topic of our time for you to feel a bit more confident and informed. Dónal & Ciarán discuss AI, and keep you up to date.