What are we protecting? AI, learning, and the myth of the good old days | Ep. 60
In this episode of ChatEDU (What are we protecting? AI, learning, and the myth of the good old days), Matt and Jonathan return to the ChatEDU studio while Liz globe-trots her way to ASCD authorship, to tackle two big stories shaping the AI-in-education conversation. First, they dive into NASA’s spring guidance warning that generative AI is too unreliable for mission-critical applications, and unpack what that means for education, ethics, and expectations. Then, they go beneath the surface with a new article from Jonathan Costa exploring G.K. Chesterton’s “fence” and what it reveals about our assumptions around reading, writing, and what students really need to know. From dog impressions to deep epistemology, this episode covers serious ground.

Story 1: NASA’s Take on Generative AI
In a springtime memo to chief information officers, NASA came out strong: generative AI is not to be used for critical research or safety work. Why? Hallucinations, poor data quality, and ignored instructions are still too common. Matt and Jonathan explore the implications of this position and why context matters; what’s a dealbreaker in rocket science might be a minor annoyance in dinner recipes. They also do a dramatic reading of a fictional “AI performance review” pulled from a CIO.com op-ed to highlight how strange our current AI tolerance levels really are.

Beneath the Surface: Chesterton’s Fence and the Myth of the Good Old Days
Jonathan shares his new piece on Chesterton’s Fence, a metaphor for not tearing down long-standing traditions unless you understand why they exist. He and Matt explore how this metaphor applies to the future of literacy, learning, and school design in an AI-powered world. Does reading still matter if you can generate a podcast from any text? Is decoding the same as thinking? They examine writing, world languages, engineering fluency, and post-literate futures, while offering practical insights for superintendents navigating change. It’s a smart, provocative conversation about learning in the age of acceleration.

Bright Byte: Stanford’s BRP Discovery
This week’s Bright Byte spotlights a health tech breakthrough from Stanford Medicine. Using a peptide-predicting AI model, researchers identified BRP, a naturally occurring 12-amino-acid peptide that reduces appetite and body weight in animal studies with fewer side effects than Ozempic. The model analyzed 20,000 protein-coding genes to find active peptides, a task too complex for traditional lab methods. It’s another example of how AI can support high-impact research and deliver real-world benefits in health and medicine.

Announcements
Summer Micro-Credential Cohort is Open
Learn more and register at: skills21.org/ai/micro

Referenced Articles and Resources
Wendy Costa's awesome photography website
https://www.alternaterealityphotos.com/
NASA’s Generative AI Caution
https://www.computerworld.com/article/3951046/nasa-finds-generative-ai-cant-be-trusted.html
Stanford’s AI Discovery of BRP
https://med.stanford.edu/news/all-news/2025/03/ozempic-rival.html

Sponsor
This episode is sponsored by the National Center for Next Generation Manufacturing, supporting AI-powered innovation and workforce readiness.