
The New Stack Podcast

Latest episode

382 episodes

  • The New Stack Podcast

    How Microsoft is governing thousands of Kubernetes clusters without manual intervention

    07/05/2026 | 25 mins.
Managing Kubernetes at fleet scale introduces significant complexity, especially as organizations expand from a few clusters to hundreds or thousands across cloud, on-premises, and edge environments. While GitOps remains the dominant model for declarative management, its traditional one-to-one repository-to-cluster approach struggles to handle multi-cluster realities such as global traffic routing, shared secrets, and unified observability. As Stephane Erbrech, Principal Software Engineer at Microsoft, explains, the challenge shifts from deployment to governance: maintaining consistency, security, and compliance across a vast distributed system without manual intervention.

This need is amplified by the rise of AI workloads at the edge, where inference is increasingly decentralized. To address these challenges, Microsoft Azure Kubernetes Fleet Manager enables coordinated, staged rollouts across clusters, allowing teams to validate updates in lower-risk environments before production. Supporting this, Cilium Cluster Mesh provides seamless cross-cluster connectivity, enabling workload mobility and efficient resource use, especially for scarce GPU capacity. Together, these tools help modern platform teams manage lifecycle, networking, and orchestration at scale.
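The staged-rollout idea described in the episode can be illustrated with a minimal sketch. This is not the Azure Kubernetes Fleet Manager API (which works through declarative resources, not a Python client); the `Cluster` type, stage names, and health check here are hypothetical, chosen only to show the gating logic of promoting an update stage by stage:

```python
# Illustrative sketch of a staged fleet rollout: apply an update to each
# stage of clusters in order, and halt before later stages if any cluster
# in the current stage fails its post-deploy health check.
# Not the Fleet Manager API; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    stage: str            # e.g. "canary", "staging", "production"
    healthy: bool = True  # stands in for a real post-deploy health check
    version: str = "v1"

def staged_rollout(clusters, new_version,
                   stages=("canary", "staging", "production")):
    """Roll new_version out stage by stage; stop at the first stage
    where any cluster reports unhealthy, leaving later stages untouched."""
    for stage in stages:
        batch = [c for c in clusters if c.stage == stage]
        for c in batch:
            c.version = new_version
        if not all(c.healthy for c in batch):
            return f"halted at {stage}"
    return "complete"
```

The point of the gate is that a bad update reaches only the lower-risk stages: if a staging cluster fails its check, every production cluster keeps running the previous version.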

Learn more from The New Stack about managing Kubernetes at fleet scale:

    KubeFleet: The Future of Multicluster Kubernetes App Management

    Why Microsoft is betting on temporary identities to stop autonomous agents from going rogue

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • The New Stack Podcast

    Why long-running AI agents break on HTTP and how Ably is fixing it

    06/05/2026 | 31 mins.
In this episode of The New Stack Makers, Matthew O’Riordan, CEO of Ably, explains how infrastructure originally built for human collaboration is now well-suited for long-running AI agents. While Ably initially resisted positioning itself as an AI company, the rise of agents that reason, call tools, and operate over extended periods revealed a natural fit for its real-time communication platform.

    O’Riordan highlights the limitations of HTTP for these use cases. While effective for short, request-response interactions, HTTP struggles with persistent, stateful experiences—such as handling dropped connections, multi-device usage, or mid-task interruptions. To address this, a new “durable session” layer is emerging, enabling continuous synchronization between agents and users through shared state, presence, and recovery mechanisms.

    Ably’s solution, AI Transport, augments existing architectures by keeping HTTP for requests while shifting responses to durable sessions. Features like mutable message streams and “live objects” allow seamless reconnection and collaboration. The goal is to provide a drop-in layer that developers can adopt without rethinking their stack—moving beyond traditional pub/sub models.
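The "durable session" idea, in which a client that drops mid-task can reconnect and pick up where it left off, can be sketched in a few lines. This is not Ably's AI Transport API; the class and method names are hypothetical, and a real implementation would persist the log and handle presence, but the core mechanism is an ordered, replayable message log keyed by session:

```python
# Hypothetical sketch of a durable session layer: the server keeps an
# ordered message log per session, so a client that loses its connection
# can resume by replaying everything after the last sequence number it saw.
# Names are illustrative, not Ably's API.
from collections import defaultdict

class DurableSession:
    def __init__(self):
        # session_id -> list of (sequence_number, message)
        self._log = defaultdict(list)

    def publish(self, session_id, message):
        """Append a message to the session log and return its sequence number."""
        log = self._log[session_id]
        seq = len(log) + 1
        log.append((seq, message))
        return seq

    def resume(self, session_id, last_seen_seq=0):
        """Replay every message the client missed since last_seen_seq."""
        return [m for seq, m in self._log[session_id] if seq > last_seen_seq]
```

Plain request-response HTTP has no equivalent of `resume`: if the connection drops while an agent is mid-task, the partial response is simply lost, which is the gap the episode describes.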

Learn more from The New Stack about Ably and AI Transport:

    How MCP Uses Streamable HTTP for Real-Time AI Tool Interaction

    Ably Touts Real-Time Starter Kits for Vercel and Netlify

    AI Agents Need Help. Here’s 4 Ways To Ship Software Reliably

  • The New Stack Podcast

    Why the Linux Foundation adopted MCP, with Jim Zemlin and Mazin Gilbert

    06/05/2026 | 32 mins.
    Agentic AI is advancing rapidly, with open-source projects racing to keep pace with real-world deployment. To accelerate progress, the Linux Foundation consolidated key technologies—Model Context Protocol (MCP), Goose, and AGENTS.md—under the newly formed Agentic AI Foundation (AAIF) in late 2025. At the MCP Dev Summit in New York City, Linux Foundation CEO Jim Zemlin and newly appointed AAIF executive director Mazin Gilbert discussed this transition. Zemlin explained that leading both organizations was unsustainable, prompting a careful search for a leader with both technical expertise and collaborative leadership skills.

    Gilbert now takes on the challenge of guiding AAIF as it shapes the emerging agentic AI ecosystem. While the foundation currently oversees three projects, its broader mission involves defining the future architecture of agent-driven systems—deciding what to build, when, and why. These decisions will influence the trajectory of open-source AI development. The conversation also highlights the importance of open collaboration, funding dynamics, and early adopters in shaping the agentic stack’s evolution.


Learn more from The New Stack about the latest in open-source projects and the Linux Foundation:

    Anthropic Donates the MCP Protocol to the Agentic AI Foundation

    SAFE-MCP, a Community-Built Framework for AI Agent Security

    Google Donates the Agent2Agent Protocol to the Linux Foundation

  • The New Stack Podcast

    Fresh data has us asking, does AI demand Kubernetes?

    01/05/2026 | 23 mins.
    Kubernetes is rapidly emerging as the de facto operating system for AI, with two-thirds of organizations using it for generative AI inference and 82% adopting it in production. Its ecosystem — including tools like Kubeflow — enables organizations to build, scale, and retain control of AI systems through open, community-driven infrastructure. Bob Killen of CNCF and Liam Bollmann-Dodd of SlashData shared insights from recent reports showing that AI success still hinges on strong engineering fundamentals—especially internal developer platforms and overall developer experience.

    While AI-generated code accelerates development, it shifts bottlenecks to DevOps, reliability, and security, increasing operational complexity. As a result, operator experience and well-defined guardrails have become critical to safely scaling AI. These controls help constrain both human and AI developers, reducing risk while enabling speed. At the same time, organizations are evolving team structures, expanding platform engineering groups to support internal users more effectively. Despite growing complexity, the core lesson remains consistent: open source innovation thrives on people, processes, and collaboration as much as on technology itself.

Learn more from The New Stack about Kubernetes and its emergence as an operating system for AI:

    Kubernetes and AI: Are They a Fit?

    How AI Is Pushing Kubernetes Storage Beyond Its Limits

    Kubernetes and AI Are Shaping the Next Generation of Platforms

  • The New Stack Podcast

    How SUSE positions itself as the infrastructure layer for the AI era

    30/04/2026 | 26 mins.
In this episode of The New Stack Makers, Pete Smails outlines how SUSE is evolving from its Linux roots into an AI-native infrastructure platform. Speaking at KubeCon + CloudNativeCon Europe 2026, Smails explains the company’s strategy to unify AI, containers and virtual machines on a single open, enterprise-ready foundation. Central to this is SUSE Rancher Prime, which enables consistent orchestration across hybrid and multi-cloud environments, alongside SUSE Virtualization for modernizing legacy systems.

    A key innovation is “Liz,” a context-aware AI agent embedded in Rancher Prime that helps engineers identify vulnerabilities, troubleshoot deployments and interact with infrastructure using natural language. Unlike generic AI tools, Liz understands real-time cluster states and uses Model Context Protocol to deliver actionable insights.

    Smails emphasizes developer experience as critical to adoption, highlighting Rancher Developer Access for simplified local Kubernetes workflows. Overall, SUSE aims to deliver secure, automated infrastructure that reduces complexity while accelerating cloud-native and AI adoption.

Learn more from The New Stack about the latest from SUSE:

SUSE Displays Enhanced Enterprise Linux at SUSECON

    SUSE Launches a Sovereign Premium Support Service for EU Customers



About The New Stack Podcast

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
