The New Stack Podcast

366 episodes
  • Microsoft wants to make service mesh invisible

    08/04/2026 | 21 mins.
    At KubeCon EU 2026, Mitch Connors of Microsoft outlined a vision to make service meshes effectively invisible to users. Now working on Azure Kubernetes Application Network, a fully managed service built on Istio’s ambient mode, Connors aims to deliver core capabilities like mTLS without requiring users to engage with the complexity traditionally associated with service meshes. Ambient mode eliminates sidecar upgrade challenges by shifting functionality to node-level and waypoint proxies, though adoption still faces hurdles, including lagging CVE patching.

    Connors emphasized that AI workloads are reshaping network demands, as request variability in large language models requires smarter routing and resource management. Istio is addressing this through a two-speed model: stable APIs for reliability and experimental integrations like Agent Gateway for emerging AI protocols. Features such as inference-aware routing and policy enforcement for approved LLM endpoints highlight the mesh’s growing role in AI governance.
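
The inference-aware routing idea above can be sketched conceptually. In this toy Python sketch, the router prefers model replicas that still have free KV cache and, among those, the one with the fewest in-flight requests; the fields and thresholds are illustrative assumptions, not Istio's or Agent Gateway's actual API:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    in_flight: int          # requests this replica is currently serving
    kv_cache_free: float    # fraction of KV cache still available (0.0-1.0)

def pick_backend(backends, min_cache_free=0.2):
    """Prefer replicas with spare KV cache, then the fewest in-flight requests."""
    eligible = [b for b in backends if b.kv_cache_free >= min_cache_free]
    pool = eligible or backends   # if all are cache-starved, fall back to least loaded
    return min(pool, key=lambda b: b.in_flight)

replicas = [
    Backend("llm-0", in_flight=12, kv_cache_free=0.05),
    Backend("llm-1", in_flight=7,  kv_cache_free=0.40),
    Backend("llm-2", in_flight=3,  kv_cache_free=0.10),
]
print(pick_backend(replicas).name)  # llm-1: llm-2 is less loaded but cache-starved
```

The point of the sketch is the contrast with plain round-robin: for LLM traffic, the cheapest replica by request count may be the worst choice once memory pressure is considered.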

    With multi-cluster support and GPU scarcity driving workload mobility, Microsoft’s approach bets that simplifying and abstracting the mesh will broaden adoption while meeting the evolving needs of AI-driven systems.

    Learn more from The New Stack about service meshes: 

    The Hidden Costs of Service Meshes

    All the Things a Service Mesh Can Do

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • Amazon EKS Auto Mode wants to end Kubernetes toil — one node at a time

    07/04/2026 | 22 mins.
    At KubeCon + CloudNativeCon Europe 2026 in Amsterdam, Alex Kestner, principal product manager for Amazon Elastic Kubernetes Service (EKS), discussed how Amazon EKS Auto Mode aims to reduce the operational burden of running Kubernetes at scale. While Kubernetes delivers significant power, it also introduces complexity—particularly through repetitive, day-to-day tasks like managing node lifecycles, ensuring security updates, and selecting optimal infrastructure.

    Kestner emphasized that much of this “undifferentiated heavy lifting” distracts platform teams from delivering business value. Amazon EKS Auto Mode addresses this by automating infrastructure operations across the full node lifecycle, shifting responsibility for key operational components outside the cluster and into AWS-managed services.

    Built in collaboration with the EC2 team and leveraging technologies like Karpenter, Auto Mode dynamically provisions right-sized compute resources based on workload requirements. While it doesn’t eliminate all challenges—such as unpredictable workloads or diverse deployment needs—it provides a more application-focused approach to scaling and cost optimization. Ultimately, Auto Mode represents a meaningful step toward simplifying Kubernetes operations in increasingly complex cloud-native environments.
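
The right-sizing decision described above can be illustrated with a toy version of what Karpenter-style provisioning does: choose the cheapest instance type that satisfies the pending workload's aggregate resource requests. The catalog below is a hard-coded stand-in; real Karpenter queries the live EC2 instance-type catalog and weighs many more dimensions:

```python
# Hypothetical catalog: (name, vCPU, memory GiB, hourly price $).
CATALOG = [
    ("m5.large",   2,  8, 0.096),
    ("m5.xlarge",  4, 16, 0.192),
    ("m5.2xlarge", 8, 32, 0.384),
]

def cheapest_fit(cpu_req, mem_req):
    """Return the cheapest instance type that covers the aggregate requests."""
    fits = [t for t in CATALOG if t[1] >= cpu_req and t[2] >= mem_req]
    return min(fits, key=lambda t: t[3])[0] if fits else None

print(cheapest_fit(3, 10))  # m5.xlarge: smallest type covering 3 vCPU / 10 GiB
```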

    Learn more from The New Stack about the latest developments with Amazon Elastic Kubernetes Service (EKS):

    2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS

    How Amazon EKS Auto Mode Simplifies Kubernetes Cluster Management (Part 1)

    A Deep Dive Into Amazon EKS Auto (Part 2)

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • Edge-forward: Akamai eyes sweet spot between centralized & decentralized AI inference

    01/04/2026 | 22 mins.
    At KubeCon + CloudNativeCon Europe 2026, Lena Hall and Thorsten Hans of Akamai outlined how the company is evolving from a CDN provider into a developer-focused cloud platform for AI. Akamai’s strategy centers on low-latency, distributed computing, combining managed Kubernetes, serverless functions, and a distributed AI inference platform to support modern workloads.

    With a global footprint of core and “distributed reach” datacenters, Akamai aims to bring compute closer to users while still leveraging centralized infrastructure for heavier processing. This hybrid model enables faster feedback loops critical for applications like fraud detection, robotics, and conversational AI.

    To address concerns about complexity, Akamai emphasizes managed infrastructure and self-service tools that abstract away integration challenges. Its platform supports open source through managed Kubernetes and pre-packaged tools, simplifying deployment.

    Akamai also invests in serverless technologies like WebAssembly-based functions, enabling developers to build and deploy globally distributed applications quickly. Overall, the company prioritizes developer experience, allowing teams to focus on application logic rather than infrastructure management.

    Learn more from The New Stack about how Akamai is transforming into a developer-focused cloud platform for AI:

    Akamai Picks Up Hosting for Kernel.org

    Should You Care About Fermyon Wasm Functions on Akamai?

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • Kubernetes co-founder Brendan Burns: AI-generated code will become as invisible as assembly

    24/03/2026 | 43 mins.
    In this episode of The New Stack Makers, Microsoft Corporate Vice President and Technical Fellow Brendan Burns discusses how AI is reshaping Kubernetes and modern infrastructure. Originally designed for stateless applications, Kubernetes is evolving to support AI workloads that require complex GPU scheduling, co-location, and failure sensitivity. Features like Dynamic Resource Allocation and projects such as KAITO introduce AI-specific capabilities, while maintaining Kubernetes’ core strength: vendor-neutral extensibility.

    Burns highlights that AI also changes how systems are monitored. Success is no longer binary; it depends on answer quality, user feedback, and large-scale testing using thousands of prompts and even AI evaluators. 
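
That style of evaluation can be sketched as a small harness: run a model over a batch of prompts, score each answer with a judge (a stub here; in practice often another LLM), and report aggregate quality instead of a single pass/fail bit. All names are illustrative:

```python
def judge(prompt, answer):
    """Stub scorer: a real system would ask an LLM to grade the answer 0.0-1.0."""
    return 1.0 if "scheduler" in answer.lower() else 0.2

def evaluate(model, prompts, threshold=0.7):
    """Return mean quality and pass rate over many prompts, not a binary result."""
    scores = [judge(p, model(p)) for p in prompts]
    mean = sum(scores) / len(scores)
    pass_rate = sum(s >= threshold for s in scores) / len(scores)
    return mean, pass_rate

# Toy model: answers scheduler questions well, everything else poorly.
toy_model = lambda p: ("The scheduler places pods onto nodes."
                       if "scheduler" in p else "I am not sure.")
prompts = ["What does the scheduler do?", "Explain etcd.", "Describe the scheduler."]
mean, pass_rate = evaluate(toy_model, prompts)
```

Scaling the same loop to thousands of prompts, with an LLM standing in for `judge`, is the pattern Burns describes.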

    On software development, Burns argues that the industry’s focus on reviewing AI-generated code is temporary. Just as developers stopped inspecting compiler output, AI-generated code will become a disposable artifact validated by tests and specifications. This shift will redefine engineering roles and may lead to programming languages designed for machines rather than humans, signaling a fundamental transformation in how software is built and maintained.

    Learn more from The New Stack about the latest developments around how AI is reshaping Kubernetes and modern infrastructure:

    How To Use AI To Design Intelligent, Adaptable Infrastructure

    The AI Infrastructure Crisis: When Ambition Meets Ancient Systems

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • AI can write your infrastructure code. There's a reason most teams won't let it.

    20/03/2026 | 29 mins.
    In this episode of The New Stack Agents, Marcin Wyszynski, co-founder of Spacelift and OpenTofu, explains how AI is transforming infrastructure as code (IaC). Originally built for individual operators, tools like Terraform struggled to scale across teams, prompting Wyszynski to help launch OpenTofu after HashiCorp’s 2023 license change. Now, the bigger shift is AI: engineers no longer write configuration languages like HCL manually, as AI tools generate it, dramatically lowering the barrier to entry.

    However, this creates a dangerous gap between generating infrastructure and truly understanding it—like using a phrasebook to ask questions in a foreign language but not understanding the response. In infrastructure, that lack of comprehension can lead to serious risks.

    To address this, Spacelift introduced Intent, which allows AI to directly interact with cloud systems in real time while enforcing deterministic guardrails through policy controls. The broader challenge remains balancing speed with control—enabling faster experimentation without sacrificing safety. Wyszynski argues that, like humans, AI can be trusted when constrained by strong guardrails.
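
The guardrail pattern described above, deterministic policy checks sitting between an AI's proposed action and its execution, can be sketched as follows; the schema and rules here are hypothetical, not Spacelift's actual policy API:

```python
# Toy policy layer: every AI-proposed action must pass deterministic checks
# before it is executed. Verbs and targets below are illustrative.
ALLOWED_VERBS = {"scale", "tag", "restart"}   # destructive verbs deliberately absent
PROTECTED_TARGETS = {"prod-db"}

def check(action):
    """Return (allowed, reason) for an AI-proposed action dict."""
    if action["verb"] not in ALLOWED_VERBS:
        return False, f"verb {action['verb']!r} is not on the allow list"
    if action["target"] in PROTECTED_TARGETS:
        return False, f"target {action['target']!r} is protected"
    return True, "ok"

# The AI may freely propose actions; only proposals that pass policy run.
ok, reason = check({"verb": "delete", "target": "staging-cache"})
```

The key property is that the check is deterministic and lives outside the model: however the AI was prompted, a `delete` never executes.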

    Learn more from The New Stack about how AI is transforming infrastructure as code (IaC):

    The Maturing State of Infrastructure as Code in 2025

    Generative AI Tools for Infrastructure as Code

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.

About The New Stack Podcast

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack