
AI and productivity: A year-in-review with Microsoft, Google, and GitHub researchers
29/12/2025 | 42 mins.
As AI adoption accelerates across the software industry, engineering leaders are increasingly focused on a harder question: how to understand whether these tools are actually improving developer experience and organizational outcomes.
In this year-end episode of the Engineering Enablement podcast, host Laura Tacho is joined by Brian Houck from Microsoft, Collin Green and Ciera Jaspan from Google, and Eirini Kalliamvakou from GitHub to examine what 2025 research reveals about AI impact in engineering teams. The panel discusses why measuring AI’s effectiveness is inherently complex, why familiar metrics like lines of code continue to resurface despite their limitations, and how multidimensional frameworks such as SPACE and DORA provide a more accurate view of developer productivity.
The conversation also looks ahead to 2026, exploring how AI is beginning to reshape the role of the developer, how junior engineers’ skill sets may evolve, where agentic workflows are emerging, and why some widely shared AI studies were misunderstood. Together, the panel offers a grounded perspective on moving beyond hype toward more thoughtful, evidence-based AI adoption.
Where to find Brian Houck:
• LinkedIn: https://www.linkedin.com/in/brianhouck/
• Website: https://www.microsoft.com/en-us/research/people/bhouck/
Where to find Collin Green:
• LinkedIn: https://www.linkedin.com/in/collin-green-97720378
• Website: https://research.google/people/107023
Where to find Ciera Jaspan:
• LinkedIn: https://www.linkedin.com/in/ciera
• Website: https://research.google/people/cierajaspan/
Where to find Eirini Kalliamvakou:
• LinkedIn: https://www.linkedin.com/in/eirini-kalliamvakou-1016865/
• X: https://x.com/irina_kAl
• Website: https://www.microsoft.com/en-us/research/people/eikalli
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• X: https://x.com/rhein_wein
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro
(02:35) Introducing the panel and the focus of the discussion
(04:43) Why measuring AI’s impact is such a hard problem
(05:30) How Microsoft approaches AI impact measurement
(06:40) How Google thinks about measuring AI impact
(07:28) GitHub’s perspective on measurement and insights from the DORA report
(10:35) Why lines of code is a misleading metric
(14:27) The limitations of measuring the percentage of code generated by AI
(18:24) GitHub’s research on how AI is shaping the identity of the developer
(21:39) How AI may change junior engineers’ skill sets
(24:42) Google’s research on using AI and creativity
(26:24) High-leverage AI use cases that improve developer experience
(32:38) Open research questions for AI and developer productivity in 2026
(35:33) How leading organizations approach change and agentic workflows
(38:02) Why the METR paper resonated and how it was misunderstood
Referenced:
• Measuring AI code assistants and agents
• Kiro
• Claude Code - AI coding agent for terminal & IDE
• SPACE framework: a quick primer
• DORA | State of AI-assisted Software Development 2025
• Martin Fowler - by Gergely Orosz - The Pragmatic Engineer
• Seamful AI for Creative Software Engineering: Use in Software Development Workflows | IEEE Journals & Magazine | IEEE Xplore
• AI Where It Matters: Where, Why, and How Developers Want AI Support in Daily Work - Microsoft Research
• Unpacking METR’s findings: Does AI slow developers down?
• DX Annual 2026

Running data-driven evaluations of AI engineering tools
12/12/2025 | 37 mins.
AI engineering tools are evolving fast. New coding assistants, debugging agents, and automation platforms emerge every month. Engineering leaders want to take advantage of these innovations while avoiding costly experiments that create more distraction than impact.
In this episode of the Engineering Enablement podcast, host Laura Tacho and Abi Noda outline a practical model for evaluating AI tools with data. They explain how to shortlist tools by use case, run trials that mirror real development work, select representative cohorts, and ensure consistent support and enablement. They also highlight why baselines and frameworks like DX’s Core 4 and the AI Measurement Framework are essential for measuring impact.
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• X: https://x.com/rhein_wein
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
• Substack: https://substack.com/@abinoda
In this episode, we cover:
(00:00) Intro: Running a data-driven evaluation of AI tools
(02:36) Challenges in evaluating AI tools
(06:11) How often to reevaluate AI tools
(07:02) Incumbent tools vs challenger tools
(07:40) Why organizations need disciplined evaluations before rolling out tools
(09:28) How to size your tool shortlist based on developer population
(12:44) Why tools must be grouped by use case and interaction mode
(13:30) How to structure trials around a clear research question
(16:45) Best practices for selecting trial participants
(19:22) Why support and enablement are essential for success
(21:10) How to choose the right duration for evaluations
(22:52) How to measure impact using baselines and the AI Measurement Framework
(25:28) Key considerations for an AI tool evaluation
(28:52) Q&A: How reliable is self-reported time savings from AI tools?
(32:22) Q&A: Why not adopt multiple tools instead of choosing just one?
(33:27) Q&A: Tool performance differences and avoiding vendor lock-in
Referenced:
• Measuring AI code assistants and agents
• QCon conferences
• DX Core 4 engineering metrics
• DORA’s 2025 research on the impact of AI
• Unpacking METR’s findings: Does AI slow developers down?
• METR’s study on how AI affects developer productivity
• Claude Code
• Cursor
• Windsurf
• Do newer AI-native IDEs outperform other AI coding assistants?

DORA’s 2025 research on the impact of AI
21/11/2025 | 26 mins.
Nathen Harvey leads research at DORA, focused on how teams measure and improve software delivery. In today’s episode of Engineering Enablement, Nathen sits down with host Laura Tacho to explore how AI is changing the way teams think about productivity, quality, and performance.
Together, they examine findings from the 2025 DORA research on AI-assisted software development and DX’s Q4 AI Impact report, comparing where the data aligns and where important gaps emerge. They discuss why relying on traditional delivery metrics can give leaders a false sense of confidence and why AI acts as an amplifier, accelerating healthy systems while intensifying existing friction and failure.
The conversation focuses on how AI is reshaping engineering systems themselves. Rather than treating AI as a standalone tool, they explore how it changes workflows, feedback loops, team dynamics, and organizational decision-making, and why leaders need better system-level visibility to understand its real impact.
Where to find Nathen Harvey:
• LinkedIn: https://www.linkedin.com/in/nathen
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• X: https://x.com/rhein_wein
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro
(00:55) Why the four key DORA metrics aren’t enough to measure AI impact
(03:44) The shift from four to five DORA metrics and why leaders need more than dashboards
(06:20) The one-sentence takeaway from the 2025 DORA report
(07:38) How AI amplifies both strengths and bottlenecks inside engineering systems
(08:58) What DX data reveals about how junior and senior engineers use AI differently
(10:33) The DORA AI Capabilities Model and why AI success depends on how it’s used
(18:24) How a clear and communicated AI stance improves adoption and reduces friction
(23:02) Why talking to your teams still matters
Referenced:
• DORA | State of AI-assisted Software Development 2025
• Steve Fenton - Octonaut | LinkedIn
• AI-assisted engineering: Q4 impact report

How Monzo runs data-driven AI experimentation
31/10/2025 | 41 mins.
In this episode of Engineering Enablement, host Laura Tacho talks with Fabien Deshayes, who leads multiple platform engineering teams at Monzo Bank. Fabien explains how Monzo is adopting AI responsibly within a highly regulated industry, balancing innovation with structure, control, and data-driven decision-making.
They discuss how Monzo runs structured AI trials, measures adoption and satisfaction, and uses metrics to guide investment and training. Fabien shares why the company moved from broad rollouts to small, focused cohorts, how they are addressing existing PR review bottlenecks that AI has intensified, and what they have learned from empowering product managers and designers to use AI tools directly.
He also offers insights into budgeting and experimentation, the results Monzo is seeing from AI-assisted engineering, and his outlook on what comes next, from agent orchestration to more seamless collaboration across roles.
Where to find Fabien Deshayes:
• LinkedIn: https://www.linkedin.com/in/fabiendeshayes
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• X: https://x.com/rhein_wein
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro
(01:01) An overview of Monzo Bank and Fabien’s role
(02:05) Monzo’s careful, structured approach to AI experimentation
(05:30) How Monzo’s AI journey began
(06:26) Why Monzo chose a structured approach to experimentation and what criteria they used
(09:21) How Monzo selected AI tools for experimentation
(11:51) Why individual tool stipends don’t work for large, regulated organizations
(15:32) How Monzo measures the impact of AI tools and uses the data
(18:10) Why Monzo limits AI tool trials to small, focused cohorts
(20:54) The phases of Monzo’s AI rollout and how learnings are shared across the organization
(22:43) What Monzo’s data reveals about AI usage and spending
(24:30) How Monzo balances AI budgeting with innovation
(26:45) Results from DX’s spending poll and general advice on AI budgeting
(28:03) What Monzo’s data shows about AI’s impact on engineering performance
(29:50) The growing bottleneck in PR reviews and how Monzo is solving it with tenancies
(33:54) How product managers and designers are using AI at Monzo
(36:36) Fabien’s advice for moving the needle with AI adoption
(38:42) The biggest changes coming next in AI engineering
Referenced:
• Monzo
• The Go Programming Language
• Swift.org
• Kotlin
• GitHub Copilot in VS Code
• Cursor
• Windsurf
• Claude Code
• Planning your 2026 AI tooling budget: guidance for engineering leaders

Planning your 2026 AI tooling budget: guidance for engineering leaders
17/10/2025 | 38 mins.
In this episode of Engineering Enablement, Laura Tacho and Abi Noda discuss how engineering leaders can plan their 2026 AI budgets effectively amid rapid change and rising costs. Drawing on data from DX’s recent poll and industry benchmarks, they explore how much organizations should expect to spend per developer, how to allocate budgets across AI tools, and how to balance innovation with cost control.
Laura and Abi also share practical insights on building a multi-vendor strategy, evaluating ROI through the right metrics, and ensuring continuous measurement before and after adoption. They discuss how to communicate AI’s value to executives, avoid the trap of cost-cutting narratives, and invest in enablement and training to make adoption stick.
Where to find Abi Noda:
• LinkedIn: https://www.linkedin.com/in/abinoda
• Substack: https://substack.com/@abinoda
Where to find Laura Tacho:
• LinkedIn: https://www.linkedin.com/in/lauratacho/
• X: https://x.com/rhein_wein
• Website: https://lauratacho.com/
• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course
In this episode, we cover:
(00:00) Intro: Setting the stage for AI budgeting in 2026
(01:45) Results from DX’s AI spending poll and early trends
(03:30) How companies are currently spending and what to watch in 2026
(04:52) Why clear definitions for AI tools matter and how Laura and Abi think about them
(07:12) The entry point for 2026 AI tooling budgets and emerging spending patterns
(10:14) Why 2026 is the year to prove ROI on AI investments
(11:10) How organizations should approach AI budgeting and allocation
(15:08) Best practices for managing AI vendors and enterprise licensing
(17:02) How to define and choose metrics before and after adopting AI tools
(19:30) How to identify bottlenecks and AI use cases with the highest ROI
(21:58) Key considerations for AI budgeting
(25:10) Why AI investments are about competitiveness, not cost-cutting
(27:19) How to use the right language to build trust and executive buy-in
(28:18) Why training and enablement are essential parts of AI investment
(31:40) How AI add-ons may increase your tool costs
(32:47) Why custom and fine-tuned models aren’t relevant for most companies today
(34:00) The tradeoffs between stipend models and enterprise AI licenses
Referenced:
• DX Core 4 Productivity Framework
• Measuring AI code assistants and agents
• 2025 State of AI Report: The Builder's Playbook
• GitHub Copilot · Your AI pair programmer
• Cursor
• Glean
• Claude Code
• ChatGPT
• Windsurf
• Track Claude Code adoption, impact, and ROI, directly in DX
• Measuring AI code assistants and agents with the AI Measurement Framework
• Driving enterprise-wide AI tool adoption
• Sentry
• Poolside



Engineering Enablement by DX