
The Vernon Richard Show

Vernon Richards and Richard Bradshaw

34 episodes

    6 AI Tool Ideas That Will Transform How You Test

    02/03/2026 | 51 mins.
    In this episode, Richard and Vernon explore the evolving concept of automation in quality, especially in the context of AI and Gen AI. They discuss how new technologies are blurring the lines between testing and quality, and what this means for the future of software development and testing practices.
    00:00 - Intro
    00:52 - Welcome and weekly catch-up
    01:11 - Vern's deep dive into the AI rabbit hole
    02:39 - Rich's quiet(er) work week, new threads, and dentists
    04:15 - Richard buys a domain and we start the pod proper
    06:09 - Tool idea #1: Using an LLM to evaluate user stories and acceptance criteria automatically
    07:35 - Is analysing a story "testing" or "quality"? The ISTQB static analysis debate
    10:27 - Vernon's diabetes analogy: AI is forcing us to finally do what we always said we should
    12:19 - Better stories = better testing: how quality work amplifies everything downstream
    13:11 - Tool idea #2: "If we made this change, what areas of the system would be impacted?"
    14:23 - Distilling years of system knowledge into 5–10 questions an agent could ask
    18:37 - Tool idea #3: The PR Analyser — summarising code changes through a testing and quality lens
    21:45 - Vernon's "1 unit of effort, 5 units of testing" — the quality multiplier effect
    23:29 - Comparing story analysis to actual implementation: where did understanding diverge?
    24:43 - Tool idea #4: Dynamic test selection — cherry-picking the right tests to run first
    27:05 - Tool idea #5: An agent that analyses failed builds and attempts to fix them
    27:28 - Why Richard's first attempt always "fixed" the test instead of the code (and what was missing)
    29:21 - Dan's AI agents: one thinking partner, one employee monitoring production
    32:42 - The documentation goldmine: why AI-generated RCA notes might matter more than the fix
    33:39 - Tool idea #6: A holistic quality dashboard pulling insights across stories, code, tests, and process
    36:43 - John Cutler on context: it's not data you pass around — it's formed through interaction
    40:43 - More options than ever: whether it's testing, quality, or static analysis — you can do it differently now
    41:56 - The real skill: spotting the opportunity to make yourself more effective
    42:30 - GeePaw Hill's Lump of Code Fallacy and why task analysis matters
    43:34 - Why Richard got into automation: efficiency, not because he was told to
    45:03 - Vernon's big question: in a world where agents can do everything, what's your performance review about?
    46:52 - Context, craft, and product knowledge can't be delegated to tools yet
    48:29 - Call to action: What are you building? What tools couldn't you build before that you can now?
    49:29 - Upcoming: Test Automation Days and PeersCon Live in Nottingham
    Links to stuff we mentioned during the pod:
    04:15 - Automation in Quality
    Richard bought the automationinquality.com domain! The concept explored throughout this episode.

    05:28 - Kalpesh Sodha aka Kalps
    Shout out to Richard's colleague who played devil's advocate on the "is it testing or quality?" question

    07:31 - Static analysis
    29:44 - Dan "The Agile Guy" Elliott
    His post about how he uses AI agents as a "thinking partner" and an "employee" with different missions and capabilities
    Dan’s website
    Dan's LinkedIn

    36:52 - John Cutler
    John Cutler's piece on how context isn't just data you move around — it's formed through interaction between people
    John's newsletter
    John's LinkedIn

    42:37 - Rob Sabourin
    My quick Perplexity search for Rob's public material on Task Analysis
    Rob's LinkedIn

    42:45 - Michael "GeePaw" Hill
    His Lump of Code Fallacy. The idea that coding isn't just one activity — there are three flavours of work that occur when you code
    Michael's website
    Michael's Mastodon

    49:35 - Test Automation Days
    Richard will be keynoting at Test Automation Days
    Make sure you say hi if you’re there

    50:10 - PeersCon
    Vernon and Richard will be recording a live episode at PeersCon!
    If you're there, come say hi and grab a mic 🎙️
    Six Principles of Automation in Testing: Still Relevant in 2026?

    23/02/2026 | 1h 3 mins.
    In this episode, Richard Bradshaw and Vernon discuss the relevance and application of the six principles of automation in testing in the context of AI advancements. They explore how these principles hold up in 2026, the challenges faced in automation, and the future of testing strategies.

    00:00 - Intro
    01:47 - Welcome (Richard is not at home 👀)
    02:07 - Ramadan, cooking without tasting, and plastic teeth 🦷
    04:01 - Today's topic: revisiting the AiT principles ahead of a keynote
    04:58 - What is Automation in Testing (AiT)?
    06:49 - Principle 1: Supporting Testing over Replicating Testing
    07:01 - Vernon's take: testing is a performance, not a click sequence
    08:22 - What the industry promised vs what automation actually does
    08:49 - The serendipity you lose when a human isn't testing
    09:59 - Agentic testing: observing more, but still not replicating humans
    10:56 - The danger of anthropomorphising AI output
    12:10 - LLMs always give an answer — and that's the problem
    13:03 - Principle 2: Testability over Automatability
    13:14 - Vernon's take: narrow vs broad — operate, control, observe
    14:38 - Making apps automatable for the robots but not the humans
    15:37 - The shiniest framework in a broken testing context
    16:40 - If it's testable, it's probably automatable — but not vice versa
    16:55 - Automation strategy vs testing strategy: when they compete, everyone loses
    17:46 - The problem has always been testing, not automation
    19:57 - Principle 3: Testing Expertise over Coding Expertise
    20:18 - Vernon's take: testing expertise lets you leverage the tools
    21:47 - The spoonfed tests problem: great at automating, lost without guidance
    22:36 - The "code school" era: everyone told to learn to code
    22:51 - Coding agents have changed the maths on this
    26:01 - The new nuance: test design and framework knowledge over writing the code
    28:44 - Evaluating code is a testing problem — and LLMs can help you do it
    30:43 - Are agents as good as a junior developer?
    31:42 - Outcome Engineering (O16G) and the race to write the AI principles
    32:13 - Simon Wardley: we're in the wild west again
    33:22 - Principle 4: Problems over Tools
    33:29 - Vernon's take: the hammer and the nail
    34:07 - Don't let your problems be shaped by the framework you have
    34:36 - New automation opportunities beyond testing: PRs, logs, story review
    35:30 - Principle 5: Risk over Coverage
    36:12 - Vernon's take: 100% coverage ≠ 100% risk coverage
    38:00 - The one test case, one automated test fallacy
    39:04 - Where in the system is the risk? Do you even know your layers?
    39:49 - Probabilistic vs non-deterministic: refining the language around AI
    40:53 - Coverage as intentional vs coverage as a number someone picked once
    43:15 - Principle 6: Observability over Understanding
    43:24 - Vernon's take: just-in-time understanding vs reading everything upfront
    44:12 - What the principle was actually about: making automation results observable
    47:00 - Does this principle belong in testing, or has it grown into quality?
    49:00 - So... what's missing?
    50:00 - The four pillars: Strategy, Creation, Usage, and Education
    57:05 - Automation in Quality: the bigger opportunity
    01:01:00 - Wrap up + Vern's Lead Dev panel

    Links to stuff we mentioned during the pod:
    04:00 - Automation in Testing (AiT)
    The principles live at automationintesting.com
    AiT was co-created by Richard Bradshaw and Mark Winteringham

    04:00 - Test Automation Days
    The conference where Richard is giving his keynote — testautomationdays.com
    24:48 - James Thomas
    The "kid in a candy shop" himself — James's blog and LinkedIn
    31:42 - Outcome Engineering (O16G)
    The article Richard shared before recording — worth tracking down if you're interested in where agentic development practices are heading
    32:13 - Simon Wardley
    If you're not following Simon Wardley, please follow Simon Wardley! His work on Wardley Maps and situational awareness in strategy is essential reading
    Simon's LinkedIn
    43:30 - Abby Bangser
    Vern's go-to person for all things observability. Abby's LinkedIn
    46:04 - Noah Susman
    As it turns out, the quote Vern references, advanced monitoring as "indistinguishable from testing", was not by Noah! It was Ed Keyes at GTAC 2007.
    Noah's blog and LinkedIn
    59:30 - Angie Jones
    Vern's been reading Angie's work on testing AI-enabled applications here and here.
    Angie's website and LinkedIn
    01:01:30 - The Lead Dev panel Vernon will be part of
    "How to Measure the Business Impact of AI" — happening 25th February, free to sign up
    01:02:00 - Richard's Selenium Conf talk
    "Redefining Test Automation" — the talk that the Test Automation Days keynote is shaping up to be a spiritual successor to.
    This Was Supposed to Be About Testing

    26/01/2026 | 53 mins.
    This was supposed to be about testing. Instead, it turned into a conversation about burnout, money, leadership, community, AI, and what it actually takes to build a sustainable life in tech. Richard and Vernon kick off 2026 reflecting on what they're changing, what they're rebuilding, and how testing and quality fit into a future shaped by intention rather than hustle.
    Links to stuff we mentioned during the pod:
    05:19 - The Malazan Book of the Fallen by Steven Erikson
    14:59 - The $1k Challenge by Ali Abdaal that Vernon took part in last year
    17:23 - The video from Daniel Pink on how to have a successful year
    Here's where Daniel talks about having a Challenger Network (but the whole video is 😙🤌🏾)

    18:46 - Toby Sinclair
    Toby's website
    Toby's LinkedIn

    19:24 - Keith Klain
    Keith's blog
    Keith's podcast
    Keith's LinkedIn

    19:25 - Agile Testing Days conference
    35:45 - What is Model Drift?
    41:06 - Glue work
    Tanya's Glue Work presentation which you can read or watch
    Vernon's talk about how glue work impacts Quality Engineers, Testers, etc.

    48:06 - Gary "GaryVee" Vaynerchuk
    Gary's website
    Gary's YouTube

    00:00 - Intro
    00:54 - Greetings & where have we been?
    01:32 - The holidays
    02:34 - Rest & mood
    04:00 - Routines for success
    05:59 - Push-up challenge!
    08:35 - Dopamine detox
    10:28 - THE EPISODE BEGINS!
    10:29 - What are our personal 2026 themes (rather than resolutions)?
    10:59 - Rich's 2026 themes
    13:10 - Vern's themes
    17:58 - Friendship, loneliness, and being the initiator
    21:28 - Rich has two itches. One about writing...
    21:56 - ...and another about hats
    25:23 - Vern's leadership focus and testing foundations
    31:06 - AI work: data mindset, agents, and the vibe coding divide
    40:11 - Rant about AI testing being stuck in the past
    46:37 - Do "cool" shit and "talk" about it. How to stand out from AI Slop
    50:10 - Our podcast themes for 2026
    Shifting Left: Agile vs. Waterfall in QA

    21/10/2025 | 1h
    In this episode of The Vernon Richard Show, the hosts engage in light-hearted banter about football before diving into a deep discussion on QA, QE, and testing. They explore the concept of 'shift left' in software development, comparing its application in agile versus waterfall methodologies. The conversation shifts to the evolving roles of QA and QE in the context of AI's impact on the industry, emphasizing the importance of task analysis and building a quality culture within teams. The episode concludes with reflections on managing expectations in QA roles and the future of jobs in the field.
    00:00 - Intro
    00:48 - Welcome and "Hey" (may contain traces of ⚽️)
    04:45 - Olly's first question: Does shift left lend itself more to waterfall (than other methodologies)?
    14:41 - Olly's second question: Does this limit how much agile can be used? Is there potentially a new methodology that can emerge from this?
    22:31 - Olly's third question (remixed by Rich a little): "...is it more now a case of making people aware that they can, should be considering things ahead of development?"
    34:24 - Olly's fourth question: How far can you shift-left before it becomes overstepping?
    51:53 - Olly's... which question is this now?! Next question! That works!: Where does the QA role end?

    Links to stuff we mentioned during the pod:
    04:26 - Olly Fairhall
    Olly's LinkedIn
    Here's a link to what Olly sent us

    04:45 - Waterfall (in software development)
    Wikipedia article about the history of the term
    This article goes into a little more detail about the different phases and characteristics of the model

    07:29 - Dan Ashby's (yes DAN'S!) famous diagram is part of his often cited "Continuous Testing" post
    07:50 - For folks who don't understand that reference, it's the... taken (🥁) scene from the movie Taken
    08:10 - Rich's whiteboard used to get a lot more love 😞
    22:31 - Olly's questions and thoughts that are guiding our conversation. Thanks Olly!
    44:12 - The book "Who Not How" by Dan Sullivan and Dr. Benjamin Hardy
    46:33 - Elisabeth Hendrickson
    Get Elisabeth's excellent book Explore It!
    Elisabeth's LinkedIn

    46:49 - Alan Page
    Alan's newsletter
    Alan and Brent's podcast
    Alan's LinkedIn

    51:53 - Kelsey Hightower
    Kelsey did a Q&A at Cloud Native PDX and you can listen to the question and answer I was trying to describe here.
    I urge you to listen to the whole thing. Kelsey is an excellent orator, storyteller, and all-around human ❤️

    55:33 - Rob Sabourin
    My quick Perplexity search for Rob's public material on Task Analysis
    Rob's LinkedIn

    56:59 - Vernon's newsletter "Yeah But Does it Work?!"
    The issue mentioned is called "What Is The Vaughn Tan Rule and How Does It Impact Testing?" and talks about where we might start with unbundling
    Measuring Software Testing When The Labels Don’t Fit

    01/10/2025 | 1h
    This episode is about the struggle to explain, measure, and name the work testers and quality advocates actually do — especially when traditional labels and metrics fall short.
    Links to stuff we mentioned during the pod:
    05:05 - Defect Detection Rate (DDR)
    The rate at which bugs are detected per test case (automated or manual):
    (No. of defects found by test team / No. of test cases executed) × 100
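    For anyone who wants to sanity-check the arithmetic, here's a minimal sketch of the DDR calculation above (the function name and example numbers are ours, not from the episode):

```python
def defect_detection_rate(defects_found: int, test_cases_executed: int) -> float:
    """DDR: defects found by the test team per test case executed, as a percentage."""
    if test_cases_executed <= 0:
        raise ValueError("test_cases_executed must be positive")
    return (defects_found / test_cases_executed) * 100

# e.g. 12 defects found across 200 executed test cases -> 6.0%
print(defect_detection_rate(12, 200))
```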

    15:06 - David Evans' LinkedIn
    24:57 - Janet Gregory
    Janet's website
    Janet's LinkedIn

    26:01 - Defect Prevention Rate
    Perplexity search results here

    28:28 - Jerry Weinberg
    Jerry's Wikipedia page (his books are highly recommended)

    49:33 - Shift-Left: The concept of moving testing activities earlier in the software development lifecycle.

    Some resources explaining the Shift-Left concept (Perplexity link)
    00:00 - Intro
    01:11 - Welcome & "woke" testing 😳
    03:15 - QA, QE, Testing… whatever we call it, how do we measure if we're doing a good job?
    03:44 - Vernon’s first experience with testing metrics: more = better?
    05:00 - Defect Detection Rate enters the chat
    06:41 - Rich reverse engineers quality skills needed in the AI era
    10:54 - How do we know if we’re doing any of this well?
    12:40 - Trigger warning: the topic of coverage is incoming 😅
    16:54 - Bugs in production
    21:09 - Automation metrics: flakiness, pass rates, and execution time
    24:29 - Can you measure something that didn’t happen? (Prevention metrics)
    27:43 - Do DORA metrics actually measure prevention?
    32:03 - Here comes Jerry!
    33:50 - The one metric the business cares about...
    36:23 - QA vs QE: whose “quality” are we "assuring"?
    39:25 - What's the story behind the numbers?
    48:29 - Rich brings in Shift Left Testing
    50:14 - Metrics that reach beyond engineering
    53:14 - Rich gets a new perspective on QE and the business
    56:50 - Who does this work? Testers? QEs? Or someone else?


About The Vernon Richard Show

Vernon Richards and Richard Bradshaw discuss all things software testing, quality engineering, and life in the world of software development. Plus our own personal journeys navigating our careers and lives.
Podcast website
