In this episode of the Vernon Richard show, the hosts engage in light-hearted banter about football before diving into a deep discussion on QA, QE, and testing. They explore the concept of 'shift left' in software development, comparing its application in agile versus waterfall methodologies. The conversation shifts to the evolving roles of QA and QE in the context of AI's impact on the industry, emphasizing the importance of task analysis and building a quality culture within teams. The episode concludes with reflections on managing expectations in QA roles and the future of jobs in the field.
--------
1:00:28
Measuring Software Testing When The Labels Don’t Fit
This episode is about the struggle to explain, measure, and name the work testers and quality advocates actually do — especially when traditional labels and metrics fall short.

Links to stuff we mentioned during the pod:
05:05 - Defect Detection Rate (DDR): the rate at which bugs are detected per test case (automated or manual). (No. of defects found by test team / No. of test cases executed) * 100
15:06 - David Evans' LinkedIn
24:57 - Janet Gregory: Janet's website, Janet's LinkedIn
26:01 - Defect Prevention Rate: Perplexity search results here
28:28 - Jerry Weinberg: Jerry's Wikipedia page (his books are highly recommended)
49:33 - Shift-Left: the concept of moving testing activities earlier in the software development lifecycle. Some resources explaining the Shift-Left concept (Perplexity link)

00:00 - Intro
01:11 - Welcome & "woke" testing 😳
03:15 - QA, QE, Testing… whatever we call it, how do we measure if we're doing a good job?
03:44 - Vernon's first experience with testing metrics: more = better?
05:00 - Defect Detection Rate enters the chat
06:41 - Rich reverse engineers quality skills needed in the AI era
10:54 - How do we know if we're doing any of this well?
12:40 - Trigger warning: the topic of coverage is incoming 😅
16:54 - Bugs in production
21:09 - Automation metrics: flakiness, pass rates, and execution time
24:29 - Can you measure something that didn't happen? (Prevention metrics)
27:43 - Do DORA metrics actually measure prevention?
32:03 - Here comes Jerry!
33:50 - The one metric the business cares about...
36:23 - QA vs QE: whose "quality" are we "assuring"?
39:25 - What's the story behind the numbers?
48:29 - Rich brings in Shift Left Testing
50:14 - Metrics that reach beyond engineering
53:14 - Rich gets a new perspective on QE and the business
56:50 - Who does this work? Testers? QEs? Or someone else?
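The DDR formula in the show notes can be sketched in a few lines of Python. This is just an illustration of the arithmetic; the function name and the sample numbers are ours, not from the episode:

```python
def defect_detection_rate(defects_found: int, test_cases_executed: int) -> float:
    """Defect Detection Rate (DDR), per the show notes:
    (No. of defects found by test team / No. of test cases executed) * 100."""
    if test_cases_executed <= 0:
        raise ValueError("test_cases_executed must be positive")
    return (defects_found / test_cases_executed) * 100

# e.g. 5 defects found across 200 executed test cases
print(defect_detection_rate(5, 200))  # → 2.5
```

As the hosts note, a ratio like this says nothing on its own about whether "more" is better — it only normalises defects against test-execution volume.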
--------
1:00:15
When Everything Sounds Like Testing… How Do You Explain What You Really Do?
In this episode, Richard and Vernon delve into the complexities of Quality Assurance (QA), Quality Engineering (QE), and testing in software development. They explore the evolution of these concepts, their interrelations, and the importance of metrics in assessing quality. The conversation highlights the need for a holistic approach to quality, emphasizing that both prevention and detection of bugs are essential. The hosts also discuss the challenges of defining these terms and the future of quality in the industry.

Links to stuff we mentioned during the pod:
08:50 - Dan Ashby: we're referring to Dan's excellent post called "Continuous Testing" (featuring his famous diagram!)
17:13 - Jit Gosai: Jit's blog, Jit's Quality Engineering Newsletter, Jit's LinkedIn
19:24 - Quality Talks Podcast: Stu's Quality Talks podcast that he co-hosts with Chris Henderson. Stu's LinkedIn, Chris's LinkedIn
19:55 - The Testing Peers podcast
22:00 - DORA Metrics: a set of key performance indicators developed by Google's DevOps Research and Assessment team to measure the effectiveness of software delivery and DevOps processes, focusing on both throughput and stability
26:13 - A link from Episode 10 where Vern discusses Glue Work (be sure to check out the show notes on that episode). Quick overview of DORA metrics
34:43 - The Credibility Playbook: a video course by Vernon as he experiments with building digital products. Check it out and let him know what you think of it! 😊
46:24 - Ali Abdaal: Ali's website, Ali's YouTube

00:00 - Intro
01:36 - Welcome
02:40 - Today's topic: What the hell is QA? QE? Testing? And is it all changing?
03:00 - Why is this bugging Rich?
05:11 - Fruit fly tangent 🍌🍊🍎🪰🐝🦋
06:27 - Rich's take on QA, QE, and Testing
08:31 - Vern's take on QA, QE, and Testing
11:15 - Is shift-left testing the same as QE?
13:05 - When the team tests early... is that QE then?!
16:18 - What's the big deal if we can't define QE clearly?
19:27 - Why the Efficiency Era makes this even harder
22:55 - Trying to draw the Testing, QA, QE, Venn diagram
27:24 - Getting the QA, QE, Testing blend just right. What's the right mix?
29:52 - The kinds of work we take on as our careers grow
34:08 - What Testers get rewarded for
45:34 - How Ali Abdaal helped Vern think differently about quality
48:18 - Rich talks measurement
--------
53:37
Embedding Quality Using AI
In this conversation, Vernon and Richard explore the evolving role of AI in quality engineering and software development. They discuss how AI can enhance quality control processes, the importance of embedding quality early in the development cycle, and the potential challenges and opportunities that arise from integrating AI tools. The conversation also touches on the need for skill development and community engagement in adapting to these changes, as well as the implications for roles within the industry.

Description and thumbnail made with AI so we could assess the quality; we had to!

00:00 - Intro
01:02 - Welcome and footy ⚽️
02:15 - Today's topic: The impact that AI may or may not have on Quality Engineering
03:22 - Rich's wild idea about AI and software quality
14:10 - Vern asks a clarifying question
22:45 - Communities of excellence… for machines?!
24:03 - Vern thinks there's an obvious risk that follows from this idea...
31:31 - Rich addresses the risk (Oracles, prompts, and tester superpowers)
36:13 - Reflection: the hidden skill AI forces on us
41:40 - Shifting in all directions (not just left)
43:04 - Feeding your past self into an AI: smart or scary?
45:53 - Operation 400 subscribers (and bot listeners)
47:13 - Tony Bruce calls us out on sloppy show notes and outro

Links to stuff we mentioned during the pod:
04:18 - Shift-Left: the concept of moving testing activities earlier in the software development lifecycle. Some resources explaining the Shift-Left concept (Perplexity link)
25:35 - Rob Bowley: Rob's LinkedIn, the post Vernon referred to... and a follow-up post not long after that one too!
26:40 - Alan Page: Alan and Brent's podcast, Alan's LinkedIn
34:43 - Saskia Coplans: Digital Interruption (Saskia's cybersecurity consultancy), REXscan (Saskia's automated mobile application vulnerability scanner), Saskia's LinkedIn (highly recommended follow)
41:49 - Paul Coles: Paul published 3 of his 4-part series "The Subtle Art of Herding Cats" over on Dev.to. Recommended reading! Paul's LinkedIn
43:09 - Maaret Pyhäjärvi: Maaret's website, Maaret's blog, Maaret's LinkedIn
--------
47:50
Six Hard Lessons From Building With AI Agents
In this episode of the Vernon Richard show, the hosts discuss their experiences with AI tools and agents, focusing on the challenges and lessons learned from using these technologies in coding and software engineering. They explore best practices for utilizing AI effectively, the importance of context in interactions with AI, and the future of AI agents in the workplace. The conversation highlights the balance between leveraging AI for efficiency while maintaining control and understanding of the underlying processes.

Links to stuff we mentioned during the pod:
09:16 - The LinkedIn post talking about Replit messing with someone's production code 😳 And the link to the thread of the person who went through it. The tool in question, Replit
13:01 - Rich's LinkedIn post with his tips
14:21 - GitHub Copilot
18:09 - VS Code
29:01 - Folks at different ends of the "AI Enthusiasm Spectrum". On the enthusiastic end: Jason Arbon is on the positive side and is always creating something interesting like... testers.ai. On the unenthusiastic end: Keith Klain has created a reading list to help get us up to speed... Keith's AI reading list; you can see his full resources list here. Maaike Brinkhof has a bunch of thought-provoking posts on the topic... like this one, and this one
34:44 - Want to know what "conflabulation" means? Listen to Martin explain it on the Ghost in th code podcast (that's not a typo!)
37:24 - What is Context Engineering? Perplexity has answers!
46:38 - The legendary Lt. Geordi La Forge from Star Trek: The Next Generation
51:48 - After recording, the very cool Paul Coles published his article "The Subtle Art of Herding Cats: Why AI Agents Ignore Your Rules" (Part 1 of 4), explaining the topic of Context Engineering. It's brilliant!
59:04 - The promises of technology over the years...
60:50 - The always insightful Meredith Whittaker of Signal fame, who is its president and serves on its board of directors, explains the privacy and security concerns with agentic technology. Watch the clip, then go back and watch the whole thing!

00:00 - Intro
01:17 - Welcome
01:30 - TANGENT BEGINS... All kinds of egregious waffling follows. Skip to the actual content at 08:34
01:31 - Rich VS Tree Stump
01:57 - What on earth did Rich need the pulley for?
02:26 - Vern's nerdy confession and pulley confusion
02:52 - Does Rich live next door to Tony Stark?!
03:22 - What to do when you need a steel RSJ
03:35 - We admit defeat.
03:36 - Welcome to Rich's Garden Adventures Podcast!
07:25 - What has Vern been up to?
08:34 - We attempt to segue into the episode at last!
08:35 - TANGENT ENDS...
08:51 - Rich's POC: using agents to help build AI tools
09:45 - The Replit disaster: vibe coding meets deleted production data
11:12 - Sociopathic assistants and the case for AI gaslighting
11:55 - Vernon wants his team experimenting with AI tools
12:50 - Rich explains the context for his latest AI adventures
13:18 - Rich's bench project and "putting the engineering hat on"
15:22 - Setting up the stack and staying in control
16:53 - A familiar story: things were going fine until they weren't
17:00 - Ask vs Edit vs Agent mode in Copilot explained
19:06 - The innocent linting error that spiralled out of control
21:16 - Stuck in a loop: "I didn't know what it was doing, but I let it keep going"
22:11 - The fateful click: "I'm going to reset the DB"
23:10 - The aftermath: no data, no damage… but very nearly
23:33 - Security wake-up call: agents are acting as you
24:39 - You can't fix what you don't know it broke
25:52 - Can you interrupt an agent mid-task?
27:14 - When agents get "are you sure?" moments
28:15 - Tea breaks as a dev strategy: outsourcing work to agents
29:24 - Jason Arbon vs Keith & Maaike: where Rich sits on the AI enthusiasm spectrum
30:41 - Tip 1: The first of Rich's 6 agent tips: commit after every interaction
32:12 - Why trusting the "keep all" button is risky
34:01 - Writing your own commits vs letting the agent do it
35:26 - When agents lose the plot: reset instead of fixing
36:55 - "You're insane now, GPT. I'm giving you a break."
37:54 - Tip 2: Make the task as small as possible
39:59 - The middle ground between 'ask' and full agent delegation
41:12 - Tip 3: Ask the agent to break the task down for you
43:36 - The order matters: why you shouldn't start with the form UI
44:33 - Vernon compares it to shell command pipelines
45:09 - It can now open browsers and run Playwright tests (!)
46:23 - Star Trek and the rise of the engineer-agent hybrid
47:57 - Tips 4–6: Test often, review the code, use other models
49:39 - Pattern drift and the importance of prompt templates
50:51 - Vernon's nemesis: m dashes, emojis, and being ignored by GPT
51:48 - Context engineering vs prompt engineering
52:43 - When codebases get too big for agents to cope
53:40 - Why agents sometimes act dumber than your IDE
54:32 - The danger of outsourcing good practices to AI
54:48 - Spoilers: Rich's upcoming keynote at TestIt
55:01 - Agents don't ask why — they just keep going
56:42 - Goals vs loops: when failure isn't part of the plan
58:32 - The question of efficiency: is training agents worth it?
59:47 - Rich's take: we'll buy agents like we buy SaaS
61:08...
Vernon Richards and Richard Bradshaw discuss all things software testing, quality engineering, and life in the world of software development. Plus our own personal journeys navigating our careers and lives.