
Digital Pathology Podcast

Aleksandra Zuraw, DVM, PhD

189 episodes


    196: DigiPath Digest #39 - If AI Sees More Than We Do, What Makes It Clinically Trustworthy?

    09/03/2026 | 26 mins.
    If AI can detect patterns we cannot see, how do we know when its answers are clinically trustworthy?
    In this episode of DigiPath Digest #39, I explore a big-picture question in digital pathology and medical AI. Many models now match or even exceed human performance in specific diagnostic tasks. But most of that evidence comes from controlled or retrospective datasets. So what happens when we try to bring these tools into real clinical workflows?
    I review four recent papers that help frame this challenge and point toward the next steps for trustworthy AI in healthcare. 
    You will hear about the role of prospective validation, real-world effectiveness, transparent reporting standards, and multimodal data integration as recurring themes across these studies.
    Key Highlights
    00:00 – Introduction
    What do we do when AI detects signals that humans cannot see? The core challenge is verifying those outputs before trusting them in clinical decision making. 
    03:32 – AI Across the Healthcare Continuum
    A narrative review shows AI achieving clinician-level performance in well-defined imaging tasks, including digital pathology. But most evidence comes from retrospective or controlled environments, and prospective validation remains limited. 
    08:34 – Multi-Omics and AI in Gastric Biopsy Diagnostics
    Morphology alone cannot fully capture molecular heterogeneity or predict disease progression. Integrating genomics, proteomics, metabolomics, and other omics with AI is shifting gastric pathology toward data-driven precision gastroenterology. 
    13:38 – Hyperspectral Imaging for Real-Time Surgical Guidance
    Spectral imaging can analyze tissue composition during surgery without staining, freezing, or contact with the tissue. Studies show promising sensitivity for detecting malignancy and supporting intraoperative decision making. 
    17:20 – REFINE Reporting Guideline for Foundation Models and LLMs
    An international consensus guideline introduces a 44-item reporting checklist to standardize how AI studies are described. The goal is transparent, reproducible, and comparable research in medical AI. 
    22:35 – Big Takeaway
    AI should be viewed as clinical decision support, not a replacement for clinicians. Real-world validation, ethical governance, and reproducible research standards will determine how these tools enter pathology workflows. 
    References (Articles Discussed)
    Artificial Intelligence in Healthcare: From Diagnosis to Rehabilitation
     https://pubmed.ncbi.nlm.nih.gov/41755929/

    Transforming Gastric Biopsy Diagnostics: Integrating Omics Technologies and Artificial Intelligence
     https://pubmed.ncbi.nlm.nih.gov/41751306/

    From Image-Guided Surgery to Computer-Assisted Real-Time Diagnosis with Hyperspectral and Multispectral Imaging
     https://pubmed.ncbi.nlm.nih.gov/41750768/

    REFINE Reporting Guideline for Foundation and Large Language Models in Medical Research
     https://pubmed.ncbi.nlm.nih.gov/41762555/

    If you enjoy staying current with digital pathology and AI research, this episode will help you connect the dots between promising algorithms and practical clinical adoption.
    Support the show
    Get the "Digital Pathology 101" FREE E-book and join us!

    191: Hallucinations, Agents, and AI in Pathology

    02/03/2026 | 30 mins.
    Clinical Artificial Intelligence in 2026: Accuracy, Education, and Guardrails
    Artificial intelligence is evolving fast in medicine. But how accurate is it? And are we building it safely?
    In this episode of DigiPath Digest, I review five new studies shaping digital pathology, radiology, burn diagnostics, and agent-based large language model systems. We discuss accuracy gains, hallucination filtering, education challenges, and why safeguards are essential before clinical deployment.
    Clear. Practical. Evidence-based.
    ⏱ Topics & Timestamps
    [00:02] Introduction
    Weekly journal club on digital pathology and artificial intelligence.
    [05:13] Hallucination Filtering in Radiology
    Using Discrete Semantic Entropy to detect hallucination-prone responses in Vision Language Models.
    Accuracy improved from 51.7% to 76.3% after filtering high-entropy answers.
    [15:04] Artificial Intelligence in Pathology Training
    Supervised use during residency.
    Balancing artificial intelligence adoption with preservation of morphological analysis and critical thinking.
    [20:12] Colorectal Cancer Lymph Node Detection
    Two-stage classification and segmentation model in Whole Slide Imaging.
    Recall 1.0, specificity 0.935, Dice coefficient 0.818.
    Artificial intelligence as a second opinion.
    [25:04] Burn Depth Prediction with Artificial Intelligence
    Tissue Doppler Elastography and Harmonic B-mode ultrasound combined with artificial intelligence.
    90–95% accuracy in human subjects.
    [31:20] Agent-Based Large Language Model Systems
    OpenManus and Manus evaluated in clinical simulations.
    Up to 60.3% accuracy, with high computational cost.
    89.9% of hallucinations filtered by safeguards.
    [40:08] Patient Access to Pathology Images
    Why viewing pathology slides can empower patients and improve communication.
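The Dice coefficient cited for the lymph node segmentation model above measures overlap between a predicted and a ground-truth mask. As a minimal illustration of the metric itself (a sketch assuming NumPy binary masks, not the study's actual pipeline):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks are a perfect match
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D example: 3 overlapping positives out of 4 predicted and 4 true
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
print(round(dice_coefficient(pred, truth), 3))  # 0.75
```

A Dice of 0.818, as reported, means the predicted and reference regions share roughly 82% of their combined area.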
    Resources
    https://pubmed.ncbi.nlm.nih.gov/41720937/
    https://pubmed.ncbi.nlm.nih.gov/41720644/
    https://pubmed.ncbi.nlm.nih.gov/41716065/
    https://pubmed.ncbi.nlm.nih.gov/41709317/
    https://pubmed.ncbi.nlm.nih.gov/41708802/

    190: Can a Better Stain Improve AI in Pathology?

    24/02/2026 | 55 mins.
    What if one of the biggest sources of diagnostic variability in prostate cancer isn’t the pathologist—but the stain we’ve trusted for decades?
    In this episode, I speak with Professor Ingrid Carlbom, founder of CADESS.AI, about a different way to approach prostate cancer grading—by rethinking staining, segmentation, and AI decision support from the ground up. We explore why 30–40% interobserver variability persists in Gleason grading and how optimized stains combined with explainable AI can significantly reduce that uncertainty.
    Ingrid shares her journey from applied mathematics and computer science into pathology, the skepticism she faced in 2008, and why CADESS.AI chose not to “optimize H&E,” but instead developed a Picrosirius red + hematoxylin stain designed specifically for computational pathology. We discuss how grading at the gland and cellular level improves reproducibility, why explainability matters for trust, and what it really takes to build both stain and software as a single diagnostic workflow.
    This conversation challenges long-held assumptions—and asks whether improving data quality should come before building smarter algorithms.

    Highlights:
    [00:00–01:08] The problem: 30–40% disagreement in prostate cancer grading
    [01:08–03:03] Ingrid’s path from applied math to digital pathology
    [03:03–04:58] Early skepticism toward AI in pathology and fear of replacement
    [04:58–08:56] Why H&E limits segmentation—and how a new stain changes that
    [10:55–15:09] Clinical testing: non-inferiority, AI assistance, and NCCN risk stratification
    [19:47–22:59] Explainable UI: color-coded glands and pathologist override
    [26:16–27:29] Why grading glands (not whole slides) reduces variability
    [38:09–41:47] Regulatory challenges of combined stain + AI devices
    [45:52–48:55] The future of optimized stains in routine pathology

    Resources from This Episode
    CADESS.AI – Prostate cancer decision support system
    NCCN prostate cancer risk stratification guidelines

    189: Digital Pathology Deployment Decoded: The Rigorous 4-Phase Framework

    24/02/2026 | 22 mins.
    Sometimes a paper comes out that’s so practical and relevant to what we do in digital pathology that I know we have to talk about it.
    In this episode, I dive into “A Guide for the Deployment, Validation and Accreditation of Clinical Digital Pathology Tools” from Geneva University Hospital (HUG) — one of the most useful, real-world frameworks I’ve seen for bringing digital pathology tools safely into clinical practice.
    If you’ve ever built an AI model and wondered, “Now what?”, this episode is for you.
    Because building the model is often the easy part — deployment is where things get complex.
    This guide breaks the process into four practical phases every lab can follow:
    1️⃣ Pre-Development – Define your clinical need, project scope, and validation plan before writing a single line of code.
    2️⃣ Development – Build and integrate the algorithm in a production-ready environment.
    3️⃣ Validation & Hardening – Turn your research code into a reliable, secure, and compliant clinical tool.
    4️⃣ Production & Monitoring – Keep the tool validated and performing consistently over time.
    We also discuss what makes qualification, validation, and accreditation different — and why that order really matters.
    You’ll hear about the multidisciplinary team behind these deployments, especially the deployment engineer (DE) — the technical linchpin who turns AI research into clinical reality.
    I share the story of HUG’s H. pylori detection tool, which cut diagnostic time by 26% while maintaining a 0% false negative rate. The team’s secret? Careful planning, quality control, and continuous user feedback — not just great code.
    Other highlights include:
    • Why integration often takes longer than building the AI model itself
    • How to avoid invalidating your validation data
    • What continuous performance monitoring looks like in real labs
    • Why every lab still needs to do local validation, even with proven tools
    If you’re working on digital or computational pathology tools — or just want to understand how AI safely moves from research to routine diagnostics — this episode will give you a roadmap grounded in real experience.
    🎧 Listen now to learn how to move from algorithm to accreditation, step by step.
    And if you’re just getting started in digital pathology, I’d love to give you my free eBook, Digital Pathology 101: All You Need to Know to Start and Continue Your Digital Pathology Journey.
    You’ll find the link to download it in the show notes.
    See you in the episode!

    188: AI in Pathology: Biomarkers, Multimodal Data & the Patient

    21/02/2026 | 21 mins.
    Is AI in pathology actually improving diagnosis — or just adding complexity?
    In DigiPath Digest #37, we reviewed four recent publications covering AI-based biomarker quantification in glioblastoma, real-world digital workflow integration in prostate cancer, multimodal AI combining histopathology and genomics, and patient perspectives on AI in cancer diagnostics.
    This episode connects technical performance with something equally important: trust.
    Episode Highlights
    [00:02] Community & updates
    Digital Pathology 101 free PDF, upcoming patient-focused book, and global attendance.
    [04:07] AI-based image analysis in glioblastoma
    AI showed strong consistency with pathologists when quantifying Ki-67, P53, and PHH3.
    Significant biological correlations (Ki-67 ↔ PHH3, PHH3 ↔ P53) were detected by AI — not by manual assessment.
    Takeaway: computational quantification improves precision.
    [09:28] Real-world digital workflow + AI in prostate cancer (France)
    AI-pathologist concordance:
    • 93.2% (high probability cancer detection)
    • 99.0% (low probability slides)
    Gleason concordance: 76.6%
    10% failure rate due to pre-analytical artifacts.
    Takeaway: infrastructure and sample quality still matter.
    [15:58] Multimodal AI (MARBIX framework)
    Combines whole slide images + immunogenomic data in a shared latent space using binary “monograms.”
    Performance in lung cancer: 85–89% vs 69–76% unimodal models.
    Takeaway: integrated data improves case retrieval and similarity reasoning.
    [22:13] AI-powered paper summary subscription introduced
    Structured summaries for busy professionals who want more than abstracts.
    [26:17] Patient roundtable on AI in pathology (Belgium)
    Patients expect:
    • Better accuracy
    • Faster turnaround
    • Stronger collaboration
    Trust is high when:
    • Algorithms use diverse datasets
    • Pathologists retain final responsibility
    Clinical validity mattered more than full algorithm transparency.
    Privacy concerns focused more on insurer misuse than cloud transfer.
    Key Takeaways
    AI improves biomarker precision in glioblastoma.
    Digital pathology implementation works — but pre-analytics can limit AI performance.
    Multimodal AI represents the next meaningful step in precision diagnostics.
    Patients are not afraid of AI — they want validation, oversight, and governance.
    Human–AI collaboration remains central.
    If you’re working in digital pathology, computational pathology, or precision oncology, this episode connects evidence, implementation, and patient perspective.

About Digital Pathology Podcast

Aleksandra Zuraw from Digital Pathology Place discusses digital pathology from the basic concepts to the newest developments, including image analysis and artificial intelligence. She reviews the scientific literature and, together with her guests, discusses current industry and research trends in digital pathology.