
Organic AI and the future of humanoid robots with Bruno Maisonnier
05/12/2025 | 58 mins.
What if we could build robots and AI that learn like the human brain, without burning through nuclear-plant levels of energy? In this episode of Explainable AI, hosts Paul Anthony Claxton and Rohan Hall sit down with legendary roboticist and deep-tech entrepreneur Bruno Maisonnier (founder of Aldebaran Robotics, creator of the NAO and Pepper robots, and of AnotherBrain). They explore the future of humanoid robots, “organic AI,” and what real intelligence looks like beyond today’s deep learning and LLMs.

AI, venture capital and the moneyball future with Rafe Furst
21/11/2025 | 44 mins.
In this episode of Explainable AI, hosts Paul Anthony Claxton and Rohan Hall sit down with entrepreneur, investor and former AI researcher Rafe Furst for a deep exploration of how artificial intelligence is transforming the world of venture capital. Rafe shares insights from his early days in AI research, his Silicon Valley startup successes and his experience as a professional poker player. Together they unpack why the traditional VC model is faltering, how AI can reduce risk and improve decision making, and why the next wave of investing will look more like Moneyball than the old Silicon Valley playbook.

Your data, your rules: Self Sovereign AI with Amit Pradhan
06/11/2025 | 57 mins.
In this episode of Explainable AI, hosts Paul Anthony Claxton and Rohan Hall sit down with Amit Pradhan — CEO & Founder of Rainfall and longtime leader in decentralized technology — to talk about self-sovereign AI and why owning your data matters more than ever. Amit explains how AI has already been shaping our lives for over a decade, often without us realizing it. We dig into the shift from AI happening to us to AI serving us — and what needs to change to get there.

AI, data ownership and liability with Omeed Tabiei
25/07/2025 | 1h 7 mins.
In this episode of Explainable AI, hosts Paul Anthony Claxton and Rohan Hall sit down with Omeed Tabiei, the self-proclaimed “coolest lawyer ever,” to tackle the legal and ethical challenges shaping the future of AI. From data privacy, ownership, and compliance with regulations like GDPR, to the thorny issue of AI liability—who’s responsible when AI systems make mistakes?—this conversation dives deep into the risks, responsibilities, and opportunities of emerging AI technologies.

The road to responsible AI with Serg Masis
24/07/2025 | 1h 13 mins.
How do we build trust in AI systems? Paul Anthony Claxton and Rohan Hall sit down with Serg Masis, author of Interpretable Machine Learning with Python, to explore the “trust equation” — from breaking open the black box to ensuring responsible, explainable AI. Discover why transparency, human oversight, and ethics are key to AI’s future.



Explainable AI Podcast